18 Comments
samirsshah - Friday, February 3, 2012 - link
Yes.

yankeeDDL - Friday, February 3, 2012 - link
I am not sure that designing the blocks independently will be a real possibility. I can think of two examples: first, Apple's A5, despite being, on paper, largely underpowered compared to dual-core SoCs from TI or Qualcomm, allows the iPhone 4S to come close to, or even outperform, the competition on certain benchmarks. This is thanks to a careful balance between the CPU and GPU and close integration with the software.
Similarly, the Tegra 3 showed the potential of ARM cores optimized differently for power vs. performance yet working together. What strikes me about the Tegra 3 is that despite having 5 cores, only 4 of them can be used in parallel, with the 5th off. I understand the reasoning behind it, but it is arguably not a very efficient design implementation (e.g.: why not leave the 5th, low-power core working on background tasks and throw the 4 fast cores at the demanding apps?).
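The alternative policy I'm suggesting can be sketched in a few lines of Python (the core names, task loads and threshold are made up for illustration; this is not NVIDIA's actual vSMP scheduler):

```python
# Toy model of the proposed policy: keep the low-power companion core
# busy with background tasks while the four fast cores take demanding work.
# All names and numbers are hypothetical, not NVIDIA's vSMP implementation.

FAST_CORES = ["fast0", "fast1", "fast2", "fast3"]
COMPANION = "companion"

def assign(tasks, demand_threshold=0.5):
    """Map each (name, load) task to a core: light tasks go to the
    companion core, heavy tasks round-robin over the fast cores."""
    placement = {}
    fast_idx = 0
    for name, load in tasks:
        if load < demand_threshold:
            placement[name] = COMPANION
        else:
            placement[name] = FAST_CORES[fast_idx % len(FAST_CORES)]
            fast_idx += 1
    return placement

tasks = [("email_sync", 0.1), ("game_render", 0.9),
         ("music", 0.2), ("physics", 0.8)]
print(assign(tasks))
```

The real constraint (which this toy ignores) is that vSMP switches cores at the cluster level, so the companion and fast cores cannot actually run simultaneously.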
It seems to me that a coordinated effort between all sections of the SoC could bring significant advantages in terms of performance (and I'm including efficiency).
My 2 cents.
MrSpadge - Friday, February 3, 2012 - link
You can view it similarly to the transition from assembler programming to high-level, object-oriented code. The former is more efficient, if done correctly, but takes a lot of time and effort. Plus, the result may not be very flexible. OO programming, on the other hand, trades off some execution efficiency for faster time to market and design flexibility, i.e. make your blocks autonomous enough (with proper interfaces) and it'll be easier to mix and reuse them.

Once you've got there you can still apply some assembler hand-tuning where it matters to improve speed and efficiency. Note that this is rather similar to e.g. Fermi's design process.
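The "autonomous blocks with proper interfaces" idea looks like this in software terms (class and method names are purely illustrative): any block that honors the common interface can be mixed and reused without knowing the others' internals.

```python
# Sketch of modular blocks behind a shared interface: once the contract
# is fixed, blocks compose in any order. Names are illustrative only.
from abc import ABC, abstractmethod

class IPBlock(ABC):
    """Common interface every block must implement."""
    @abstractmethod
    def process(self, data):
        ...

class CPUBlock(IPBlock):
    def process(self, data):
        return [x + 1 for x in data]   # stand-in for general-purpose work

class GPUBlock(IPBlock):
    def process(self, data):
        return [x * 2 for x in data]   # stand-in for parallel throughput work

def pipeline(blocks, data):
    """Because every block obeys the same interface, any mix works."""
    for block in blocks:
        data = block.process(data)
    return data

print(pipeline([CPUBlock(), GPUBlock()], [1, 2, 3]))  # → [4, 6, 8]
```

Swapping the block order (or dropping in a new `IPBlock` subclass) requires no change to `pipeline` — that reuse is exactly what the interface buys you, at the cost of some hand-tuned efficiency.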
yankeeDDL - Saturday, February 4, 2012 - link
MrSpadge, I know exactly what you mean and what you're describing. What I said is that I am not sure this would be enough: CPU optimization is different from GPU optimization. In the GPU field, both AMD and Nvidia more or less do what you describe. On the CPU side, though, Intel does a lot more tweaking.

I have no way to know for sure, of course, but it is hard not to think about Bulldozer, and how it, well, sucks. Large, power hungry and underperforming. ARM has improved its designs over several years, and it strikes me that their emphasis on power consumption is unparalleled.

So what I said in my previous post is that if AMD plans to build an SoC by slapping together a bunch of IP blocks, it may create an awesome architecture which underperforms in every aspect. Which would be a pity.
Penti - Friday, February 3, 2012 - link
You do realize that Apple's SoC (largely dependent on Samsung fab tools) is faster in some benchmarks because they use another third-party GPU? You do realize that it was an AMD division that designed Qualcomm's Adreno GPU to begin with? Or that Apple doesn't write their own GPU drivers to begin with? ARM's Mali is fully synthesized too. The players don't start from scratch, regardless of whether it's hardware or software. Changing out parts such as GPUs isn't hard here.

You are not talking about anything to do with design and engineering of the IP; you might be talking about integration and platform, that is, software and the combination of the product. Those are different teams altogether. Everything in ARM SoCs is pretty much synthesized and can be used anywhere. If you want faster 3D on non-iOS devices, you can use the same GPU and parts of the driver development done there, clock it higher than Apple does, and easily outperform it in gaming benchmarks. It's not simply for UI that they have such a strong GPU. Of course a PS Vita is stronger than the A5, for example; all you have there is synthesized hardware, adjoining software from those IP vendors, and just a minimal firmware from Sony. They don't actually change anything inside the IP blocks, which are already designed.

yankeeDDL - Saturday, February 4, 2012 - link
Hey Penti, thanks for trying to tell me what I "might be talking about": there's always a first. Regretfully, I inform you that you missed my point. While I do appreciate learning, I might suggest that a less lecturing attitude might help you understand better what you're reading and contribute to the discussion.
I said something very simple: "I am not sure that designing the blocks independently will be a real possibility."
This has nothing to do with the design of the block itself and with the IP reuse.
What I am saying is that just taking a bunch of IP blocks and synthesizing them together does not make a good design. You need to balance the relative "power" of one block against the others, you need to make sure that the blocks work efficiently together, and you need to make sure that all of the above happens smoothly over different operating conditions.
Otherwise you run the risk that you'll have another Bulldozer: a complex, revolutionary architecture that fails in power consumption, peak performance and even cost (huge die size). That's all.
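The balancing concern can be made concrete with a toy model: a pipeline of blocks is only as fast as its most constrained member under each operating condition, and a block sized far above that limit is wasted area and power there. The block names, conditions and throughput numbers below are invented for illustration.

```python
# Toy balance check: per operating condition, the SoC is limited by its
# slowest block; blocks far above the limit are over-provisioned there.
# All names and figures are hypothetical.

def bottleneck(throughputs):
    """A pipeline of blocks is limited by its slowest member."""
    return min(throughputs.values())

conditions = {
    "plugged_in":    {"cpu": 10.0, "gpu": 12.0, "memory": 8.0},
    "battery_saver": {"cpu": 4.0,  "gpu": 9.0,  "memory": 8.0},
}

for name, blocks in conditions.items():
    limit = bottleneck(blocks)
    # Flag blocks delivering more than twice what the bottleneck allows.
    oversized = [b for b, t in blocks.items() if t > 2 * limit]
    print(f"{name}: limited to {limit}, over-provisioned: {oversized}")
```

Even this crude check shows why blocks that look great in isolation can make a poorly balanced whole: in the hypothetical "battery_saver" state the GPU's headroom is unusable because the CPU caps the pipeline.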
Malih - Friday, February 3, 2012 - link
AMD has good ideas on how to improve things, and they're mostly amazing; it's just that the execution/implementation takes a lot of time and, most of the time, is late.

If they can take their three-year roadmap and make it in two years (or less), only then will they gain significant market share.
metafor - Friday, February 3, 2012 - link
So basically they're going to do what ARM via AXI (and previously AMBA) have been able to do for almost a decade now...

chizow - Friday, February 3, 2012 - link
The biggest reason for this is that many of AMD's past leaders completely ignored the sub-laptop and SoC markets. I believe it was former CEO Dirk Meyer who first rejected the idea of making smaller chips for netbooks when Intel was pushing Atom and Nvidia was pushing Tegra; then, more recently, Meyer and Bergman rejected the idea of AMD entering the mobile phone market.

Now it looks as if Intel has finally bridged the gap in power consumption and performance with their Medfield SoC for both tablets and smartphones, and again, AMD has nothing to show for it. It's really too bad, too, because I think AMD's graphics with an ARM-based SoC could be an interesting product. But now, once again, they're way behind the curve, as all the other major players (Intel, Nvidia, Samsung, Qualcomm, etc.) have compelling offerings in that mobile and tablet space.
risa2000 - Friday, February 3, 2012 - link
Or maybe they ignored it deliberately. Seeing Intel struggling with Atom, they told themselves: let's focus on something we already mastered. AMD cannot afford to waste one or two development cycles (like Intel did with Atom) to move there.

What would bother me more is that instead of saying, "OK, we did not cut it this time, let's learn and come back later," they are now trying to give the impression that they will succeed in something else. A few marketing slides will not make it happen.
chizow - Friday, February 3, 2012 - link
Idk, it just seems awfully short-sighted to ignore that market since that's where all the gadget growth is projected over the next 3-5 years. Tablets and smartphones.

Intel's Atom started off modestly, shooting for mini-ITX HTPCs, same as Nvidia's Tegra as a drop-in mGPU with Atom. Keep in mind, Intel also resisted the idea of netbooks, but at least they had a contingency plan with Atom. Nvidia obviously wanted to get away from the idea of x86 as quickly as possible and embraced ARM.
It looks like AMD is also getting interested in at least the tablet market, but the roadmap indicates they won't even have something that fits that TDP profile until next year. They're just way behind in even getting their foot in the door, but I think Read will at least have them in a better position to compete through more timely execution.
cfaalm - Friday, February 3, 2012 - link
Some of the reactions here seem to assume everyone at AMD is gonna think: "OK, we did the slide show, now let's figure out how to do all this." You bet there's stuff in the works already. You can't come up with these slides based on nothing but ideas. I just hope, like many of you, that their execution will be timely from now on.

fujiyama - Saturday, February 4, 2012 - link
For the last 25 years, Intel was successful at delivering better and better CPUs, with AMD following. When AMD introduced the Athlon CPU, they took the lead, with profits limited only by fab capacity and Intel's bribes.

Mixing GPUs, CPUs, etc. speaks to an INABILITY to create a competitive core.
Adding GCN, ARM and I/O doesn't change that.
silverblue - Saturday, February 4, 2012 - link
Much like Intel slapping a GPU on their CPUs doesn't make their GPUs any good.

Sunsmasher - Saturday, February 4, 2012 - link
These slides are very disturbing in that they have tons of "corporate speak" platitudes that don't indicate that AMD has any real ideas for the future. This is just corporate boardroom "noise" devoid of any real content. Unfortunately, this furthers the suspicion that AMD is really flailing and has no real idea of where to go as a company.
haplo602 - Sunday, February 5, 2012 - link
Didn't they do this in the past already with Torrenza? All it achieved was a few very specific products. I don't see this getting past initial designs...

andreamaeB - Monday, February 6, 2012 - link
I like it. Just a few more innovations and it would be perfect.