68 Comments


  • smilingcrow - Friday, November 13, 2009 - link

    There seems to be a lot of confusion over what the Bulldozer block diagram represents. Fudzilla seemed to suggest that it is of a complete octo-core CPU made up of two quad-core blocks. Techreport seemed to suggest that it represents a ‘true’ dual-core CPU block and that actual CPUs will consist of one or more of these, which seems wrong, as that suggests AMD would release single/dual-thread versions of Bulldozer.
    Anandtech’s interpretation seems more logical, and the big surprise is that it will seemingly be a 4-core/8-thread design, although performance may well be much closer to that of a true 8-core. But if it’s up against an Intel 8/16 design, surely it will struggle unless the other enhancements can close the IPC gap.

    As for those expecting to see a very high performance GPU (relative to whatever current discrete cards offer) on the same die as the CPU: just forget it. The socket would be enormous and the TDP ridiculous. You’d be looking at up to 350W, which sounds impractical.
  • JFAMD - Thursday, November 19, 2009 - link

    Each Bulldozer die will be made up of multiple modules. Each module has 2 integer cores and a shared FPU.

    An 8-core Bulldozer die has 4 modules and 8 total integer cores.
    The product will be marketed as an 8-core processor.
    The system (hardware) will see 8 cores.
    The OS will see 8 cores.
    The applications will see 8 cores.

    Don't get hung up on modules. They only exist to the designers and in PowerPoint slides. They will not be visible to the system or to the software.
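JFAMD's module arithmetic above can be sketched in a few lines (a toy illustration only; the function name and dictionary keys are made up, not AMD terminology):

```python
def bulldozer_topology(modules: int) -> dict:
    """Each module contributes 2 integer cores and 1 shared FPU,
    per the description above; only the integer cores are visible."""
    return {
        "integer_cores": modules * 2,  # what the OS, apps, and marketing see
        "shared_fpus": modules,        # shared inside each module, invisible
    }

# The 8-core die described above: 4 modules.
topo = bulldozer_topology(4)
assert topo == {"integer_cores": 8, "shared_fpus": 4}
```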
  • smilingcrow - Friday, November 20, 2009 - link

    A single Bulldozer "module" looks to the OS like a single processor core with simultaneous multithreading (SMT) enabled.
  • JFAMD - Friday, November 20, 2009 - link

    No, a single bulldozer module looks like two individual cores, it does not look like a hyperthreaded core. The OS cannot see the module, it only sees integer cores.

    With HyperThreading, the job scheduler needs to know how to deal with "full cores" and "shared cores."

    For instance, if you have 4 cores with HT and you are only running 4 threads, you want them each spread out onto individual cores; you don't want all 4 of them sharing 2 cores while other cores sit idle. That is a relatively low level of scheduler complexity (relatively speaking).

    But when 2 more threads come through and you start sharing cores, you want to try to optimize the threads around execution. Let's say you have all 4 cores active with threads: 2 "heavy" threads and 2 "light" threads. If you need to "double up" the 2 new threads, you'd rather put them on the 2 cores that have light loads right now. So this is a much higher level of complexity.

    So, with HyperThreading, you have an added level of complexity. You have to think about WHERE to put threads vs. just looking for the next available resource.

    Both could be considered the same if you are contending that HyperThreading is unoptimized and there would be shared cores with other cores fully idle. But I do not believe this is the case.

    With Bulldozer, each new thread has the same shot at resources and each resource acts the same, because your schedulers are dedicated.
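The placement policy JFAMD describes (spread threads across idle cores first, then double up on the most lightly loaded one) can be sketched as a toy model; `place_thread` is hypothetical, and real schedulers track far more state:

```python
def place_thread(core_loads: list[float]) -> int:
    """Pick a physical core for a new thread.

    core_loads[i] is the current load on core i (0.0 = fully idle).
    Idle cores win first; otherwise share the most lightly loaded one.
    """
    for i, load in enumerate(core_loads):
        if load == 0.0:
            return i                          # spread out before sharing
    return core_loads.index(min(core_loads))  # then share the lightest core

# 4 HT cores, all busy: 2 heavy threads (0.9) and 2 light threads (0.1).
# A new thread should double up on a lightly loaded core, not a heavy one.
assert place_thread([0.9, 0.1, 0.9, 0.1]) == 1
```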
  • Zool - Monday, November 23, 2009 - link

    I don't know where you get the Bulldozer module thing from. From the picture it's quite unclear. If there were more modules, they would be on the picture. The slides say "two tightly linked cores", each with its own L1 cache. But the L2 and L3 are all shared, so adding new modules would look like adding new L3 and L2, and that's stupid.
    To me, from the picture it looks more like AMD hit the nail right on the head.
    Maybe the pipelines they refer to are execution units of several cores tied together to one L1 cache. If you think about it, why would you need 8 complete CPU dies with all their transistors just to have 8 times the execution units? So why not put together as much of the main core logic as you can in one core, connect it to one L1 cache, and share the rest? Those 8 cores would actually be in 2 super-wide cores, without the need for the OS or programmers to take care of each physical core.

  • Zool - Monday, November 23, 2009 - link

    It would also reduce the die size for a comparable 8-core CPU, and with it the cost, of course. It would also be much faster than 8 cores, each with its own L1 and L2 cache, communicating through an external bus (QPI or HyperTransport link). Maybe AMD did learn something from ATI and the GPUs. Why increase the gap between cores as you increase the number of them, when you could hide them from software and put them right next to each other? It's not as if anyone needs a 16-core Nehalem when the majority of programs can't use even 4 of them.
  • dew111 - Sunday, May 30, 2010 - link

    This diagram represents one block that acts as two cores. Zool, modern x86 cores need multiple functional units to support efficient out-of-order execution and keep the IPC high. Modern x86 architectures translate x86 instructions into multiple 'micro-ops', which are simple, RISC-like instructions. These can be executed in parallel much more easily. The 'integer scheduler' will only hold micro-ops for one thread. Thus, the pipelines in the diagram are not individual cores. Superscalar design (multiple parallel pipelines) has been common for over 15 years now in CPUs.

    Therefore, as stated in the article, this block will be replicated on-die to create 4, 6, 8+ core processors. The processor would almost certainly have a monolithic L3 cache, and would probably have separate L2 caches for each '2-core' block.
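The x86-to-micro-op translation dew111 mentions can be illustrated with a toy decoder; the micro-op spellings and the lookup table are hypothetical, not any real decoder's output:

```python
def decode(x86_instr: str) -> list[str]:
    """Toy illustration: one x86 instruction may become several
    simple, RISC-like micro-ops that can be scheduled independently."""
    table = {
        # a memory-operand add splits into load / add / store micro-ops
        "add [mem], eax": ["load tmp, [mem]", "add tmp, eax", "store [mem], tmp"],
        # a register-register add is already a single micro-op
        "add ebx, eax": ["add ebx, eax"],
    }
    return table[x86_instr]

assert len(decode("add [mem], eax")) == 3  # three micro-ops from one instruction
assert len(decode("add ebx, eax")) == 1
```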
  • Zool - Monday, November 16, 2009 - link

    There aren't too many programs that can use more than 2 cores if you don't count graphics rendering and encoding. In benchmarks on a single computer, the only things that have scaled near-linearly for quite some time are rendering and encoding. For an average user running average programs, the Intel 8/16 is overkill. There aren't too many people running 8 actively working programs at once. And for virtualization, double the integer performance is much better.
    Also, a high-performance GPU on the CPU won't happen until they solve the bandwidth problem (something over 150 GB/s right now for the high end).
    Current CPU memory bandwidth will be enough for a low-end GPU and a fast coprocessor.
  • Risforrocket - Monday, November 16, 2009 - link

    I don't think anyone expects this first APU to provide strong graphics. However, it might surprise us if the CPU cores are involved sufficiently, though that might lead to other disappointments. I am hoping the GPU core will be used not so much for graphics but for the types of computing tasks it will do well. I will certainly be using a discrete graphics card, as I always have. And I'm sure it will come in 4-core and 8-core versions; anything less would be a step back.
  • qcmadness - Friday, November 13, 2009 - link

    I would expect IPC to be enhanced.

    The "good old" K7 integer and floating point units will be redesigned.
  • eek2121 - Thursday, November 12, 2009 - link

    Hi Guys, unfortunately, despite what this article implies, AMD is NOT targeting the high end with bulldozer. A quote from AMD's blog:
    " “Bulldozer” is AMD’s completely new, high performance architecture for the mainstream server, mainstream desktop and notebook PC markets that employs a new approach to multithreaded compute performance for achieving advanced efficiency and throughput. “Bulldozer” is designed to give AMD an exceptional CPU option for linking with GPUs in highly scalable, single-chip Accelerated Processing Unit (APU) configurations. "

    Notice the use of mainstream in "mainstream server, mainstream desktop and notebook PC markets".

    Will Bulldozer be faster? Yes, it will. Will it target the high end/enthusiasts? No, it won't. Enthusiasts will be using Intel for the foreseeable future, unfortunately.

    Source: http://blogs.amd.com/unprocessed/tag/bulldozer/
  • medi01 - Friday, November 13, 2009 - link

    If the APU approach succeeds it will mean that we have CPU based on AMD + ATI expertise vs Intel + Intel's GPU. (even if Intel decides to buy nVidia, it will take time)

    The performance difference between AMD CPUs and Intel CPUs is much smaller than that between ATI GPUs and Intel GPUs. So I don't see what would stop AMD from getting the "top gaming performance" crown.
  • Risforrocket - Monday, November 16, 2009 - link

    AMD's APU concept, as I see it, is a heterogeneous CPU that combines conventional CPU cores and GPU cores. The interesting but perhaps subtle point that needs to be looked at is that there is a trend in GPU design leading to a highly parallel floating-point, graphics-like computing device that can be used for tasks other than graphics. This is exciting, and if AMD does this right, it can be a first strong step toward new heterogeneous CPU architectures. This is the next step in the movement toward multi-core CPU design.
  • srp49ers - Friday, November 13, 2009 - link

    I don't think regulators would allow Intel to buy Nvidia.
  • 7Enigma - Friday, November 13, 2009 - link

    TDP?
  • fsdetained - Thursday, November 12, 2009 - link

    You have no concept of business terminology do you?
  • lifeblood - Thursday, November 12, 2009 - link

    How will the CPU / GPU combo work? A low-end CPU comes with a low-end GPU on die, while the high-end CPU comes with a high-end GPU on die? Is this the end of discrete graphics cards? Or will you still be able to plug in a discrete card and have it work over PCIe as before?

    This is important for me as I tend to upgrade incrementally, a part at a time as I can afford it (and my wife doesn't notice it).

    Also, will the first Bulldozer suck because it doesn't have the GPU on die? Will it try to use the discrete or IGP video card for FPU stuff at a horrible performance penalty?
  • Mr Perfect - Thursday, November 12, 2009 - link

    The bulldozer slide says it's using a shared L2 cache. Does this mean L2 is shared between the two half-cores, or between two or more of the whole-cores?

    That particular slide doesn't make it clear where the core stops, and where the rest of the CPU(uncore?) begins. O_o
  • GaiaHunter - Friday, November 13, 2009 - link

    It is shared between those 2 cores. I think AMD isn't calling them half-cores.
  • Mr Perfect - Friday, November 13, 2009 - link

    Hmm. I really wish they'd have shown off a block diagram of the whole CPU.

    The way it reads, it sounds like it will be 4 "cores", each with two smaller cores in it (what I called half-cores), making a total of 8 threads. I guess they don't want to show too much of their hand.
  • Etern205 - Thursday, November 12, 2009 - link

    An insider from Intel has leaked the design for their future CPU.
    Built on a 2nm process, with quad QPI at 1.3TB/s, 24GB of L3 cache, it operates at 1.6GHz-5GHz and comes in flavors from dual core all the way to 48 cores.

    The leaked memo states that Intel, having pretty much run out of good codenames, has decided to call this one TONKA. :P
  • camylarde - Thursday, November 12, 2009 - link

    Ok, so I am planning to purchase that ATi 5870 next year after all. It's amazing what your $300 can do for the shape of the world ;-)
  • fitten - Thursday, November 12, 2009 - link

    But I think it's risky... how much software is available to use the GPU, and how much will be by then? There are also things like latency to consider when doing GPU offloading. Workloads that fall between no floating-point work and the amount of floating-point work needed to amortize the latencies of setting up the GPU, especially in a multithreaded program, will likely tank on it, I think (I don't know where the tradeoff is).

    As it is, you can write your programs to use the GPU using OpenCL/etc. but that has to load both your program and data onto the GPU to do the work then offload the data when done (the runtime may handle that stuff for you but it still takes time). What are the latencies involved with that (seriously, I don't know... haven't looked into it that deeply)?
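fitten's trade-off can be framed with a back-of-the-envelope model; the function and every number below are made up purely to show the shape of the break-even, not measured latencies:

```python
def offload_wins(cpu_time_s: float, gpu_time_s: float,
                 transfer_s: float, setup_s: float) -> bool:
    """GPU offload pays off only if GPU compute plus data transfer
    plus setup overhead beats just doing the work on the CPU."""
    return gpu_time_s + transfer_s + setup_s < cpu_time_s

# Tiny kernel: overheads dominate, so stay on the CPU.
assert not offload_wins(cpu_time_s=0.001, gpu_time_s=0.0001,
                        transfer_s=0.002, setup_s=0.001)
# Large kernel: compute dominates, so offloading wins.
assert offload_wins(cpu_time_s=5.0, gpu_time_s=0.5,
                    transfer_s=0.2, setup_s=0.001)
```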
  • pcfxer - Thursday, November 12, 2009 - link

    Twinning the L1 cache, DAMN AMD beat me to that one and I'm a computing architecture MASTER!

    Note to AMD: Optimize the garbage out of the software side - micro-code, perhaps even hire on some GCC and PCC compiler teams to develop smarter linkers/cross-compilers.

    I don't know how Intel manages their microcode but I can assume that their software is more efficient considering the hardware similarities.
  • JVLebbink - Thursday, November 12, 2009 - link

    With these two cores looking a lot alike (removing one integer cluster and not looking at the FPU), would that mean that Bulldozer also has only 90% of the single-threaded integer performance of the current Phenom II?

    I wonder how they are going to make up for that with other enhancements?
  • GaiaHunter - Friday, November 13, 2009 - link

    I don't think you can infer that. The pipelines are bound to be different and don't forget L3 cache and new set of instructions "Bulldozer" will have compared to Phenom II.
  • GaiaHunter - Wednesday, November 11, 2009 - link

    There has been some discussion about this on the forums.

    When AMD says 4 Bulldozer cores, does it mean 4 cores/8 threads, or does it mean a pair of Bulldozer cores, each with its "2 tightly linked cores", able to do 4 threads?

    Thank you.
  • Anand Lal Shimpi - Thursday, November 12, 2009 - link

    I believe it means 4 cores/8 threads, but I will ask AMD to confirm.

    Take care,
    Anand
  • Anand Lal Shimpi - Thursday, November 12, 2009 - link

    Confirmed. 4 cores/8 threads, each Bulldozer core can handle two threads.

    Take care,
    Anand
  • Eeqmcsq - Thursday, November 12, 2009 - link

    So does this mean the rumors of an 8 core Bulldozer are wrong? That they were referring to an 8 thread Bulldozer? Or is there really an 8 core/16 thread Bulldozer in the works?
  • GaiaHunter - Friday, November 13, 2009 - link

    Sincerely I've been reading contradictory information everywhere.

    From what Anand said here in response to my question, I would believe Zambezi will be released in 4-core/8-thread and 8-core/16-thread flavors.

    But then JF-AMD, a server AMD guy in other forums, stated that "Interlagos" is 16 cores/16 threads, so at this moment I don't know what to think.

    First, I was looking at Bulldozer as a CPU with HT, maybe requiring more hardware than Intel's solution and maybe with more performance on the logical thread compared to Intel's implementation, but now I'm starting to think that a "Bulldozer module" is a dual-core that actually uses less hardware to accomplish the same thing as current AMD CPUs (I'm not saying anything about how Phenom II compares to Bulldozer, especially because I don't know).

    Really, I think the question I should have asked Anand was "Will Zambezi be released in 8 "Bulldozer Modules"/16 threads or only 4 "Bulldozer Modules"/8 Threads?"

    I think maybe what I need to do is review my way of thinking about cores in the AMD case.

    Bulldozer seems to be a twin-core that can't be split, but otherwise is equivalent to the way we see 2 current cores.
  • qcmadness - Friday, November 13, 2009 - link

    I don't think AMD will make a next-gen CPU without SMT support.

    AMD already described the lack of SMT support in K10 as a mistake. And I remember someone at AMD said they would try to put 32 threads in 1 socket at the 32nm node. That could happen if a dual-Zambezi MCM is achieved.

  • chiddy - Thursday, November 12, 2009 - link

    Is it likely that Bulldozer will be able to compete with Sandy Bridge/Ivy Bridge, also due out around the same time?
  • GaiaHunter - Thursday, November 12, 2009 - link

    Thank you very much :)
  • iwodo - Wednesday, November 11, 2009 - link

    Bulldozer looks good... However, by the time they have a Bulldozer core with an APU, Intel will already be launching Ivy Bridge.
    While Bulldozer looks, on paper, to be better than Intel's Nehalem in general desktop performance, Sandy Bridge is designed to improve IPC as well.

    I can only wish AMD could accelerate their roadmap.
    A question pops into my mind: when was the last time AMD pulled their roadmap forward? Intel has picked up the pace quite a few times, while AMD always seems to lag behind their own schedule.
  • 7Enigma - Thursday, November 12, 2009 - link

    That's actually a very interesting question (when has AMD brought a CPU product to launch before originally expected).

    Anand could you comment?
  • mikeepu - Monday, November 16, 2009 - link

    Weren't Istanbul and Shanghai slightly ahead of schedule?
  • Glenn - Wednesday, November 11, 2009 - link

    If they plan on 2011 we might see it in late 2012 and the world is coming to an end then, I heard?
  • murray13 - Wednesday, November 11, 2009 - link

    The overall look of Fusion reminds me of way back in time when they added a 'co-processor' chip, adding FP power.

    Only now they are taking FP power away and letting it be done by the chip that does it so well.

    Old ideas never really go away.

    K.I.S.S. rears its head again, I just hope performance will be up to the elegance of the architecture.
  • Eeqmcsq - Wednesday, November 11, 2009 - link

    Why does the slide "Heterogeneous Computing" appear twice?
  • Zstream - Wednesday, November 11, 2009 - link

    Better than homosexual computing? We would not have baby Athlons if that were the case :D
  • Denithor - Thursday, November 12, 2009 - link

    AMD chose to call those "baby Athlons" Bobcat...
  • StormyParis - Wednesday, November 11, 2009 - link

    This is totally biased: why doesn't this article start off with an Intel slide? They always do! Stop the fanboyism, Anand ^^
  • jav6454 - Wednesday, November 11, 2009 - link

    I like innovation, but delivering by 2011 could make AMD fall behind Intel. Although, by the look of these designs, AMD seems to have a better multithreading solution than Intel.
  • webmastir - Wednesday, November 11, 2009 - link

    http://digg.com/hardware/AMD_Unveils_Bulldozer_Bob...
  • Zstream - Wednesday, November 11, 2009 - link

    No Intel picture for the front page lol.
  • Denithor - Thursday, November 12, 2009 - link

    Beat me to it!
  • jaimeoc - Wednesday, November 11, 2009 - link

    I'm sorry, but I'm not familiar with all this acronym fuzz.
    Anybody know what the difference is between the A-pipe and M-pipe in the Bobcat slide? Could it be one specific pipe for add and one for multiply?
  • mczak - Thursday, November 12, 2009 - link

    That would certainly also be my interpretation. One mul and one add pipe hints at a design similar to the current K10, so it seems Bobcat won't get the updated FPU from Bulldozer (both pipes FMAC-capable) but will get the same integer unit (however, only one instead of two) as Bulldozer.
  • bakerzdosen - Wednesday, November 11, 2009 - link

    I'd LOVE to see AMD come out with a true Nehalem-EX competitor, but it doesn't look like 2010 will really be the year where AMD leapfrogs Intel. And that's too bad for all of us.
  • yyrkoon - Wednesday, November 11, 2009 - link

    Please explain to me why it is necessary for "AMD to leapfrog Intel". Well, first, I suppose some clarification would be necessary.

    Anyway, it is not necessary for Intel OR AMD to lead anything. All that should matter to us consumers is that there is enough competitiveness in the market that *we* the consumers get a good product for a fair price. You should hope that neither goes away, and both stay competitive.
  • strikeback03 - Thursday, November 12, 2009 - link

    Might not be technically possible, but it is nice to think that if AMD were to leapfrog Intel, a) that would be a very good AMD processor; and b) Intel would respond with something on the order of the P4 to C2D jump. Whereas when Intel already holds the lead, we see the C2D to i7 jump.
  • Indigo64 - Wednesday, November 11, 2009 - link

    It's fun to see them leapfrog for sure, but I'm with the poster I'm quoting - as long as the consumer wins. I'm always for AMD AND Intel's success. I can't wait to see what each one has to bring to the table. Innovation FTW.
  • Scali - Thursday, November 12, 2009 - link

    Yea, the rules of the game are pretty simple...
    If AMD wins, Intel will compete on price by offering cheaper alternatives.
    If Intel wins, AMD will compete on price by offering cheaper alternatives.
    If they're tied, they're going for a direct pricewar, and both will have low prices.

    I think in any situation, at least one of the two will be going for maximum price/performance. It doesn't really matter whether that's AMD or Intel, does it? As long as the product does what it says on the tin.
  • Calin - Thursday, November 12, 2009 - link

    Competing on price is financially bad for the company that does it. Remember when AMD had the performance crown? Its products were more expensive than Intel's, which meant more money for the same products. As soon as Intel leapfrogged AMD, the same products were cheaper and brought in less money.
  • JPForums - Thursday, November 12, 2009 - link

    Funny, I seem to remember a different view of when AMD was on top. They offered a top-of-the-line Athlon 64 FX for ~$750. The clearly inferior but competing P4 Extreme Edition from Intel was still priced around $1000.

    In the mainstream, you did pay a premium for AMD chips. However, the premium wasn't that great. Consider that Intel still had to lower the prices of their offerings to compete. Even so, they didn't lower prices to the point that they could be called a better value. In other words, AMD's offerings at the time had AT LEAST enough of a performance advantage to justify the premium they sought. Further, demand for the AMD chips was high enough that they couldn't always keep up with it. It's a wonder prices didn't get significantly higher.

    Now that Intel is back on top, AMD has shown itself much more willing to price its products at points that (for the most part) are justified by their performance. When Intel lowered prices to match the value of AMD's processors, AMD kept lowering prices to maintain the value advantage.

    Notably, Intel's processor lineup suffers an incredible price disparity between the chips that AMD can compete with and the chips for which there is no competition. Just look at the current pricing of the Core i7 series (from newegg).

    Excerpts from the 900 series:
    Model     920    950    975
    Pricing   $289   $570   $1000

    or if you prefer the 800 series:

    Model     860    870
    Pricing   $290   $550

    That all said, if AMD got on top for long enough, they would eventually do the same thing. It isn't wrong of a company to charge a premium for a superior product. Of course, using monopolistic and/or illegal business practices to restrict alternatives and trap people with higher costs would be wrong, but that's a different story. As it stands, the prices of Intel's mid-to-high-grade offerings are competitive. If they wanted to sell more high-end chips, they'd lower the prices of those chips to the point that more consumers considered them worth the money over the mid-to-high-grade chips. It seems there are enough people willing to pay the premium that Intel isn't worried about it.

    This is why leapfrogging is important to the consumer. People are much less willing to pay premiums for chips if they believe your competitor will come out with something better in the near future. Just look at ATi and nVidia. nVidia had been dominant so long when the GTX 200 series chips came out that they thought they could charge $650 for the high-end single-GPU card and people would pay it. ATi then released a competing (not even dominant) 4800 series at a much lower cost. The next high-end single-GPU card nVidia released was ~$200 cheaper at launch than the previous one. If they release Fermi at $650, I seriously doubt they'll achieve consumer market penetration. Of course, Fermi seems geared toward the scientific and professional markets, so they may not care about consumer market penetration.

    In the long run, you don't want a situation where one competitor is always on top and the other is constantly competing on price, either. Such a situation allows the dominant competitor to sink significant amounts of money into R&D, while the other competitor's budget dwindles and they have an increasingly hard time keeping up. In the worst case, the dominant competitor gains patents that cripple the other competitor's ability to effectively compete.
  • Scali - Friday, November 13, 2009 - link

    The way I recall it, the FX was $1000 as well (okay, perhaps there were two models, one at $750, the other at $1000).
    At any rate, I think Intel was in denial initially and tried to maintain the high prices... or people just continued to pay them anyway, because of Intel's stronger brand recognition.

    However, when Intel launched the Pentium D, they started an all-out pricewar. They were absolute bargains. Intel repeated the trick with the quadcore. The Q6600 was being sold at very low prices, even before AMD actually had a quadcore on the market. A pre-emptive strike, if you will.

    If you ask me, the Core i5 and i7 860 are priced very nicely. They're still at 'mainstream' prices, compared to what we paid a few years ago for mainstream Athlon X2s, Pentium 4/D or Core2. $200-$300 for a CPU that is hugging the outer extremes of performance is just great value.
    Currently AMD is cheaper, but you take a pretty significant dip in performance compared to Core i5/i7.

    You are ignoring the Core2 which is still being sold under the Core i5/i7 models. Core2 is also priced very competitively against AMD.

    It's not like $500-$1000 CPUs only happen when there's no competition. As you said yourself, the Athlon FX and Pentium Extreme Edition were sold at those prices as well, and that was the most competitive period in all of Intel/AMD's history.
    So I think it's a fallacy to claim that prices go up when there's no competition. If AMD has the performance, they will just inflate their prices, rather than Intel's prices going down on the high end.
    Currently they don't have the performance, so AMD's offering in that market segment is empty. The market segment has always been there, though, and AMD has been in it.
  • Scali - Thursday, November 12, 2009 - link

    Yeah, well, we were talking about consumer benefits.
    If AMD is having trouble bringing in enough money, that's their problem.
    They'll either have to adapt and make their products cheaper to produce, or make them perform faster, so they can get enough profit out of them.
    If they fail to do that, well, too bad. History is full of companies that went out of business simply because their products weren't bringing in enough profit. That's just business.
  • blyndy - Wednesday, November 11, 2009 - link

    If AMD wanted Hyper-Threading (they should), shouldn't they be able to license it from Intel given their patent sharing agreements (eg AMD64)?
  • JumpingJack - Thursday, November 12, 2009 - link

    Actually, I am not so sure AMD would need a 'license'. SMT (simultaneous multithreading) has been around for years: researched, published, and implemented in many other processor designs besides x86. I am not so sure Intel would hold a patent on something as generic as SMT (i.e. Hyper-Threading). There could be underlying details of the exact implementation that are patented, but at that level I would suspect there is 'more than one way to skin a cat', meaning that as long as the details of any implementation are unique, there is no harm, no foul.

    However, it is clear AMD is taking a much more radical approach to bringing multithreading into a single execution core; it is almost a melded dual-core chip. This will certainly yield bigger gains in a multithreaded environment than the simpler SMT approach. How much bigger remains to be seen.
  • Alberto - Friday, November 13, 2009 - link

    I don't know if this new AMD solution is good. It is less elegant than SMT and very "bulldozer"-like. Likely the new CPU will be clocked lower than its Intel counterpart in order to stay within TDP limits.
    So the +90% versus Intel's +40% from SMT done well will be undermined by power consumption problems.
    Moreover, this CPU seems "not for the consumer space" but server-only. Two full integer units with L1 cache mean more die space, lower yields, and more leakage, and they are useless in consumer software.
    This is the reason why AMD does not utilize the new core in Fusion.

    Another K8? Yes, it seems so... AMD has made the same old mistake.
    They don't have the technical resources to make a new CPU good for all uses.
  • JumpingJack - Saturday, November 14, 2009 - link

    I don't think it is fair to call it less elegant; it is similar in some ways but radically different in others.

    With Intel's SMT implementation, they simply needed logic to track and manage two distinct contextual threads; the cache, the pipeline, dispatch, and execution resources are all shared, either fixed or demand-based sharing. Translation: it is possible that from the perspective of one thread or the other (depending on the workload/environment), each application could perform worse than if run alone, even if the total number of instructions retired is higher.

    AMD is doing much more here, as it appears they are not sharing nearly as many resources but actually duplicating them. There are trade-offs, of course: die size and efficiency with respect to performance. AMD seems to be designing to the concept that the software infrastructure will be much more advanced in multithreaded capabilities by the time they release their product. A common quandary for any CPU designer: what will the market/software environment look like years from now, such that 'my product' fits best within that framework?
  • Calin - Thursday, November 12, 2009 - link

    Performance could be improved by up to some 90% in integer-heavy workloads, but it might come with a small decrease in performance in floating-point-heavy benchmarks. Remember that the first Hyper-Threading version from Intel had a performance delta between +40% and -20% (and in some very particular cases, even worse than that).
  • blyndy - Thursday, November 12, 2009 - link

    CPU design is a patent minefield; all the other/better ways of skinning the cat could well be locked up, so I think they would need at least a few patent licenses from someone.

    To be honest, I think it's amazing that AMD can even compete with Intel, considering that Intel is 10x bigger and has 10x the R&D and patents.
  • Samus - Wednesday, November 11, 2009 - link

    Why license Jackson technology from Intel when they have a perfectly competitive alternative architecture design?
  • hamiltonham - Friday, May 28, 2010 - link

    In short, each module consists of two integer (whole number) cores
    and one floating-point (numbers with decimals) core.
    It's an architecture designed for integer calculations,
    whereas the majority of Intel's processor architectures are designed for FP calculations.

    The computer sees the two INT cores as separate processors,
    even though all of the FP calculations go to the same core.

    So AMD can market it as an 8- or 16-core processor.

    Simple enough?
  • tomsworkshop - Friday, December 24, 2010 - link

    Let's say the upcoming AMD dual-core CPU is named Phenom III X2: would one Phenom III X2 have 2 Bulldozer cores inside, each Bulldozer core with 2 tiny cores inside, and each tiny core with 2 pipelines inside?
