29 Comments
eok - Thursday, March 20, 2008 - link
The article, while great news, still leaves me guessing at what they actually tested and how. They say they got their hands on a 2.2GHz B3, and the CPU-Z data confirms that. But in the final "extreme" test, they show the B3 @ 2.3GHz. So, they overclocked? Via an unlocked multiplier or by raising the FSB?
eye smite - Saturday, March 15, 2008 - link
I don't typically read your site anymore, as the articles since the Phenom launch, particularly the Phenom launch article itself, read more like rants than reviews. Every AMD article reads the same: it's a good CPU, but... and then comes the laundry list of issues and why you see them as critical or distasteful and so on. Why the hell can't you just do a straightforward review based on the facts and leave the colorful commentary for the political videos on YouTube? You lot suffer from some idiotic perception issues.
Narg - Friday, March 14, 2008 - link
Personally, I think the problem lies within the three cache levels used. When I first read the Phenom specs, I was sad to see there were three levels of cache; the complexity that adds to a processor is exponential! I would have been far happier to see a single, much larger L2 cache shared between all the cores, with better access. I can't imagine why they opted for three levels. It seems to be a hurried solution to get the Phenom to market, since so many of the single-core chips AMD has built in the past have had three levels of cache, which of course helped those chips. They need to design an effective chip with only two levels of cache.
BernardP - Friday, March 14, 2008 - link
About the article, I am thankful for the most complete and understandable explanation of the TLB error and fixes I have seen since Phenom was released.
bradley - Wednesday, March 12, 2008 - link
We finally have more concrete instances of the bug being induced and documented. Though maybe we have different opinions of what constitutes a rare bug; if it took AMD to inform us, perhaps these are fairly rare instances. In hindsight, the coverage of the TLB issue does appear vastly disproportionate to the actual threat itself.
DigitalFreak - Wednesday, March 12, 2008 - link
On the desktop, yes. My understanding is that it's a huge issue with the Opterons, which are more likely to be used in situations where the bug crops up (VMware, etc.). It also explains why you still can't buy a quad-core AMD server from HP, Dell, etc.
Griswold - Thursday, March 13, 2008 - link
It's not so much of an issue with Opterons if you use a Unix/Linux derivative and apply AMD's own kernel patch, which works around the problem with almost zero performance loss (it doesn't just disable the TLB like the BIOS option does). Windows servers, on the other hand...
Still, it's understandable that some vendors just stay away from it until B3.
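For anyone who isn't sure which stepping they're actually running, the family/model/stepping is right there in /proc/cpuinfo on Linux. A minimal sketch, assuming (as widely reported) that the affected B2 parts identify as family 16, model 2, stepping 2 and the fixed B3 parts as stepping 3:

```python
# Minimal sketch: identify a K10 quad-core stepping from /proc/cpuinfo (Linux).
# Assumption: B2 (erratum 298 present) = family 16, model 2, stepping 2;
#             B3 (erratum 298 fixed)   = family 16, model 2, stepping 3.
def cpu_ids():
    info = {}
    with open("/proc/cpuinfo") as f:
        for line in f:
            if ":" in line:
                key, _, val = line.partition(":")
                info[key.strip()] = val.strip()
            if not line.strip():            # first processor block is enough
                break
    return int(info["cpu family"]), int(info["model"]), int(info["stepping"])

family, model, stepping = cpu_ids()
if (family, model) == (16, 2):
    verdict = "B3 or later, erratum fixed" if stepping >= 3 else "B2 or earlier, erratum present"
    print("K10 quad-core, stepping %d (%s)" % (stepping, verdict))
else:
    print("Not a family 16, model 2 part (family %d, model %d)" % (family, model))
```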
JumpingJack - Friday, March 14, 2008 - link
http://www.amd.com/us-en/assets/content_type/white...
As most have stated, errata are a fact of life for every CPU. The fact that Intel or AMD publish errata at all is because they found something so obscure that the typical quality assurance testing (which must be rigorous and thorough) never expressed the bug. The occurrence is so rare that the problems it may cause would likely go unnoticed by the average user. That is fine for the desktop: a crash or lockup would probably result in a few choice, colorful words directed at Microsoft, then a reboot, and they're on their way.
In the enterprise space, though, uptime is everything, and more important still is the sanctity of the data. From AMD's own publication on Errata 298:
" One or more of the following events may occur:
• Machine check for an L3 protocol error. The MC4 status register (MSR 0000_0410) will be
equal to B2000000_000B0C0F or BA000000_000B0C0F. The MC4 address register (MSR
0000_0412) will be equal to 26h.
• Loss of coherency on a cache line containing a page translation table entry.
• Data corruption. "
It is the last possibility that most likely resulted in AMD's decision to hold off serving the mainstream server market, and they made absolutely the right decision in doing so...
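For the curious, those machine-check values can be read back directly. The sketch below is only an illustration, using the MC4 register numbers exactly as quoted above; it assumes a Linux box with the msr kernel module loaded (modprobe msr) and root privileges:

```python
# Illustrative sketch: read the MC4 status/address MSRs named in the erratum
# text above through Linux's /dev/cpu/N/msr interface (requires 'modprobe msr'
# and root). Register numbers and values are taken from the quote above.
import struct

MC4_STATUS = 0x0410   # "MC4 status register (MSR 0000_0410)" per the quote
MC4_ADDR   = 0x0412   # "MC4 address register (MSR 0000_0412)" per the quote
ERR298_SIGNATURES = {0xB2000000000B0C0F, 0xBA000000000B0C0F}

def read_msr(cpu, reg):
    with open("/dev/cpu/%d/msr" % cpu, "rb") as f:
        f.seek(reg)                               # MSR number is the file offset
        return struct.unpack("<Q", f.read(8))[0]  # MSRs are 64-bit values

status = read_msr(0, MC4_STATUS)
if status in ERR298_SIGNATURES and read_msr(0, MC4_ADDR) == 0x26:
    print("MC4 shows the L3 protocol error pattern described for erratum 298")
else:
    print("MC4 status 0x%016X - no erratum 298 signature logged" % status)
```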
bradley - Wednesday, March 12, 2008 - link
Yes, that goes without saying. I guess most assumed the same held true for Phenom as for Barcelona when taking the TLB errata into consideration, and there wasn't one clear voice to dissent or say otherwise, which is unfortunate. For whatever reason, Intel's own C2D TLB bug, which could also cause system instability, didn't receive nearly as much press. Every chip has bugs, but only when documented does it become an erratum and get revised.
larson0699 - Wednesday, March 12, 2008 - link
SPEC CPU 2006 in Vista x64 may be real-world enough for some to warrant the fix (though IMO it should have been right the first time), but it's really not that common.
Only labrats and enthusiasts run benchmarks (but at least they have my respect), and only complete tools run the version of an already heavy OS that further bottlenecks most of today's apps. As a tech, I have no sympathy for anyone who chooses to run down MS's path and patronize their every mistake. Yes, it may be a hasty opinion, but it is backed by common sense. There is nothing XP or an Xbox 360 cannot do better.
Anyway... *sigh*
High fives to Anand for another awesome in-depth review, for making me one article smarter, and to AMD for more practical results as of late.
P.S. These guys run their own show; they don't plagiarize others. Please cite your evidence to them directly instead of spreading FUD in the forums.
Griswold - Thursday, March 13, 2008 - link
You're a tech? Is that what they call call-center jobs at Dell nowadays? Take your uneducated opinion and stick it where the sun doesn't shine, please.
whatthehey - Wednesday, March 12, 2008 - link
I believe the above "you stole this" comment is just spam. So just ignore it.
As for the x64 and SPEC CPU stuff... x64 is becoming relevant, as 2GB PCs are common and 4GB setups are increasingly popular. The problem is, you can only fully realize the use of 4GB (or more) of RAM if you run a 64-bit OS. Now we just need more 64-bit apps (Photoshop, I'm looking at you!). I'm personally running a Q6600 with 2x2GB and Vista 64-bit, and have no complaints other than the lack of 64-bit applications (not games - applications). And don't even think about bringing the Xbox 360 into the picture... last I checked, you couldn't run Photoshop, Office, or any other real business application on an Xbox 360.
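To put rough numbers on the 32-bit vs. 64-bit point, here's a quick back-of-the-envelope sketch (the 48-bit figure is the virtual address width current x86-64 implementations use; actual usable RAM also depends on the chipset and the OS):

```python
# Back-of-the-envelope: total address space under 32-bit vs. current x86-64.
# Usable RAM in practice also depends on the chipset, BIOS memory remapping,
# and the OS edition, so treat these as upper bounds, not exact figures.
GIB = 2 ** 30

flat_32bit  = 2 ** 32    # 32-bit addresses: 4 GiB total
x86_64_virt = 2 ** 48    # 48-bit virtual addresses on current x86-64 parts

print("32-bit address space : %6d GiB" % (flat_32bit // GIB))    # 4
print("x86-64 virtual space : %6d GiB" % (x86_64_virt // GIB))   # 262144
```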
Regarding the TLB errata, we have at least two confirmed ways of causing this erratum to rear its ugly head. Having done plenty of system testing, I can state that there are effectively infinite combinations of operations a CPU might be asked to perform. Multitasking in particular throws a wrench into the gears, because you can never test all the potential multitasking scenarios.
With two cases where the TLB bug comes up, you can be sure there are more. They may be rare now, but we don't know what will happen going forward. Personally, I don't want a system where any time I experience a crash, I'm left to wonder, "Was that something wrong with the OS or application, or was that just the TLB error popping up in some new way?"
This is the AMD equivalent of the FDIV error Intel had back in the early Pentium days; unfortunately, the workaround has a far greater impact on performance. If you want to take a chance on the TLB error never affecting your system, you're welcome to do so. If that's the case, however, I'm wondering what you use your system for that makes quad-core even necessary.
Darkness - Wednesday, March 12, 2008 - link
AMDPhenom, if you are going to write a review, please do not cheat.
This review was just copied and pasted from WWW.IMB.COM/UK.
We would like GENUINE reviews written by someone who has actually tested this range.
You have not tested this personally.
Please do not say you have, as we all know you have not.
Please, people, the Phenom range is buggy; I would not buy it until AMD has sorted out these bugs.
I have read up on this range on many sites, and all the reviews say it is buggy and that no amount of patches and updates is going to solve the problems for you.
Please wait before you decide to buy.
stmok - Wednesday, March 12, 2008 - link
Darkness, if you're going to write feedback, please do not conduct fraud-related activities.
www.imb.com is a bank located in California.
DigitalFreak - Wednesday, March 12, 2008 - link
LOL. Not even a valid site. What a dipsh!t.nomagic - Wednesday, March 12, 2008 - link
Is this supposed to be some kind of joke?vijay333 - Wednesday, March 12, 2008 - link
dammit, who disabled the moron filter to the internet again?murphyslabrat - Wednesday, March 12, 2008 - link
If this was Dailytech, I would soooooo rate you up!fitten - Wednesday, March 12, 2008 - link
Any news on if they fixed the "Core2" problem?Brian23 - Wednesday, March 12, 2008 - link
I really wish you would do a clock-for-clock comparison of the Phenom to a Core 2 chip. That way we could see how the TLB fix affects which CPU is the preferred one in the lower-cost segment of the CPU market.
aguilpa1 - Thursday, March 13, 2008 - link
It has already been done. There are tons of sites that have already benchmarked the Phenom (errata fix disabled) against the Core 2. Fixing the TLB via hardware doesn't magically make it any faster; there is only a slight increase, and it's not significant.
Redoing all the benchmarks just to prove a slight increase, while still lagging behind overall, is just beating a dead horse at this point.
crimson117 - Wednesday, March 12, 2008 - link
Clock-for-clock is an irrelevant metric. So what if 2.0GHz on a C2D is faster than 2.0GHz on a Phenom?
Performance per dollar or performance per watt are much more relevant metrics.
backtomac - Wednesday, March 12, 2008 - link
All those metrics are important. Each individual will place a different weight on each metric.
flipmode - Wednesday, March 12, 2008 - link
Says you. It's relevant to at least two people here.
JarredWalton - Thursday, March 13, 2008 - link
Actually, I'd say clock-for-clock is one of the worst comparisons to make, short of two things:
1) If available clock speeds are similar (they're not - Core 2 Quad tends to have about a 33% advantage in clock speed)
2) If you want to look purely at the architectural performance
While item two looks interesting at first, you have to remember that architecture and design ultimately have a large impact on clock speed. Which is better: more pipeline stages and higher clock speeds, or fewer pipeline stages with lower clock speeds? If you think you know the answer, go work for Intel or AMD. In truth, there is no correct answer - both approaches have merits, and so we end up with a balancing act.
Pentium 4 (NetBurst) is often regarded as going too far in the way of pipeline stages. While Prescott certainly had some problems due to its pipeline stage count, Northwood and the current Penryn are actually not that far off in terms of stages. The difference is that Penryn (and Core 2 in general) have made numerous changes to the underlying architecture that make the pipeline stage count less important now.
Clock for clock, I'd imagine an updated 486 core could compete very well in today's market. That is, IF you could actually make such a core. Just think about it: four pipeline stages, give it some more cache, add in SSE and x64 support, put two or four cores on a chip, and then run that sucker at 3.0GHz! But each stage in the old 486 requires so much work to be done that you could never actually get such a design to scale to 2.0GHz on current process technology, let alone 3-4GHz.
So when someone says clock-for-clock comparisons are irrelevant, I largely tend to agree. Why don't we do a "clock-for-clock" comparison of a tractor-trailer diesel engine and a Formula One engine? Or a "clock-for-clock" comparison of apples and oranges? The latter takes things to an extreme to illustrate a point, but in the case of the former all you could really end up determining is that large diesel engines and racing engines are vastly different.
K10 and Penryn might not be quite so different, but they are dissimilar in enough ways that the best way to compare them really ends up being a large selection of real-world performance metrics. Sure, a 2.4GHz Penryn and a 2.4GHz Phenom X4 give us some idea of how the designs match up, but at the end of the day what really matters is price, performance, stability/reliability, and power requirements (the latter also impacting noise).
flipmode - Sunday, March 16, 2008 - link
Whether or not there is value in comparing IPC is pretty subjective. I happen to disagree with you - I find it valuable, at least for the time being, while both AMD and Intel are offering CPUs at comparable clock speeds (1.6GHz to 3.2GHz, generally). If AMD's chips were all below 2.5GHz and Intel's were all above 2.5GHz, then knowing how they performed at the same clock speed would be much less useful to me, since they wouldn't actually operate at the same clock speed. But it's not the end of the world if AnandTech chooses not to look at such things.
Clock for clock is quite relevant because prices change and people overclock. It doesn't mean someone simply picks whichever has more performance per MHz, or whichever has the higher MHz, or any such thing; rather, within a family it is quite relevant to know how a chip performs clock for clock, and then the user does the math to further evaluate other alternatives.
However, it does give a foundation for comparing prices and clock speeds not explicitly compared. It also helps to evaluate the potential gain from overclocking.
You are right, there are better methods. This one (clock-for-clock performance), while not a very valuable metric in and of itself, does allow better extrapolation.
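As a minimal sketch of that kind of extrapolation, with made-up placeholder scores and assuming roughly linear scaling with clock (an optimistic assumption; memory-bound workloads scale worse than this):

```python
# Minimal sketch of clock-for-clock extrapolation using hypothetical numbers.
# Assumes performance scales linearly with clock speed, which is optimistic:
# memory-bound workloads gain less than this from extra MHz.

def extrapolate(score, measured_clock_ghz, target_clock_ghz):
    """Estimate a benchmark score at a different clock, same architecture."""
    return score * (target_clock_ghz / measured_clock_ghz)

# Hypothetical clock-for-clock result: both chips measured at 2.4 GHz.
measured = {"Chip A": 100.0, "Chip B": 115.0}    # placeholder scores, not real data

# Extrapolate to the stock or overclocked speeds you actually care about.
for chip, score in measured.items():
    for target_ghz in (2.4, 2.8, 3.2):
        print("%s @ %.1f GHz ~ %.0f" % (chip, target_ghz, extrapolate(score, 2.4, target_ghz)))
```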
Cygni - Wednesday, March 12, 2008 - link
It's called a PREview for a reason. ;) I'm sure there will be a full AT rundown of the chip later. This short blurb is only to tell us about the TLB fix.