63 Comments
IntelUser2000 - Thursday, December 21, 2006 - link
Looking at PURELY gaming purposes, a faster CPU is ABSOLUTELY USELESS. Face it, the majority of gamers considered hardcore don't run 2560x1600, and at the resolutions they do run, 1280x1024 or 1600x1200, it looks like getting a faster CPU is worth it.
Nope. Why get a faster CPU when you already get 100+ frames per second?? What a faster video card enables is higher quality graphics WITHOUT losing performance compared to lower-performing video cards. A CPU does what?? An absolutely useless 20 additional frames on top of the already more-than-enough 100+.
Now we still see reviewers benching systems with 4xAA and 16xAF at ever higher (and ridiculously unreal) resolutions. But we know the latest video cards allow super high AA features too. 1600x1200 with 16xAA 64xAF (OK, I forgot what the highest AF was, cut me some slack ok?? Way too many features in new video cards, lost track around X800 time...) sounds pretty good.
esterhasz - Sunday, December 17, 2006 - link
I would have loved to see how low the 65nm parts undervolt...
Tujan - Friday, December 15, 2006 - link
It would have been better to compare the Intel 6400 to a 4600 X2, since those come in about even in application performance.
Then compare the 5000 to them both.
Performance per watt is meaningless here, since the formula (fps/power) is also useless without a baseline.
The 6600 outperforms the Intel 6400. The 5000 X2 outperforms the AMD 4600 X2. So the higher FPS of the higher-performing processor skews the formula as well.
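To make the baseline point concrete, here's a rough sketch of how the fps/power figure behaves; every number in it is invented for illustration, not taken from the article:

```python
# Rough sketch of the fps-per-watt metric being debated (all numbers invented for
# illustration; they are not the article's measurements).
systems = {
    "Chip A": {"fps": 120, "system_watts": 220},
    "Chip B": {"fps": 100, "system_watts": 200},
}

for name, s in systems.items():
    print(f"{name}: {s['fps'] / s['system_watts']:.3f} fps/W (whole system)")

# Without subtracting a baseline (GPU, board, drives, PSU losses), the faster chip
# drags its higher absolute fps into the ratio, so the metric partly just restates
# "this one is faster". Subtract a guessed non-CPU baseline and the picture can flip:
baseline_watts = 150  # hypothetical non-CPU share of the system draw
for name, s in systems.items():
    print(f"{name}: {s['fps'] / (s['system_watts'] - baseline_watts):.2f} fps per CPU-watt")
```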
I see that you're dealing with a range between $15 and $30, though.
Nobody knows just how much better the AMD 5000 X2 does than the 4600 X2 anyway.
I see you have the EE AMD (X2) chips there, so you couldn't go wrong in that respect purchasing an AMD 65nm X2 5000 (non-extreme), seeing the 5000 X2 only has 512KBx2 cache.
As long as AMD doesn't push everybody up to those thousand-dollar processors, I'm OK with them.
coldpower27 - Friday, December 15, 2006 - link
So? That's AMD's problem if they can't afford to put 4MB of L2 on their processors. The comparison is what it is: processors at equal price points. They only had the 5000+ on hand, so it was compared to the closest processor in price.
The 4600+ was only used because it is the highest 65W TDP processor on the 90nm node. It was probably used to see if the 65W TDP was justified on the 65nm 5000+.
Genx87 - Friday, December 15, 2006 - link
AMD needs a new core and that is the scientific truth!
hubajube - Friday, December 15, 2006 - link
I don't know about you'ins. But I give a shit less about Intel's or AMD's or Macy's marketing prowess. I do my own research from my own sources and make decisions based on that. What's an ad? What's a TV commercial? Only the clueless and the lazy need those to guide them in what products to buy or even if the product exists. I don't give a shit about marketing.
duploxxx - Friday, December 15, 2006 - link
Nice price comparison... maybe you should also add that for the same feature set you get on the mobo, there is a price difference of 1/3 in favor of AMD! So put an E6400 into this comparison and then you have the same price tag!
Well, we all know OC'ing a C2D is for noobs... you don't need any knowledge... (see it as good or as bad, whatever you want)
but oc'ing AMD does require knowledge.
The HTT link is at 1125, what do you expect to reach with an OC? On 90nm parts most HTT links topped out around 1060-1080, so maybe try to make a decent OC...
And where is a screenshot of the memory speed and divider when you are OC'ing?
Anand Lal Shimpi - Friday, December 15, 2006 - link
Sorry, I should have addressed this in the review - HTT speed had no impact on my overclocking results with this particular chip; even with a HT multiplier of 4X the chip won't get into Windows at 2.99GHz. Memory wasn't a factor as it was set to the lowest speed possible in order to see how far we could push the CPU.
I have a feeling that with better air cooling close to 3GHz may be possible, but I wanted to look at the worst case scenario overclocking potential of just using a stock heatsink/fan similar to what we did in our Core 2 overclocking article upon its release.
I'm working on the Brisbane 4800+ now and will find out soon if it overclocks any better.
Take care,
Anand
JarredWalton - Friday, December 15, 2006 - link
OC'ing AMD is pretty simple as well. Just drop the HTT multiplier to 4X if you get above about 220 MHz - not that the total HTT speed generally matters. You can also drop memory dividers if you have less capable RAM. I still think 775 overclocking requires a bit more knowledge/effort - a bit, I said, not a lot! I can't see anyone but an elitist thinking easier OC'ing would be a bad thing, though.
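As a rough sketch of how those knobs interact on AM2 (the reference clock and multipliers below are illustrative picks, not anyone's actual BIOS settings):

```python
# Back-of-the-envelope for the AM2 knobs discussed above (values are illustrative,
# not the review's settings). Everything scales off the reference ("HTT") clock.
def clocks(ref_mhz, cpu_mult, ht_mult):
    return {"cpu_mhz": ref_mhz * cpu_mult, "ht_link_mhz": ref_mhz * ht_mult}

print(clocks(225, 13, 5))  # {'cpu_mhz': 2925, 'ht_link_mhz': 1125} - the 1125MHz link people noticed
print(clocks(225, 13, 4))  # {'cpu_mhz': 2925, 'ht_link_mhz': 900}  - same CPU clock, link back near spec
```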
duploxxx - Friday, December 15, 2006 - link
Why is 775 OC harder? It has one factor less to keep in mind, and then I won't even speak about the memory options...
I've already used a 6600 to play with on an ASUS board: just left everything on auto, put the FSB to 400, and hey, it ran stable at 3600 (depending on your memory of course). WTF, you could say, on stock vcore? Nope, the board increased the vcore automatically... it was running on 1.56 auto :)
Will you update this review because of the HTT? Or will you leave it like lots of your reviews (like the Woodcrest testing), saying that some more tests are coming but eventually nothing changes... People actually read your reviews and believe them wholeheartedly... so an accurate review is always welcome...
Anand Lal Shimpi - Friday, December 15, 2006 - link
Part 2 is coming Monday with Brisbane 4800+ results :)
Take care,
Anand
Anand Lal Shimpi - Monday, December 18, 2006 - link
Just an update guys - Part 2 is ready to go, just waiting for a few clarifications from AMD on performance, memory dividers and die size of Brisbane.
Take care,
Anand
OcHungry - Friday, December 15, 2006 - link
Mr. Anand, is it possible for you to use ASUS's Crosshair motherboard when attempting maximum overclocking of these 65nm chips? It's only fair, when you use a top Intel board, not to leave out a top AM2 board. My understanding is that ASUS's Crosshair board is ~15%-17% a better performer than other boards. Also, I've heard that the DFI board is a great overclocker and you used it in the s939 reviews. I would appreciate it if you used either board, but preferably the ASUS Crosshair. Thanks.
clairvoyant129 - Sunday, December 17, 2006 - link
A different motherboard won't save this sorry ass piece of junk.
Anand Lal Shimpi - Friday, December 15, 2006 - link
Unfortunately I don't have either of those boards here for testing, but I'm sure I can persuade either Gary or Wes to do a follow-up with a more serious look at 65nm overclocking once I'm done with the power analysis on these chips. :)
Take care,
Anand
xenon74 - Friday, December 15, 2006 - link
Jarred, why is the HT Link then @ 1125MHz on Anand's "unfortunate" OC attempt?
ADDAvenger - Friday, December 15, 2006 - link
Anyone else wondering what this means for the new generation of Turion X2s?
I know Santa Rosa is coming out either Q1 or Q2 this year; it's supposed to support an 800 or 400MHz FSB, depending on system load, which should drop power consumption a bit. But, as I understand it, the real battery suckers are the CPU, display, and HDD. (Yeah, throw in the GPU too if you have discrete graphics.) But where does that leave the chipset? Will Santa Rosa really do much for battery life?
If not, AMD could make serious inroads into the laptop segment. 65W is hot for a laptop, but if they can drop their desktop TDPs by about a 1/3 or 1/4, why can't they do the same for their laptop chips?
mino - Friday, December 15, 2006 - link
They can. And they will...
FYI, even 90nm Turions consume LESS power than Merom. (Merom is more power hungry than Yonah.)
Also, the RS690M is about to rule the integrated market (along with the RS700M for C2D). In other words, AMD is gonna rule the chipset market for both platforms while being pretty competitive in CPUs, especially for business use.
(For business, features and battery life are what count, not absolute performance.)
Johnmcl7 - Friday, December 15, 2006 - link
Not according to AnandTech: http://www.anandtech.com/cpuchipsets/showdoc.aspx?...
It shows power consumption to be nearly identical between the two processors.
Not sure if you are comparing Turion or Turion X2 to Merom, but aside from mismatched comparisons (such as comparing the power consumption of a Turion system with onboard graphics and a Merom system with dedicated graphics), I've not seen like-for-like tests showing Turions to be more power efficient:
"Does that make the Core 2 Duo worse at power saving than Turion X2? Without equivalent setups (i.e. both using IGP or both using discrete GPUs), we can't say for certain. We can say that an ASUS W5F with a T2300 chip (1.67GHz 2MB cache) that we had at one point bottomed out at 19W in idle mode, so Core Duo and Turion X2 appear close in low power states, with Turion X2 perhaps holding a slight 1-2W advantage. Our testing of Core Duo vs. Core 2 Duo showed the CPUs to be nearly equal in power draw, so it appears AMD is equal or slightly better than Intel at minimum power draw. At maximum power draw by the CPU, Turion X2 is definitely using more power than Core 2 Duo, as even with higher performance/power components the ASUS A8JS still uses less power than the MSI TL-60 at 100% CPU load."
http://www.anandtech.com/showdoc.aspx?i=2856&p...
John
rqle - Friday, December 15, 2006 - link
Darn, I was so hoping AMD 65nm would give an easy 3.3GHz+ like the Intel chips =(. This chip may not reflect overall OC potential, but damn, it's hovering around my AMD range of 2.6-2.8 again. A 2.66-2.7 OC'd Intel is > than a 2.9GHz AMD. I'll just wait again.
dev0lution - Friday, December 15, 2006 - link
Now that both companies claim to offer "platforms", it'd be interesting to see an Intel 965 board vs. an ATI board being used in these benches. Not sure that the NVIDIA models were the best choice here, especially since they're not apples to apples on generation and power consumption.
poohbear - Thursday, December 14, 2006 - link
I'm quite shocked to see the gaming performance advantage @ 1600x1200!!!! Isn't the CPU removed as a performance bottleneck @ such a high resolution? It's all about the GPU horsepower @ that resolution, no? Can't believe a CPU can show a 25fps diff @ 1600x1200?!?!?
JumpingJack - Friday, December 15, 2006 - link
Some more research here is in order. The G80 GPU, commonly known as the 8800 GTX or GTS on the market, has taken graphics performance to a new level. All of the reviews that used an AMD FX-60 or FX-62 CPU clearly showed the card throttled back by the CPU in many if not most cases; only at the highest possible resolutions with AA + FSAA did the scaling turn back on with an FX-60. The X6800 released the full potential of this card.
The difference in framerate you see between the 5000+ and the E6600 is that the E6600 has pushed the bottleneck further ahead -- simply because the E6600 is a better gaming CPU.
Tom's did a good article titled: The 8800 Needs the Fastest CPU.
In essence, even for a single card solution, an AMD CPU is not a good match for this card.
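One rough way to picture the bottleneck shifting; the fps ceilings below are invented for illustration only, not benchmark results:

```python
# Toy model of the CPU/GPU bottleneck: delivered fps is roughly capped by whichever
# limit is lower. All of these ceilings are invented for illustration.
cpu_ceiling = {"X2 5000+": 95, "E6600": 120}                          # hypothetical CPU-bound fps
gpu_ceiling = {"1280x1024": 200, "1600x1200": 140, "2560x1600": 70}   # hypothetical 8800 GTX fps

for res, gpu_fps in gpu_ceiling.items():
    print(res, {cpu: min(cpu_fps, gpu_fps) for cpu, cpu_fps in cpu_ceiling.items()})

# With a GPU this fast the CPU gap still shows at 1600x1200; only at an extreme
# resolution does the GPU cap both CPUs to the same number.
```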
Makaveli - Friday, December 15, 2006 - link
The reason for the 25fps difference must be that the GeForce is more CPU-bottlenecked on the AMD platform than on the Intel one.
You gotta remember the 8800GTX is an insanely fast card, and is still bottlenecked on Conroe systems.
JarredWalton - Friday, December 15, 2006 - link
This is why back at the Core 2 Duo launch we talked about CPU performance using games running at 1280x1024 0xAA. When a faster GPU comes out and shifts the bottleneck to the CPU (as has happened with the 8800 GTX), saying "Athlon X2 and Core 2 Duo are equal when it comes to gaming" is a complete fabrication. It's not unreasonable to think that there will be other games where that ~25% performance difference means you can enable high quality mode. Company of Heroes is another good example of a game that can chug on X2 chips at times, even with high-end GPUs.
XMan - Thursday, December 14, 2006 - link
Do you think you might have gotten a higher overclock if you weren't running HTT at 1125MHz?!?
Sunrise089 - Thursday, December 14, 2006 - link
The chart on page one claims to have pricing info. There is none as far as I can see.
JarredWalton - Friday, December 15, 2006 - link
Fixed. :)
RichUK - Thursday, December 14, 2006 - link
Same old sh!t. It's nice to see AMD finally kicking off their 65nm retail chips, but let's see this new core for God's sake.
They're lacking big time, this really is sad. **Thumbs-Down**
Stereodude - Thursday, December 14, 2006 - link
How's a chip that uses less power run hotter? On the last page the 65nm X2 5000+ hit 51C under load, lower than any other chip, but it uses more power than any chip except the 90nm X2 5000+. How's that work?
Live - Thursday, December 14, 2006 - link
It does not use more power than any other chip except the 90nm X2 5000+. Where did you get that from? Did you read the article?
Stereodude - Thursday, December 14, 2006 - link
Yes, I read the article. Excluding the C2D it uses the second-most power, basically tied with the 65W 4600+.
smitty3268 - Thursday, December 14, 2006 - link
Not all the power that goes into a chip is released as heat. The heat is basically wasted power that "leaks." So if a chip can get more useful work out of the same amount of power then the amount of heat released would decrease even while power consumption remained steady.
I'm not an expert, but I believe a lot of the special new process techniques we always hear about (like strained silicon) basically just reduce the amount of wasted energy. Am I right here?
Stereodude - Thursday, December 14, 2006 - link
Sorry, but that's incorrect. All the power is turned into heat. The power can't be going anywhere else. Power in = power out.
It's not like an LED where you get some energy out as light, or a motor where you get mechanical energy out of it in addition to heat.
finalfan - Thursday, December 14, 2006 - link
If all the power is turned into heat, then it would be the most efficient heater human beings have ever built. And even greater, you get all the computation done for free. Could you believe that?
Stereodude - Thursday, December 14, 2006 - link
Where else is the energy going if it isn't getting turned into heat? You apparently don't have any idea how the transistors in a processor work.
splines - Thursday, December 14, 2006 - link
You apparently don't have any idea about basic thermodynamics.
If the processor released all of its energy as heat, it'd be the world's most efficient space heater.
You have forgotten a few little points, like the fact that work is done by a processor (wouldn't be much point otherwise). Transistors are switched, mostly; however, the IC itself can expand and contract, as well as the packaging material. The heat generated by a CPU is because of the resistance inherent in the circuits. All of the above is considered energy expended (or, more properly, changed in state).
In other words, don't go around insulting people's intelligence when you don't know yourself what you're on about.
Stereodude - Thursday, December 14, 2006 - link
I'm betting only one of us has an Electrical Engineering degree, and guess what... you're not the one with it.
The work being done by the CPU is what makes the heat. The transistors themselves create heat because they consume power, and a lot of it, to switch from one state to another at high speeds.
I will say it again since you still don't get it, though it probably won't help. Energy is conserved. Electrical energy goes in, and heat comes out. The thermal expansion and contraction of the part isn't work; it's a side effect of the heat produced when the transistors consume electrical power by switching.
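A quick back-of-the-envelope along those lines, with the non-heat terms set to generous guesses just to show the scale:

```python
# Rough accounting of where a CPU's electrical power ends up. The non-heat terms
# below are generous guesses, only there to show the scale.
p_in   = 65.0    # W drawn by a 65W-TDP part at full load (nominal)
p_em   = 0.001   # W radiated as electromagnetic leakage (guess)
p_io   = 0.1     # W of signalling driven off-chip over the pins (guess)
p_heat = p_in - p_em - p_io

print(f"{100 * p_heat / p_in:.2f}% of the input power ends up as heat")
# ~99.8% - for cooling purposes, electrical power in == heat out.
```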
slayerized - Friday, December 15, 2006 - link
Thermodynamics 101 - First law of thermodynamics: “Energy can neither be created nor destroyed, it can only be converted from one form to another” (Power --> Heat)
smitty3268 - Friday, December 15, 2006 - link
I think everyone here knows that; the issue is that current -> heat is not the only type of transformation that can occur. If it was, then anything electric wouldn't be able to do anything at all except create heat, and obviously that isn't true.
mino - Friday, December 15, 2006 - link
Read a paper on entropy.
The problem in this discussion is that most participants do not know what heat, electricity, work (i.e., in joules), and work (i.e., the logical one) mean.
Except for electromagnetic waves that leak from a badly shielded PC, and the energy required to transfer information out of the PC (over monitor and network cables), the energy (in the form of electricity) consumed by the CP is completely changed to heat.
In other words, a "focused" form of energy with low entropy (electricity) is distributed to the environment and becomes an "unfocused" form of energy, mostly heat. Also, even the heat dispersed to the room is still partly focused, in the sense that it has not yet spread to the whole universe.
Hope this clears it for some.
mino - Friday, December 15, 2006 - link
CP == PC
smitty3268 - Thursday, December 14, 2006 - link
I don't particularly know how transistors work (only the basics), but if a space heater isn't 100% efficient then why would a CPU be?
Again, I could be wrong, but do you have any 3rd party info to support your claim?
Missing Ghost - Thursday, December 14, 2006 - link
I think space heaters are 100% efficient, except maybe if there is a fan, then it could be 99.999%.
smitty3268 - Thursday, December 14, 2006 - link
If you can see something glowing, then at least part of the energy is producing visible light and not heat. Although now that you mention it, I think I heard that plain old light bulbs are fairly efficient heaters.
smitty3268 - Thursday, December 14, 2006 - link
From http://www.intel.com/technology/silicon/si11031.ht... :
quote: The entire semiconductor industry is struggling with the heat of chips increasing exponentially as the number of transistors increase exponentially. Moving to new high-k materials that control leakage is one step of many towards making transistors run cooler. Because high-k gate dielectrics can be several times thicker, they reduce gate leakage by over 100 times, and therefore devices run cooler.
This implies what I was saying, but perhaps the devices only run cooler because they require less power to begin with?
JarredWalton - Thursday, December 14, 2006 - link
Heat density: less power in a smaller area can potentially run hotter (witness Prescott vs. Northwood P4). Except we're seeing the reverse here, so probably it's just a difference in chip/package design. There's no guarantee that the various chips are measured identically, meaning AMD could have changed temperature reporting with 65nm, and certainly the AMD vs. Intel numbers are not a direct comparison. I would put more weight on power numbers, personally.
eRacer - Wednesday, December 20, 2006 - link
In future Brisbane reviews could you check the Brisbane idle temperature to see if it appears to be somewhat accurate? Other previews and leaks show Brisbane idle temperatures in the 10C-15C range, which is well below room temperature. Idle and load temps of Brisbane may actually be 15C higher than what is reported.
Stereodude - Thursday, December 14, 2006 - link
I was excluding the C2D from my comments. Sorry if that wasn't clear.
If anything the die shrink should make the Brisbane run slightly hotter since the die is a little smaller. I can't come up with any good reason why one 65W processor runs cooler than another 65W processor given the same cooler and same size heat spreader. Maybe the heatspreaders aren't flat between all the AMD CPUs. The 35W AM2 processor definitely should have run cooler.
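Putting rough numbers on the density point, using the die sizes quoted in the next comment; these use TDP as a stand-in for real dissipation, so treat them strictly as relative figures:

```python
# Power density back-of-the-envelope using die sizes quoted elsewhere in this thread.
# Real dissipation is below TDP, so these are relative numbers only.
tdp_w = 65.0
die_90nm_mm2 = 183.0   # 512KBx2 Windsor, 90nm
die_65nm_mm2 = 147.0   # Brisbane, 65nm, per the article

d90 = tdp_w / die_90nm_mm2
d65 = tdp_w / die_65nm_mm2
print(f"90nm: {d90:.2f} W/mm^2, 65nm: {d65:.2f} W/mm^2 ({100 * (d65 / d90 - 1):.0f}% denser)")
# The same 65W spread over a smaller die gives roughly 25% higher power density,
# one reason a shrink at the same TDP can read a bit warmer rather than cooler.
```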
eRacer - Thursday, December 14, 2006 - link
"As you can expect, AMD is pricing the 65nm chips in line with its 90nm offerings to encourage the transition. Die size and TDP have both gone down to 147 mm^2 and 65W across the line."
Is 147 mm^2 accurate? That happens to be the same die size as the 90nm A64 X2 Manchester, and isn't much of a shrink from the current 183 mm^2 512KBx2 Windsor cores. Some rumors had put it at ~125 mm^2.
Tilmitt - Thursday, December 14, 2006 - link
This "Performance per Watt" marketing bullshit has gone too far. Strange how Anandtech only started including it when Intel started marketing it insanely.
To hell with performance per watt, give me performance.
Mantruch - Tuesday, December 19, 2006 - link
Indeed... who gives a sh*t about watt/performance? This is a desktop CPU, not mobile... who sold this article to AMD? lol
Locutus465 - Saturday, December 16, 2006 - link
No... This was one of the major topics of discussion in the A64 vs. P4 days as well, only AMD was enjoying Intel's current position. So Intel has a better chip right now, what's the big deal? This is why we like competition.
Genx87 - Friday, December 15, 2006 - link
We didn't use to care about performance/watt in the consumer sector and left that to the professional arena. But ever since the power draw and heat of consumer boxes started to rival professional-level boxes, it is good to see what you are getting.
Gone are the days of a 250 watt PSU and a system that is passively cooled.
Frumious1 - Thursday, December 14, 2006 - link
Seems to me performance per watt is AMD's only leg to stand on right now (unless you just want to go with pure power draw). If you just look at performance AMD is even farther behind, and that won't change until we see K8L, and maybe even that won't be enough. It's good to see lower temps and power requirements from AMD as it shows they're at least doing something other than sitting around bemoaning their fate. Intel already wins the marketing war, and now they're killing AMD on performance as well.
Would have liked to see more CPUs for the comparison, though - like how about a 65nm 4600+ and 3800+ to see how they match up with the EE/EE SFF chips? Getting an E6300/6400 in as well would have been nice. Last, the use of an 8800 GTX is going to skew the "performance per watt" numbers, but it should still be consistent in this one set of benchmarks. As long as we don't get comparisons using other GPUs thrown in, it's probably the best you can do. Still would be nice to know how much power the CPU is really drawing in all these tests - probably less than 50W would be my bet.
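For what it's worth, the kind of estimate you'd have to make from wall readings looks like this; every number in it is a placeholder assumption, not a measurement from this article:

```python
# Guessing CPU-only draw from a wall-power reading. Every value here is a
# placeholder assumption, not a measurement.
wall_load_w = 230.0   # assumed system draw at the wall under CPU load
psu_eff     = 0.80    # assumed PSU efficiency (wall power -> DC power delivered)
other_w     = 140.0   # assumed GPU + board + drives + fans share of the DC power

cpu_guess_w = wall_load_w * psu_eff - other_w
print(f"Rough CPU-only estimate: {cpu_guess_w:.0f} W")   # ~44 W with these guesses
# The guesses dominate the answer, which is why a measured CPU-only number in the
# review would be far more useful than this kind of estimate.
```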
yyrkoon - Thursday, December 14, 2006 - link
Intel won WHAT marketing war? More like they won a battle, after losing many to their competitor for around two years. Also, unless something has changed recently that I'm unaware of, AMD also leads in memory bandwidth, not that THAT much memory bandwidth translates into real world performance.
I don't know which platform you like, and to be honest, I don't really care. However it would behoove us all to realize that AMD and Intel are both good companies, and if one were to go missing, we ALL would be in a much worse "market".
Frumious1 - Friday, December 15, 2006 - link
Right now, I prefer Intel. Oddly enough, though, I'm still running an AMD Opteron 165 overclocked to 2.6 GHz, because it's fast enough. Better? Nope, but good enough. So I preferred AMD when they were ahead.
My point about marketing is that Intel has always had the lead there - ALWAYS! How many AMD commercials do you see? A few, maybe. Intel floods the airwaves with adverts. Being better at marketing doesn't make your products superior, but if you own the mindshare of the common man it can make it damn hard for others to compete. Hell, with the junk that was Prescott (and the later NetBurst stuff), the fact that most corporations were still buying Intel says something, doesn't it? Yes, AMD made a lot of gains, but with Intel back in the lead those gains are eroding fast.
I for one hope K8L kicks some serious ass. HOPE. I remain unconvinced that AMD can close the gap and then some in the next 6 months. We shall see. Let's just hope it's not more of this Quad FX craziness. If I wanted two dual core AMD CPUs, I think I probably would have bought an Opteron 2xx setup a while back. Unbuffered RAM might improve performance a bit (5% or so?) but not enough to matter. As for octal core systems (dual quad core), don't make me laugh. That's great for the server market, and maybe even high end workstations, but on the desktop we're years away from being able to utilize/need that much CPU power.
Frumious1 - Friday, December 15, 2006 - link
Well, better a double post than to lose all that I typed. It looked like the post didn't go through, I luckily had it copied first, and after three refreshes I posted again. Naturally at that point both posts show up.
yyrkoon - Saturday, December 16, 2006 - link
I don't know what current Intel CPUs' memory bandwidth is, but at the time I checked my AM2 3800+ (single core), the closest thing was a Core 2 Duo at around 8-10GB/s bandwidth. After dropping the multiplier on my AM2 CPU and increasing the "FSB" to 250MHz, memory bandwidth on my 3800+ AM2 was over 11GB/s. I didn't test beforehand - well, maybe I did, but I don't recall what it was, so... I may have posted somewhere on the AnandTech forums. Anyhow, I'm pretty sure it's much more than a 5% difference between the two, but like I said, it does not translate into real world performance, so it's pretty much moot.
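For context, the theoretical side works out roughly like this, assuming dual-channel DDR2 and glossing over K8's exact divider behaviour:

```python
# Theoretical dual-channel DDR2 bandwidth, and why raising the reference clock lifts
# it on AM2. Sketch only - K8's actual memory dividers are messier than a fixed ratio.
def ddr2_dual_channel_gbs(mem_clock_mhz):
    # DDR: 2 transfers per clock, 8 bytes per transfer, 2 channels
    return mem_clock_mhz * 1e6 * 2 * 8 * 2 / 1e9

print(f"DDR2-800 (400MHz):      {ddr2_dual_channel_gbs(400):.1f} GB/s theoretical")  # 12.8
print(f"Overclocked to ~500MHz: {ddr2_dual_channel_gbs(500):.1f} GB/s theoretical")  # 16.0
# Measured figures like 8-11GB/s are a healthy fraction of those peaks, and as noted
# the platform-to-platform gap says little about real-world performance.
```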
smitty3268 - Thursday, December 14, 2006 - link
Yes, I would have appreciated a lower-end Core 2 Duo that is more comparable performance-wise, as well as the 6600 which matches its price.
Basically, it looks like the new process is only a bit better than the old energy efficient chips, but is clocked higher and will be sold cheaper. The important thing for AMD is probably to get their 65nm process ramped up and have all the bugs ironed out for a good K8L launch.
Accord99 - Thursday, December 14, 2006 - link
What's the problem? The Core 2 Duo gives you both.
lollichop - Sunday, February 26, 2017 - link
Wow! Ancient chip fanboys.