55 Comments

  • evanwhat - Tuesday, May 12, 2009 - link

    Please revisit this debate with overclocking factored into the picture.

    Many thanks,
    -Evan
  • Patrick Wolf - Monday, May 4, 2009 - link

    Why even bother with single-core CPUs anymore? Why can't you throw a $50 Celeron E1400 in there? How does it compare to the lower-end X2's and P4's?
  • orenlevy - Saturday, May 2, 2009 - link

    It seems you are benching the wrong hardware! You are far from the market, guys... The whole story changes when comparing low-end mobos, like the G31 or n73 for Intel and the 740V/G or 780G for AMD. They totally change the latency timing, especially in an Intel setup, where there is no on-die controller.
  • agawtrip - Friday, May 1, 2009 - link

    People who buy these CPUs are those on a budget.
    You only have two setups here - budget setups:

    Intel E5200/E5300 + 9300/9400 boards
    AMD X2 7750/7850 + 780G

    Now which is faster and has a lower price?

    The 9300/9400 boards are faster than the 780G but at a higher price, so if you're really on a budget you'll have to go for the AMD setup. If you still have some extra cash, you can put it toward a video card, extra memory, or a tri-core, which will give really nice performance - faster than an OCed E5200/E5300.

  • dingetje - Tuesday, April 28, 2009 - link

    sorry to say this, but leaving overclocking out of the equation means: FAIL
  • bigsnyder - Tuesday, April 28, 2009 - link

    The article keeps referencing the E6400 as a comparison, but I don't see it in the charts from the original Core 2 Duo review. Am I missing something here?
  • v12v12 - Tuesday, April 28, 2009 - link

    I don't wanna toot a moral high-horse, but... I noticed the word "crippled" in the article; why on earth (aside from being cheap or broke and needing a better job) would you waste money on a crippled piece of hardware?

    Let me break this down: So a manufacturer produces a product X with 4 cores or whatever. Now they say "hey we've created this product and want a certain amount of compensation." So, the cost of production, advertising blah blah is all factored into this selling price. 4 cores yadda yadda...

    Now some really smart mofo says "hey we can sell the same exact product from the same fab line, but we'll just 'disable' the other cores and rebadge it as a lower model, even though it costs us little to nothing to do this since it's all the same hardware!!!" Sweet deal for AMD/Intel etc.

    Can you see where this is going? I would never buy something where someone purposefully disabled cores on the chip that could be active and working perfectly, but they've decided to be greedy (not necessarily bad) and disable them unless you pay a higher fee. So all someone (them) has to do is flip a switch or connect some traces and boom, you have the true product as intended. So in essence, it costs them little to nothing to cripple this thing, yet charge a bit less for another "product," which in reality is NOT another product; it's the same dang thing but crippled.

    Smart business practice, yes... good for the consumer's wallet? No. Profits vs. consumers. Oftentimes consumers (cattle) find themselves rationalizing or justifying these shady profit practices, BUT ask yourselves: would they or do they have much concern for your wallet? Yes they do; they are concerned with getting as much money from you as they can, performance or not. 2 active cores or 4... they don't care so long as you're paying as much as they can get. It costs them X to make this CPU, and they want to find a reasonable profit margin vs. cost of production, yet they can still sell you the SAME hardware, just switched off, at a MUCH lower price??? WOW, these guys are deviously intelligent. Can't say the same about many of their customers...

    :-/
  • chowmanga - Tuesday, April 28, 2009 - link

    You're missing the point of disabling cores and selling them at a cheaper price. Silicon yields aren't 100%. A lot of times these CPUs come out with defects, so rather than scrapping the whole thing, they rebadge them and sell them for cheap. They aren't necessarily "active and working perfectly".
  • v12v12 - Tuesday, April 28, 2009 - link

    Now THIS is a "valid" retort... The others couldn't understand my underlying point, but you've laid yours out clearly. I now understand a possible reasoning.

    Thanks for the insight mate.

    ----
    Lastly...The nit-picking over negligible and highly specific instances where the lower power...

    *Pause* lmfao, give me a break; just who or what group (purchasing this CPU) is large enough to justify that case? Nobody gives a crap about 10-40W power diffs. Those types of examples are laughable, as:
    1) Most people who buy this chip are not running file servers.
    2) They don't give a hoot about small power diffs.
    3) They want performance per dollar, period.
    4) Even the cheaper ($20) HSFs of TODAY are more than enough to cool them, even OC'd... Your argument would have flown (briefly) maybe 7 years ago.

  • erple2 - Wednesday, April 29, 2009 - link

    I had thought that these chips were simply the x4's that couldn't pass the full quality tests with 4 cores, so AMD simply disabled the cores that couldn't pass. That might bump up the "yield" of at least sellable chips from 60 odd percent per die to closer to 85%. From AMD's perspective, it's "free" money - they were going to throw out those CPUs anyway, but disabling multiple cores got them to pass.

    Now, people complain all the time about the power-hungry graphics cards that AMD makes. See the arguments in the comments about the 4890 and the GTX 275 regarding the extra 30W the 4890 uses at idle or full tilt over the GTX 275. It therefore does matter to someone. I just question whether it matters as much as they think it does :)
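    The salvage-binning argument above can be sketched with a toy defect model. The per-core defect probability and the independence assumption are purely illustrative, not AMD's actual figures:

```python
# Toy binning model: each of 4 cores fails independently with probability p.
# A die sells as a quad-core if all 4 cores work; disabling one bad core
# lets a 3-good-core die sell as a tri-core instead of being scrapped.
from math import comb

def sellable_fraction(p, cores=4, min_good=3):
    """Fraction of dice with at least `min_good` working cores."""
    return sum(comb(cores, k) * (1 - p) ** k * p ** (cores - k)
               for k in range(min_good, cores + 1))

p = 0.12  # assumed per-core defect probability (illustrative)
quad_only = sellable_fraction(p, min_good=4)  # only perfect dice sellable
with_tri = sellable_fraction(p)               # quads plus tri-core salvage

print(f"quad-only yield: {quad_only:.0%}")
print(f"quad + tri-core yield: {with_tri:.0%}")
```

    With these made-up numbers, tri-core salvage lifts sellable yield from roughly 60% to over 90% - the "free money" effect described above, whatever the real defect rates are.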
  • strikeback03 - Wednesday, April 29, 2009 - link

    Actually, if you read through the comments in a bunch of articles here, you will likely find before too long something by the guy running his house completely off solar/wind, who is always asking for lower-power components. Not to say that's a typical case, but there are people for whom 10-40W does make a difference.

    Also, on the "crippling" hardware point: By your definition, aren't pretty much all processors "crippled," as they do not run at their maximum stable speed by default? Are you only going to buy the QX9770/PhenomII 955/i7 965 class hardware because it is not "crippled"? And if those can be overclocked, doesn't it mean they are "crippled" too?
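    For scale, a 10-40W always-on delta is easy to put a dollar figure on (the electricity price below is an assumed rate, not from the article):

```python
# Annual electricity cost of an always-on power difference.
# The price per kWh is an assumption; substitute your local rate.
def annual_cost(delta_watts, price_per_kwh=0.12, hours_per_year=24 * 365):
    kwh = delta_watts * hours_per_year / 1000.0  # watt-hours -> kWh
    return kwh * price_per_kwh

for watts in (10, 40):
    print(f"{watts} W extra, running 24/7: ${annual_cost(watts):.2f}/year")
```

    At an assumed $0.12/kWh, that's roughly $10 to $42 a year - negligible for one desktop, noticeable for an off-grid house or a rack of machines.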
  • nubie - Tuesday, April 28, 2009 - link

    You seem to be coming from a different corner than me.

    It makes perfect sense from a market standpoint: there is only so much people are willing to spend, and if you don't have a product at that price then they might buy somewhere else.

    I think that the Intel method is great. 2MB of L2 cache is acceptable, although losing Virtualization is regrettable.

    Having a 45nm core with a multiplier of 12.5 or 13 on an 800MHz front-side bus means that cash-strapped enthusiasts will clearly be on Intel's side.

    I don't see anything wrong with the practice, I always enjoy getting more for less.

    Look at all the benefits of the 'crippled' (we prefer the term 'feature-reduced') processors: the same process, same tech, same billions in research and development, and 70% of the performance, if not 80-90% (or higher). In the case of a reduced quad-core you also get lower power consumption and higher potential overclocking, with the lower heat output that goes with it.

    All you have to do is pay a fraction of the price. Sounds fine to me.

    Tinkerers, hackers, overclockers, customizers, hobbyists, call us what you will, but we just don't have the budget of a small country to operate on. These companies are letting us play with the result of Trillions of dollars of research for the cost of a couple tanks of gas or a few meals.

    I would much rather buy a Phenom II X3 Black or a Q6600 and spend $50 on a heatsink and thermal compound than $500 on a 6-core or 8-core processor to run at stock speeds. My performance in the applications and games I actually use will be 80%-150% of the more expensive processor's anyway.

    The balance of the money I can spend on an OCZ Vertex SSD or a SuperTalent SSD and make the computer perceptibly more responsive.

    I just don't see your point.
  • v12v12 - Tuesday, April 28, 2009 - link

    I don't care what the market standpoint is... It's costing me (more). I've been OC'ing for a decade plus. I'm on an OC'd AMD as we speak :-)

    My point is... I don't like the fact that with the flip of a "switch" you COULD have the true, fully active CPU. So the extra "cost" of "on/off" is my issue, not 80% performance for 30% less (est.). Just give us the 4 cores for the cheap price, b/c it's obvious that if they can ship out 4 cores minus 2 with the flick of a switch, then they could probably be selling all of them at that price or a tad bit (say 10-15% max) higher. I'm arguing from the consumer's POV, and so should all of you. Justifying the manufacturer's POV is like...

    The problem with USA's society; the ignorant poor/middle still day dreaming as if they were rich, by voting in and arguing the RICH man's POV, which in essence, keeps them enslaved, lol! Fight YOUR class/side's battle, NOT theirs. Arguing why it makes sense for them is pointless; you're still getting 4 cores minus 2, b/c of a simple "on/off." What justifies (aside from more profit (attempts in AMDs case)) this in your mind?

    I'll pay less for the SAME hardware, but get less. If I find a way to "flip the switch," am I "wrong"? Are they "wrong" for cutting the switch? Yes, no? I don't care about them, I care about ME. AMD has shown it's just as guilty of market/performance hype and the like as Apple, as Intel, and anyone else. Why the exception and excuses for AMD but nobody else? They flat-out deceived and lied about the Phenom when it was first in pre-release. Everyone just turned their heads b/c AMD was wounded and limping. A lie is a lie in this case. Once I discovered this from a myriad of review sites, I lost respect for AMD; though I still support them and want them to pull through, solely b/c it benefits us as consumers. I don't care about Intel vs. AMD... I care about the CHEAPEST + fastest hardware for the money.

    The cost of flipping a switch is the bottom-line of the argument; the rest is filler and entertainment value for this forum :-)
  • JarredWalton - Tuesday, April 28, 2009 - link

    Your perspective would be valid were it not for one thing: AMD isn't likely to be killing off fully functional, 100% working cores on a lot of CPUs. The reason they have tri-core isn't to sell more CPUs at a lower price, it's that the fourth core is defective in some way. The same goes for dual-core, so they're not 100% working, perfect CPUs that have been crippled; they're defective CPUs that are made to work 100% correctly by circumventing some faulty circuitry. Maybe *some* dual-core and tri-core are not faulty, but I wouldn't count on it.
  • just4U - Wednesday, April 29, 2009 - link

    Also, as a further note: by "faulty" it just means it didn't pass their scratch test (or whatever). While some of us might unlock these CPUs and find that they work fairly well with all cores enabled, that doesn't mean they are fully working cores according to AMD's spec.
  • Insomniator - Tuesday, April 28, 2009 - link

    If there was no way to flip the switch, then the cheapest chip would be well over 70 bucks or whatever the given crippled chips cost.

    It's not like Intel/AMD would sell tons more chips if quad cores were 100 bucks (wait, actually AMD already has them at that price, but anyway...); they would just sell the same number of chips, but without the added revenue from thousands of $150+ chips.

    So do you want to be forced to pay 150 for the only chip available, or have the option of spending half that or twice that depending on what you need?

    Also, let's not forget about heat/power consumption... a 5x00 uses less power than a 9x00, which is a big deal for many people and a huge deal for businesses. It would really suck if the only chip available were a Q9650 or a PII 955, even at just $150, if you just wanted to make a file server.

    If say a company like AMD only makes one chip but cripples them to sell at different price points, do you think it makes sense for them to have to make completely different chips with different manufacturing lines to sell the same products? Prices again, would go up overall.

    In conclusion, your complaint doesn't make any sense.
  • balancedthinking - Tuesday, April 28, 2009 - link

    Did I miss something? Are you really using the words "profit" and "greed" in the same sentence as "AMD", regarding an AMD product?

    Regarding the other aspects, I think you should definitely start thinking about not smoking that green stuff anymore.
  • v12v12 - Tuesday, April 28, 2009 - link

    What does personal mud slinging have to do with my points? Please debate them, if you can?
  • just4U - Tuesday, April 28, 2009 - link

    I think overclocks are a non-issue with the 5300. Why would anyone buy it over a 5200, which is already a proven performer? It's not like the 5300 could actually outdo it... Anyway...

    I was faced with a decision on a couple of cheap builds (no overclocking): the 5200 or the 7750. In the end I opted for the 7750 for a few reasons.

    I kinda felt that the chips were comparable overall, but a key deciding factor was the motherboards used. The 780G chipset is just way too tempting at such a low, low price for a budget build that it sort of trumps Intel's CPU/MB considerations. At least in my opinion... am I wrong?


  • TA152H - Tuesday, April 28, 2009 - link

    Anand, you mention the Pentium has a one-cycle-faster L1 cache, but my understanding is they are both three cycles. I know the K7 and K8 were three cycles. Did AMD slow down the L1 cache on the Phenoms?

    A few other things to consider: the L2 cache on the AMD is exclusive, Intel's is inclusive. But the L1 cache of AMD processors is not really 128K, since they pad instructions for easier decoding - but I guess that's just nitpicking.

    I'm really curious about the L1 cache latency though. Can you let us know if the Phenom is now 4 clock cycles? It's an ugly trend we're seeing, with the L1 cache latency still going up, except for the Itanium. Which, as we all know, will replace x86. Hmmmm, I guess Intel missed on that prediction :-P .
  • Anand Lal Shimpi - Tuesday, April 28, 2009 - link

    You know, I didn't even catch that until now. I ran a quick latency test that reported 4 cycles but the original Phenom (and Phenom II) both have a 3 cycle L1. I've got a message in to AMD to see if the benchmark reported incorrectly or if something has changed. I'm guessing it's just a benchmark error but I want to confirm, I've seen stranger things happen :)

    Intel found that the 4 cycle L1 in Nehalem cost them ~2% in performance, but it was necessary to keep increasing clock speeds. I'd pay ~2% :)

    Take care,
    Anand
  • TA152H - Tuesday, April 28, 2009 - link

    I was under the impression the reason Intel increased L1 cache latency was so they could use a lower power technology, and save some power.

    I heard the number was around 3 to 4 percentage loss in performance, but I guess it always depends on workload and who in Intel is saying it.

    But, the whole setup seems strange to me now. Typically, when you go to a 3-level cache hierarchy, you see a smaller, faster L1 cache, not a very slow one like Nehalem has. Especially with such a small, fast L2 cache, and the fact that the L1 is inclusive in it, I'm not clear why they didn't cut the L1 cache in half and lower the latency. You'd cut costs, you'd cut power, and I'm not sure you'd lose any performance with a 32K, three-cycle L1 cache instead of a 64K, four-cycle one, when you have a 10-cycle L2 cache behind it. A non-exclusive L2 cache that is only four times the size of the L1 cache seems like an aberration to me. I wouldn't be surprised if this changed in some way for the next release, but I have no information on it at all.

    But, mathematically, if you could cut the L1 cache to three cycles by going to 32K, you'd get better performance for reads up to the 32K mark by one cycle, and worse by six cycles for anything between 32K and 64K. Typically, you'd expect this to favor the smaller cache, since the likelihood of an access falling outside the 32K but within the 64K is probably less than 1/6 the chance of it falling inside the 32K. Really, we should be halving it, since it's instruction and data, but I think it's still true. On top of this, you'd always have lower power, a smaller die, and less heat. And you wouldn't have that crazy four-to-one L2-to-L1 ratio.

    But, Nehalem has great performance, so obviously Intel knew what they were doing. Maybe they were able to hide the latency well beyond the simple math I used above, or maybe cutting it to three cycles would have been difficult (very hard to imagine, since it works on Penryn with 64K, and the clock speeds aren't so different). I wish I knew :-P .

    Thanks for your response.
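    The back-of-the-envelope tradeoff above can be written as a standard average-memory-access-time (AMAT) model. The hit rates below are assumed round numbers for illustration, not measured figures for either chip:

```python
# AMAT = L1 latency + L1 miss rate * L2 latency, for two hypothetical
# L1 configurations backed by the same 10-cycle L2.
def amat(l1_cycles, l1_hit_rate, l2_cycles=10):
    return l1_cycles + (1 - l1_hit_rate) * l2_cycles

# 64KB L1 at 4 cycles vs. 32KB L1 at 3 cycles: the smaller cache wins
# only if its extra misses cost less than the cycle it saves per hit.
big = amat(4, 0.95)    # assumed 95% hit rate for the 64KB cache
small = amat(3, 0.93)  # assumed 93% hit rate for the 32KB cache

print(f"64KB / 4-cycle L1 AMAT: {big:.2f} cycles")
print(f"32KB / 3-cycle L1 AMAT: {small:.2f} cycles")
```

    With these assumed hit rates the smaller, faster L1 comes out ahead (3.7 vs. 4.5 cycles), matching the 1/6-probability argument; real workloads and out-of-order latency hiding can of course shift the balance either way.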
  • Anand Lal Shimpi - Tuesday, April 28, 2009 - link

    This is what I wrote in my original Nehalem Architecture piece:

    "The L1 cache is the same size as what we have in Penryn, but it’s actually slower (4 cycles vs. 3 cycles). Intel slowed down the L1 cache as it was gating clock speed, especially as the chip grew in size and complexity. Intel estimated a 2 - 3% performance hit due to the higher latency L1 cache in Nehalem."

    I believe Ronak Singhal was the source on that, the chief architect behind Nehalem.

    I suspect the decision to stick with a 64KB L1 (I + D) instead of shrinking it has to do with basic statistics. There's no way the L1 is going to catch all of anything, but the whole idea behind the cache hierarchy is to catch a high enough percentage of data/instructions to limit the number of trips to lower levels of memory.

    It's not impossible to build a 64KB 3-cycle L1, but if Ronak is correct then even a smaller L1 would not negate the need to make it a 4-cycle cache - the L1 was gating clock speed.

    I think Intel found the right L1 size for its chips and the right L2 size. The L3 is up in the air at this point. Ronak said he wanted a larger cache, but definitely no less than 2MB per core (8MB for a quad-core).

    Take care,
    Anand
  • f4phantom2500 - Tuesday, April 28, 2009 - link

    Seriously, most people who come to AnandTech and look at these things are at least aware of overclocking, if not overclockers themselves. Whenever I build desktop rigs for myself I always go for the cheap "low end" chips and overclock them. The E5300 is an excellent example. Honestly, I'd expect it to completely decimate the 7850 in all ways if they were both overclocked, but they should definitely have put that in this review, as there are 2 kinds of people who buy these cheap CPUs:

    1. People who do basic stuff and don't really care anyway
    2. People who overclock

    And people who overclock would most likely constitute the majority of the readers of a comparison between the two chips, because if you didn't care about performance that much, there aren't too many reasons why you'd bother reading this review.
  • sprockkets - Tuesday, April 28, 2009 - link

    For me, it comes down to whether I want a Zotac mini-ITX board based on the 8200 or the 9300 chip. While the AMD one is cheaper, they neglected to put HDMI and SPDIF ports on it, so I have to go for the Intel board.

    Besides, they have a small heatsink/fan combo. I assume the AMD one is bigger, but of course it mounts much easier and better than Intel's setup.
  • leexgx - Sunday, May 3, 2009 - link

    The Intel one makes so little heat it does not need a big heatsink. I'm going to have to start using Intel CPUs for my basic systems soon, as where I get my CPUs from only stocks 2.7GHz AMD X2 CPUs or the heat monster 7750.
  • JimmiG - Tuesday, April 28, 2009 - link

    Interesting to see the X4 9850 at 2.5GHz beating the higher-clocked Phenom-derived Athlon X2 in many of the game tests. If the multithreaded performance of games continues to improve, I think a quad- or triple-core CPU would be more future-proof?

    The Phenom II X3 is a very nice AMD gaming CPU at this time and a tempting "sidegrade" even though I've already got a first-generation Phenom X4.
  • Davelo - Tuesday, April 28, 2009 - link

    ...and then totally ignores its original premise. I'm no fanboy, but I find it very hard to miss the fact that the Intel solution costs almost $150 more when you factor in the added cost of the motherboard.
  • Anand Lal Shimpi - Tuesday, April 28, 2009 - link

    I used the X48 simply to allow for direct comparisons to all of the other CPU test data in Bench - www.anandtech.com/bench. The X48 performs similarly to the P45 and the P35 (and many other similar chipsets if you're not overclocking), so the comparison is still valid.

    Take care,
    Anand
  • lopri - Tuesday, April 28, 2009 - link

    What about the power consumption comparison? Are you penalizing the E5300 with the X48 there? That'd be incredibly stupid and unfair to the E5300.

    And this paragraph makes no sense to me. (literally)
    quote:

    The original Phenom architecture was designed to be used for quad-core processor designs, hence the use of a large shared L3 cache alongside private L2 caches. With only two cores, many of the benefits of this architecture are lost. Intel discovered that the ideal dual-core architecture featured two levels of cache with a large, fast, L2 shared by both cores. AMD and Intel came to the conclusion that the ideal quad-core architecture had private L2 caches (one per core) with a large, shared L3 cache. The Athlon X2 7850 takes the cache hierarchy of the ideal quad-core design and uses it on a dual-core processor. To make matters worse, it does so with an incredibly small L3. Intel found that on its Nehalem processor each core needed a minimum of 2MB of L3 cache for optimal performance. With Phenom II AMD settled on 1.5MB L3 per core. The original Phenom gave each core 512KB of L3, or in the case of a dual-core derivative 1MB of L3 cache. Again, not ideal.


    Interesting review, nevertheless.
  • TA152H - Tuesday, April 28, 2009 - link

    What he was trying to say was, the ideal cache set up for a quad-core is different from a dual-core. The quad-core is best with a relatively small and fast L2 cache, and a significantly larger L3 cache. The dual core is best with a relatively large L2 cache, and no L3 cache. Because AMD's processor is a quad-core stripped down to a dual-core, it has the cache hierarchy of the quad-core, even though it's a dual core now. So, it's not the ideal cache setup.
  • edogawaconan - Tuesday, April 28, 2009 - link

    Also worth noting that not all lower-end Intel processors include VT-x, which (arguably?) helps accelerate virtualization-related tasks.
  • stmok - Wednesday, April 29, 2009 - link

    Actually, Intel VT or AMD-V doesn't do much for performance. (You'll see this with VirtualBox. What it does provide is a more stable development approach to virtualization for the software programmer.)

    The one area where you will see a performance increase is with "nested paging" (a 2nd-generation virtualization feature).

    In that case, the Phenom-based Athlon X2 clearly wins. NONE of the Core 2 series have nested paging, only the Core i7 series... and this Core 2 doesn't have Intel VT either! Nested paging was introduced in all K10-based AMD CPUs. (It's also carried over to K10.5, or Phenom II, processors.)
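    On Linux these features show up as CPU flags (`vmx` for Intel VT-x, `svm` for AMD-V, `ept`/`npt` for the nested-paging generation). A minimal sketch that classifies a `/proc/cpuinfo`-style flags line; the sample flag strings are illustrative, not dumps from the actual chips:

```python
# Classify a /proc/cpuinfo "flags" line by virtualization capability.
# vmx = Intel VT-x, svm = AMD-V; ept (Intel) / npt (AMD) = nested paging.
def virt_features(flags_line):
    flags = set(flags_line.split())
    return {
        "hw_virt": bool(flags & {"vmx", "svm"}),
        "nested_paging": bool(flags & {"ept", "npt"}),
    }

# Hypothetical K10-class AMD part: AMD-V plus nested page tables...
print(virt_features("fpu pse sse2 ht svm npt"))
# ...versus an E5300-class Core 2 reporting neither.
print(virt_features("fpu pse sse2 ht"))
```

    On a real system you would feed in the `flags` line from `/proc/cpuinfo` itself.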
  • nvmarino - Tuesday, April 28, 2009 - link

    Considering the low price and low FSB/high multiplier make the E5300 a perfect candidate for overclocking, I'm surprised you guys didn't do some OC tests. It would be nice to see the impact of the smaller cache at higher clocks.

    My E5300 does 3.7GHz easily with a PoS air cooler.
  • aeternitas - Tuesday, April 28, 2009 - link

    I think this review needs to be augmented with OC capabilities, with the tests redone.

    It's highly unrealistic to test these at stock. The vast majority of people who would care about this review at this price point are getting these CPUs because of the insane bargain when OCed.

    To not test that is really unrealistic. It makes this whole article much less useful than it could have been.
  • nubie - Tuesday, April 28, 2009 - link

    I went for the E5200 for $59 on eBay. Same chip, but with a 12.5 multiplier instead of 13.

    I plan on running it well north of 3GHz on a day-to-day basis, at either a 1066 or 1333 FSB. Even 1066 will get you a solid 3.3GHz, and it should be able to reach that easily on any motherboard.

    You completely forgot to mention that this is the only 800MHz-FSB line on a 45nm process, and thus can be overclocked on any motherboard, including $45 ones, with a simple strap on the FSB pads.

    I think that Intel is the clear winner, hands down, if you are an enthusiast with very little money who is not opposed to overclocking.
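    The clock speeds being thrown around in this thread follow directly from base clock × multiplier; Intel's FSB ratings are quad-pumped, so the base clock is the rating divided by 4. A quick sketch:

```python
# Core clock = FSB base clock * multiplier.
# Intel's "800/1066/1333 MHz" FSB ratings are quad-pumped, so the
# actual base clock driving the multiplier is the rating / 4.
def core_clock_mhz(fsb_rating_mhz, multiplier):
    return fsb_rating_mhz / 4 * multiplier

print(core_clock_mhz(800, 12.5))   # E5200 stock: 2500.0 MHz
print(core_clock_mhz(1066, 12.5))  # ~3.3 GHz on a 1066 strap
print(core_clock_mhz(1333, 12.5))  # ~4.2 GHz on a 1333 strap
```

    This is why a locked 12.5× chip on an 800MHz FSB is such an easy overclock: bumping the bus strap alone moves the whole product's clock range.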
  • just4U - Tuesday, April 28, 2009 - link

    I don't agree about Intel being the clear choice (enthusiast or otherwise).

    You have to factor in the boards being used too. Chances are, most who are looking for a bottom-feeder budget build will be using integrated chipsets. The 780G/V brings so much more to the table over what we currently get from Intel's offerings.

    That was a key sticking point for me, and I think it really makes the choice a hard one unless you're brand loyal, or an overclocker looking for a cheap CPU based around a competent setup.
  • nubie - Wednesday, April 29, 2009 - link

    Depends on what you need. I went for a 650i motherboard with a single PCIe slot for an 8600GTS (the 2GHz RAM actually seems to help with the 128-bit bus). It was $40, and you can get that deal yourself.

    I doubt that for $140 you can beat a 3.3-3.4GHz Core 2 with an 8600GTS.
  • soydeedo - Tuesday, April 28, 2009 - link

    Yeah, that's what I was thinking.
  • Zaitsev - Tuesday, April 28, 2009 - link

    I was hoping to see a few words on OCing as well. I mean, having two cores disabled should yield some more headroom than the quad-core parts, right?
  • Anand Lal Shimpi - Tuesday, April 28, 2009 - link

    I didn't have time to test overclocking for this article but if there's enough demand we can definitely look at how the two compare. The E5300 has a good amount of headroom thanks to its 45nm process, I'd expect the standings to remain the same if not widen in favor of Intel.

    Take care,
    Anand
  • cpeter38 - Wednesday, April 29, 2009 - link

    Please do OC the chip ...

  • crimson117 - Tuesday, April 28, 2009 - link

    OC results on these two budget CPUs would be great - but it'd be best if it were normalized somehow...

    1. Same price-class motherboards, around $100 or less to match the low-cost CPUs
    2. Same exact ram modules
    3. Same heatsink, or limit it to included stock heatsinks

    Then re-run just a few choice benchmarks.

    Would make for a great blog post :)
  • Viditor - Wednesday, April 29, 2009 - link

    While you're at it, you should unlock the other 2 cores as well...

    http://www.atomicmpc.com.au/News/143621,amd-x2s-ar...">http://www.atomicmpc.com.au/News/143621,amd-x2s-ar...

  • johnsonx - Wednesday, April 29, 2009 - link

    I've read dozens of articles and posts claiming that you can unlock the extra core(s) in the new X2's and X3's, and exactly ZERO telling how to actually do it. Is this some sort of urban legend?
  • ssj4Gogeta - Tuesday, April 28, 2009 - link

    Yes, we'd definitely like to see OC results. I'm sure after OC'ing both chips to their max, gaming performance will be significantly better on the Intel part too.
  • just4U - Tuesday, April 28, 2009 - link

    Can't see the 5300 outdoing the 5200, really. At best it might equal it or not be as good. I don't even see why anyone would buy the 5300 for overclocking at all... (unless of course the 5200 is at the end of its line)

    As a guesstimate...
    5300 might get anywhere from 3.8-4.0+
    7850 would get anywhere from 3.1-3.3+

    End of overclock guesstimate review (grin)

    (On the article topic: great review. Mixed bag of results, really, and once you factor in budget chipset boards it clouds the choice even further.)

  • memphist0 - Tuesday, April 28, 2009 - link

    Definitely would like to see some overclocking with a mid-range cooler.
  • Erif - Tuesday, April 28, 2009 - link

    Yes, I'd like to see how the 7850 OCs compared to my 7750.
  • johnsonx - Wednesday, April 29, 2009 - link

    There's unlikely to be any difference at all between a 7850 and a 7750; any differences would be the normal chip-to-chip variability in overclocking. It's not a comparison even worth doing.
