64 Comments
Oyvind - Wednesday, April 15, 2009 - link
Seven to ten posts here are about the fact that today's software doesn't use more than 2 cores efficiently. Well, 2-3 years ago there was close to none. Did Valve, Epic and the others build frameworks for using multiple CPUs before the hardware base was in place? The answer is no. Do most big software houses today put a big effort into scaling over more cores? The answer is yes. Should Intel/AMD wait until the software houses catch up? I don't understand it, but the vocal majority seems to answer this with a yes.

My question: when the big software houses are done with their multi-CPU frameworks, don't you believe they will then scale over n cores? User input, rendering/GPU work, AI at n times today's depth, etc. All real-life architecture is parallel; software is not yet, but hopefully that will change.
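To make that concrete, here is a rough sketch (Python, purely illustrative; the work function and the task count are made up) of the kind of "scale over n cores" framework I mean: instead of hard-coding two threads, you split independent work items across however many cores the machine reports.

import multiprocessing as mp

def update_agent(agent_id):
    # Stand-in for one independent unit of game work (AI, physics, pathfinding...).
    total = 0
    for i in range(100_000):
        total += (agent_id * i) % 7
    return total

if __name__ == "__main__":
    n_cores = mp.cpu_count()                            # 2 today; 4, 6, n tomorrow
    with mp.Pool(processes=n_cores) as pool:
        results = pool.map(update_agent, range(64))     # 64 independent work items
    print(f"updated {len(results)} agents across {n_cores} cores")

The same structure works unchanged whether the machine has 2 cores or 16, which is the whole point.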
----
If life is good and you have insanely much money, you stop developing; you don't need to prioritize, and you slowly sink back into your pillow. Yes, AMD is fighting uphill, but if they manage to survive, nature has proven that fighting uneven odds gives you a medium- to long-term edge (OK, if you survive). Tons of money doesn't save anything. I'm not sure they'll survive, but if they don't, a new company with clever engineers will rise somewhere in the future. Yes, we need competition, and there always will be.
mattigreenbay - Friday, March 6, 2009 - link
This is the end of AMD. Unless this turns out like the P4 (not likely), AMD will have to release their process first or soon after [or better yet, a 16nm ultra-fast processor, and while I'm still dreaming, make it free] and have it perform better (also not likely). Poor AMD. I was going to buy a Phenom II, but Intel seems the way to go, future-wise. AMD will be liquidated, as well as VIA, and Intel will go back to selling way-overpriced processors that perform less than an i386 [Windows 7 certified].

mattigreenbay - Friday, March 6, 2009 - link
But it'll come with a free super fast Intel GPU. (bye bye Nvidia too) :(

arbiter378 - Sunday, November 22, 2009 - link
Intel doesn't make fast GPUs. Even when they tried with that AGP GPU, ATI and Nvidia killed it. They won't let a new player into the graphics market without a fight. Lastly, Intel has been trying to beat AMD for 40-something years, and they're still not even close to beating them. Now that AMD has acquired ATI, they have superior graphics patents.

LeadSled - Friday, February 20, 2009 - link
What is really amazing is the shrink process timetable. It looks like they will meet the timetable for our first quantum-dot processors. It is theorized to occur at the 1.5nm process and by the year 2020.

KeepSix - Saturday, February 14, 2009 - link
I guess I can't blame them for changing sockets all the time, but I'm not sure if I'll be switching any time soon. My Q6600 hasn't gone past 50% usage yet, even when multi-tasking heavily (editing HD video, etc.).

I'd love to build an i7 right now, but I just can't justify it.
Hrel - Thursday, February 12, 2009 - link
"On the mainstream quad-core side, it may not make sense to try to upgrade to 32nm quad-core until Sandy Bridge at the end of 2010. If you buy Lynnfield this year, chances are that you won’t feel a need to upgrade until late 2010/2011."

So if you buy a quad-core, 8-thread 3.0 GHz processor you will "NEED" to upgrade in one year?! What?! It doesn't make sense to upgrade just for the sake of having the latest. Upgrade when your computer can't run the programs you need it to anymore, or when you have the extra money and you'll see at least a 30 percent increase in performance. You should be good for at least 2 years with Lynnfield, and probably 4 or 5.
QChronoD - Thursday, February 12, 2009 - link
He's saying that the people who have no qualms about throwing down a grand on just the processor are going to want to upgrade to the 32nm next year.

However, for the rest of us that don't shit gold, picking up a Lynnfield later this year will tide us over until 2011 fairly happily.
AnnonymousCoward - Thursday, February 12, 2009 - link
> However for the rest of us that don't shit gold, picking up a Lynnfield later this year will tide us over until 2011 fairly happily.

My C2D@3GHz will hold me over to 2011...
MadBoris - Thursday, February 12, 2009 - link
I watch roadmaps from time to time, and I know where AMD has potential.

Simplify the damn roadmap, platforms, chipsets, sockets!

Seriously, I need a spreadsheet and a calculator to keep it all straight.

Glad Anand gave kind of a summary of where and when it makes sense to upgrade, but I just don't have the patience to filter through it all to the point where I get a working knowledge of it.

One thing AMD has been good at in the past, if they continue, is keeping upgrades simple. I don't want a new motherboard and new socket on nearly every CPU upgrade. I'm not sure if mobo makers love it or hate it; obviously they get new sales, but it's kind of nuts.

This alone, knowing I have some future-proofing on the mobo, makes CPU upgrades appealing and easy and something I would take advantage of.

As far as the GPU/CPU goes, it's nothing I will need for years to come. We will have to wait until it permeates the market before it gets used by devs, just like multicore. It will at least take consoles implementing it before game devs start utilizing it, and even then it's liable to take a lot of steps back in performance (it's only hype now)...
tacoburrito - Wednesday, February 11, 2009 - link
I fail to see the purpose of introducing the 6-core Gulftown. Most software can barely take advantage of 4 cores, let alone 6. It seems like Intel just wants to brag that they can cram many cores into a single package, without evidence that 6 cores will improve performance. It's almost like the MHz wars of the 1990s. Instead of spending time on a 6-core chip, why couldn't they just bring out Sandy Bridge earlier?

aeternitas - Friday, February 13, 2009 - link
It couldn't be that the applications that DO have multicore support are professional apps that small and large businesses use to make money, now could it?

Simply because Intel doesn't cater to your browsing and torrent-downloading needs doesn't mean it's not a good idea to get the ball rolling.

Oh, and hmm, let's see, why don't they just go straight to Sandy Bridge? That's a good one. Hmm, maybe it's because they DON'T HAVE TO. AMD is 18 months behind.
7Enigma - Thursday, February 12, 2009 - link
Check back a couple pages, I think we posted exactly the same thing, as I completely agree with you. :)

The only thing I can think of is that since the server market pays the bills, in a sense they are tailoring the chip for that purpose and just making a consumer-level chip that will still be tops, but probably not as nice in most instances as a faster quad.
pattycake0147 - Wednesday, February 11, 2009 - link
On the first page the article talks about two Arizona fabs, but the picture indicates that there is one Arizona fab and one New Mexico fab. So which is correct?

scruffypup - Wednesday, February 11, 2009 - link
If I remember right (living here in the Phoenix area), there are 3 buildings in Chandler at the site... 2 of them will be converted over to the 32nm process; the 3rd building is apparently no longer going to be used, or will be used for something else...

http://www.intel.com/community/arizona/index.htm
http://www.azcentral.com/arizonarepublic/business/...
"Specifically, Intel is upgrading two of its three manufacturing plants, called "fabs," at its Ocotillo campus in Chandler to make 32-nanometer-size chips."
INDVote - Thursday, February 12, 2009 - link
No, only one fab in AZ is being converted, Intel's newest Fab 32. They are not closing either of the other buildings; they appear to be "merging" two of them.

Fab 11X is in New Mexico, and will be converted over. There are no plant closings in NM.
D1D and D1C are both at the same location in Oregon and are being converted.
pattycake0147 - Wednesday, February 11, 2009 - link
However, the three buildings combined are called Fab 32. Fab 11X, as chophshiy pointed out, is in New Mexico. The article isn't referring to individual buildings, but to sets of buildings.

Thanks chophshiy, I didn't see your post earlier.
vlado08 - Wednesday, February 11, 2009 - link
I am wondering about the integrated graphics in Clarkdale/Arrandale: will it be DirectX 11 compliant? Is it going to be better than the GMA X4500? What about H.264 acceleration, 8-channel LPCM support and working 24p?

chophshiy - Wednesday, February 11, 2009 - link
11X is in New Mexico, as the caption on the pic says. Specifically Rio Rancho, NM, near Albuquerque. It's OK, you'd be surprised how many times I've spoken with someone in the US on the phone who told me I was calling the wrong number, since they don't support locations outside the US. Go American education!

philosofool - Wednesday, February 11, 2009 - link
Does anyone know what the lifespan of LGA 1156 will be? Is Intel expected to change sockets again when we reach Sandy Bridge? Is there any chance that I will be able to get one motherboard to last me several years?

strikeback03 - Wednesday, February 11, 2009 - link
Might depend on who you buy the motherboard from. My motherboard is a P965 and is not Penryn compatible, though other P965 boards are. There might be both hardware (say, power delivery) and software (BIOS) considerations for future-generation processors.

haukionkannel - Wednesday, February 11, 2009 - link
Very little chance. Intel changes their sockets when they do a new-architecture processor. The only reason they wouldn't would be if AMD were so competitive that there was a real price war... By making a new socket they can make more money!

Nfarce - Wednesday, February 11, 2009 - link
"Now that isn’t to say that the six-core 32nm Gulftown will work in existing X58 motherboards; while that would be nice, Intel does have a habit of forcing motherboard upgrades, we’ll have to wait and see." Unfortunately, my trusty nearly three year old E6600/ASUS P5W croaked and I need a new build *now* (my PS3 is no real sub for PC gaming :p ). I was going to just go cheap and build an E8500/P45 rig, but after reading this, I'm debating whether I should just go ahead and throw down the extra several hundred on an i7 build for future upgrade insurance. I'm leaning more towards the latter.CSMR - Wednesday, February 11, 2009 - link
Great article; nice work putting it together so fast!

weevil - Wednesday, February 11, 2009 - link
My question is this. I've got a QX9650 at 3.2GHz on an X38 ASUS P5E3 Deluxe. Is it worth upgrading anytime this year to the i7, or am I fast enough to hold out until the Quad Core Gulftown rolls around in early 2010?

Decisions, decisions...
ssj4Gogeta - Wednesday, February 11, 2009 - link
Gulftown is 6 cores. :)

weevil - Wednesday, February 11, 2009 - link
Yikes!

Yummy ;)
dickeywang - Wednesday, February 11, 2009 - link
I guess I'll just keep my Thinkpad T61p (Merom T7300) for another 10 months. Thanks AnandTech.

ssj4Gogeta - Wednesday, February 11, 2009 - link
I think this is a very good move.

Want the highest end? Go for i7 now and upgrade to the Gulftown hexa-core next year.

Want a mainstream quad? Buy Lynnfield at the end of this year and upgrade to Sandy Bridge at the end of next year.

Satisfied with your E8x00 or another dual-core, and think quad-core is a waste of money? Go for Clarkdale at the end of this year.

Want to buy a notebook? The 32nm Arrandale will deliver excellent performance with great power savings and an on-package graphics processor for even more power savings.

Want to buy a powerful quad-core notebook? Go for the Nehalem-based 45nm Clarksfield, which should deliver quite a lot more performance than current mobile CPUs, with Nehalem's power-saving features as well, but not as much power savings as Arrandale.
7Enigma - Wednesday, February 11, 2009 - link
I'm a bit disappointed that the next top-of-the-line chip will be 6-core instead of a pumped quad. We are still in multi-core infancy, with very few programs taking advantage of anything over dual-core, and almost nothing taking FULL advantage of quad-core. I just don't see how a 6-core will be more beneficial than a higher-clocked 4-core...

As it stands, however, if the power efficiency is legit, my next computer may very well be a laptop.
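Just to put rough numbers on that (my own back-of-the-envelope using Amdahl's law, and assuming a program that is only about 50% parallelizable, which is generous for a lot of desktop software), a modest clock bump on four cores comes out ahead of two extra cores:

def speedup(parallel_fraction, cores, clock_scale=1.0):
    # Amdahl's law: the serial part doesn't scale with cores; everything scales with clock.
    return clock_scale / ((1 - parallel_fraction) + parallel_fraction / cores)

p = 0.5                                                            # assumed parallel fraction
print(f"4 cores, 10% higher clock: {speedup(p, 4, 1.10):.2f}x")    # ~1.76x
print(f"6 cores, same clock:       {speedup(p, 6, 1.00):.2f}x")    # ~1.71x

Flip the assumption to a mostly-parallel workload (video encoding, rendering) and the six cores win, which is presumably who Intel is aiming Gulftown at.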
Jovec - Wednesday, February 11, 2009 - link
Take a look at your Programs menu and tell me which apps today that are not multithreaded would receive serious benefit from being multithreaded. Besides gaming? Single-threaded apps do receive benefits from multiple cores in typical usage scenarios, because they can run on a (semi-)dedicated core and not interfere with other apps.
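Here's a rough way to see it (a Python sketch with a made-up busy loop, assuming the OS schedules the two processes onto separate cores): two single-threaded, CPU-bound jobs run together take about as long as one job run alone, instead of twice as long.

import multiprocessing as mp
import time

def busy_work(_):
    # Stand-in for a single-threaded, CPU-bound app.
    total = 0
    for i in range(10_000_000):
        total += i % 3
    return total

if __name__ == "__main__":
    start = time.time()
    busy_work(0)
    alone = time.time() - start

    start = time.time()
    with mp.Pool(processes=2) as pool:       # two separate processes -> two cores
        pool.map(busy_work, [0, 1])
    together = time.time() - start

    print(f"one job alone: {alone:.2f}s, two jobs together: {together:.2f}s")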
Interesting thought. I'm hoping that with the mainstreaming of dual-core, multi-threaded apps become more common and that the single-to-dual jump turns out to be the biggest leap. But it's really just a hope on my part; I don't know if it will happen.

Isn't there a multitasking advantage with 4-core machines? Also, once we start ripping 720p and 1080p files, 6 cores is gonna be hot.
7Enigma - Thursday, February 12, 2009 - link
There are definite multitasking advantages with quad-core if you are heavily multitasking (I'd argue tri-core is probably used more effectively currently than that final 4th core). Single to dual, however, was a much greater difference for multitasking on the whole.

I just don't see the quad-to-hex jump being more beneficial than a quad-to-juiced-quad jump in this case.
strikeback03 - Wednesday, February 11, 2009 - link
Yeah, can't say I'm real happy about the lack of a 32nm quad-core for 1366. If my motherboard supported Penryn, I'd probably just buy one of those cheap, get an SSD, and wait for Sandy Bridge. Since it doesn't, the decision is more difficult. Probably depends how much business I get this year.

Pakman333 - Wednesday, February 11, 2009 - link
DailyTech says Lynnfield will come in Q3? Hopefully it will have ECC support.

iwodo - Wednesday, February 11, 2009 - link
SSE 4.2 doesn't bring much useful performance to consumers.

There is no dual-core Westmere or Nehalem. Not without Intel's Sh*ttiest Graphics On Earth.

No wonder Unreal devs and Valve are complaining that Intel GFX is basically toxic...

And I can't understand why Anand is excited. MacBook with Intel graphics all over again?

And just before anyone says Intel GFX will improve: please refer to history. From the G965 to their X series, it's all been marketing BS... and they never delivered what they promised.
ssj4Gogeta - Wednesday, February 11, 2009 - link
No one's forcing you to use the G45. You can still use discrete GFX cards.

Daemyion - Wednesday, February 11, 2009 - link
Actually, they fully delivered on the marketing. It's just that when Nvidia/ATI delivered products in the same space, Intel's product looked rubbish. There is nothing wrong with the G45 other than it not being a 9400 or a 790GX.

Spoelie - Wednesday, February 11, 2009 - link
Wasn't Yonah the first processor out at the 65nm node? If so, Intel did perform the same stunt earlier; only at 45nm did they not release a laptop version first.

IntelUser2000 - Wednesday, February 11, 2009 - link
No, the Pentium 955XE, based on the Pentium D, was.

Calin - Wednesday, February 11, 2009 - link
[quote]but at 45nm Intel’s switched from a SiO2 gate dielectric to a high-k one using Halfnium[/quote]

It's Hafnium (even if halfnium sounds better).
aapocketz - Wednesday, February 11, 2009 - link
Hafnium is great for dielectrics. I hope their yields are good, since it's very expensive. Most CVD processes are only efficient in the single-digit percentages.

MrPoletski - Wednesday, February 11, 2009 - link
Doesn't sound as good as unobtanium.

melgross - Wednesday, February 11, 2009 - link
I just don't see how AMD competes, long term. With Intel moving to 32nm faster than expected, and with mainstream parts, that would put them 18 months ahead of AMD, unless, somehow, they manage to pull off a similar coup. But it doesn't look as though they will be able to.

We might remember that a bit over a year ago, AMD stated quite boldly that they would move to within 6 months of Intel's process changes, but they are still a year behind. No progress there. Unless they can manage to switch around their roadmap the way Intel seems to be able to do, they will fall further behind.
LordanSS - Wednesday, February 11, 2009 - link
I think we should wait and see how things will turn out. Now that AMD has spun off their fabs to a separate company, it's no longer their (AMD's) job to invest in new manufacturing processes.

Hopefully, now that the Foundry company has more "freedom", and an injection of capital from sources outside of AMD, it'll be able to increase the pace of its process shrinks.

Besides all that, doesn't AMD's graphics division make use of TSMC's fabs to make their chips?
PrinceGaz - Wednesday, February 11, 2009 - link
TSMC's fabs will always be a generation or so behind the likes of Intel's own, just as AMD (with IBM's assistance) were ahead of them in the past.

I can't see AMD's fab company getting much outside investment in the current economic climate -- new state-of-the-art fab facilities are too expensive and there is no guarantee of profitable contracts to keep them busy. The Foundry Company is never going to catch up with Intel unless a miracle happens, and TSMC etc. will likely be direct competitors.

Intel are speeding up their fab and process development because they have money in the bank and continued profits to fuel it. AMD are in dire straits financially and making a loss. Even with the risks hedge-fund managers take, they'd be mad to put money into AMD just now.
Triple Omega - Sunday, February 15, 2009 - link
I wouldn't count AMD out just yet if I were you. One false move from Intel and an unexpected innovation from AMD, and they're back on their feet. If in Q4 2007 you had said ATI would level the playing field with Nvidia the following year, most would have called you crazy, yet it still happened. So I still have hopes for AMD.

ucsdmike - Wednesday, February 11, 2009 - link
AMD's staff will be hitting the bar tomorrow.

This is amazing news from Intel. It is an exciting move.

Looking forward to cooler and longer-running laptops in the near future.
icecold101 - Monday, August 24, 2009 - link
AMD still has one thing that Intel doesn't have... low prices. These new cores will cost more than $1000! In a slumping economy it isn't the best time to ask for top dollar.

Ryun - Thursday, February 12, 2009 - link
More reason to work extra hard, maybe?

In all seriousness, I have a feeling AMD might pull a rabbit out of its hat with their new architecture, like ATI did with the 4 series. Actually, technically they did with the Phenom II, but really it was just too late in the game to make the significant dent that ATI's 4 series did (though I'd say the triple cores this round are a big win).

At any rate, 2011 (Bulldozer, or whatever they're calling it now) better be huge. The 65nm X2s were somewhat competitive with Conroe, but after that it just started going downhill. If Bulldozer doesn't do it, I don't think AMD is going to be able to get back up. =(
blyndy - Wednesday, February 11, 2009 - link
Let me see if I've got this straight: in 2H'09 (I would actually bet Q3'09) we will finally see the Core i5 quad-cores (Lynnfield/Clarksfield) on a new LGA-1156 socket, which should have been released in Dec'08.

So the 45nm Core i5 quads will be the highest-performing CPUs available for LGA-1156, positioned above the 32nm Clarkdale/Arrandale dual-cores (the 'Core i5 Duo', maybe?) which arrive in Q4'09.

How do they intend to make the LGA-1366 platform have better overclockability? The i7 and i5 are almost the same. Are they going to actively prevent OC'ing on the i5? That would be ridiculous.

Somehow I don't think that the artificial socket segmentation will have a significant number of enthusiasts herded into LGA-1366 to get the higher-margin cash cow that Intel has planned it to be.
Triple Omega - Sunday, February 15, 2009 - link
Intel isn't going to artificially limit overclocking directly, but it is doing so indirectly by redirecting the better chips to 1366. So the i7 CPUs will be cherry-picked versions of the i5s and thus will overclock better. Besides that, the only socket with Extreme versions will be 1366. (Though that is a niche within a niche, really.)

philosofool - Wednesday, February 11, 2009 - link
Overclockers are a very small fraction of the market. I'm not even sure Intel is thinking about overclockability when they engineer chips. Overclockability is more an artefact of good engineering than a design goal from the outset. Overclockers are always paranoid that Intel or AMD is out to get them by intentionally crippling chips. There just aren't enough of us for Intel to be concerned. We're like 1% of the total CPU market.

Pretty much every chip that Intel has released at any price point since the introduction of Core 2 has been wonderfully overclockable. I wouldn't worry that Intel is going to change that soon, especially since Core i5 is basically just a mainstream processor with the same design fundamentals as the excellent i7.
JonnyDough - Wednesday, February 11, 2009 - link
Although I understand it's a hobby, I don't care if people can overclock or not. As long as we have fast chips at a good price and they're faster than what we have... I mean, why would you care? Isn't it all about SPEED?

ssj4Gogeta - Wednesday, February 11, 2009 - link
i5 probably won't have an extreme version.

Bezado11 - Wednesday, February 11, 2009 - link
I wouldn't be surprised if the opposite were true. I'm really sick of all the hype that shrinking creates less heat. Look at the GPU industry: ever since they started shrinking, things got hotter and hotter. And now it seems with the i7, even though it's not a die shrink and we are used to 45nm by now, the new hardware to support minor changes in the architecture of the CPU seems to make things run hotter.

The i7 is way too hot. The newest GPUs run way too hot.
Lightnix - Thursday, February 12, 2009 - link
But you're making an unfair comparison - the latest GPUs have always been produced on the newest nodes. Now, if we take, for example, a Radeon 3870 vs. a Radeon 2900 XT, the former draws far less power and will overclock better on air, almost directly as a result of the shrink from an 80nm to a 55nm process, despite the two performing exactly the same. Another example is the Core 2 E8000 series vs. the E6000 series. Despite the increase in cache size, the E8000 dissipates little enough heat that it ships with a very tiny heatsink compared to the earlier 65nm cores, and objectively it draws much less power at the same clock speed because it runs at lower volts.

You can see this sort of thing again and again throughout the technology industry: Coppermine (180nm) -> Tualatin (130nm), GeForce 7800 -> 7900, G80 -> G92, etc., etc.

If you were to compare, say, a GTX 280 to an 8800 GTX and say the former draws much more power than the 8800 GTX, AND it's produced on a smaller process - well, yes, but that's because they've clocked it higher and there are far more transistors (twice as many, in fact).
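To put some made-up, purely illustrative numbers behind that, using the usual dynamic-power approximation P ~ C * V^2 * f and ignoring leakage: a straight shrink of the same design roughly halves power, but doubling the transistor count and raising the clock on the new node more than eats the saving back.

def rel_power(cap, volts, clock):
    # Dynamic power only: P ~ C * V^2 * f (leakage ignored; all numbers are illustrative).
    return cap * volts**2 * clock

old_chip        = rel_power(cap=1.0, volts=1.30, clock=1.00)   # baseline design, older node
straight_shrink = rel_power(cap=0.7, volts=1.10, clock=1.00)   # same design, smaller node
bigger_chip     = rel_power(cap=1.4, volts=1.10, clock=1.25)   # ~2x transistors, higher clock

print(f"straight shrink:           {straight_shrink / old_chip:.2f}x the power")   # ~0.50x
print(f"2x transistors, more MHz:  {bigger_chip / old_chip:.2f}x the power")       # ~1.25x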
Mr Perfect - Thursday, February 12, 2009 - link
That's because every time they shrink the chips they pack in new features and push the clock speed to the bleeding edge. If all they did was die-shrink the old tech, we'd all be running something like an Atom CPU right now. Atoms closely resemble Pentium 3s, but on modern manufacturing they only draw, what, 5 watts?

V3ctorPT - Wednesday, February 11, 2009 - link
The GPUs run hotter because with each new shrink they pack in double the transistors of the previous hardware... Shrinking the manufacturing process means we can fit many more transistors in the same space; of course it gets hot...

JonnyDough - Wednesday, February 11, 2009 - link
I think you need to re-read the manufacturing roadmap page. It details the leakage gain (heat).

Targon - Wednesday, February 11, 2009 - link
For the CPU market, the problem is the ever-growing amount of cache memory. Intel processors are designed with large caches as their answer to the improvements that AMD brings to the table.

I suspect that Intel will have more trouble after this move to the new fab process, because the difficulty of moving to a new process node grows at an exponential rate. We saw Intel hit a wall with the Pentium 3 line because they were not ready for a new process shrink at that point, so the P4 came out. When Intel got their process technology on track, the people at Intel could go back to the Pentium 3 design (with improvements) to release the Core and Core 2 Duo.

There will come a time when an all-new design will be needed in order to hold on to their lead, and that is when AMD will probably catch back up, if AMD can survive until then.
BSMonitor - Thursday, February 12, 2009 - link
What an utter load of BS. Thanks, fanboy.

You get all that from wiki?
PrinceGaz - Wednesday, February 11, 2009 - link
Even though my last three CPUs were all from AMD (they made sense at the time: K6-III/400, Athlon XP 1700+, Athlon 64 X2 4400+), I have to disagree with your comment about the improvements (presumably the integrated memory controller) which AMD brings to the table.

With Core i7, Intel has effectively removed the one last technological advantage AMD had: faster memory access. The fact that Intel chips still tend to have larger L3 caches is quite simply because they can afford to give it to them, as they are ahead of AMD on the fab process. For a high-end desktop chip where there is die space to spare, you could add some more cores which will probably sit idle (keeping four busy is hard enough, especially with HT), but adding more L3 cache (so long as its latency is not adversely affected) is a very cheap and easy way to use up the space and provide a bit of a speedup in almost everything.

AMD is currently fighting a losing game. The Phenom II (a bug-fixed Phenom) cannot compete with Core i7 on AMD's current fabs, and unlike Intel, who have the steady tick-tock cadence of new process, then new design, with large teams working on each step, AMD seem to have one team working on a new design, which has to be made to work with whichever process looks like the best option at the time.

We need AMD to survive for the x86 (or x64, who came up with that :p ) CPU market to be competitive, but I think the head of AMD is going to have to get into bed with the head of IBM, else they are doomed to fall ever further behind Intel in chip design. The K10 is promising, but a long way off still, and AMD hasn't exactly been raking in billions of dollars of profits recently to fund that R&D. VIA have found an x86 CPU niche they can compete in; I fear that unless AMD pull an elephant out of the hat with the K10, they'll have to slot in between VIA and Intel, providing CPUs specialising in a particular performance sector, with Intel being the undisputed leader.
JonnyDough - Wednesday, February 11, 2009 - link
Well said. I concur.