
AMD Designing All-New CPU Cores For ARMv8, X86

samzenpus posted about 4 months ago | from the brand-new dept.


crookedvulture (1866146) writes "AMD just revealed that it has two all-new CPU cores in the works. One will be compatible with the 64-bit ARMv8 instruction set, while the other is meant as an x86 replacement for the Bulldozer architecture and its descendants. Both cores have been designed from the ground up by a team led by Jim Keller, the lead architect behind AMD's K8 architecture. Keller worked at Apple on the A4 and A4 before returning to AMD in 2012. The first chips based on the new AMD cores are due in 2016."


181 comments


first cpu (-1)

Anonymous Coward | about 4 months ago | (#46921565)

pissing out frost since 1991.

Keller worked at Apple on the A4 and A4 (5, Funny)

nitehawk214 (222219) | about 4 months ago | (#46921579)

Probably worked on the A4 and A4 and the A4, as well.

Re:Keller worked at Apple on the A4 and A4 (1)

unixisc (2429386) | about 4 months ago | (#46921613)

You forgot to mention the A4

Re:Keller worked at Apple on the A4 and A4 (1)

Megane (129182) | about 4 months ago | (#46921619)

But did he work on the A4? [wikipedia.org]

Re:Keller worked at Apple on the A4 and A4 (1)

jonyen (2633919) | about 4 months ago | (#46921633)

Unless it's owned by Apple, probably not.

Re:Keller worked at Apple on the A4 and A4 (1)

Timothy Hartman (2905293) | about 4 months ago | (#46921741)

Pretty sure Apple needs to lob a sueball at these guys.

Re:Keller worked at Apple on the A4 and A4 (0)

Anonymous Coward | about 4 months ago | (#46921795)

Genius! Apple could sue itself for infringing on its own trademark, both win & lose (without admitting any wrongdoing of course) and keep itself in the headlines for weeks.

Re:Keller worked at Apple on the A4 and A4 (2)

Desler (1608317) | about 4 months ago | (#46921635)

Are you sure? I heard he worked on the A4 not the A4.

Re:Keller worked at Apple on the A4 and A4 (1)

bill_mcgonigle (4333) | about 4 months ago | (#46921893)

I heard he worked on the A4 not the A4.

Classic Apple disinfo machine. :)

Re:Keller worked at Apple on the A4 and A4 (1)

ttyX (1546893) | about 4 months ago | (#46921697)

So the way I understand it, up until now all the processors he has worked on were named A4, and for the first time he's working on a CPU which isn't named A4?

Re:Keller worked at Apple on the A4 and A4 (5, Funny)

flyingfsck (986395) | about 4 months ago | (#46921787)

No, no, that is obviously a typo. T'was the A4, Letter and Legal.

Re:Keller worked at Apple on the A4 and A4 (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46921791)

Keller worked at Apple on the A4 and A4

Either they meant that it was a dual-core CPU, or that Apple was churning them out like M&M's.

Re:Keller worked at Apple on the A4 and A4 (1)

bill_mcgonigle (4333) | about 4 months ago | (#46921871)

Either they meant that it was a dual-core CPU

Somebody has been spending too much time staring at /proc/cpuinfo !

Re:Keller worked at Apple on the A4 and A4 (1)

nigelo (30096) | about 4 months ago | (#46922251)

Ah! The M4? It's close to the A4: http://en.wikipedia.org/wiki/M... [wikipedia.org]

Re:Keller worked at Apple on the A4 and A4 (4, Informative)

GodfatherofSoul (174979) | about 4 months ago | (#46921793)

You mean the A4 [wikipedia.org] on the A4 [audiusa.com] on the A4 [wikipedia.org] ?

Re:Keller worked at Apple on the A4 and A4 (1)

jfdavis668 (1414919) | about 4 months ago | (#46921975)

and, of course "Imagine a Beowulf Cluster of these!"

Couldn't one core... (1)

unixisc (2429386) | about 4 months ago | (#46921637)

... be common, and use something like code morphing - which Transmeta used - to come up with a solution that would work with both x64 and ARM64, thereby avoiding inventory mix issues during production?
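For the curious, here is a tiny, purely illustrative sketch in C of the translate-once-and-cache idea behind code morphing. The three-instruction guest "ISA", the threaded-code dispatch, and all the names are invented for illustration; Transmeta's actual CMS translated real x86 into its own VLIW and did far more (scheduling, speculation, retranslation).

<ecode>
#include <stdio.h>

enum { OP_INC, OP_DOUBLE, OP_HALT };

typedef void (*host_op)(int *acc);          /* "translated" host operations */

static void host_inc(int *acc)    { (*acc)++; }
static void host_double(int *acc) { *acc *= 2; }

/* "Translate" a guest block once into a cached array of host operations. */
static int translate(const int *guest, host_op *cache, int max)
{
    int n = 0;
    while (n < max && guest[n] != OP_HALT) {
        cache[n] = (guest[n] == OP_INC) ? host_inc : host_double;
        n++;
    }
    return n;   /* number of translated ops */
}

int main(void)
{
    const int guest[] = { OP_INC, OP_INC, OP_DOUBLE, OP_HALT };
    host_op cache[16];
    int n = translate(guest, cache, 16);    /* pay the translation cost once */

    int acc = 0;
    for (int run = 0; run < 3; run++)       /* ...then re-execute the cached block */
        for (int i = 0; i < n; i++)
            cache[i](&acc);

    printf("acc = %d\n", acc);              /* prints acc = 28 */
    return 0;
}
</ecode>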

Re:Couldn't one core... (1)

Anonymous Coward | about 4 months ago | (#46921669)

Put simply, no.

These things are so heavily optimised, you can't have that kind of thing getting in the way of making detailed performance tweaks.

Re:Couldn't one core... (2)

K. S. Kyosuke (729550) | about 4 months ago | (#46921825)

Except that when you do this, you have the opportunity to effectively turn a hardware interpreter into a software compiler, reducing control logic (and its constant switching during code execution) and improving efficiency in the same way in which software compilers are better than software interpreters, even if the gap won't be nearly that wide. You can turn the same hardware interpreter into a hardware compiler, but then you have something like a trace cache and the logic has actually increased. Would the SW solution decrease performance per thread? Quite likely. Would it improve performance per watt, which is what will really matter in the future? Well, what if it will?

Re:Couldn't one core... (1)

farble1670 (803356) | about 4 months ago | (#46923013)

Except that when you do this, you have the opportunity to effectively turn a hardware interpreter into a software compiler, reducing control logic (and its constant switching during code execution) and improving efficiency in the same way in which software compilers are better than software interpreters, even if the gap won't be nearly that wide. You can turn the same hardware interpreter into a hardware compiler, but then you have something like a trace cache and the logic has actually increased.

^^^^ that doesn't support this:

Would the SW solution decrease performance per thread? Quite likely. Would it improve performance per watt, which is what will really matter in the future? Well, what if it will?

Right, because that worked so well (1)

Sycraft-fu (314770) | about 4 months ago | (#46921705)

How's Transmeta doing these days? Oh that's right they are defunct.

That kind of thing doesn't work well for performance.

Re:Right, because that worked so well (2)

fuzzyfuzzyfungus (1223518) | about 4 months ago | (#46921763)

They were never fast; but they were pretty much the only game in town if you wanted x86 within tight thermal constraints, for a time after they launched. VIA was similarly tepid and a bit hotter and Intel was pretending that a "Pentium 4 Mobile" was something other than a contradiction in terms.

Now, once Intel stopped pretending that Netburst was something other than a failure, and put some actual effort into lower power designs, it was Game Over; but they didn't do that overnight.

Re:Right, because that worked so well (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46921869)

Intel has only shown you what's possible with a large number of advanced low-power transistors. That's still just one design (of the many possible ones) that uses this level of logic integration. Does that mean that it's impossible to do anything better with the same large number of advanced low-power transistors? Do you have any reason to believe that the Transmeta approach (that actually worked better back then) wouldn't work better now for some reason?

Re:Right, because that worked so well (5, Interesting)

amorsen (7485) | about 4 months ago | (#46922037)

Transmeta was at the end of the era where decoding performance mattered. Keeping the translated code around was actually useful. These days decoding is approximately free on any CPU with half-decent performance -- the amount of extra die space for a complex decoder is not worth worrying about.

You can save a bit of power with a simpler decode stage, but you are unlikely to beat ARM Thumb-2 on power by software-translating x86 the way Transmeta did. Besides, most of the interesting code for low power applications is ARM or MIPS already, so what is the point?

Re:Right, because that worked so well (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46922207)

These days decoding is approximately free on any CPU with half-decent performance

In what way? And what do you mean by "decoding"? Do you also include dependency solving, interlocking, reordering etc.? Because what I was thinking about was pushing even more to the SW component. The problem is, CPUs have been widening for quite some time because of our over-reliance on single-threaded SW. But even if it doesn't work nearly as well for eight-issue monsters, given that simple cores like Jaguar, which seem to be practicable if you have many more of them, push you back into the time of "quarter-decent" performance, why couldn't this approach be useful once more? (Sorry for playing a contrarian here, but I'm genuinely puzzled about this.)

Re:Right, because that worked so well (4, Interesting)

amorsen (7485) | about 4 months ago | (#46922387)

You cannot meaningfully do reordering and so on in software on a modern CPU. You do not know in advance which operands will be available from memory at which time. You have to redo that work every time you get to the code (unless it is in a tight loop, but modern x86's are REALLY good at tight loops) because circumstances will likely have changed -- and you cannot reorder in software every time, that is just too costly.

If you want to see an architecture which looks like it has a chance of breaking the limits on single-threaded performance, look at the Mill [millcomputing.com] . In theory you could software-translate x86 to Mill code and gain performance, but it would be really tricky and no Mill implementations exist yet.

Re:Right, because that worked so well (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46922557)

I guess you're right on the reorderings; there are unpredictable aspects to the execution trace. But then again, there's the engineering maxim that every extra component has to justify its value to be included in a system. Surely these circuits made sense back when the Pentium III, the P4, and the K7 were competing with one another. Whether their usefulness is undiminished in low-power parallel systems seems like the question to me, though. There appears to be a law of diminishing returns for everything.

Re:Right, because that worked so well (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46923001)

Also, I forgot one thing...

You do not know in advance which operands will be available from memory at which time. ... If you want to see an architecture which looks like it has a chance of breaking the limits on single-threaded performance

Do I really need to know that, or can I just switch to a different thread of execution until then? And do we really need to care about single-threaded performance that much these days? What if I want to program in Go instead of C++? (E.g., what if Google wants 0.5M new servers for deploying Go services?) Perhaps some level of "out-of-orderiness" is desirable, but a lower one would do? I really don't care in what way the performance gets squeezed into my battery-powered devices, and neither do most people who are buying the stuff, as long as it does.

Re:Right, because that worked so well (2)

amorsen (7485) | about 4 months ago | (#46923477)

Do I really need to know that, or can I just switch to a different thread of execution until then?

Sun tried it, market penetration near zero. You can get 12 threads per socket on a desktop Intel CPU, good luck keeping 12 threads busy on mainstream workloads.

Single threaded performance is everything for a CPU; it is cheap to add sockets and cores for parallel workloads. For real parallel work you use the GPU anyway.

Re:Right, because that worked so well (3, Insightful)

UnknownSoldier (67820) | about 4 months ago | (#46923569)

> And do we really need to care about single-threaded performance that much these days?

Not every task is parallelizable.

Second, are you going to pay an engineer to make the code multi-threaded just to get an X% run-time improvement?

Re:Right, because that worked so well (2)

unixisc (2429386) | about 4 months ago | (#46922395)

But that was part of the very concept of VLIW, which both Crusoe & Efficeon were. Those processors were somewhat more RISC than VLIW, except that their instruction words were 128 and 256 bits wide, as opposed to 32 or 64. Essentially, the idea was that the bottom core would stay constant, and any time there was an instruction set upgrade in a CPU from Intel or AMD, the Transmeta CPU would implement those new instructions in terms of its own native instructions, which would presumably either outperform them or provide better performance per watt.

In Itanium, Intel found that they didn't save much real estate by tossing all the decoding work to the compiler: in later Itanium designs, some things like register renaming, which are part of the compiler's job in a VLIW, found their way back into the hardware. I think that's what the GP meant by saying that decoding comes cheap.

Re:Right, because that worked so well (1)

Carewolf (581105) | about 4 months ago | (#46922817)

Transmeta was at the end of the era where decoding performance mattered. Keeping the translated code around was actually useful. These days decoding is approximately free on any CPU with half-decent performance -- the amount of extra die space for a complex decoder is not worth worrying about.

Actually Intel has recently returned to that. They now keep a small microinstruction cache of decoded instructions around so that loops can be executed more efficiently.

Re:Right, because that worked so well (1)

amorsen (7485) | about 4 months ago | (#46923407)

Fair enough, but they still choose to have all decoding done in hardware, so they still pay the (rather small) die-space penalty of a complex decoder.

Re:Couldn't one core... (1)

LWATCDR (28044) | about 4 months ago | (#46921721)

The Transmeta chip was not a smash hit, so probably not.
The really cool thing is that you will see ARM and x86 sharing parts. GPU cores are a no-brainer. Throwing in things like caches and memory controllers could be a big deal.
ARM sharing a socket with x86 will be really cool, IMHO.

Re: Couldn't one core... (0)

loufoque (1400831) | about 4 months ago | (#46921769)

Yes, because such complex heterogeneous hardware is so easy to program for.

Re: Couldn't one core... (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46921945)

How old is x86 now? It took a long time (ten years?) to get just a basic 32-bit protected mode operating system out to people at large after the hardware (80386) was out. I hope you're not expecting AMD to roll out a full-blown ecosystem of HW, drivers, compilers, and thousands of applications within a year just because you're impatient. I'm afraid the free lunch is over, but still, HSA is hardly a complexity monster. To me, it didn't seem nearly as threatening as a single look at the total size of x86+AMD64+extra instruction sets+documentation for OS developers+HW errata specs together. I suspect the only reason why you see it as "complex" is because you're closing your eyes when faced with the reality that we're already facing much more complex legacy HW. x86 has only the advantage that the tools are already here, but a heterogeneous system properly designed from scratch can't really be that much more complex than the "homoheterogenous" x86AMD64SSE1234AVX123 architecture that we have right now.

Re: Couldn't one core... (1)

aliquis (678370) | about 4 months ago | (#46922049)

What? No support for MMX?

Re: Couldn't one core... (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46922241)

I thought MMX was obsoleted, at least on AMD64 ABIs? Aren't you supposed to be using SSE, because the ABI puts obstacles in your way if you don't?

Re: Couldn't one core... (1)

unixisc (2429386) | about 4 months ago | (#46922295)

Wasn't SSE a superset of MMX?

Re: Couldn't one core... (1)

aliquis (678370) | about 4 months ago | (#46922997)

I'm not sure. I figured that maybe it was, or at least that there are better instructions around now and no need for it.

It might still have worked for lengthening the list, though.

Re: Couldn't one core... (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46923255)

Functionality-wise, yes. Instruction-encoding-wise, I'm not sure but I don't think so. Nor does it use the same registers (SSE ones are physically separate from x87/MMX ones).

Re: Couldn't one core... (0)

Anonymous Coward | about 4 months ago | (#46923777)

MMX was SIMD INT, and SSE was SIMD FLOAT.

There were other differences as well, but that's it in a nutshell.
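To make that split concrete, here is a minimal sketch using GCC/Clang intrinsics (the program and its values are invented for illustration): MMX adds packed 16-bit integers in registers aliased onto the x87 stack, while SSE adds packed single-precision floats in the separate XMM registers.

<ecode>
/* Build with e.g. gcc -mmmx -msse demo.c */
#include <stdio.h>
#include <string.h>
#include <mmintrin.h>    /* MMX intrinsics */
#include <xmmintrin.h>   /* SSE intrinsics */

int main(void)
{
    /* MMX: four packed 16-bit integers added in one instruction (PADDW). */
    __m64 a = _mm_set_pi16(1, 2, 3, 4);
    __m64 b = _mm_set_pi16(10, 20, 30, 40);
    __m64 isum = _mm_add_pi16(a, b);
    short si[4];
    memcpy(si, &isum, sizeof si);
    _mm_empty();   /* EMMS: clear MMX state before using x87/FP again */

    /* SSE: four packed single-precision floats added in one instruction (ADDPS). */
    __m128 x = _mm_set_ps(1.0f, 2.0f, 3.0f, 4.0f);
    __m128 y = _mm_set_ps(0.5f, 0.5f, 0.5f, 0.5f);
    __m128 fsum = _mm_add_ps(x, y);
    float sf[4];
    _mm_storeu_ps(sf, fsum);

    printf("MMX ints:   %d %d %d %d\n", si[0], si[1], si[2], si[3]);
    printf("SSE floats: %.1f %.1f %.1f %.1f\n", sf[0], sf[1], sf[2], sf[3]);
    return 0;
}
</ecode>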

stumbling over progress (1)

epine (68316) | about 4 months ago | (#46922369)

It took a long time (ten years?) to get just a basic 32-bit protected mode operating system out to people at large after the hardware (80386) was out.

Double facepalm!! That's one version of the story. In other news, the day after the first Prius was available for sale, there was a global recall on internal combustion engines—the kind of recall where they don't give back.

The hump where protected mode starts to drive real productivity benefit is somewhere above a 486SX/25 with 8 MB of RAM and a 120 MB disk drive. I had a Gateway 2000 laptop exactly like that (monochrome). It even had NetBSD for a few days. Simply not worth it. It had relatively fast video, but not VLB. I didn't even try X Windows.

Later I converted a 486DX/100 with 16 MB of RAM and a 200 MB disk drive into a BSD crash box. That system ran not bad, if you were patient enough. It really could usefully multitask.

Then I upgraded my main system to a P6/200 with 32 MB of RAM (not cheap) and a 640 MB SCSI hard drive (about a dollar per MB) and pair of 19" monitors (about $1000 each) running an early version of NT. This was exactly the point where I said to myself "I'll never go back".

This was not a software issue. The delay in widespread adoption of protected memory operating systems was in large measure caused by a DRAM price cartel.

DRAM price fixing [wikipedia.org] . The American company Micron was the ring-leader as I recall it.

In December 2003, the Department charged Alfred P. Censullo, a Regional Sales Manager for Micron Technology Inc., with obstruction of justice. Censullo pleaded guilty to the charge and admitted to having withheld and altered documents responsive to a grand jury subpoena served on Micron in June 2002.

On October 20, 2004, Infineon also pled guilty. The company was fined $160M for its involvement, then the third largest antitrust fine in US history. Hynix Semiconductor soon took the third position in April 2005 with a $185M criminal penalty after they also admitted guilt. In October 2005, Samsung entered their guilty plea in connection with the cartel.

I remember this extremely well because memory flat-lined at CDN $40/MB for about three years in the mid 1990s.

Of course this is not corruption. It's the invisible hand hard at work.

Re:stumbling over progress (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46922595)

Well, now it seems that the adoption of heterogeneous systems will be slowed down by the software cartel. Meet the new boss, not the same as the old one but you won't notice the difference. Still, the hardware has to start somewhere, I guess.

Re:stumbling over progress (0)

Anonymous Coward | about 4 months ago | (#46922775)

Oh man, good thing we don't have anything like that going on these days [camelcamelcamel.com] .

Re:stumbling over progress (2)

operagost (62405) | about 4 months ago | (#46922849)

You should have tried OS/2 with the 486DX. The SX laptop would have been slow with anything but DOS; no local bus and a glacial hard disk is killer. I had OS/2 on a 486DX-40 with 8 MB RAM and it was great.

Re: Couldn't one core... (1)

loufoque (1400831) | about 4 months ago | (#46922545)

I don't see how SSE is anything like it. Either you have an SSE or AVX unit or you don't. If you do, you use it exclusively. With a hybrid x86+ARM+GPU chip, you need to give work to all three of them, and it's nearly impossible to predict which unit will be best for each task, or even to schedule the damn thing dynamically.

HSA (1)

serviscope_minor (664417) | about 4 months ago | (#46921673)

Did it say anywhere if they're going to be juiced with HSA?

nanosecond latency to some bigass stream processors and no risk of memory-scribbling is too much to not want.

Obama lied, patients died (-1)

Anonymous Coward | about 4 months ago | (#46921679)

Obamacare - lies on top of lies. You cannot keep your plan, you cannot keep your doctor, you will pay more, not save $2500... and on and on it goes.

Democrats just can't stop lying and stealing... thanks so much for voting for this shit.

http://www.reviewjournal.com/politics/own-small-business-brace-obamacare-pain

"The changes put as many as 90,000 policies across Nevada at risk of cancellation or nonrenewal this fall, said Las Vegas insurance broker William Wright, president of Chamber Insurance and Benefits. That’s more than three times the 25,000 enrollees affected in October, when Obamacare-compliant plans first hit the market.

Some workers are at higher risk than others of losing company-sponsored coverage. Professional, white-collar companies such as law or engineering firms will bite the bullet and renew at higher prices because they need to compete for scarce skilled labor, Nolimal said.

But moderately skilled or low-skilled people making $8 to $14 an hour working for landscaping businesses, fire-prevention firms or fencing companies could lose work-based coverage because the plans cost so much relative to salaries."

Re:Obama lied, patients died (-1)

Anonymous Coward | about 4 months ago | (#46921687)

And none of that is even remotely relevant to this article.

Re:Obama lied, patients died (0)

Anonymous Coward | about 4 months ago | (#46922001)

They could name the new micro-architecture "Obamacore" and it would be relevant. ;-0

Re:Obama lied, patients died (1)

jfdavis668 (1414919) | about 4 months ago | (#46921999)

The Obamacare website will work fine when run on these new AMD processors, of course.

Been a long time since I cared (1)

asmkm22 (1902712) | about 4 months ago | (#46921711)

The last time I truly got excited about AMD was when the K6-2 came out. These days, I just wish AMD would put a focus on power consumption and high quality rather than simply trying to out-core Intel.

Re:Been a long time since I cared (5, Insightful)

werepants (1912634) | about 4 months ago | (#46921789)

The last time I truly got excited about AMD was when the K6-2 came out.

What? During the P4 days AMD was ahead in almost every category in the benchmarks... did you miss that whole era? No denying the picture today is far less exciting, though.

Re:Been a long time since I cared (2)

unixisc (2429386) | about 4 months ago | (#46921903)

Actually, K7 - when Dirk Meyer's team left DEC to join AMD - was when they first made a real technical challenge to Intel's CPUs. Until then, it was one mediocre challenge after another - first the Am386s & 486s, then the NexGen acquisition, then the K6. Finally, when AMD did the Athlon with the ex-Alpha team from DEC and then extended CISC to 64-bit, that's when things started getting interesting.

Re:Been a long time since I cared (1)

werepants (1912634) | about 4 months ago | (#46922777)

My personal favorite was the Athlon XP 1700+. The best was date code JIUHB DLT3C; it had documented cases of getting above 4GHz - pretty good considering that it is still a feat to hit that 10 years later. I bought two or three 1700+'s on eBay before I hit the jackpot. Unfortunately, I never managed to put together the water cooling system I had planned, so I never got it over 3 GHz.

Re:Been a long time since I cared (1)

Jaime2 (824950) | about 4 months ago | (#46921913)

But that's only because Intel let the marketing department make engineering decisions and kept making chips with higher and higher clock frequency. As soon as they regained their sanity, they once again dominated the benchmarks.

I do love how AMD brilliantly capitalized on the blunder by labeling their chips according to the clock speed of the performance-equivalent Intel chip: every time Intel put insane engineering effort into ratcheting the clock up 10% and only getting 1% better performance, AMD simply made their chips a tiny bit faster and labelled theirs the same as Intel's.

Re:Been a long time since I cared (1)

afidel (530433) | about 4 months ago | (#46922019)

Yup, on the server side AMD was ahead from the first Opteron until Shanghai, and then Intel launched Nehalem and they've been ahead ever since. On the desktop Intel got competitive again with the Core 2, but on a performance-per-dollar metric it wasn't until Nehalem that they dominated.

Re:Been a long time since I cared (0)

Anonymous Coward | about 4 months ago | (#46923623)

AMD was great from the 386 era through the Pentium 4 era. They owned low-cost CPUs (often by continuing older product lines at higher clock speeds than Intel bothered with), performance per dollar, and often outright performance as well. They were also more aggressive in moving functionality (such as the memory controller) into the CPU. Unfortunately, with the notable exception of the Pentium 4 era, they were never competitive with Intel on performance per watt. This deficiency was exacerbated when Intel went to town on performance per watt and got more aggressive on pricing, starting with the Pentium M and Core architectures. All that coincided with a market shift to mobile platforms, which worked in Intel's favor. Intel plowed their profits into R&D, widening their performance-per-watt advantage over AMD, and ultimately eclipsing them in performance.

Last time I needed to rebuild and upgrade my rig (which was a couple years ago, at this point) I was ready to go AMD again, as I always had previously, but reading the benchmarks I found that a Sandy Bridge Core i3 was sufficient to outrun the entirety of AMD's product line. And my budget allowed a Core i5... That said, since they bought ATI, AMD has owned the "graphics core in the CPU" title. A hypothetical AMD APU, with x86 AND ARM cores sharing the on-board Radeon GPU, would be an intriguing concept.

Re:Been a long time since I cared (1)

aliquis (678370) | about 4 months ago | (#46921979)

K6-2 was so-so.

The Athlon XP and Athlon 64 were good in their time.

Since the days of the Pentium M, though...

Re:Been a long time since I cared (2)

jfdavis668 (1414919) | about 4 months ago | (#46922013)

How about the AMD386? It ran at 40 MHz. 40!

Re:Been a long time since I cared (0)

Anonymous Coward | about 4 months ago | (#46922167)

That bitch had cache memory too. Internal cache memory on a 386!

Re:Been a long time since I cared (2)

DudemanX (44606) | about 4 months ago | (#46922337)

I had an AMD486 at 80 MHz. It was cheaper than an i486 at 66 MHz and performed great. The Pentium had just come out at the time but was super expensive. I was able to find a late-model 486 board with PCI slots, though, and with the awesome value of the AMD chip was able to have a nice "budget" system for the time. It was even able to run Quake playably (a game which "required" the Pentium and its baller FPU).

Re:Been a long time since I cared (0)

angel'o'sphere (80593) | about 4 months ago | (#46922747)

I ran Quake on a 486 DX(2?) at 33 MHz just fine.

More frames per second than the monitor could handle.

Re:Been a long time since I cared (0)

Anonymous Coward | about 4 months ago | (#46922945)

And then there's those of us who actually use all those silly cores:

Tasks: 226 total, 2 running, 224 sleeping, 0 stopped, 0 zombie
%Cpu0 : 30.3 us, 1.0 sy, 61.3 ni, 7.3 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu1 : 32.3 us, 1.3 sy, 58.4 ni, 7.6 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu2 : 34.7 us, 1.7 sy, 56.3 ni, 7.0 id, 0.0 wa, 0.3 hi, 0.0 si, 0.0 st
%Cpu3 : 30.9 us, 1.7 sy, 59.5 ni, 8.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu4 : 11.7 us, 0.3 sy, 85.0 ni, 3.0 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
%Cpu5 : 43.9 us, 1.3 sy, 46.2 ni, 8.6 id, 0.0 wa, 0.0 hi, 0.0 si, 0.0 st
KiB Mem: 16432068 total, 16098748 used, 333320 free, 6420824 buffers
KiB Swap: 999420 total, 13260 used, 986160 free. 6430540 cached Mem
  PID USER PR NI VIRT RES SHR S %CPU %MEM TIME+ COMMAND
20367 blah 20 0 4454228 1.545g 37792 S 544.9 9.9 3350:48 ghb ...

Steamroller/Excavator ??? (1)

jbo5112 (154963) | about 4 months ago | (#46921733)

I'm still waiting for an upgrade to my AMD FX-6300. I bought it on the promise that there would be an upgrade. I've liked AMD for a long time, but getting burned on the first processor I buy from them is no way to keep customers.

Re:Steamroller/Excavator ??? (2)

Kjella (173770) | about 4 months ago | (#46922061)

I'm still waiting for an upgrade to my AMD FX-6300. I bought it on the promise that there would be an upgrade. I've liked AMD for a long time, but getting burned on the first processor I buy from them is no way to keep customers.

So you've been here longer than I have (UID), liked AMD for a long time yet never bought one in the golden years from 1999 (launch of Athlon) - 2006 (Intel launching Core) or relative competitiveness up to 2010 (with Phenom II x6 still giving Intel a fair fight) but waited until October 2012 when they were clearly well into a decline? Pardon me but your story smells worse than shrimps left out in the sun for a week.

Re:Steamroller/Excavator ??? (1)

jbo5112 (154963) | about 4 months ago | (#46923109)

I didn't have money for a computer upgrade in 2001-2006 (thank you Clinton/Federal Govt. for the Dot-com bubble that burst when I was leaving college). I went from a dual Celeron system (ABIT BP6 motherboard) in mid-1999 to an Athlon-XP that someone gave me in 2004 to a Core 2 Quad Q6600 in 2007. When my motherboard died, I needed something cheap for keeping a lot of programs (including a virtual machine or two) running. I knew 6 slower cores would probably work better for me than 2 faster cores, even if they were hyperthreaded, and the motherboard had better features than Intel (more IO and a planned CPU upgrade).

Re:Steamroller/Excavator ??? (1)

afidel (530433) | about 4 months ago | (#46922071)

There's an upgrade, FX-9590.

Re:Steamroller/Excavator ??? (3, Funny)

K. S. Kyosuke (729550) | about 4 months ago | (#46922635)

When did they start labeling the CPUs with their operating temperature?

Re:Steamroller/Excavator ??? (1)

jbo5112 (154963) | about 4 months ago | (#46923419)

My motherboard is limited to 140W CPUs. The FX-9590 would require me to buy both a new motherboard and a new power supply. For that much money, I could get both Core i3-4340 and Core i7-4770K CPU/motherboard combos to upgrade two computers, or go with an i7-4820K (LGA 2011) in my workstation. Way to deliver, AMD!

Serious Question (4, Interesting)

Anonymous Coward | about 4 months ago | (#46921801)

Is AMD just around so Intel doesn't get bogged down by anti-monopoly or antitrust penalties?

Re: Serious Question (4, Interesting)

Anonymous Coward | about 4 months ago | (#46921857)

64 cores per U, 80% of Intel's per-core performance, at 12% of Intel's price.

Re: Serious Question (0)

Anonymous Coward | about 4 months ago | (#46921915)

Also 10% and 15% of Intel's revenue and stock price, respectively.

Re: Serious Question (0)

Anonymous Coward | about 4 months ago | (#46922315)

100% bullshit

Re: Serious Question (4, Insightful)

Junta (36770) | about 4 months ago | (#46922579)

Well, something of an oversimplification/exaggeration.

64 'cores' is 32 Piledriver modules. That was a gamble that by and large did not pan out as hoped. For a lot of applications, you really have to count those as 32 cores. Intel is currently at 12 cores per package versus AMD's 8 per package. Intel's EP line is less frequently found in a 4-socket configuration because dual-socket performance can be much higher with Intel's QPI than 4-socket. AMD can't do that topology, so you might as well do 4 sockets. Additionally, Intel's memory architecture tends to result in more DIMM slots being put on a board. AMD's thermals are actually a bit worse than Intel's, so it's not that AMD can be reasonably crammed in but Intel cannot. The pricing disparity is something that Intel chooses at their discretion (their margin is obscene), so if Intel ever gets pressure, they could halve their margin and still be healthy margin-wise.

I'm hoping this lives up to the legacy of the K7 architecture. K7 left Intel horribly embarrassed, and it took them years to finally catch up, which they did when they launched Nehalem. Bulldozer was a decent experiment, and software tooling has improved utilization, but it's still rough. With Intel ahead in both microarchitecture and manufacturing process, AMD is currently left with 'budget' pricing out of desperation as their strategy. That is by no means something to dismiss, but it's certainly less exciting and perhaps not sustainable, since their costs are in fact higher than Intel's (though Intel's R&D budget is gigantic to fuel that low per-unit-cost advantage, so while the difference in gross margin between Intel and AMD is huge, the difference in net margin isn't as drastic). If the Bulldozer scheme had worked out well, it could have meant another era of AMD dominance, but it sadly didn't work as well in practice.

Re:Serious Answer (0)

Anonymous Coward | about 4 months ago | (#46921905)

This has been obvious for some decades. When AMD got technologically ahead of Intel, Intel squashed them with the Core series, which outperformed AMD dramatically. Now, Intel is keeping margins high enough to keep AMD selling chips at a loss most of the time.

Re:Serious Answer (2)

Junta (36770) | about 4 months ago | (#46922699)

Well, in the *desktops*, Core marked an end to AMD dominance in most practical terms, but architecturally they still were not very good for scalability. Basically, they turned back the clock to the Pentium III on modern processes, and that was enough to recover the desktop space.

Nehalem is the point at which Intel basically overtook AMD again, and AMD has not come back since. So Intel's had the ball for 3 of their 'tocks'. AMD prior to K7 was pretty weak for a lot longer than that, and I don't think anyone familiar with AMD in the K6 era and earlier would have guessed they would become something more than a budget alternative. So AMD could conceivably come out of this with something awesome despite recent misfortune.

Re:Serious Question (0)

Anonymous Coward | about 4 months ago | (#46923053)

And to power the latest generation of consoles.

All of them.

Every

Single.

One.

And none of your wintel fanboi tears will change it.

Re:Serious Question (1)

tlhIngan (30335) | about 4 months ago | (#46923153)

Is AMD just around so Intel doesn't get bogged down by anti-monopoly or antitrust penalties?

Somehow these days, I think it's yes. And I think Intel's lobbing customers AMD's way to ensure that AMD survives. E.g., the current generation of consoles now sport AMD processors. I'm sure Intel would be more than happy to have the business, but not only do they not need it, they see it as a way to give AMD much needed cash for the next few years.

Hell, I'm sure part of the whole Intel letting others use their fabs thing is to figure out a way to get AMD to use some of their spare capacity. Of course, it has to be done in such a way that it doesn't run afoul of any anti-trust and all that.

Right now, AMD is in a good spot for Intel - big enough to count as competition, small enough to not really matter.

You can bet many other companies pay lots of money for a competitor to stay in it - I can think of Google and iAds, for example. Google got AdMob because Apple introduced iAds, yet iAds is completely worthless to any advertiser - it's too expensive, too limited, and all around a bad deal, whereas Google is cheap and easy. And yet, Apple keeps iAds around, despite practically no one supporting it. Apple's killed other stuff for less. Only reason I can see is Google pays Apple for that to keep anti-trust at bay.

Re:Serious Question (2)

Kjella (173770) | about 4 months ago | (#46923813)

Somehow these days, I think it's yes. And I think Intel's lobbing customers AMD's way to ensure that AMD survives. E.g., the current generation of consoles now sport AMD processors. I'm sure Intel would be more than happy to have the business, but not only do they not need it, they see it as a way to give AMD much needed cash for the next few years.

Consoles are primarily about graphics, not CPU power. While Intel's integrated graphics suck somewhat less than they used to, the PS4 has 1152 shaders backed by 8GB of GDDR5, and Intel has never had anything remotely close to that, maybe a third or a quarter of that tops. An Intel CPU with dedicated AMD graphics would be very unlikely, since AMD would almost certainly price it so their own CPU/GPU combo came out better. So realistically it was AMD vs Intel+nVidia, neither of which likes to sell themselves cheap. I don't think you need any market collusion to see AMD winning this one; while it's floating the boat, they're not exactly making big money, so they probably sold themselves rather cheap.

Best of luck to them (5, Interesting)

Dega704 (1454673) | about 4 months ago | (#46922227)

I was such an AMD fanboy from the moment I built my first (new) computer with a K6-2. I have to admit I miss the days of the Athlon being called "The CPU that keeps Intel awake at night." After Bulldozer bombed so thoroughly, I just gave up and haven't followed AMD's products since. I definitely wouldn't mind a comeback, if they can pull it off.

Re:Best of luck to them (1)

arbiter1 (1204146) | about 4 months ago | (#46922377)

As of late it's been AMD hoping and claiming their new stuff will be great, but when it hits the market it turns out to be not as good as they hoped, and they back-track on some of what they said.

Re:Best of luck to them (1)

unixisc (2429386) | about 4 months ago | (#46922449)

Their manufacturing has always been their Achilles heel. If only they had the fabs that Intel has....

Re:Best of luck to them (3, Insightful)

Bryan Ischo (893) | about 4 months ago | (#46922811)

I don't get it. Do you, and just about everyone else who has posted in this discussion, only buy chips that cost > $200? Because AMD is, and always has been, competitive with Intel in the sub-$200 price range.

Sub $200 chips have, for a very long time, been very fine processors for the vast majority of desktop computer tasks. So for years now, if you're anything close to a mainstream computer user, there has been an AMD part competitive with an Intel part for your needs.

Of course, once you get to the high end, AMD cannot compete with Intel; but that's only a segment of the market, and it is, in fact, a much smaller segment than the sub $200 segment.

I personally have a Phenom II x6 that I got for $199 when they first came out (sometime in 2011 I believe) that was, at the time, better on price/performance than any Intel chip for my needs (mostly, parallel compiles of large software products) and absolutely sufficient for any nonintensive task, which is 99% of everything else I do besides compiling.

Anyway, if you only think of the > $200 segment, why stop there? I'm pretty sure that for > $10,000 there are CPUs made by IBM that Intel cannot possibly compete with.

Re:Best of luck to them (1)

mbkennel (97636) | about 4 months ago | (#46923719)

| Of course, once you get to the high end, AMD cannot compete with Intel; but that's only a segment of the market, and it is, in fact, a much smaller segment than the sub $200 segment.

From AMD's end, that's a critically important segment since it's where the most money is, and chip design and manufacturing are exceptionally expensive.

Re:Best of luck to them (1)

marsu_k (701360) | about 4 months ago | (#46923595)

I don't know how much of a profit they're making on their APUs, but they're the winners of the current console generation (somewhat surprisingly, the winner of the previous gen was IBM with PPC/Cell). I'm hoping they stay afloat - they may only be competitive (when it comes to general x86/x64) on very few tasks that require very many cores (and even then probably using more watts at that), but it's never healthy to have a monopoly.

Very likely (0)

Anonymous Coward | about 4 months ago | (#46922237)

Intel will follow!
Just like they needed to do with x86_64!

Why are people designing cores? (1)

scorp1us (235526) | about 4 months ago | (#46922243)

It seems like fertile territory for genetic algorithms to design the die. Sure, humans need to define the features, but run everything through a genetic algorithm, simulate, and let the computer grow its own chips. Perhaps whole chips are not practical, but it could work for sub-processing units.
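As a rough illustration of what the parent is suggesting, here is a toy genetic-algorithm skeleton in C (assuming GCC/Clang for __builtin_popcount). The 32-bit "design" and the count-the-set-bits fitness function are stand-ins invented for illustration; a real flow would score candidates with a placement/timing simulator instead.

<ecode>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define POP  32
#define GENS 200

/* Stand-in fitness: count of set bits in the candidate "design". */
static int fitness(unsigned g) { return __builtin_popcount(g); }

/* Flip one random bit. */
static unsigned mutate(unsigned g) { return g ^ (1u << (rand() % 32)); }

/* Single-point crossover of two parents. */
static unsigned crossover(unsigned a, unsigned b)
{
    unsigned mask = (1u << (rand() % 32)) - 1;
    return (a & mask) | (b & ~mask);
}

/* Tournament selection: pick the fitter of two random individuals. */
static unsigned select_one(const unsigned *pop)
{
    unsigned a = pop[rand() % POP], b = pop[rand() % POP];
    return fitness(a) >= fitness(b) ? a : b;
}

int main(void)
{
    unsigned pop[POP], next[POP];
    srand((unsigned)time(NULL));
    for (int i = 0; i < POP; i++) pop[i] = (unsigned)rand();

    for (int gen = 0; gen < GENS; gen++) {
        for (int i = 0; i < POP; i++) {
            unsigned child = crossover(select_one(pop), select_one(pop));
            if (rand() % 10 == 0) child = mutate(child);   /* ~10% mutation rate */
            next[i] = child;
        }
        for (int i = 0; i < POP; i++) pop[i] = next[i];
    }

    int best = 0;
    for (int i = 0; i < POP; i++)
        if (fitness(pop[i]) > best) best = fitness(pop[i]);
    printf("best fitness after %d generations: %d / 32\n", GENS, best);
    return 0;
}
</ecode>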

 

Re:Why are people designing cores? (1)

wiggles (30088) | about 4 months ago | (#46922447)

Pretty sure that firing all of the hot shot CPU designers and having such algorithms design their CPUs for them is how they wound up with the Bulldozer fiasco.

Looky here. [xbitlabs.com]

Re:Why are people designing cores? (1)

rrohbeck (944847) | about 4 months ago | (#46923197)

Nonsense. Code does routing and floor planning, it doesn't design two-core modules.

Oh and in the current designs the automatic layout saved significant real estate and power compared to hand layouts.

The article you refer to is utter bullshit.

Re:Why are people designing cores? (2)

K. S. Kyosuke (729550) | about 4 months ago | (#46922845)

Sounds useful, but for smaller cores. Having said that, the more you simplify the design, the better for certain smarter methods. For example, it's my understanding that Chuck Moore optimizes his Forth cores to expand the envelope of operating conditions to an extent that AMD and Intel can't afford, simply because their cores are too large to be understood. Too many state transitions to study, too many gates, etc., whereas CM can afford to simply run a full physical model, including individual transistor temperatures, on a regular basis (and his stack machine is simple, so there aren't as many states and state transitions to check exhaustively against operating limits). That's one part of why nothing seriously programmable I'm aware of can do more bit-ops per joule (as in bit-equivalent operations, as opposed to some fixed-width integers - since those chips have non-traditional widths, unless you really like the PDP-1).

At this level, it might even be worthwhile to try to generate instruction sets automatically, hand in hand with HW synthesis, simulation, and automated compiler generation, if such a thing is possible. I have this nagging idea that we're still guessing at which ISA designs are actually efficient, given that it's so damned hard to cycle a single ISA through the whole design and feedback process. This whole S/360, x86, SPARC, IA64, etc. situation feels like peeking at a few points in the total CPU design space and thinking that we're smarter and know where to look based on those few data points. Except we may be completely wrong on that.

Re:Why are people designing cores? (1)

Anonymous Coward | about 4 months ago | (#46922867)

Who modded this up? I work in VLSI CAD. A really huge amount of design work is automated. Very large scale units are fully automated. Cores are still much larger than the largest unit that can be fully automated. There is a trade-off between the quality you can get from human decision-making and the speed you can get from automation, and there is a limit to how large a problem can be solved. Intel, and recently Apple, make better cores by spending human effort where others use automation, and perhaps AMD has learned their lesson and realized that getting a crummy product to market sooner is only a good strategy if you are first to market.

Designs are broken into blocks of suitable size for automation and then composed from those blocks. People can write a behavioral specification for a block in basically C code, and then that gets "compiled" into a hardware block complete with physical layout. Every optimization algorithm gets experimented with. Genetic algorithms seem like the greatest thing ever when you first learn about them, but they aren't actually the be-all, end-all of optimization algorithms. Specifically, they don't scale well to large problems, which is exactly the opposite of what we need to be able to automate the design of an entire core. So the answer to your question is yes and no, but mostly no.

I have my own question to pose to you. Why are humans still posting ignorant questions to troll the comments sections of Slashdot? I mean, couldn't a genetic algorithm be used to combine different topics and question characteristics to try to maximize the responses to the automated question? Maybe Dice should look into adding that as a feature to Beta. Maybe they could create their own answer generator too. That would be great, then they wouldn't need the community at all!

Re:Why are people designing cores? (1)

K. S. Kyosuke (729550) | about 4 months ago | (#46923125)

Specifically, they don't scale well to large problems, which is exactly the opposite of what we need to be able to automate the design of an entire core.

Well, that's why one should try it with small problems instead! The core I've mentioned above is barely VLSI by modern standards; it has something like 30k gates. Is this still above the limit you mention?

And RISC slowly rediscovers that CISC is better (1)

DutchUncle (826473) | about 4 months ago | (#46922593)

. . . . until the next generation knows not history and thinks they rediscovered RISC . . .

Meanwhile... (1)

Sable Drakon (831800) | about 4 months ago | (#46923181)

Intel is going to have something on the market that runs more efficiently and with better performance. Try as they might, AMD just hasn't managed to get their act together and produce a decently performing product since the Athlon II.

Re:Meanwhile... (1, Troll)

Tough Love (215404) | about 4 months ago | (#46923401)

Which is why consoles don't use AMD at all. Oh wait...

Re:Meanwhile... (2)

Sable Drakon (831800) | about 4 months ago | (#46923503)

Consoles are using AMD because the parts are cheap, not because the performance/watt is fantastic. AMD hasn't been able to produce a CPU with amazing performance, decent thermals, and high power efficiency for years now. Why do you think gaming PCs and nearly all laptops use Intel? Because Intel offers all three with ease.

amd needs pci-e 3.0 / faster HyperTransport (1)

Joe_Dragon (2206452) | about 4 months ago | (#46923373)

or at least give all CPUs 2-3 HT links so you can have two or more HT-to-chipset / HT-to-PCIe bridges on a one-CPU board.
