AMD Breaks 1GHz GPU Barrier With Radeon HD 4890

timothy posted more than 5 years ago | from the sheer-necessity dept.

Graphics 144

MojoKid writes "AMD announced today that they can lay claim to the world's first 1GHz graphics processor with their ATI Radeon HD 4890 GPU. There's been no formal announcement about which partners will be selling the 1GHz variant, but AMD does note that Asus, Club 3D, Diamond Multimedia, Force3D, GECUBE, Gigabyte, HIS, MSI, Palit Multimedia, PowerColor, SAPPHIRE, XFX and others are all aligning to release higher performance cards." The new card, says AMD, delivers 1.6 TeraFLOPs of compute power.


It Was Epic (5, Funny)

eldavojohn (898314) | more than 5 years ago | (#27954781)

AMD Breaks 1GHz GPU Barrier

I was diligently working at XYZ Corp a few buildings down when Incident One happened in their lab. At first, I was just sitting in my cubicle when suddenly we felt a severe shuddering of space & time around us. Then a few seconds later everyone heard a loud "Ka-BOOM" and stood up to see what was going on outside. The buildings directly adjacent to the AMD lab had all their windows blown out and every car alarm within a square mile was going off. Some scientists with their hair blown straight back and carbon scoring randomly on their faces and white lab coats were seen to climb out of the rubble of AMD's R&D building. They immediately began dusting themselves off, high-fiving each other and patting each other on the back, laughing and ecstatic. Then they headed towards the liquor store down the street to pick up some champagne. Shortly after, it was discovered that 1 GHz is the frequency at which æther vibrates when it is at rest, so once you pass it, you leave a wake of æther behind your time cone. Roger Penrose and Stephen Hawking are due to give a speech at "GPU Ground Zero" this week; I hope to make it.

If I were working marketing for AMD, I would be pointing out how switching from base ten to base eleven, twelve, thirteen, etc. provides a theoretically unlimited supply of newsworthy advertisements about broken barriers. "We just need to make it to 2,357,947,691 hertz and we'll be the first to claim we've broken the 1 GHz (base11) barrier! Where the hell was the report that we broke base9 last year?!"

Re:It Was Epic (1)

FredFredrickson (1177871) | more than 5 years ago | (#27954833)

base9 wasn't really that much of a feat. Not to mention, the class-action lawsuit on differing bases really put a damper on that party.

Re:It Was Epic (1)

ArcherB (796902) | more than 5 years ago | (#27955797)

Shortly after, it was discovered that 1 GHz is the frequency at which æther vibrates when it is at rest, so once you pass it, you leave a wake of æther behind your time cone.

Wow! And here I thought it was 1.21 GHz at 88 MPH.

Re:It Was Epic (1)

Cornelius the Great (555189) | more than 5 years ago | (#27956643)

Great Scott! Don't you mean 1.21 JHz (jigahertz)?

Re:It Was Epic (2, Insightful)

yabos (719499) | more than 5 years ago | (#27956933)

jigawatts

Re:It Was Epic (2, Funny)

binarylarry (1338699) | more than 5 years ago | (#27958285)

jiga what?

Re:It Was Epic (0)

Anonymous Coward | more than 5 years ago | (#27959275)

boo!

Re:It Was Epic (1)

tlhIngan (30335) | more than 5 years ago | (#27958343)

Great Scott! Don't you mean 1.21 JHz (jigahertz)?

"Giga" in some countries is actually pronounced "jiga". (History says that is how "Giga" is pronounced everywhere except the US, but that's debatable). Thus, 1.21GHz would be an accurate figure in this article.

Re:It Was Epic (0)

Anonymous Coward | more than 5 years ago | (#27955945)

oh god, this was seriously the best post i have ever read on slashdot. thanks for the great laughs! :D

It Will Still Be Epic (0)

Anonymous Coward | more than 5 years ago | (#27956477)

I "can't wait" until they reach 9GHz, and everyone goes "OVER NINE THOUSAAAAAAAAAAND"

Re:It Was Epic (3, Funny)

Anarchduke (1551707) | more than 5 years ago | (#27957263)

AMD Broke the 1 GHz barrier on their CPU, and now they break the 1GHz barrier on their GPU.

It doesn't matter what base you use, AMD owns that achievement.

According to AMD top researchers, whether it was base-9, base-10, or base-11 doesn't matter. According to AMD,

"All your base are belong to us."

AMD CPU too (1)

Devistater (593822) | more than 5 years ago | (#27954821)

Didn't AMD break the 1GHz desktop CPU "barrier" too? ;)

Re:AMD CPU too (4, Informative)

LoRdTAW (99712) | more than 5 years ago | (#27954857)

Digital broke that with the DEC Alpha (was it DEC at that time?). It wasn't popular, but it was a desktop CPU for high-end workstations.

Re:AMD CPU too (3, Interesting)

LoRdTAW (99712) | more than 5 years ago | (#27954901)

Sorry. It was Compaq who owned the Alpha at that time. It was still DEC who designed it though.

Re:AMD CPU too (0, Troll)

Ceseuron (944486) | more than 5 years ago | (#27955265)

I think it was IBM that broke the 1GHz CPU barrier. [cbronline.com]

It's not really a huge feat to break the 1GHz "barrier" for a GPU anyway. And since it's an AMD product, it'll run hot as hell and require a massive heatsink. You'll be able to barbecue a steak on your CrossFire-enabled rig with two of these installed. Since it's also ATi, the card itself will be really awesome but the drivers released will be buggy and unstable, turning the card into little more than a giant red paperweight.

Re:AMD CPU too (0)

Anonymous Coward | more than 5 years ago | (#27957559)

How is this a troll?

ATi's drivers are buggy, unstable, and forget about Linux support. Whereas nVidia supported all their cards when Kubuntu 9.04 came out, ATI did not, and as far as I know they still do not.

The OP is 100% correct: ATI could make a 12 GHz chip and it would still be worthless because of their drivers and lack of Linux support.

They need to quit giving Fanbois modpoints

Re:AMD CPU too (3, Informative)

hairyfeet (841228) | more than 5 years ago | (#27958741)

Actually, as a PC repairman I can tell you the "trick" with AMD, and it is this: always buy a generation or two behind. I have sold many ATI and Nvidia cards as well as AMD PCs with ATI chipsets, and as long as you stay a generation or two behind you're good to go. My dual-core Kuma with the 780V chipset is solid as a rock.

So what I tell my customers is this: If you want to spend top dollar and be on the bleeding edge, go Nvidia. Their drivers will be rock solid even for the card they just released. With AMD/ATI, always buy a generation or two behind and NEVER upgrade the drivers! Unlike Nvidia, whose drivers are pretty painless to upgrade, upgrading to the latest Catalyst drivers usually ends up bringing nothing but instability and headaches. Now I don't know if this "trick" works with Linux, as I'm a Windows-only shop. But I have found in Windows that if you follow this rule you'll be good to go and save a few bucks as well. The "bang for the buck" ratio is very good on AMD/ATI, which is why I just built my first AMD PC since the old Barton core. You just have to be careful not to get too close to the bleeding edge with ATI, as out of the box their new drivers always suck.

Re:AMD CPU too (1)

Dyinobal (1427207) | more than 5 years ago | (#27959065)

I've got to say I don't share your experience at all. Beyond the little CPU bug fiasco I've never had an issue with AMD, and I wasn't even caught up in that bug since I adopted slightly later, when they released the revised Phenoms. Honestly, AMD doesn't have a bad track record. I see you're recommending from personal experience, though; I'd just have to say I'd go the other way.

Re:AMD CPU too (1)

TexNA55 (1338761) | more than 5 years ago | (#27959121)

Nope, it was later publicly revealed that it was the PR-rating 1GHz barrier they broke.... >:p

So this means... (5, Funny)

smooth wombat (796938) | more than 5 years ago | (#27954849)

one will finally have a graphics card capable of playing Duke Nukem Forever.

Oh wait...

Re:So this means... (0)

Anonymous Coward | more than 5 years ago | (#27955099)

This makes me so sad on the inside. Admit it, you are hurt too when you make jokes like this.

Re:So this means... (1, Funny)

DigiShaman (671371) | more than 5 years ago | (#27955629)

Don't wait for "never".

Why is it harder on GPUs than CPUs? (4, Interesting)

G3ckoG33k (647276) | more than 5 years ago | (#27954897)

Why is it harder to raise the clock frequencies on GPUs than CPUs? Is more code in use at the same time per unit area, or is it something else?

Re:Why is it harder on GPUs than CPUs? (1)

jgtg32a (1173373) | more than 5 years ago | (#27955037)

They've never needed to get the clock speed up that high before. Remember, GHz != performance.

Re:Why is it harder on GPUs than CPUs? (2, Insightful)

Pulzar (81031) | more than 5 years ago | (#27956717)

They've never needed to get the clock speed up that high before. Remember, GHz != performance.

Err... it's not that black and white; you can't just say that GHz != performance. If you take a card and raise its clock, you'll usually get more performance. If you raise the memory speed, you'll usually get more performance. The only time you won't is when one is bottlenecking the other.

All we've learned from the CPU wars is that between two different architectures, the faster one isn't necessarily the one with more GHz. But between two identical designs, more GHz means more performance.
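
To put that point in concrete terms, here is a rough Python sketch of how throughput behaves within a single design: it rises with the core clock until some other resource, in this toy model memory bandwidth, becomes the bottleneck. Every number and the helper function are illustrative assumptions, not figures for any real card.

```python
# Toy model: within one design, throughput scales with the core clock until
# another resource (here, memory bandwidth) caps it. All numbers are made up.

def effective_fill_rate(core_mhz, pixels_per_clock, mem_bandwidth_gb_s, bytes_per_pixel=4):
    """Effective pixel throughput (Gpixels/s) for a hypothetical GPU."""
    raw = core_mhz * 1e6 * pixels_per_clock / 1e9     # what the core could emit
    fed = mem_bandwidth_gb_s / bytes_per_pixel        # what the memory can feed
    return min(raw, fed)

for clock in (750, 850, 1000, 1200):
    rate = effective_fill_rate(clock, pixels_per_clock=32, mem_bandwidth_gb_s=124.8)
    print(f"{clock} MHz -> {rate:.1f} Gpix/s")
# Output: 24.0, 27.2, 31.2, 31.2 -- it climbs with clock, then flattens once
# the memory bus, not the core clock, is the bottleneck.
```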

Re:Why is it harder on GPUs than CPUs? (5, Informative)

KillerBob (217953) | more than 5 years ago | (#27955061)

Heat. Because of the form factor, you can't put a massive heatsink on a graphics card, certainly not the kind that you see on high end desktop CPUs.

GPUs are also generally a completely different architecture than a CPU... they're usually massively parallel and optimized for working with enormous matrices, whereas a CPU is significantly more linear in its operation, and generally prefers single variables.
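
That architectural contrast can be sketched in a few lines of Python/NumPy (an illustration of the two programming styles, not of either chip): the explicit loop is the one-value-at-a-time pattern a CPU core executes naturally, while the whole-matrix expression is the data-parallel pattern a GPU spreads across its hundreds of stream processors.

```python
# Contrast of styles, not a benchmark.
import numpy as np

a = np.random.rand(256, 256).astype(np.float32)
b = np.random.rand(256, 256).astype(np.float32)

# Scalar, sequential style: one multiply-add at a time, like a single CPU core.
out = np.empty_like(a)
for i in range(a.shape[0]):
    for j in range(a.shape[1]):
        out[i, j] = a[i, j] * b[i, j] + 1.0

# Data-parallel style: the same multiply-add expressed over the whole matrix,
# so every element could in principle be computed at the same time.
out_parallel = a * b + 1.0

assert np.allclose(out, out_parallel)
```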

Re:Why is it harder on GPUs than CPUs? (2, Interesting)

dhanson865 (1134161) | more than 5 years ago | (#27956049)

Yeah, you can't put the exact same heatsink on them, but take a look at the Accelero S1 Rev. 2 at http://www.arctic-cooling.com/catalog/product_info.php?cPath=2_&mID=105&language=en [arctic-cooling.com]

Even putting a 120mm fan on it doesn't cover the entire fin area: http://www.silentpcreview.com/article793-page5.html [silentpcreview.com]

Yeah, with the fan it'll be a 3-slot solution, and yeah, it only weighs half as much as a high-end CPU heatsink, but then again that is not their biggest GPU heatsink.

The heaviest solution on AC's site is the Accelero XTREME 4870X2 at 680g, which is getting up there for weight on a graphics heatsink: http://www.arctic-cooling.com/catalog/product_info.php?cPath=2_0&mID=244&page=spec [arctic-cooling.com]

I'd say it's more of an issue that pure clock speed only addresses some GPU problems. Memory bandwidth/latency, number of GPU cores, design of the cores, and programming issues are all more difficult to balance than just ramping up the clock. They could cool these chips better, but would it really be worth the cost/effort if the rest of the design and supporting software can't take advantage of it?

Re:Why is it harder on GPUs than CPUs? (3, Interesting)

powerlord (28156) | more than 5 years ago | (#27958359)

Pity there isn't a GPU socket on the motherboard the same as the CPU socket. Then we COULD use those big honking CPU cooling solutions (or some derivative of them), provided the case were designed to accommodate the board. You could also get high speed runs between memory (perhaps it could have its own bank), and the CPU.

Pity some CPU maker couldn't come along, buy a GPU maker, and make something like this.

(Of course, existing GPU solutions in slots are MUCH easier to upgrade, which is a point against this sort of solution, unless they come out with a form factor that combines a chip + cooling solution, similar to the old Slot 1/Slot A.)

Re:Why is it harder on GPUs than CPUs? (0)

Anonymous Coward | more than 5 years ago | (#27959147)

would you have separate slots for the video memory as well? that actually sounds kind of cool.

Re:Why is it harder on GPUs than CPUs? (1)

caerwyn (38056) | more than 5 years ago | (#27955081)

GPUs are a little more CISCy. Since the cycle time is constrained to be as slow as the slowest operation that must complete in one cycle, it's a bit harder to cut down on cycle time.

Re:Why is it harder on GPUs than CPUs? (1)

djupedal (584558) | more than 5 years ago | (#27955119)

> "Why is it harder to raise the clock frequenceies on GPUs than CPUs?

Speed costs money...how fast 'ya want to go?

Re:Why is it harder on GPUs than CPUs? (2, Insightful)

mdm-adph (1030332) | more than 5 years ago | (#27955177)

GPUs have recently become massively parallel -- not as much need to go too fast in overall clock speed.

Re:Why is it harder on GPUs than CPUs? (2, Informative)

zolf13 (941799) | more than 5 years ago | (#27955291)

Wide vector processing with "800 stream processing units" (or "pipes" or "cores") - it is hard to put 800 cores in one chip and not to boil the silicon.

Re:Why is it harder on GPUs than CPUs? (1)

Firethorn (177587) | more than 5 years ago | (#27955383)

I think it has to do with the massively parallel operations. You can't pipeline stuff as far. Of course, I'm just guessing.

Basically, due to the parallelization it's more efficient to add more streams/'processors' than to ramp up the overall speed of the system - for example, the referenced 4890 has 800.

In order to have all the stream processors work, you might have to be a bit more conservative in your timing.

Re:Why is it harder on GPUs than CPUs? (1)

91degrees (207121) | more than 5 years ago | (#27955837)

I think it has to do with the massively parallel operations. You can't pipeline stuff as far. Of course, I'm just guessing.

That can't be it. Graphics cards can have vast pipelines. Pipelines' main problems are with branches, and graphics cards don't need to be able to branch.

Re:Why is it harder on GPUs than CPUs? (1)

Cornelius the Great (555189) | more than 5 years ago | (#27956743)

Newer (SM 3.0+) shaders allow flow control, so branching is supported in more recent architectures.

Re:Why is it harder on GPUs than CPUs? (4, Informative)

mikael (484) | more than 5 years ago | (#27955399)

You have so much data being churned around. The high-end GPUs have 240+ stream processors, compared to a handful for a mobile phone. Then there is the constant punting of video data from the VRAM chips to the LCD screen (width x height x RGB channels x bits per channel x refresh rate in Hertz). VRAM is like standard RAM except there is a special read channel that allows whole rows of memory to be read out for display simultaneously while the GPU is reading/writing it. It would be possible to raise the clock frequency, but they would need a larger heatsink. If you visit the overclocking websites, you will see some of the custom water-cooling systems that they have. Early supercomputers like the Cray used Fluorinert [wikipedia.org].
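
Plugging sample numbers into that width x height x channels x bits-per-channel x refresh formula shows how much bandwidth the display refresh alone eats, before the GPU does any rendering at all. The resolution and refresh rate below are illustrative choices, not tied to any particular card.

```python
# Scanout bandwidth estimate from the formula above.
width, height = 1920, 1200            # example desktop resolution
channels, bytes_per_channel = 3, 1    # 24-bit RGB (8 bits per channel)
refresh_hz = 60

bytes_per_second = width * height * channels * bytes_per_channel * refresh_hz
print(bytes_per_second / 1e9, "GB/s just to repaint the screen")   # ~0.41 GB/s
```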

apples to apples (1)

alta (1263) | more than 5 years ago | (#27954937)

I have an Intel quad-core Core 2, a Q6600 I think.

How many TeraFLOPS is that?

Re:apples to apples (2, Informative)

pshuke (845050) | more than 5 years ago | (#27955235)

According to intel [intel.com] it's about 0.04.
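
That ~0.04 TFLOPS figure is easy to reproduce with back-of-the-envelope arithmetic. The per-cycle throughput below is an assumption (one packed SSE add plus one packed SSE multiply per core per cycle, i.e. 4 double-precision FLOPs), but it lands in the same ballpark as Intel's number.

```python
# Rough peak-FLOPS estimate for a Q6600 (assumptions noted in the comments).
cores = 4
clock_hz = 2.4e9          # the Q6600 runs at 2.4 GHz
flops_per_cycle = 4       # assumed: 128-bit SSE, one packed add + one packed mul

peak_tflops = cores * clock_hz * flops_per_cycle / 1e12
print(round(peak_tflops, 3), "TFLOPS")   # ~0.038, i.e. roughly 0.04
```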

Re:apples to apples (0)

Anonymous Coward | more than 5 years ago | (#27955295)

I doubt it's even 20% of a teraFLOP. The Cell Broadband Engine is more comparable.

Re:apples to apples (4, Funny)

wjh31 (1372867) | more than 5 years ago | (#27955299)

Last time I checked, a graphics card will get about 100x more FLOPS than a similarly priced CPU, give or take an order of magnitude (hey, I'm an astrophysicist, order of magnitude is good enough).

Re:apples to apples (1)

Randle_Revar (229304) | more than 5 years ago | (#27955493)

Even quad-core x86 CPUs are in the 10s of GigaFLOPS.

CPUs have to do a lot of integer ops, and have to be good at everything. GPUs simply have to crunch a lot of floating-point numbers.

Re:apples to apples (2, Insightful)

Anonymous Coward | more than 5 years ago | (#27958577)

Modern GPUs, including every single Nvidia GPU since the G80 series, have had a full integer instruction set capable of doing integer arithmetic and bit operations.

CPUs aren't designed to be good at everything; they're designed to be exceedingly good at executing bad code, which is the vast majority of code written by poor programmers or in high-level languages.

You can write code for a CPU without worrying specifically about cache line size, cache coherency, register usage, memory access patterns and alignment, or the latency of branches and pipeline stalls, and the difference in performance compared to optimized code will be significant but not unbearable.

GPUs devote significantly less (or in some cases no) die space to things like branch prediction and automatically managed caches. Poorly written GPU code is sometimes almost two orders of magnitude slower than well-written GPU code, but well-written GPU code has much higher potential than what is achievable on modern CPUs. See: CUDA.
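
A small CPU-side illustration of the access-pattern point (plain Python/NumPy, not GPU code): summing the same matrix along contiguous memory versus across large strides already opens a gap even with the CPU's automatic caches; on a GPU, where coalescing loads is the programmer's job, the penalty is far steeper.

```python
# Same arithmetic, different memory access pattern.
import time
import numpy as np

a = np.random.rand(4096, 4096)   # C-order: rows are contiguous in memory

def timed_sum(by_rows):
    start = time.perf_counter()
    total = 0.0
    if by_rows:                          # walk memory contiguously
        for row in a:
            total += row.sum()
    else:                                # stride across rows: cache-hostile
        for j in range(a.shape[1]):
            total += a[:, j].sum()
    return time.perf_counter() - start

print("contiguous:", round(timed_sum(True), 3), "s")
print("strided:   ", round(timed_sum(False), 3), "s")
```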

Ummmm..... (0)

Anonymous Coward | more than 5 years ago | (#27954967)

Didn't we learn long ago that clock speed by itself means nothing? Even those stupid enough to buy the hype learned this around the time the Core came out at significantly slower clock speeds than the P4.

If this is the best thing AMD has to hype, the card is likely a piece of junk.

Re:Ummmm..... (1)

conteXXt (249905) | more than 5 years ago | (#27955309)

I may not completely understand graphic cards but,

I think in this case clock cycles actually *DO* mean something.

Re:Ummmm..... (0)

Anonymous Coward | more than 5 years ago | (#27955551)

Except 90% of people still think a 3.2 GHz dual core P4 will outperform a 2.8 GHz Core 2 Duo. Hate to tell you, but people are still buying that crap.

But you have to realize that clock speed does mean SOMETHING. This card is already in production, but at a lower clock speed. So, all else being equal, increasing the clock speed really DOES increase performance. It's when you're talking about different architectures that clock speed loses its value.

Re:Ummmm..... (1)

Randle_Revar (229304) | more than 5 years ago | (#27955773)

A 3GHz P4 is faster than a 2.6GHz P4.
A 3GHz Core 2 is faster than a 2.6GHz Core 2.
A 1GHz R700 is faster than an 800MHz R700.

Anyway, the R700 (Radeon 4xxx) series has been very good, mostly equaling or beating Nvidia's current lineup at similar prices.

Re:Ummmm..... (2, Informative)

mr_mischief (456295) | more than 5 years ago | (#27955799)

How about the fact that it runs each instruction on 800 pieces of data at once? This isn't a 1 GHz one, two, four, or even 16-way chip. It's processing up to 800 pieces of data at once, and its clock for doing that ticks every billionth of a second. You're absolutely right, the clock speed by itself means nothing. The clock speed times the amount of work done per clock does mean something. If you raise either without lowering the other, you raise the overall amount of work the chip can do.

Re:Ummmm..... (0)

Anonymous Coward | more than 5 years ago | (#27958647)

How about the fact that it runs each instruction on 800 pieces of data at once?

Small correction: The R770 chip used in the Radeon 4870 and 4890 is capable of executing up to 10 simultaneous instructions, each of which is sent to sixteen shader units, each of which consists of 4 FP ALUs and one transcendental ALU.

http://en.wikipedia.org/wiki/Radeon_R700#Execution_units
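
Multiplying out the figures in that correction reproduces AMD's headline number; the only extra assumption here is the usual convention of counting a fused multiply-add as two FLOPs.

```python
# 10 SIMD engines x 16 shader units x 5 ALUs = 800 ALUs;
# 800 ALUs x 2 FLOPs (multiply-add) x 1 GHz = 1.6 TFLOPS.
simd_engines = 10
units_per_engine = 16
alus_per_unit = 5              # 4 FP ALUs + 1 transcendental ALU
clock_hz = 1.0e9               # the 1 GHz in the headline
flops_per_alu_per_clock = 2    # one multiply-add counted as two FLOPs

alus = simd_engines * units_per_engine * alus_per_unit
peak_tflops = alus * flops_per_alu_per_clock * clock_hz / 1e12
print(alus, "ALUs ->", peak_tflops, "TFLOPS")   # 800 ALUs -> 1.6 TFLOPS
```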

1 TF/s is so 1996 (0)

Anonymous Coward | more than 5 years ago | (#27955005)

The original ASCI Red is notable for being the first computer on Earth to bench above 1 TeraFLOPS on the Linpack benchmark (1996), as noted in Top500 Supercomputer sites. After being upgraded with Pentium II Overdrive processors, the computer has demonstrated Linpack performance above 2 TeraFLOPS.

from top500.org

Re:1 TF/s is so 1996 (0)

Penguinoflight (517245) | more than 5 years ago | (#27955327)

There's nothing stopping you from putting 6 of these in a single system. Even better, go with 4 4870x2s for around 12TFlops

Re:1 TF/s is so 1996 (2, Informative)

wjh31 (1372867) | more than 5 years ago | (#27955587)

If you're imagining insane numbers of cards for silly TFlops, the 4TFlop Nvidia Tesla has a 1U rack form, so you can shove as many of them as you like in a rack.

Re:1 TF/s is so 1996 (1)

mr_mischief (456295) | more than 5 years ago | (#27955829)

Well, up to 42 anyway...

Re:1 TF/s is so 1996 (1)

Randle_Revar (229304) | more than 5 years ago | (#27955589)

Great for ASCI Red. Now, in 1996, can I buy ASCI Red in a size that fits on a single PCI-e card and costs less than $300?

Re:1 TF/s is so 1996 (1)

Ilgaz (86384) | more than 5 years ago | (#27958837)

Well, if you've got money, you can have 180 GigaFLOPS (32-bit) or 90 GigaFLOPS (64-bit) right now, on a PCI-e card.

http://us.fixstars.com/products/gigaaccel/ [fixstars.com]

It is Cell-powered, as you may guess. There is also mention of "720 GF computing power", which I don't even dare to think about. I guess that is when you combine 4 of them. Oh, just $6100 apiece :)

Dear AMD, intel, nVidia, etc (3, Insightful)

Yvan256 (722131) | more than 5 years ago | (#27955141)

As you may have seen from the sales of netbooks and low-power computers, the future is... wait for it... low-power devices!

Where are the 5W GPUs? Does the nVidia 9400M require more than 5W?

Re:Dear AMD, intel, nVidia, etc (1)

Lonewolf666 (259450) | more than 5 years ago | (#27955255)

Even for desktops, I'd like to see more of those. Let's say below 20W, so a not-too-massive passive heatsink will do.
I'm quite happy with the performance of my NVidia 6800 GT, and it needs about 50W at full usage. With the latest chip technology (40nm, anyone?), the same performance should be possible with much less power consumption.

Re:Dear AMD, intel, nVidia, etc (1)

Randle_Revar (229304) | more than 5 years ago | (#27956203)

Radeon HD 4350 (55nm) is ~20 W and I think should be somewhat better than a 6800.

Re:Dear AMD, intel, nVidia, etc (1, Flamebait)

drinkypoo (153816) | more than 5 years ago | (#27955503)

Does the nVidia 9400M require more than 5W?

Google is your friend [justfuckinggoogleit.com]

The GeForce 9400M claims a TDP of only 12 W. [tomshardware.com]

Re:Dear AMD, intel, nVidia, etc (1)

StikyPad (445176) | more than 5 years ago | (#27958581)

So "only" about as much power as a hard drive [ixbtlabs.com] .

Re:Dear AMD, intel, nVidia, etc (1)

drinkypoo (153816) | more than 5 years ago | (#27958813)

It's a big step in the right direction. I had been hoping to answer the other question but it looked like it was going to be too hard to find information on an embedded GPU core (like for cellphones and stuff.) I wonder what's in the GP2x Wiz [dcemu.co.uk]

Re:Dear AMD, intel, nVidia, etc (1)

Randle_Revar (229304) | more than 5 years ago | (#27955609)

>Where are the 5W GPUs?

Intel integrated graphics

Re:Dear AMD, intel, nVidia, etc (1)

Ilgaz (86384) | more than 5 years ago | (#27958613)

Yes, when you offload the entire thing to the CPU and even ignore the hardware T&L feature from the GeForce 2 era, it goes down to 5 watts.

Even Apple couldn't stand their junk and switched back to real GPUs, right down to the "non pro" laptops.

Re:Dear AMD, intel, nVidia, etc (1, Insightful)

Anonymous Coward | more than 5 years ago | (#27956199)

Speak for yourself, some of us enjoy being able to play crysis/project origin at high res with detail quality maxed out. If the only thing you use a computer for is email and beating off to pictures of drunken sluts on facebook, go get yourself a mac. I hear you can get SHINY ones now.

Re:Dear AMD, intel, nVidia, etc (0)

Anonymous Coward | more than 5 years ago | (#27956427)

Have a look at the ATI Radeon HD 4350 and Radeon HD 4550. Both are said to use a maximum of 25W under full load, and tests show an average of about 9W in 2D.

Re:Dear AMD, intel, nVidia, etc (1)

Cornelius the Great (555189) | more than 5 years ago | (#27956869)

Nvidia has the mobile graphics line, which is designed for cellphones. AMD used to have a mobile graphics division, but I believe that they sold it to Qualcomm.

So in the next few months, we'll be seeing mobile chipsets from both companies (Nvidia's Tegra and Qualcomm's Snapdragon) that will have scaled-down tech capable of handling HD video and impressive 3D graphics on embedded devices.

Re:Dear AMD, intel, nVidia, etc (1)

Totenglocke (1291680) | more than 5 years ago | (#27957045)

No one said that this video card was going to be shoved into every computer. This video card is for people who use a computer for more than reading slashdot and checking email.

Re:Dear AMD, intel, nVidia, etc (1)

hguorbray (967940) | more than 5 years ago | (#27957839)

As you can see from the pictures of the massive heatsink (it covers the entire board), this is NOT a low-power device.

And until there is a market for laptop gamers wanting 60fps and millions of polygons, specialized cards/chips like this will be found only on render farms, gamer desktop rigs and graphics workstations, which is their intended market anyway.

You generally do not get high performance with an economical product, so for my car analogy I will say that a Pontiac Vibe that gets 35 miles to the gallon is not going to beat a Mustang V8 off the line....

-I'm just sayin'

Re:Dear AMD, intel, nVidia, etc (1)

Ilgaz (86384) | more than 5 years ago | (#27958921)

Soon, not just gamers but ordinary users may need way higher "FPS" than today. 3D stuff (200Hz), artificial 3D, massive amounts of transcoding, 12-bit-per-channel video, and 2K (or even 4K) are all making their way to the average home user. Slowly but surely. These things were all pro high-end studio stuff just a few years ago.

For example, Apple is still testing a technology which scales the desktop to arbitrary DPI. It is there, embedded in the core of the OS, but not stable or complete yet. To display such a desktop in a hardware-accelerated manner, you really need some GPU power. You probably know that what we see as "2D" on modern, accelerated desktops is partly OpenGL/Direct3D.

Re:Dear AMD, intel, nVidia, etc (0)

Anonymous Coward | more than 5 years ago | (#27957885)

Cheaper / lower power cards are derived directly from their flagship brethren: cut out half the pipelines, underclock the core, use slower memory et voila. (In fact sometimes they are literally the same card just speedbinned with a different BIOS.)

Admittedly however, to some extent a really low power GPU would have to be designed that way from the ground up.

Re:Dear AMD, intel, nVidia, etc (1)

averner (1341263) | more than 5 years ago | (#27959157)

They've been around for a long time - they're called integrated graphics.

Power consumption? (2, Interesting)

LoRdTAW (99712) | more than 5 years ago | (#27955191)

No mention of power consumption or heat dissipation. My PC is already a radiator, and in the summer it fights with my AC.

I am interested in the computing power; 1.6 teraflops is no small number, even if it is single precision.

Re:Power consumption? (2, Insightful)

wjh31 (1372867) | more than 5 years ago | (#27955421)

Wikipedia (http://en.wikipedia.org/wiki/Comparison_of_ATI_graphics_processing_units#Radeon_R700_.28HD_4xxx.29_series) suggests the 4890 comes in at 190W; go to a little under double that if they make an X2 version. The entry-level 4000 series comes in at 25W.

If you want TFlops, try the 4870X2 at 2.4TFlops, or Nvidia's Tesla (http://en.wikipedia.org/wiki/NVIDIA_Tesla) series, made just for GPGPU, which reaches over 4TFlops.

Re:Power consumption? (1)

F34nor (321515) | more than 5 years ago | (#27955761)

This is why I am going to literally make my next PC a hot water heater.

Re:Power consumption? (3, Funny)

Pope (17780) | more than 5 years ago | (#27957931)

Personally, I'd recommend you make it a cold water heater, and get more bang for your buck!

Re:Power consumption? (0, Redundant)

HTH NE1 (675604) | more than 5 years ago | (#27959271)

That's a waste of power. Do a luke-warm water heater and half the work is done for you.

how's ATI driver quality and performance on linux? (1)

yanyan (302849) | more than 5 years ago | (#27955241)

I'm a long-time Nvidia user because of good driver support on Windoze and Linux. I would love to give ATI a try but i've read a lot of negative things about driver quality in Linux. Granted, that was some time ago and things may have changed today. I'd be interested to hear about other slashdotters' experiences using today's ATI hardware + drivers under Linux/X.

Re:how's ATI driver quality and performance on lin (1)

Jeek Elemental (976426) | more than 5 years ago | (#27956021)

Driver is fine (finally).

Re:how's ATI driver quality and performance on lin (1)

Randle_Revar (229304) | more than 5 years ago | (#27956323)

ATI drivers are great in Linux

Re:how's ATI driver quality and performance on lin (0)

Anonymous Coward | more than 5 years ago | (#27959469)

I find my Nvidia card actually has better OpenGL performance under Linux than Windows. I think that's because the Windows drivers for the GeForce restrict GL performance so you'll buy their exotically priced Quadro cards, which by the way have identical hardware to the gaming cards. I might buy a Radeon next time, although it's probably the same deal with the FireGL cards, right?

Not first ?? (0)

Anonymous Coward | more than 5 years ago | (#27955397)

The GTX 260 I have reports 1242 MHz using CUDA. The clock rates are different for graphics and computation (not really sure why) but this article does seem to be about computation.

So perhaps claiming to be first to 1 GHz is a bit spurious?

Re:Not first ?? (2, Informative)

Warlord88 (1065794) | more than 5 years ago | (#27955655)

The 1242 MHz figure is the shader clock, not the core speed. Also, 1 GHz is the core speed without overclocking.

Re:Not first ?? (2, Insightful)

JackARot (1554025) | more than 5 years ago | (#27957279)

Also, 1 GHz is the core speed without overclocking.

False. It's overclocked, all right; it just doesn't have to be overclocked by users or third-party manufacturers to run at 1 GHz. From their press release:

Nine years after launching the world's first 1 GHz CPU, AMD is again first to break the gigahertz barrier with the factory overclocked, air-cooled ATI Radeon(TM) HD 4890 GPU -

*Punches fist in air* (2)

martin_henry (1032656) | more than 5 years ago | (#27955483)

I can finally get a 5.0 on the Vista Experience Index!

Re:*Punches fist in air* (1)

Warlord88 (1065794) | more than 5 years ago | (#27955685)

The highest score you can get on Vista is 5.9.

Re:*Punches fist in air* (1)

socrplayr813 (1372733) | more than 5 years ago | (#27956325)

I think that was the part that was meant to be funny, but my 8800 has gotten a 5.9 on that test for over a year now. Isn't it time we moved past the 'Vista is slow' thing?

Re:*Punches fist in air* (1)

dogmatixpsych (786818) | more than 5 years ago | (#27956935)

Just because your graphics card is fast does not mean Vista isn't slow (yes, the double negative was on purpose).

FLOPs/Hz (1)

dolphinling (720774) | more than 5 years ago | (#27955537)

1600 FLOPs per Hz? That's actually rather impressive.

And.... (2, Informative)

MasseKid (1294554) | more than 5 years ago | (#27955811)

And it's still slower than a GTX 285 OC edition. GHz != performance. And Nvidia, stop renaming your cards, damn it!

Re:And.... (2, Interesting)

PitaBred (632671) | more than 5 years ago | (#27956573)

The 4890 actually supports DX 10.1, and probably has support for almost all the features in DX 11. Does the Nvidia card? Didn't think so.

I'm also interested in your "slower than a GTX 285" assertion. I just looked at some benchmarks, and Xbit labs has an overclocked 4890@1GHz [xbitlabs.com] beating the tar out of the 285.

uhhh (4, Funny)

nomadic (141991) | more than 5 years ago | (#27955901)

AMD does note that Asus, Club 3D, Diamond Multimedia, Force3D, GECUBE, Gigabyte, HIS, MSI, Palit Multimedia, PowerColor, SAPPHIRE, XFX and others are all aligning to release higher performance cards."

Wait, let me get this straight. Graphics card manufacturers are actually attempting to make their graphics cards perform better? Why was I not informed of this before???

"Barrier" (2, Insightful)

Burning1 (204959) | more than 5 years ago | (#27955989)

AMD Breaks 1GHz GPU Barrier [reference.com]

You keep using that word. I do not think it means what you think it means.

"factory" "overclocked"? (1)

chrispitude (535888) | more than 5 years ago | (#27956451)

*slaps forehead*

Only marketing weenies would play up such an oxymoron.

Re:"factory" "overclocked"? (2, Interesting)

chrysrobyn (106763) | more than 5 years ago | (#27957347)

"Factory" "overclocked"? *slaps forehead* Only marketing weenies would play up such an oxymoron.

I'm pretty sure that word doesn't mean what you think it means. "Overclocked" means "our reliability people don't think this is smart, but it might work for you." In this case, you get a part that may or may not die before you expect it to, it might not last much beyond the warranty, it might have non-standard cooling to enable an operating window that Reliability can't assume (say they model frequency shifting at 85C and they have a heat sink that puts it at 55C; feel free to substitute any other numbers).

Any conditions where the company's Reliability department didn't endorse frequency over the lifetime of the product for 3 sigma worth of sellable parts would be "overclocked".

What does this mean for us Linux users? (0)

Anonymous Coward | more than 5 years ago | (#27956553)

Can I finally play Doom 3 on ultra with the slow ATI Linux drivers? How long until they pull support for this new GPU out of the proprietary drivers?

taking bets (1)

stenchcow (1554779) | more than 5 years ago | (#27956723)

I'm taking bets on how many days it will take Nvidia to one up them with a faster card. I'm guessing 3 days. Any other guesses?

Disclaimer .... (1)

karvind (833059) | more than 5 years ago | (#27957081)

Maybe you want to check the disclaimer too ...

Note: Damage caused by overclocking AMD's GPUs above factory-set overclocking is not covered by AMD's product warranty, even when such overclocking is enabled via AMD software.

Still an ATI (0, Troll)

wiedzmin (1269816) | more than 5 years ago | (#27957751)

Still, it's an ATI... I'll just wait a couple days for nVidia to come out with a better card.

What about a real revolution? (1)

Ilgaz (86384) | more than 5 years ago | (#27958683)

Offer the card at the same price, down to the cent, along with a well-written driver for Mac Pros and, even more miraculously, for last-generation G5s (Quad/Dual Core).

Open Firmware, endianness, AltiVec, non-standard interface (???): all excuses gone. If anyone wonders what I'm talking about, just watch this card's price when (if!) it ships for Macs. You will understand the comedy going on. In PowerPC times, we had some sort of excuse, like "firmware is hard to code" or "drivers, man, they can't code for PowerPC". Now all the excuses are gone and we sometimes pay up to 3x the price to this duopoly named NVidia and ATI.

Asymptotic, my ass (3, Funny)

Cajun Hell (725246) | more than 5 years ago | (#27959367)

Everyone thought it would be 999MHz this year, 999.9 MHz the next year, 999.99999 MHz a few years later. It looked uncrossable! Well done, AMD!