
AMD Delivers DX11 Graphics Solution For Under $100

Vigile points out yesterday's launch of "the new AMD Radeon HD 5670, the first graphics card to bring DirectX 11 support to the sub-$100 market and offer next-generation features to almost any budget. The Redwood part (as it was codenamed) is nearly 3.5x smaller in die size than the first DX11 GPUs from AMD while still offering support for DirectCompute 5.0, Eyefinity multi-monitor gaming and of course DX11 features (like tessellation) in upcoming Windows gaming titles. Unfortunately, performance on the card is not revolutionary even for the $99 graphics market, though power consumption has been noticeably lowered while keeping the card well cooled in a single-slot design."

  • Why? (Score:3, Insightful)

    by Nemyst ( 1383049 ) on Saturday January 16, 2010 @06:34PM (#30793830) Homepage
    I'm sorry, I've seen this news go all around tech sites and... I don't get it. Yay, DX11. The biggest new features I could see about it were hardware tessellation and compute shaders. What, this requires a powerful GPU in the first place to be of any use? Something much, much better than this card? Oh...

    Seriously, good for AMD, but I just don't see the point. Say it's a good card, say it has very low power consumption, but hyping DX11 when it has no particular benefit - especially at this price point - is absolutely useless.

    And before anyone says I'm just bashing AMD, my computer has a 5850.
    • Re:Why? (Score:5, Informative)

      by Lord Crc ( 151920 ) on Saturday January 16, 2010 @06:50PM (#30793992)

      I don't get it. Yay, DX11. The biggest new features I could see about it were hardware tessellation and compute shaders.

      Compute shaders, or more generally GPGPU (via OpenCL as well as DX11) will open up a huge new market for GPUs. One midrange GPU can replace a small cluster of computers at a fraction of the cost. For example, using 2-3 GPUs in one box, people doing architectural visualization can get their results in minutes instead of days.
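
      To make the GPGPU point above a bit more concrete, here is a minimal OpenCL host-side sketch (OpenCL being one of the APIs mentioned, alongside DirectCompute). The "scale" kernel, the buffer size, and the scale factor are invented purely for illustration, and error handling is stripped out; the point is only that a few lines of kernel code get executed by thousands of GPU work-items at once, which is where the cluster-replacing speed-ups come from.

          // Minimal OpenCL sketch: run a tiny data-parallel kernel on the first GPU found.
          // Hypothetical example; assumes an OpenCL 1.x runtime and omits error checks.
          #include <CL/cl.h>
          #include <cstdio>
          #include <vector>

          static const char* kSrc =
              "__kernel void scale(__global float* data, float k) {\n"
              "    int i = get_global_id(0);\n"
              "    data[i] *= k;\n"
              "}\n";

          int main() {
              cl_platform_id platform;
              cl_device_id device;
              clGetPlatformIDs(1, &platform, NULL);
              clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);

              cl_int err;
              cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, &err);
              cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, &err);

              // OpenCL programs are built at run time from source.
              cl_program prog = clCreateProgramWithSource(ctx, 1, &kSrc, NULL, &err);
              clBuildProgram(prog, 1, &device, NULL, NULL, NULL);
              cl_kernel kernel = clCreateKernel(prog, "scale", &err);

              // Upload a buffer, launch one work-item per element, read the result back.
              std::vector<float> host(1 << 20, 1.0f);
              size_t bytes = host.size() * sizeof(float);
              cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_COPY_HOST_PTR,
                                          bytes, &host[0], &err);
              float k = 2.0f;
              clSetKernelArg(kernel, 0, sizeof(cl_mem), &buf);
              clSetKernelArg(kernel, 1, sizeof(float), &k);

              size_t global = host.size();
              clEnqueueNDRangeKernel(queue, kernel, 1, NULL, &global, NULL, 0, NULL, NULL);
              clEnqueueReadBuffer(queue, buf, CL_TRUE, 0, bytes, &host[0], 0, NULL, NULL);
              printf("first element: %f\n", host[0]);

              clReleaseMemObject(buf);
              clReleaseKernel(kernel);
              clReleaseProgram(prog);
              clReleaseCommandQueue(queue);
              clReleaseContext(ctx);
              return 0;
          }

      The same host code runs whether the device is a $99 Redwood part or a high-end one; only the number of parallel units chewing through the buffer changes.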

      • I should add that the arch. viz. example was for compute shading in general. But even "puny" cards such as this should give a nice boost to many GPGPU applications.

      • Re:Why? (Score:4, Insightful)

        by Kjella ( 173770 ) on Saturday January 16, 2010 @07:38PM (#30794408) Homepage

        For example, using 2-3 GPUs in one box, people doing architectural visualization can get their results in minutes instead of days.

        Yeah, and the point was that those people wouldn't be buying this card. Face it, GPGPU isn't a general purpose CPU, we have some companies that are already damn good at making those. This means you either need it or you don't, and if you do, you'll probably want a lot of it. Companies and research institutions certainly will have the money, and even if you are a poor hungry student you can probably afford to invest $200-300 in your education for an HD5850, which has a ton of shaders compared to this. The only real purpose of this card is to phase in a chip built on a smaller process, that'll be cheaper to produce. All they could have gained in performance they've instead cut in size.

        • Re:Why? (Score:5, Insightful)

          by MrNaz ( 730548 ) * on Saturday January 16, 2010 @08:03PM (#30794618) Homepage

          Face it, GPGPU isn't a general purpose CPU, we have some companies that are already damn good at making those.

          Not quite accurate. While GPGPU != CPU, there are things that GPGPUs can do far better than CPUs, and those things are more common than you'd think.

          The only real purpose of this card is to phase in a chip built on a smaller process, that'll be cheaper to produce.

          Even though I don't agree with you that that is the only reason, isn't making the same product, but cheaper, a worthy cause in and of itself?

          I feel that you are being unduly dismissive.

          • Not quite accurate. While GPGPU != CPU, there are things that GPGPUs can do far better than CPUs, and those things are more common than you'd think.

            I agree completely, for example, video encoding is pretty common these days and can be GPU accelerated for massive gains in speed.

        • Yeah, and the point was that those people wouldn't be buying this card.

          True, I was thinking more generally about the GPGPU market. However, consider that if an HD5870 speeds up a given task by 10-15x compared to a regular CPU, then this card could potentially give a 2-3x speed-up. For many, it'll be easier and cheaper to get this card than a CPU that is 2-3x faster.

        • All they could have gained in performance they've instead cut in size.

          Power consumption is down as well. Looking at some random HD 5670 cards in my preferred online shops, they are typically listed with a 61W maximum power consumption. That is about 10W less than the 4670. For those of us who want a card with passable performance that is still easy on the power supply, the 5670 looks like a good compromise.

          Eventually we may even see a 5650 that is good for passive cooling (the limit for that seems to be around 50W, if you don't want ridiculously large coolers).

      • A huge new market? More like a small but significant niche.

        • by LBt1st ( 709520 )

           Well, nVidia released the GT 240 not long ago for $100. My guess is this is AMD's answer to that.
           nVidia's card supports DirectX 10.1. If AMD can't make a card that outperforms it, they can at least have a bigger number on the box.

          For developers, both of these cards are good news. It means anyone can afford a video card that can handle the latest features (even if it does them slowly). Devs can focus on making games instead of supporting legacy hardware or creating workarounds for people without feature x.

            nVidia's card supports DirectX 10.1. If AMD can't make a card that outperforms it, they can at least have a bigger number on the box.

            I won't buy an nVidia card because of those longer and longer introductory videos at the beginning of what seems like every video game now.

            You know the ones: the big green logo and the breathily whispered "...ennvideeyah". On Borderlands, for chrissake, it seems to go on forever, with the little robot kicking the logo. So now, I either have to plan to go have lunch while I'm

      • Well, yes ... but the people who are most interested in GPGPU aren't generally all that interested in saving $50 to get fewer processors.

    • Re: (Score:1, Insightful)

      by Anonymous Coward

      I don't get it.

      Of course you don't. This card is for people who have a lower-resolution monitor (under 1440x900), since at lower resolutions it can run all modern games comfortably. About 50% of people still run at 1280x1024 or below, and for them this is a great graphics card. It gives good performance at a reasonable price, and has the latest features.

    • Re:Why? (Score:5, Insightful)

      by Sycraft-fu ( 314770 ) on Saturday January 16, 2010 @07:13PM (#30794188)

      Well the things that may make DX11 interesting in general, not just to high end graphics:

      1) Compute shaders. Those actually work on any card DX10 or higher using DX11 APIs (just lower versions of the shaders). The reason these are useful, even on lower end cards, is that some things run drastically faster on a GPU so even a low end one is better than the CPU. I don't have any good examples specific to compute shaders but an older non-compute-shader example would be HD video. You can do HD H.264 on a lower end CPU so long as you have a GPU that can handle acceleration. Doesn't have to be a high end one either.

      2) 64-bit precision. Earlier versions of DX only required 32-bit FP, since that is the most you generally need for graphics (32 bits per channel, that is). However, there is other math that needs higher precision, and DX11 adds 64-bit FP support. In the case of the 5000 series it works too: 64-bit FP runs slower than 32-bit FP, but it's still quick enough to be useful (see the sketch after this comment for how a program checks for and opts into 64-bit floats).

      3) Multithreaded rendering/GPU multitasking. DX11 offers much, much better support for having multiple programs talk to the GPU at the same time. The idea is to have it fully preemptively multi-task, just like the CPU. Have the thing be a general purpose resource that can be addressed by multiple programs with no impact.

      It's a worthwhile new API. Now I'm not saying "Oh everyone needs a DX11 card!" If you have an older card and it works fine for you, great stick with it. However there is a point to wanting to have DX11 in all the segments of the market. Hopefully we can start having GPUs be used for more than just games on the average system.

      Also, it makes sense from ATi's point of view. Rather than maintaining separate designs for separate lines, unify everything. Their low end DX11 parts are the same thing as their high end DX11 parts, just less of it. Fewer shaders, fewer ROPs, smaller memory controllers, etc. Makes sense to do that for a low end part, rather than a totally new design. Keeps your costs down, since most of the development cost was paid for by the high end parts.

      In terms of hyping it? Well that's called marketing.
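
      On point 2 above, double precision is something an application has to check for and opt into explicitly; not every DX11-era card exposes it. Below is a short OpenCL sketch of that idea (the concept carries over to DirectCompute). The device handle is assumed to come from the usual clGetDeviceIDs setup shown earlier, and the axpy kernel is a made-up illustration, not anything AMD ships.

          #include <CL/cl.h>
          #include <cstring>

          // Check whether the device advertises 64-bit floats at all.
          // (Higher-end DX11 parts generally do; many budget and older chips don't.)
          bool supports_fp64(cl_device_id device) {
              char ext[4096] = {0};
              clGetDeviceInfo(device, CL_DEVICE_EXTENSIONS, sizeof(ext) - 1, ext, NULL);
              return strstr(ext, "cl_khr_fp64") != NULL;
          }

          // A double-precision kernel: the pragma is required before 'double' is legal.
          // It runs slower than a float version, but it runs, which is the point.
          static const char* kFp64Src =
              "#pragma OPENCL EXTENSION cl_khr_fp64 : enable\n"
              "__kernel void axpy(__global double* y, __global const double* x, double a) {\n"
              "    int i = get_global_id(0);\n"
              "    y[i] += a * x[i];\n"
              "}\n";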

      • by Nemyst ( 1383049 )
        I do see the points of the new features, I just don't feel like they're best at home in a cheaper card. Props to ATi for keeping their cards unified (unlike the huge mess of i7s using P55 and X58 or mobile GPUs lagging two generations behind but sharing the same name as their newest desktop counterpart), but I just think the angle at which they're marketing this is not the best for the market they're looking at... Unless they really think their biggest buyers will be people who only care about GPGPU and oth
      • by Kjella ( 173770 )

        The reason these are useful, even on lower end cards, is that some things run drastically faster on a GPU so even a low end one is better than the CPU. I don't have any good examples specific to compute shaders but an older non-compute-shader example would be HD video.

        Except that for all intents and purposes, it has nothing to do with the GPU. It could just as well have been on a separate chip, like the Broadcom chip for the new Intel Atoms. It could have been on the CPU too for that matter. Right now there's an awful lot of hype; the question is how much of it is practical reality. Some things are better solved, in fact generally best solved, by dedicated hardware like an HD decoder. How much falls between general purpose and dedicated hardware? Very good question.

        • Except that for all intents and purposes, it has nothing to do with the GPU. It could just as well have been on a separate chip, like the Broadcom chip for the new Intel Atoms.

          Any function a programmable chip can do can also be done by a custom chip. However, a programmable chip can do them all without needing to include multiple chips or change manufacturing processes as new uses are invented. Sure, you could make a custom chip decoding HD video, but what happens when someone comes up with a new and super

      • Re: (Score:1, Informative)

        by Anonymous Coward


        1) Compute shaders. Those actually work on any card DX10 or higher using DX11 APIs (just lower versions of the shaders). The reason these are useful, even on lower end cards, is that some things run drastically faster on a GPU so even a low end one is better than the CPU. I don't have any good examples specific to compute shaders but an older non-compute-shader example would be HD video. You can do HD H.264 on a lower end CPU so long as you have a GPU that can handle acceleration. Doesn't have to be a hig

      • by Mal-2 ( 675116 )

        Rather than maintaining separate designs for separate lines, unify everything. Their low end DX11 parts are the same thing as their high end DX11 parts, just less of it. Fewer shaders, fewer ROPs, smaller memory controllers, etc.

        This also allows them to pop a couple fuses and re-purpose marginal would-have-been high end parts by blocking out the broken parts. They did this back in the 9500/9700 days, I don't see why they wouldn't want to do it now.

        Mal-2

      • I don't have any good examples specific to compute shaders but...

        That's OK, nobody does for home computing yet. This article was just a marketing press release to move some video cards that will be obsolete by Valentine's Day.

    • Same thing was said about DX10. And about HD4670.

      • Re:Why? (Score:5, Insightful)

        by Anonymous Coward on Saturday January 16, 2010 @07:17PM (#30794224)
        Same thing was said about DX10. And about HD4670.

        And about DX9 before that. And DX8 before that. And on and on. I'm amazed by how many people here don't seem to "get" that advances in technology are precisely how technology moves forward. I mean, it's really a pretty simple concept.
        • Re: (Score:3, Informative)

          by Cid Highwind ( 9258 )
          Having a DX9 GPU got you the Windows aero effects, so there was at least a visible benefit to using the lowest end DX9 GPU over a (probably faster) DX8 part at the same price.
          • Except that DX10 hardware was around before Aero was released, so what you needed was last-generation hardware, not next-generation, to get the feature. It also relied on newer drivers; the Intel 915 supported DX9 but didn't support Aero (the whole Vista Capable lawsuit). We're talking about new hardware supplying reasons to buy the hardware, not new software taking advantage of long-existing hardware.
        • Re: (Score:3, Insightful)

          by Antiocheian ( 859870 )

          A new DirectX version is not technology moving forward. CUDA and PhysX are.

    • Re:Why? (Score:5, Insightful)

      by BikeHelmet ( 1437881 ) on Saturday January 16, 2010 @07:52PM (#30794510) Journal

      Google Earth across 6 monitors from a single $100 card? Seems like technology is heading in the right direction!

    • Re: (Score:2, Insightful)

      by Anonymous Coward

      I'm sorry, I've seen this news go all around tech sites and... I don't get it. Yay, DX11. The biggest new features I could see about it were hardware tessellation and compute shaders. What, this requires a powerful GPU in the first place to be of any use? Something much, much better than this card? Oh....

      Sounds like AMD wants to pull an "NVidia GeForce FX 5200" on the market and see what happens. The FX 5200 was a huge fail: it was hyped for its DX9 Pixel Shader 2.0 features, which it runs at a grand 1-3 fps. Don't get me started on its unbelievably poor GLSL support either... But hey, it IS "The way it's meant to be played", so can YOU even complain!?

      • by anss123 ( 985305 )

        Sounds like AMD wants to pull an "NVidia GeForce FX 5200" on the market and see what happens. The FX 5200 was a huge fail: it was hyped for its DX9 Pixel Shader 2.0 features, which it runs at a grand 1-3 fps. Don't get me started on its unbelievably poor GLSL support either...

        The 5200 wasn't a bad card as long as you kept away from those features. It was faster than the MX cards it replaced, and cheaper too, only with the drawback that some games ran like crap in the default configs. If you want true turds you should look at low end laptop chipsets; there we're talking sub-Voodoo 2 performance with a DX10 feature set and an inability to run DX7 games.

    • by mdwh2 ( 535323 )

      Helps with standardisation? I might be writing a game/application that doesn't need tonnes of graphics processing power to run, but it's still easier if I can simply write one DirectX 11 renderer, instead of having to write multiple renderers for people with low end cards that only support older APIs.

      • You'd be better off writing for what everybody has on their machines currently. That's DirectX 10.

        Don't be a victim of Microsoft's need for revenue from planned obsolescence. Code to DirectX 11 in a few years, if ever.

        • by mdwh2 ( 535323 )

          Indeed yes, for now (actually DirectX 9, judging by the number still on XP).

          But the point is that releasing low end cards now that run the latest DirectX means things will be easier in the future, and means developers can start focusing on DirectX 11 alone that much sooner.
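
          This is essentially what Direct3D 11 feature levels are for: you write one D3D11 code path, ask the runtime for the best level the installed card supports, and scale back optional features on older hardware. A minimal C++ sketch, with error handling omitted and the helper name invented for illustration:

              #include <d3d11.h>

              // Create a D3D11 device at the highest feature level available, falling
              // back through DX10.1/10.0/9.x-class levels. One renderer, many card generations.
              HRESULT CreateDeviceAnyLevel(ID3D11Device** device,
                                           ID3D11DeviceContext** context,
                                           D3D_FEATURE_LEVEL* obtained) {
                  const D3D_FEATURE_LEVEL wanted[] = {
                      D3D_FEATURE_LEVEL_11_0,
                      D3D_FEATURE_LEVEL_10_1,
                      D3D_FEATURE_LEVEL_10_0,
                      D3D_FEATURE_LEVEL_9_3,
                  };
                  return D3D11CreateDevice(
                      NULL, D3D_DRIVER_TYPE_HARDWARE, NULL, 0,
                      wanted, (UINT)(sizeof(wanted) / sizeof(wanted[0])),
                      D3D11_SDK_VERSION, device, obtained, context);
              }
              // *obtained then tells the renderer which optional features
              // (tessellation, compute shaders, ...) it may use on this particular card.

          A game that only needs DX9-class features can still ship a single D3D11 code path this way, and light up tessellation or compute only when the obtained level allows it.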

    • Part of the reason DX10 never really took off was that only the highest end graphics cards supported it for years, and so software developers who used DX (far beyond just game writers) had to focus on supporting either just HW DX9 or both, to which the answer is pretty obvious. Because of the limited benefit you get from one version to the next on something like DX, this is a very bad trend. So by saturating the whole market with DX11 capable cards, hopefully this means that in a few years more apps will sup
      • by Hadlock ( 143607 )

        The real reason DX10 never took off is that nobody could tell the difference between DX9 and DX10 screenshots.

    • ... hardware tessellation and compute shaders ...

      Compute Shader for Shader Model 5.0, yes. However, starting with Catalyst 9.12 (December 2009) the HD48xx cards have driver support for CS for SM4.0. Regardless, afaik, no one is using either presently. Would be interesting to see a new physics engine that ran on this; PhysX for Compute Shaders I guess.
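
      For what it's worth, an application can detect that downlevel compute support at run time and compile its shaders against the cs_4_0 profile instead of cs_5_0. A rough C++ sketch; the entry-point and file names are placeholders, and error handling is left out:

          #include <d3d11.h>
          #include <d3dcompiler.h>

          // Pick a compute-shader profile based on what the device actually offers:
          // cs_5_0 on DX11 hardware, cs_4_0 on DX10-class parts whose drivers expose
          // compute (the Catalyst 9.12 case described above), or none at all.
          const char* PickComputeProfile(ID3D11Device* device) {
              if (device->GetFeatureLevel() >= D3D_FEATURE_LEVEL_11_0)
                  return "cs_5_0";
              D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = { FALSE };
              device->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                          &opts, sizeof(opts));
              return opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x
                         ? "cs_4_0" : NULL;
          }

          // Compile HLSL source against whichever profile was selected above.
          // "main" and "cs.hlsl" are just illustrative names.
          HRESULT CompileCS(const char* src, size_t len, const char* profile,
                            ID3DBlob** bytecode) {
              ID3DBlob* errors = NULL;
              HRESULT hr = D3DCompile(src, len, "cs.hlsl", NULL, NULL,
                                      "main", profile, 0, 0, bytecode, &errors);
              if (errors) errors->Release();
              return hr;
          }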

    • "Seriously, good for AMD, but I just don't see the point."

      Not only that, but it's slower than the 8-month-old $99 ATI Radeon HD 4770 [bit-tech.net]

      so if I bought the $99 ATI Radeon HD 4770 8 months ago, why would I spend $99 on a slower card now?
    • Comment removed based on user account deletion
  • ... so somebody tell me: do we actually have any games that can really take advantage of the latest, greatest graphics cards yet? Seems like the hardware is outpacing the software, isn't it?
    • Clearly you don't run nearly enough instances of Crysis at a time.
    • by TheKidWho ( 705796 ) on Saturday January 16, 2010 @06:44PM (#30793950)

      A lot of games will struggle on this card significantly. It's about as powerful as a 3870 from 2+ years ago.

      • Re: (Score:1, Insightful)

        by Anonymous Coward

        Which is still plenty powerful enough to run any game that also launches on the Xbox 360.

        It also does it without having to buy a new PSU. The DX11 bits are just there to help cheap people (like myself) feel comfortable buying the card, knowing that it'll still play all the games (even if poorly) that come out next year since games that use DX11 are already starting to come out.

        It's a good move from ATI, targeted at cheap gamers that are looking to breathe life into an older computer.

    • How could hardware not outpace software? I mean it is really hard to develop a game for what does not yet exist. The hardware has to come out, and in particular the API has to come out, then developers can develop for it. They do get engineering samples a little early but still.

      In terms of DX11 support. Yes, there are a couple games that will use it. No, it isn't really very useful. Said games run and look fine in DX9 mode.

      Really, you don't buy new cards because you need their features right away. There are

      • by tepples ( 727027 )

        I mean it is really hard to develop a game for what does not yet exist. The hardware has to come out, and in particular the API has to come out, then developers can develop for it. They do get engineering samples a little early but still.

        Do new video game consoles come onto the market with 0 games? No. Even the Nintendo 64 had three finished games at launch: Mario, Pilotwings, and Shogi. So at least some video game developers depend on engineering samples.

        • Consoles are different. They are given to developers for longer periods of time, precisely because there need to be games out at launch. Graphics cards reach the public much faster, as people will buy them without any special titles, since they run older games better.

          Also, console development these days can be done on specially modified PCs. You use PC hardware to simulate what'll be in the console, since the console chips come from the graphics companies' PC products.

          • I think it's related to consoles, because graphics card manufacturers won't create cards that go beyond the spec people are requesting, and people request performance that can play games. The majority of these games will be ported to consoles and have console restrictions built in from the beginning.

            Haven't you noticed that the main improvements in graphics cards have been both lower price and lower power consumption? I call that quite significant development; there is no real demand to overpower the average deskto

      • How could hardware not outpace software? I mean it is really hard to develop a game for what does not yet exist. The hardware has to come out, and in particular the API has to come out, then developers can develop for it. They do get engineering samples a little early but still.

        Not really true. Most games have variable quality settings. You can test the playability at the low quality settings on a top of the line card during development and test the graphics output from the higher settings with a driver that emulates the missing features. You may only be able to get one frame every few seconds, but that's enough to check that the rendering looks right. Then you release the game and when the hardware improves people can just turn up the quality a bit. That way it keeps looking

    • by Hadlock ( 143607 )

      The biggest problem for game makers is that people went from 17-19" 1280x1024 displays (about 1.3 megapixels) to 21-24" displays at 1680x1050 (about 1.8 megapixels). The old standard used to be 1024x768. For a long time it was 1280x1024 (a small step up). Now the standard (1680x1050) increased by about a third seemingly overnight. A card (8600GT 512MB) that could push Valve's TF2 (a two-year-old game at this point) at 45-60fps on 1280x1024 no problem with most of the settings at med

  • Whats the point? (Score:4, Informative)

    by Shanrak ( 1037504 ) on Saturday January 16, 2010 @06:54PM (#30794020)
    Tom's Hardware's review here: http://www.tomshardware.com/reviews/radeon-hd-5670,2533.html [tomshardware.com] TL;DR: While it does support DX11, it's not powerful enough to really do much with it, barely keeping 30 FPS at 1680x1050.
    • by tepples ( 727027 )

      While it does support DX11, it's not powerful enough to really do much with it, barely keeping 30 FPS at 1680x1050.

      As I see it, the point of a bargain DX11 card is being able to run games that require DX11 at a lower res such as 1024x600 rather than at 0x0 because the game fails to start without features present.

    • Re:Whats the point? (Score:4, Informative)

      by Anonymous Coward on Saturday January 16, 2010 @07:03PM (#30794104)
      I think your post is misleading. According to that article, the card gets 46FPS average on Call of Duty: Modern Warfare 2, on 1920x1200, highest settings -- and that's one of the more intensive games. I have no idea what numbers you're quoting.
      • MW2 does not have DX11; the numbers I'm quoting are for Dirt 2, which does have DX11. The whole selling point of this card, after all, is DX11 support, since there are much better cards for that money that don't support DX11. (Note I'm not an ATI basher; I have an HD 5850, which I bought for DX11 in Arkham Asylum and Dirt 2.)
        • I guess the 5670 has lower power consumption than the cards it replaces. As an owner of a Radeon 2900, I can appreciate what that would be useful for, like not sounding like a vacuum cleaner while playing games. But I don't know whether I'd want to upgrade to the 5670, since, as you said, it doesn't seem to be significantly faster than a 3870, yet it costs a fair bit more.
    • Tom's Hardware ceased to be informative years ago; nowadays they are just an nVidia/Intel advertiser.

    • Re:Whats the point? (Score:4, Interesting)

      by YojimboJango ( 978350 ) on Saturday January 16, 2010 @11:41PM (#30795818)
      I'd like to point out something in that review. The only benchmarks in which this card goes below a 30fps minimum are Crysis and Far Cry 2 at 1920x1200 running in DX9 mode (instead of DX10, where the card is more likely to shine). Also, they list the GeForce 9600 as getting 40.7fps average while playing DiRT in DX11. The GeForce 9600 does not support DX11.

      In DirectX 9 mode, the Radeon HD 5670 is once again keeping pace with the GeForce 9800 GT, delivering playable performance all the way to 1920x1200. However, once DirectX 11 features are enabled, the latest Radeon slows to a crawl. Even the powerful Radeon HD 5750 has difficulty keeping the minimum frame rate above 30 fps at 1680x1050.

      They pretty much tell us that they're testing these cards using higher settings for the ATI parts. Also, the review's front page tells us that they've under-clocked all the cards before testing. Why would anyone take their reviews seriously after actually reading that?

      Not like I'm an ATI fanboy here either, my current and last 3 video cards were all Nvidia (was close to getting a 4850 about a year ago, but newegg had a sweet sale on the GTX260). It's just that this level of sleaze really pisses me off.

    • While it does support DX11, it's not powerful enough to really do much with it, barely keeping 30 FPS at 1680x1050.

      When did 30 FPS become bad?

    • by hkmwbz ( 531650 )

      barely keeping 30 FPS at 1680x1050

      30 fps at 1680x1050 sounds fucking amazing to me. I would probably just run it at 1280x1024 or something anyway. But when did 30 fps become bad?

      • 30 fps average usually means that sometimes you're looking at 5 fps, that's when.

        • by hkmwbz ( 531650 )
          I doubt you'll be looking at 5 fps, and still, 30 fps doesn't sound too bad. Just turn down the resolution a couple of notches in case of lag, and the problem is solved.
          • I doubt you'll be looking at 5 fps, and still, 30 fps doesn't sound too bad. Just turn down the resolution a couple of notches in case of lag, and the problem is solved.

            So you're one of the three people who hasn't moved to an LCD yet? Is it that you like wasting desktop space, or electricity? The rest of us would like to use our displays at their native resolution. Also, I find resolution to be the single most important factor in what I can see. When I was playing TacOps I had a fairly beefy computer and I could often shoot people before they could even really see me, on distance maps. Reducing texture quality would make far more sense.

  • by tji ( 74570 ) on Saturday January 16, 2010 @07:11PM (#30794160)

    I'm not a gamer, so the 3D features are not important to me. I am an HTPC user, and ATI has always been a non-factor in that realm. So, I haven't paid any attention to their releases for the last few years.

    Has there been any change in video acceleration in Linux with AMD? Do they have any support for XvMC, VDPAU, or anything else usable in Linux?

    • Re: (Score:3, Informative)

      by Kjella ( 173770 )

      From what I understand, hardware acceleration is now somewhat usable with the Catalyst drivers (source [phoronix.com]). But for the open source drivers there is nothing: there are no specs for UVD, and even though it should be possible to implement shader-based acceleration and the docs for that are out, no one has done it yet.

      • Re: (Score:3, Informative)

        by moosesocks ( 264553 )

        AFAIK, the open-source drivers are progressing at a breakneck pace, and hardware acceleration is very usable on some cards. One of the more recent kernel releases included a new driver, which is allegedly quite good.

        Apologies for being unable to offer more specifics. The current state of affairs is rather confusing, although I'm fairly confident that we're very quickly progressing in the right direction.

        • by Kjella ( 173770 )

          AFAIK, the open-source drivers are progressing at a breakneck pace, and hardware acceleration is very usable on some cards.

          You are referring to 3D acceleration, not video acceleration. There is no open source video acceleration for any card, neither UVD-based nor shader-based.

          • My guess is we will need to see OpenCL support before people start working on these features. You could probably do it with shaders, but OpenCL is easier to work with for programming filters and stuff like that.
    • by bfree ( 113420 )
      I've had no problem displaying BBC-HD (1080i h264) with a 780G and an X2 5050e (low power dual core) with the Free drivers from x.org (but non-free firmware is required for video acceleration and 3D). I wouldn't touch the closed source drivers from ATI or NVidia with yours, but I'd now regard the modern Intel or ATI solutions as just fine for undemanding users.
    • I am an HTPC user, and ATI has always been a non-factor in that realm.

      Not in Windows. MPC-HC's hardware acceleration has worked better with ATI chips than with Nvidia until just recently. The biggest sticking point was that VC1 bitstreaming (where you hand the entire bitstream to the gpu for decoding and display, rather than accelerating just parts of it like iDCT) didn't work on any nvidia gpus except the embedded ones - that did change with their most recent hardware release a couple of months ago, but ATI's had support for bitstreaming VC1 in their regular cards for at l

  • It’s called a “software renderer”. ;)

    Just like AMD, I did not say that it would actually render anything in real time, did I? :P

  • Anyone else still :%s/AMD/ATI/g when coming up on these stories?

  • by Eukariote ( 881204 ) on Saturday January 16, 2010 @07:42PM (#30794426)

    With NVidia unable to release something competitive and therefore conjuring a "new" 3xx series into being by renaming 2xx series cards [semiaccurate.com], the GTS 360M as well [semiaccurate.com], those with a clue will be buying ATI for the time being.

    Sadly, the average consumer will only look at the higher number and is likely to be conned.

    • The average consumer isn't buying video cards, especially not "top of the line" ones. Whether Nvidia is doing this or not, I don't think it will have much effect on the market.

      And it's not like you can compare Nvidia's model numbers to ATI's and figure things out; if you could, everyone would just buy ATI anyway.

      • That is annoying.

        Whenever I think of switching to an intel cpu I give up since I cannot figure out how to compare them to an amd cpu

        I am sure I would have the same problem if I was switching from intel to amd

        • Whenever I think of switching to an intel cpu I give up since I cannot figure out how to compare them to an amd cpu

          Look at benchmarks (preferably from multiple sources), particularly of things you actually tend to use the CPU for? There are plenty of people out there who've already figured out how whichever CPUs you're considering compare to each other for <insert purpose here>, whether it's compiling code, playing games, encoding video, running simulations, or whatever else. Works for me. This past fall, I found a few different ones in the price/performance range I was looking for, then poked around on places

    • The funniest part is that many of the 2xx series cards are just renamed 9000 series cards, and many of those are renamed or die-shrunk 8000 series cards. That said, Charlie Demerjian is hardly an unbiased source of reporting on nVidia, although I think he does have good reasons for his "grudge."
      • Charlie Demerjian is hardly an unbiased source of reporting on nVidia

        And the understatement of the year award goes to...

      • by Jthon ( 595383 )

        Charlie doesn't know what he's talking about. Most of the 2xx cards at this point should be GT21x chips which are based on the GT200 high end chip. The 9000/8000 series cards were all g9x or g8x architectures. Though I think at the low end they might have reused g92 as a G210 in the desktop segment.

  • We consistently see new hardware like this pitched at people: "DX10 cards now as low as $150", or in this case DX11 cards at the $100 price point.
    Time and time again the game developers couldn't give a damn, and I don't blame them - they target the biggest possible audience.
    I'll never forget the GeForce 3 announcement at one of the Apple Expos, of all things; Carmack was there and showed off some early Doom 3, and it was an absolute hype extravaganza. "Doom 3 will need a pixel shader card like the GF3!" So many people purch

    • by mdwh2 ( 535323 )

      My point is, any new tech like DX11, while great for all of us, is never fast enough in the first implementations; you'll see in 18 months' time though, the DX12 cards will be bloody fantastic at DX11 features, this is just how it is.

      If that's true, you should be glad to get a DirectX 11 card, because it will be bloody fantastic at DirectX 10 features, which your current DirectX 10 card must surely not ever be fast enough at...
