Hardware

PC1066 RDRAM vs. DDR SDRAM

Brad wrote in to send us his "Comparison of PC1066 RDRAM vs DDR SDRAM. Quote - RDRAM is considerably more expensive than DDR SDRAM, and up until now the 100MHz PC800 specification didn't do well in comparison. Just recently 133MHz PC1066 was launched, and is now officially supported by the new Intel P4 and the Intel 850E core logic chipset, and this time it promises to bring memory performance to the next level."
  • 1066? (Score:2, Offtopic)

    Why name a memory chip standard after the year in which the Battle of Hastings was fought?
  • Phew! (Score:1, Interesting)

    by Anonymous Coward
    I was wondering what was going to come along to give PC/OS manufacturers an excuse to charge more for a PC, and here it is!

    No doubt XP2 will require a 4GHz CPU, 2 gigs of this new RAM, a different coloured motherboard, maybe FireWire 2, superDUPER ATA 9 million IDE, etc etc...

    I'm stopping at my current machine. Linux presumably doesn't need all this crap to do the same stuff it's done up until now without it. What do we need more power for anyway? Games? Is that it? What other aspect of PCs needs accelerating now? I thought the weak link was internet bandwidth?
    • Well, as we speak, I am testing some new memory in my old Compaq 575. I have an AMD K6-2 installed, so it runs fairly well, normally. Anyway, Windows 98 crashed badly, so I decided to reboot into the RHL 6.1 partition and see how that holds up. Right off the bat, GNOME crashed, and I was deposited right back at the login screen. I'm using Mozilla now, and the talkback feature is hollering full blast. Here's what I have done: I do not have a matched set of memory in this thing. I can see right now I am going to have to put the old memory back in. Linux does a better job than Windows 98 in tests like this, but still has problems. My point is, whatever memory you use, make sure it is good quality, and a matched set, approved for your machine. I like those setups where you get 512 MB in one stick. That ought to do it. I'm not blaming Windows; it's me and my cheap off-brand RAM. This is a case where it's not the software, it's the hardware. If you always wondered if you would hear about someone who has that problem, then today is your lucky day :-)
      BTW, I'm having all kinds of strange colors show up, but except for that, Linux is hanging in there. Windows couldn't get too far before a lockup.
      • OK, I found a home for the two 32 MB EDO 72-pin 4K-refresh sticks: it's in my Macintosh Quadra 660av!
        Normally, Apple RAM will work in my Compaq 575, but these two sticks caused lots of trouble with Windows 98 in the Compaq, and barely ran RHL 6.1 (well enough to get the above post completed). I am making this post with the two sticks, from the Mac. Where will I go next with the two bargain 32 MB sticks? Will I put them in something else, and get back here with a slightly off-topic post? I'll spare everyone that ;-).
  • DDR?? (Score:1, Funny)

    What does Dance Dance Revolution have to do with SDRAM??
  • Sure, it's faster... (Score:1, Informative)

    by Anonymous Coward
    but why would anyone want to shell out for an RDRAM/P4 system? You can get an Athlon for much cheaper, and load up on DDR memory. It may not be quite as fast as the Intel system, or play a fancy tune in some commercials, but it'll get the job done for a lot less, in most cases.
    • Once again, AMD is a better bang for your buck...
    • I had a very nasty experience with 3 AthlonXP motherboards. Because of it I threw the Athlon in the bottom drawer of my closet and downgraded back to my Pentium III 700, after several hundred dollars and over a month of time were blown.

      The problems ranged from a lot of Athlon motherboards requiring APIC IRQ sharing, which Linux doesn't fully support yet (read the article on Soyo's mobo from last March here on Slashdot), to requiring weird 400-watt power supplies, to incompatibilities with standard hardware like GeForce video cards and even Netgear NICs (I had to buy an expensive Intel EtherPro; more info is available on Abit's newsgroups) and even a few Sound Blaster Lives, to freezes after several months of use on MSI boards. A lot of you reading this have had nothing but great luck with your Athlons, and I am not debating that there are nice Athlon machines out there, but for now I am skeptical. Myself, I will never buy a non-Intel machine again unless it's a Ti PowerBook :-)

      If you are on a budget and need something that is guaranteed to work, then I would pick an Intel box. They are more expensive and slower, but you will not go through the hassle I did. Oh, and if you buy XP, guess what? You will have to repurchase XP FOR EACH MOBO YOU REPLACE! This is what fucking killed me. I ended up buying Windows 2000 Professional to avoid this crap again. Yes, I need Windows; Linux alone will not work for me. Intel boards are mostly extremely reliable. If it's for school or work, then you know that a downed system could really fuck you over and could cost you money and time. A lot of people have had no problems with their Athlons, but I would advise you to pick safely unless you're loaded. This is why people like myself buy Intel-based motherboards and chips. Stability and reliability are king for corporations and individuals.

  • Bzzzt! (Score:5, Informative)

    by popular ( 301484 ) on Monday May 27, 2002 @12:56PM (#3591127) Homepage
    Intel's i850 does not support PC1066 officially, and parts of that speed have only been validated since the release of i850E. Officially, the chipset simply supports a FSB that would complement that speed, if the two busses ran synchronously. Seen here:
    http://www.theinquirer.net/24050203.htm [theinquirer.net]

    That said, PC1066 has been tested before (can't find the article at Ace's Hardware), and the bandwidth of DRDRAM appears to compensate quite nicely for the P4's generally lousy architecture, as does its increased cache size (now 512k L2).

    • Yes, what exactly is lousy about the P4 architecture? Don't tell me it is lousy because of the performance-per-clock-cycle ratio, because the chips clearly make up for that in their clock speed. The fact is, as overpriced as they are, the top-of-the-line P4 is king of the x86 performance arena right now. That doesn't happen to lousy architectures. P.S. I own an Athlon.
      • And don't tell me my Ford Pinto is lousy because when I rev it up to 10k RPMs I get the same performance as a Ford Taurus at 2k RPM! I thought Slashdot embraced the simple and elegant approach instead of the brute force method?
        • The P4 was designed for high clock speeds by giving it a long pipeline and a trace cache. It isn't a "brute-force" option, just a different method of increasing performance.
          • Oh, and to counter your Ford Pinto example: my Celica GTS does 180HP out of a 1.8L engine at 6800RPM, while a Ford Mustang does 180HP at probably 3000RPM. Which is the brute force approach?

    • I talked about the architecture of the Pentium IV with two of the architects. (In Portland, Oregon, it is sometimes possible to meet them at parties, and we have become friendly.) In perhaps 18 months, the speed of the P4 will reach 6 GHz. That's when you will be seeing more of the benefits of the design.

      Remember the 1 GHz P4? That was a marketing push to try to counter AMD's competition, not something the engineers wanted. In many ways, it made the P4 look bad, because the P4 was not designed to run at 1 GHz. People still remember the poor 1 GHz benchmarks; those benchmarks have done lasting damage.

      In my opinion, Intel's marketing is not technically skilled, and not skilled overall. (One of the engineers strongly agrees with this.) One of the tasks of the marketing people now should be showing people how the much faster processing speed can be used. Intel marketing, having little technical knowledge, cannot possibly do the job.

      Also, Intel's management has foundered since Andy Grove got tired of running the company. The problem with poor management pre-dated his cancer. No matter what you do, if you do it for too long, it stops being exciting and becomes boring, and it becomes difficult to give it proper attention.
      • Don't you mean the 1GHz P3?

        • It appears so. I got my information confused: Intel confirms P4 speed revs [213.219.40.69]. I confused the disappointing early P4 benchmarks and the problems with speeding up the PIII.

          The overall point is correct, however. Intel's marketing created big problems for the company. Intel let events drive the story about the P4, rather than telling it through their own marketing. For example, see Pentium 4 yields 'not impressive' [theregister.co.uk]. Someone leaked that story from a plant in Israel.

          Now that I look at some of the old articles, I realize that Intel's marketing communication was even worse than I thought. In general, companies are having huge problems running highly technical operations with a large percentage of people who have little technical understanding.

          My contacts at Intel insist that the biggest problems are with communication, not with fundamental details. To me, that seems right.
      • So what you are saying is that the P4 has a flawed architecture, but they overcome that by ramping up the speed to 6GHz (and 1.2MW heat dissipation)? I'm not seeing how this is a plus in any way. Most people consider good design to be able to do more (processing, executions per clock cycle, memory movement) with less (voltage, energy, heat).
        • I don't have any URLs to back this up - but the point of the P4 is specifically that it does less with each cycle (it doesn't even have a barrel shifter..) - it's consciously and deliberately designed to be able to go to massive clock speeds as technology improves. Yes, the Athlon beats it clock-for-clock - but in 2 years' time when the P4 is at 6GHz or wherever, what is the Athlon going to do then?

        • No, the P4 has an architecture that was designed for the computers of the future. It's like a small dog with very big paws. It will be impressive when it grows up.

          The heat dissipation comes from using the P4 architecture with the larger design rules. As the die sizes shrink, the heat dissipation will go down, and the wisdom behind other elements of the design will become more apparent.

          Notice that we are already seeing this effect. The 2.4 GHz P4 performs very well.

          Intel is demonstrating a 5 GHz P4 that runs cool with no fan. See, for example, Intel to demo fanless, cool 5 GHz chip [theinquirer.net]. Quote: "Intel has now formally released details of the 3MB cache on chip which it claims will deliver 1.5 to two times [the] performance over the current designs." [My emphasis.]

          The utter sadness of Intel's marketing is demonstrated by the fact that this information is being brought to you by a guy [me] whose only connection with the information is that he sells computers to business customers and that he happens to live in the same city as Intel's design team. The guy happened to meet two Intel engineers at parties. If Intel had good marketing, you would already know these things.

          The moral of the Intel marketing story is: Don't try to run a high-tech company with low-tech employees in marketing. If I were running Intel's marketing, your little brother and maybe even your mom would be asking you about Intel's great new achievements.
            Hmm.. Interesting how a post that's flat-out wrong got modded up... Ahh well..

            As others have pointed out, the 5GHz chip was NOT a P4 at all, but just a stripped-down portion of the P4. The whole processor is only expected to reduce power consumption by something like 23% in the integer unit, i.e. it'll do VERY little for the overall power consumption of the chip. A 5GHz P4 on .13um design rules is still going to require a LOT of power (though don't think for a second that Intel isn't doing plenty to reduce power consumption).

            Also, that's a great quote, but if I can add another quote from the same article:

            "This processor, note, is a 32-bit chip - it's a different presentation from the McKinley that we detailed above."

            The 3MB cache is for the McKinley (aka "Itanium 2"), it has NOTHING to do with the 32-bit integer core mentioned above, and it certainly has nothing to do with the P4!
      • Remember the 1 GHz P4? That was a marketing push to try to counter AMD's competition, not something the engineers wanted. In many ways, it made the P4 look bad, because the P4 was not designed to run at 1 GHz. People still remember the poor 1 GHz benchmarks; those benchmarks have done lasting damage.

        Except there never was a 1GHz P4. The slowest desktop P4 is 1.4GHz -- they need the MHz gap just to match the speed of the P3.

        In my opinion, Intel's marketing is not technically skilled, and not skilled overall.

        Well, they did manage to convince people that this magic MHz thing is all that matters...

  • There's certainly something to be said for proprietary memory technology. Sure, it's expensive, and Rambus does all kinds of dishonest lawyer tricks with the patent system, but you probably won't find that level of integration between the processor and the memory on a standards-based SDRAM system. AMD now faces even more serious competition from Intel, who could bury them, performance-wise, with this kind of memory bandwidth.

    I wonder how expensive a graphics card with RDRAM would be, or if it would be any faster?

    • it would cost $199 [rdram.com] or $99 [demon.co.uk]
    • By rights, if you go by the higher bandwidth of RDRAM, it should be doing dramatically better than DDR SDRAM. It's not. This is because it takes longer for the RDRAM to respond when it's accessed. If you're doing large blocks of things in memory, you might see an advantage. I say might because modern CPUs don't do as well with large blocks of data (stuff pops out of cache, etc.), so any advantage there is masked at least partly by cache misses. The same goes for display chips, for differing reasons: display chips access memory VERY regularly and very often. The latencies present in RDRAM might be too much.
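
      To see the difference in code: a minimal C sketch (mine, not from the article) of the two access patterns. The streaming loop is bandwidth-bound; the pointer chase is latency-bound, and it's that second pattern where RDRAM's long access time hurts.

          #include <stdio.h>
          #include <stdlib.h>
          #include <time.h>

          #define N (1 << 22)   /* 4M longs -- far bigger than any cache */

          int main(void)
          {
              long *a = malloc(N * sizeof *a);
              long *next = malloc(N * sizeof *next);
              if (!a || !next)
                  return 1;

              for (long i = 0; i < N; i++) {
                  a[i] = i;
                  /* scatter the chase targets pseudo-randomly */
                  next[i] = (long)(((unsigned long)i * 1103515245UL + 12345UL) % N);
              }

              /* Bandwidth-bound: addresses are predictable, so the memory
                 controller can pipeline many outstanding requests. */
              clock_t t0 = clock();
              long sum = 0;
              for (long i = 0; i < N; i++)
                  sum += a[i];
              double stream_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

              /* Latency-bound: each load depends on the previous one, so
                 every cache miss pays the full round trip to DRAM. */
              t0 = clock();
              long j = 0;
              for (long hops = 0; hops < N; hops++)
                  j = next[j];
              double chase_s = (double)(clock() - t0) / CLOCKS_PER_SEC;

              printf("stream %.3fs  chase %.3fs  (sum=%ld last=%ld)\n",
                     stream_s, chase_s, sum, j);
              free(a);
              free(next);
              return 0;
          }
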
    • OK, that's crap. Ask Nvidia & ATI! They need every little bit of bandwidth they can get on their video cards and they still use DDR. I am sure Rambus Inc would love to see RDRAM on a GeForce5 but it ain't gonna happen. DDR is superior. It has lower latency, and the bandwidth difference between it and RDRAM can be easily fixed by cranking up the clock and interleaving.

      Lastly, the practices of Rambus Inc make me not touch their RAM with a 10 ft pole, but that's beside the point.

      D.
    • AMD now faces even more serious competition from Intel, who could bury them, performance-wise, with this kind of memory bandwidth.

      That was the original promise of RDRAM, but it turned out that where the rubber meets the road, latency wins this particular drag race for most people. I suspect this will slowly change as programmers start to make more resource-hungry applications that address very large regions of memory. But by the time that changes, all the AMD systems will be HyperTransport-backed, yes? Speaking of HyperTransport, it appears to me that the HyperTransport alliance is winning the bandwidth game in terms of adoptees and so forth. This has been one of AMD's better moves.

      C//
    • Notice how the article talks about DDR266 having 2.1GB/sec and DDR333 having 2.7GB/sec. PC1066 has bandwidth effectively equal to DDR266, and even in his tests the memory seems to give about a 2% performance advantage. Rambus isn't really a breakthrough, and it will soon be antiquated by cheaper DDR333, which has a 600MB/sec bandwidth advantage over PC1066.
  • Ok, the difference is not THAT large... knowing that this memory is much more expensive, I'd rather have a system with 3GB of DDR that is slightly slower (but we know Athlons are in many cases faster) than one with 512MB of this "state-of-the-art" chip...
  • by gamorck ( 151734 ) <jaylittl e A T ... l i ttle DOT com> on Monday May 27, 2002 @12:59PM (#3591135) Homepage
    Why didn't they show us any Quake III comparison benches? We all know that at lower resolutions the processor drives Quake III and that it's extremely sensitive to memory bandwidth. Anyway, it appears that RDRAM 1066 is a definite improvement over RDRAM 800. It's good to see that Intel is still raising the bar.

    Also I believe there were some initial benches (better ones) on http://www.tomshardware.com

    J
  • PC1066 supported? (Score:3, Insightful)

    by pacc ( 163090 ) on Monday May 27, 2002 @01:04PM (#3591151) Homepage
    The right way around would be to report that there is now PC1066 RAM available that supports the i850E platform.

    Apparently the chipset is just an overclocked variant of the earlier one, and could not use the slowest version of the PC1066 standard memory -- ironically the only version available when the 850E was launched.

    www.theinquirer.net, wish they had a better back-catalogue
  • Hype (Score:1, Troll)

    by rmarll ( 161697 )
    Good god. Most of those benchmarks showed little or no performance benefit. Some even had a small (insignificant) decrease compared to the other platforms.

    The reviewer was sure jazzed about that 0.1-1% increase though.

    Really damn excited...
    • Re:Hype (Score:2, Informative)

      by Frogg ( 27033 )
      Most of those benchmarks showed little or no performance benefit.

      The article is about PC1066, a new kind of memory. The memory-specific benchmarks do show quite a big performance increase!! (see the last three graphs on this page [tweakers.com.au] of the article)

      The fact that the other graphs show little or no performance difference is, I think, quite likely because those tests hit other bottlenecks -- system limitations other than memory bandwidth.

      You might get similar results if you tested a new sound card (for example) that had faster hardware acceleration -- sure, the Quake III benchmark would only show a small difference, but another test that made more significant usage of the sound card (a test in Cubase for example) would show a greater performance increase. (Umm, I know it's not a great example, but I hope you get what I mean!)

      • Sorry, my bad. I'll retract my statement just as soon as my boss replaces all of our software with Mem Tach.

        Sarcasm aside, you are right. Probably.

        Nonetheless, our fine editor Brad Maher was inexplicably exuberant, nay, gushing over a synthetic memory benchmark that was contradicted by every single other test he ran.

        It's just weird. I guess he has to have something to write about, but apparently he cannot be bothered with real-world applications.
  • I don't really know much about memory, but from the test results shown, the performance increase seemed almost trivial on the multimedia test. Would this really help my browser render pages faster or increase the frame rate of my DVDs?
  • So maybe I am an idiot, but does anyone know (i.e., have figures for) how these relate to the memory types commonly found in systems people actually have (SDRAM)?

    For example... Apple is moving from PC133 SDRAM (current G4 systems) to PC2100 DDR SDRAM; what does this actually mean to an actual user?

    MAK
  • The fix is in. (Score:4, Insightful)

    by blair1q ( 305137 ) on Monday May 27, 2002 @01:15PM (#3591193) Journal
    What a bogus comparison.

    PC2100 is old news, and 1066 RDRAM is just being released.

    The proper comparison would have been against PC3200, or PC2700 at least.

    N.B., I've been using PC2700 in my machine for two months. PC3200 is about 33% more expensive [priceindexes.com].

    --Blair
  • 5% is "Thrashing"? (Score:2, Interesting)

    by jigokukoinu ( 549392 )
    Of all the tests done between these two, about a 5% improvement was the most the PC1066 managed. How exactly does a 5% improvement justify the (previously real, now perhaps just perceived) significant increase in price?

    It ALMOST sounds like someone *COUGHRDRAMMAKERSCOUGH* was "supporting" the writer of that article; their adjectives were too strong for the data.

    -Jeremiah
    • Considering people buy Sun Workstations, which (vs. a Pentium 4 workstation) give you a -50% speedup for a 500% markup, it seems that RDRAM, which gives about 5% speedup for 25% extra cost, is quite well worth it.
      • Whoa....

        CRACKPIPE ALERT!!!!!!!!!

        Since when are Suns faster than a garden-variety x86 box at 1/6th the price? Provided you don't need more than 4 procs and, what, 16 gigs of RAM, a Sun's price is not justified. Yeah, additional x86s don't scale like the, what, 93% boost you get from doubling a crowd of SPARCs... BUT there's not much you can do with a 16-proc box that couldn't be handled by four quads.

        Not much....
        • You can get an absolutely top of the line dual Pentium 4 workstation for $4000. Compare to the top of the line Sun Blade 2000 - which costs $23,000. And has half the performance (if even).
  • by NickRob ( 575331 ) on Monday May 27, 2002 @01:32PM (#3591259)
    I think I'll wait until The guy who wrote this hardware report [somethingawful.com] writes on this issue.
  • Seriously... why doesn't RDRAM die already... everyone knows it sucks for its price compared to DDR..... I hope Intel learned their lesson, they can't force stupid (and expensive) things onto consumers...
    • 1) RDRAM doesn't die because Intel still supports it.

      1a) Intel still supports RDRAM because it wasn't a 100% bad decision, and they invested HUGE amounts of money.

      2) Intel can't force stupid things onto consumers? How about an endless string of CPU upgrades based originally on the 4004? Motorola dumped the 68000-based line for the PPC, which is what Intel has been too scared to do. If IBM hadn't fallen on their fat and lazy ass, the PPC probably would have cut Intel's market share to about 40% by now. (And we'd have a better Windows CPU than the P4.)
      • If Intel is still supporting RDRAM so strongly, how come there isn't an RDRAM-supporting chipset on Intel's future roadmap?

        • Is there not? Excellent! That might mean that Intel will continue to support it on the P4 line, and then let it die a miserable death.

          They can't dump it yet, because most of the early P4 systems were sold to companies who want some ROI before the hardware dies. If Intel pulled the plug 100% right now, Sun would reap the benefits.
  • So, let's see... (Score:1, Flamebait)

    by jejones ( 115979 )
    ...we compare the very latest Rambus RAM against previous-generation DDR (isn't DDR333 available now?), find one benchmark in which the Rambus RAM runs about 4% faster, and say that Rambus "excels." What's wrong with this picture?
  • This is sort of a bunk article, isn't it? I mean, they don't go into Rambus's higher latencies at all..
  • by nrosier ( 99582 )
    I still don't get what the deal is with all this MHz....
    Why can't they just do interleaving (call it striping/RAID-0 for memory)? No need to crank up the MHz, just spread the load over a couple of DIMMs. Most large systems (at least the Suns I know of) still use 100MHz-or-so DIMMs but do 8-way interleaving (maybe even higher) to get their high memory bandwidths.
    The market seems to be demanding higher MHz and seems to forget there's other stuff involved. Just look at IBM's POWER4, Sun's UltraSPARC III, etc... Lower MHz (or GHz), but with a big level-2 cache and SMP they're able to beat whatever Intel/AMD system you put them up against.
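
    A sketch of the interleaving idea (illustrative only -- not any real controller's logic): stripe consecutive cache lines across banks, and eight slow DIMMs can have eight transfers in flight at once.

        #include <stdio.h>

        #define LINE_SHIFT 6   /* 64-byte cache lines */
        #define NBANKS     8   /* 8-way interleave */

        /* which bank/DIMM a physical address lands on */
        static unsigned bank_of(unsigned long addr)
        {
            return (unsigned)((addr >> LINE_SHIFT) % NBANKS);
        }

        int main(void)
        {
            /* eight consecutive lines -> eight different banks,
               so eight transfers can overlap */
            for (unsigned long addr = 0; addr < 8 * 64; addr += 64)
                printf("addr %4lu -> bank %u\n", addr, bank_of(addr));
            return 0;
        }
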
  • by Jeppe Salvesen ( 101622 ) on Monday May 27, 2002 @02:19PM (#3591422)
    CPU cooling is much more relevant to performance than a 2% memory bandwidth gain.

    Basically, CPU cooling has been holding us back for a good while.

    From an article [theregister.co.uk] about a bigass Beowulf cluster running Transmeta processors, you have Wu-chun Feng of the Los Alamos Labs stating:

    The continued tracking of Moore's law will result in the microprocessor of 2010 having over one billion transistors and dissipating over one kilowatt of thermal energy; this is considerably more energy per square centimeter than even a nuclear reactor.
    Oh my. So - what else can we do to stop this trend? Relatively slow multi-processor machines. If we keep working on multi-threading our applications, we might be able to make a computer with eight efficient 1GHz chips outperform an 8GHz Moore-compatible Intel hype-chip-based system. Really. Multi-processor machines have traditionally been too expensive for the desktop, and the software people have not spent a lot of time making sure that regular end-user applications scale well across several processors.

    Take something like a web browser. Given a bit of wizardry (obviously, we need to consider concurrency and critical sections), you could have separate images downloaded and processed by separate processors. Your Flash ad would run on another processor. (A rough sketch follows at the end of this comment.)

    Frankly, I'm wondering what's stopping us from using this approach to increase performance. Is it like the fact that OEMs equip low-end PCs with too little RAM so that Joe Shmoe will buy a new one as quickly as possible, since he does not know that spending 100 bucks on more RAM would make his computer last another year or two?

    And, really, as long as the focus is on the gigahertz, do the chip makers really concentrate on making their designs as efficient as possible?
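
    Here's the sketch I mentioned, as pthreads code -- fetch_image() is a hypothetical stand-in for whatever the browser's loader actually does:

        #include <pthread.h>
        #include <stdio.h>

        #define NIMG 4

        static const char *urls[NIMG] = {
            "http://example.com/a.png", "http://example.com/b.png",
            "http://example.com/c.png", "http://example.com/ad.swf",
        };

        static void fetch_image(const char *url)   /* hypothetical loader */
        {
            printf("fetching %s\n", url);
        }

        static void *worker(void *arg)
        {
            fetch_image(arg);
            return NULL;
        }

        int main(void)
        {
            pthread_t tid[NIMG];

            /* one thread per image; the scheduler spreads them over CPUs */
            for (int i = 0; i < NIMG; i++)
                pthread_create(&tid[i], NULL, worker, (void *)urls[i]);
            for (int i = 0; i < NIMG; i++)
                pthread_join(tid[i], NULL);
            return 0;
        }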

    • UUUhh, for a webpage loading multiple images and perhaps a Flash object... I'm pretty sure it has to do that in separate threads anyway.

      Thus an SMP system would handle that just fine without any extra programming.
    • Frankly, I'm wondering what's stopping us from using this approach to increasing performance?

      Multithreaded programming is tricky, and writing efficient multithreaded programs that don't suffer from mutual thread-contention issues is even trickier. The savoir-faire of thread programming is just now reaching the mainstream, in part due to Java, actually. Which isn't to say I'm any kind of Java fanboy, but credit where credit is due.

      Speaking of Java and threads, I think it's past time for someone to seriously think about creating a language with even more first class structures for dealing with parallelism.

      C//
      • I'm definitely not disputing that this is tricky. However, some of the multithreading might be possible to do behind the scenes. Let's consider GTK or Swing. By introducing some (hidden) complexity, wouldn't it be feasible to have multiple threads painting and manipulating widgets and windows? If we had some communication between components and the "layout manager", you could have the "layout manager" assign tasks to children, since it should be able to figure out the sizes needed for the widgets.

        Am I stumbling here? I haven't dealt with parallelism that much, really... (About to, though, but that's a different story.)
        • Well, the easiest way to achieve such a thing would be to back it with what is called a "job-worker-thread model" where you ask for something to be done and then have a pool of threads service the request. What makes this hard is time-dependencies between the tasks, and mutual resources that they each depend on. For example, does the underlying OS _itself_ allow multiple threads to blit to areas of the screen at the same time? While certainly each thread can _prepare_ its bit plane simultaneously, it's likely that there will be some resource-contention going on there, in some way limiting what we can actually get out of multithreading. Note that I actually don't know what OS constraints will be faced. Let's just say that even on a multiprocessor machine, multithreading results can be somewhat disappointing, some of the time. There's a whole stack of issues to be addressed, including the application, the OS, mutual resources like memory and storage, as well as absolute hardware issues like (on some machines) shared bus limitations between the multiple processors.
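
            A bare-bones sketch of that model (mine, nothing from a real toolkit): a fixed pool of workers pulls jobs off a mutex-protected queue, and the mutex is exactly where the resource-contention described above shows up.

                #include <pthread.h>
                #include <stdio.h>

                #define NWORKERS 4
                #define NJOBS    16

                static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;
                static int next_job = 0;              /* shared queue cursor */

                static void *worker(void *arg)
                {
                    (void)arg;
                    for (;;) {
                        pthread_mutex_lock(&lock);    /* the shared resource */
                        int job = next_job < NJOBS ? next_job++ : -1;
                        pthread_mutex_unlock(&lock);
                        if (job < 0)
                            return NULL;              /* queue drained */
                        printf("job %d\n", job);      /* stand-in for real work */
                    }
                }

                int main(void)
                {
                    pthread_t tid[NWORKERS];
                    for (int i = 0; i < NWORKERS; i++)
                        pthread_create(&tid[i], NULL, worker, NULL);
                    for (int i = 0; i < NWORKERS; i++)
                        pthread_join(tid[i], NULL);
                    return 0;
                }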

          C//
          • Hmm. Tricky, this. I don't think we necessarily need provably n-scalable code (or whatever it's called) for everything. Let's be pragmatic for a little while.

            In the widget-set example, processing/drawing the widgets in parallel could still provide better than 1-scalable code. Maybe it's 0.8n-scalable. When we're talking about 8 processors, that would still be a solid improvement. The remaining 0.2n would be available to (say) a file-sharing network, garbage collector, application or whatever else is running.

            Let's not consider too many hardware limitations. For our theoretical shift in paradigms, we theorize that hardware limitations are as minimal as they can be. Maybe the graphics card even accepts pseudo-concurrent blit commands. If we rerouted all the focus on ever-faster processors into improving multiprocessor architecture (and/or making the technology used in mainframes and SGI stations more affordable), I bet we could do a bit better than following the uniprocessor paradigm. After all, if SMP boxes became a commodity, wouldn't we have more hacker brainpower available to figure out how to use them more efficiently?

            Ok. I'm probably boring you silly with my abstract, non-rigorous thinking. I'll stop now. Good bumping brains with you.
            • It's an _old_ race, actually. Here's the deal. If Moore's Law stalls a bit, there is a tendency to move a bit towards multiprocessing. This mostly caters to products which can afford the additional silicon and space. Since Moore's Law is all about reduction, however, there is a practical limit to how much silicon will really fit in a box. Since the chips at any given process are all the same size, adding chips increases size linearly. And if the process is shrinking, well, you don't have this problem, and SMP tends to be not as important. This puts an absolute maximum upper limit on the number of CPUs you can _ever_ expect to see in a machine if you think about it. At least with basic semiconductor technology that we have today.

              Really. Consider. If process shrinks stopped at, say, .01 micron, a top-end CPU might be (making this up) 1 centimeter^2. Pretending for the sake of argument that no smaller transistors were possible, our smallest CPU is 1 cm^2. Moving to 2-way SMP requires 2 cm^2, plus whatever additional logic and hardware you need on the board in order to make the chips work together.

              There is a dynamic between SMP and single-processor process shrinks and speed improvements.

              There are other things to think about (read: system responsiveness; threads/CPUS that aren't tied up can reply quickly to requests), but the ultimate equation means that very highly parallel SMP is second string to process shrinkages, and in any case, limited absolutely in its extent by the physics of the whole affair.

              About the only mitigating circumstance is economies of scale and industry. 20 years after we get stuck at .01 micron (or wherever), even though there hasn't been any more shrinkage, we'd still expect chips to be a lot cheaper. In that case, we get fudge-factor, because you might not care if your 1-processor computer costs $50, and your 8-processor machine costs $500.

              You're still not talking very highly parallel machines, though, right?

              C//
              • Pretending for the sake of argument that no smaller transistors were possible, our smallest CPU is 1 cm^2. Moving to 2-way SMP requires 2 cm^2, plus whatever additional logic and hardware you need on the board in order to make the chips work together. Now we're talking. Consider the fact that cooling the 8GHz processor could require 16 cm^2 worth of real estate for the required liquid cooling system, while you might get away with the same amount of real estate (or less) if you had 8 smaller, more efficient processors. So, if processors start to dissipate one kilowatt of thermal energy (that's as much as some electric heaters), the amount of power and space needed to cool them might pave the way for desktop SMP.

                However, we're ignoring the fact that our computers might be rather dumb in the future. If we're all fiber-connected, I can see a point where processing power is part of the internet connection deal.

                • Sure, but what you just proposed isn't really all that different from deliberately relaxing transistor density and thereby increasing the amount of silicon space used by a single CPU. There's also a point at which maximum density has been reached, but no further clock cycles can be pumped. At that point, one would expect that the natural solution is to increase CPU real estate with additional parallel transistors, not unlike what POWER4 is doing with its dual-on-chip-CPU solution for the high end right now. This is, I might add, working out pretty well for them. Instead of trying to pump up clocks and find more ILP, they've just come up with a very efficient way of putting "two CPUs" on board one single silicon die, and then tied them together with an on-die bus that has the sort of bandwidth you could only ever dream of getting right on board silicon. I expect to see more of this in the future, just not very much, probably.

                  Although I could be mistaken. There is some complexity function past which one will not further complicate a single CPU; at such a point, one would prefer multiple-on-die CPUs, because the complexity is far easier to manage. IOW, managing a single CPU with a few billion transistors is probably a lot harder than managing 8 exact copies of some similar CPU interconnected/routed by some simple but nevertheless high-efficiency switch.

                  C//
      • Speaking of Java and threads, I think it's past time for someone to seriously think about creating a language with even more first class structures for dealing with parallelism.

        Erlang [erlang.org]
      • When writing multithreaded software is as easy as writing multi-process software, that's where it will be at. Until then, most threaded software is a pain in the ass to write. I say most, because there are libraries which allow for much easier multithreaded software development, without a need for mutexes and locks, e.g. State Threads.
      • Even better would be to change the OS to be inherently multi-threaded. BeOS comes to mind as being perfectly suited to this situation. Unfortunately, they are dead. :( Too much too soon, I suspect...
    • Basically, CPU cooling has been hitting us for a good while. ... So - what else can we do to stop this trend? Relatively slow multi-processor machines.

      Perhaps someone can help me out here. Does power dissipation scale linearly with clock speed and number of transistors? Or something else?

      If it does, wouldn't two 1GHz chips dissipate as much heat as one 2GHz chip, thereby erasing any gains?

      --Bruce

      • I believe the problem the poster mentioned was that of heat produced per area, rather than just the total heat produced. Same heat/more area (more processors) would be less of a problem. IANAPhysicist, but I also think the gains in processing ability - as you make tinier and tinier transistors and connections - don't scale with the increase in heat. There's probably some equation relating resistance to diameter that I'm forgetting...
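
        (For what it's worth, the usual first-order model is that CMOS dynamic power is roughly P = C x V^2 x f: switched capacitance times supply voltage squared times clock frequency. At a fixed voltage, power really is about linear in clock speed, so two 1GHz chips would dissipate roughly what one 2GHz chip does; the real win is the V^2 term, since a slower clock usually tolerates a lower supply voltage.)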
    • Take something like a web browser. Given a bit of wizardry (obviously, we need to consider concurrency and critical sections), you could have separate images downloaded and processed by separate processors. Your flash ad would run on another processor.

      Web tasks tend not to be processor-bound. You're limited by your 'net connection for these (you can draw an image far faster than you can download it).

      It turns out that most of the tasks people do either aren't strong loads on the system at all (e.g. surfing, email, word-processing, spreadsheets) or are limited by some other part of the system (memory bandwidth, disk, or graphics card).

      Of the remaining tasks, most aren't easily parallelized (or at least not automatically). Of the ones that are partly parallelizable, the serial part of the task tends to cause bottlenecking, which gives you rapidly-diminishing returns (look up "Amdahl's Law" for a deeper explanation of this).
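
      (For reference, Amdahl's Law in one line: if a fraction p of a task parallelizes perfectly across n processors, speedup = 1 / ((1 - p) + p/n). Even with p = 0.8 and n = 8 that is 1 / (0.2 + 0.1), only about 3.3x -- the serial 20% dominates almost immediately.)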

      The only processor-intensive, easily-parallelizable task that's currently done is 3D gaming, and the processing load for that is mainly handled by the video card, not the CPU. Graphics cards already parallelize to some degree on-die, but can't have more than one graphics chip without driving up the price of the card considerably. While this can be (and is) done for high-end cards, consumers prefer cards that are at a sane price.

      In short, in the one place where most people would benefit from a multi-chip solution, you won't see it.

      Frankly, I'm wondering what's stopping us from using this approach to increasing performance? Is this like the fact that OEMs equip the low-end PCs with too little RAM so that Joe Shmoe will buy a new one as quickly as possible, since he does not know that spending 100 bucks on more RAM will make his computer last another year or two?

      Actually, it's that Joe Schmoe *prefers* to buy as cheap a computer as he can get his hands on. This is why you don't see many machines sold with a vast amount of RAM, and why you don't see many dual-processor machines sold.

      People apparently really _do_ just want cheap machines, not optimized machines.

      And, really, as long as the focus is on the gigahertz, do the chip makers really concentrate on making their designs as efficient as possible?

      Yes - if you mean performance-efficient. Being able to say that you kick your competitor's ass in benchmarking does make some difference (especially if games are some of those benchmarks).

      There isn't much incentive to be power-efficient beyond the amount needed to keep your chip from melting into slag, for desktops, at least. There are many low-power offerings already used in palmtops and embedded devices.

      Power efficiency _is_ an issue, as reasonable power dissipation is the primary limit to a computer's clock rate. However, as long as people are willing to use computers with fans and heatsinks, your desktop processor will dissipate 50W+.
  • I do not use RDRAM. Not that I do not like its performance, though that advantage is reduced when you use many modules because of its serialization, but because Rambus--the company--is nearly as evil as Microsoft.
    I'm not one to require that all companies I purchase from be ethical, else I would have to be a hermit, but Rambus has gone too far too many times.
    What gall a company must have to participate in open industry meetings to discuss what to put in the next few memory standards, without contributing, and then PATENT other peoples' ideas! Then to charge those same companies royalties to use their own innovations! Sickening!
    Here is a good, short article. I'm too lazy right now to write the HTML. Sorry. :)
    http://www.theregus.com/content/archive/18849.html
  • DDR SDRAM, SRAM, DRAM, Flash, ROM, CD, hard drives, MRAM, FRAM, etc. Look at this new technology if you are a geek and see what is in the future of data storage.

    www.colossalstorage.net

  • next level? (Score:2, Insightful)

    by crow ( 16139 )
    What does "the next level" mean? Does that mean that mean that my fifth level fighter will have 35,001 experience points with the new technology? Does it mean my cube will be moved upstairs? Does it mean the little bubble will sit in the middle of the glass?

    That phrase should ring Dilbert-esque alarm bells. If there were awards for the most over-used marketing phrases, "the next level" would be due to win the grand prize this year.

    Did you know that there are about 788,000 hits on Google for that phrase?

    I'm sorry, but I have a bit of trouble taking any article seriously that uses that sort of marketing-speak.
  • I haven't been in the market for new hardware for the past 2 years, so would someone be kind enough to explain the differences between the different kinds of RAM mentioned in all the replies? PC2100, PC3200, DDRxx, etc.? Just a quick primer would be great. Thanks.
    • would someone be kind enough to explain the differences between the different kinds of RAM mentioned in all the replies? PC2100, PC3200, DDRxx, etc.? Just a quick primer would be great.

      In the beginning there was PC100 SDRAM. Well, actually, that was mid-nineties, but that's about when most Slashkiddies were born, so moving on. Obviously everything is just a marketing label, but this one meant 100 MHz. With SDRAM, each Hz gives you 64 bits, so the bandwidth is 6400 megabits per second.

      Thus PC133 and PC166 are 8500 and 10700 Mb/s.

      DDR is the same tech as SDRAM, except that it uses a trick to transfer data twice per clock cycle, so you get 128 bits per Hz. Thus PC100 DDR-SDRAM would be 12800 Mb/s. But Marketing decided that was unfair, so they labeled DDR based on twice the clock speed, so we have PC266 and PC333, which of course run at 133 and 166 MHz and give you 17000 and 21000 Mb/s.

      RDRAM is based on a new tech that gives you only 16 bits per clock cycle, instead of 64 for SDRAM and 128 for DDR-SDRAM. The difference is that you can clock it way up. So there was PC600, PC700 and PC800 RDRAM, again based on MHz, which gave you 9600, 11200, and 12800 Mb/s bandwidth. Basically you divide the number by four to compare with SDRAM speeds, since you only get 1/4 as many bits per cycle. Actually, I believe modern Rambus controllers double this by interleaving two sticks, so now you divide by two - PC800 has four times the bandwidth of PC100, but requires a matched pair of sticks.

      Then the DDR people decided to start talking direct bandwidth, rather than megahertz. But unlike me, they mean megabytes, rather than megabits, per second. PC1600 is DDR-SDRAM at 100 MHz, since DDR gives you 128 bits or 16 bytes per cycle. PC2100 is DDR at 133 MHz, formerly known as PC266. PC2700 is DDR at 166 MHz, and PC3200 is DDR at 200 MHz.

      With interleaving, Rambus gives you 32 bits or 4 bytes per cycle. PC800 has the same bandwidth as PC3200 DDR, and the relatively new PC1066 has more - 4266 megabytes per second.

      Bandwidth is a good baseline for comparison, but RDRAM has a higher latency than SDRAM or DDR-SDRAM. That's why DDR, with its lower maximum bandwidth, is still speed-competitive with RDRAM (for a lot less money).
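
      To make the labels concrete, the arithmetic is just clock x transfers per clock x bus width. PC2100 DDR: 133 MHz x 2 x 8 bytes = ~2,128 MB/s, hence the name. PC800 RDRAM: 800 million transfers/s x 2 bytes = 1,600 MB/s per channel, or 3,200 MB/s for the interleaved pair. PC1066: 1,066 million transfers/s x 2 bytes = ~2,133 MB/s per channel, which is where the 4,266 MB/s figure for the pair comes from.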

      • Don't own any RDRAM (using an Athlon+DDR mostly), but the "RDRAM costs much more" argument is bogus. Compare what Samsung original PC800 costs to brand-name--not generic or house-brand--true CAS latency 2.0 PC2100. It's a wash.

        Quality PC2100 is frequently marketed as PC2400. On www.pricewatch.com, the difference between PC800 and PC2100 is $5 for 128MB.

        I don't pick my platforms for the DRAM. I went with an AthlonXP 2100+ (1733MHz). But if I was going to buy a Pentium-4, I would use the i850E with PC1066.
        • Quality PC2100 is frequently marketed as PC2400. On www.pricewatch.com, the difference between PC800 and PC2100 is $5 for 128MB.

          Perhaps, but you're probably comparing single-stick to single-stick. With RDRAM you have to buy a matched pair. So the right comparison is 2x128 PC800 ($80) versus 1x256 PC2400 ($51).

          Or go on up to 512MB. 2x256 PC800: $148. 1x512 PC2400: $114.

          So RDRAM costs an additional 57% for 256MB, or 30% for 512MB. Nice that it's no longer double the cost, but to me that is still a significant markup. Anyone know approximately how much of that is due to

          • (a) economies of scale,
          • (b) manufacturing cost after accounting for (a), or
          • (c) patent licenses?
          • "So RDRAM costs an additional 57% for 256MB, or 30% for 512MB. Nice that it's no longer double the cost, but to me that is still a significant markup. Anyone know approximately how much of that is due to

            (a) economies of scale,
            (b) manufacturing cost after accounting for (a), or
            (c) patent licenses?"

            Samsung makes most of the RDRAM sold--even Kingston RIMMs have Samsung chips. So you have a bit of a monopoly supply issue (Elpida and Infineon also make some, but Samsung accounts for better than 80% of production, if memory serves).

            RDRAM has a bigger die penalty, but this has shrunk (no pun...) as Samsung shifted to .13-um process and 300mm wafers. Production and testing costs are about the same as higher-end DDR. DDR-II will be just as expensive as RDRAM (but will be made in far larger quantities, and have more competition).

            Rambus' royalty on RDRAM is 1% of the selling price of the chips, so as memory prices have plummeted, so have Rambus' revenues.

            My original point is that when I decide to buy a computer, I pick a platform, not a memory. RDRAM is more expensive than quality DDR, but it amounts to less than the cost of shipping or a video card upgrade. For the last 2 years, I've been only building AMD systems. But the 2.53GHz P-4 looks pretty nice (for once). However, if you want to talk about a price difference, the premium you will pay for the higher-end P-4s makes the cost of memory wet change on the end of the bar.

            Athlon is about done. Hammer is in the wings, along with DDR-2. The problem with Athlon is that AMD's implementation of the EV6-bus spec. limits FSB to 133MHz, so adding memory bandwidth above what PC2100 can deliver makes no difference. I guess AMD could implement a 166MHz FSB/Memory bus, but why invest any more in validating an aging platform? Put it into Hammer, which has on-die memory controller(s), and can consume all the memory bandwidth you want to feed it.

  • They used the very latest RDRAM, but they used year-old, PC2100 DDR SDRAM. Hmm, I wonder who will win this battle!

    PC2400, PC2700, and PC3200 DDR SDRAM is out there. Why didn't they test against that?

    - A.P.
  • If you read the review, DDR & RDRAM are almost neck and neck in all benchmarks; RDRAM only beats out DDR by less than 1% on most tests, and only beats it in one benchmark overall.
  • by DarkHelmet ( 120004 ) <mark&seventhcycle,net> on Monday May 27, 2002 @07:52PM (#3592791) Homepage
    Pentium 4 2.4GHz, 1 gig of memory

    RDRAM 1066: 2.04 fps
    RDRAM 800:  2.03 fps

    DDR 2100:   2.03 fps
    DDR 3200:   2.05 fps

    Conclusion

    I think we have a clear winner here. PC3200 DDR wipes the floor with the competition. Anyone who's invested in RDRAM is a loser, and knows it :). Too bad it took such a blatant lead in these upcoming Doom3 benchmarks to prove it.

    Tune in next week to our program to find out how you really should say it.... Tom-ay-to, or Tom-ah-to.
