
Micron Demos SSD With 1GB/sec Throughput

Lucas123 writes "Micron demonstrated the culmination of numerous technology announcements this year with a solid state drive capable of 1GB/sec throughput through a PCIe slot. The SSD is based on Micron's 34nm technology, interleaving 64 NAND flash chips in parallel. While the technology, which is expected to ship over the next year, is currently aimed at high-end applications, a Micron executive said it's entirely possible that Micron's laptop and desktop SSDs could have similar performance in the near future by bypassing SATA interfaces."
  • by electrosoccertux ( 874415 ) on Thursday November 27, 2008 @04:32PM (#25910901)

    This reminds me of all the demos of holographic disc technology. It'll be on the market in just 1 year! But it never is, and it's never affordable for us /. browsing types.

    • Re:Yes that's nice. (Score:5, Informative)

      by Joce640k ( 829181 ) on Thursday November 27, 2008 @04:44PM (#25910977) Homepage

      Yeah, but ... Intel is shipping SSDs with 220Mb/s read/write:

      http://hardware.slashdot.org/article.pl?sid=08/11/25/015209 [slashdot.org]

      What's so fantastic about 1Gb/s? It's only four times faster... a RAID with four Intel devices will do it, so just put four of them in a box with a RAID controller and Bob's your uncle...

      • Re: (Score:1, Interesting)

        RAID does not actually work that way. Yes, you can get increased speeds with certain RAID configurations, but this is a whole different beast.

        • Re: (Score:3, Informative)

          by mikkelm ( 1000451 )

          Actually, in RAID 5, five 250MB/s drives will roughly offer you the same performance as a 1Gbps drive for most sequences of IO operations. SSDs feature almost linear scaling due to the extremely low seek times.

          • Re: (Score:3, Funny)

            by mikkelm ( 1000451 )

            Err, watching Thanksgiving football and posting on slashdot is not a good idea. s/1Gbps/1GB\/s

          • Re:Yes that's nice. (Score:5, Interesting)

            by lysergic.acid ( 845423 ) on Thursday November 27, 2008 @05:18PM (#25911167) Homepage
            that's still not quite as impressive as 1600 MB/sec throughput [micronblogs.com] using 2 drives (which can be integrated into a single-card solution).
          • by Smauler ( 915644 )

            That depends on your RAID system. I used to love the idea of RAID 5, until I actually looked at the benchmarks. Unless you have proper dedicated hardware calculating the parity (which really costs), writing to a RAID 5 is dog slow. Like, a lot slower than writing to a single disk in most cases. Reading is quicker, but far from wonderful on consumer-level hardware.

            In my opinion, if you want a consumer-level RAID solution that will actually offer increased performance, RAID 1+0 is a good option.

            • Re: (Score:3, Informative)

              by cheater512 ( 783349 )

              Erm, my home server with four disks in RAID 5 (software RAID) performs wonderfully.

              I've never seen the RAID take more than 2% CPU and write speeds are far faster than a single drive.

            • Parity calculation sounds like another good use for GPGPU.
            • by Eivind ( 15695 )

              Nonsense, writing to RAID-5 should not be "dog slow".

              In the absolute worst case (a silly RAID setup with one dedicated parity disk rather than spread-out parity), writing only a single block, with the parity disk and the disk holding that block both on the same channel so the writes have to happen sequentially, you'd end up with half the speed of a single disk.

              In real life a 4+1 RAID-5 setup is about 3 times as fast as a single disk, sometimes more. It depends on write patterns, of course

              • by Smauler ( 915644 )

                Have you actually looked at any benchmarks for RAID 5 on consumer devices? In real life, RAID 5 is useless for everything but very specialist applications. It just does not offer any performance increase, at all. It does, however, provide redundancy if a disk goes down.

                • by Eivind ( 15695 )

                  Consumer RAID cards are generally crap, true. There's no reason for anyone ever to buy a $100 RAID card.

                  Either use software RAID or, if you're more serious, buy a good RAID card. The el-cheapo RAID cards are pointless.

          • by kasperd ( 592156 )
            Actually with SSD the performance should be even better. Sequential reads on RAID-5 with n+1 hard disks will give you n times the speed of a single disk. The reason you don't get a factor of n+1 is that you have to skip the parity blocks on all the disks, and for skipping so few sectors the cost of a seek is the same as just reading them. But with SSD you don't have the cost of seeks, so you should be able to read just the sectors you need and get n+1 times the speed of a single disk.

            Writes
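            A quick Python sketch of that scaling model (illustrative only; the 250MB/s per-device figure is an assumption, not a measured spec):

            def raid5_read_throughput(per_device_mb_s, n_plus_1, seek_bound):
                # RAID-5 across n+1 devices: a seek-bound disk pays as much to
                # skip a parity block as to read it, so useful data arrives at
                # n x a single device; an SSD skips parity for free, giving
                # (n+1) x a single device.
                n = n_plus_1 - 1
                return per_device_mb_s * (n if seek_bound else n_plus_1)

            print(raid5_read_throughput(250, 5, seek_bound=True))   # HDD-like: 1000 MB/s
            print(raid5_read_throughput(250, 5, seek_bound=False))  # SSD-like: 1250 MB/s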
        • Re:Yes that's nice. (Score:5, Informative)

          by Glonoinha ( 587375 ) on Thursday November 27, 2008 @05:04PM (#25911091) Journal

          Say what?

          Actually, RAID can work EXACTLY this way. Set up a RAID 0 array of 250MB/s devices and, if the host controller can handle it - bingo, 1GB/s throughput from the array. There's a guy out there who RAID'ed six of the Gigabyte iRAM cards on a high-end RAID card a year ago - and he managed somewhere in the neighborhood of 800MB/s - surely a year later we can do better than that. The only limitations his rig encountered were the limited space available and, of course, the volatile nature of the iRAM cards.

          The Micron device appears to have handled the issue of volatile memory when the power goes down, and of getting all the bandwidth through a single-channel bus. When it becomes commercially available - count me in for one (when the price comes down enough for me to afford it).

          • "Can" is not the same as "does". Have you checked the actual performance in *all* the situations, not just raw read speed?

            • Re: (Score:3, Insightful)

              And you think the 1GB/sec quoted in the title is actual performance in all situations, not just raw read speed?

              No, I was commenting on RAID performance, and on Glonoinha's anecdotal account of how it works.

                • Re: (Score:3, Interesting)

                  by Glonoinha ( 587375 )

                  There are a few videos on YouTube of guys who RAID'ed iRAMs, showing just insane performance.

                  If it weren't for the cost of adding four of these (plus four 1G sticks of pc3200 on each) I would have already scored a similar rig - but right now I'm working on a limited R&D budget. Maybe next year.

                  That said - these are really, really sweet - but I have to ask whether the RAID'ed iRAM or the new Micron SSD can hold a candle to a ramdisk ( see also : http://www.ramdisk.tk/ [ramdisk.tk] ) - I figure on a machine that can

                  • Trouble with RAM is that it disappears when the power fails.

                    Even with the iRAM you lose it after 16 hours if I understand correctly.

                    So SSD has a real advantage there by the sound of things (in being more like a real hard disk).

                    • by hbr ( 556774 )

                      Trouble with RAM is that it disappears when the power fails.

                      Ooops - obviously I mean you lose the information stored in the RAM, and not the RAM itself!

          • by neoform ( 551705 )

            I bet he used that disk array for backup. I know I would.

          • Check out http://www.fusionio.com/ [www.fusionio.com]. You can buy something like Micron's future product from them today. Albeit of course there is the little matter of price.

            C//

        • Yes, but you know what I mean. Raw transfer speeds can scale quite linearly when you put multiple storage devices in parallel.

          It's just a case of sorting out the controllers. SATA isn't fast enough for 1GB/s, so I assume it will be a mini-PCIe card or something like that.

          If it is mini-PCIe then I'll definitely be getting one for my Eee PC.

      • by nbert ( 785663 )
        If you are talking about RAID 0 you are almost correct (two disks in RAID 0 only get close to 2x the speed - they will never reach it). The problem is that all data is gone if one drive dies, and the chances of that grow with every disk you add. The probability is 1 - (1 - p)^n (n being the number of disks, p the per-disk failure rate). So at a failure rate of 2% per disk in 3 years you get 8% for 4 disks or 15% for 8. Of course you could compensate for this by combining parity with striping (RAID 0+1 or RAID 5 for example), but
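        A minimal Python sketch of that formula (illustrative only):

        def array_failure_probability(p_disk, n_disks):
            # P(at least one of n independent disks fails) = 1 - (1 - p)^n
            return 1 - (1 - p_disk) ** n_disks

        for n in (1, 4, 8):
            print(n, round(array_failure_probability(0.02, n), 3))
        # -> 1 0.02 / 4 0.078 (~8%) / 8 0.149 (~15%)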
        • by dfghjk ( 711126 )

          The failure and scaling issues you mention aren't any different from those that Micron faces inside their product since the approach they take to performance is basically the same. You think doubling the flash parts and the number of channels isn't directly analogous to RAID 0?

          • by nbert ( 785663 )
            Not at all: if a disk in RAID 0 fails, all data is gone. If one block within an SSD fails, it will affect only the files which had parts stored in it.

            They are not using multiple disks. The reason why the article mentions two disks is because they needed a source and a target.
            • And if I short two pins on one of the flash chips, all data is lost from all the chips. :P

              What they are doing is taking a bunch of flash chips and using RAID 0 on them.
              One disk, but with many chips in a RAID 0 configuration - hence the speed.
              A single flash chip cannot hit 1GB/s.

      • Bob's your uncle

        He is! But... how did *you* know that?

      • Joce, it doesn't say 1Gb/s. It says 1GB/s, which is 32x faster than 220Mb/s SSDs. (Bear in mind, 1GB/s may be a typo....)
    • Re:Yes that's nice. (Score:5, Informative)

      by Kjella ( 173770 ) on Thursday November 27, 2008 @04:45PM (#25910987) Homepage

      This reminds me of all the demos of holographic disc technology. It'll be on the market in just 1 year!

      That one has always been in the mysterious future (3-10 years away, never next year) and never really showed up outside of labs. SSDs on the other hand aren't really "new": they're in essence the flash chips we've been using in cameras and USB sticks for many years, plus RAID 0, which has long been a well-known way to make slow storage devices faster by running them in parallel. There's quite a bit more controller magic than that, but nothing really revolutionary in the creation of SSDs - just the regular miniaturization process that's happening all around, which means they're reaching capacities and speeds that are useful for main computer storage.

      • Re: (Score:3, Insightful)

        by TheRaven64 ( 641858 )
        300GB+ holographic disks are shipping now, but they definitely aren't in the 'affordable to /.-browsing types' category.
        • by Yvan256 ( 722131 )

          I can buy an external USB2, FireWire 400 and eSATA 1TB drive at Costco, today, for $235 CAD. How much are those puny 0.3TB holographic disks, and how fast and reliable are they?

          • Re:Yes that's nice. (Score:4, Interesting)

            by TheRaven64 ( 641858 ) on Thursday November 27, 2008 @05:36PM (#25911265) Journal
            The InPhase disks are $180 and the drives are $18,000. Unlike your external disk, the disks are rated to last 50 years. Not sure how much the Optware versions cost, but they start at 1TB and go up from there.
            • by Yvan256 ( 722131 )

              Even if we take for granted that:
              - hard drives will never increase in capacity
              - the price of a 1TB drive will never drop

              Add to that the following assumptions:
              - a hard drive never lasts more than 1 year
              - we need to be in RAID 1 (two drives) to be safe

              That's $235 CAD per drive, multiplied by two drives per year, multiplied by 50 years: $23,500 CAD.

              I'll assume your $18,000 InPhase drive price is in US$, which means it would cost me $22,000 CAD for the drive alone without any InPhase disks.

              So, it's either shell

              • Re: (Score:3, Informative)

                by TheRaven64 ( 641858 )

                You are assuming you'd only want a single disk. The target market is people who are generating several disks' worth of data per day. If you are recording HD footage, and especially if you are editing it, then you burn through a TB very quickly. The cost of the drive becomes tiny per disk if you're using a lot of them. Even if you're only burning one disk a day, you're paying $50/disk over the course of the year. If you burn two a day then it brings the cost of disk and drive to around $200 each, very cheap

                • Re: (Score:3, Interesting)

                  by Yvan256 ( 722131 )

                  In that context, then yes I do see how that would be a huge advantage.

                  I was looking at this as an average /. reader, as you say.

              • Hey, that's great! I gotta buy me some stock from this InPhase company... apparently their drives never break! Imagine that! Not like those sorry-assed revolving dohickeys that break every year. Planned obsolescence...
    • Re:Yes that's nice. (Score:5, Informative)

      by SanityInAnarchy ( 655584 ) <ninja@slaphack.com> on Thursday November 27, 2008 @04:51PM (#25911029) Journal

      It was about $300 extra for a 128 gig SSD in this Dell laptop. I just ran a casual test. Keep in mind, this is currently being used (lightly), and I haven't done anything to improve the results of this test -- in fact, probably just the opposite, as the file in question was downloaded via BitTorrent, and I've never defragmented this hard drive. It certainly hasn't been read since the last boot.

      dd if=foo.avi of=/dev/null
      348459+1 records in
      348459+1 records out
      178411124 bytes (178 MB) copied, 1.82521 s, 97.7 MB/s

      Keep in mind, that's throughput -- it gains nothing from the real killer feature of no seek times.

      I can always buy big, slow spinning disks and put them in a NAS somewhere. I can take old desktops, put Linux on them, and turn them into a NAS. For the kind of stuff that takes hundreds of gigs, I don't need much speed.

      But for the places where it counts -- like booting an OS -- there is a definite, real benefit, and it's not entirely out of reach, if you care about this kind of thing.

      • Re: (Score:3, Interesting)

        by nbert ( 785663 )
        AFAIK no SSD apart from Intel's newest line provides any real advantage over spinning disks. They are faster in some areas, but in others they perform very poorly (write times, for example). You'll get far more realistic numbers if you specify a real file as the of= target. Here is the difference:

        Desktop nerdbert$ dd if=test.zip of=/dev/null
        136476+1 records in
        136476+1 records out
        69876088 bytes transferred in 2.249553 secs (31062211 bytes/sec)
        Desktop nerdbert$ dd if=test.zip of=Herbietest
        136476+1 records in
        136
        • I'd really like to see how these drives perform compiling HUGE codebases ... java/c/whatnot
          • The limit on compiling has always been the processor for me, which spends most of its time at 90% or higher (one core only; I really wish they'd multi-thread gcc), so I doubt the drive would help.

        • by emj ( 15659 )

          It's funny, I seldom get better than 72MB/s with dd on my 4-year-old system, even if it's serving from the OS filesystem cache. I find that kind of slow, actually...

          Though you can't beat the seek of flash devices, serving static images over HTTP from flash is a killer application.

        • I was deliberately testing only reads. If I was to test writes, it would probably be if=/dev/zero, of=whatever. But that would allow filesystem buffering to become more of a factor...

          A quick test shows somewhat less than 60 megs/sec. But then, I don't really need writes to be as fast, simply because I'm almost never writing that much.
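          For what it's worth, a rough Python equivalent of such a write test (a sketch only; assumes a Unix-like system, and the file name and sizes are arbitrary):

          import os, time

          size = 256 * 1024 * 1024        # 256 MiB test file
          chunk = b"\0" * (1024 * 1024)   # write in 1 MiB chunks

          start = time.time()
          with open("writetest.bin", "wb") as f:
              for _ in range(size // len(chunk)):
                  f.write(chunk)
              f.flush()
              os.fsync(f.fileno())        # force the data past the filesystem buffers
          print("%.1f MB/s" % (size / (time.time() - start) / 1e6))
          os.remove("writetest.bin")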

      • Disk fragmentation is essentially unimportant on SSDs, and in fact, comparing a heavily-fragmented SSD vs. a heavily-fragmented mechanical disk would bias the result towards the SSD much more.

        Assuming a reasonably smart file system, fragmentation will only affect performance through increased seek times. So you may actually be getting a big benefit from your SSD when reading that file, depending on how much your BitTorrent client fragments its download files.

  • call me... (Score:1, Offtopic)

    by cosmocain ( 1060326 )
    ...when it's there. i'd take one - duke nukem forever would just run fine on this thingy!
  • No SATA, eh? (Score:5, Interesting)

    by fuzzyfuzzyfungus ( 1223518 ) on Thursday November 27, 2008 @04:43PM (#25910967) Journal
    SSDs built into mini-PCIe cards aren't new, so obviously they are possible (and I remember the concept going back as far as 44-pin IDE drives on special PCI cards). Historically, though, these cards have appeared, from the perspective of the computer, as ordinary IDE or SATA adapters that just happen to have storage attached.

    Does anybody know if this widget from Micron is similar, or are they actually pushing some new flavor of interconnect that will require BIOS tweaks and/or special drivers?
    • by zdzichu ( 100333 )

      They may present the SSD as a memory region accessible through the PCI address space, the same way graphics cards present their memory. There's already a driver for that kind of memory; it has been included in the Linux kernel for a few years and was used to implement swap-over-videoram.

    • by kasperd ( 592156 )

      or are they actually pushing some new flavor of interconnect that will require BIOS tweaks and/or special drivers?

      BIOS tweaks shouldn't be necessary. ATA controller cards that you plugged into a PCI slot came with a ROM chip on the card containing a driver that would allow the BIOS to use the disk. You could even work around bugs in the BIOS's own driver for the onboard controller that way. I'm sure this new card will come with a driver on the board that will allow the BIOS to boot from it. However such a drive

    • Historically, though, these cards have appeared, from the perspective of the computer, as ordinary IDE or SATA adapters that just happen to have storage attached.

      SATA can't handle half the bandwidth this part can provide.

  • Oblig (Score:2, Interesting)

    by PearsSoap ( 1384741 )
    64 NAND flash chips in parallel should be enough for anyone!
    I'm curious, what are the applications for this kind of disk speed?
    • Re:Oblig (Score:5, Funny)

      by Narnie ( 1349029 ) on Thursday November 27, 2008 @04:54PM (#25911043)

      Perhaps loading Vista in less than a minute?

      Maybe?

      • by Poorcku ( 831174 )
        actually you are not that far off... i make my vista installs from a USB stick. Customized with vLite, the install takes no more than 15 mins. And i am a consumer, not an IT specialist :)
    • Re: (Score:3, Insightful)

      by Ariastis ( 797888 )

      Porn, what else!

      Those videos have to load fast, yknow...

    • Re: (Score:3, Informative)

      by Yetihehe ( 971185 )
      Databases, file servers, anything which needs to load fast from a disk.
    • Re:Oblig (Score:5, Interesting)

      by im_thatoneguy ( 819432 ) on Thursday November 27, 2008 @05:01PM (#25911075)

      Uncompressed HD, 2K and 4K film playback and capture.

      At work we regularly work with dozens of layers of 2048x1024 32-bit uncompressed footage at the same time.

      • by sam0737 ( 648914 )

        How long would a 128GB drive give you?...

        2048*1024 @ 32bpp @ 25fps... it's like 10 minutes?

        • Re: (Score:3, Informative)

          by rkww ( 675767 )
          Feeding a 4k digital projector [sonybiz.net] at 24 fps requires 4096 * 2160 * 4 * 24 = 810 MB / second, so 128GB gives you about 150 seconds (and a 90 minute film eats 4.2 TB). There aren't, currently, many systems which can sustain that kind of data rate. It takes a lot of drives, and multiple layers of striping.
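          The same arithmetic in a few lines of Python (just restating the post's numbers):

          width, height, bytes_per_px, fps = 4096, 2160, 4, 24
          rate = width * height * bytes_per_px * fps     # bytes per second
          print(rate / 2**20)                            # ~810 MiB/s
          print(128e9 / rate)                            # ~150 s from a 128GB drive
          print(rate * 90 * 60 / 2**40)                  # ~4.2 TiB for a 90-minute film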
    • Simple: Swap file.

      With this kind of SSD throughput it doesn't become necessary for an OS to burn CPU cycles populating a file cache in memory - the technique that hides the slack performance of modern storage compared to system RAM.

      In fact you could swap a lot of process memory out of RAM and only experience a tiny perceived slowdown in application performance. To put a guesstimate on it: you could run an application with a footprint 5-10 times bigger than system memory with only a small
  • "throughput" isn't that important. Random reads/writes is what shows that most of SSD are crappy and weak unfortunately.

    The worse thing is that everyone things that throughput is so important :-/

    • Re: (Score:3, Interesting)

      by myxiplx ( 906307 )

      Trust me, throughput is still important if you're running these in a fileserver on a fast link (10Gb Ethernet, InfiniBand, Fibre Channel, etc.). The read & write speeds of standard SSDs mean you need a whole bunch in parallel to prevent them becoming a bottleneck, which makes them hard to integrate into existing servers.

      In contrast, a single fast PCIe SSD can drop right in. There's definitely a market for high-bandwidth SSDs in high-end storage devices.

    • Re: (Score:3, Informative)

      the Micron video shows a 2-drive setup achieving 200,000 I/Os per second; at 2KB per random read, that's ~400MB/sec.

      a benchmark performed by Linux.com [linux.com] also shows that SSD absolutely creams SATA [linux.com], even 6 SATA drives in RAID 6, in terms of random seek. in other tests a single Mtron 16GB SSD gave 111 MB/s sustained read with 0.1 ms access time, outstripping the WD Raptor 150, which was the fastest SATA drive at the time the test was performed (12/13/07). the only area where SSD lags behind is random write, which it su
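      As a quick sanity check of those figures (an illustrative snippet, not from the original post):

      def mb_per_sec(iops, block_bytes):
          # throughput = operations/second x bytes/operation
          return iops * block_bytes / 1e6

      print(mb_per_sec(200000, 2048))  # 2KB random reads at 200k IOPS -> ~410 MB/s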

      • the only area where SSD lags behind is random write

        And $/bit.

        But it usually takes a superior technology to start the price curve declining rapidly. It looks like they've finally just achieved it.

        I hope they put blinky lights on these things so I can tell if the OS is hosed or not.

        • true. new technologies always start off prohibitively expensive, but a killer application--or in this case, implementation--could drive widespread adoption to the point where economies of scale come into play.

    • Re: (Score:3, Insightful)

      Actually, random reads are a very big strong point of SSDs, because they have two orders of magnitude lower seek time than a platter drive.

      Random writes are good on SLC SSDs (the expensive variety) and average on MLC SSDs (although many MLC drives pause after too many random writes at the moment).

  • So is this pretty much like placing the chips in a RAID within a single device? So 1 chip failure brings down all the data and makes the entire drive unreadable until you replace the bad chip? How easily can you plug in a new chip to recover all your data?

    • Re:Interleave (Score:5, Interesting)

      by billcopc ( 196330 ) <vrillco@yahoo.com> on Thursday November 27, 2008 @05:30PM (#25911227) Homepage

      You're implying that SSDs fail as often and as disastrously as fast-spinning disk platters.

      They don't, which is why a beowulf cluster of SSDs is a beautiful thing. My concern, though, is that DDR2 can deliver much faster throughput and ns-scale latency, while its density trails a bit behind SSD - but not that badly.

      With 4GB DDR2 modules hitting the mainstream, and 8GB modules in the high end, what's stopping someone from putting a bunch of them on something like Gigabyte's i-RAM (minus the stupid SATA bottleneck) and having themselves a DIY uber-SSD? Sure, there are differences, but it's nothing a battery can't fix.

      • With 4GB DDR2 modules hitting the mainstream, and 8GB modules in the high end, what's stopping someone from putting a bunch of them on something like Gigabyte's i-RAM (minus the stupid SATA bottleneck) and having themselves a DIY uber-SSD?

        Possibly power density and/or signal integrity from hooking up so many devices.

        Not insurmountable. But not straightforward, either.

        But you need a use case to pay for the development and devices. Fast long-term bulk storage built out of active devices isn't it: with that many in a box, backup power

      • by b4upoo ( 166390 )

        What alterations does a PC need to really take advantage of this blistering high-speed device? Obviously we can't pump that kind of speed through the internet. I can see certain programs being rewritten to make good use of such speed, such as compression and context-search scripts.

        • Re:Interleave (Score:4, Insightful)

          by owlstead ( 636356 ) on Thursday November 27, 2008 @07:56PM (#25911991)

          You *are* joking, right? Currently, memory bandwidth is only a minor problem compared to disk performance. Disk IO is either really slow or really, really expensive. Even nowadays, I can download faster than I can save / PAR2 and unrar my binaries. I won't go into playing games at the same time: impossible. Disk speed is a slow crawl. And that's just consumer stuff; I won't go into tuning high-throughput databases.

      • Re: (Score:3, Informative)

        by myxiplx ( 906307 )

        You mean like this: http://www.mars-tech.com/ans-9010b.htm [mars-tech.com]

        And the battery doesn't need to be huge either - it backs your data up to a flash drive if the power cut lasts more than a few seconds.

    • by kasperd ( 592156 )
      If the data is stored redundantly, then a single chip failing does not render your data inaccessible. You would be able to go on without even noticing (which in some sense is bad, because you wouldn't replace it until it is too late). If data is not stored redundantly, then you cannot replace a chip to get your data back. It may be possible to replace a chip to make the card work again, but you'd have to reformat and start from scratch.

      I guess the data is not stored redundantly. Making it redundant would
  • by Max Romantschuk ( 132276 ) <max@romantschuk.fi> on Thursday November 27, 2008 @05:09PM (#25911127) Homepage

    ...for really high bandwidth stuff.

    For example, these puppies from Edgeware, designed for video streaming, can do 20GB/sec:
    http://www.edgeware.tv/products/index.html [edgeware.tv]

    (And these aren't vaporware, I've seen the actual hardware in action.)

    Granted it's very custom stuff, but putting tech like this in a box with a SATA interface is really just evolutionary... Cool nonetheless, though. :)

    • Granted it's very custom stuff, but putting tech like this in a box with a SATA interface is really just evolutionary... Cool nonetheless, though. :)

      Well, it's already happened. Take a look at the RAMSAN [superssd.com]. It kicks ass.

    • I was about to be really impressed, but their website shows hardware doing 20Gb/sec, not GB/sec. Did you really mean that?

      • Whoops... No, my bad - shift must have gotten stuck and the kids must have gotten my attention while I was previewing my post... ;)

        But 20 Gb/sec already saturates two 10 Gb network ports, which is enough to impress me for now...

  • So should we start thinking about replacing SATA with something else that can handle this?
    • by Yvan256 ( 722131 )

      The end of SATA? Dude I'm still using parallel IDE hard drives over here, and some are over FireWire 400 or USB 2.0.

      In any case, Firewire 1600 and Firewire 3200 are just around the corner.

  • While it's true that SATA had its sights set very low (unfortunately), it's not like PATA's limitations.

    A single SATA-I channel can deliver 150MB/s and a SATA-II channel can deliver 300MB/s, but unlike PATA, SATA channels are independent (no master/slave sharing relationships). Most motherboards come with 4 SATA ports of some kind right on the motherboard, so 600MB/s or 1200MB/s can be delivered via RAID 0 or some other striped setup.

    An additional factor is that not all SATA controllers are equal. Most cannot handle a
      • by hbr ( 556774 )

        While it's true that SATA had its sights set very low (unfortunately), it's not like PATA's limitations.

        ...like losing all your nails when you try to unplug the little bastards :-)

  • Storage on an expansion card is nothing new; my Amstrad 1512 had a 40MB HDD on an IDE card.

  • ...some competition! Seriously, I think they'll be better off. There are probably too many nervous nellies out there unwilling to dive into non-SAS/SATA/FC storage with a newcomer like Fusion I/O.

    BTW, WTF is up with "The second generation of PCIe is expected out next year ..."? It's been out for a while now; I've seen motherboards, GPUs, and IB HCAs that support gen2.

  • It is exciting to see this sort of development on the server front, though these technologies never seem to offer the huge advantage we'd expect. The fact that multiple companies are going in multiple directions for storage technology is excellent for the marketplace.

    It seems unlikely that this will really benefit servers because generally for applications that need high IOPS numbers, you're looking at a SAN or some sort of fibre-optic storage.

    Database and related apps (like SAP, Oracle, or Exchange) need

  • Bottleneck removed (Score:3, Interesting)

    by w0mprat ( 1317953 ) on Thursday November 27, 2008 @06:23PM (#25911495)
    This would be the first time a storage device takes a significant bite out of system memory bandwidth.

    Indeed, Intel's SSD has an internal NCQ-like command queue system to mask host latency. Common storage controllers are (obviously) not up to the job.

    1GB/s from a single drive - that finally brings storage speed back in line with Moore's law, which, it seems, only capacity has been following.
  • say with a power outage. I've lost a couple of drives over the years that way.

  • So if we are going to saturate our data links with fast SSDs now, why not get the SSD onto the CPU die together with a GPU, BIOS, OS, and everything else? There are many embedded SOCs around built this way, but they are aimed at low-power always-on applications. But I think the era when we will have a desktop SOC is not so far away, if we find a way to keep it cool and cheap.
  • Given that SATA 1 was capable of 1.5 Gbps and SATA 2 is capable of 3.0 Gbps, why the need to go PCIe? This SSD from Micron doesn't even exceed the throughput of the original SATA spec.
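    Worth noting: 1GB/sec and 1.5/3.0 Gbps are different units. A quick Python conversion (assuming SATA's 8b/10b encoding, i.e. 10 line bits per data byte):

    for line_rate_gbps in (1.5, 3.0):
        # 8b/10b encoding: divide the line rate by 10 to get data bytes/second
        print(line_rate_gbps * 1e9 / 10 / 1e6, "MB/s")   # 150.0, 300.0
    print(1e9 / 1e6, "MB/s needed for a 1GB/sec drive")  # 1000.0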
