Data Storage

Consumer-Grade SSDs Survive Two Petabytes of Writes (125 comments)

crookedvulture writes: The SSD Endurance Experiment, previously covered on Slashdot, has reached another big milestone: two freaking petabytes of writes. That's an astounding total for consumer-grade drives rated to survive no more than a few hundred terabytes. Only two of the initial six subjects made it to 2PB. The Kingston HyperX 3K, Intel 335 Series, and Samsung 840 Series expired on the road to 1PB, while the Corsair Neutron GTX faltered at 1.2PB. The Samsung 840 Pro continues despite logging thousands of reallocated sectors, and it has remained completely error-free throughout the experiment, unlike a second HyperX, which has suffered a couple of uncorrectable errors. The second HyperX is mostly intact otherwise, though its built-in compression tech has reduced the 2PB of host writes to just 1.4PB of flash writes. Even accounting for compression, the flash in the second HyperX has proven to be far more robust than in the first. That difference highlights the impact normal manufacturing variances can have on flash wear. It also illustrates why the experiment's sample size is too small to draw definitive conclusions about the durability of specific models. However, the fact that all the drives far exceeded their endurance specifications bodes well for the endurance of consumer-grade SSDs in general.
This discussion has been archived. No new comments can be posted.


  • HDD endurance? (Score:3, Interesting)

    by Anonymous Coward on Thursday December 04, 2014 @03:28PM (#48524899)
    Just out of curiosity, how well do traditional HDDs fare in comparison?
    • Re:HDD endurance? (Score:5, Informative)

      by PlusFiveTroll ( 754249 ) on Thursday December 04, 2014 @03:50PM (#48525043) Homepage

      In average desktop use, and even on a non-video media workstation, it's rare to see a drive that has written 10TB. Most people will never wear out an SSD through outright media wear.

      • by jedidiah ( 1196 )

        A PVR drive could easily see 17TB of writes during a year and that's just a very conservative estimate based on a small number of tuners and broadcast content.

        • Re:HDD endurance? (Score:5, Insightful)

          by PlusFiveTroll ( 754249 ) on Thursday December 04, 2014 @04:47PM (#48525505) Homepage

          Of course, video writing is the perfect application for hard drives: a constant datastream at a fixed rate and large amounts of data over time, with little random I/O and only bulk deletes. If you are trying to stick an SSD in a PVR, you are doing it wrong.

          • A constant datastream at a fixed rate

            Not completely fixed (it depends on the channel being recorded at the time), I would think, but yeah, a fixed rate that's substantially below an HD's write speed.

            Remember that SSDs are relatively slow at writing compared to reading. HDs are generally equally fast in either direction, so given a sufficiently sequential write process I can see them actually being able to write faster than the SSD.

            • It's been a while since you shopped for an SSD? The larger capacity ones have nearly equal read and write speeds except for the most extreme budget brands.

              • by Bengie ( 1121981 )
                SATA3.0 bottleneck
              • SSD write speed will be much slower if the drive is full, perhaps 2x or 3x slower than on a brand-new SSD.
              • by AmiMoJo ( 196126 ) *

                In most cases they only hit their advertised write speeds with highly compressible data. With random data the write speed typically drops to less than half the read speed. It also depends on the nature of the writes, as some scenarios cause a massive amount of write amplification.

          • by AmiMoJo ( 196126 ) *

            Japanese PVRs need multiple HDDs because a single one can't keep up. A few years ago they started to record everything... All over-the-air channels simultaneously, 24/7, allowing you to watch anything that was broadcast at any time in the last week. No need to set up recording for anything, just grab it any time up to a week after broadcast.

            Once SSDs get up to capacity they would be ideal for that application. Until then, they use multiple HDDs and a fair-sized RAM cache.

            • I work for a Danish IPTV provider, and we do the same thing. Everything is recorded and kept for at least 7 days, so our customers can watch whenever they want. It's proven to be extremely popular.

        • Re:HDD endurance? (Score:4, Informative)

          by Hamsterdan ( 815291 ) on Thursday December 04, 2014 @04:56PM (#48525583)

          Recording TV is not a typical scenario. Besides, at around 8GB/hour (HD), that's around 2000 hours a year, which is little more than what my BeyondTV machine does, and its 3TB WD Green is still alive and kicking. You just have to disable the insanely aggressive head parking on those drives, otherwise they might die...

          http://www.storagereview.com/h... [storagereview.com]

        • So then, 0.1 times what an SSD will take, even if you keep it for a decade?

      • Re: (Score:2, Informative)

        Comment removed based on user account deletion
        • Re:HDD endurance? (Score:5, Informative)

          by Bengie ( 1121981 ) on Thursday December 04, 2014 @07:45PM (#48527031)
          The controller is just as likely to fail on a regular HD. Overall, SSDs have half the warranty claim rate of mechanical HDs. Samsung is so sure of their SSDs that they offer a 10-year warranty on their new ones, or 150TB written, which is a lot of writes for a 128GB drive. Show me a mechanical drive with a 10-year warranty for under $150.
          • by Mashiki ( 184564 )

            Well, before the market crash on HDD prices you could see the rare one with a 10-year warranty for $150. The Fujitsu drives I used to use were consumer grade and had a 10-year warranty. Of course, then we went to 5 years, then 3, and I think some are even 1 year now. It's just like the market crash back in the late '90s and early '00s. Give it a few years and the warranties will start coming back up... that is, if they survive SSDs becoming the mainstream choice for storage.

        • Hairy, the difference is you can pick the controller. SandForce and OCZ? Run.

          Crucial is buggy and will die without a firmware update.

          I've run SanDisk and Samsung Pros in RAID 0 for years. Their controllers are good, and I asked Micro Center which brands had the lowest RMA rates. You should switch, Hairy, it's not 2010 anymore and they've improved. Yes, I owned two 2011 Seagates that died :-)

          No way will I go back. SWTOR and Battlefield 4 are unusable on a mechanical drive.

          Also, Intel supports TRIM in RAID 0 and AMD doesn't, which blows for AMD fans, but with that combo

          • Comment removed based on user account deletion
            • I read Maximum PC and Google a few others. Samsung and SanDisk use proprietary controllers. Toshiba uses OCZ, which supposedly fixed their modded, crappy SandForce.

              Intel had a few buggy firmwares back in 2010; the new ones are fine. I do not trust OCZ, anything SandForce, or Crucial. The newer ones use Marvell too, which is OK. But OCZ truly does suck. I still have a mechanical drive for backup files, and OneDrive too.

              I've had 4 SSDs in 2 RAIDs for almost 2 years. They've survived probably 10 reimages and full-disk writes :-)

      • Doesn't anyone hibernate their computer at the end of the day? 8 GB × 365 days ≈ 3 TB in one year for my main machine.

        • Since every time I tried that it caused weird issues every week or so, I would hazard a guess that next to nobody does that, yeah. At least not on Windows.
          But even if you do, SSDs can handle that load: 1 PB / 3 TB per year ≈ 300 years (or about 100 years if you count another 16GB of non-hibernate writes per day). Thus cell wear is not your most likely problem.
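A quick back-of-the-envelope check of that arithmetic, as a minimal Python sketch; the 8GB hibernate image, the 16GB of other daily writes, and the ~1PB endurance figure are all taken from the comments above, not from any drive specification:

```python
# Rough endurance arithmetic using the figures quoted in this thread.
# All inputs are commenters' estimates, not drive specifications.

hibernate_gb_per_day = 8      # one hibernate image per day
other_gb_per_day = 16         # assumed additional daily writes
endurance_tb = 1000           # ~1 PB, the milestone discussed in the article

hibernate_tb_per_year = hibernate_gb_per_day * 365 / 1000
total_tb_per_year = (hibernate_gb_per_day + other_gb_per_day) * 365 / 1000

print(f"Hibernate only:    ~{endurance_tb / hibernate_tb_per_year:.0f} years")
print(f"With other writes: ~{endurance_tb / total_tb_per_year:.0f} years")
```

The exact numbers shift a little with your unit conventions, but the conclusion above (cell wear is not the limiting factor for this workload) doesn't.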

          • by bwcbwc ( 601780 )

            I wouldn't use 1 PB as the benchmark; only half of the drives in the sample made it that far. But 3 TB per year means 33 years to even reach 100 TB. It's pretty likely your entire computer will be obsolete by then, even if Moore's law bottoms out in the next decade or so.

    • by Anonymous Coward on Thursday December 04, 2014 @03:55PM (#48525067)

      Just out of curiosity, how well do traditional HDDs fare in comparison?

      They cave in big time after around 150 Win98 shutdown/restarts...

    • Re:HDD endurance? (Score:4, Informative)

      by Luckyo ( 1726890 ) on Thursday December 04, 2014 @04:13PM (#48525179)

      Impossible to test in the same way due to time constraints. Filling the entire hard drive takes a very long time, unlike a much smaller and much faster SSD.

      • by rthille ( 8526 )

        Sequential I/O with big write sizes can be pretty fast (~180MB/s) on modern large drives. SSDs can be ~4x that, so the tests would only take ~4x as long for similar data sizes.

        • by Luckyo ( 1726890 )

          Now remind yourself of the comparable hard drive's size, and comprehend that you're looking at something that is several times slower AND several times larger.

          Now consider that this test for SSDs has been running for well over a year now. How many years, or even decades, would you need to run the same test on HDDs?

      • by afidel ( 530433 )

        Not really; for streaming writes an HDD is only about a third of the speed of these drives (WD Caviar Black 1TB at 150MB/s sustained streaming writes vs. the Intel 335 at 450MB/s).

        • by Luckyo ( 1726890 )

          Now compare the relative size of a similar HDD. Now comprehend that this test has been running for over a year. Do the math on how many years it would take for a similar test to be done on HDDs.

          Then understand the original statement.

      • by MrL0G1C ( 867445 )

        Eh? It would take less than 100 days to write 1PB to a 3TB drive. One could write to a 240GB area of the drive repeatedly if they wanted to.

        A 3TB drive can be filled roughly 3 to 4 times a day.

        • by Luckyo ( 1726890 )

          Seriously, you're the third person on Slashdot who hasn't even read the OP.

          The SSD test has been going on for over a year now. Consider the fact that the best-case scenario for an HDD means it's several times slower as well as several times larger. Understand that you're looking at many years, possibly over a decade, of test time.

          • by MrL0G1C ( 867445 )

            You haven't done any math; if you had, you'd realise they haven't been running the test 24/7.

            Look at the screengrab on this page:
            http://techreport.com/review/2... [techreport.com]

            Here:
            http://techreport.com/r.x/ssd-... [techreport.com]

            It shows that their test was rather slow; they were only writing at 208MB/s.

            At that speed it would take 58 days, 19 hours, and 45 minutes to write 1 petabyte.

            I already did the math for an HDD: it would take about 100 days to write 1 petabyte to an HDD.

            Since they started the test in 2013, they could have done that easi
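For reference, a minimal sketch of the arithmetic behind those figures; the 208MB/s and ~115MB/s rates come from the comments above, 1PB is taken as 10^15 bytes, and the results shift by a few days if you use binary units instead:

```python
# Days needed to write 1 PB at the sequential rates quoted in this thread.

PETABYTE = 1e15          # decimal petabyte, in bytes
SECONDS_PER_DAY = 86400

def days_to_write(total_bytes, mb_per_second):
    """Time to stream total_bytes at a fixed rate, ignoring any pauses."""
    return total_bytes / (mb_per_second * 1e6) / SECONDS_PER_DAY

print(f"SSD test pace (208 MB/s):  {days_to_write(PETABYTE, 208):.0f} days")
print(f"HDD streaming (~115 MB/s): {days_to_write(PETABYTE, 115):.0f} days")
```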

            • by Luckyo ( 1726890 )

              No, the reason for the slowness is that they actually, shockingly, TEST the drive.

              Not just write on it.

              It's pretty sad that you actually went to the length of posting a link to the article that straight up debunks your claims.

              • by MrL0G1C ( 867445 )

                No, sad is people who can't admit they're wrong.

                • by Luckyo ( 1726890 )

                  Ditto. Now read the article and consider doing just that. I've been following the test for the year it's been running, and it's very obvious to anyone tech minded why this test is unfeasible on HDDs.

                  • by MrL0G1C ( 867445 )

                    and it's very obvious to anyone tech minded why this test is unfeasible on HDDs.

                    I have explained how it is feasible; you, on the other hand, have insisted it isn't, without giving any logical reason why.

                    • by Luckyo ( 1726890 )

                      No, you have explained why a straight-up "write only and do nothing else" test that you yourself devised is remotely feasible.

                      What you have not even touched on is the actual subject: why the test that techreport has been performing is not feasible on HDDs. This in spite of linking to the actual testing methodology a few posts before this one.

                    • by MrL0G1C ( 867445 )

                      This isn't rocket science; all that is needed is a script to copy files to the HDD, delete them, rinse and repeat, and check SMART stats occasionally.

                      HDDs write at over 100MB/s, and the test they did wasn't a whole lot faster. It is simple to deduce that the test is easily possible on an HDD. No one has said the drives have to be filled the same number of times, merely that a petabyte has to be written. That can be done, there is no reason why it can't be done, and you haven't given any valid reason why it can't
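As a rough illustration of the kind of script being described, here is a minimal, hypothetical sketch in Python. The mount point, device path, and pass size are placeholders, smartmontools is assumed to be installed, and the script wipes whatever directory it is pointed at, so treat it as an outline rather than something to run on a drive you care about.

```python
#!/usr/bin/env python3
"""Outline of a crude HDD write-endurance loop: fill, delete, repeat,
and log SMART attributes every few passes. All paths are hypothetical."""
import os
import subprocess

MOUNT = "/mnt/endurance"        # hypothetical dedicated test filesystem
DEVICE = "/dev/sdX"             # hypothetical drive under test
FILE_SIZE = 1 << 30             # 1 GiB per file
FILES_PER_PASS = 100            # ~100 GiB written per pass
BLOCK = 16 << 20                # write in 16 MiB chunks

def write_file(path, size):
    """Write `size` bytes of pseudo-random data and flush it to disk."""
    with open(path, "wb") as f:
        remaining = size
        while remaining > 0:
            chunk = min(BLOCK, remaining)
            f.write(os.urandom(chunk))
            remaining -= chunk
        f.flush()
        os.fsync(f.fileno())

def one_pass():
    """Fill the test area, then delete everything (the 'rinse and repeat')."""
    for i in range(FILES_PER_PASS):
        write_file(os.path.join(MOUNT, f"fill_{i:04d}.bin"), FILE_SIZE)
    for name in os.listdir(MOUNT):
        os.remove(os.path.join(MOUNT, name))

def log_smart():
    """Append the drive's SMART attribute table to a log file."""
    out = subprocess.run(["smartctl", "-A", DEVICE],
                         capture_output=True, text=True).stdout
    with open("smart_log.txt", "a") as log:
        log.write(out + "\n")

if __name__ == "__main__":
    passes = 0
    while True:
        one_pass()
        passes += 1
        if passes % 10 == 0:
            log_smart()
```

At roughly 100 GiB per pass, a petabyte is on the order of 10,000 passes, which is roughly where the "about 100 days" estimate above comes from.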

                    • by Luckyo ( 1726890 )

                      Thanks for sharing your inability to read your own link.

    • Based on my file server usage, HDDs last about five years. Most develop some kind of mechanical problem. I replace all the drives every five years. Since SSDs have no mechanical parts and have become cheaper than HDDs, they should last at least ten years.
    • Came here to ask the same question.
    • HDDs usually die from mechanical failure rather than the magnetic surface wearing out. I'm not aware of surface wear being something to worry about, since all the heads do is pass over the magnetic material. But the heads can scratch the surface, causing bad sectors, the stepper motors can die, etc. In some cases it's possible to recover critical information by moving the platters to a non-damaged disk, although opening a modern HDD has to be done inside very clean rooms so tha
  • by RealGene ( 1025017 ) on Thursday December 04, 2014 @03:30PM (#48524915)

    However, the fact that all the drives far exceeded their endurance specifications bodes well for the endurance of consumer-grade SSDs in general.

    No, I think it means that the first ones were over-engineered, and the next generation will meet their stated MTBF number to within 1 standard deviation.

    • If the MTBF is in hours, then how are they going to determine an average amount of I/O per hour for an SSD? Since the total amount of data we use is only increasing, that means MTBF will always decrease by your logic. I have to say that is quite the opposite of most electronics, even when they are being produced at a cheaper rate.
    • by Anonymous Coward

      Either way, telling us the number of samples is too small to accurately determine anything, then extrapolating a happy future for consumers based on the faulty setup seems rather Pollyannaish.

  • by PlusFiveTroll ( 754249 ) on Thursday December 04, 2014 @03:37PM (#48524957) Homepage

    Most hard drives I see in consumer and business use write far less than that over their lifetimes. I have a customer's hard drive I am copying data from right now: it has 15,147 power-on hours and has only written 1.3TB of data. It's very uncommon to see drives with over 6TB of data written (in the 500GB-to-1TB drive range).

    The other client SSD in my computer is a Samsung 830 256GB that I just migrated to a 1TB SSD for a customer. It was used for about a year and a half before they needed a bigger drive. They used Outlook, a number of AutoCAD applications, lots of project files, and a good-sized collection of work-related photos. The drive has 995GB of writes and is showing no SMART issues.

    Average computer users have nothing to worry about when it comes to wearing an SSD out. Power users might have a problem depending on the nature of their work, but they also get the most benefit from high write speeds and IOPS. Servers, depending on their usage patterns, could have a problem; I certainly recommend the enterprise-style drives that reserve a much larger amount of spare space.

    • by Anonymous Coward

      Hmm, this is interesting, but how do you find the statistics for a drive (on a Mac)?

      Thank you

    • I have a customer's hard drive I am copying data from right now: it has 15,147 power-on hours and has only written 1.3TB of data.

      How can you tell? Does the HDD keep track of this info somewhere in the firmware?

      • by Anonymous Coward

        It's called S.M.A.R.T...

      • by Anonymous Coward

        Use smartmontools or another SMART program. If you are using smartmontools, execute smartctl --scan and it will spit out device names. Then run smartctl -A device-name and it will usually tell you. It has other useful commands as well, like -a, -t, -c, etc.

        • by swb ( 14022 )

          What's the math to be applied to LBAs? How big is an LBA? A 512 byte sector?

          My nearly 4 year old Samsung shows just under 2 TB written if I multiply the SMART-provided Total LBAs written against a 512 byte block.

          • by unrtst ( 777550 )

            What's the math to be applied to LBAs? How big is an LBA? A 512 byte sector?

            My nearly 4 year old Samsung shows just under 2 TB written if I multiply the SMART-provided Total LBAs written against a 512 byte block.

            Correct.
            Though there could be differences depending on the model of drive you have, it's very likely 512B LBAs:
            http://www.samsung.com/global/... [samsung.com]

            Since you said you have a Samsung, you can run Samsung Magician 4.0 and it'll do the conversion for you (assuming you're running Windows or Mac; AFAIK, Magician isn't available for Linux).
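A minimal sketch of that conversion, assuming a Linux box with smartmontools installed; the attribute name (Total_LBAs_Written) and the 512-byte unit vary by vendor, as noted above (some drives report host writes in other units, such as Intel's Host_Writes_32MiB), so check your drive's documentation:

```python
import re
import subprocess

DEVICE = "/dev/sda"    # hypothetical device; pick yours from `smartctl --scan`
LBA_BYTES = 512        # assumed unit; not all vendors use 512-byte LBAs here

out = subprocess.run(["smartctl", "-A", DEVICE],
                     capture_output=True, text=True).stdout

# The raw value is the last column of the Total_LBAs_Written attribute line.
match = re.search(r"Total_LBAs_Written.*?(\d+)\s*$", out, re.MULTILINE)
if match:
    lbas = int(match.group(1))
    print(f"Host writes: {lbas * LBA_BYTES / 1e12:.2f} TB "
          f"({lbas} LBAs x {LBA_BYTES} bytes)")
else:
    print("This drive does not report Total_LBAs_Written")
```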

    • I write a lot more to my SSDs than most do because of lots of application installs, playing with audio, etc. 6TB to date, and the drive was purchased about 20 months ago. OK, well, assuming I maintain that rate of writing (3.6TB/year), it would be 13 years before I'd hit 50TB of writes, on a 512GB drive which can probably take 1PB or more.

      Even if you hit it harder than the norm, you still don't hit it that hard. It really has to be used for something like database access or a file server or the like before endur

      • It really has to be used for something like database access or a file server or the like before endurance becomes an issue.

        Even that isn't enough, because the drives in the test are being written essentially 24/7 (with just a little time off for the retention tests), and the drives remaining have been at it for 15 months.

        You have to have an insanely busy database or file server to never have any time off from writes.

    • Servers, depending on their usage platters could have a problem

      FTFY.

    • by Anonymous Coward

      You expect us to believe that Outlook AND Windows fit on 256GB? Bullshit.

  • Random failures (Score:4, Interesting)

    by MobyDisk ( 75490 ) on Thursday December 04, 2014 @03:47PM (#48525013) Homepage

    Great, so now we just need to fix the sudden random failures where the drive completely fails but it is 6 months old and showed no signs of degradation. A coworker of mine just had that happen with a Crucial SSD.

    • Re:Random failures (Score:4, Informative)

      by bill_mcgonigle ( 4333 ) * on Thursday December 04, 2014 @04:44PM (#48525479) Homepage Journal

      Great, so now we just need to fix the sudden random failures where the drive completely fails but it is 6 months old and showed no signs of degradation.

      Just counted: the stack of completely dead SSDs on my workbench is 13 high. I think I've only ever seen one hard drive go completely dead. I literally don't understand how the vendors think they can get away with such junk in SSD controllers. I know flash will fail, but that's no reason to hang dead on the SATA bus and not talk to anybody. Admit defeat via SMART and move on.

      I don't always use SSDs for journals, but when I do, they're in a RAID configuration. Stay speedy, my friends.

      • In the shop I work out of we have stacks of hundreds of hard drives with bad sectors and a large number that are just dead. We see very few dead SSDs, but we only use Samsung or Intel drives. Don't use anything else.

      • by Bengie ( 1121981 )
        You must be new to computers; the IBM DeathStar had similar problems. Just like mechanical HDs, there are a few bad batches.
        • As maligned as the DeathStars were, I never lost any data on them. They always gave signs of their impending doom, and lasted long enough to copy the data off of them. In comparison, I've seen enough SSDs suddenly just stop working, and anything stored on them is simply gone.

    • Was it a Crucial M4? If so, maybe it hit that firmware bug where it craps out after a few thousand hours? There is a firmware update to fix it.

  • by Anonymous Coward on Thursday December 04, 2014 @03:49PM (#48525029)

    Unfortunately these tests don't say much about the drives you can buy NOW, and write endurance in consumer drives is probably getting worse as geometry shrinks and relentless price pressure causes corners to be cut. It's good that the Samsung 840 Pro is holding up so well (its predecessor the 830 was also ridiculously durable) but it's now replaced by the 850 Pro which uses radical new technology (stacked chips). The Intel 320 was also very durable so the failure of the 335 doesn't bode very well for the idea that newer models should hold up better than older ones.

    Write wear isn't everything anyway. Another thing to test is whether the drive can brick if the power fails while the drive is writing. Better drives have capacitors to deal with this event. Consumer drives lack them and can lose data or fail unrecoverably.

    • It's good that the Samsung 840 Pro is holding up so well (its predecessor the 830 was also ridiculously durable) but it's now replaced by the 850 Pro which uses radical new technology (stacked chips).

      I suspect the 10 year warranty for the 850 Pro is a good indicator of how long Samsung expects it to last compared to the 840 Pro (which has a 5 year warranty).

    • Write wear isn't everything anyway. Another thing to test is whether the drive can brick if the power fails while the drive is writing. Better drives have capacitors to deal with this event. Consumer drives lack them and can lose data or fail unrecoverably.

      This is one reason to check that the computer you're using includes capacitors to deal with this event -- so you can use consumer drives and not have to worry about whether they've got built-in protection circuitry.

  • I think this has been a fantastic experiment, but do you still have any criticism regarding their test methodologies? Can we trust the results? For example, would we get different results if we leave the same data sitting on the drives for a longer time? Anything else that they are possibly not taking into account?
    • Re:Any criticism? (Score:4, Insightful)

      by Kardos ( 1348077 ) on Thursday December 04, 2014 @05:12PM (#48525721)

      The only weakness is that it needs to be repeated on newer SSDs as they hit the market. The results of this test are relevant for drives released back when the experiment started in 2013, less so for drives released now, and even less so for future drives. As the manufacturers realise that the drives are lasting much longer than they are specified to, they'll decide they are over-engineered and rework them to wear out quicker. Aside from the obvious cost-cutting benefit, it also keeps the market segmented in various grades between "low-end consumer SSDs" and "high-end enterprise SSDs".

  • by Anonymous Coward

    I think the idea is neat, but nothing meaningful can be said by sampling _one_ of each drive.

    Moreover, from what I understand about flash, the more writes you make to a cell, the more quickly those bits tend to rot when left alone.

    So being able to overwrite again and again and again isn't particularly important if those worn cells would just forget their contents over a few hours, days, weeks, etc.

    I'd much rather have a drive that can take a moderate write load and hold on to my data than an Alzheimer's dis

    • by mlts ( 1038732 )

      Maybe a tiering system would be useful. I've seen some drive arrays that use SSDs for caching. So an SSD that can take a lot of info and forgets it after a month or two can be good enough in this case, assuming enough ECC to realize the cache data is damaged and to fetch from the spinning platters the bits needed to complete the read. Another example of this would be a write cache on an HBA. That way, the machine could send writes to the SSD cache, the HBA tells the machine the write is complete and then

  • by BLToday ( 1777712 ) on Thursday December 04, 2014 @05:45PM (#48526009)

    From my experience, most SSD failures come from dead controllers, not wear. Or bad firmware; I'm looking at you, Crucial, with your 5000-hour bug. Also your weird incompatibilities on the MX100 series.

  • How many units of each type did they test? Because if they only tested one of each, you cannot make any assumptions about it and can't even call it a good test. I've seen so many different results with the same HDDs. Or even with SSDs, where my SSD died after a couple of months, but my colleague's is still plowing away after 2 years (SSDs from the same batch).
    • It wasn't even meant to be a scientifically correct study but just a fun MythBusters-type experiment.
  • A few years back I ran my own test. I had an unused 16 MB Canon SD card that came with one of my digital cameras (I bought a much larger one with the camera). Since it was unused, I decided to see how long it would last. I wrote a script that repeatedly overwrote the entire card with one of several files of random data, then checked it against the original. Each pass of overwriting, reading, and verifying the card took about 17 seconds. I had my first error after 120K writes. After that I got errors every
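A minimal sketch of that kind of write-and-verify loop in Python; the file path is hypothetical, the 16MB size comes from the comment above, and running this will destroy whatever is on the card:

```python
import os

CARD_FILE = "/media/sdcard/wear_test.bin"   # hypothetical mount point for the card
SIZE = 16 * 1024 * 1024                     # the 16 MB card described above

cycles = 0
while True:
    data = os.urandom(SIZE)
    # Overwrite the card with a fresh random pattern and force it to media.
    with open(CARD_FILE, "wb") as f:
        f.write(data)
        f.flush()
        os.fsync(f.fileno())
    # Read it back and compare against the original pattern.
    with open(CARD_FILE, "rb") as f:
        if f.read() != data:
            print(f"First verify error after {cycles} write cycles")
            break
    cycles += 1
```

Note that on a real system the read-back is likely to be served from the page cache rather than the card itself, so you would need to drop caches or open the file with O_DIRECT between the write and the verify for the comparison to mean anything.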
