Endurance Experiment Writes One Petabyte To Six Consumer SSDs

Unknown Lamer posted about 2 months ago | from the ditch-your-hard-drive dept.

Data Storage

crookedvulture (1866146) writes "Last year, we kicked off an SSD endurance experiment to see how much data could be written to six consumer drives. One petabyte later, half of them are still going. Their performance hasn't really suffered, either. The casualties slowed down a little toward the very end, and they died in different ways. The Intel 335 Series and Kingston HyperX 3K provided plenty of warning of their imminent demise, though both still ended up completely unresponsive at the very end. The Samsung 840 Series, which uses more fragile TLC NAND, perished unexpectedly. It also suffered a rash of cell failures and multiple bouts of uncorrectable errors during its life. While the sample size is far too small to draw any definitive conclusions, all six SSDs exceeded their rated lifespans by hundreds of terabytes. The fact that all of them wrote over 700TB is a testament to the endurance of modern SSDs."

164 comments

Sigh. (-1)

Anonymous Coward | about 2 months ago | (#47250025)

Yes, they are sooo reliable, every single SSD I've bought has been dead within 3 months.

Re:Sigh. (2)

MasterOfGoingFaster (922862) | about 2 months ago | (#47250077)

Yes, they are sooo reliable, every single SSD I've bought has been dead within 3 months.

Odd - I've got 5 and all are well. 1 Intel, 2 Samsung and 1 Critical. I guess I'm lucky and you are not.

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47250097)

I've had 150 of them, and all of them are half dead.

Re:Sigh. (3, Funny)

ArcadeMan (2766669) | about 2 months ago | (#47250197)

Rejoice then, you still have 75 SSDs!

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47250587)

They are now half dead, half something else. Wait for them to replenish the chips, violently.

Re:Sigh. (5, Funny)

ColdWetDog (752185) | about 2 months ago | (#47251269)

We seem to have the beginning of a trend here - ACs don't have very good luck with SSDs.

Try logging in and see if that changes your outlook.

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47251311)

Single-board computers and SATA Express...

Re:Sigh. (5, Funny)

msauve (701917) | about 2 months ago | (#47250285)

"I've got 5 and all are well. 1 Intel, 2 Samsung and 1 Critical. "

That apparently doesn't prevent you from dropping bits, though. 1+2+1=4.

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47250501)

not to mention a write error: "Critical" instead of "Crucial"

Re:Sigh. (1)

MasterOfGoingFaster (922862) | about 2 months ago | (#47250509)

not to mention a write error: "Critical" instead of "Crucial"

Hee hee. That's a "loose nut behind the keyboard" error - not an SSD error.

Re:Sigh. (2)

MasterOfGoingFaster (922862) | about 2 months ago | (#47250543)

I don't recall the brand of the fourth, got distracted and forgot to edit. But I knew someone would have fun pointing it out, so it would be rude for me to deny you the pleasure. So - yeah - I dropped a bit. :D

Re:Sigh. (3, Funny)

msauve (701917) | about 2 months ago | (#47250931)

"I don't recall the brand of the fourth"

There you go again. :-)

Re:Sigh. (1)

kimvette (919543) | about 2 months ago | (#47250897)

I have two different Crucial mSATA drives - one runs VMware in one workstation (well, "server"), and the other runs virtualbox in another. Each is a different generation SSD - and no problems. I've also shipped many to customers in servers (real servers on RAID controllers, not workstations posing as servers). Not one failure.

Re:Sigh. (2)

pezpunk (205653) | about 2 months ago | (#47250203)

Hey, thanks for sharing your anecdotal experience as if it carries any weight whatsoever compared to actual controlled experiments and statistics.

For comparison, I've owned 8 and no failures yet. I have a RAID 0 array of SSDs upstairs that has been working flawlessly since 2008. An aberration, maybe; anecdotal evidence works like that.

Re:Sigh. (5, Insightful)

pezpunk (205653) | about 2 months ago | (#47250215)

that reminds me ... I should do a backup ....

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47250335)

Hey, thanks for sharing your anecdotal experience as if it carries any weight whatsoever compared to actual controlled experiments and statistics.

A controlled experiment with statistics. Too bad this article is not that.

Re:Sigh. (1)

AK Marc (707885) | about 2 months ago | (#47250255)

Stop storing them in the oven...

Re:Sigh. (1)

beelsebob (529313) | about 2 months ago | (#47250391)

Let me guess, every single SSD you bought was a low-capacity SandForce-controlled one.

Re:Sigh. (5, Funny)

fuzzyfuzzyfungus (1223518) | about 2 months ago | (#47250493)

Yes, they are sooo reliable, every single SSD I've bought has been dead within 3 months.

A happy OCZ customer, I take it?

Re:Sigh. (4, Funny)

gukin (14148) | about 2 months ago | (#47250807)

Amen to this. I STUPIDLY bought a REFURBISHED OCZ drive which, coincidentally, failed shortly after OCZ announced bankruptcy. The other drive I bought was a Corsair that, like its OCZ brethren, died three weeks after being put into service. The speed is wonderful but the life is pathetic. Despite this, I have a Kingston and a Samsung which are both going strong, so I can confidently state that HALF OF ALL SSDs FAIL AFTER THREE WEEKS, THE OTHER HALF RUN FOREVER!

Perhaps I need to work on my sample set and my over-use of capital letters.

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47251577)

I seem to be the exception. I have almost 2 years on an OCZ drive I purchased from NewEgg. It's seen daily use for those two years.

Re:Sigh. (1)

Hamsterdan (815291) | about 2 months ago | (#47251929)

Might be luck, might be an exception, but my Agility 2 is still kicking after 3 years, half of that was under XP (no TRIM).

I've had 4 spinning drives (Seagate) die or get bad sectors in the same time frame.
 

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47251757)

Which reminds me, my VISA extended warranty claim on 1 of my terrible OCZ drives came back last week. VISA actually paid for the original price of the drive + tax + registered mail cost.
 
I had to do this because OCZ refused to honor the warranty on my 'intermittent', disconnect-prone SSDs.

Re:Sigh. (0)

Anonymous Coward | about 2 months ago | (#47250861)

Stop defragmenting them.

context (1)

pezpunk (205653) | about 2 months ago | (#47250103)

Has anyone tried this with platter drives? Would it simply take too long?

It's hard for me to judge whether this is more or less data than a platter drive will typically write in its lifespan. I feel like it's probably a lot more than the average drive processes in its lifetime. And anyway, platter drive failure might be more a function of total time spent spinning or seeking, or simply time spent existing, for all I know.

Re:context (1)

travisco_nabisco (817002) | about 2 months ago | (#47250145)

I am sure someone has done it with platter drives; however, it would take substantially longer to reach the same transfer quantities, as the SSDs have much higher transfer rates than the spinny drives.

Re:context (3, Interesting)

afidel (530433) | about 2 months ago | (#47250253)

Not that much higher for streaming reads and writes: the new Seagate 6TB can do 220MB/s @ 128KB [storagereview.com] streaming reads or writes. That works out to ~19TB/day, so it would only take around 2 months to hit 1PB.
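
For anyone who wants to re-run that arithmetic, a quick back-of-envelope sketch (decimal units assumed, and assuming the 220MB/s figure holds for sustained streaming writes):

    # Quick check of the streaming-write arithmetic above (decimal units assumed).
    MB = 10**6
    TB = 10**12
    PB = 10**15

    rate = 220 * MB                # ~220 MB/s sustained streaming writes
    per_day = rate * 86_400        # bytes written in one day

    print(f"{per_day / TB:.1f} TB/day")         # ~19.0 TB/day
    print(f"{PB / per_day:.0f} days to 1 PB")   # ~53 days, i.e. roughly 2 months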

Re:context (4, Informative)

timeOday (582209) | about 2 months ago | (#47251819)

But contiguous writes are the absolute (and unrealistic) best case in terms of MB transferred before failure for an HDD, because they minimize the number of revolutions and seeks per megabyte written. For whatever it's worth, it used to be said that "enterprise grade" drives were designed to withstand the constant seeking associated with accesses from multiple processes, instead of the fewer seeks associated with sporadic, single-user access.

If seeking does wear a drive, then using an SSD for files that generate lots of seeks will not only greatly speed up the computer, but also extend the life of HDDs relegated to storing big files.

Re:context (2)

thesupraman (179040) | about 2 months ago | (#47250189)

Why? The failure modes are completely different (and yes, there are quite a few reports around on this subject).

SSDs have a write capacity limitation due to write/erase cycle limits (they also have serious long-term data retention issues).
Mechanical drives tend to be more limited by seek actuations, head reloads, etc. The surfaces don't really have a problem with erase/write cycles.

Neither is particularly good for long-term storage at today's densities. Tape is MUCH better.

Re:context (2)

pezpunk (205653) | about 2 months ago | (#47250229)

The problem with tape is that by the time you can retrieve the data you're interested in, it no longer matters.

Re:context (3, Informative)

LordLimecat (1103839) | about 2 months ago | (#47251049)

Tape actually has pretty high transfer rates. Its seek times are what suck, but if you're doing a dump of a tape you aren't doing any seeking at all.

Re:context (1)

dshk (838175) | about 2 months ago | (#47251435)

I regularly do restores from an LTO-3 drive, and the whole process takes no more than 5 minutes. If your data is useless after 5 minutes, then it is indeed unnecessary to back it up, not to mention archive it.

Re:context (0)

Anonymous Coward | about 2 months ago | (#47251903)

For the cost of a single tape drive I could buy many, at least ten or twenty, 4TB hard drives for backups.

Re:context (3, Informative)

ShanghaiBill (739463) | about 2 months ago | (#47250205)

has anyone tried this with platter drives?

A few years ago, Google published a study [googleusercontent.com] of hard disk failures. Failures were not correlated with how much data was written or read. Failures were correlated with the amount of time the disk was spun up, so you should idle a drive not in active use. Failures were negatively correlated with temperature: drives kept cooler were MORE likely to fail.

Re:context (0)

fnj (64210) | about 2 months ago | (#47250499)

Failures were correlated with the amount of time the disk was spun up, so you should idle a drive not in active use.

That makes no logical sense unless the statement is missing a "not" somewhere, or unless you WANT failures.

Re:context (3, Informative)

viperidaenz (2515578) | about 2 months ago | (#47250641)

While ShanghaiBill apparently struggles with the English language, the phrase "you should idle a drive not in active use" means the drive will spin up fewer times. You should disable spin-down and leave the drive idling, not on standby.
You'll reduce the number of head loads/unloads.
You'll reduce peak current consumption of the spindle motor.
The drive will stay at a more stable temperature.

Re:context (2)

compro01 (777531) | about 2 months ago | (#47250699)

Failures were correlated with the amount of time the disk was spun up, so you should idle a drive not in active use.

That makes no logical sense unless the statement is missing a "not" somewhere, or unless you WANT failures.

You're reading the sentence wrong. You're reading it as "Times the disk was spun up".

What they mean is the total amount of time the disk has spent spinning over its lifetime.

Re:context (1)

ShanghaiBill (739463) | about 2 months ago | (#47250915)

What they mean is the total amount of time the disk has spent spinning over its lifetime.

Yes, this is correct. It is the total amount of time spent spinning that you want to minimize, not the number of "spin-up/spin-down" cycles. The longer the disk spins, the more wear on the bearings.

Re:context (4, Interesting)

dgatwood (11270) | about 2 months ago | (#47251445)

That's curious. Almost all of the drive failures I've seen can be attributed to head damage from repeated parking prior to spin-down, whereas all the drives that I've kept spinning continuously have kept working essentially forever. And drives left spun down too long had a tendency to refuse to spin up.

I've had exactly one drive that had problems from spinning too much, and that was just an acoustic failure (I had the drive replaced because it was too darn noisy). With that said, that was an older, pre-fluid-bearing drive. I've never experienced even a partial bearing failure with newer drives.

It seems odd that their conclusions recommended precisely the opposite of what I've seen work in practice. I realize that the plural of anecdote is not data, and that my sample size is much smaller than Google's sample size, so it is possible that the failures I've seen are a fluke, but the differences are so striking that it leads me to suspect other differences. For example, Google might be using enterprise-class drives that lack a park ramp....

Re:context (1)

fuzzyfuzzyfungus (1223518) | about 2 months ago | (#47250561)

I suspect that direct comparisons are tricky: magnetic platter surfaces should, at least in theory, have virtually infinite read and rewrite capability; but every mechanical part dies a little when called on to move (and, if the lubricants are a problem, when not called on to move for too long).

With SSDs, we know that the NAND dies a bit every time it is erased and rewritten, sometimes after surprisingly few cycles with contemporary high-density MLC NAND; but the supporting solid-state stuff should last longer than the person who owns the drive, barring firmware bugs or severe shoddiness.

And the winners are... (-1, Troll)

fustakrakich (1673220) | about 2 months ago | (#47250149)

Oh wait, I have to read the article? Homey don't do that.

By the way, 700TB isn't all that much these days. Betcha I could do it in a week's worth of video editing.

Intel - weird failure mode. (0)

Anonymous Coward | about 2 months ago | (#47250183)

TFA:

After a reboot, the SSD disappeared completely from the Intel software. It was still detected by the storage driver, but only as an inaccessible, 0GB SATA device.

According to Intel, this end-of-life behavior generally matches what's supposed to happen. The write errors suggest the 335 Series had entered read-only mode. When the power is cycled in this state, a sort of self-destruct mechanism is triggered, rendering the drive unresponsive. Intel really doesn't want its client SSDs to be used after the flash has exceeded its lifetime spec.

*blink*

Nice that the MWI provided advance warning, but the actual behavior when it ran out seems to be the opposite of what's supposed to happen: the drive should be readable but not writable.

I had an X-25M that failed in similar fashion; although it had an MWI of 100% when it died and had barely seen its first couple of terabytes of writing, it was in a situation where there would have been heavy write amplification on whatever space it had left. When it died, applications fell over, and it showed up as an 8MB drive on powerup. 100% data loss. I should probably pull the chips off it and dump them - it was one of the pre-encryption drives - just to see if I can get anything back.

Re:Intel - weird failure mode. (2)

marcomarrero (521557) | about 2 months ago | (#47250377)

The 8MB problem is an Intel firmware bug (older, non-SandForce controllers). If you don't care about your data, an ATA "security erase" can make the drive usable again. I think I used the DOS-based HDDErase, and after a few problems it went through. Intel's DOS-based flashing tool idiotically ignores the SSD because it identifies itself as "BAD_CTX"...

Re:Intel - weird failure mode. (1)

viperidaenz (2515578) | about 2 months ago | (#47250687)

but the actual behavior when it ran out seems to be exactly what's supposed to happen

FTFY
When a flash cell fails, it can no longer hold the charge that stores the bit.
It will always be read as if it had no charge; therefore read checksums will fail and the drive is unreadable.

Re: And the winners are... (1)

Anonymous Coward | about 2 months ago | (#47250219)

100TB a day? Roughly 1.2GB per second? No. No you won't.

Re: And the winners are... (0)

fustakrakich (1673220) | about 2 months ago | (#47250419)

100TB a day? Roughly 1.2GB per second? No. No you won't.

Yes. Yes I will [forret.com] ! Any other questions?

Re: And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47250583)

100TB a day? Roughly 1.2GB per second? No. No you won't.

Yes. Yes I will [forret.com] ! Any other questions?

Um, sir? Yes, um, I have a question...what sort of device can I stick in my computer that will write data at 1.2 GB/sec?

Re: And the winners are... (1)

viperidaenz (2515578) | about 2 months ago | (#47250729)

SATA 3.2 isn't out yet for consumer drives, so no you won't.

1.2GB/s is twice the bandwidth of SATA 3.0.

Re: And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47251045)

Who says he's using SATA? There are PCIe 4x SSDs out there you know.

Re: And the winners are... (1)

LordLimecat (1103839) | about 2 months ago | (#47251077)

It's all irrelevant, because there's no SSD out there that could handle that write rate, and there's no way he's generating that much data 24/7.

He's full of crap and doesn't want to admit it.

Re: And the winners are... (1)

tysonedwards (969693) | about 2 months ago | (#47251323)

Um... a good PCI-E drive, such as a Fusion-IO board, will certainly handle that write rate. That *he* is generating enough content to fill that pipe for a week straight is unlikely, though, as it would require multiple 10GbE connections to do so. Since he is talking about video editing, let's say this is a surveillance system taking uncompressed HD streams that are being written natively to disk without transcoding prior to editing; we are still talking about 188 cameras coming into this one server.

The likes of Facebook might generate enough content to saturate these cards, for server-to-server replication to keep their cluster in sync and maintain live backups and hot standbys, but it's unlikely they would want to fully saturate the bandwidth to single nodes, as opposed to just adding more servers to ensure that capacity exists so their users can connect.

Re: And the winners are... (1)

LordLimecat (1103839) | about 2 months ago | (#47251065)

You're editing 4K video 24/7? That's quite impressive, but not terribly believable.

Re: And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47251957)

Automation, baby [youtube.com] ! It's the next big thing.

Posting AC because the asshole moderators are trolling my account

-F

Re:And the winners are... (4, Insightful)

travisco_nabisco (817002) | about 2 months ago | (#47250227)

Good luck with that. This experiment has been running since Aug 20, 2013, and almost continuously at that. Even the heaviest consumer/prosumer workload would have trouble reaching the amount of data written in this experiment.

Re:And the winners are... (0)

fustakrakich (1673220) | about 2 months ago | (#47250401)

See for yourself [forret.com] . Sure, that's high end now, but in the future? Anyway, there you go, ten days (so sue me) will eat a little more than a petabyte. So now I would have to stripe 10 or 20 of these SSDs to hold it all. Now what will my failure rate be?

On the other hand I still prefer SSDs over all the monkey motion going on in a hard drive. I'm just pointing out that a petabyte doesn't mean much anymore. And I still remember having a 20 meg drive and thinking I'll never use it all.

Re:And the winners are... (-1)

Anonymous Coward | about 2 months ago | (#47250533)

Ooooo! Mod bombing cunts are out in force! And on top of that, they are math illiterate to boot! Fuck yourself sideways! Then try reading the posts, you stupid fucks!

Re:And the winners are... (1)

swb (14022) | about 2 months ago | (#47250661)

You couldn't sustain that bit rate on a SATA interface. No normal workflow would sustain that volume of writes or encoding, especially prosumer or lower.

There may be broadcast or industrial uses, but they would be writing to industrial-strength storage via 16Gb FC to SAS SLC arrays.

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47251061)

Not everyone - I work in broadcast. We're cheap. Not just cheap, fucking ultra cheap. Our sister station get whatever they want whenever they ask for it, but we get nothing. We use outdated consumer grade IT gear. Our network is much slower than 100 megabit should be. Our top editing computers are a couple of years old. We have single terabyte drives in them, and use those as our capture and export drives.

Re:And the winners are... (1)

viperidaenz (2515578) | about 2 months ago | (#47250769)

You failed at math.

You won't be writing 1.2GB/s to any SSD currently available. They all max out at SATA 3.0 - 600MB/s.
Since you'd need at least 3 striped drives to even try to sustain 1.2GB/s, your endurance has now tripled from 700TB to 2.1PB.

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47250849)

Since you'd need at least 3 striped drives...

That just multiplies the possible failure rate by three, regardless of the reason for such failures.

Re:And the winners are... (1)

LordLimecat (1103839) | about 2 months ago | (#47251091)

That's not really how it works. The wear is leveled across cells, so increasing the number of drives in a RAID 0 really does increase the amount of data until a predictable failure (i.e., the "write limit").

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47251093)

Not true... SSDs with direct PCIe connections, such as those from FusionIO, Intel, and in current Macs, can reach those speeds.

Re:And the winners are... (2)

nabsltd (1313397) | about 2 months ago | (#47251557)

See for yourself [forret.com] .

Why didn't you just refer to the LHC web page and imply that you are writing at that same data rate to a single SSD...it would have exactly the same value as an argument.

Re:And the winners are... (1)

ShanghaiBill (739463) | about 2 months ago | (#47250275)

By the way, 700TB isn't all that much these days. Betcha I could do it in a week's worth of video editing.

I'll take that bet. Most SSDs have physical bandwidths of less than 1GB/sec. So even if you were writing continuously, without sleep or bathroom breaks, and reading nothing back, you would still need more than a week to write that much data.

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47250519)

Protip: A computer is capable of performing actions without a person sitting in front of it 24/7.

Re:And the winners are... (1)

viperidaenz (2515578) | about 2 months ago | (#47250819)

Protip: less than 1GB/sec is much less than 700TB/week.
Protip 2: SATA 3.0 is only 600MB/sec, the peak interface bandwidth is only 346GB/week.

Re:And the winners are... (1)

kcitren (72383) | about 2 months ago | (#47250949)

I think your math is off a bit by a factor of 1000:

600 MB/s × 604,800 sec/wk = 362,880,000 MB/wk ≈ 362,880 GB/wk ≈ 362 TB/wk

Re:And the winners are... (1)

viperidaenz (2515578) | about 2 months ago | (#47251019)

Off by a letter.
s/G/T

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47251241)

I did some calculations on it as well; I think tera is actually correct here. That also makes more sense to me, as I've certainly moved a few hundred GB from HD to HD within a day before.

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47251079)

Protip: A PCIe 4x SSD can reach 930MB/s write speed

Protip 2: When you make sweeping assumptions, you look like an idiot.

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47250563)

I have to reply anonymously to avoid the trolls mod bombing the account, but read the links in the other replies I made. A video house will chew these things up. And yes, you would have to stripe at least ten of these things to a very fast interface.

Re:And the winners are... (2)

gman003 (1693318) | about 2 months ago | (#47250707)

Which will also spread around the writes. If you're writing a 4TB video across 10 disks, that's only 410GB to each, so you only get that much endurance used up.

Re:And the winners are... (4, Informative)

jcochran (309950) | about 2 months ago | (#47250279)

You might want to do a bit of math before making such a statement. 700TB is a very large amount of data, and doing that in a week would require quite a bit of data transfer bandwidth. To wit:

700,000,000,000,000 bytes / 7 days = 100,000,000,000,000 bytes/day = 4,166,666,666,666 bytes/hour = 1,157,407,407 bytes/second.

Do you really write 1.157GB/second, every second, for a week? And if so, what data interface are you using? I'd really like to know, since SATA 3.0 can only handle 600MB/second. Perhaps you're using SATA 3.2, which does have the required speed?

Now, in an environment using multiple drives, you can get to the 700TB mark much more rapidly with much lower per-drive bandwidth. But then again, that's not the test criterion. They are testing how much endurance individual SSDs have.
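
A small sketch of the same arithmetic, for anyone who wants to re-run it (decimal units assumed; the 600MB/s SATA 3.0 figure is the raw interface rate, before protocol overhead):

    # Sustained rate needed to write 700 TB in one week, versus the SATA 3.0 ceiling.
    TB = 10**12
    week = 7 * 24 * 3600                    # 604,800 seconds

    needed = 700 * TB / week                # bytes/second required
    sata3 = 600 * 10**6                     # raw SATA 3.0 interface rate

    print(f"needed: {needed / 10**9:.3f} GB/s")                    # ~1.157 GB/s
    print(f"SATA 3.0 ceiling: {sata3 * week / TB:.0f} TB/week")    # ~363 TB/week
    print(f"shortfall: {needed / sata3:.2f}x the interface limit") # ~1.93x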

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47250485)

He's using OooA interface.

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47250633)

You need to read the replies I already made. There's a link that shows how fast you can wreck an SSD, and if you have that kind of money, you probably wouldn't care.

And here's a big fuck you! to the idiot moderator(s).

Re:And the winners are... (1)

gman003 (1693318) | about 2 months ago | (#47250677)

Good luck with that.

The Intel 335 has a sequential write speed of about 350MB/s (the rest are around the same speed). Writing 700TB at that speed would take 24 days and change, with no breaks to do things like read any of that data.
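
A quick sketch of that estimate; the exact figure depends on whether you count decimal or binary terabytes, which is why it lands somewhere between 23 and 26 days:

    # Time to write 700 TB sequentially at ~350 MB/s, decimal TB vs binary TiB.
    rate = 350 * 10**6                      # ~350 MB/s sequential write speed

    for label, unit in (("decimal TB", 10**12), ("binary TiB", 2**40)):
        seconds = 700 * unit / rate
        print(f"{label}: {seconds / 86_400:.1f} days")
    # decimal TB: ~23.1 days
    # binary TiB: ~25.4 days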

Re:And the winners are... (0)

Anonymous Coward | about 2 months ago | (#47251729)

I think you're wrong about the week part, but otherwise correct. When we swapped out the drives on a MySQL server after a pair of five-year-old 15k drives quit, it was less than ninety days before we had the first failure, and four days later the second drive quit. They were Samsung 840 250GB drives. I was able to swap them out quickly because I had bought several extra for desktops. I think by the end of five months, before we finally replaced them with new 300GB 15k SAS drives, we had replaced nine drives. The RAID array was only eight drives! Fortunately we were using RAID 6, so we didn't lose any data, but it was scary. I hated to pay more for slower drives, but for a server you must have reliable drives. There's a reason Dell's cheapest (well, the last time I looked) server SSD is $1,500.

Samsung would only replace four of the drives. Those four have been in heavy use in developer machines since then without a failure, but they just didn't work for a write-heavy DB server. I would be wary of buying another Samsung product because of their horrible support, but the drives do work for their intended purpose.

Rated Lifespan (0)

Anonymous Coward | about 2 months ago | (#47250161)

"all six SSDs exceeded their rated lifespans by hundreds of terabytes" - Interesting and probably relevant data, but doesn't the "rated lifespan" include retaining the data for at least one year after the last write is performed?

Re:Rated Lifespan (1)

AK Marc (707885) | about 2 months ago | (#47250269)

Can you link to that claim, or did you make it up?

Re:Rated Lifespan (1)

dc_gap (3696301) | about 2 months ago | (#47250917)

Did not make it up, or at least not on purpose. Link: http://www.jedec.org/sites/def... [jedec.org] Go to page 24, "Endurance Rating", and you'll see the last requirement is: "4) the SSD retains data with power off for the required time for its application class." Then go down to page 25 and you will see that the above "required time for its application class" for a "client class" device is 1 year. This is consistent with many NAND device datasheets that I've been dealing with. It is common to spec a minimum of 10 years of power-off data retention when new and 1 year when they've reached their max write rating.

All still going (1)

m.dillon (147925) | about 2 months ago | (#47250301)

I have around 30, ranging from 40G to 512G, and all of them are still intact, including the original Intel 40G SSDs I bought way back at the beginning of the SSD era. Nominal Linux/BSD use cases, workstation-level paging, some modest-but-well-managed SSD-as-a-HDD-cache use cases. So far the wear-out rate is far lower than originally anticipated.

I'm not surprised that some people complain about wear-out problems; it depends heavily on the environment and use cases, and heavy users who are not cognizant of how they are using their SSDs could easily get into trouble.

For the typical consumer, however, the SSD will easily outlast the machine. Even for a prosumer doing heavy video editing. Which, strangely enough, means that fewer PCs get sold, because many consumers use failed or failing HDDs as an excuse to buy a new machine, and that is no longer the case if an SSD has been stuffed into it.

A more pertinent question is what the unpowered shelf life for typical SSDs is. I don't know anyone who's done good tests (storing an SSD in a hot area unpowered to simulate a longer shelf time). Flash has historically been rated for 10-year data retention, but as the technology gets better it should presumably be possible to retrieve the data after a long period on a freshly written (only a few erase cycles) SSD. HDDs which have been operational for a time have horrible unpowered shelf lives... a bit unclear why, but any HDD I've ever put on the shelf (for 6-12 months) that I try to put back into a machine will typically spin up, but then fail within a few months after that.

-Matt

Re:All still going (0)

Anonymous Coward | about 2 months ago | (#47250719)

>but as the technology gets better it should presumably be possible to retrieve the data after a long period on a freshly written

Nope. It is not like you can update the physical structure of a silicon chip by firmware upgrade.

As technology gets "better", the margins get smaller and smaller. Instead of trapping a certain number of electrons to represent a '0', you now have a smaller number of electrons (smaller geometry), and you're trying to use that number to represent a few bits instead of 1.

As a matter of fact, read the datasheet of the MLC chips and see for yourself. It is all there under data retention.

Re:All still going (1)

BitZtream (692029) | about 2 months ago | (#47251337)

a bit unclear why, but any HDD I've ever put on the shelf (for 6-12 months) that I try to put back into a machine will typically spin-up, but then fail within a few months after that.

The lubrication in the bearings of the platters and head arms gets thicker over time after being heated a few times. It needs to stay warm to keep a lower, workable viscosity. The drag becomes too great fairly rapidly when a drive with even a few months of initial use is then stored on the shelf.

Good news for me (1)

Snotnose (212196) | about 2 months ago | (#47250305)

Considering 90% of my storage is write once, read many (email, mp3, dvds, programs, etc), this is good for me as long as the drive has a good, errr, brain fart, scheme so when I write a byte it chooses one I haven't written to in a while. My SSD should last forever, or until the electron holes break free of their silicon bonds.

Dammit (-1)

Anonymous Coward | about 2 months ago | (#47250323)

I just purchased an EVO 840 and put my entire life on it. Now I find it's fragile. OH NOES!

Re:Dammit (0)

Anonymous Coward | about 2 months ago | (#47250567)

I just bought that same drive a week ago. It has very good reviews and very few failures.

I've been using a 256GB G.Skill Sniper SSD for years now, and it has served me well. I figure it was time to upgrade as SSDs have proven themselves to be very reliable; often more reliable than their platter counterparts. Still, it doesn't seem to stop this spew of "lol but ssds sux n fail alot!" nonsense from the fools that can't actually be bothered to do any research on the subject.

In short, no worries. Your drive will almost certainly be fine. And if it isn't, then contact Samsung for a replacement. You should *always* back up sensitive data no matter what storage medium you are using.

I am sticking to rated lifespan (1)

iamacat (583406) | about 2 months ago | (#47250841)

Ability to write hundreds of terabytes more is nice. But it's reading them back that I am really worried about. Great news for someone deploying a short-term cache.

extremesystems test (3, Informative)

0111 1110 (518466) | about 2 months ago | (#47250993)

There was also a very interesting endurance test [xtremesystems.org] done on xtremesystems.org. Very impressive stuff. I don't yet own an SSD, but I'll continue to consider buying one! Maybe next Black Friday. Just waiting for the right deal.

Re:extremesystems test (1)

camperdave (969942) | about 2 months ago | (#47251987)

I bought two a few years back, and both are working like champs. The only problem I encountered is that my laptop now boots too fast. The keyboard becomes unresponsive for about 30 seconds (both Win7 and Linux), so I have to twiddle my thumbs at the login prompt. Before, this was hidden by the slow turning of the platters.

IO pattern (3, Insightful)

ThePhilips (752041) | about 2 months ago | (#47251097)

That's a heck of a lot of data, and certainly more than most folks will write in the lifetimes of their drives.

Continued write cycling [...]

That's just ridiculous. Since when is reliability measured in how many petabytes can be written?

Spinning disks can be forced into inefficient patterns, speeding up the wear on the mechanics.

SSDs can be easily forced to do a whole erase/write cycle just by writing single bytes into the wrong sector.

There is no need to waste bus bandwidth with a petabyte of data.

The problem was never the amount of information.

The problem was always the IO pattern, which might accelerate the wear of the media.

Re:IO pattern (2)

m.dillon (147925) | about 2 months ago | (#47251701)

Yes, but it's a well-known problem. Pretty much the only thing that will write inefficiently to an SSD (i.e. cause a huge amount of write amplification) is going to be a database whose records are updated (effectively) randomly. And that's pretty much it. Nearly all other access patterns through a modern filesystem will be relatively SSD-efficient. (Keyword: modern filesystem.)

In the past, various issues could cause excessive write amplification. For example, filesystems in partitions that weren't 4K-aligned, filesystems using too small a block size, and less efficient write-combining algorithms in earlier SSD firmware. All of those issues, on a modern system, have basically been solved.

-Matt
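
To make the write-amplification point concrete, here is a toy worst-case model. The 2 MiB erase-block size is an illustrative assumption, not any specific drive's geometry, and real controllers with wear leveling and spare area do far better than this; but it shows why small random in-place updates cost so much more flash wear than block-aligned sequential writes:

    # Toy worst-case write-amplification model (geometry is an assumption).
    ERASE_BLOCK = 2 * 1024 * 1024          # assume a 2 MiB erase block

    def worst_case_amplification(logical_write: int) -> float:
        """Physical/logical byte ratio if every write forces a full block rewrite."""
        physical = max(ERASE_BLOCK, logical_write)   # read-modify-write of the block
        return physical / logical_write

    print(worst_case_amplification(4 * 1024))        # random 4 KiB update: 512x
    print(worst_case_amplification(ERASE_BLOCK))     # block-aligned sequential: 1.0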

limited endurance? (0)

Anonymous Coward | about 2 months ago | (#47251397)

Why does flash memory have limited endurance? Too bad companies can't use regular DDR3 memory in SSDs.

Re:limited endurance? (1)

camperdave (969942) | about 2 months ago | (#47252009)

DDR3 must be continuously powered and periodically refreshed in order to keep the data alive. No power, no data. Great for RAM, but completely wrong for long-term storage.

How is 700TB "endurance"? (-1, Troll)

Anonymous Coward | about 2 months ago | (#47251641)

How is 700TB "endurance"? I copy near a TB of data from Backups at work almost daily. So 1-2 years (if that) is "endurance"? Screw that! Sounds more like modern SSD's suck hard and aren't designed to last past 1-2 years of work. I'll stick with traditional HD's until they figure out DRAM drives that don't need batteries or constant power.

Re:How is 700TB "endurance"? (2)

unrtst (777550) | about 2 months ago | (#47251911)

How is 700TB "endurance"? I copy near a TB of data from Backups at work almost daily. So 1-2 years (if that) is "endurance"? Screw that! Sounds more like modern SSD's suck hard and aren't designed to last past 1-2 years of work. I'll stick with traditional HD's until they figure out DRAM drives that don't need batteries or constant power.

How large is your backup filesystem(s)? This was 700TB written to a 250GB drive. If you're copying "near a TB of data from Backups ... almost daily", then I'm betting you have many, many TB of storage in the backup pool... so divide that by 250GB, multiply by 700TB, and that's the endurance the SSDs would have. However, even then it doesn't really apply... your backups are not likely to be rewriting a lot of sectors (e.g. deduplication, if used, means few files are actually written). You also said you copied FROM backups, so those are just reads (I'm presuming those are going out to multiple clients).

In any case, the 700TB "endurance" figure is still accurate, even if you consider that fragile - it's a level of endurance under a specific use case.

FWIW, for a backup system, I'd also stick with spinning disks (or tape) for now and well into the foreseeable future. Throughput and IOPS are not very important for backup storage, and you'll get way more GB/dollar from HDDs.
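
As a rough illustration of that scaling: the 20TB pool size below is purely a hypothetical number, while 700TB per 250GB drive is the figure from this test:

    # Hypothetical scaling of the per-drive endurance figure to a backup pool.
    DRIVE = 250 * 10**9                    # 250 GB drive, as in the endurance test
    PER_DRIVE = 700 * 10**12               # ~700 TB written before the weakest drives died

    pool = 20 * 10**12                     # assumed 20 TB backup pool (illustration only)
    pool_endurance = (pool / DRIVE) * PER_DRIVE

    print(f"{pool / DRIVE:.0f} drive-equivalents of flash")                # 80
    print(f"~{pool_endurance / 10**15:.0f} PB of writes at the same wear") # ~56 PB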

Re:How is 700TB "endurance"? (-1)

Anonymous Coward | about 2 months ago | (#47252013)

How is 700TB "endurance"? I copy near a TB of data from Backups at work almost daily. So 1-2 years (if that) is "endurance"? Screw that! Sounds more like modern SSD's suck hard and aren't designed to last past 1-2 years of work.

Well, they're obviously adequate for use in applications that don't require a lot of writing, but they're not ready to replace RAM/platters in systems that have a lot of throughput.

It's a shame, since you'd think solid-state memory would be essentially unlimited in reuse capacity, but flash technology is not the be-all-end-all. Call me back when there's a memory tech that is as fast as RAM and never wears out!

(Looking at you, memristors...)

Graceful Failover ? What Graceful Failover? (1)

citizenr (871508) | about 2 months ago | (#47251707)

Even Intel, behemoth of reliable server hardware, wasn't able to fix the SandForce problems.
According to an Intel representative, graceful failover of an SSD means you _kill_ the drive in software during a reboot :DDD instead of switching it to read-only mode (like the documentation promises).

Kiss your perfectly readable data goodbye.

Endurance Experiment Writes One Petabyte To Three (1)

Culture20 (968837) | about 2 months ago | (#47251767)

Endurance Experiment Writes One Petabyte To Three Consumer SSDs
"how much data could be written to six consumer drives. One petabyte later, half of them are still going."