
Solid State Drives Tested With TRIM Support

samzenpus posted more than 5 years ago | from the try-them-out dept.

Data Storage 196

Vigile writes "Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues. The promised land is supposed to be the mighty TRIM command — a way for the OS to indicate to the SSD a range of blocks that are no longer needed because of deleted files. Apparently Windows 7 will implement TRIM of some kind but for now you can use a proprietary TRIM tool on a few select SSDs using Indilinx controllers. A new article at PC Perspective evaluates performance on a pair of Indilinx drives as well as the TRIM utility and its efficacy."
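
For the curious, here is roughly what "the OS indicating unused blocks" looks like once an operating system exposes it. This is a minimal sketch using the Linux FITRIM ioctl (the batched, fstrim-style interface added to later kernels); it is not the proprietary wiper tool the article tests, and it assumes a kernel and filesystem that support the call:

    import fcntl, os, struct

    # FITRIM = _IOWR('X', 121, struct fstrim_range) in linux/fs.h; the struct is
    # three __u64 fields: start, len, minlen (all byte counts). 0xC0185879 is the
    # x86/ARM encoding of that ioctl number.
    FITRIM = 0xC0185879

    def fstrim(mountpoint, start=0, length=2**64 - 1, minlen=0):
        """Ask the filesystem at `mountpoint` to tell the SSD which free space to discard."""
        fd = os.open(mountpoint, os.O_RDONLY | os.O_DIRECTORY)
        try:
            arg = struct.pack('=QQQ', start, length, minlen)
            result = fcntl.ioctl(fd, FITRIM, arg)     # kernel walks free extents, issues discards
            return struct.unpack('=QQQ', result)[1]   # number of bytes actually trimmed
        finally:
            os.close(fd)

    if __name__ == '__main__':
        print('trimmed %d bytes' % fstrim('/'))

The fstrim(8) utility in util-linux is essentially a thin wrapper around this same call.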

I love trim (-1, Offtopic)

Anonymous Coward | more than 5 years ago | (#28367839)

but I love bald pussy even more!

But its the future (5, Interesting)

telchine (719345) | more than 5 years ago | (#28367857)

I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.

Re:But its the future (1)

Rockoon (1252108) | more than 5 years ago | (#28367915)

The mechanicals may be able to stay ahead in capacity for a long, long time, even though they obviously have no hope of competing in the performance arena ever again.

Re:But its the future (1, Interesting)

Anonymous Coward | more than 5 years ago | (#28368217)

The mechanicals may be able to stay ahead in capacity for a long, long time, even though they obviously have no hope of competing in the performance arena ever again.

I disagree with this. With mechanicals, we're adding 250-500GB with each iteration. With SSD, they're doubling with each iteration. Considering they're basically at 1/8 the capacity of mechanical drives, it'll only be another couple of years before they surpass mechanical drives.

Re:But its the future (4, Informative)

rm999 (775449) | more than 5 years ago | (#28368347)

Actually, magnetic disks have exponentially increased in capacity since the 50s. In fact, the rate of increase has been higher than the growth of transistor count.

See: http://www.scientificamerican.com/article.cfm?id=kryders-law [scientificamerican.com]

Re:But its the future (2, Interesting)

dgatwood (11270) | more than 5 years ago | (#28368677)

Things have changed a lot in four years. Since 2005, hard drives have only increased from 500 GB to 2 TB---a factor of 4. In that same time, Compact Flash cards increased from 8GB to 128 GB---a factor of 16. Flash density increases are severely outpacing hard drive density increases, and unlike hard drives, flash storage isn't rapidly becoming less reliable as the density increases....

Re:But its the future (2, Insightful)

j-turkey (187775) | more than 5 years ago | (#28369233)

...and unlike hard drives, flash storage isn't rapidly becoming less reliable as the density increases....

I can see the logic behind the argument that hard drives should become more failure prone as the platter density increases, but I've yet to see any data substantiating this point. Your claim that hard drives are rapidly becoming more unreliable makes your statement come off as even more dubious to me.

I don't mean to attack you or come off as a complete dickhole, but do you know of any data to back this up? I'm legitimately curious, as in my (completely anecdotal) experience, magnetic hard drives seem to be getting more and more reliable.

(Mind you, I'm seriously knocking on wood... I know that I'm going to eat my words when I wake up to multiple simultaneous drive failures just for opening my big fat mouth about my good fortune with magnetic data.)

Re:But its the future (3, Informative)

Courageous (228506) | more than 5 years ago | (#28369359)

Flash drives have longer MTBF than spinning media... so they last longer. However, a less well-known fact is that flash drives have a URE rate 10-100X worse than spinning media typically does today. It's getting fixed, but the fellow you're replying to is basically wrong.

C//

It is yesterdays future ... (0)

Anonymous Coward | more than 5 years ago | (#28368785)

> Actually, magnetic disks have exponentially increased in capacity since the 50s.
> In fact, the rate of increase has been higher than the growth of transistor count.

No, it hasn't!

According to the (slightly stale, from 2005) article, magnetic storage density has increased by a factor of 50 million in 49 years, from 2,000 bit/sq.in. in 1956 to 100 Gbit/sq.in. in 2005.

In the same time, transistor count has increased from 1 (a single transistor) in 1956 to a billion transistors (Gbit RAM) in 2005.

Even if you start only in 1970 with the 1 Kbit DRAM, you get a millionfold (2^20) increase in just 35 years, or a doubling every 1.75 years, while disks increased by less than 2^26 in 49 years, doubling only every 1.9 years.

And these days, hard disks are as dead as a certain parrot!

Unlike in 2005, none of the current must-have gadgets like iPhones, navigation systems or high-end netbooks still sports one; they are increasingly relegated to a role in Grandma's cheap Walmart computer holding 10 years' worth of crappy photos and videos.

Re:It is yesterdays future ... (4, Insightful)

geekboy642 (799087) | more than 5 years ago | (#28369115)

I can buy a terabyte hard drive for around $100. For the same hundred dollars, the best SSD I can find is 32GB. On my computer, Steam's cache folder is bigger than 32GB. My music player has a 120GB drive, my DVR has a 350GB drive, and my backup server has a 1.5TB raid. Just because expensive mobile gadgets use expensive solid-state drives does not mean hard drives are dead, dying, or even decaying.

Re:It is yesterdays future ... (1)

j-turkey (187775) | more than 5 years ago | (#28369267)

I can buy a terabyte hard drive for around $100. For the same hundred dollars, the best SSD I can find is 32GB. On my computer, Steam's cache folder is bigger than 32GB. My music player has a 120GB drive, my DVR has a 350GB drive, and my backup server has a 1.5TB raid. Just because expensive mobile gadgets use expensive solid-state drives does not mean hard drives are dead, dying, or even decaying.

I totally agree, the fat lady hasn't sung when it comes to magnetic hard drives. It does seem like SSDs will soon find their place in performance-oriented systems, though. I'm looking forward to having them sorted out enough that my next desktop will have an SSD for the OS, swap, and perhaps applications (which all tend to be hindered by the slow random access of magnetic media) - and a big honkin' magnetic drive for storage.

Re:It is yesterdays future ... (2, Informative)

setagllib (753300) | more than 5 years ago | (#28369557)

If you can afford an SSD, why would you waste it on swap? Why not just buy more RAM? If you ever actually need swap, you are doing something wrong.

Re:But its the future (1)

complete loony (663508) | more than 5 years ago | (#28369347)

Unfortunately, the speed of the interface for reading from and writing to magnetic disks hasn't been increasing at the same rate as their capacity.

Re:But its the future (1)

Courageous (228506) | more than 5 years ago | (#28369367)

It will be more than 6 years before SSDs surpass the commodity SATA segment in $/GB, and $/GB is definitively what drives Tier 2 storage. So while the enterprise 10K/15K's years are numbered (I expect days of total destruction NLT 2011), SATA will be around for a while.

C//

Re:But its the future (3, Insightful)

Anonymous Coward | more than 5 years ago | (#28368209)

I finally got the opportunity to test out SSDs this year. There may be the odd teething problem to get over, but in my mind there is no market in the future for mechanical drives except maybe as cheap low-speed devices for storing non-critical information... in much the same way as tape drives were used a few years ago.

Well damn, I'll just have to tell our customer that has something like a 30 petabyte TAPE archive that's growing by about a terabyte or more each and every day that they're spending money on something you say is, umm, outdated and these newfangled devices that the next power surge will totally fry are the wave of the future.

Guess what? There's a whole lot more money spent on proven rock-solid technology by large organizations than you apparently know.

Tape and hard drives are going NOWHERE. For a long, long time to come.

Re:But its the future (4, Interesting)

MeatBag PussRocket (1475317) | more than 5 years ago | (#28368349)

if by "proven rock-solid" you mean horrid fidelity and media degradation rates, i'd say you are correct about tapes. if you're client has a 30 petabyte tape archive there is probably some horrible inefficiency goin on. (i'm sure you probably have little control ofer the situation, i have similar clients) but if they have 30Pb of data on tape that they access regularly, they're wasting a LOT of time just retrieving data. you should really consider a SAN NAS or similar. HDD storage is very cheap these days and LTO4 tapes are pretty pricey. we all know they have shoddy storage quality to boot. if they dont access it regulary then its probably a real waste of money to own, record and store 30Pb of data. either way, just the physical storage of that many tapes is probably about equivelant to the sq. footage needed for a rack or 2 (or 3) of blade servers with the same storage capacity.

Re:But its the future (1)

morgan_greywolf (835522) | more than 5 years ago | (#28368769)

Mod parent up. SSDs are a very immature technology and are not, yet, ready for the enterprise data center. Wait a few years until the technology matures. Magnetic hard drives have been around for what? 30-40 years? They're stable and proven. How many multi-petabyte enterprise data centers have you seen running SSDs as their primary storage? None. Yeah, that's what I thought.

They also have a long way to go before they compete with magnetic hard drives in terms of cost.

Re:But its the future (2, Insightful)

Phishcast (673016) | more than 5 years ago | (#28368975)

How many multi-petabyte enterprise data centers have you seen running SSDs as their primary storage? None. Yeah, that's what I thought.

Agreed that SSDs have a long way to go on price to compete, but it's simply not true that they're not yet ready for the enterprise datacenter. All the larger enterprise storage array vendors (EMC, HDS, IBM, NetApp) say they're ready, and most are shipping them with decent sales. Despite their price and the "fact" you've so eloquently stated, you'll find them in many Fortune 500 datacenters simply because they outperform spinning disks by such a factor that they're cheaper per IO. I believe today the vast majority of vendors providing enterprise-class SSD drives are sourcing them from STEC. They play some tricks to work around write limits, but they've got ~5 year MTBF ratings.

Re:But its the future (2, Interesting)

billcopc (196330) | more than 5 years ago | (#28369137)

All the larger enterprise storage vendors are full of shit. They say the SSD is "ready" because it's the hottest buzzword in the industry, which always commands huge profit margins.

On one hand, I can use cheap, fast 2.0TB SATA drives for 11 cents a gig, or I can go the SSD route with 256GB drives at $4.00 a gig. That's OEM cost, which means EMC and friends will triple that number to convince your boss these drives are "special".

Yeahhh... give me the one that costs 36 times more, takes up 4 times more space, requires 8 times more controllers and is guaranteed to wear out in a few years. If your I/O patterns are so messed up that today's horrendous SSDs actually lower your cost per I/O, you need to rethink your information architecture.

Re:But its the future (1, Insightful)

rcw-home (122017) | more than 5 years ago | (#28369289)

Yeahhh... give me the one that costs 36 times more, takes up 4 times more space, requires 8 times more controllers and is guaranteed to wear out in a few years. If your I/O patterns are so messed up that today's horrendous SSDs actually lower your cost per I/O, you need to rethink your information architecture.

There are two schools of thought regarding SSDs:

  1. Those who talk shit about them
  2. Those who have used them [newegg.com]

Re:But its the future (1)

Phishcast (673016) | more than 5 years ago | (#28369299)

How many of these 2.0TB SATA drives are you going to purchase to do the same number of random cache-miss IOPS that a single SSD can do? The math does not lie: applications are out there that can gain massive performance improvements and save money at the same time using SSDs. It's so easy to say hey, re-architect your application. Guess what? Mission-critical apps grow organically and are not always optimized. How heavily used will your application get before even your optimized IO creeps into the realm of "I/O patterns so messed up that today's horrendous SSDs actually lower your cost per I/O"? How much money do you think the bank/nation-wide retailer/Wall Street firm would need to spend to "rethink their information architecture"? Not to mention the power and cooling of a room full of short-stroked 2TB SATA disks vs. one cabinet of SSDs.

SSD is not gaining traction simply because it's a buzzword and commands huge profit margins (both are true). It works. It solves real problems. In the right cases it saves money. If you spent some time in a larger organization I suspect you'd change your tune. You're comparing 2TB SATA apples to 256GB SSD oranges. Both may be fruit, but they're not interchangeable.
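
To put rough numbers on the cost-per-IOP point (purely illustrative ballpark figures, not vendor specs):

    # Illustrative cost-per-random-IOP comparison; every number here is a ballpark assumption.
    sata = {'capacity_gb': 2000, 'cost': 220,  'iops': 100}     # ~11 cents/GB, 7200rpm-class
    ssd  = {'capacity_gb': 256,  'cost': 1024, 'iops': 10000}   # ~$4/GB, enterprise-class flash

    for name, d in (('SATA', sata), ('SSD', ssd)):
        print(name, round(d['cost'] / d['iops'], 2), '$ per random IOP')
    # SATA: ~$2.20 per IOP, SSD: ~$0.10 per IOP -- which is why IOPS-bound (rather than
    # capacity-bound) workloads can come out cheaper on flash even at 36x the $/GB.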

Re:But its the future (1)

timmarhy (659436) | more than 5 years ago | (#28368505)

nope. tape is STILL the only way to back up your data if you're serious. i've been hearing about the death of the spinning platter for a decade now and it's still just around the corner, much like fusion and peak oil.

Re:But its the future (1)

hedwards (940851) | more than 5 years ago | (#28368561)

What are you talking about? We've already seen peak oil; that came about a few years back.

Re:But its the future (1, Troll)

timmarhy (659436) | more than 5 years ago | (#28368649)

you don't have a fucking clue, because not every country declares its reserves. at some point oil WILL run out, but anyone claiming to know when is a damned liar and not to be trusted.

Re:But its the future (1)

uncqual (836337) | more than 5 years ago | (#28369183)

You kids today... I've been hearing about the death of spinning platters for two decades.

Eventually they will virtually disappear as paper tape, cards and, more recently, floppies have -- but it will take a lot longer than most expect.

Now get off my lawn.

Re:But its the future (1)

noidentity (188756) | more than 5 years ago | (#28368621)

As long as magnetic drives give lower effective price per bit, they will be used.

Re:But its the future (0)

Anonymous Coward | more than 5 years ago | (#28368653)

What do you mean "a few years ago"... my company still uses tape drives.

Re:But its the future (0)

Anonymous Coward | more than 5 years ago | (#28369111)

The thing, you see, is that hard drives have a much better linear write speed than SSDs.

The problem (for HDs) is that the IDE protocol (regardless of whether it's over IDE or SATA), and likewise the SCSI protocol (over... whatever) try to expose a high level interface: linearly addressed blocks. And you don't know (at the OS level) where they are, how long it's going to take to get there and write to them and so on. On top of that, the OS exposes a higher again level interface: files.

The result is an interface that's much slower than it could be.

To take a simple example, let's say you're running a database. A transaction commits (or prepares). The DB doesn't care *where* the prepare or commit is written, and it would be quite happy to reserve 4 blocks per cylinder (latency .5 ms on a 15k rpm disk) or 16 (.13 ms) to write it down ASAP, instead of the current 5-10 ms. But it can't. It'd need to know where the heads are and then say "on that same cylinder (or one next to it), platter m, sector n1, or n2, or n3, write *this* *next time you're on it*, then tell me (I'll clean up later)." Not possible.

The OS gets in the way. Theoretically it could expose a different FS access layer that would allow it, but the HD interfaces we have just can't support it.

And that's the issue. And it's the same issue with SSDs. OSes are perfectly capable of handling 64k write blocks, avoiding rewriting too many times, packing writes, and so on, but we're stuck with this "nice" "addressable" "block" crap and drives trying to do too many things that the OS could do much better.

It's a really f*cked up situation.
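
A quick sanity check of the rotational-latency figures above, assuming the commit record becomes ready at a uniformly random point in the platter's rotation:

    # 15,000 rpm -> 4 ms per revolution; with N candidate commit slots spread evenly
    # around the cylinder, the average wait is half the gap between slots.
    rpm = 15000
    rev_ms = 60.0 / rpm * 1000
    for slots in (1, 4, 16):
        print(slots, rev_ms / slots / 2, 'ms average rotational wait')
    # -> 1: 2.0 ms, 4: 0.5 ms, 16: 0.125 ms (seek time excluded, which is the point:
    #    the heads stay on the cylinder they are already on)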

What I really want to know (2, Insightful)

earthforce_1 (454968) | more than 5 years ago | (#28367891)

Which Linux filesystem works best with SSDs? I don't intend to touch Win7.

Re:What I really want to know (-1)

Anonymous Coward | more than 5 years ago | (#28367967)

FAT. Seriously. Modern file systems are more complicated and have expectations about the underlying storage medium. SSD breaks those assumptions like CmdrTaco breaking the ass cherry on an 12 year old boy -- with similar results.

You know how python fanboys insist that the compiler can better optimize high level code since it sees the big picture? It's the same idea, except with FAT and SSD it's not a hypothetical bullshit scenario.

Re:What I really want to know (1)

loufoque (1400831) | more than 5 years ago | (#28368057)

NILFS2 I suppose.
Supposedly beats the crap out of LogFS, YAFFS and JFFS2 when using SSDs.

Re:What I really want to know (2, Informative)

vadim_t (324782) | more than 5 years ago | (#28368151)

That's because JFFS and such are intended to be used on top of a raw flash device.

SSDs do wear levelling internally already, so a filesystem that tries to do it as well is redundant.

Re:What I really want to know (3, Insightful)

SanityInAnarchy (655584) | more than 5 years ago | (#28368267)

That's my biggest complaint about them, actually -- these "teething problems" people mention are pretty much directly a result of OSes treating SSDs as though they were spinning magnetic disks.

No, the OS should be able to do its own wear leveling. If you need to pretend it's a hard drive, do it in the BIOS and/or the drivers, not in the silicon -- at least that way, you can upgrade it later when things like this come out.

Re:What I really want to know (0)

Anonymous Coward | more than 5 years ago | (#28368671)

Problem with that - Windows.

Treating the SSD as an SSD, and letting the OS do its own wear levelling, would require operating system support. It would likely also require a different filesystem. Microsoft would have to support this in a future version of Windows. Hardware manufacturers are not going to wait for Microsoft to catch up, and their hardware has to work with OSes available right now. So they have no choice in the matter.

Re:What I really want to know (0)

Anonymous Coward | more than 5 years ago | (#28368999)

Fuck that. You can "upgrade it later when things like this come out" through firmware updates. That's kind of the whole point of the article (the performance changes from adding TRIM support through a firmware update and a utility, since the OS doesn't support it yet). God forbid we RTFA though.

If OS support vs. silicon support would result in problems similar to USB vs. FireWire (in addition to using system resources that don't need to be used), it's not worth the potential portability or updateability.

Re:What I really want to know (3, Insightful)

gad_zuki! (70830) | more than 5 years ago | (#28369019)

No way, let's have the firmware do this. The problem with your approach is that the OS won't understand the drive as well as the manufacturer does, so it will always be a sub-optimal solution. Don't tie the hands of the manufacturer to put intelligence in his drives. For instance, the best way to wipe a disk is via an ATA command [zdnet.com], and not through multiple passes of wipes. The manufacturer knows where the heads are and how the drive writes. The SSD situation is somewhat similar.

Re:What I really want to know (1)

complete loony (663508) | more than 5 years ago | (#28369317)

Yes and No.

The Linux kernel's recent UBIFS [wikipedia.org] flash support is, I believe, separated into 2 distinct layers. There's a layer for logical-to-physical address translation with wear-leveling and free space tracking (UBI), and a separate layer for organising the storage of the filesystem within those blocks while keeping stored data in block sizes that match the underlying physical media and re-writing whole blocks at once.

I think that kind of abstraction is useful enough for the OS, potentially with the UBI layer provided by a hardware device.

Re:What I really want to know (1)

Ilgaz (86384) | more than 5 years ago | (#28369475)

It is all NTFS's fault. It's impossible to turn off journaling, the OS doesn't move the journal, etc.

On HFS+, the journal is also an ordinary file and backwards compatible. In fact, theoretically, OS X could even journal FAT if it wanted to.

So, if you turn off journaling, half (or more) of the potential problem is gone. First, there won't be a journal in one area being written over and over. Second, OS X won't enable the "hot band" function, which puts the most accessed files (hot files) at the beginning of the disk, in a specific area, under many strict and fail-safe conditions. It won't dare to move files on the fly if journaling is not enabled, and a hot band makes no sense on an SSD anyway. It even erases hotfiles.btree (the database) right when journaling is disabled.

I have a feeling that the issue is mostly a Windows issue, because of the filesystem. How come we never hear horror stories from "MacBook Air" owners who opted for the SSD? Could we be arguing about something only the ActiveWin etc. crowd argues about?

Re:What I really want to know (4, Informative)

blitzkrieg3 (995849) | more than 5 years ago | (#28368283)

You beat me to it, but in the spirit of adding value, there's a good article here [linux-mag.com]. Another benefit of nilfs2 is that you can easily snapshot and undelete files, giving it a sort of built-in "time machine" capability (to use Apple's terminology).

I'm just surprised that none of the Linux distros are talking about it yet. You would think, with the Apple and IBM laptops using SSDs today, that there would be some option somewhere. I think everyone is distracted by btrfs.

Re:What I really want to know (1)

MeatBag PussRocket (1475317) | more than 5 years ago | (#28368361)

i would think EXT4 would be the FS of choice for an SSD; if i'm wrong, i wouldn't be surprised, but why those over EXT4?

Re:What I really want to know (0)

Anonymous Coward | more than 5 years ago | (#28368541)

Normal file systems are designed for hard drives. That means a file's blocks are contiguous, if possible, and if you need to update a block, you overwrite it in place. With SSDs, there's no rotational delay, seek delay, etc., so there's no advantage to having a file's blocks near each other. However, there is a big disadvantage in updating a block in place. Log file systems don't update blocks in place; they leave the old block as-is and write the data to an unused block. That means you get free snapshot capabilities, and it also means they work well with SSDs' limited write cycles.
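
A toy sketch of that "never overwrite in place" idea (hypothetical code, not modeled on NILFS2 or any real filesystem): every write appends, and a snapshot is nothing more than a saved copy of the index.

    class ToyLogFS:
        def __init__(self):
            self.log = []        # storage is append-only: old blocks are never overwritten
            self.index = {}      # logical block number -> position of its latest copy in the log

        def write(self, lbn, data):
            self.index[lbn] = len(self.log)   # old copy stays in the log, it just stops being current
            self.log.append(data)

        def read(self, lbn):
            return self.log[self.index[lbn]]

        def snapshot(self):
            return dict(self.index)           # freezing the index freezes a whole consistent view

    fs = ToyLogFS()
    fs.write(7, b'version 1')
    snap = fs.snapshot()
    fs.write(7, b'version 2')
    assert fs.read(7) == b'version 2'
    assert fs.log[snap[7]] == b'version 1'    # the "deleted" old version is still there: undelete for free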

Re:What I really want to know (1)

onefriedrice (1171917) | more than 5 years ago | (#28368611)

I've got ext4 on my SSD. It performs very well, but nilfs is a better fit for an SSD. I'll reformat to nilfs sometime within the next few kernel release cycles. Nevertheless, ext4 is just fine--I even have journaling and all the other bells and whistles. I'm not afraid of the additional wear as I suspect the drive will fail by some other technical malfunction long before the flash cells wear out.

By the way, it's true what they say: An SSD is the one component that will provide you with the most noticeable performance boost your computer has ever had, and it's one of the cheapest, too. I just got a 30G for ~$120 and my root filesystem fits comfortably on it (obviously my data is on a spinning disk). Now I boot in seconds and applications (yes, even Firefox) load instantly--makes "bloat" virtually irrelevant. Seriously, I still like platter drives for their capacity, but you don't need a lot of space to store your root filesystem and you can't beat the performance improvement for just over a hundred bucks spent.

In my opinion, an SSD need no longer be considered a toy for early adopters. I certainly don't consider myself an early adopter. It just makes sense. Obviously SSD drives aren't as "mature" as our beloved platter drives, but they're not exactly brand new technology either.

Re:What I really want to know (1)

MichaelSmith (789609) | more than 5 years ago | (#28368977)

The SSD on my eeepc has read speeds similar to a laptop hard disk, but read times are more consistent because there is no waiting for head seeks and platter rotation. This makes it seem faster to me.

Re:What I really want to know (1)

benow (671946) | more than 5 years ago | (#28369715)

Yes, the speedup is dramatic. The random access and multi-threaded speedup play a large role, and are left out of many comparisons. MLC and a good interface make a difference, certainly, but the major speedup is from random access.

Lots of RAM and an SSD will make a box fly.

Re:What I really want to know (1)

RiotingPacifist (1228016) | more than 5 years ago | (#28369153)

since when has ext been the best choice for anything? ext has always been about balance, so i doubt it's the best choice for SSDs. i'd put my money on a log filesystem [wikipedia.org], i.e. you couldn't be more wrong and the GP is correct, because NILFS2 will write to used blocks much less often than conventional systems. OFC ext will be better than FAT, because the file-allocation table block is going to be a problem, and it turns out ext4 with COW will also be good (but not as good as a log system, and the journal itself will be a problem)

ReiserFS 3 (1)

bobbuck (675253) | more than 5 years ago | (#28369393)

I bought a WinTec FileMate Ultra 24G from Tiger Direct that plugs into the ExpressCard Slot. I am now using that as the boot partition with reiserfs (v3), elevator=noop, and mounted noatime. This might not give the very best performance but it is much faster than the stock HD. OpenOffice loads in 2 seconds. I turned down the /sys/block/sdb/queue/read_ahead_kb but I'm not sure where it should be. I put my logs on tmpfs. Some people put the Firefox cache on tmpfs.

fragmentation? (1)

convolvatron (176505) | more than 5 years ago | (#28367935)

can someone explain why fragmentation in the mapping between logical blocks and physical addresses causes performance degradation?

is it an issue with logically sequential reads being spread across multiple pages?

a multi-level lookup to perform the mapping?

?

Re:fragmentation? (4, Informative)

Vigile (99919) | more than 5 years ago | (#28368019)

The older Slashdot post linked in the story points to an article that covers that topic very well: http://www.pcper.com/article.php?aid=669 [pcper.com]

Re:fragmentation? (3, Interesting)

sexconker (1179573) | more than 5 years ago | (#28368197)

Because, basically, flash drives are laid out in levels.

When you delete, you simply map logical space as free.

If you go to use that free space later, you find that area, and drop shit into it. It's I dunno, a 32 KB block of memory called a page. If the page is full (to the point where you can't fit your new shit) of "deleted" files, you first need to write over those deleted files, then write your actual data.

If the logical space is full of good, fragmented files (with deleted files interspersed), you need to read the page out to memory, reorder the live data and remove the deleted data, then write the full page back.

Think of it as having a notebook.
You can write to 1 page at a time, only.

Page 1 write

Page 2 write

Page 3 write

Page 2 delete

Page 2 write (still space)

Page 2 write (not enough space, write to page 4 instead)

Page 2 delete

Page 2 write (not enough space, no more blank pages, read page 2 and copy non-deleted shit to scratch paper, add new shit to scratch paper, cover page 2 in white out, copy scratch paper to whited-out page 2)

Re:fragmentation? (5, Funny)

iluvcapra (782887) | more than 5 years ago | (#28368353)

If you go to use that free space later, you find that area, and drop shit into it.

Knock it off with all the fancy jargon!

Re:fragmentation? (5, Informative)

cbhacking (979169) | more than 5 years ago | (#28368199)

Disclaimer: I am not an SSD firmware author, although I've spoken to a few.*

As best I can understand it, the problem is that writes are scattered across the physical media by wear-leveling firmware on the disk. In order to do this, the firmware must have a "free list" of sorts that allows it to find an un-worn area for the next write. Of course, this unworn area also needs to not currently be storing any relevant data.

Now, consider an SSD in use. Initially, the whole disk is free, and writes can go anywhere at all. They do, too - you end up with meaningful (at some point) data covering the entirety of the physical memory cells pretty quickly (consider things like logfiles, pagefiles, hibernation data, temporary data, and so forth). Obviously, most of that data doesn't mean anything anymore - to the filesystem, only perhaps 20% of the SSD is actually used after 6 months. However, the SSD's firmware thinks that every single part has now been used.

Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted. The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings. The other problem is that these tables get *huge* - a typical home system might have between 100K and 1M files on it after a few months of usage, but probably generates and deletes many thousands per day (consider web site cookies, for example - each time they get updated, the wear leveling will write that data to a new portion of the physical storage).

Maintaining the tables themselves is possible, and when a logical block gets overwritten to a new physical location, the old location can be freed. The problem is that this freeing comes at the same time that the SSD needs to find a new location to write to, and the only knowledge it has about physical blocks which can safely be overwritten is ones where the logical block has been overwritten already (to a different physical location). Obviously, the lookup into the table of active blocks has to be indexed by logical block, which may make it difficult to locate the oldest "free" physical blocks. This could lead to searches that, even with near-instant IO, result in noticeable slowdowns.

Enter the TRIM command, whereby an OS can tell the SSD that a given range of logical blocks (which haven't been overwritten yet) are now able to be recycled. This command allows the SSD to identify physical blocks which can safely be overwritten, and place them in its physical write queue, before the next write command comes down from the disk controller. It's unlikely to be a magic bullet, but should improve things substantially.

* As stated above, I don't personally write this stuff, so I may be mis-remembering or mis-interpreting. If anybody can explain it better, please do.
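
If it helps, here is a deliberately over-simplified sketch of the bookkeeping described above (a toy model, nothing like real firmware): the drive keeps a logical-to-physical map plus a pool of writable pages, overwrites retire the old physical page, and TRIM is just the OS retiring pages the drive would otherwise have to treat as live.

    class ToyFTL:
        def __init__(self, physical_pages):
            self.writable = list(range(physical_pages))  # erased / never-written physical pages
            self.l2p = {}                                # logical page -> physical page
            self.stale = set()                           # physical pages holding obsolete data
            self.data = {}                               # physical page -> contents

        def write(self, logical, payload):
            if not self.writable:
                self.reclaim()                           # slow path: erase before we can write
            phys = self.writable.pop()
            if logical in self.l2p:
                self.stale.add(self.l2p[logical])        # the old copy becomes garbage
            self.l2p[logical] = phys
            self.data[phys] = payload

        def trim(self, logical):
            # The OS says this logical page no longer holds live data (e.g. its file was deleted).
            if logical in self.l2p:
                self.stale.add(self.l2p.pop(logical))

        def reclaim(self):
            # Pretend stale pages can be erased individually; real drives must erase whole
            # blocks and copy out any live pages sharing them, which is the slow part.
            self.writable.extend(self.stale)
            self.stale.clear()
            if not self.writable:
                raise IOError('drive full: nothing known to be stale')

Without trim(), a page whose file was deleted stays in l2p forever, so reclaim() can never recover it and the pool of writable pages only shrinks.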

Re:fragmentation? (5, Informative)

aztektum (170569) | more than 5 years ago | (#28368399)

For a thorough (read: long) primer on SSDs and long-term performance woes, Anand's overview [anandtech.com] is a must-read.

Re:fragmentation? (2, Interesting)

sootman (158191) | more than 5 years ago | (#28368699)

Obviously, the firmware needs to be able to detect when data on disk gets obsoleted, and can safely be deleted. The problems with this are that this leads to *very* complicated translation tables - logical disk blocks end up having no relation at all to physical ones, and the SSD needs to track those mappings.

Would it solve the problem (or, I guess I should say, remove the symptoms... for a while, at least) to do a full backup, format the SSD, and restore? I know it's not an ideal solution but rsync or Time Machine would make it pretty painless.

Also, if I had an SSD and was browsing a lot I could see making a ramdisk for things like browser cache files. Too bad Safari and Firefox don't seem to let you specify where you want your cache to be anymore, like old browsers used to. I guess you could make a symlink or something but then you'd HAVE to have that drive mounted.

Re:fragmentation? (1)

Bigjeff5 (1143585) | more than 5 years ago | (#28369089)

It might ease the problem, but it wouldn't solve it. The controller on the drive needs to clear the pages and reset everything back to an erased state, and it doesn't do that with just a format as far as I am aware. You'd need a format -and- a trim to get it back to like-new speeds.

I like your idea for ramdisks and cache though.

Re:fragmentation? (2, Informative)

Anonymous Coward | more than 5 years ago | (#28369717)

browser.cache.disk.parent_directory

Re:fragmentation? (2, Informative)

42forty-two42 (532340) | more than 5 years ago | (#28369185)

The problem isn't scanning metadata - the problem is relocating data prior to an erase. Flash memory is built from erase blocks that are quite large - 64k to 128k is typical. You can write to smaller regions, but to reset them for another write you have to pave over the whole neighborhood. However, the OS is sending writes at 512-byte sector granularity. So the drive essentially has to mark the old location of the data as obsolete and place it somewhere else.

When the drive has been used enough, however, it may have trouble finding an empty, erased sector to write to. So it has to erase some erase block. But if all erase blocks still hold good data (e.g., each is half used, important data and half obsolete, overwritten data), you need to relocate some of that data elsewhere.

What the TRIM command does is tell the drive that it need not preserve the data in a given sector - otherwise, if you were to delete a file, the drive would still have to preserve its data each time one of these relocation operations occurs, since it doesn't know anything about the filesystem's allocation maps. By using TRIM, the drive is aware of which data is deleted and can discard it when it's time to erase blocks. It also increases the percentage of truly unused flash sectors, increasing the probability that a write can go through without having to wait for a relocation.

Note that this is completely independent from filesystem fragmentation - indeed, a defrag can even make things worse, by making the flash drive think both old and new locations for some data need preserving.
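
Rough arithmetic for that relocation cost (illustrative numbers, not from any particular drive): reclaiming a block that is still half full of live data means copying that live half before you gain any writable space.

    erase_block_kib = 128          # size of one erase block
    live_fraction = 0.5            # share of the reclaimed block that still holds good data

    copied_kib = erase_block_kib * live_fraction     # live data that must be rewritten first
    freed_kib = erase_block_kib - copied_kib         # space actually gained for new data
    write_amplification = (freed_kib + copied_kib) / freed_kib
    print(write_amplification)     # 2.0 -> each KiB of new data costs ~2 KiB of flash writes
    # TRIM raises the share of blocks the drive knows are dead, pushing live_fraction
    # (and therefore the amplification) down.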

Re:fragmentation? (2, Insightful)

Bigjeff5 (1143585) | more than 5 years ago | (#28368211)

In very simple terms (because I'm no expert), it's because of the way SSDs deal with wear leveling and the fact that a single write is non-sequential. When it writes data, it is writing to multiple segments across multiple chips. It is very fast to do it this way; in fact, the linear alternative creates heavy wear and is significantly slower (think single-chip USB flash drives) than even spinning disk tech, so this non-sequential write is essential.

Now, to achieve this, each chip is broken down into segments, and those segments are broken down into smaller segments, which are broken down into bytes, which are then broken down into bits. When the SSD writes, it writes to the next available bit in the next available segment on each of the chips in the drive. Because it keeps track of exactly where it left off, this process is extremely fast, as all new data goes to the next place in line.

The problem comes when you fill up the hard drive and then delete data. When you delete data, you are deleting little bits spread all over the physical drive. Unless it is a tiny file, every chip will have a little bit of the file. What's worse, unless it was a massive file, you probably won't be clearing whole sequential segments on the drive. To add to that even further, the OS doesn't actually delete anything; it just flags it! So what this means is that after you've cleared a bunch of room on your hard drive, when writing new data your SSD is still massively fragmented, and to write new data the drive has to find free bits and clear them first. Think worst-case scenario for spinning disk fragmentation and that's what you have - and you will get it every single time you fill up an SSD. You can actually re-format the drive and it won't necessarily fix the fragmentation problem, because formatting won't reset the segments on the chip to factory state and update the internal drive index in such a way that it maximizes speed again.

Now, because the SSD is sort of like a very large RAID array with very tiny disks, even in this state is still faster than a conventional spinning-disk hard drive. But it is nowhere near as fast it was when it was clean and new.

Thus, the TRIM functions that have been mentioned. Basically these go through and do a de-frag of the data, which requires maximising the free space at the "back" of each chip, then re-setting those free segments to the factory state. Depending on how much needs to be moved, this can have wear concerns, so you don't really want to do this all the time. The idea with SSDs is to fill them all the way up, then clear out as much room as you possibly can before trimming the drive. Once trimmed, the drive should be back to pre-fragmentation speeds, but you have also just written many more times to some bits on the drive than others, which raises wear concerns if the process has to be repeated too many times.

Re:fragmentation? (1)

complete loony (663508) | more than 5 years ago | (#28368343)

When you delete data, you are deleting little bits spread all over the physical drive.

The biggest problem is that a delete in most filesystems simply marks the space as free in the index on the device. However, most filesystems leave the deleted data in place without writing anything over the top until that space is re-allocated. Hard disks don't typically need to know which sectors of the physical storage are actually in use. If you tell an SSD that a block is no longer required, it can start erasing the physical chips and add them to the internal free list, ready for the next data to be written.

Ideally, filesystems will need to be modified so they are aware of the different characteristics of SSDs.

Re:fragmentation? (3, Interesting)

ls671 (1122017) | more than 5 years ago | (#28368551)

Very interesting, I assumed the problem was similar to fragmentation and wondered why nobody compared it as such.

Now, your explanation makes things much clearer; the global problem is amplified by the additional problem you described.

Now, would implementing the logic to control the SSD entirely at the OS/FS level be much slower than implementing it in silicon in the SSD itself?

As you said, I now understand that the OS/FS would now have to be aware of the underlying media ;-)

Re:fragmentation? (5, Informative)

7 digits (986730) | more than 5 years ago | (#28369397)

Once upon a time, a technical subject on /. gave insightful and informative responses that were modded up. Times change, I guess.

The "fragmentation" that SSD drives have doesn't really come from wear leveling, or from having to find some place to write things, but from the following properties:

* Filesystems read and write 4KiB pages.
* SSDs can read 4KiB pages many times FAST, can write a 4KiB page once FAST, but can only erase whole 512KiB blocks SLOWLY.

When the drive is mostly empty, the SSD has no trouble finding blank areas to store the 4KiB writes from the OS (it can even cheat with wear leveling to relocate 4KiB pages to blank space when the OS rewrites the same block). After some usage, ALL OF THE DRIVE HAS BEEN WRITTEN TO ONCE. From the point of view of the SSD, all of the disk is full. From the point of view of the filesystem, there is unallocated space (for instance, space occupied by files that have been deleted).

At this point, when the OS sends a write command to a specific page, the SSD is forced to do the following:

* read the 512KiB block that contains the page
* erase the block (SLOW)
* modify the page
* write back the 512KiB block

Of course, various kludges/caches are used to limit the issue, but the end result is here: writes are getting slow, and small writes are getting very slow.

The TRIM command tells the SSD drive that some 4KiB page can be safely erased (because it contains data from a deleted file, for instance), and the SSD stores a map of the TRIM status of each page.

Then the SSD can do one of the following two things:

* If all the pages of a block are TRIMed, it can asynchronously erase the block. So the next 4KiB write can be relocated to that block's free space, and so can the next 127 4KiB writes.
* If a write request comes in and there is no space to write data to, the drive can READ/ERASE/MODIFY/WRITE the block with the most TRIMed space, which will speed up the next few writes.
(of course, you can have more complex algorithms to pre-erase at the cost of additional wear)
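
The read/erase/modify/write sequence above is also where the big write-amplification numbers come from. The page and block sizes below are the ones from this post; the timings are just ballpark guesses to show the proportions:

    page_kib, block_kib = 4, 512
    pages_per_block = block_kib // page_kib        # 128 filesystem pages share one erase block
    print(pages_per_block)                         # worst-case write amplification for one 4KiB update

    # Ballpark timings (assumptions, not measurements) just to show the proportions:
    read_ms, erase_ms, write_ms = 0.5, 2.0, 1.0    # per 512KiB block
    print(read_ms + erase_ms + write_ms, 'ms')     # ~3.5 ms for one small write, versus well under
                                                   # a millisecond when an erased page is available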

Re:fragmentation? (0)

Anonymous Coward | more than 5 years ago | (#28368661)

The problem is not the indexing, or the tables keeping track of where everything is, no matter how massive they may become. It's that simply deleting data in FLASH memory does not make that area available immediately.

SSD's use FLASH memory. "Empty" FLASH memory contains all 1's. When you write data to "empty" FLASH memory, you write 0's where they are needed, and leave the 1's alone.

As somebody else mentioned, when the system needs to modify data, it simply marks the old data as deleted and writes the new data to a fresh location. Very fast.

The problems start when you are out of fresh locations. You cannot write 1's to FLASH memory, you can only write 0's. So if you want to reuse a deleted location, you must first "erase" it and set it back to all 1's, so you can then write 0's where they need to be.

But FLASH memory cannot be erased one byte at a time, or even a few bytes at a time. When you erase FLASH memory, you must erase a minimum of an entire page. Pages are typically some multiple of 256 bytes.

After you have used your FLASH memory for a while, there will be no pages that are completely unused. So if you want to re-use some deleted bytes in a page, you need to read the page into RAM, erase the page, incorporate the new data into the RAM copy, then write the page back out to FLASH.

Biggest problem of all: Erasing FLASH is S-L-O-W. Doing these read-erase-write gymnastics in an on-demand fashion is horribly inefficient.

New algorithms, either in the FLASH firmware, or in the OS file system, can continuously scan the pages in a FLASH memory and systematically defrag the system, putting the live data together into common pages, leaving other pages completely deleted. This is a bit of a performance hit if you happen to need live data at the moment it is being moved (but the hit is orders of magnitude better than defragging a rotating media, which by definition means that at any given time the R/W head is anywhere but where you need it to be).

The pages that are entirely deleted can be erased in the background with zero performance impact. And ideally there are then always free pages available and nobody has to wait for something to open up.
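
The "you can only write 0s" rule is easy to demonstrate at the bit level (a toy model of a single byte of NAND, nothing vendor-specific):

    ERASED = 0xFF                       # an erased flash byte reads back as all 1s

    def program(current, wanted):
        # Programming can only turn 1s into 0s; it can never put a 1 back.
        return current & wanted

    cell = ERASED
    cell = program(cell, 0x5A)          # fine: writing into an erased location
    assert cell == 0x5A

    cell = program(cell, 0xA5)          # "overwrite" without erasing first
    assert cell == 0x00                 # garbage: 0x5A & 0xA5, not the 0xA5 we wanted

    cell = ERASED                       # only a (slow, whole-page) erase restores the 1s
    cell = program(cell, 0xA5)
    assert cell == 0xA5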

A tip for people reading "fragmentation (1)

Ilgaz (86384) | more than 5 years ago | (#28369549)

Coriolis Systems (who produces iDefrag) jokingly referred to that issue on their blog.

" Ironically even SSDs, where you would expect the uniform access time to render fragmentation a problem of the past, still have various problems caused by exactly the same issue(1)'

of course, they add:

1 For avoidance of doubt, we strongly recommend that you don't try to defragment your SSD-based volumes. The fragmentation issue on SSDs is internal to their implementation, and defragmenting the filesystem would only make matters worse.

In case you spot a good friend whom Microsoft has suggested should defrag their drive (Win7 does it even without asking), you had better tell them this is not the "magnetic disk fragmentation" issue. It is really different, and I've heard some real bad stories from people who defragmented (!) their SSD drives.

High failure rate (0, Troll)

JO_DIE_THE_STAR_F*** (1163877) | more than 5 years ago | (#28367951)

I've heard that the failure rate on SSDs can be as high as 20%. As I am too lazy to google this or even RTFA, I am wondering if this is true. If it is true, then adoption rates are going to be very low and this technology may never take off before something new and better comes around. Of course, even if it isn't true, there is still the perception by a lot of (ignorant) people (like me) that there is a high failure rate, so adoption will still be very slow.

[perceived] Bottom line: SSDs don't work well, so let's just wait until something better comes along.

Also, doesn't one of the hardware manufacturers (Samsung, I think) have a patent on SSDs so no one else can make the drives anyway? Proprietary == Dead

Re:High failure rate (3, Informative)

Darkness404 (1287218) | more than 5 years ago | (#28368017)

What in the world are you talking about? The nice thing about SSDs is that yes, they do fail, but they fail (or are supposed to) in a predictable, non-catastrophic way that leaves the data readable, just not writable. I have had two SSDs and haven't had either fail despite heavy usage. And I don't think you could patent SSDs, because the technology is everywhere (it's just flash memory), and even if it is patented, more companies than just one make them.

Re:High failure rate (0)

Anonymous Coward | more than 5 years ago | (#28368079)

Proprietary == Dead

Yes, because NOBODY is going to buy a hard drive that runs at a speed equivalent to 35,000 RPM just because it's proprietary.

Re:High failure rate (4, Insightful)

vadim_t (324782) | more than 5 years ago | (#28368215)

That's a statistic that doesn't make any sense.

20% under what conditions, and in what timeframe? Over a long enough time period everything has a 100% failure rate.

Normal hard disks also will eventually fail, due to physical wear.

Also if it lasts long enough, at some point, reliability will stop being important. Even if it still works, very few people will want to use a 100MB hard disk from 15 years ago.

Re:High failure rate (4, Insightful)

macraig (621737) | more than 5 years ago | (#28368449)

Just a small tangential nitpick: we were already more than a factor of ten past that HDD capacity fifteen years ago. The 1GB barrier was broken very early in the Nineties. I still have an HP 1GB SCSI drive from about '91 or '92, IIRC.

As far as failure rates go, I still have ALL of my disk drives (one or two outright failed) from the past 15-20 years, and every single one of them still functions at least nominally. I'm still more trusting of magnetic media than I am of either rewritable optical or flash-based media.

Re:High failure rate (2, Insightful)

Bigjeff5 (1143585) | more than 5 years ago | (#28368337)

I've never heard of a 20% fail rate for SSDs. I've heard of wear concerns, as each little bit on the drive can only be written a set number of times (it's 10,000 or so, if I remember correctly). However, thanks to the magic of wear leveling and the large number of separate chips in an SSD drive, you can fill up your drive completely and you will have only written to each bit exactly once. That means you could theoretically fill your SSD up 10,000 times before you would expect failure. Reality is a bit lower than that, maybe 3,000-5,000 times, due to having to TRIM to re-arrange the bits, but it's still significant.
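
Back-of-the-envelope math on that (the 10,000-cycle figure comes from the paragraph above; the drive size and workload below are made-up examples):

    drive_gb = 80                     # hypothetical drive size
    pe_cycles = 10000                 # claimed program/erase cycles per cell
    writes_per_day_gb = 20            # hypothetical daily write volume

    total_write_budget_gb = drive_gb * pe_cycles          # with perfect wear leveling
    lifetime_years = total_write_budget_gb / writes_per_day_gb / 365
    print(total_write_budget_gb, 'GB of writes, ~%.0f years' % lifetime_years)
    # -> 800000 GB, ~110 years; real life is worse because of write amplification
    #    (the TRIM/garbage-collection overhead discussed elsewhere), but the budget is huge.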

Of course, even with the performance hit TFA talks about after filling your SSD (which is fixed with the TRIM function TFA also talks about) the fastest spinning disks are still much much slower than all but the very worst SSDs out there.

Anyway, the 20% fail rate may have been a specific manufacturer of SSDs, there are already some really shitty ones out there.

Lastly,

Also doesn't one of the hardware manufactures (Samsung I think) have a patent on SSD so no one else can make the drives any way. Proprietary == Dead

You may need to get some more education about how patents work, because if that were true IBM would not have the fastest SSD on the market. See, they do this thing called licensing, which basically means company Y purchases an agreement from company X to use their technology to manufacture a product. It creates an incentive for company X to allow other manufacturers to use their technology, flooding the market with both quality and crap, but ultimately lowering prices and speeding innovation at the high-quality end (and improving the quality of the cheap stuff; it works both ways, usually).

It's actually the reason patents exist. We only get in a fuss when people patent stuff that either a.) should never need a patent (which means the patentor can sue for damages for infringement) or b.) some company goes around buying patents from legitimate inventors for the sole purpose of hoping said patents become infringed upon by an unwitting third party. The former is a failure in the patent system, and the latter is patent trolling, which is an unethical and disgusting abuse of the process.

Re:High failure rate (1)

fractoid (1076465) | more than 5 years ago | (#28368711)

I've heard that the failure rate on SSD's can be as high as 20%.

As Heinlein put it wonderfully in 'Tunnel in the Sky':

The death rate is the same for us as for anybody ... one person, one death, sooner or later. - Cpt. Helen Walker

Re:High failure rate (1)

MichaelSmith (789609) | more than 5 years ago | (#28369049)

I've heard that the failure rate on SSD's can be as high as 20%.

As Heinlein put it wonderfully in 'Tunnel in the Sky':

The death rate is the same for us as for anybody ... one person, one death, sooner or later. - Cpt. Helen Walker

Except Lazarus Long of course.

Why Windows 7 in the summary? (0)

loufoque (1400831) | more than 5 years ago | (#28368029)

Why is Windows 7 even in the summary?
People who buy high-end disk drives and care about Windows must be quite a minority. The point of hard disk drives with fast writing performance is for servers.

Re:Why Windows 7 in the summary? (2, Insightful)

Darkness404 (1287218) | more than 5 years ago | (#28368053)

Gamers, gamers, gamers and gamers. Seriously, the early adopters of any technology that is supposed to be faster on the consumer level will be gamers. Considering that most games are Windows-only it makes sense.

Re:Why Windows 7 in the summary? (1)

zippthorne (748122) | more than 5 years ago | (#28368189)

Meh. Just stick 50GB worth of RAM in there. No one's filling a Blu-ray disc with 3D environment data yet, are they?

Why should a game even hit the disk except when saving, these days?

Re:Why Windows 7 in the summary? (1)

Darkness404 (1287218) | more than 5 years ago | (#28368219)

...Because either the game has to do a lot of initial loading or use the disk. Even copying from the HD to RAM takes time. Sure, today you can pre-load a bunch of stuff, but things still need to be written to and read from the disk every now and then.

Re:Why Windows 7 in the summary? (1)

hairyfeet (841228) | more than 5 years ago | (#28368703)

Which is why I don't get why the game designers aren't preloading like hell. Even the cheapo cards are starting at 512MB-1GB, and many machines are 4GB+. Hell, the box I am typing this on I built for $500 with 1GB on the GPU and 4GB on the CPU, yet games insist on constantly loading from disc while a good chunk of the memory just sits there doing nothing. Surely they can scan the machine on first install and, if GPU RAM equals X and system RAM equals Y, prefetch like crazy. Or am I missing something?

But the other poster is right, the hardcore gamers will snatch these up first. I have dealt with customers who think nothing of shelling out $1000 just on the GPUs, and several times that on the CPU+RAM. The hardcore gamers are always the early adopters for anything that will give them a couple more FPS to brag about.

Re:Why Windows 7 in the summary? (1)

Mitchell314 (1576581) | more than 5 years ago | (#28368761)

Doesn't RAM data become undatified when you turn the machine off?

Re:Why Windows 7 in the summary? (1)

walshy007 (906710) | more than 5 years ago | (#28369225)

Most RAM does, yes, but there are always exceptions [wikipedia.org] to the rule.

Re:Why Windows 7 in the summary? (1)

eyepeepackets (33477) | more than 5 years ago | (#28369251)

Compress the data into a tarball (or .zip file, whatever) and write it to the hard drive when you are finished working/playing, reverse the process when setting up.

Real speed junkies use RAMdisks!

Re:Why Windows 7 in the summary? (1)

ls671 (1122017) | more than 5 years ago | (#28368573)

> Gamers, gamers, gamers and gamers.

Steve, is that you ?

Re:Why Windows 7 in the summary? (2, Insightful)

mrmeval (662166) | more than 5 years ago | (#28368075)

Because someone got paid to do it. You don't think /. editors work for free do you?

Re:Why Windows 7 in the summary? (0)

Anonymous Coward | more than 5 years ago | (#28368109)

*Must* be? You selfish, self-serving jerk. Start paying attention to things outside the idealistic server room setting. There are a lot of home users that want that speed for their own reasons.

Re:Why Windows 7 in the summary? (3, Interesting)

Robotbeat (461248) | more than 5 years ago | (#28368147)

Even the best consumer-level SSDs like the Intel x-25m/e use a volatile RAM cache to speed up the writes. In fact, with the cache disabled, random write IOPS drops to about 1200, which is only about three or four times as good as a 15k 2.5" drive. The more expensive truly-enterprise SSDs which don't need a volatile write cache cost at LEAST $20/GB, so the $/(safe random write IOP) ratio is actually still pretty close, and cheap SATA drives may actually be even on that metric with the fast enterprise SSDs. Granted, this shouldn't be the case in a year, but that's where it is right now. (Also, the performance-per-slot is a lot higher for SSDs, which can translate into different $ and power and space savings.)

Re:Why Windows 7 in the summary? (1)

42forty-two42 (532340) | more than 5 years ago | (#28369217)

And what's the problem with that? Any server worth its salt will have a small battery backup for its drive array to keep it running until array/disk write caches can be flushed anyway.

Re:Why Windows 7 in the summary? (0)

Anonymous Coward | more than 5 years ago | (#28369301)

The Intel X25 has a tiny buffer in the controller chip like every other SSD out there. The buffer is needed for basic operation. The DRAM inside the X25 isn't used as a cache for user data. http://www.anandtech.com/cpuchipsets/intel/showdoc.aspx?i=3403&p=10

trim (0)

Anonymous Coward | more than 5 years ago | (#28368061)

Well, to be honest, what kind of nerd doesn't like a little trim?

Re:trim (-1, Troll)

Anonymous Coward | more than 5 years ago | (#28368193)

linux nerds. they want another man's balls slapping off their chins. they're faggot shitballs who have aids.

linux = aids. god damn homos. burn in hell. suck your faggot cocks in hell.

if you mod this down it means you're a faggot or a ffaggot lover. either way you're a wworthless bitch.

Re:trim (1)

eyepeepackets (33477) | more than 5 years ago | (#28369187)

Yet another happy Microsoft customer vents his wrath on those wise enough to use something else.

Potential data recovery problems (2, Interesting)

quazee (816569) | more than 5 years ago | (#28368319)

Something as simple as deleting the wrong partition becomes an irreversible operation if you do it using a tool that supports TRIM on TRIM-enabled hardware.
Even if you restore the partition table from a backup, you will likely suffer silent file system corruption, which may not even be apparent until it's too late.
If TRIM support is actually implemented on the device, the device is free to 'lose' data on TRIMmed blocks until they are written at least once.

Re:Potential data recovery problems (0)

Anonymous Coward | more than 5 years ago | (#28368415)

Unless you mount the device as read only and copy to a different device, so the data doesn't have a chance to be overwritten.

Re:Potential data recovery problems (1)

quazee (816569) | more than 5 years ago | (#28368491)

This will only work if the drive doesn't do background 'scrubbing' to improve future write performance.
Or, even if the drive hasn't erased the physical flash cells yet, it could already have mangled the mapping between the logical and physical blocks.
In fact, I have a cheap CompactFlash card that does exactly that when you yank power from it while writing - the card appears completely scrambled (with blocks reordered) when you restore power to it.

Re:Potential data recovery problems (1)

Spit (23158) | more than 5 years ago | (#28368913)

I would never, ever trust a filesystem after an event like this. Ever. Do your backups.

Re:Potential data recovery problems (3, Insightful)

steveha (103154) | more than 5 years ago | (#28369095)

Something as simple as deleting the wrong partition becomes an irreversible operation if you do it using a tool that supports TRIM on TRIM-enabled hardware.

This seems needlessly verbose. Let me shorten it for you:

Deleting a partition should always be considered an irreversible operation.

Hmmm, even shorter:

Don't delete a partition unless you want it to go away forever.

Even if you restore the partition table from a backup, you will likely suffer silent file system corruption, which may even not be apparent until it's too late.
If TRIM support is actually implemented on the device, the device is free to 'lose' data on TRIMmed blocks until they are written at least once.

If I understand you correctly, you are suggesting that a disk partitioning tool will use TRIM to not only wipe the partition table itself, but also nuke the partition data from orbit. And you then point out that it would not be adequate to rewrite just the sectors of the partition table.

If so, then the answer is: you don't just restore the partition table, you restore the whole partition (including data) from backup.

I for one consider much-faster write speeds to be a bigger advantage than possibly being able to reverse a partition deletion.

steveha

TRIM support (1)

profaneone (316036) | more than 5 years ago | (#28368555)

So does this mean that the girlfriend of a geek can save her files on it too??

SSDs?! (1)

MMInterface (1039102) | more than 5 years ago | (#28368685)

Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues.

Who cares how they perform. All they have to do is sit there and scare away enemy fleets.

Re:SSDs?! (0)

Anonymous Coward | more than 5 years ago | (#28369879)

Despite the rising excitement over SSDs, some of it has been tempered by performance degradation issues.

Who cares how they perform. All they have to do is sit there and scare away enemy fleets.

Um...You're thinking of SSNs.

Ooh yeah! (1)

sootman (158191) | more than 5 years ago | (#28368939)

I'd love to get me some trim! [urbandictionary.com]

Re:Ooh yeah! (1)

Korbeau (913903) | more than 5 years ago | (#28369129)

But are you in a solid state?

TRIM needs a driver, a windows driver? (1)

Ilgaz (86384) | more than 5 years ago | (#28369425)

The most important property of hard disks is that they are amazingly multi-platform. I don't like the sound of needing a "Windows driver" or "OS support" to perform nicely.

SSD makers really had better stick to the standards and never, ever do anything requiring a "driver" on the host OS. For example, there are G4 Mac owners who happily upgrade their "old tech" magnetic drives to 500 GB or even 1 TB. Who will write a driver for them? Apple? The SSD vendor? I don't think so.

In fact, HD vendors really should stay away from writing anything except the "smart control" stuff... or better, donate to the smartctl project and stay away from that too...

TRIM support (1)

Dexter Herbivore (1322345) | more than 5 years ago | (#28369867)

Am I the only one that read the words "TRIM support" and immediately thought of tight fitting panties?