Data Storage Hardware

Is the Time Finally Right For Hybrid Hard Drives?

a_hanso writes "Hard drives that combine a traditional spinning platter for mass storage and solid state flash memory for frequently accessed data have always been an interesting concept. They may be slower than SSDs, but not by much, and they are a lot cheaper gigabyte-for-gigabyte. CNET's Harry McCracken speculates on how soon such drives may become mainstream: 'So why would the new Momentus be more of a mainstream hit than its predecessor? Seagate says that it's 70 percent faster than its earlier hybrid drive and three times quicker than a garden-variety, non-hybrid disk. Its benchmarks for cold boots and application launches show the new drive to be just a few seconds slower than a SSD. Or, in some cases, a few seconds faster. In the end, hybrid drives are compromises, neither as cheap as ordinary drives — you can get a conventional 750GB Momentus for about $150 — nor as fast and energy-efficient as SSDs.'"
  • by Sycraft-fu ( 314770 ) on Wednesday November 30, 2011 @03:26AM (#38211572)

    If there is to be a time for hybrid drives, the window on it is fast closing. As SSDs get cheaper and cheaper, more and more people will opt to just go that route. Most people don't really need massive HDDs, so if smaller SSDs get cheap enough, that'll be the way they'll go. They don't have to be as cheap as HDDs, just cheap enough that the size people need (probably 200-300GB for most people) is affordable.

    For me personally, the time already came and went. I was very enthusiastic about the concept of hybrid drives, particularly since I have vast storage needs (I do audio production). However, no hybrid drive for desktops was forthcoming. Then there was a sale on SSDs, 256GB drives for $200, and I picked up two of them. $1/GB was the magic price at which I was willing to buy. Now I have 512GB of SSD storage for OS, apps, and primary data. That is then backed by 3TB of HDD storage for media, samples, and so on.

    A hybrid drive has no place here. I'd certainly not replace my SSDs; they are far faster than any hybrid drive (even being fairly slow on the SSD scale). Likewise, I have no real reason to upgrade my HDDs; they serve the non-speed-intensive stuff.

    While I'm willing to spend more than most, it is still a sign of things to come. As those prices drop more and more people will say "screw it" and go all SSD.

    • by thsths ( 31372 ) on Wednesday November 30, 2011 @03:40AM (#38211618)

      Right, but it didn't happen quickly. There is only one model of hybrid hard disk available, which makes it unsuitable for any serious use in mass production. Also, Seagate now tells us that their previous version was actually crap, and the new one is much, much better. The price is lower but still high - about 100 dollars for 8 GB of flash. For that money you could get a 48 GB SSD and put all your system data on it.

      This is a niche product, designed for laptops with only one disk slot that require both fast access and high storage. It is heavily compromised in both aspects, and the price is outrageous.

      • by AmiMoJo ( 196126 ) on Wednesday November 30, 2011 @04:30AM (#38211844) Homepage Journal

        SSDs typically have large memory caches, whereas HDDs are still stuck around the 32MB mark. With RAM so cheap these days, even the lowest-end graphics cards come with 1GB, but HDDs don't, for some reason.

        • by jimicus ( 737525 ) on Wednesday November 30, 2011 @05:20AM (#38212026)

          The cache on a hard disk is often used as a write cache: store incoming data in cache, and leave actually committing it to disk until a convenient opportunity arises.

          32MB of cache doesn't take that long to flush. 1GB, OTOH...
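
          A rough sanity check, assuming ~100MB/s of sustained sequential throughput: 32MB flushes in about a third of a second, while 1GB takes on the order of ten seconds of continuous writing; that's ten seconds during which a power cut can lose everything still sitting in the cache.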

        • by olau ( 314197 ) on Wednesday November 30, 2011 @07:56AM (#38212634) Homepage

          That's because it doesn't do anything good for hard drives. There was a paper about it some years ago (I'm too lazy to google it up), but even 32 MB is too much (I think the sweet spot was around 2 MB).

          If you think about it, it's not surprising: what good would it do that the disk cache in main memory, managed by the OS, doesn't already do?

          A large on-disk cache would only make sense if it was combined with a battery or something, so you don't lose data on crashes.

          • by drinkypoo ( 153816 ) <drink@hyperlogos.org> on Wednesday November 30, 2011 @08:13AM (#38212708) Homepage Journal

            That's because it doesn't do anything good for hard drives. There was a paper about it some years ago (I'm too lazy to google it up), but even 32 MB is too much (I think the sweet spot was around 2 MB).

            Having had the 2MB and 8MB versions of the same Seagate disk, using the same mechanism, and having seen the 8MB disk be substantially faster, I'm pretty sure the sweet spot is not 2MB.

            • by swalve ( 1980968 ) on Wednesday November 30, 2011 @09:21AM (#38213060)
              I agree. But the difference between 32 and 64 might not be so great. There is a limit to how much an HDD can predictively read, and I have to think that the real-world difference from caching writes isn't going to be all that much on a single-user machine. What we will see, I believe, is drives that become smarter and have their own filesystem layer that decouples the LBA from the physical location on the disk. The machine says "write this data to block 43533224" and the HDD just starts writing to whatever free blocks are nearest to its r/w head, using the flash to store the map. It will then defrag itself during downtime to optimize the locations. (Dear Seagate: if I really just invented this, please pay me.)
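
              A minimal sketch of that remapping idea (hypothetical Python, not any vendor's actual firmware): the host's LBA becomes a key into a map kept in flash, and the drive writes wherever is currently cheapest.

              class RemappingDrive:
                  def __init__(self, nblocks):
                      self.lba_map = {}                # logical -> physical; kept in flash
                      self.free = set(range(nblocks))  # free physical blocks
                      self.head = 0                    # current r/w head position

                  def write(self, lba, data, media):
                      # Write to the free block nearest the head instead of seeking to lba.
                      phys = min(self.free, key=lambda b: abs(b - self.head))
                      self.free.remove(phys)
                      if lba in self.lba_map:
                          self.free.add(self.lba_map[lba])  # old copy becomes reusable
                      self.lba_map[lba] = phys
                      media[phys] = data
                      self.head = phys

                  def read(self, lba, media):
                      return media[self.lba_map[lba]]

              media = {}
              d = RemappingDrive(nblocks=100)
              d.head = 50
              d.write(43533224, b"data", media)   # the block number from the comment
              print(d.lba_map)                    # {43533224: 50} - written near the head

              The idle-time defrag pass would then migrate physical blocks so logically sequential data ends up physically sequential, updating the map as it goes.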
          • by AmiMoJo ( 196126 )

            You don't seem to understand how the cache memory is used. It holds things like read-ahead data that the drive basically gets for free as it waits for the disc to rotate to the correct place, or metadata like bad-block and reallocation maps. With a larger cache it would be easy for the drive to do background reads while the computer is idling, increasing the chance that the next read will already be in the cache - a kind of super read-ahead.

            The drive can make smart decisions that the PC can't because it know

          • That's because it doesn't do anything good for hard drives. There was a paper about it some years ago (I'm too lazy to google it up), but even 32 MB is too much (I think the sweet spot was around 2 MB).

            The sweet spot will be very application and OS dependent. In the old days, the drive didn't have any cache, and the controller couldn't hold much more than 1 sector. So, when the head dropped, you had to wait for your sector to spin around before you could read. If you then needed the adjacent sector, you might have to wait for an entire revolution before you could read it. Schemes like interleaving were devised to get around this. (Logical sectors N and N+1 were physically 2 or 3 sectors apart)

            Wi

    • by Anonymous Coward on Wednesday November 30, 2011 @03:45AM (#38211644)

      > Most people don't really need massive HDDs

      Are you kidding me?

      Recording FRAPS footage of your gaming sessions, photography (or RAW), recording and editing anything with any modicum of quality? Saving said media and the final encodings?

      Age of Conan: 33 GB. L.A. Noire: 13 GB. Mortal Online: 30 GB.

      That is stuff ordinary people do, not audio producers.

      • by migla ( 1099771 )

        One word: eSATA

    • by Kjella ( 173770 ) on Wednesday November 30, 2011 @03:59AM (#38211704) Homepage

      The rewrite figures are going to shit as they move to smaller process tech: 25nm eMLC is already down to 3000 writes/cell, and they say you won't get $1/GB at normal prices until we get 19nm, which at least some say will be down to 1000 writes. That you're getting 500MB/s write speed is nice, but if you actually start using that regularly you'll burn through the disk in a matter of months. My first SSD - which I admit I abused thoroughly - died after 8-9000 writes on average (it was rated for 10k) after 1.5 years. My current setup tries to minimize writes to C:, but I still don't expect it to last nearly as long as an HDD. Using it as a read-heavy cache of static files may be a better way to stretch it for those that haven't got hundreds of dollars to spend every time it wears out.

      • by Rockoon ( 1252108 ) on Wednesday November 30, 2011 @07:32AM (#38212514)

        The rewrite figures are going to shit as they move to smaller process tech: 25nm eMLC is already down to 3000 writes/cell, and they say you won't get $1/GB at normal prices until we get 19nm, which at least some say will be down to 1000 writes.

        Based on the 3000-cycle 25nm tech, the new erase-cycle limit will be ~58% of that (1700 cycles at 19nm), but the storage capacity per area will increase by ~70%.

        That you're getting 500MB/s write speed is nice, but if you actually start using that regularly you'll burn through the disk in a matter of months.

        The smaller tech can take just as much "heavy use" as the larger tech when equal amounts of board area are dedicated to flash chips. A board with 1 TB of 1700-cycle flash can take a serious write pounding even with considerable write amplification. The same board area on the 25nm tech would only hold 588 GB of 3000-cycle flash.

        "Heavy use" doesnt mean "fastest possible erases." I don't know what you think heavy use means, but even extreme pounding scenarios (such as cycling the entire 1 TB once per day, something you might see in a non-incremental backup server) still gives these drives years of cycles to "blow" through. You could technically kill this theoretical drive in a little over a month but that says nothing about what a "heavy user" will actually witness.

        The people with write needs extreme enough that they would burn through the cycles of this theoretical 1 TB drive in less than a year are dedicating a lot more than a single 1 TB drive to their data-volume problem.
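
        To put numbers on that - a back-of-the-envelope sketch in Python, using the 1 TB, 1700-cycle and 500MB/s figures from the posts above and ignoring write amplification:

        capacity_gb = 1000        # hypothetical 19nm drive from the post above
        cycles = 1700             # rated erase cycles per cell
        write_mb_s = 500          # sustained sequential write speed

        # One full overwrite per day (aggressive backup-server usage):
        print(cycles, "days, or about", round(cycles / 365, 1), "years")

        # Writing flat out at full speed, 24/7:
        secs_per_pass = capacity_gb * 1024 / write_mb_s         # ~2048 s per overwrite
        days_flat_out = cycles * secs_per_pass / 86400
        print("killed in about", round(days_flat_out), "days")  # "a little over a month"

        That works out to roughly 4.7 years at one full cycle per day, and about 40 days of non-stop full-speed writing - consistent with both claims above.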

    • I'd kill for a decent hybrid drive for my laptop right now. I'm currently running Samsung's 1TB 2.5" drive, and that's about halfway full... pretty much the only SSD I'd be able to use is Intel's 320 (or 310?) with 600 gigs, which costs about as much as I paid for my Thinkpad. And even with that, I'd be uncomfortably limited due to the lack of room for expansion... not to mention leaving room for wear leveling and such.

      Looks like I'll be upgrading to a Thinkpad with two hard drive bays, or one with an mSATA

    • by UnknownSoldier ( 67820 ) on Wednesday November 30, 2011 @04:33AM (#38211850)

      While I love the speed of the SSD (and prices are hitting the "magic" $1/GB), you're forgetting the HUGE elephant in the room with SSDs that almost no one seems to notice ...

      SSDs have a TERRIBLE failure rate.

      http://www.codinghorror.com/blog/2011/05/the-hot-crazy-solid-state-drive-scale.html [codinghorror.com]

      He purchased eight SSDs over the last two years … and all of them failed. The tale of the tape is frankly a little terrifying:

              Super Talent 32 GB SSD, failed after 137 days
              OCZ Vertex 1 250 GB SSD, failed after 512 days
              G.Skill 64 GB SSD, failed after 251 days
              G.Skill 64 GB SSD, failed after 276 days
              Crucial 64 GB SSD, failed after 350 days
              OCZ Agility 60 GB SSD, failed after 72 days
              Intel X25-M 80 GB SSD, failed after 15 days
              Intel X25-M 80 GB SSD, failed after 206 days

      and ...

      http://translate.googleusercontent.com/translate_c?hl=en&ie=UTF8&prev=_t&rurl=translate.google.com&sl=fr&tl=en&twu=1&u=http://www.hardware.fr/articles/843-7/ssd.html&usg=ALkJrhjecZZv1F6d_oT-dr41FPFYOIkVCw [googleusercontent.com]

      - Intel 0.1% (previously 0.3%)
      - Crucial 0.8% (previously 1.9%)
      - Corsair 2.9% (previously 2.7%)
      - OCZ 4.2% (previously 3.5%)

      Intel confirms its first place with the most impressive return rate. It is followed by Crucial, which improves its rate significantly, though it must be said that Crucial was heavily impacted by the M225; the C300 only reaches 1%. Return rates are up for Corsair and especially for OCZ, which confirms its last place by far. Eight SSDs are above 5%:

      - 9.14% OCZ Vertex 2 240 GB
      - 8.61% OCZ Agility 2 120 GB
      - 7.27% OCZ Agility 2 40 GB
      - 6.20% OCZ Agility 2 60 GB
      - 5.83% Corsair Force 80 GB
      - 5.31% OCZ Agility 2 90 GB
      - 5.31% OCZ Vertex 2 100 GB
      - 5.04% OCZ Agility 2 3.5" 120 GB

      At the _current_ price point & abysmal failure rate, SSDs sadly have a ways to go before they catch on with the mainstream.

      • by abigsmurf ( 919188 ) on Wednesday November 30, 2011 @05:24AM (#38212044)
        Yep, had an OCZ drive fail after 3 months. It's the first time I've had a drive that wasn't DOA fail before at least 2-3 years of usage.

        It wasn't even one of those gradual failures you tend to get with HDDs, where the drive starts showing faults for a while before dying, giving you a chance to get the data off it and order a replacement. One day it was working normally; the next day it wasn't even recognised by the BIOS.

        Just to add insult to injury, OCZ have an awful returns policy: I had to pay to get it sent by recorded delivery to the Netherlands, which cost me £20. It's going to be a few years before I take the plunge again, and I won't be buying OCZ. Paying premium prices for something so unreliable isn't on, especially given how much of an impact a sudden drive failure has on just about every type of user.
        • by Bert64 ( 520050 )

          Ignore the returns policy, send it back to the retailer... Your contract is with the retailer, not the manufacturer. Know your rights!

    • Hybrid drives are designed for laptops. Most laptops don't have space for two drives. Thus, the hybrid drive lets media-obsessed folk carry around 750 GB of stuff while giving them a speed boost when necessary.

      • Most laptops have space for two drives (by default, HDD and optical). It's just that, for some reason, few vendors offer out-of-the-box configurations that drop the internal optical drive.

    • Comment removed based on user account deletion
      • by smash ( 1351 )

        8 gigs is more than enough for the components of Windows that you actually load on a regular basis. A Windows install may be 17 gigs, but that includes all the utilities you use once in a blue moon, a heap of desktop wallpapers, drivers for all the hardware in the Windows world you DON'T own, sound themes, etc. The actual base OS that is loaded into RAM on boot is likely nearer 1/3 to 1/2 a gig.

        Ditto for the apps you install.

    • I know there's no 3TB HDD/512GB SSD hybrid on the consumer market, but you pretty much just described an inefficient hybrid (it requires manual organization and puts the drives on separate controllers).

      I'm all for getting rid of spinning disks as well, but if anything your post legitimizes hybrids.

    • by smash ( 1351 )
      Hybrid has no place? How about your average laptop with one hard drive bay? If you can get near-SSD performance and actually carry around a decent amount of data, I reckon it's a winner.
  • HDD for mass storage, small SSD for OS, installed software, and most frequently accessed files.
    • by gl4ss ( 559668 )

      so it's not the time for hybrid drives - but it's the time for hybrid setups?

      on a laptop with a single drive bay I could see use for a hybrid drive.

    • Re: (Score:3, Insightful)

      by jimicus ( 737525 )

      That's precisely what a hybrid HDD does, except it takes the decision regarding what will benefit most from going in the SSD out of your hands.

  • Favorite movies and video will keep hard drives spinning for a while.
    $50/TB (next year) implies a 4 GB movie stores for 20 cents - not quite free, but a favorite collection of 1000 movies and videos comes to $200, and backup doubles that cost for a simple mirror.
    • Re: (Score:2, Interesting)

      I admit this is slightly off-topic, but I recently saw a 32GB Class 10 SD card for under $30... and it got me remembering back to when -- not as long ago as some might think -- it took an hour to transfer the contents of one 10MB... that's MB not GB... 5.25" HDD, which cost $400, from one machine to another over the network.
  • by jones_supa ( 887896 ) on Wednesday November 30, 2011 @03:38AM (#38211604)
    I think the idea is cool. However, while you get the best of both worlds (capacity, speed), you also get two failure modes (mechanical damage, flash corruption). I also hope the firmware does not create problems. Still, it's not a completely unusable product.
  • by pathological liar ( 659969 ) on Wednesday November 30, 2011 @03:43AM (#38211632)

    I don't imagine it is. Anandtech found it wasn't that difficult to evict stuff from the cache you actually wanted [anandtech.com]. Not to mention that if you start copying anything especially large (your MP3 collection, or installing a couple games from a Steam sale, say) you nuke the cache and are back to mechanical HD performance.

    Personally, I prefer to do it manually. Stuff I want to load fast (Windows, applications, games, my profile folder) sit on an SSD. Bulk data sits on a mechanical drive.

    • by Twinbee ( 767046 )
      I thought that by 'hybrid' it actually meant two different drives (one SSD, one HDD) working in conjunction, where maybe the OS sits on the SSD and both drives simply take up half as much room. That sounds like a great idea for laptops, which only have space for one drive.

      But now I know it's all this cache crap, I'm suddenly not at all interested. If one wants the best of both worlds, simply get two drives, and install the OS on the SSD one.
  • Who is this for?

    With only 4-8GB of flash, I can't think of who this is for.

    Mid-range consumer desktops/laptops?

    Really, with so little cache you might as well just add more RAM.

    Wouldn't even dream of putting one of these in a server. It's a shame Linux doesn't have L2ARC support; it would be nice if there were a drop-in hardware equivalent.

  • The core problem with SSDs is write speed on workloads that have a large number of small updates. My testing on the older 500GB Momentus XT showed that in general it had better write speed doing, e.g., a Fedora install, than the 80GB Intel SSD that I benchmarked it against (same generation of product, about a year ago), due to the large number of small updates that the non-SSD-aware EXT3/4 filesystems do during the course of installing oodles of RPMs. Because the Momentus only caches *read* requests

  • Prices! (Score:3, Interesting)

    by Shifty0x88 ( 1732980 ) on Wednesday November 30, 2011 @04:05AM (#38211736)

    Not only are SSD prices going down, but traditional hard drives are going UP! (At least for the short term)

    Prices taken from Newegg.com:

    Seagate Barracuda XT 3TB is $399.99 (used to be a lot cheaper)

    Seagate Barracuda 1TB SATA III:

    About a year ago: On sale for $60, regular $70

    Now: $149.99

    I think now is the time of the SSD; the hybrid drive is just not worth the price.

    And considering this drive retails at $239.99 while a regular mechanical 750GB drive runs between $69.99 (Hitachi Deskstar) and $179.99 (Western Digital Black), there is no reason to buy it.

    Just go buy a small SSD and a regular mechanical drive and do it manually.

  • One lesson I've learnt over the years is that hard disk cache (in this case the traditional RAM-based cache) doesn't matter all that much. Drives with 8MB of cache consistently show 99% of the performance of drives with 16MB, and so on for the 128MB vs 64MB vs 32MB varieties of hard disks.

    I do realize there's a benchmark there. But I'm still skeptical, given the history of how little on-board hard disk cache matters.

    • by greg1104 ( 461138 ) <gsmith@gregsmith.com> on Wednesday November 30, 2011 @04:32AM (#38211848) Homepage

      There are only two things a drive cache can help with significantly. The first is rebooting: memory is empty, and you can get it primed with the most common parts of the OS faster if most of that data can be read from the SSD. Optimizers that reorder the boot files will get you much of the same benefit, if they can be used.

      Disk cache used for writes is extremely helpful, because it allows write combining and elevator sorting to improve random write workloads, making them closer to sequential. However, you have to be careful, because things sitting in those caches can be lost if the power fails. That can be a corruption issue on things that expect writes to really be on disk, such as databases. Putting some flash to cache those writes, with a supercapacitor to ensure all pending writes complete on shutdown, is a reasonable replacement for the classic approach: using a larger battery-backed power source to retain the cache across power loss or similar temporary failures. The risk with the old way is that the server will be off-line long enough for the battery to discharge. Hybrid drives should be able to flush to SSD just with their capacitor buffer, so you're consistent with the filesystem state, only a moment after the server powers down.

      As for why read caching doesn't normally help: the operating system's filesystem cache is giant compared to any size the drive cache might be. When OS memory is measured in gigabytes and the drive's in megabytes, you'll almost always be in a double-buffer situation: whatever is in the drive's cache will also still be in the OS's cache, and therefore never be requested. The only way you're likely to get any real benefit from the drive cache is if the drive does read-ahead. Then it might only return the blocks requested to the OS, while caching ones it happened to pass over anyway. If you then ask for those next, you get them at cache speeds. On Linux at least, this is also a futile effort; the OS read-ahead is smarter than any of the drive logic, and it may very well ask for things in that order in the first place.

      One relevant number for improving read speeds is command queue depth. You can get better throughput by ordering reads better, so they seek around the mechanical drive less. There's a latency issue here, though: requests at the opposite edge can starve if the queue gets too big, so excessive tuning in that direction isn't useful either.
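
      For the curious, the write-combining and elevator-sorting idea above reduces to something like this (illustrative Python only; real firmware is far more involved):

      def elevator_order(pending, head_pos):
          """pending: dict of LBA -> data. Writes to the same LBA have already
          been combined (the dict keeps only the latest data per block).
          Returns LBAs in the order that minimizes back-and-forth seeking:
          one sweep upward from the head, then wrap to the low end."""
          ahead = sorted(lba for lba in pending if lba >= head_pos)
          behind = sorted(lba for lba in pending if lba < head_pos)
          return ahead + behind

      cache = {}
      for lba, data in [(900, b"a"), (20, b"b"), (450, b"c"), (20, b"d")]:
          cache[lba] = data                        # second write to 20 is combined

      print(elevator_order(cache, head_pos=400))   # [450, 900, 20]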

    • On-disk caches in the MB range have little effect because, in most cases, anything in the disk's cache will already be cached closer to where it is needed, i.e. in the OS cache using spare system memory. I suspect in that case the on-disk cache is used more as a buffer than a cache.

      Larger SSD caches bring two advantages: the cache persists across restarts, helping boot time, and it may also be larger than the amount of memory the OS allocates to its own cache.
  • by macraig ( 621737 ) <mark@a@craig.gmail@com> on Wednesday November 30, 2011 @04:12AM (#38211764)

    If you're willing to make a bit of effort, that is.

    Just yesterday I was investigating the Highpoint Rocket 1220 and 1222 HBAs, which imbue their possessor with the power of Creation... the power to create hybrid magnetic-flash storage devices. Hook up an SSD and a good old moving-platter drive, and the HBA does the heavy lifting to create a virtual hybrid drive that appears as a single device to the host system. It's similar to what some RAID enclosures have done over the last couple of years, using chipsets like the JMicron JMB393 to present RAID arrays as singular virtual drives. I have no doubt other brands of similar HBAs will join these Highpoint ones soon enough.

    With products like this Highpoint HBA, it's not necessary to be a lady-in-waiting to some royal manufacturer's whim. You can pick and choose an SSD and disk drive with the prices, capacities, and characteristics that suit your specific needs, rather than waiting breathlessly for some one-size-fits-all solution that benefits the maker more than the buyer.

  • by subreality ( 157447 ) on Wednesday November 30, 2011 @04:15AM (#38211776)

    I would buy one now if they would implement it as a write-back cache. It wouldn't be hard to do. Take a GB of flash, structure it as a ring buffer. That eliminates the "small random writes" problem - you're just writing a linear journal, and the places you're writing are pre-erased and ready to go. If the power fails the drive just plays back the cache when the power comes back on.

    That would let you have massive improvements in write performance. Metadata updates leave you seeking all over the disk. BTRFS is currently very slow to fsync because of this. But if it could just blast it to a big flash cache, and the drive could confirm that as committed to disk immediately, it'd scream.

    Unfortunately all the manufacturers seem to just want to use it as a big persistent read cache to make Windows boot faster.
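
    For what it's worth, the ring-buffer journal described above could look something like this (a simplified sketch, not any vendor's implementation; "platter" stands in for the mechanical disk):

    class RingJournal:
        def __init__(self, nslots):
            self.slots = [None] * nslots  # pre-erased flash slots
            self.head = 0                 # where the next write lands
            self.tail = 0                 # oldest entry not yet on the platter

        def write(self, lba, data, platter):
            nxt = (self.head + 1) % len(self.slots)
            if nxt == self.tail:          # journal full: drain one entry first
                self.drain_one(platter)
            self.slots[self.head] = (lba, data)
            self.head = nxt               # returning == acknowledged, power-safe

        def drain_one(self, platter):
            lba, data = self.slots[self.tail]
            platter[lba] = data           # the slow mechanical write happens here
            self.slots[self.tail] = None  # slot gets erased in the background
            self.tail = (self.tail + 1) % len(self.slots)

        def replay(self, platter):
            # After a power failure: push everything still in flash to disk.
            while self.tail != self.head:
                self.drain_one(platter)

    platter = {}
    j = RingJournal(nslots=8)
    for lba, data in [(7, b"x"), (3, b"y"), (7, b"z")]:
        j.write(lba, data, platter)       # each returns as soon as flash has it
    j.replay(platter)                     # what the drive would do at power-up
    print(platter)                        # {7: b'z', 3: b'y'}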

    • by olau ( 314197 )

      Sounds nice, but I think the truth is that most people on non-DB-server workloads don't really write a lot of random data in the first place. For them, startup speed is probably more important. I know it is to me. :)

  • by Sitnalta ( 1051230 ) on Wednesday November 30, 2011 @04:19AM (#38211790)

    A hybrid drive would be great in my laptop. It doesn't have room for "storage" drives and a 600GB SSD would be heinously expensive. You could also put one in a USB 3.0 external enclosure (I assume they can work like that.) That would give you a nice trade off between speed, capacity and, most importantly, portability.

    That seems to be what Seagate is thinking too, since the drive is in the 2.5" form factor.

  • First hand (Score:4, Informative)

    by jamesh ( 87723 ) on Wednesday November 30, 2011 @04:26AM (#38211824)

    I have one. It works great, but "chirps" occasionally, which I think is the sound of the motor spinning down. None of the firmware updates I've applied that claim to fix the chirp actually fix it.

    It runs much faster than my previous drive, but I'm also comparing a 7200RPM drive to a 5400RPM drive, so the speed increase isn't just because it's a hybrid.

    I guess the advantage of the SSD cache is that if you use it in a circular fashion you can avoid a lot of the 'read-erase-rewrite' cycles... but I don't know how the cache is organised for sure.

    • Just to make sure... it's not the head-unload sound? And have you disabled idle hard drive power-off in the OS power settings?
  • by jafo ( 11982 ) on Wednesday November 30, 2011 @04:34AM (#38211858) Homepage

    Hybrid drives, and even all of the hybrid RAID controllers I've looked at, only use the SSD for read acceleration. They aren't used for writes, from what I could tell from their specs. So you're almost certainly better off upgrading your system to the next larger amount of RAM rather than getting a hybrid drive.

    Personally, I looked at my storage usage and realized that if I didn't keep *EVERYTHING* on my laptop (every photo I'd taken for 10+ years, 4 or 5 Linux ISOs, etc.) and instead put those on a server at home, I could go from a 500GB spinning disc to an 80GB SSD. So I did, and there's been no looking back. The first-gen Intel X-25M drives had some performance issues, but since then I've been happy with their performance.

  • So it is big and prone to shocks? Servers may have their own particular needs, but for consumers the advantages of SSDs are size and resistance to shocks. Speed is only a slight advantage.
  • I was all set to buy a new laptop with the OS mounted on the SSD and a second HDD for mass storage. The obvious solution to me would've been to map the user directories to the HDD for file storage. Not a problem with Linux of course, but you can't do this with Windows! Can't recall the details, but there's some path info hard-coded somewhere that prevents you from moving your "My Documents" folder to a different drive. I never saw any workaround that didn't feel like a hack that would cause problems lat

  • This Drive is CRAP (Score:5, Informative)

    by rdebath ( 884132 ) on Wednesday November 30, 2011 @05:13AM (#38211998)

    This Drive is CRAP
    ASSUMING that it still only does read caching.

    I bought one of the Gen-1 drives and was very underwhelmed. I wanted write caching; 4GB of non-volatile memory with the performance of SLC flash could allow Windows (or whatever) to write to the drive flat out for many seconds without a single choke due to the drive.

    In addition, 4GB of write-back cache is enough to give a significant performance boost for continuous random writes across the drive, and even more so across a small extent such as a database or a .NET native image cache.

    But for reading it's insignificant compared to the 3-16GB of (so much faster) main memory that most systems contain, except at boot time when, unlike RAM, it will already contain some data. The problem with this is that it will contain the most recently read data, whereas the boot files can quite reasonably be described as least recently read.

    So in the real world it's useless for anything except a machine that's rebooted every five minutes ...

    • by shitzu ( 931108 )

      Considering the price of RAM and flash, I do not really understand these hybrid drives. Wouldn't it be cheaper and make more sense to just put an 8GB (or 16GB, or more) battery-protected RAM cache inside the hard disk rather than flash memory?

      P.S. I chose to go the SSD route anyway; hybrid drives never entered my mind as an alternative.

  • by evilviper ( 135110 ) on Wednesday November 30, 2011 @05:20AM (#38212022) Journal

    They may be slower than SSDs, but not by much

    That's horribly incorrect. I liked the sound of hybrid drives as well when I saw the price... a 500GB laptop hard drive with 4GB of flash for $150 should be awesome... But I, not being an idiot, did some research, and sure enough, the reviews say it's not remotely comparable to a real SSD.

    eg. http://www.storagereview.com/seagate_momentus_xt_review [storagereview.com]

    It's faster than a drive without such a cache, and it might be a good option for a laptop, but even there I'd say a 32GB SD card would be cheaper, and will work wonders on FreeBSD with ZFS configured for L2ARC...

    I have no particular interest in what anyone buys, but the comparison to real SSDs is massively dishonest.

  • That's the real question with a hybrid drive. If you're running any kind of database, your performance is limited by how quickly you can fsync. A hybrid ought to make fsync nearly instant, which would be a major speed and reliability win.

  • Its benchmarks for cold boots and application launches show the new drive to be just a few seconds slower than a SSD.

    My Debian sid boots in a few (noticeably less than ten) seconds into kdm. A few seconds out of ten is about a third or more.

    "Newfangled tech! Now at least 33% slower!"

    Great slogan you got there.

  • Is the Time Finally Right For Hybrid Hard Drives?

    No.

  • by SmallFurryCreature ( 593017 ) on Wednesday November 30, 2011 @06:32AM (#38212270) Journal

    The article seems to think hybrid drives are the best of both worlds, but they are not.

    They have the unknown reliability of SSD/flash drives (they do fail) COMBINED with the failure rate of consumer-grade HDs (not that good).

    They are not as speedy as pure SSD and not as cheap as pure HD.

    So the people who want speed spend the money on a real SSD and use cheap, reliable HDs for mass storage in a NAS.

    The people who want cheap buy regular old HDs and accept the lower performance, or just whine about it without doing anything, because they are cheap.

    The middle market - the people too cheap to buy an SSD but willing to spend far more on a small HD - I guess it just ain't there. ESPECIALLY since this lower class of consumer tends to buy ready-made machines. Notice how the consoles only increase the HD space at the same time netbooks do? When THAT size of laptop HD has reached rock-bottom price and you would actually have to pay more to get a smaller one?

    Well, same for budget PC makers. They buy HDs in bulk and put the same size in everything to cut costs. They are NOT going to add several tenners' worth of hardware in the faint hope that budget PC buyers will buy the more expensive model when it sits next to the cheaper models in the shop.

    And the high-end PC makers? They simply buy cheap SSDs and charge a premium for them.

    Budget and high-end markets are FAR easier to supply than the mid-range, because the budget people think anything more expensive is a rip-off and the high-end people look down their noses at anything cheap.

  • by Shivetya ( 243324 ) on Wednesday November 30, 2011 @06:35AM (#38212284) Homepage Journal

    and their upcoming Ivy Bridge chipset will take it even further. Both allow for the use of a small SSD drive as a cache against a larger traditional hard drive.

    Per the wiki page on their chipsets, the Z68 also added support for transparently caching hard disk data onto solid-state drives (up to 64GB), a technology called Smart Response Technology.

    SRT link is http://en.wikipedia.org/wiki/Smart_Response_Technology [wikipedia.org]

    • by smash ( 1351 )
      It still requires two drives in the machine; for laptops this is often not viable.
        • It still requires two drives in the machine; for laptops this is often not viable.

        But the SSD doesn't have to be added as a discrete component. You can already get motherboards [newegg.com] that incorporate a small SSD drive to be used with SRT.

  • The current idea is to put often-used files on the SSD, and less-used files on the HDD.

    I bet you could get even better performance by splitting every file and putting the first few blocks on the SSD.
    When a file is accessed, the SSD can start delivering data immediately while the HDD has some time to find the rest of the file and take over from there.
    That should make every file access fast.
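
    A toy illustration of that split placement (hypothetical Python; "flash" holds each file's first blocks, "hdd" the remainder, and the thread stands in for the platter seeking in parallel):

    import threading

    def read_file(name, flash, hdd):
        rest = {}
        t = threading.Thread(target=lambda: rest.update(tail=hdd[name]))
        t.start()                       # platter starts seeking immediately...
        for block in flash[name]:       # ...while flash serves the head at once
            yield block
        t.join()                        # by now the disk should have caught up
        yield from rest["tail"]

    flash = {"movie": [b"b0", b"b1", b"b2", b"b3"]}  # first blocks, on flash
    hdd = {"movie": [b"b4", b"b5"]}                  # remainder, on the platter
    print(list(read_file("movie", flash, hdd)))      # all six blocks, in order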

  • ...and you can't force the drive to actually flush the cache to storage immediately. For the sake of efficiency, drive APIs will lie about data having been persisted.
  • by laird ( 2705 ) <lairdp@gmail.TWAINcom minus author> on Wednesday November 30, 2011 @09:00AM (#38212934) Journal

    So how do hybrid drives decide what's stored in the SSD vs the disk? From working in the hard drive business, I can think of several ways to tackle this - which is it?

    1. Drive observes usage patterns and stores data on SSD vs disk based on that. This would be cool since it's transparent to the OS, etc., so it can work by "magic" (e.g. like bad block remapping), but it feels like it might be less effective than the other strategies depending on how good a job it does guessing how data is used. Also, there are some cases that are 'rare' (such as boot time) but which are important to optimize, even if statistically it wouldn't appear so.

    2. Driver/OS controls what's stored where. This could be great, since they can have much more knowledge of what's going on than the drive.

    3. SSD and disk are distinct 'drives'. This would allow the user to optimize (e.g. put boot OS and swap on SSD, big files on disk, etc.). But it requires users to understand and manage tradeoffs explicitly, which most people probably don't want to deal with.

    So which is it? Does anyone know?
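
    Strategy 1 could be as simple as counting accesses per region and mirroring the hottest regions in flash. A toy sketch (purely illustrative, not any actual firmware; a real drive would promote lazily and would special-case rare-but-important data like boot blocks):

    from collections import Counter

    class UsageBasedCache:
        def __init__(self, flash_slots):
            self.hits = Counter()            # read counts per disk region
            self.flash_slots = flash_slots   # how many regions fit in flash
            self.in_flash = set()

        def on_read(self, region):
            self.hits[region] += 1
            # Keep the most-read regions mirrored in flash.
            self.in_flash = {r for r, _ in self.hits.most_common(self.flash_slots)}
            return "flash" if region in self.in_flash else "platter"

    c = UsageBasedCache(flash_slots=2)
    for r in [1, 1, 2, 3, 1, 2, 2]:
        print("region", r, "->", c.on_read(r))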
