Data Storage / Hardware

3TB Hard Drive Round Up

MojoKid writes "When 3TB hard drives first arrived, compatible motherboards with newer UEFI setup utilities weren't quite ready for prime time. However, with the latest Intel and AMD chipsets hitting the market, UEFI has become commonplace and compatibility with 3TB drives is no longer an issue... A detailed look at four of the latest 3TB drives to hit the market from Hitachi, Seagate, and Western Digital shows ... there are some distinct differences between them. Performance-wise, Seagate's Barracuda XT 3TB drive seems to be the current leader, but other, slightly less expensive drives come close."
  • by Kenja ( 541830 ) on Wednesday September 07, 2011 @12:32PM (#37328744)
    Seems the trend is that as capacity increases, so does the failure rate. For comparison, the older 1TB Seagates claim 1,200,000 hours.
    • by Hatta ( 162192 )

      If you bought three 1TB Seagates, you'd be 3x as likely to suffer a failure. So that's really more like a 400,000-hour MTBF for 3TB worth of space.
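
      Concretely, the arithmetic (a minimal sketch in Python; the 1,200,000-hour figure is the vendor claim quoted above):

          # Effective MTBF shrinks in proportion to the number of drives
          # needed to reach the same capacity.
          single_drive_mtbf_h = 1_200_000   # claimed MTBF of one 1TB drive
          drives_for_3tb = 3
          print(single_drive_mtbf_h / drives_for_3tb)   # 400000.0 hours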

      • by Kenja ( 541830 )
        But I would have bought four and put them in a Drobo-style array. So no data loss.
        • I break into your house and steal your Drobo, then what?

          • Then he is in exactly the same situation as if you broke into his house and stole his 3TB disk, which is completely orthogonal to the MTBF.
          • Re: (Score:2, Funny)

            by Anonymous Coward

            you get shot?

        • by jovius ( 974690 )

          I actually did quite a thorough review of NAS/backup boxes for home use and in the end decided to build one myself. Now I have a 4TB Ubuntu box with plenty of room to expand, good connectivity, and, most important, total control of the system. The passively cooled mini-ITX board and the case support six SATA III drives besides having eSATA and USB 3.0; the OS launches from a small SSD. RAID is something that many people automatically implement even though it's not always necessary or convenient. I simply rs

          • You're using ZFS... with Ubuntu?
            • by jovius ( 974690 )

              I first tried FreeNAS before installing Ubuntu, and ZFS was a natural choice (for the 2TB drives). I didn't want to use the SSD as a cache for ZFS, so I decided to install Ubuntu on it rather than having an unused drive. The SSD has the default Linux filesystem. I found a great ZFS driver, which doubled the disk-to-disk transfer rate compared to fuse-ZFS.

              • Pony up. I need to transition off of fuse-ZFS. Is it ZFS on Linux? I've been waiting forever for it to exit beta, but mostly I just haven't had time to experiment.

                • by jovius ( 974690 )

                  Yes it is (http://zfsonlinux.org/). I installed it from Darik Horn's PPA as instructed in the FAQ. I don't know how fast the disks should optimally be (100 MB/s reads from a single disk, WD Caviar Green), but the disk-to-disk rate peaks a bit above 60 MB/s, compared to 30 MB/s before. The average is somewhere a bit above 40 MB/s.
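
                  A rough way to reproduce this kind of disk-to-disk number (a sketch; the paths are hypothetical, and the test file should be much larger than RAM so the page cache doesn't flatter the result):

                      import os, shutil, time

                      SRC = "/pool-a/bigfile.bin"   # hypothetical file on one disk
                      DST = "/pool-b/bigfile.bin"   # hypothetical destination on another disk

                      size = os.path.getsize(SRC)
                      start = time.monotonic()
                      shutil.copyfile(SRC, DST)     # sequential disk-to-disk copy
                      elapsed = time.monotonic() - start
                      print(f"{size / elapsed / 1e6:.1f} MB/s")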

                  • Interesting. I hadn't realized any of the native implementations had progressed so far, and trying to run something like ZFS through FUSE just seems disastrous. I'll have to look into this. Right now, I've got several Gentoo systems booting off iSCSI JFS disks, served from cloned ZFS volumes on a FreeBSD server. It would be useful if the images were formatted with a filesystem the server could actually read.
                    • For what it's worth, I've been running zfs under fuse for a while, and am extremely happy with it. It wasn't built for performance, I just wanted the snapshotting + raidz features, so I've never benchmarked it. That said, reliability has been good. I've had a drive fail on me, and rebuilding the array worked as advertised, no issues introduced from fuse.
                  • That is an improvement. I tend to attribute the slowdown (compared to the 100MB/s) to filesystem overhead, particularly in my case because I'm generally writing over gigabit from OS X via a netatalk AFP share. But even as-is, I tend to get above 30MB/s to a single-disk ZFS pool on ZFS-FUSE. I would love to see that jump to 50-60MB/s, which I would have to consider best-case for reading/writing over a single gigabit link from OS X from a single laptop HD.

                    I have seriously considered switching from Ubuntu to Fedora h

                    • by jovius ( 974690 )

                      The FreeNAS people recommend at least 6 gigabytes of memory for a system of my size, and when I do a large copy (about a 100 GB OS X image) it actually takes that much and reaches the top speed. The CPU cores hit 100% (the board is an Asus E35M1-I Deluxe, with AMD Fusion). If you have a large setup, the extra memory will likely contribute to the transfer rate. I checked out the other NAS OS's, but I like Ubuntu for the versatility. It's probably possible to have yet another performance boost by having a more optimized

                    • by ifrag ( 984323 )

                      Huh... how fast is your network?

                      That 6 GB figure for RAM seems a bit excessive (although with cheap RAM prices, probably no big deal). I'm running a 4x1TB ZFS on OpenSolaris and it hits Gigabit Ethernet speed with 2 GB RAM (not actually sure it's even using all that).

                    • by jovius ( 974690 )

                      I was benchmarking internal disk-to-disk transfers, and the system really took a bit over 6 gigabytes. The memory usage pattern was of some interest: it ramped up for some time until releasing, and then the cycle restarted. The default for ZFS seems to be that all of the system memory is used except 1GB: http://www.solarisinternals.com/wiki/index.php/ZFS_Best_Practices_Guide#Memory_and_Dynamic_Reconfiguration_Recommendations [solarisinternals.com]

                      From the FreeNAS hardware reqs: "The best way to get the most out of your FreeNAS h

                    • by jovius ( 974690 )

                      An important note was left out of the FreeNAS hardware reqs:

                      NOTE: by default, ZFS disables pre-fetching (caching) for systems containing less than 4 GB of usable RAM. Not using pre-fetching can really slow down performance. 4 GB of usable RAM is not the same thing as 4 GB of installed RAM, as the operating system resides in RAM. This means that the practical pre-fetching threshold is 6 GB, or 8 GB of installed RAM. You can still use ZFS with less RAM, but performance will be affected.

          • by Hatta ( 162192 )

            You're using ZFS and manually managing backups with rsync? That's ass backwards.

            • by jovius ( 974690 )

              I understand ZFS is mostly for RAIDs, then? Most probably I'm not doing things as they should be done, but I like ZFS and originally thought of using FreeNAS, which recommends ZFS anyway. I'll probably check back on it when the new version matures a bit. I'm not after meticulous performance tweaking, though. It's just great to have a backup system rather than a single backup drive (most of my data is practically triple secured).
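
              For what it's worth, the snapshot-based workflow the grandparent is alluding to can replace rsync entirely (a minimal sketch driving the standard zfs CLI; the pool and dataset names are hypothetical, and it assumes root privileges):

                  import subprocess
                  from datetime import datetime

                  # Snapshot the dataset, then replicate it to a second pool with
                  # zfs send | zfs recv instead of rsyncing file by file.
                  snap = f"tank/data@{datetime.now():%Y%m%d-%H%M%S}"
                  subprocess.run(["zfs", "snapshot", snap], check=True)

                  send = subprocess.Popen(["zfs", "send", snap], stdout=subprocess.PIPE)
                  subprocess.run(["zfs", "recv", "-F", "backup/data"],
                                 stdin=send.stdout, check=True)
                  send.stdout.close()
                  send.wait()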

      • Comment removed based on user account deletion
      • Even the 750k MTBF feels bogus. In my experience, the actual failure rate is just a hair under 3%, which works out to about 300k hours MTBF. Maybe they're quoting the MTBF of a drive still in its anti-static bag, sitting in the spares drawer :)

        Try shoving 36 3TB drives in one of these: http://www.supermicro.com/products/chassis/4U/847/SC847E26-R1400LP.cfm [supermicro.com] and you'll appreciate MTBF in a whole new light. My approach is simple: I take the number of drives, times 3% annual failure, times the number of years I want to
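
        Spelled out (a sketch of the formula just described, using the 3% figure from above; the 36 bays are the SC847's):

            n_drives = 36          # fully loaded SC847 chassis
            annual_failure = 0.03  # observed annual failure rate
            years = 5              # planned service life
            spares = n_drives * annual_failure * years
            print(spares)          # 5.4 -> stock at least 6 spare drives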

        • For a single drive, well, failure rate becomes a lottery.

          You misspelled "single manufacturing batch of drives".

          • The problem is that a hard drive "batch" isn't really a batch at all; it's an entire production run from any particular facility. Typically, once you break into a new "batch", the drives have other differences, like a cost-reduced controller or a different platter density, that make them unsuitable for integration with the existing array.

            The most defining indicator I've seen for hard drives is simply their inception date. If you take a bunch of drives and start pounding them in a RAID simultaneously, they are a li

        • That's the right way to do it if you don't want to deal with field service.
          And 3% sounds perfectly reasonable for the consumer channel, remembering my hard drive days. You only get to 1 or 2% in a tightly controlled OEM environment with mature (aka obsolete) technology. That's why enterprise storage vendors are always 1 or 2 generations behind the bleeding edge. We're just now qualifying 2TB drives.

    • Total lie. Most HDDs will fail at around 3 years, so ~26,000 hours.

      • >Total lie
        My HDD, still running since 1997, would prove you wrong... although I do not have much on it, as it is a small HDD (250GB), it would still outlast any of the devices today......

      • by b0bby ( 201198 )

        Just keep your receipt; Seagate is giving you a 5-year warranty on the 3TB drives.

        I have lots of hard drives here which are 4+ years old; I have only had one fail in the past year. I'm sure the oldest drive that's still working here is 8+ years old, and there were lots more that just got cycled out.

        • by repetty ( 260322 )

          I don't give anyone my failed HDs... it's a privacy thing.

          MTBF figures are fictitious. They mean nothing.

          Hard drive warranties are irrelevant. They mean nothing.

          Warranties do not imply quality. Seems like they might, but... they don't.

          My Seagates fail faithfully after around 30,000 hours. I track 'em, so I know.

          WDs fail... Hitachis fail... they all fail if you use them long enough.

          Working 8-year-old hard drives can be the basis for an amusing anecdote, one of those "ain't that the dangest thing" sort of st

      • I just dealt with a failed drive last night. The drive was a 26-year-old NEC D5124.

        How's THAT for TBF?

  • by beelsebob ( 529313 ) on Wednesday September 07, 2011 @12:49PM (#37329024)

    For every drive, they comment that it has a 2.72TB capacity reported in Windows. Why is this surprising them so much? Everyone knows that Windows misreports TiB as TB. Given that all these drives are advertised as 3TB, and 3TB is equal to 2.728TiB, it's hardly surprising what capacity Windows reports, is it?
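
    The arithmetic, for anyone who wants to check it (decimal terabytes are 10^12 bytes; binary tebibytes are 2^40):

        advertised_bytes = 3 * 10**12      # "3TB" as the vendor counts it
        tib = advertised_bytes / 2**40     # how Windows divides, yet labels "TB"
        print(f"{tib:.3f} TiB")            # 2.728 TiB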

    • Also, do not miss the fact that the drives have throughput topping 160TB per second. These drives are fast. o_O

      http://hothardware.com/articleimages/Item1712/3tb_roundup_atto_read.png [hothardware.com]

    • Everyone knows that Windows misreports TiB as TB.

      Uh, no; "everyone*" knows that hard drive manufacturers, computer manufacturers, and resellers misrepresent TiB as TB. In the rare cases they do disclose it in advertising or on the outside packaging, it is in 3- or 4-point fine print in a low-contrast color, written in a manner so technical it may as well be Greek to a nontechnical person, or it will refer the user to a web site.

      Windows reports the traditionally-accepted units to the e

    • by J-1000 ( 869558 )
      According to Google unit conversion, 1 mebibyte = 1 megabyte, and both equal 1024 kilobytes each. Can someone please explain?
      • Google is getting it wrong. 1 megabyte is equal to 1000 kilobytes, which is equal to 1,000,000 bytes. 1 mebibyte is equal to 1024 kibibytes, which is equal to 1,048,576 bytes. These are agreed standards from the IEEE, ISO, and IEC.

        http://en.wikipedia.org/wiki/Mebibyte [wikipedia.org]

        • Way too complex.

          For us American consumers, I would suggest

          Small
          Medium
          Large
          XXL
          XXXL
          Oprah Winfrey

          None of this mathy stuff.

          • You have to plan for the future.

            What's beyond Oprah Winfrey? And what happens if she shrinks? Is that data loss? Is it recoverable or reusable?

            • by Luyseyal ( 3154 )

              OMG, what will we do if the Library of Congress burns down! Or -- traveling backward in time -- all the gold transmutes!

              -l

            • by rworne ( 538610 )

              She shrunk once before during the "Oprah Diet".

              Somehow we all survived that. Later, she seemed to recover all the loss just fine... and then some.

        • They can say whatever they want, but even today I'd never heard of a 'mebibyte', and I have a degree in comp sci and have been in IT for over a decade. In fact, I'm currently working for a college whose comp sci program doesn't once mention mebibytes anywhere. Your ranting about how 'MB' doesn't mean what most of the world was taught seems to mean a whole lot of nothing.

          Personally, 'way back' in 1996 when I went to college, mebibytes didn't exist and we were taught 1 KB (kilobyte) = 1024 bytes. Every

  • Link to print-friendly article: http://hothardware.com/printarticle.aspx?articleid=1712 [hothardware.com]
  • by Nemilar ( 173603 ) on Wednesday September 07, 2011 @12:59PM (#37329174) Homepage

    Am I reading the graphs wrong, or are they claiming 160,000MB/s throughput on those drives?

    Is that supposed to be KB/s? I might buy 160MB/s (that's still crazy high), but 160GB/s?
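
    If the plotted numbers are really KB/s, as guessed here, the figures land in a believable range (a sketch; the unit is an assumption, since the axis label in the image is exactly what's in question):

        plotted_value = 160_000       # reading taken off the graph
        print(plotted_value / 1000)   # 160.0 MB/s if the axis is KB/s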

  • Sure, a single external drive for backups is one thing, but for everyday use I prefer RAID-5 or RAID-Z. Sure it's anecdotal, but it just seems to me that newer drives fail more often than older ones. Not to mention that losing all the data on a 3 TB disk is a bit worse than losing all the data on a 540 MB or even 9 GB disk was. Sure, I had important data on those as well, but it was easier to keep the most important stuff backed up properly.

    • The idea of a rational backup strategy is that you don't lose data, no matter what size drive you are using. So if you're worried about a 3TB drive failing, you're doing it wrong.

      • The rational backup strategy for my movie collection *is* a RAID-1. One of its drives has actually failed (Samsung 2TB), so I have to replace it *quick*. It's not 100% safe or anything, but it's a balanced decision in my opinion.

        Documents are stored on my SSD, my RAID, and online, but I don't need 3 TB for that anyway.

        • And that's fine. But backup strategy shouldn't be based on 'ooh, that drive is too big to fail'. That IS what RAID is for, after all.

    • The claim that newer drives are more error-prone is a fallacy, resulting from a failure to understand basic statistics. These are people who in the past bought one, maybe two hard drives, but now have several OS drives in several computers, plus several more for bulk storage, portable storage, etc. When you have five times the number of drives, you are five times more likely to suffer a failure in one of them. People are experiencing more failures because there is more to fail, and they are i
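
      The underlying statistics, sketched with an illustrative 3% annual failure rate per drive:

          p_annual = 0.03                      # per-drive annual failure rate
          for n in (1, 5, 10):                 # number of drives owned
              p_any = 1 - (1 - p_annual) ** n  # chance at least one fails in a year
              print(n, f"{p_any:.1%}")         # 1 -> 3.0%, 5 -> 14.1%, 10 -> 26.3%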

  • How many times have we been through this "my BIOS doesn't recognize my drive because it's too large" cycle? Then the BIOS vendors find another way to tack on another factor of two. Then next year we have the same problem. Why the hell can't we solve this problem once and for all? Is there some fool who actually believes that next year, drives won't be bigger?
    • For us old fogies, what we did was read the drive parameters off the drive and then manually enter them.

      Then, if the controller could not address all the space, we used what was called drive overlay software, usually provided by the manufacturer. That should get you up and running, but back in the day drive compression software could cause serious issues with drives built with a drive overlay.

      Also, data recovery gets more than just a little tricky.

      If the BIOS can't autodetect the drive, that is your most likely path to

  • Based on the current size of the Library of Congress [loc.gov], you'd need a RAID of 93 of these to store everything! And you'd need to increase that RAID by two drives every month to keep up!
  • Do 3 TB drives exist, and are they reliable?

  • 4TB [seagate.com] is where it's at!
