zanny 6 years ago

So why have drive prices stalled out for the last 3 years? An 850 Evo is still the "premier" 2.5" drive and still goes on sale for the same lows it did years ago - the best deals are around $240/TB. The new NVMe drives are a markup from there. 3TB drives have had low prices of $50-70 for over four years now, but 4TB and up never creep lower. About $20 a TB has been the best mechanical storage price you could get since about 2014.

When are we going to see a $100 6TB drive? 8TB? It seems like density has just stopped improving on both fronts for practical purchases. 10 years ago a terabyte cost $200 - 2 years later it was $100, and 2 years after that $50. But now, the 8+TB drives just stay at that $200 price point. SSD prices fell off a cliff from 2011 to 2015 and then just stopped falling.

It's not a good feeling when it seems Moore's Law has come to a screeching halt across all kinds of components. RAM is 3x more expensive today than it was 2 years ago. CPUs might see 5% improvement year over year. There hasn't been a substantial new GPU series in almost 3 years.

Go back and tell someone in 2006 that the PC you build in 2016 will be just as good as the one in 2018 while being cheaper, and they'd think it's impossible that progress could grind to such a halt so fast after 40 years of lockstep improvement.

  • bb88 6 years ago

    I think it's maybe three things.

    1. The industry is retooling from HDD to SSD, but manufacturers apparently can't keep up with demand for SSDs, so prices are artificially high.

    2. Demand for faster, denser, and lower-power SSDs is coming from the cloud providers. So rather than incremental price decreases over time, we're seeing prices stay the same while we get incremental technology improvements. V-NAND is the hot new technology, enabling 2TB of storage on a single chip.

    3. There's no more serious innovation in HDDs, since SSDs will likely eclipse HDDs in 2021.[1]

    [1] https://www.statista.com/statistics/285474/hdds-and-ssds-in-...

    • cm2187 6 years ago

      Can we really use SSDs for long-term storage?

      • lev99 6 years ago

        Long-term storage (archival) of sequential data is still done on tape in many circumstances. Tapes are more reliable than hard drives, cheaper, and have faster write speeds.

        Sony announced their highest density tape at 201Gbit/in^2 last year.[0]

        You can use SSDs for long-term storage, assuming the data is stored on redundant drives and drive health is checked. It's really only cost-effective if you need the data frequently or if you are only archiving a small amount (1PB or less).

        [0] https://www.sony.net/SonyInfo/News/Press/201708/17-070E/inde...

        • Patient0 6 years ago

          Just to clarify: you're saying that 1PB (1 petabyte, or 1,000 TB) is small?

          • wtallis 6 years ago

            We're very close to being able to fit 1PB of flash in a 1U server. So yes, 1PB is a small-scale archival project. Current generation tape libraries start around 1PB and scale up from there.

          • lev99 6 years ago

            I'm saying less than 1 petabyte is a small amount of data for long-term storage.

            30TB is large if you're talking about daily network activity, your personal photo collection, or the size of one SSD that can fit in a laptop.

            Many businesses have digitized records 50+ years old. Even newer businesses can hit the 1PB mark. Older businesses might primarily store accounting records; businesses doing machine learning are interested in much more data than that.

      • srdev 6 years ago

        Can you use mechanical drives for long-term storage? I would expect that any long-term storage would involve a strategy including backups and redundant devices.

        • bb88 6 years ago

          As long as the machinery for HDDs and SSDs keeps plugging away, the software exists to duplicate, verify, and repair large amounts of data across many, many devices.

      • fleitz 6 years ago

        Define long term?

        20 years? Probably not, but then you probably don't want to be running a 16GB 3.5" HDD from 1998 either.

        Until you replace it? Definitely.

  • bo1024 6 years ago

    In my understanding, the following factors are currently in play.

    * CPUs are a whole issue on their own and the reasons for slow progress are well known.

    * GPUs have been improving a fair amount, I think, but prices are very high now due to cryptocurrencies, and much of the improvement comes from the large amounts of memory on the cards...

    * ...SSDs (flash memory) and RAM are staying expensive due to extremely high and growing demand. I have also heard suggestions of collusion among RAM manufacturers, but I don't know. Think about how much new RAM and disk space is added to servers for YouTube and Facebook alone each day. Furthermore, flash memory is in high demand for almost all other computing devices, particularly phones.

    * HDDs seem to have, to some extent, hit a limit on how well the technology can perform, and demand for storage is very high (see above). Perhaps manufacturers are reluctant to invest further knowing that SSD technology is catching up?

  • tagrun 6 years ago

    Something similar is true for HDDs, BTW. Consumer-grade HDDs have been stuck at 8TB since 2015, and their prices keep hovering around $200.

    But it's not due to lack of technical progress. There are 10TB, 12TB, 14TB drives out there, but there is a huge gap in per-GB cost for them since they're marketed for enterprise users.

    It almost looks like they're trying to milk HDDs for exorbitant prices while they still have the chance, at least until SSD manufacturers start lowering their prices to a competitive level (their excuse is that manufacturing can't keep up with demand, which also reads to me as "why would we lower prices when demand is this strong?").

  • pishpash 6 years ago

    There is still Moore's law in mobile devices and card-sized computers. I think we're just seeing a shift in supply-demand and the relevant economies of scale.

jpalomaki 6 years ago

The performance figures given in the article are about the same as those for the Samsung 960 Pro NVMe [1]. I would have expected this to have an opportunity for higher speeds by splitting reads/writes across multiple chips.

40GB of DDR is pretty interesting. Add a battery or a suitable capacitor and you could use that as a reliable write cache. With SSDs I guess you could design it so that the capacitor has enough power to flush the data from the cache to the flash chips even if external power is lost.

[1] http://www.samsung.com/semiconductor/minisite/ssd/product/co...

  • wtallis 6 years ago

    > Would have expected this to have an opportunity for higher speeds by splitting the write/read to multiple chips.

    This drive uses a 12Gb/s SAS interface rather than a ~32Gb/s PCIe x4 interface. SAS drives are still preferred in many segments of the enterprise storage market, especially where capacity is a higher priority than performance.

    > 40GB of DDR is pretty interesting. Add there a battery or suitable capacitor and you could use that as a reliable write cache.

    Supercapacitors are standard issue in enterprise SSDs like this. Most of the DRAM is used to cache the mappings between logical block addresses and physical NAND flash pages and blocks. With 40GB of DRAM but only 32TB nominal raw capacity of NAND flash on the drive, there's quite a bit of leftover DRAM for write caching. (The normal ratio is 1GB of DRAM per 1TB of NAND.)
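
    A rough back-of-the-envelope of that split, assuming the 1GB-of-DRAM-per-1TB-of-NAND rule of thumb above (illustrative figures, not from Samsung's spec sheet):

      # Split of the drive's DRAM, assuming ~1 GB of DRAM per 1 TB of NAND
      # is needed for the logical-to-physical mapping tables.
      nand_tb = 32                      # nominal raw NAND capacity (TB)
      dram_gb = 40                      # DRAM on the drive (GB)

      mapping_gb = nand_tb * 1.0        # DRAM used by the FTL mapping
      spare_gb = dram_gb - mapping_gb   # DRAM left for write caching, etc.

      print(f"~{mapping_gb:.0f} GB for mapping, ~{spare_gb:.0f} GB spare")
      # -> ~32 GB for mapping, ~8 GB spare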

  • IntronExon 6 years ago

    Jesus, I remember being blown away by 16MB of HDD storage years ago. I am older than dirt! Now it’s 30TB, and solid state... no flying cars, but the future has definitely come through in some regards.

    Edit: These responses have filled me with a really pleasant nostalgia. It’s a pleasure to have such a simple, yet profound shared experience; the “We’ll never fill that up!...” club. Drinks are on me.

    • op00to 6 years ago

      I remember my father installing a 250 MB "hardcard" into our computer at home, and marveling at how we'd NEVER fill it up.

      • Dylan16807 6 years ago

        It's an interesting thing, because it's naive in a very particular way, but it's also more true than most people would guess.

        And that's photography.

        A single photograph requires a bunch of megabytes to store properly. Videos require tens of thousands of frames.

        If you were working with text, it would still be easy to fit everything you needed in 250MB.

        • Brockenstein 6 years ago

          >It's an interesting thing, because it's naive in a very particular way,

          Well given the time period we're talking about, mainstream home computing was much younger, the user base was much smaller, and people's experience and expectations were a bit more limited.

          We're 25 years removed from the first Pentium CPUs. But Pentium CPUs are only 15 years removed from the first x86 CPUs. Our perceptions are much different now because we've experienced more, and this technology is much more embedded in nearly everyone's lives compared to 30 years ago. Even if you're a latecomer, there's still 20-30 years of history to draw on that we didn't use to have. And no one is questioning "how will I ever use 500GB of SSD space..."

        • kevin_thibedeau 6 years ago

          There were no consumer digital cameras when 250MB drives were common. The closest, Sony Mavicas, were still writing to analog media. Kodak was still using bulky briefcase storage units with their own hard drives. Not something the everyday "naive" computer user of the era would be investing in.

          • Dylan16807 6 years ago

            But when you're talking about "ever", you shouldn't be looking at what you can easily do today. Computer graphics existed. TV existed. "Ever" should include all of that.

            And especially, it should occur to people that they want to mirror/replace their filing cabinets with computers. Filing cabinets that are full of both text and photos.

          • zimpenfish 6 years ago

            The Casio QV-10a was released in 1995 - that was a consumer digital camera. Although I wouldn't even call 250MB "common" then based on historical prices and my memories.

            • kevin_thibedeau 6 years ago

              In '95 consumer grade computers were shipping with between 512MB and 1GB. 250 MB was common in the early 90's before the first wave of consumer digital cameras.

              • zimpenfish 6 years ago

                > In '95 consumer grade computers were shipping with between 512MB and 1GB.

                Do you have a source for that? I bought a consumer grade computer in 1995 and it only had 120MB of disk space. And the historical prices suggest that 250MB was beyond consumer prices until ~1995.

                http://www.mkomo.com/cost-per-gigabyte suggests 250MB would have been >$500 until 1994. Which is nowhere near "consumer" price.

        • PuffinBlue 6 years ago

          Yep, people do it with email all the time. I think I have under 200MB of email, and that's tens of thousands of plain-text emails from mailing lists.

          • tritium 6 years ago

            True, but email is pretty crufty with headers and markup, and that compounds with the flotsam of unread spam retained out of laziness and for the sake of completeness. Not to mention the duplication that occurs with the endlessly indented prior messages retained in lengthy reply chains across all conversation participants. Compression of that kind of information helps, but compression only attacks the problem in the usual blind, robotic manner.

            I'd estimate that high-definition video has similar problems, in that 24 hours of a pitch-black screen with an ambient noise floor, captured at 4K resolution and a 240fps frame rate, would still eat up 1TB even with an efficient codec, because imperceptible thermal noise in the frames leaves a lot of incompressible variation in the pixel raster, even though the actual color differences are imperceptible to people.
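
            For scale, here's the average bitrate that filling 1TB in 24 hours would imply (just the arithmetic behind that guess, not a measurement):

              # Average bitrate implied by writing 1 TB over 24 hours of recording.
              terabyte_bytes = 1e12
              seconds = 24 * 60 * 60

              mb_per_s = terabyte_bytes / seconds / 1e6   # megabytes per second
              mbit_per_s = mb_per_s * 8                   # megabits per second

              print(f"~{mb_per_s:.1f} MB/s, about {mbit_per_s:.0f} Mbit/s on average")
              # -> ~11.6 MB/s, about 93 Mbit/s on average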

      • atonse 6 years ago

        My dad and I once asked a computer repair guy to replace our hard drive and picked a 1 GB hard drive, and we similarly thought "WE'D NEVER FILL IT UP!"

        Now we get watches with 8 times as much storage.

      • cr0sh 6 years ago

        My first hard drive was a 20 MB monster in an 8088 "laptop"; my computer prior to that had 180K 5.25 floppies, so I thought I'd never run out of room...

        • JoeAltmaier 6 years ago

          My first hard drive was a shared 10MB drive on a 'server' that 2 other people used. My diskless workstation (CTOS AWS, for what it's worth) had dual floppies where I held my source code and build. The tools were downloaded from the 'server'.

      • orthros 6 years ago

        In high school, I recall drooling over a 1GB hard drive in Computer Shopper magazine for $1,699.

        I'm guessing that in 30 years we'll be looking at 1PB drives in the $150 range, inflation adjusted. What a time to be alive.

        • ben_w 6 years ago

          That seems pessimistic to me. There was, what, an 8,000-fold increase in capacity over just the last 20 years? That should get us to an exabyte rather than a petabyte for $150 in the next 30.

          • jedberg 6 years ago

            At some point the laws of physics kick in. Physics already flattened out the CPU curve years ago.

            • ben_w 6 years ago

              At some point.

              The flattening of the CPU speed curve wasn't matched by the GPU performance curve. The difference between linear computation and graphics workloads is that the latter are parallelizable and the former is not. Flash is also trivially parallel.

              Even with perfect materials, a 10 millimetre CPU couldn't be faster than about 30 GHz because of the speed of light, but even current microSD cards have enough bits per unit volume to put 1.2 petabytes in the largest current standard hard drive form factor, at a cost of £704k/$982k/€798k (rough arithmetic sketched below). I don't know how much of that volume is wasted on plastic casing or circuits that would be redundant if you really wanted the highest possible density, but then again I also don't know how much waste heat they produce either.

              My claim is primarily that costs can and will fall, whereas presuming capacity will keep rising requires something that uses a lot more data than even high-quality video currently does. (Full mind backups would be one of those things, but I don't know when to expect that.)
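
              Rough numbers behind the two figures above (the microSD dimensions, per-card capacity, and per-card price are my assumptions, just to show the arithmetic):

                # Speed-of-light bound: one signal crossing of a 10 mm die per cycle.
                c_mm_per_s = 3.0e11                  # speed of light in mm/s
                die_mm = 10
                print(f"~{c_mm_per_s / die_mm / 1e9:.0f} GHz upper bound")   # -> ~30 GHz

                # Packing microSD cards into the volume of a 3.5" drive.
                hdd_mm3 = 146 * 101.6 * 26.1         # 3.5" form factor, ~387 cm^3
                sd_mm3 = 15 * 11 * 1                 # one microSD card, ~0.165 cm^3
                cards = hdd_mm3 / sd_mm3             # ~2,350 cards
                capacity_pb = cards * 512e9 / 1e15   # assuming 512 GB per card
                cost_k = cards * 300 / 1000          # assuming ~£300 per card
                print(f"~{capacity_pb:.1f} PB for roughly £{cost_k:.0f}k")
                # -> ~1.2 PB for roughly £704k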

      • jsgo 6 years ago

        Similar for me when I was younger, but it was one of the smaller gigabyte-class hard drives, where I was like "wow, they've hit the point where I'll never be able to fill this thing."

        Those were innocent times.

        • berbec 6 years ago

          I've said "Wow, I'll never fill THAT drive" many times in my life, and wouldn't you know it, I'm always wrong. I stopped saying that years ago; guess I can learn.

          • kaybe 6 years ago

            I wonder how many of us have onion shells of data on our hard drives.

            Being lazy means just copying the entire old drive onto the new one, including images of nested older drives.

            • berbec 6 years ago

              I used to have a directory oldstuff and newstuff. When I got a new hard drive, I'd copy the entirety of the old disk into the "new" oldstuff. I think I got 6 layers before I decided to ditch that advanced sorting method.

            • bigiain 6 years ago

              I have deeply deeply nested folders named "Desktop mess" from several decades worth of migrating MacOS hard drives and computers... I _mostly_ do a better job with complete drive images though... I do have a shelf full of drives that got taken out and archived when Time Machine filled them up (all of which I went carefully hunting through when bitcoin hit $10,000 and kept going up...)

        • bigiain 6 years ago

          For quite a while, up until maybe 10 years ago, every hard drive I bought had more capacity than all the other hard drives I'd ever bought added together.

    • irascible 6 years ago

      "Some day I'll have a drive the size of a sugarcube that can hold a GIGABYTE of data!!" -me, in the 80s.

      Me today: "I kinda want one of those flash drives where the dog humps your usb port..."

  • jagger27 6 years ago

    40GB seems like a lot of cached writes to lose in a power loss.

    Assuming best-case write speeds of ~1,700MB/s, you'd need almost 30 seconds of power. Could an on-board capacitor satisfy that or is a RAID-style battery necessary?

    Though I suppose Samsung expects these drives to be put behind a RAID controller (12Gb/s SAS, not PCIe), likely with battery backup.
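
    The arithmetic behind that estimate (assuming the ~1,700MB/s best-case figure and ignoring flush overhead):

      # Worst case: the whole DRAM cache is dirty and must be flushed to NAND.
      cache_gb = 40
      write_mb_per_s = 1700     # assumed best-case sequential write speed

      hold_up_s = cache_gb * 1000 / write_mb_per_s
      print(f"~{hold_up_s:.0f} seconds of hold-up power needed")
      # -> ~24 seconds; closer to 30 once you add controller overhead and margin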

    • wtallis 6 years ago

      Intel's 8TB drive takes about 5 seconds to shut down and flush its caches, so your estimates are definitely in the right ballpark. I'm not sure how many more capacitors can be squeezed into a 2.5" drive. I'll be cracking open that 8TB drive once I'm done testing it, but carefully, and after making sure it has had plenty of time to discharge.

  • hateful 6 years ago

    I would guess that maybe there's overhead in adding additional NAND channels? I wonder at what point performance suffers because you have to split everything across too many channels.

    • wtallis 6 years ago

      The NAND channel count within SSDs is usually limited by a combination of the host interface bottleneck (no point in using 64 channels if 12 is sufficient to saturate the uplink) and by the cost of increasing pin count on the controller (a current 32-channel controller is a 1517 ball BGA). Increasing the channel count also adds to the power draw, and most products try to stay within 25W maximum.
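
      A minimal sketch of that sizing logic; the per-channel throughput is my own rough assumption, not a datasheet number:

        # How many NAND channels does it take to saturate the host link?
        import math

        link_gbit_per_s = 12                               # 12 Gb/s SAS
        link_mb_per_s = link_gbit_per_s / 8 * 0.8 * 1000   # ~1200 MB/s after 8b/10b encoding
        per_channel_mb_per_s = 100                         # assumed sustained throughput per channel

        channels = math.ceil(link_mb_per_s / per_channel_mb_per_s)
        print(f"~{channels} channels saturate a {link_gbit_per_s} Gb/s link")
        # -> ~12 channels; beyond that, extra channels mostly add pins and power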

  • noja 6 years ago

    Is the power draw higher when splitting to more chips?

    • astrodust 6 years ago

      SSD power draw has never been a huge problem, has it?

      • detaro 6 years ago

        Heat is an issue with high-end SSDs, requiring cooling or artificial throttling to avoid overheating.

      • noja 6 years ago

        We've not had 30TB at once before.

jedberg 6 years ago

Given that the price of their other enterprise disks seems to be about $800/TB, and tacking on a premium for the new tech and such, I'd guess the price will be close to $30K.

But man, if I had an extra $120K to blow, I'd totally build a 90TB NAS out of these.

  • chrisper 6 years ago

    Is there any particular reason why you'd need SSD speeds in your NAS? I'm assuming this would be for home use.

    If you say just for fun, then that's a valid reason :-)

    • wil421 6 years ago

      With gigabit internet I realized that HDD write speed is my limiting factor. It takes significantly longer to extract my compressed files than it does to download them.

      Adding an SSD as a staging drive for downloads and using it for transcoding has given me a good boost.

      • ben174 6 years ago

        I just bought the Thunderbolt 3 Drobo and realized it's completely pointless without SSDs. It advertises 40Gbit/s speeds when using an active Thunderbolt 3 cable, but only gets about 200MB/s with normal drives. I could probably get close to that with the USB Drobo.

        • Bud 6 years ago

          Yes. But you're totally future-proof now. And you can speed it up a ton if you throw in a single fast SSD as the cache disk.

          Then in 4-8 years, when the price/performance works for you, you can throw in all SSDs.

        • ferongr 6 years ago

          Wouldn't the speed ultimately be limited by the CPU speed of the Drobo itself instead of the interface? Even hardware accelerated hashing has limits.

      • perfmode 6 years ago

        Why not just stage to a RAM disk?

        • wil421 6 years ago

          Not enough RAM at the moment; my water cooler failed and I just replaced it. I want to try some of the Optane memory, and my mobo supports it. My other hobbies have been consuming my spare cash.

    • Manozco 6 years ago

      Silence is a nice feature for SSD :)

    • jjeaff 6 years ago

      It's not the speed, it's the size and power consumption. Big heavy server racks with 1000 watt power supplies don't go well in apartments.

  • rconti 6 years ago

    Yeah, we were running the 14TB Hitachi FMD flash drive modules, and list price (which nobody pays, I hope) was like $44k a pop. A couple of trays of those things runs you $1M easy.

fuzzythinker 6 years ago

With the same density, it should be possible to pack a petabyte into a 3.5" drive. I wonder why Intel went with a new "Ruler" physical format for its petabyte drive instead of the 3.5" format.

  • simcop2387 6 years ago

    Like everything silicon, I'd bet heat dissipation. As you make things more dense and thicker like that you end up needing more space just to get rid of the heat. Intel's ruler gives a better surface area to volume ratio to get rid of the heat. The 3.5" form factor (and even the 5.25") makes a lot more sense for rotating platters since those scale linearly with vertical space (or at least close to it) because they're not dissipating heat off the platters like you need to with ICs.

    • wtallis 6 years ago

      Yep, heat dissipation is one of the big factors. The other is that 2.5" SSDs mean you can only easily use the front 6" of your server for hot-swappable storage. The Intel Ruler/EDSFF Long form factors let you devote much more of the total server volume to storage that's still accessible from the front of the machine.

  • scrooched_moose 6 years ago

    Where are you getting that?

    A 3.5" drive has a volume of 18.56 in^3 and a 2.5" about 4.48 in^3 [1]. Applying that scaling you'd get 124 TB

    [1] http://smallbusiness.chron.com/difference-between-25-35-hard...

    • fuzzythinker 6 years ago

      Yes, the ratio of the physical volumes is 18.56 / 4.48 = 4.14, which makes a 3.5" drive max out at 124TB. However, a good portion of a 2.5" drive's space is "overhead" for the enclosure, connectors, and circuit board, and that overhead doesn't need to be multiplied by 4.14. So we may not be able to fit 1PB into a 3.5" drive, but it would be closer than just 124TB.
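
      As a quick sketch of that effect (the one-third overhead fraction is a made-up number, just to show the direction):

        # Naive volume scaling from a 2.5" SSD to a 3.5" enclosure.
        vol_35 = 18.56          # 3.5" drive volume, cubic inches
        vol_25 = 4.48           # 2.5" drive volume, cubic inches
        print(f"naive scaling: ~{30 * vol_35 / vol_25:.0f} TB")      # -> ~124 TB

        # If, say, a third of the 2.5" volume is casing/connector/PCB overhead
        # that doesn't need to be replicated in the bigger enclosure:
        overhead = vol_25 / 3
        flash_vol = vol_25 - overhead
        print(f"overhead discounted: ~{30 * (vol_35 - overhead) / flash_vol:.0f} TB")
        # -> ~171 TB: still well short of 1 PB, but closer than 124 TB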

  • wmf 6 years ago

    36 rulers probably give higher performance than twelve 3.5" SSDs. Large-capacity SSDs also increase the "blast radius" of data lost in a failure.

mfringel 6 years ago

Being able to set it up as 10TB with 3x the life would be even better, because it makes other write-intensive applications feasible with an SSD, with the firmware handling the wear accounting in the background.

ETA: Specifically, having the option to do so, and having it happen transparently in firmware.

  • mankash666 6 years ago

    The market doesn't want products advertised this way. Setting up a 30TB SSD as a 10TB drive with 3x the endurance is an easily solvable issue in host software. But selling a 30TB drive as a 10TB one, while demanding 3x the price of a 10TB drive despite the higher endurance, is a gargantuan challenge.

    • rayvd 6 years ago

      I dunno, we on the enterprise side have always ponied up for the longer-lasting and far more expensive SLC-based drives.

      • Faaak 6 years ago

        I know two datacenters in Switzerland that instead go "the cheap way". For them, if a drive lasts 4 years instead of 8, it doesn't really matter, because in 4 years drives will be better and consume less power.

        Your redundancy must be top notch though.

    • mfringel 6 years ago

      No doubt. I'd still like the option to have high endurance handled transparently in firmware.

  • jimrandomh 6 years ago

    You don't get extra life that way; the total number of gigabytes you can write (including rewrites) is the same.

    • wmf 6 years ago

      FTLs are nonlinearly more efficient with more free space (like how GCs are more efficient with more heap available), but exactly how much extra endurance this translates to would depend on the particular SSD and generally isn't documented.

      For example, Micron has an 11 TB SSD with 16 PB endurance and a 6.4 TB SSD (that may have ~11 TB of flash) with 35 PB, so giving up 40% capacity gives 120% extra endurance.
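
      Working those numbers through (assuming, per the above, that the 6.4TB part really carries ~11TB of flash):

        # Capacity given up vs. endurance gained on (assumed) ~11 TB of flash.
        full_tb, full_endurance_pb = 11, 16           # 11 TB usable, 16 PB written
        derated_tb, derated_endurance_pb = 6.4, 35    # 6.4 TB usable, 35 PB written

        capacity_given_up = 1 - derated_tb / full_tb                      # ~0.42
        endurance_gained = derated_endurance_pb / full_endurance_pb - 1   # ~1.19

        print(f"capacity given up: {capacity_given_up:.1%}")   # -> 41.8%
        print(f"extra endurance:   {endurance_gained:.1%}")    # -> 118.8%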

      • jimrandomh 6 years ago

        No, that's incorrect. A naive write-balancing algorithm might concentrate writes into a small subset of sectors when free space is low, but a properly designed firmware will move things around to prevent that from happening. And in any case, that's a matter of free space, not total space.

peter303 6 years ago

Must brew coffee and make toast too :-)

How many kilowatts?

  • pixl97 6 years ago

    The 16TB unit says 12W when active. Probably not much over 20W in use for this one, at most. That would put it in the range of a 15K SCSI HDD.

kayall 6 years ago

This is good for Bitcoin Cash.

We can store the entire blockchain on SSDs.