
For a very, very long time I have had it as a to-do to finish up my thinking / posting about how the end of Moore's Observation was going to affect the way we design and build data centers. The core tenet of this was that data centers are about done shrinking. In the whole "Go Big to Get Small" series I did here, I talked about the 10:1 floor space reductions and the 4:1 power reductions we were after and have achieved.

 

The bad news is, you reach a point where you cannot 'cash that check' any more.

 

One exception to this trend, for now, is storage. Namely: the glacial move away from mechanical storage toward solid state storage.

 

I had written a whole piece about disks, areal densities, vertical recording technologies, and various other things that tied into my general theme of Moore's Observation being over. I left it 'on the hook' so long here that it was lost when they upgraded the site. I'm looking at you, Matt Laurenceau.

 

I had paused writing it because I just kept feeling like I was on the cusp of change and that anything I posted about storage was going to get passed by 3 minutes after I hit send. Then I moved / consolidated another DC, and by the time I got back to it (now) not only was the original post gone, but everything it was about had in fact been passed by.

 

Here is a related intersection we find ourselves at:

 

  1. Annual storage growth is supposed to keep going ballistically up: 40% a year
  2. Annual IT budgets are generally flat, with tiny growth at best. Nowhere near the growth rate of storage, to be sure.

 

Used to be I'd look at a storage array that could grow to two petabytes usable and think "Wow". Now it's more like "Is that all?"

 

To address some of the problems with the growth of storage, and the general lack of it physically shrinking, some vendors went with tiered approaches: slow (7200 RPM), high density [eight terabyte right now] drive arrays fronted by faster, lower density drives (10K or 15K RPM), in turn fronted by even smaller amounts of Flash drives, in turn fronted by even smaller but faster still caches.
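
To make that concrete, here is a rough sketch of what such a stack looks like on paper. Every tier name, capacity, latency, and price below is a made-up round number for illustration, not any particular vendor's product:

```python
# Illustrative tiered-storage stack: each tier up is smaller, faster,
# and pricier per TB than the one below. All figures are invented
# round numbers for the sake of the example.
TIERS = [
    # name,              capacity_tb, approx_latency_ms, approx_usd_per_tb
    ("DRAM / NVRAM cache",         1,             0.001,              8000),
    ("Flash drives",              50,             0.1,                3000),
    ("10K/15K RPM disk",         500,             5,                   600),
    ("7200 RPM dense disk",     2000,            10,                   150),
]

for name, cap_tb, latency_ms, usd_per_tb in TIERS:
    print(f"{name:<22} {cap_tb:>5} TB   ~{latency_ms:>6} ms   ~${usd_per_tb}/TB")
```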

 

It made a sort of sense at the time. Flash memory and RAM were expensive. Early Flash wore out quickly. Spinning drives were cheap and slow (relatively speaking). Further, it's well understood that data is largely 'cold': the standard number I see around says that 80-90% of your data is low reference. Only 10-20% of it is 'hot'. That does not even count data warehousing or adding something like a nearline tape tier. We have been solving this problem for a very long time with things like hierarchical storage.
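
Taken at face value, that 80-90% cold figure is what makes the whole tiered approach pencil out: only a small slice of the array needs to live on the fast, expensive media. A back-of-envelope sketch, using an assumed 2 PB usable array:

```python
# Back-of-envelope tier sizing for a hypothetical 2 PB usable array,
# using the rule of thumb that only 10-20% of the data is 'hot'.
total_usable_tb = 2000  # 2 PB, assumed for illustration

for hot_fraction in (0.10, 0.20):
    hot_tb = total_usable_tb * hot_fraction
    cold_tb = total_usable_tb - hot_tb
    print(f"{hot_fraction:.0%} hot: {hot_tb:,.0f} TB on the fast tiers, "
          f"{cold_tb:,.0f} TB on slow, dense disk")
```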

 

We keep solving this problem because we keep trying to drive the cost per Megabyte / Gigabyte / Terabyte down. The unit we think in keeps going up.

 

The end is in sight. Flash technology is finally getting at or near the same cost as spinning media. For enterprise grade storage, let's call that near 1,000 USD per Terabyte. There is a caveat there: you HAVE to use dedupe technology on the Flash memory to achieve numbers like this. Raw Flash is still five times the price of disks at the time I write this. I'll read this next year and go "Wow: stuff was expensive back then".
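
The arithmetic behind that caveat is simple enough to sketch. If raw Flash really does run about five times the price of disk, a data reduction (dedupe plus compression) ratio somewhere around 5:1 is what closes the gap. The prices and ratios below are assumptions for illustration, not quotes:

```python
# Effective $/TB is roughly the raw $/TB divided by the data-reduction ratio.
# Prices and ratios here are illustrative assumptions only.
disk_usd_per_tb = 1000       # 'enterprise grade' spinning media, per the text
raw_flash_usd_per_tb = 5000  # ~5x the price of disk, per the text

for reduction_ratio in (2, 3, 5):
    effective = raw_flash_usd_per_tb / reduction_ratio
    verdict = "at or below disk" if effective <= disk_usd_per_tb else "still above disk"
    print(f"{reduction_ratio}:1 reduction -> ~${effective:,.0f}/TB effective ({verdict})")
```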

 

Dedupe makes all kinds of sense. Why store 80 copies of something when you can store one, and have 80 pointers to it instead? It works better with fast controllers and storage, though. And therein lies a conundrum. It's a classic one.
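
A minimal sketch of the idea, using content hashes as the 'pointers'. This illustrates the concept only; it is not how any particular array actually implements dedupe:

```python
import hashlib

# Toy block-level dedupe: each unique block is stored once, keyed by its
# content hash; every 'file' is just an ordered list of hashes (pointers).
block_store = {}   # hash -> unique block contents
file_table = {}    # filename -> list of block hashes

def write_file(name, data, block_size=4096):
    hashes = []
    for i in range(0, len(data), block_size):
        block = data[i:i + block_size]
        digest = hashlib.sha256(block).hexdigest()
        block_store.setdefault(digest, block)  # only the first copy is kept
        hashes.append(digest)
    file_table[name] = hashes

# 80 'copies' of the same document cost one stored block plus 80 pointer lists.
for n in range(80):
    write_file(f"copy_{n}.txt", b"the same quarterly report, again and again")

print(f"{len(file_table)} files, {len(block_store)} unique block(s) actually stored")
```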

 

In the early days of PCs we used to do anything we could to stay in memory. Disks were just so slow! The entire UNIX operating system was designed to treat everything like a file, and to use every bit of RAM to cache disk I/O. DOS had 'TSR' (Terminate and Stay Resident) programs so that programs would NOT have to be reloaded from the slow, slow disks. We compressed hard drives and traded CPU cycles for disk space. Many thought that would slow down the disk, but often it was the other way around: a compressed program read off the disk faster than an uncompressed one. As long as you were CPU rich, you were good to go.
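
The same trade can be sketched with a toy model: when the disk, not the CPU, is the bottleneck, reading fewer (compressed) bytes and spending cycles to decompress them still comes out ahead. The throughput figures below are invented round numbers, not benchmarks of any real hardware:

```python
import zlib

# Toy model: time to read a file raw vs. compressed, assuming the disk is
# the bottleneck. Throughput numbers are invented for illustration.
DISK_MB_PER_S = 2          # a slow early-PC era disk, assumed
DECOMPRESS_MB_PER_S = 50   # CPU-side decompression rate, assumed

original = b"mostly repetitive text from an old DOS application " * 20000
compressed = zlib.compress(original)

mb = lambda buf: len(buf) / 1e6
t_raw = mb(original) / DISK_MB_PER_S
t_compressed = mb(compressed) / DISK_MB_PER_S + mb(original) / DECOMPRESS_MB_PER_S

print(f"read uncompressed: {t_raw:.2f} s")
print(f"read compressed + decompress: {t_compressed:.2f} s "
      f"(file is {mb(compressed) / mb(original):.0%} of original size)")
```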

 

Dedupe is like that. You trade controller cycles for disk / Flash storage space. The problem is what I outlined in my post "The Core Necessities": we are off the free CPU train. CPUs are getting more expensive per transistor. With current tech, we are nearly done shrinking them. That means adding CPU power literally means adding more CPUs, or at least more CPU cores. That equals more heat and more power for the CPUs.

 

As long as that's cheaper than the actual memory though, we'll do that deal.
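
That deal can be sketched as a simple break-even: the cost of the extra controller CPUs (and their power) against the Flash they let you avoid buying. Every figure below is an assumption for illustration, not vendor pricing:

```python
# Toy break-even: is the extra controller CPU cheaper than the Flash it saves?
# All figures are illustrative assumptions.
raw_flash_usd_per_tb = 5000
logical_capacity_tb = 500     # capacity presented to hosts
reduction_ratio = 4           # assumed dedupe/compression ratio

flash_avoided_tb = logical_capacity_tb - logical_capacity_tb / reduction_ratio
flash_avoided_usd = flash_avoided_tb * raw_flash_usd_per_tb

extra_cpu_usd = 2 * 4000                                 # two extra CPUs, assumed
extra_power_usd = 2 * 150 / 1000 * 24 * 365 * 5 * 0.15   # ~150 W each, 5 years, $0.15/kWh

print(f"Flash avoided: {flash_avoided_tb:,.0f} TB, roughly ${flash_avoided_usd:,.0f}")
print(f"Extra CPU plus 5 years of power: roughly ${extra_cpu_usd + extra_power_usd:,.0f}")
```

Under those assumed numbers the Flash you avoid buying dwarfs the cost of the extra controller horsepower, which is exactly why the deal keeps getting taken.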

 

Physically speaking, Flash memory has some space and power problems too. See "Moore's Memory" for details. In summary: quantum size limits are going to cap how small we can go. CPUs and memory are going to become three dimensional to deal with that, with more and more layers of silicon. There literally is no place to go but up. That article says they see no limit to how many layers they can go, but I do. I am sure they were taking this into account and thought it obvious, but the article does not mention power. Power equals heat. Heat has to go someplace, and I am sure this can be solved with heat pipes, voltage regulators, and all manner of that kind of thing.

 

All of which means that while we may get more DENSITY, we are not going to see a huge drop, if any, in cost or power. Not right away.

 

So: storage is going to see a precipitous drop in size / power when it makes the move to all solid state, but then it is going to be on the same ramp as CPUs and memory. Unless something really, really new comes along (optical is often bandied about, along with quantum computing), we are going to enter a phase of life where we are going to have to do something else. Something much harder.

 

We are going to have to manage all this stuff.