
At a recent technical conference, I went to several sessions that referred to Moore's "Law" and how it was just going to keep making things better in the DC. Clearly not readers of my stuff. <sad face>. One session leader, however, referring to a new server generation, said "So, it's about 1.5 times faster than the previous generation… So much for Moore's Law." I was proud of my self-restraint. I did not cheer out loud. Much.


It was easy to see why there was so much enthusiasm for the idea that Moore's Observation was inexhaustibly marching on. In particular there was much talk about Solid State Storage: how much smaller and faster it is, all the new and cool things you can do with it, how tiny and power efficient it is.


All true.


The crux of the matter though is this: Flash memory, like general-purpose processors, is reaching the lower limits of current lithography. Samsung is at 12 nm according to its roadmap. As with so many other things, going vertical will soon be the only way around that limitation of physics. Making memory more DENSE like that has heat implications, and COST implications. If you read Moore's Observation as a statement about the cost per transistor, Flash, like general processors, is going to be getting off the halving / doubling cost / capacity train soon.
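The cost-per-transistor reading above can be sketched as a toy model. This is purely illustrative: the doubling period, the stall year, and the starting cost are all assumed numbers, not industry data.

```python
# Toy model of Moore's Observation read as cost per transistor:
# cost halves every `doubling_period` years -- until the shrink
# cadence stalls, at which point the curve flattens.
# All parameters here are assumptions for illustration.

def cost_per_transistor(initial_cost, years, doubling_period=2, stall_after=None):
    """Relative cost after `years`, halving every `doubling_period` years.
    If `stall_after` is set, scaling stops at that year and cost flattens."""
    effective = years if stall_after is None else min(years, stall_after)
    return initial_cost / (2 ** (effective / doubling_period))

# While scaling holds, ten years means a ~32x cost drop:
print(cost_per_transistor(1.0, 10))                  # 0.03125
# If the halving/doubling train stops at year 6, you only ever see 8x:
print(cost_per_transistor(1.0, 10, stall_after=6))   # 0.125
```

The point the model makes: it is not that things get worse when lithography bottoms out, it is that the automatic improvement everyone has priced in simply stops accruing.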


It is deceptive right now, at this inflection point. We see that moving from SAS or other spinning disk technology to Flash lets us put in one cabinet instead of five or six, at the same capacity, and at higher access speeds. No matter which storage vendor you look at, their Solid State offering delivers that five-to-one or greater space reduction, all in one great leap. It's huge. And with compression and dedupe and all the other things Solid State enables, the cost per Terabyte is getting in line with the older tech, with lower long-term costs to boot. Less space. Less power. More speed. It all makes sense. It all fits the worldview of those raised thinking Moore's Observation is Holy Writ, because it has never failed in their professional lifetimes.
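The consolidation math behind that one-great-leap feeling is simple enough to sketch. Every figure below is an assumption for illustration, not any vendor's actual capacity or pricing.

```python
# Illustrative flash consolidation math (all numbers assumed):
# spinning-disk cabinets collapse into fewer flash cabinets, and
# dedupe + compression stretch raw TB into more effective TB.

spinning_cabinets = 5             # assumed: cabinets replaced per flash cabinet
raw_tb_per_cabinet = 500          # assumed raw flash capacity per cabinet
data_reduction_ratio = 3.0        # assumed combined dedupe + compression ratio
cost_per_raw_tb = 900             # assumed flash cost per raw TB ($)

effective_tb = raw_tb_per_cabinet * data_reduction_ratio
cost_per_effective_tb = cost_per_raw_tb / data_reduction_ratio

print(effective_tb)               # 1500.0 effective TB from one cabinet
print(cost_per_effective_tb)      # 300.0 $ per effective TB
```

The catch the rest of the post makes: this is a one-time leap from switching media, not a repeating dividend. Once everything is already flash and already deduped, the ratio is spent.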


It won't obviously fail just yet. There will be a few turns of the packaging crank to keep upping physical density on the circuit boards: more chips closer together, stacked packages, better airflow management. Still, packaging cannot beat physics. We are in the quantum realm here, and there is a lower limit to how small this can go until we start figuring out quantum memory. When a bit becomes a qubit we'll have another leap in capacity (though how that affects speed and cost is not clear at this point).


I said 'current lithography' above, and that was intentional. For example, IBM has it in mind to use carbon nanotubes to get us past such limits. IBM also announced it thinks it can get to 7 nm with a version of current lithography, but since we are at 14-12-10 nm now, that is NOT a huge leap, and it will be a few years before 7 nm arrives. Carbon nanotubes take you down to 4 or 5 nm: 4 atoms wide. These articles discuss the scientifically possible, but say nothing about the cost per transistor to achieve it.


What is obvious, though, is that the power per cabinet is going to rise, and for a while. Where storage is concerned, the DC can keep getting smaller and more power dense for another few generations without straining packaging too hard. Hitachi, for example, just announced 14 TB Flash drawers for the G1000 / G1500, and they fit in the same flash slots. I imagine that with more packaging / airflow work they have a few more turns of that crank.


Sun / Oracle gave up on their chassis, as did IBM (after all of one generation, when you are talking about their new, revolutionary, never-been-anything-like-it-ever PureFlex chassis). If you want Power or SPARC based gear, we are back to rack mount servers, and that will limit per-cabinet density. Unless you are building custom supercomputers or are Google, we are seeing the new face of the DC in terms of form factor. Same as the old face, in a way. Vendors are avoiding esoteric cooling technologies for as long as they can, though IBM has been building water cooled mainframes for a while now. Intel has experimented with computers dipped in oil baths for cooling, but that article was from 2012, and clearly the idea has not caught on yet.


Getting denser costs more money, so, as with Moore's Observation read as cost per transistor, it always comes down to the math of whether additional square footage costs more than esoteric cooling technologies.
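That trade can be written down as a simple annual-cost comparison. The figures and the function name here are my own hypothetical illustrations; real comparisons would fold in power, PUE, lease terms, and build-out costs.

```python
# Sketch of the floor-space vs. esoteric-cooling trade (all figures assumed):
# does paying for specialized cooling on dense cabinets beat paying
# for more square footage at ordinary density?

def cheaper_option(extra_sqft_needed, cost_per_sqft_year,
                   esoteric_cooling_cost_year):
    """Compare annual cost of added floor space vs. specialized cooling."""
    space_cost = extra_sqft_needed * cost_per_sqft_year
    if space_cost < esoteric_cooling_cost_year:
        return "expand floor space"
    return "buy esoteric cooling"

# Cheap real estate favors sprawl:
print(cheaper_option(400, 50, 30000))    # 20000 < 30000 -> expand floor space
# Expensive urban space favors density and exotic cooling:
print(cheaper_option(400, 120, 30000))   # 48000 >= 30000 -> buy esoteric cooling
```

The same break-even logic explains why oil baths and water cooling stay niche: as long as white space is the cheaper lever, vendors and operators will keep pulling it.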


That seems to be the face of the next few generations of DCs. You can go taller (I have a DC with 57U cabinets, for example), but you run out of height, and using that height (special server lifts, special cage walls, etc.) is more of a problem. You can go denser, but you soon hit maximum density before you have to introduce specialized cooling. You can hyperconverge, but your cabling plant starts getting more complex as you add bricks. Wider racks with more side space to keep it all neat become attractive.


Soon, your choice will be simple. Bigger DC or move it to the cloud (which just means THEY will need a bigger DC).


(Coda: I know of several DCs just getting started retiring 15 year old gear and moving to higher density stuff, so for a while, as the older stuff is retired, 'we' (global DC denizens) are still going to be able to get smaller. Like the leap from spinning to Solid State disk though, once you are past that inflection point, it's a good idea to be sure you have a solid right of first refusal on the white space next to your cage!)


[Coda 2: I realized after I wrote this that there was an assumption in there: that all the other efficiencies are already in use to drive up utilization per computer. We started at average utilization of less than 3% back in the "Go Big to Get Small" days. Now we try to stay close to 90 / 90, CPU / Memory. When we add a computer resource to the on-prem cloud, it's because we are OUT of something. A DC that has not been on that journey yet has wiggle room to shrink.]


[Coda 3: Interesting article, just read today, about how Facebook and the Open Compute Project are changing DC designs, potentially for many others.]