
I waited a bit to write this post, in part because I was busy getting a new DC started up, but in part because of this:


The hard part about writing about tech is that it catches up to you, and fast! The punch line here is that we are more or less at the end of the line for silicon, and going smaller means new materials. New processes. Massive R&D and investment for new fabs.


It will not be cheaper to get smaller. Far from it.




From one point of view, Moore's Observation actually stopped a while ago: back at 28 nanometers:


Worse, from the point of view of RAM / DRAM et al., the Observation quit being in effect a while ago:


There are implications there for mass storage too, but for now I am just thinking about how all of this affects RAM, and therefore ultimately what that means for us in the Data Center designing / building / maintaining business.


The Good Old Days (of the last couple years)


During recent DC consolidations we have seen 4-to-1 power reductions and 10-to-1 space reductions, but the end of Moore's Observation for processors means that how things go forward will soon be different. Chips will be bigger: it's still faster to have more components on a small chip than to have them talking to each other across a mainboard, or a backplane. Way, way faster. The speed of light is still a law. Einstein!


Parallelism will increase. It has to. More processors. More cache. We cannot make them much faster than they are in terms of clock cycle without them just radiating right off the substrate in useless and uncontrollable ways. Microwaves are just a few gigahertz.
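To make the speed-of-light point concrete, here is a back-of-the-envelope sketch (my illustrative numbers, not from the post): how far a signal could travel in a single clock cycle, assuming it moves at the speed of light in vacuum. Real signals in copper traces or silicon are meaningfully slower, so the true reach is even shorter.

```python
# How far can a signal travel in one clock cycle, best case?
C = 299_792_458  # speed of light in vacuum, meters per second

for ghz in (1, 3, 5):
    cycle_seconds = 1 / (ghz * 1e9)      # duration of one clock cycle
    reach_cm = C * cycle_seconds * 100   # distance covered in that cycle, in cm
    print(f"{ghz} GHz: ~{reach_cm:.1f} cm per cycle")
```

At 3 GHz the hard limit is about 10 cm per cycle: on-chip distances are essentially free, but a trip across a mainboard or backplane costs whole cycles, which is why packing more onto one big chip wins.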


All of that downsizing is just as true for RAM, if not more so. Memory is on chips, just like CPUs, but it has always been on a different path too. How you make a chip 'remember' something is not the same problem as making transistors compute things. The design on the substrate is different, and therefore the way it scales down is different.




Memory requirements are NOT going to stop growing, though. To get things done, more and more things want to stay memory resident. Virtualization requires lots of RAM to hold all the system images. The Power 8 system we just ordered has 1 terabyte of RAM. The systems we have been getting rid of from 10-15 years ago ran 2, 4, 8, or 16 gigabytes.
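The scale of that jump is worth spelling out. A quick bit of arithmetic on the numbers above (taking 16 GB as the top end of the old boxes):

```python
# RAM growth: the new 1 TB Power 8 box versus the biggest of the
# 2-16 GB machines being retired from 10-15 years ago.
old_gb = 16            # largest of the old systems, in GB
new_gb = 1 * 1024      # 1 terabyte, expressed in GB
print(f"{new_gb // old_gb}x the RAM of the biggest old machines")  # 64x
```

A 64x jump in per-box memory over roughly a decade, while the DRAM process itself has stopped scaling the way it used to.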


To get to bigger and bigger RAM sizes, sooner or later the RAM chips are going to get bigger. Heat will be a problem. Cooling memory will be a thing again. If you have ever overclocked your PC, and had it live, you know what kind of work you had to do on the heatsink. Liquid cooling caps were not unheard of.


Looking Back


In those two posts I wondered about, ultimately, form factor. Would blades be able to survive in a blade chassis if all this new size was coming into play? Now add in the fact that RAM is going to get larger. More and longer DIMM slots. More memory controllers on-chip. More addressability from 64-bitness, but no shrinkage of the chip die.


Looking back at that Power 8 I just mentioned: you know what I can NOT buy from IBM? Anything with a Power 8 chipset in a blade form factor. It has to be a rack mount. It has to be a 2U or 4U case. The smallest blades for the Sun / Oracle 6000 are full height! Dell announced the M630 to replace the M620 half-height blade, but there is no M430 quarter-height blade in sight. Not for the M1000e chassis, anyway.


It's Getting Racky


If the case the computer guts sit in has to get bigger, then it seems we are headed back to something more like a rack-mount design. It still has to fit inside our DC, and we do have that huge investment in the current cage.


Also, when you get right down to it, modern DCs like the ones our co-los are in can handle 250, 500, even 1,000 watts per square foot, and we are nowhere near that in our cage. Bigger servers, with big CPUs and RAM installations that run hotter than what we have now, are not really a problem there yet.
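As a rough sketch of what those density figures mean per rack (the 20 sq ft footprint below is a hypothetical number of mine, covering a rack plus its share of aisle space, not anything from the post):

```python
# Convert a rack's power draw into watts per square foot of floor
# space, for comparison against the 250-1000 W/sq-ft design figures.
def watts_per_sqft(rack_watts: float, footprint_sqft: float) -> float:
    return rack_watts / footprint_sqft

FOOTPRINT = 20  # hypothetical sq ft per rack, including aisle share

for kw in (5, 10, 20):
    density = watts_per_sqft(kw * 1000, FOOTPRINT)
    print(f"{kw} kW rack -> {density:.0f} W/sq ft")
```

Under those assumptions, even a 20 kW rack only just reaches the top of the range a modern co-lo is built for, which is why hotter, denser boxes are not a problem in the cage yet.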


But then there is this new trend:


Everything old is new again! Back in the days of the minicomputers, we had big cases full of CPUs and RAM because they were NOT dense. Now we are headed back to that form factor because we cannot make things any smaller or denser? Could be. It certainly aligns with the 'hyperconvergence' story.


We have over 24,000 virtual machines and growing. We can only manage that because we have BMC's CLM in there, taking a lot of the work off the data center team. We could not easily use the *current* hyperconverged platforms at that scale, but that market is changing super fast. See things like EVO:RACK.


And that all leads me to think about the Storage part of this next.