Well, this is embarrassing. I wrote a blog post a long while back, never posted it, and I was just getting ready to write an update commentary about it... and it's not out here!


So: here is what I wrote forever ago:




In my last Green IT post (Ed note: This wasn't my last post. I just missed posting this one...) I looked at the Green / Power side of CPUs and Cores. Here I want to open that up, and have a look around.

Framing this thought experiment is the idea that we are running out of road with Moore's Observation.


What the Observation Is


It is worth noting here that what Moore observed was not that things would go twice as fast every two years, or that things would cost half as much every two years. That sort of happened as a side effect, but the real nut of it was that the number of transistors in an integrated circuit doubles approximately every two years. Originally it was 12 months, but that was walked back to 2 years, and some split that and call it 18 months. In 2010 it was predicted that by 2013 the rate of doubling would only be every three years.
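To see how much that doubling period matters, here is a toy calculation (mine, not from the original post) starting from a hypothetical 1 billion transistors and running out the four periods quoted above:

```python
# Toy illustration: compound growth over a decade under the different
# doubling periods mentioned above (12, 18, 24, and 36 months).
# The 1-billion starting count is an arbitrary round number.

START = 1_000_000_000  # hypothetical starting transistor count
YEARS = 10

for months in (12, 18, 24, 36):
    doublings = YEARS * 12 / months
    count = START * 2 ** doublings
    print(f"doubling every {months:2d} months -> "
          f"{count / 1e9:,.0f} billion after {YEARS} years")
```

The spread is dramatic: the same decade gives you roughly 1,024x at 12 months per doubling, 32x at 24 months, but only about 10x at 36 months.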

Just because the transistors doubled does not mean it's twice as fast. Not any more than a 1 GHz chip from one place is half as fast as a 2 GHz chip from a different place, because it all depends. Double the transistors only means it is twice as complex. Probably twice as big, if the fab size stays the same. Architectures matter. Workload matters. Application matters.


Since the Observation was made in 1965, doubling what an IC had back then was not the same order of magnitude as doubling it now. IBM's Power 7, which came out in 2010, has 1.2 billion transistors. It is made using 45 nanometer lithography. Three years on, the Power 8 is using 22 nanometer lithography, and the 12 core version has 4.2 billion transistors.

To stay on that arc, the Power 9 would have to be on 11 nanometer lithography and have over eight billion transistors (Sparc has already passed that...). However, from what I have read, both IBM's and Intel's next step down is 14 nanometers, not 11. It may not seem like a big difference, but when you are talking about billionths of a meter, you are talking about creating and manipulating things the size of a SMALL virus. We are in the wavelength of X-rays here.
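That extrapolation is just "halve the lithography, double the transistors" applied one more time. A quick sketch of the arithmetic, using the Power 8 numbers given above:

```python
# The arc described above: Power 7 (2010, 45 nm, 1.2B transistors) to
# Power 8 (2013, 22 nm, 4.2B). One more 3-year step on the same arc
# means halving the feature size and (at least) doubling the count.

power8_nm = 22
power8_transistors = 4.2e9

power9_nm = power8_nm / 2                    # the "11 nanometer" step
power9_transistors = power8_transistors * 2  # "over eight billion"

print(f"to stay on the arc: ~{power9_nm:.0f} nm, "
      f"~{power9_transistors / 1e9:.1f}B transistors")
```

Which lands right at 11 nm and 8.4 billion, and is exactly why a 14 nm next step means the arc is bending.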


A silicon atom is about 0.2 nanometers across (as near as such a quantum object can be measured, anyway). We are not too many halvings away from trying to build pathways one atom wide, and quantum mechanics is a real bear to deal with at that scale. Personally, I don't even try. Also, there is not much redundancy in a pathway that wide. Any tiny event can blow the atom right off the substrate.
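"Not too many halvings" is easy to put a number on. A back-of-envelope sketch (my arithmetic, not the original post's), starting from the 14 nanometer node mentioned above:

```python
import math

# Rough estimate: how many halvings from a 14 nm feature size down to
# one silicon atom (~0.2 nm)? Both figures come from the text above;
# this is order-of-magnitude arithmetic, not process engineering.

feature_nm = 14   # the next lithography step mentioned above
atom_nm = 0.2     # approximate silicon atom diameter

halvings = math.log2(feature_nm / atom_nm)
print(f"about {halvings:.1f} halvings from {feature_nm} nm to one atom")
```

A bit over six halvings. At one halving every two to three years, that horizon is uncomfortably close.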


So we'll do other things. We'll start making them taller, with more layers. The die will get bigger. Getting more cores into a socket will mean the socket gets physically larger... up to a point. That point is the balance between heat removal at the atomic scale and power. Seen a heat sink on a 220 watt socket lately? They are huge.


The Design, the Cost, the Chips to Fall


OK. So making chips is going to get harder. Who can afford to invest the time and effort to build the tooling and the process to make these tiny, hot little things?


Over the last 10 or 15 years we have watched the vendors fall. After kicking Intel's tush around the X86 marketplace by creating the AMD64 chips, and thereby dooming the Itanium, AMD ended up divesting itself of its chip fabrication plants, creating Global Foundries in the process.


Before that, HP had decided it was not anything they wanted to be doing anymore, and made plans to dump the Alpha they had acquired from Digital via Compaq. They also decided to stop making the PA RISC line, and instead migrate to the short-lived, rarely loved Itanium. To be fair, they didn't know what AMD was going to do with that AMD64 design. But there is a reason the Itanium's nickname was the Itanic, though it has actually lasted a while longer than most would have thought.


Intel could not let AMD have all the fun in the 64 bit X86 compatible world, and pedaled hard to catch back up. They are having fun at AMD's expense these days, but I never count AMD out. They were not only the first to the 64 bit X86 market, they had all the cool virtualization assists first. They were early to the party in integrating graphics controllers onto CPU silicon. They blazed trails where GPUs are used as co-processors.


Meanwhile IBM opened itself up to all sorts of speculation by PAYING Global Foundries to take its fab business: Please. I guess the gaming platforms moving away from Power just hurt too much. Those were the days.


That leaves us with three chip architectures left for your future Data Center:

- AMD64 (Intel and AMD)
- Power (IBM)
- Sparc (Oracle)

Plus the newcomer: ARM


Death by 1000 Cuts


Yes: Itanium is still around. Maybe for a while. If you have a Tandem / HP NonStop, then you have these for now. Until HP finally moves them to AMD64. If they want feature / speed parity with what's going on in the rest of the world, they'll have to do something like that.


The VMS Operating System problem was solved by porting it to AMD64 via VMS Software, Inc. And HP-UX (my first UNIX OS) seems to be slowly turning into Linux customers on, you guessed it, AMD64 chips. HP is a big player in Linux space, so that makes sense. HP-UX 11i v3 keeps getting updated, but the release cadence relative to the industry, especially Linux, looks and feels like it is meant to be on hold. Let's face it, if you have to sue someone to support you, your platform probably has larger issues to deal with. Not trying to be snarky there either. Microsoft and Red Hat dropped their support for the chip. Server Watch says that it's all over too. So does PC World.

Linux runs on everything, so if Linux doesn't run on your chip... Just saying. You probably do not have to think about where in your DC to put that brand new Itanium based computer. Unless you are Tandem based, as noted.


So what does all this mean for What's Next?


There are a few obvious outcomes to all this line of thinking. One is that the operating systems of the next decade will be fewer. There is strong alignment of chip to OS, except on AMD64, which has numerous varieties. There even used to be an AIX there, back in the day (version 1.3 on the PS/2, 1989).


Next is that operating systems themselves are going to hide. Really: as much as I love Linux, no one in the marketing department cares what OS their application is running on / under. The only time I hear an OS related observation from an application person is "why are you taking my app down?" "Oh... it's Patch Tuesday." Or SSH was hacked. Or whatever.

It's a hard thing for a computer centric person to see sometimes, but the change that mobile and DC consolidation and outsourcing (sometimes called "Cloud Computing") hath wrought is that the application itself is king. It's their world, and our data centers are just the big central place that they run in.


Clearly Linux and MS Windows are on upward trajectories. Every major player (IBM, HP, Oracle, and so on) supports those two.


The Sparc / Solaris and Power / AIX applications are still alive and kicking (though with 30% of the market, they are being slowly eroded by Linux). With the spinoff of its X86 server business to the same folks that bought their laptops, IBM is left with only high end servers (I Series is technically called midrange) (Oh, and Lenovo made that laptop business work out pretty well for themselves). IBM wants to be in the DC, where the margin is. Same thing more or less at Sun/Oracle. All their server hardware is being focused on making their core product run faster.


HP will be in the AMD64 or ARM world, and that's pretty interesting. The Moonshot product is nothing I have personally been able to play with, but it makes all kinds of sense. If you don't need massive CPU horsepower, you can do some pretty nice appliance-like things here. And since applications are king, not the hardware they run on, having lots of little units in a grid that are easy to just swap when they fail has a very Internet-like flavor to it.


How will Santa Package all our new Toys?


Looking at Moonshot, and all the various CPUs, it seems that, for a while at least, we'll be seeing CPUs inserted into sockets or Ball Grid Arrays (surface mounted). Apple has certainly proved with the Air line that CPU-soldered-to-the-mainboard solves lots of packaging problems. Till the chips get thicker, and start having water cooling pipes running through them, because air just can't pull heat away the way that water can.


Yep: liquid in the data center (spill cleanup on aisle three). We can be as clever about the packaging as we like, but physics rules here, and continuing to make these faster / better / cheaper is, more than likely, going to mean a return to hotter. That's a real problem in a blade chassis. Even if the water is closed loop and self contained to the airflow of the RAM / CPU air path, it means taller. Wider.


Or, you go the other way, and just do slower but more. Like hundreds of Mac Minis stacked wide and deep, or perhaps little slivers of mobos from Mac Airs ranked thirty across and four deep on every tray / shelf. You wouldn't replace the CPU anymore. The entire board assembly with CPU and RAM would become the service unit. Maybe everything fits into the drawer the same way that some disk vendors do it now.


When I designed our most recent data center, it was extremely hard to stay inside the 24 inch / 600 mm rack width. By going taller (48U) I could put more servers in one rack. Which meant more power and wiring to have to keep neatly dressed off to the side, in a rack that had little side room. The Network racks are all 750 mm for that exact reason.

If we go uber-dense on the packaging because of the CPU design limits, then what does that mean about the cabling? Converge the infrastructure all you like, the data paths to that density are going to grow, and 40 Gb and 100 Gb Ethernet don't actually travel in the Aether. I know, right? More like the Higgs field.


That's a conversation for another post though.




Apparently I wrote that, and never posted it, in July of 2016. Back to late 2017: things have happened since then, and that's what the NEXT post is about.