operating systems are harder than quantum physics

Went to hear Tilak Agerwala talk on the “Future of Data Centers” and was struck again by the way in which system developers and chip architects find solving problems like quantum tunneling leakage and manufacturing devices with 100 angstrom feature sizes to be easier than improving operating system performance or design. Seymour Cray used to crack that the purpose of operating systems was to degrade the performance of his machines, and that’s more the case now than ever. Agerwala spoke about moving “images” (virtualizable binary images encapsulating applications and their execution environment) around the “clouds” of computing elements in vast linked data centers that churn out heat at an appalling pace. And then we realize that these images are largely composed of operating systems designed for PCs or minicomputers that are too hard to modify. Jim Allchin, recently retired from Microsoft, wrote two pertinent documents to think about here. The more recent was a letter. The less recent was his Ph.D. thesis on the Clouds operating system, which seems unavailable on the web.

clouds versus pcs

George Gilder had an article in Wired on data centers as clouds. My instinct is to dismiss anything Gilder writes because of his track record of wacky ideas (e.g., that feminism is destroying civilization and that supply-side economics makes sense). But in this article, Gilder reports on some smart people. The massive use of electrical power in data centers comes through clearly. I remember, 10 years ago, looking at a room full of server racks at Sandia Labs, starting the obvious calculation of 300 watts times 2000 boxes, and adjusting my whole view of PCs as small replacements for those huge dinosaur mainframes we used to have.
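That server-room calculation is worth making explicit; here is the back-of-envelope arithmetic as a quick sketch, using the 300-watt and 2000-box figures above:

```python
# Back-of-envelope power draw for the server room described above.
watts_per_box = 300              # per-server draw, figure from the text
boxes = 2000                     # roughly a room full of racks
total_kw = watts_per_box * boxes / 1000
print(f"{total_kw:.0f} kW")      # 600 kW -- mainframe-scale power from "small" PCs
```

At 600 kilowatts, a room of PCs draws power on the scale of the dinosaur mainframes it replaced.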

Discussing Sun’s efforts to build super dense systems, Gilder writes:

And with 1-terabyte drives, available next year, Bechtolsheim will be able to pack the Net into three cabinets, consuming 200 kilowatts and occupying perhaps a tenth of a row at Ask.com. Replicating Google’s 200 petabytes of hard drive capacity would take less than one data center row and consume less than 10 megawatts, about the typical annual usage of a US household.

(To me, 10 megawatt-hours as the average annual consumption of a US household is an incredible number, but it turns out to be true. And only a supply-side economist would make this silly comparison to annual household use: 10 megawatts is a rate of power, not an annual quantity of energy, and sustained for a year it is really more like 9000 times the annual use of a household.) The obvious flaw in the logic is the assumption that the size of the contents of the online network will remain about the same as storage costs drop – and Google is working hard to put more and more “stuff” online.
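The units error is easy to check. Gilder compares 10 megawatts, a rate of power, with annual household usage, a quantity of energy; a quick sketch of the arithmetic:

```python
# 10 MW sustained for a year, compared with annual household consumption.
mw = 10
hours_per_year = 8760
mwh_per_year = mw * hours_per_year        # energy actually consumed in a year
household_mwh = 10                        # average US household, per the text
ratio = mwh_per_year / household_mwh
print(f"{ratio:.0f}x a household's annual use")
```

Ten megawatts run for a year is 87,600 megawatt-hours – about 8,760 households’ worth, which rounds to the “9000 times” figure above.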

Nonetheless, one key question is the balance between centralized storage/compute centers and the edge, and relative power efficiencies will make a big difference. If the dominant paradigm of computing becomes one of connecting your lightweight mobile device to a network and invoking operations somewhere in the cloud, the industry landscape will look very different from today’s, and computing will look much more like today’s electrical system. Perhaps we will even see an ironic situation where computing becomes a centralized utility just as electrical power decentralizes – or perhaps the decentralization of electric power generation will force the decentralization of computing.

Locality has always been key to performance, but there are all kinds of locality. The data center cloud makes sense only if two conditions hold at once: networks inside the data center are so much faster than the internet that it pays to centralize storage and compute, and the internet is fast enough that using a remote data center is acceptable. That may be a durable balance, or it may not.
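As a rough sketch of why that balance holds, compare moving a 1 GB working set across a data-center network versus the public internet. Both link speeds here are illustrative assumptions, not figures from the text:

```python
# Transfer time for a 1 GB working set, local network vs. internet.
# Both link speeds below are assumed, illustrative numbers.
gb = 1
lan_mbps = 10_000                    # assumed intra-data-center link (10 Gbps)
wan_mbps = 50                        # assumed wide-area link
lan_seconds = gb * 8000 / lan_mbps   # 1 GB = 8000 megabits
wan_seconds = gb * 8000 / wan_mbps
print(f"LAN: {lan_seconds:.1f} s, WAN: {wan_seconds:.0f} s")
```

Shipping bulk data across the internet is orders of magnitude slower than moving it within a data center, which is why it pays to keep storage and compute together – while the internet remains fast enough to carry requests and results to a remote data center.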

who should control the internets

Op-ed in the NYTimes from Damian Kulash Jr. of OK Go:

We can’t allow a system of gatekeepers to get built into the network. The Internet shouldn’t be harnessed for the profit of a few, rather than the good of the many; value should come from the quality of information, not the control of access to it.

end to end design versus BOM design

Grossly simplifying, some products are Bill of Materials (BOM) products and some are Designed products. BOM products come to market via a process of generating a parts list and then integrating. In place of designers, BOM products have buyers and integrators. In place of innovation, BOM products have standards. The standards are preferably “industry standards” produced by consortia of companies that manufacture or distribute the product. Vendors compete on price and “alliance” to be allowed to sell parts to the integrators. Buyers work in a bureaucratic system in which specialists oversee acquisition of parts and the product as a whole is viewed primarily in terms of the sum of parts costs. Usually there are a small number of large integrators who act as the gateway to the market.

All the recent progress in cell phone handsets has come from companies that defied the BOM process. Instead of trying to sell items into the “stack”, RIM and Apple both have been able to imagine the handset as a complete product and innovate. Google is also changing the game by bearing down on what services can be delivered over a mobile device. These companies are in the business of creating products for end users and not in the business of selling parts to buyers of components. If you create products, BOM issues don’t go away, but they do not dominate.

The BOM process can dominate even in what appear to be more open markets. In a previous note, I discussed Dell’s efforts to sell against Apple to the people and businesses found at SXSW. Even though, as Jay Pinkert points out, Dell has improved its case designs, it labors under a disadvantage against Apple, because Apple can do end-to-end product design and Dell is forced to live, at least in part, in the BOM-process world. Apple not only designs the case; it controls the operating system, the windowing environment, and the middleware, and it can strongly influence and package application software. So Apple can look at what a conference organizer or musician wants – or maybe what they will want once someone shows it to them – and try to design a product that will be compelling in totality. A company like Dell is constrained to delivering what is ultimately just a vehicle for vanilla Windows (or Linux) – a component of a stack. While the packaging can be improved, Dell cannot reach the customer in the same way that Apple can. Musicians and conference organizers and marketing agents want an aesthetically pleasing communications/graphics-design machine or email/composing system or presentation device, or some combination of these. None of them demand Vista or OS/10 or an Intel processor or any of the technical parts.

Of course, the traditional downfall of companies like Apple is that they grow an internal BOM process that transforms their own engineering staff into integrators and parts vendors.