Microsoft by the numbers

From correspondent AY:

150,000,000: Number of Windows 7 licenses sold, making Windows 7 by far the fastest-growing operating system in history.
<10: Percentage of US netbooks running Windows in 2008.
96: Percentage of US netbooks running Windows in 2009.
9,000,000: Number of customer downloads of the Office 2010 beta prior to launch, the largest Microsoft beta program in history.
24%: Linux Server market share in 2005.
33%: Predicted Linux Server market share for 2007 (prediction made in 2005).
21.2%: Actual Linux Server market share, Q4 2009.

http://blogs.technet.com/b/microsoft_blog/archive/2010/06/25/microsoft-by-the-numbers.aspx

Those last numbers are especially damning, since Linux started with superior technology, has an easier business model if your only goal is market share, and did not have a legacy ball-and-chain anywhere near the size of the one MS has to drag along. But years of directionless bloat have taken a toll, and the sponsor-driven decisions to pursue traditional “server/OS” technology methods instead of trying to find solutions to current customer problems have been very damaging.

Even discounting the obvious PR froth of this article (http://www.eweek.com/c/a/Linux-and-Open-Source/Linux-Losing-Market-Share-to-Windows-Server), it is an interesting trend.

And, just for kicks, from April 2000:

“During his presentation on scaling Linux to the enterprise, BitMover, Inc. CEO Larry McVoy raised a few furrowed eyebrows at the recently-held Colorado Linux Info Quest (CLIQ). His message: Symmetric multiprocessing (SMP) scaling may be hazardous to your operating system (OS) health.”

“McVoy said that the level of harm is “directly proportional” to the amount of scaling and is “worse than linear” in the number of processors. Converting a uniprocessor OS to a four-way SMP OS introduces a “small amount of damage.” Converting the four-way SMP OS to a 32-way SMP OS does even more damage, he told the crowd. McVoy calls this phenomenon “the locking cliff.””

“The bottom line, said McVoy, is that: “Linux needs to have bragging rights on ‘big iron’ to be taken seriously in the enterprise. But the traditional way of getting those rights involves a series of changes which do a lot of damage to the source base. So the problems are that Linux needs to scale, and traditional scaling is a bad idea. Linux will use traditional scaling if nothing else shows up,” he said, “but SMP clusters is a better way to do it. I’m the driving force behind that and I’m not driving because I’m wrapped up in BitKeeper.””

http://www.linuxtoday.com/news_story.php3?ltsn=2000-04-13-010-06-NW&tbovrmode=3
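
The contention that pushes an OS toward ever finer-grained locking (and hence toward the source-base damage McVoy describes) is easy to see in miniature. Below is a toy C sketch of mine, not anything from BitMover or the kernel; it assumes POSIX threads and clock_gettime. It times N threads all incrementing one mutex-protected counter against N threads each bumping a private, cache-line-padded counter. On a multicore machine the single-lock version typically slows down dramatically as threads are added, while the private-counter version barely notices.

/* locking_cliff.c -- toy illustration of contention on one global lock.
 * Build: cc -O2 -pthread locking_cliff.c -o locking_cliff
 * Run:   ./locking_cliff <nthreads>
 */
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define ITERS 5000000L
#define MAXTHREADS 64

/* One counter per thread, padded so neighbours don't share a cache line. */
struct counter { volatile long value; char pad[56]; };

static pthread_mutex_t global_lock = PTHREAD_MUTEX_INITIALIZER;
static long global_counter;

static void *shared_worker(void *arg)
{
    (void)arg;
    for (long i = 0; i < ITERS; i++) {
        pthread_mutex_lock(&global_lock);   /* every thread fights for the  */
        global_counter++;                   /* same lock and the same cache */
        pthread_mutex_unlock(&global_lock); /* line, so adding CPUs hurts   */
    }
    return NULL;
}

static void *private_worker(void *arg)
{
    struct counter *c = arg;
    for (long i = 0; i < ITERS; i++)
        c->value++;                         /* no sharing at all */
    return NULL;
}

static double run(int nthreads, void *(*fn)(void *), struct counter *c)
{
    pthread_t tid[MAXTHREADS];
    struct timespec t0, t1;

    clock_gettime(CLOCK_MONOTONIC, &t0);
    for (int i = 0; i < nthreads; i++)
        pthread_create(&tid[i], NULL, fn, &c[i]);
    for (int i = 0; i < nthreads; i++)
        pthread_join(tid[i], NULL);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    return (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
}

int main(int argc, char **argv)
{
    int n = (argc > 1) ? atoi(argv[1]) : 4;
    if (n < 1 || n > MAXTHREADS)
        return 1;

    struct counter counters[MAXTHREADS] = { { 0 } };
    printf("%2d threads, one global lock:     %6.3f s\n",
           n, run(n, shared_worker, counters));
    printf("%2d threads, per-thread counters: %6.3f s\n",
           n, run(n, private_worker, counters));
    return 0;
}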


Doing it wrong: Poul-Henning Kamp

Kamp writes to explain that virtual memory behavior can be important for program performance. That is, he explains:

From the programmer’s standpoint, the working set of information is the smallest collection of information that must be present in main memory to assure efficient execution of his program.

But that note is actually from Peter Denning’s classic article (http://portal.acm.org/citation.cfm?doid=363095.363141 and http://cs.gmu.edu/cne/pjd/PUBS/WSModel_1968.pdf), published in 1968 in Communications of the ACM – the same publication in which Kamp’s article appeared more than 40 years later. Kamp’s main result is that a search algorithm over a very large set of objects will perform poorly if it incurs a large number of page faults, which happens when the working set is too large for the physical memory allocated to the process! Not since a researcher explained, during a talk at a serious academic conference, that memory speeds could rival or outweigh processor speed in determining performance have I been so shocked (but since that researcher was from Harvard, he had an excuse). The publication of Kamp’s article in Communications of the ACM shows something about the culture of computer science as an academic field these days.
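
For what it’s worth, Denning’s paper makes that notion precise; the restatement here is mine, so check the original for his exact notation. The working set W(t, τ) is the set of pages the process has referenced during its last τ units of process time:

W(t, τ) = { pages referenced in the process-time interval (t − τ, t) }

and the management principle that falls out of it is, roughly, to run a process only when W(t, τ) can be kept resident in main memory. The page-fault-heavy search Kamp measures is what happens when that condition fails.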

Kamp’s article is easy to mock, but at least he’s doing some performance analysis and measurement – things that are definitely out of fashion in the field. And he cares about performance.

What good is an O(log2(n)) algorithm if those operations cause page faults and slow disk operations? For most relevant datasets an O(n) or even an O(n^2) algorithm, which avoids page faults, will run circles around it.

Indeed.
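
The effect Kamp is describing is easy to reproduce without his B-heap. The toy benchmark below is a sketch of mine (it assumes a POSIX clock_gettime; it is not code from his article): it performs the same number of memory reads twice over a quarter-gigabyte array, once sequentially and once by chasing a random cycle through it, so the operation counts are essentially identical while the cache, TLB, and paging behaviour are completely different. On a typical machine the chase is at least an order of magnitude slower; run the same access pattern over a memory-mapped file larger than RAM and the gap turns into page faults and disk seeks, which is the regime Kamp is talking about.

/* locality.c -- same O(n) work, very different memory behaviour.
 * Build: cc -O2 locality.c -o locality
 */
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

#define N ((size_t)1 << 25)         /* 32M entries * 8 bytes = 256 MB */

static uint64_t rng_state = 88172645463325252ULL;

static uint64_t xorshift64(void)    /* small self-contained PRNG */
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 7;
    rng_state ^= rng_state << 17;
    return rng_state;
}

static double seconds(void)
{
    struct timespec t;
    clock_gettime(CLOCK_MONOTONIC, &t);
    return t.tv_sec + t.tv_nsec / 1e9;
}

int main(void)
{
    size_t *next = malloc(N * sizeof *next);
    if (!next)
        return 1;

    /* Build one big random cycle (Sattolo's algorithm) so the chase below
     * visits every element exactly once, in cache-hostile order. */
    for (size_t i = 0; i < N; i++)
        next[i] = i;
    for (size_t i = N - 1; i > 0; i--) {
        size_t j = xorshift64() % i;            /* j < i keeps it one cycle */
        size_t tmp = next[i]; next[i] = next[j]; next[j] = tmp;
    }

    double t0 = seconds();
    size_t sum = 0;
    for (size_t i = 0; i < N; i++)              /* sequential: prefetch wins */
        sum += next[i];
    double t1 = seconds();

    size_t pos = 0, steps = 0;
    do {                                        /* random chase: ~1 miss per step */
        pos = next[pos];
        steps++;
    } while (pos != 0);
    double t2 = seconds();

    printf("sequential pass: %6.3f s  (checksum %zu)\n", t1 - t0, sum);
    printf("pointer chase:   %6.3f s  (%zu steps)\n", t2 - t1, steps);
    free(next);
    return 0;
}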

A classic blunder in optimization was committed some years ago by one of the major software vendors. We had their first interactive timesharing system, and it was a “learning experience” in a variety of ways. One such experience was that of the FORTRAN compiler group.

Now any compiler writer knows that the larger the hash table you use for a symbol table, the better your performance on lookup will be. When you are writing a multipass compiler in a 32K mainframe, you end up using a relatively small symbol table, but you create a really, really good hash algorithm so that the probability of a hash collision is reduced (unlike a binary search, which is log n, a good hash table has constant performance, up to a certain density, so as long as you keep the density below this threshold you might expect that you will typically have an order 1 or 2 cost to enter or lookup a symbol, on the average. A perfect hash table (which is usually precomputed for constant symbols) has a constant performance between 1.0 and 1.3 or thereabouts; if it gets to 1.5 you rework the hashing to get it lower).

So anyway, this compiler group discovers that they no longer have 32K, or 128K, or 512K. Instead, they now have a 4GB virtual address space. “Hey, let’s use a really big hash table!” you can hear them saying, “Like, how about 1 MB table?” So they did. But they also had a very sophisticated compiler technology designed for small and relatively dense hash tables. So the result was that the symbols were fairly uniformly distributed over the 256 4K pages in that 1MB, which meant that every symbol access caused a page fault. The compiler was a dog.

When they finally went back to a 64K symbol table, they found that although the algorithm had poorer “absolute” performance from a purely algorithmic viewpoint (taking many more instructions to look up a symbol), because it did not cause nearly as many page faults, it ran over an order of magnitude faster. So third-order effects do matter.

http://www.flounder.com/optimization.htm

You also have to worry about that constant factor hidden in the Big-O notation.
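
The compiler story is still easy to reproduce. The sketch below is mine (it assumes Linux/POSIX getrusage() and its ru_minflt minor-page-fault counter, and has nothing to do with the vendor in the anecdote): it inserts the same 10,000 symbols into a dense open-addressing hash table and then into an absurdly oversized one, and reports how many pages each build actually touches. On a typical Linux box the oversized table faults in close to one fresh page per symbol, the behaviour that made that compiler “a dog”, while the dense table touches only a few dozen pages.

/* bigtable.c -- the "huge hash table" anecdote in miniature.
 * Build: cc -O2 bigtable.c -o bigtable      (Linux/POSIX)
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <sys/resource.h>

#define NSYMS 10000                         /* "symbol table" entries */

static unsigned long fnv1a(const char *s)   /* simple string hash */
{
    unsigned long h = 2166136261u;
    while (*s) {
        h ^= (unsigned char)*s++;
        h *= 16777619u;
    }
    return h;
}

static long minor_faults(void)
{
    struct rusage ru;
    getrusage(RUSAGE_SELF, &ru);
    return ru.ru_minflt;
}

/* Open addressing with linear probing; each slot holds a copy of the key. */
static void build(size_t nslots)
{
    char (*slot)[16] = calloc(nslots, sizeof *slot);
    if (!slot) {
        perror("calloc");
        exit(1);
    }

    long before = minor_faults();
    for (int i = 0; i < NSYMS; i++) {
        char name[16];
        snprintf(name, sizeof name, "sym%d", i);
        size_t pos = fnv1a(name) % nslots;
        while (slot[pos][0] != '\0')            /* probe to a free slot */
            pos = (pos + 1) % nslots;
        strcpy(slot[pos], name);
    }
    long after = minor_faults();

    printf("%9zu slots (%7zu KB): %6ld minor page faults during the build\n",
           nslots, nslots * sizeof *slot / 1024, after - before);
    free(slot);
}

int main(void)
{
    build((size_t)1 << 14);     /* dense:  16K slots, 256 KB, ~64 pages */
    build((size_t)1 << 24);     /* sparse: 16M slots, 256 MB of pages   */
    return 0;
}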

What’s interesting is how basic knowledge in computer science keeps being lost. There is no culture of measurement, education ignores basic technology, and engineering methodologies are discarded.