Notes on modularity

Modularity is a desirable but elusive design property for large-scale programs and computer systems [10, 11]. Designs that appear modular may, once put into practice, turn out to be examples of “false modularity” because of interdependencies between components. For example, the apparently modular architecture of micro-kernel operating systems [9] has run into practical difficulties [6, 3]. Examining how state machines can be constructed by connecting simpler state machines together provides some insight into why real modularity is so powerful and how to avoid false modularity. In this paper, I’m going to sketch out how to look at modularity from the perspective of state machines, try to derive a few “rules of thumb”, and point to some of the deep mathematical basis of this method.


Oh snap – Maurice Wilkes on Alan Turing

He was a real mathematician except that he only learned one little bit of mathematics and then didn’t learn any more. He was no practical organizer and, well, if you had Turing around in the place you wouldn’t get it going.

Looking back, what would you say was the significance of Turing’s 1936 Entscheidungsproblem paper?

I always felt people liked to make a song and dance. Something like the doctrine of the Trinity involved whereas to an engineer you’ve only got to be told about the stored program idea and you’d say at once “That’s absolutely first-rate, that’s the way to do it.” That was all there was to know.

There was no distinction in that paper that had any practical significance. He was lucky to get it published at all but I’m very glad he did. I mean [Alonzo] Church had got the same result by other methods.

I liked Turing; I mean we got on very well together. He liked to lay down the law and that didn’t endear him to me but he and I got on quite well. People sometimes say I didn’t get on with Turing but it’s just not true. But then I was very careful not to get involved.

From a Communications of the ACM interview with Maurice Wilkes.

Peer review in the electronic age

In academic computer science, conferences are often more important than journals, but the dynamics described in this critique of biomedical publishing apply just as well:

The current system of publication in biomedical research provides a distorted view of the reality of scientific data that are generated in the laboratory and clinic. This system can be studied by applying principles from the field of economics. The “winner’s curse,” a more general statement of publication bias, suggests that the small proportion of results chosen for publication are unrepresentative of scientists’ repeated samplings of the real world. The self-correcting mechanism in science is retarded by the extreme imbalance between the abundance of supply (the output of basic science laboratories and clinical investigations) and the increasingly limited venues for publication (journals with sufficiently high impact). This system would be expected intrinsically to lead to the misallocation of resources. The scarcity of available outlets is artificial, based on the costs of printing in an electronic age and a belief that selectivity is equivalent to quality. Science is subject to great uncertainty: we cannot be confident now which efforts will ultimately yield worthwhile achievements. However, the current system abdicates to a small number of intermediates an authoritative prescience to anticipate a highly unpredictable future. In considering society’s expectations and our own goals as scientists, we believe that there is a moral imperative to reconsider how scientific data are judged and disseminated. (cite)

See my previous notes on refereeing in computer science.

Here

Here

Here

Continuing mathematician sanity thread: Boltzmann

[Boltzmann] Opposition to his ideas was harsh and his moods were volatile. Despondent, fearing disintegration of his theories, he hanged himself in 1906. It wasn’t his first suicide attempt, but it was his most successful.

This kind of expresses the mood of Janna Levin’s book about topology, the size of the universe, and all sorts of things that are too deep to even think about, like English apartments.  Silly physicists take everything too seriously.

New York.

How Did the Universe Get Its Spots?
Performance artist Laurie Anderson + astrophysicist Janna Levin
The first (and only) artist-in-residence at NASA engages in a free-form conversation with the novelist and professor of physics and astronomy at Barnard College. Levin teases apart the implications of black holes and the early conditions of the universe; her first novel, A Madman Dreams of Turing Machines, won the PEN/Bingham Fellowship for Writers.

Date & Time: Saturday, March 6, 2010, 6:00 PM
Location: Rubin Museum of Art, 150 West 17th Street, New York, NY 10011

One mathematician

a very different one

More computer science as humbug [updated with Brazil dream sequence]

Continuing my experiments in academic papers, I sent a paper on automata and circuit verification to CAV2010 (here’s an updated version with some corrections and better references). I actually got one carefully written and well-informed review. Sending a paper to an academic conference with no references more recent than 20 years was probably more of a deliberate provocation than needed, but I’m still amazed at the low quality of the reviewing. First, it’s absurd and frankly shameful that so-called academics can make an unsupported claim that something is well known in the literature. One of the reviewers at CAV2010 was kind enough to write:

There is nothing wrong with the approach described but I fail to see that there is anything novel here either. Circuits have been modeled as state machines since the 60s (e.g. Booth 67), there is a wealth of literature on the correspondence between state machines, automata and strings, and so forth. All of this has been done before, and more completely.

I am sure that one could look in most computer science curricula and find almost all of this material being taught at the undergraduate level.

I’m really embarrassed for the entire profession here. Of course circuits have been modeled as state machines – well before 1967, in fact; Booth’s 1967 book is one of innumerable surveys. The paper showed an alternative way of representing state machines and illustrated their use on the time-sensitive behavior of combinational circuits, where they are not usually employed because the number of states is so great as to overwhelm standard methods. The specific methods used in this paper, as far as I know, are not used elsewhere in the literature – and I still don’t know otherwise, because the CAV2010 reviewer was unable to provide a reference. To me, this is a mark of humbug. Look, if you are going to tell some poor author that their work is retelling some well-known tale, you should at the very least be able to point to the well-known explications. Creditably enough, CAV2010 offers a rebuttal period for reviews, but even given a second chance, and an explicit request for a reference, both the author of the review and the program committee failed. An assertion is made that the material is covered in “most computer science curricula” and, when challenged, the assertion is just left as is. No wonder so much of academic computer science is mired in stale repetition of old material when basic standards of scientific rigor are discarded by the people who are supposed to be enforcing the rules.

I don’t want to seem entirely ungrateful for the work of the reviewers, one of whom gave a perfectly rational and solid reason for rejecting the paper.

Conclusion: an elegant paper, but doomed for CAV by lack of any attempt at automation or convincing evidence that automation of the methods described would be a scientific advance.

I can’t really argue with that. The paper neither describes automation nor attempts to argue why automation would succeed – although I have no doubt about the second point. Perhaps the paper is simply not suitable for the focus of the conference. But of the four reviews, this reviewer was the only one who appeared to understand the material. One of the other reviewers actually responded to the rebuttal with this:

About the KR theorem and feedback, an ordinary finite state automaton “encodes” implicitly the feedback information of the underlying sequential circuit in its states, so I would be surprised that the theorem is not applicable to digital circuits with feedback

Ok. The KR (Krohn-Rhodes) theory is based on a decomposition called “loop free”. Let me type that again: LOOP FREE. The theory is concerned with the decomposition of state machines into pipelines where components can be ordered so that the output of component i depends only on the input to the system and the outputs of components j < i. What I show in the paper is how to describe one of the simplest and most beautiful achievements of Computer Engineering, an SR-latch, via the interconnection of state machines that represent logic gates. The SR-latch is constructed with cross-coupled gates where the output of each gate feeds back into the input of the other gate. When the output of gate G1 feeds into the input of gate G2 and the output of G2 feeds into the input of G1, we have what is known in the technical literature as a “loop”. While the reviewer would not be surprised if a loop-free product could be used to describe a loop, the comment makes me doubt his/her grasp of the basics of the material (BTW: the identity of this reviewer is known to me, but I’ll leave it to him/her to respond or not).
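To make the feedback concrete, here is a minimal toy sketch in Python (my own illustration, not the representation used in the paper): each NOR gate is modeled as a state machine whose state is its current output, updated once per discrete time step – a unit propagation delay – and the two gates are cross-coupled exactly as in the SR-latch.

    # Toy unit-delay model of a cross-coupled NOR latch (illustrative only).
    def nor(a, b):
        return 0 if (a or b) else 1

    def step(state, s, r):
        """One time step: state = (q, qbar); s and r are active-high set/reset."""
        q, qbar = state
        return (nor(r, qbar), nor(s, q))

    state = (0, 1)                                  # latch holding "reset"
    for s, r in [(1, 0), (1, 0), (1, 0), (0, 0), (0, 0)]:
        state = step(state, s, r)
        print(s, r, state)
    # The set pulse, held longer than the delay around the loop, drives the
    # latch to (1, 0), and the latch then holds (1, 0) after both inputs
    # return to 0 -- sequential behavior arising purely from the feedback.

Note that the set pulse has to be held longer than the delay around the loop before the new state sticks, which is exactly the kind of time-sensitive behavior at issue here.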

One of the other reviewers included (among others) two comments that I pointed out were wrong, but which were neither modified nor defended.

2. The term “propagation delay”: It seems that you blend different concepts into this single term, namely the duration that an input has to be held stable in order to be sure that it will be reflected by the output and the latency of a circuit in forwarding such a sufficiently stable input to its outputs. While your model, by lumping together the two into a single latency, is a conservative approximation of more detailed models, you may want to make sure that your terminology is in accordance with accepted terminology.

Here the reviewer apparently does not understand the difference between latch time and propagation delay. Have those definitions changed since I last looked at a digital circuits book? “The propagation delay t_p of a signal path is the amount of time that it takes for a change in the input signal to produce a change in the output signal” (Wakerly, 1990, p. 76).

3. The abstraction of your latch leaves the case (set,reset): (0,0) -> (1,1) unspecified, which is fine. You may, however, want to mention this detail in the accompanying text.

Ah well, the second week of Digital Electronics II is not for everyone. “However if we negate both inputs simultaneously, the latch goes into an unpredictable next state and it may in fact oscillate or enter the metastable state.” (Wakerly, 1990, p. 355). See, it’s really not so hard to provide a citation.
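In the same toy unit-delay model sketched above (redefined here so the snippet runs on its own), the behavior Wakerly describes falls out directly: assert set and reset together, then release both simultaneously, and the cross-coupled pair never settles.

    # Toy unit-delay NOR-latch model again; simultaneous release of both inputs.
    def nor(a, b):
        return 0 if (a or b) else 1

    def step(state, s, r):
        q, qbar = state
        return (nor(r, qbar), nor(s, q))

    state = (1, 0)                                  # latch currently holding "set"
    for s, r in [(1, 1), (0, 0), (0, 0), (0, 0)]:
        state = step(state, s, r)
        print(s, r, state)
    # -> (0, 0), (1, 1), (0, 0), (1, 1): no stable next state once both
    #    inputs drop at the same time, i.e. the model oscillates.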

There’s actually a lot more to ridicule in these reviews – I’ll post more on request, but I do want to look at one more comment from Reviewer4.

I still believe that this paper lacks motivation and new results for being accepted. As the author says himself, the paper is about a technique for specifying automata, and I am not convinced why not doing it directly with automata, using standard and well-studied techniques. For instance, given that the propagation delays of the gates are modeled using discrete time, why embedding them directly into the definition of gates (even underspecified), instead of modelling them directly with I/O finite state machines (or sequential circuits with feedback)?

It’s not at all clear to me what modeling a gate as a sequential circuit with feedback even means. In terms of digital circuit engineering, a gate is a combinational circuit, not a sequential circuit. The cool thing about a cross-coupled latch is that the feedback produces sequential properties from combinational circuits. So this comment is quite obscure. On the other hand, I/O automata (from Lynch/Tuttle) are a good example of the augmented automata that I’m deliberately avoiding because of their mathematical ugliness. You can take classical state machines and add all sorts of stuff to them: clocks, fairness conditions, oracles depending on the Axiom of Choice, steroids, flavor enhancers, and so on. The “standard and well-studied techniques” Reviewer4 has in mind are almost certainly timed and hybrid automata, which are types of augmented automata. But the argument made in my paper is that the austere flavor of classical automata theory is strong enough to model timing without additives. Rather than responding to this argument, the reviewer asks why I don’t use the “standard” methods with additives.

It’s certainly possible that the work described in the paper lacks interest to anyone but me (the “crank” hypothesis). But lacking credible reviews, it’s hard to draw any conclusion at all.

A response.

Hi Victor, good to hear from you.

About your CAV paper, don’t get pathetic. Each PC member has to review in a short time numerous papers and he cannot afford the time needed to give full feedback to the author of a paper which shows strict disconnectedness with contemporary verification. Instead of thanking (or even apologizing) to the PC members for the fact that they bothered to read your paper and give you some feedback (one minute browsing was sufficient to predict its rejection) you still complain and make useless noise.

I will send you my rejected paper and its reviews, if you enjoy reading such things.

Pathetic! Ouch.

In other matters

For a variety of reasons, this reminds me of a conference I attended in Salvador, Brazil many years ago which was like something out of a Magical Realism novel involving lachrymose Argentines who longed to teach merengue, drum bands, unbearably tedious talks, dreamlike missed-airplane sequences, a moment when I was sure that Amir Pnueli, his wife, and I were about to be assaulted while we were discussing the Israeli novel Blue Mountain which, according to Pnueli, was set in his home town, random appearances of gigantic fashion models, an abandoned building in which rooster fighting was taking place, the most delicious beer ever drunk in world history, a peculiar lecture on Fra Angelico and the relationship between New Mexico and Hassidism (“you must look at the white spaces between the words”), a hero named Hercules, and much more. I returned from the trip to a faculty meeting chaired by Jean Louis Lassez. Did any of it even really happen? The conference also provided me with an astounding article review: the reviewer explained that I had made an elementary error when I proved something about a set without taking into account the ordering of the set. Even the Baal Shem Tov would not be able to discern meaning in such a comment.

Shape of the internet

“When we started releasing data publicly, we measured it in petabytes of traffic,” said Doug Webster, a Cisco Systems market executive who is responsible for an annual report by the firm that charts changes in the Internet. “Then a couple of years ago we had to start measuring them in zettabytes, and now we’re measuring them in what we call yottabytes.” One petabyte is equivalent to one million gigabytes. A zettabyte is a million petabytes. And a yottabyte is a thousand zettabytes. The company estimates that video will account for 90 percent of all Internet traffic by 2013. (cite)

(via Trevor Loy)

Power savings via software

This press release is particularly fluffy, but whatever the reality of this very vaguely defined algorithmic development, the basic message is correct:

to validate nine terabytes of data (nine million million or a number with 12 zeros) in less than 20 minutes, without compromising accuracy.  Ordinarily, using the same system, this would take more than a day. Additionally, the process used just one percent of the energy that would typically be required.

The way to reduce power use in data centers is to improve the ratio of  (useful clock cycles)/(overhead + idle clock cycles) and to optimize I/O.

But this requires a fundamental change in the way that software is currently designed.
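As a back-of-the-envelope illustration (my own made-up numbers, and assuming energy for a fixed job is roughly proportional to the total cycles burned, which ignores the fact that idle hardware is not free): raising the useful fraction of cycles from 5% to 50% cuts the energy for the same amount of work by about a factor of ten.

    # Illustrative arithmetic only: energy ~ total cycles burned for a fixed job.
    def total_cycles(useful_cycles, useful_fraction):
        # Cycles burned to deliver `useful_cycles` of real work when only
        # `useful_fraction` of all cycles are useful.
        return useful_cycles / useful_fraction

    work = 1e15                                     # useful cycles the job needs (hypothetical)
    before = total_cycles(work, 0.05)               # 5% of cycles useful
    after = total_cycles(work, 0.50)                # 50% of cycles useful
    print(f"energy after / energy before ~ {after / before:.2f}")   # ~0.10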