It’s sad that after all this time, one can look at any random article on parallel programming and find some variation of:
for i = 1 ... n
create thread i
as if that was the only way to express parallel computation. This is such an awkward way of looking at problems. I think many problems come from the sloppy “non-determinism” of operating systems and multi-core machines. One of the few interesting ideas seen in the last 20 years on parallel processing is the Google map-reduce scheme. But what I find impressive is bashreduce. This is really a clever trick and a great validation of the UNIX toolset design (as if it needed another validation).
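The map-reduce idea fits in a few lines; here is a hedged toy sketch in Python (the function names and the word-count mapper are illustrative assumptions of mine, not Google's or bashreduce's actual interfaces):

```python
from collections import defaultdict

def map_reduce(data, mapper, reducer):
    """Minimal map-reduce: map each item to (key, value) pairs,
    group the values by key, then reduce each group."""
    groups = defaultdict(list)
    for item in data:
        for key, value in mapper(item):
            groups[key].append(value)
    return {key: reducer(values) for key, values in groups.items()}

# The classic word-count example.
lines = ["to be or not", "to be"]
counts = map_reduce(
    lines,
    mapper=lambda line: [(w, 1) for w in line.split()],
    reducer=sum,
)
print(counts["to"])  # -> 2
```

The point of the scheme is that the grouping step is the only place where data crosses between workers, which is why it parallelizes so cleanly compared to the "create thread i" style.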
Real-time operating systems are either a solved problem or a backwater of engineering design. Threads, semaphores, mutexes, some basic I/O, priority scheduling: all of this has been more or less standardized in the POSIX 1003.13 smaller profiles (51, 52) for many years. The basic programming model has not changed in years. Even FSM’s original RTOS and QNX, the two most unusual RTOSs, are pretty similar from a programming point of view, except for the split between real-time and non-real-time in our old product. My suspicion is that the programming model provided by these RTOS designs can be replaced by something better, mostly because I think synchronization is a painful and error-prone exercise that only gets worse as systems become more complicated. On the other hand, although many of our customers really loved it, and I think it was a huge advantage, the split-mode approach in the old RTLinux* was a difficult learning experience for a lot of people.
Consider a simple design:
- Task A: collect data from an analog/digital converter, sampling at rate 1/t samples per second and filling a small n-sample buffer every n*t seconds.
- Task B: Aggregate A data and produce 3 streams of processed data that are functions of the input data and some settings.
- Task C: produce control data from the processed data at an output rate of 1/t2 seconds.
- Task D: provide a secure internet port that processes requests to interrogate data, to set up data pushes at a maximum rate of 1/t3 pushes per second, and to update the settings with a worst-case delay of 1/t4 seconds from arrival of the input packet to reset.
- Task E: operate a touch-screen display that has some of the same functionality as Task D.
That seems like it should be straightforward, but it’s not: it’s a one-year project that has a 70% chance of failing. Making sure shared data is efficiently and correctly shared, validating that the scheduling and timing are ok, and optimizing for hardware limits and power are all hard. And that seems wrong, because there is nothing in such a project that has not been done thousands of times before. As the electric power industry stumbles into modern software-based control, the slow development times and poor reliability of products developed under this model are a serious problem.
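To make the synchronization complaint concrete, here is a minimal sketch of just the A-to-B handoff, in plain Python rather than a real RTOS: the producer fills fixed-size buffers and passes them over a bounded queue, and the consumer aggregates each buffer into three derived streams. The task names come from the design above, but the aggregation functions (scaled average, max, min), the buffer size, and the simulated input are illustrative assumptions; real sampling rates, deadlines, and priority scheduling are elided entirely.

```python
import queue
import threading

SAMPLES_PER_BUFFER = 4  # the "n" in the design; value is illustrative

def task_a(samples, out_q):
    """Task A: collect (simulated) ADC samples, hand off a buffer every n samples."""
    buf = []
    for s in samples:
        buf.append(s)
        if len(buf) == SAMPLES_PER_BUFFER:
            out_q.put(buf)  # blocks if the consumer falls behind
            buf = []
    out_q.put(None)  # end-of-stream marker

def task_b(in_q, settings):
    """Task B: aggregate A's buffers into 3 streams of processed data."""
    streams = ([], [], [])
    while (buf := in_q.get()) is not None:
        avg = sum(buf) / len(buf)
        streams[0].append(avg * settings["gain"])
        streams[1].append(max(buf))
        streams[2].append(min(buf))
    return streams

q = queue.Queue(maxsize=8)  # bounded queue: backpressure instead of unbounded growth
result = {}
consumer = threading.Thread(
    target=lambda: result.update(streams=task_b(q, {"gain": 2.0})))
consumer.start()
task_a(range(8), q)  # simulated ADC input: samples 0..7
consumer.join()
print(result["streams"][0])  # -> [3.0, 11.0]
```

Even this trivial pipeline already forces decisions about buffer ownership, queue depth, and shutdown signaling; multiply by five tasks with timing constraints and the one-year estimate starts to look plausible.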
* NOTE: RTLinux is a trademark of WindRiver Systems.
[note: edited to remove garbage characters]
I agree with this but really, the basic ideas of calculus are simple and could be taught easily early on too. Found here. But this paper is also good, and even though I don’t agree with it 100%, I think it is a brilliant diagnosis:
A musician wakes from a terrible nightmare. In his dream he finds himself in a society where music education has been made mandatory. “We are helping our students become more competitive in an increasingly sound-filled world.” Educators, school systems, and the state are put in charge of this vital project. Studies are commissioned, committees are formed, and decisions are made, all without the advice or participation of a single working musician or composer. Since musicians are known to set down their ideas in the form of sheet music, these curious black dots and lines must constitute the “language of music.” It is imperative that students become fluent in this language if they are to attain any degree of musical competence; indeed, it would be ludicrous to expect a child to sing a song or play an instrument without having a thorough grounding in music notation and theory. Playing and listening to music, let alone composing an original piece, are considered very advanced topics and are generally put off until college, and more often graduate school. As for the primary and secondary schools, their mission is to train students to use this language, to jiggle symbols around according to a fixed set of rules: “Music class is where we take out our staff paper, our teacher puts some notes on the board, and we copy them or transpose them into a different key. We have to make sure to get the clefs and key signatures right, and our teacher is very picky about making sure we fill in our quarter-notes completely. One time we had a chromatic scale problem and I did it right, but the teacher gave me no credit because I had the stems pointing the wrong way.” (found here)
The first part of a critique of process algebra is below. This relates to the Recursion and State paper and explicatory blog entry where I show how to compose classical automata and define them “abstractly”, and to a complaint about the weak critique of automata theory in standard process algebra literature, and also to a remark about Dijkstra’s error. There are other parts of it scattered around in some recent posts, and some other issues still need to be raised, but the basic argument is in place.
The idea that Germany is playing catch-up with Europe’s most promising strategy for renewable energy is jarring. This is Germany, after all, the country that 11 years ago put the Green Party in government, decided to phase out nuclear power, and pushed wind energy and photovoltaics to grid scale. Today Germany’s installed wind-turbine capacity of 24 gigawatts ranks second only to that of the United States (which has 25 GW). But despite the promises, greenhouse-gas emissions there haven’t plummeted. Rather, they have gone down only slightly since 2000. Germany, it seems, has lost its groove.
The result is a turnabout that would have seemed preposterous even six months ago: “Everyone in the environmental community is looking to the U.S. now,” says Elias Perabo, who codirects a campaign against the use of coal for Germany’s Berlin-based Climate Alliance. IEEE Spectrum
One problem is the old grid, which is not capable of dealing with wind as a serious power source. Germany, like the US, has a legacy power grid designed around a few big stable power sources, but distributed energy resources (DERs), and especially sources like wind, put very different stress on the system. Suddenly you need a lot of high-speed, low-cost, distributed control systems – and they had better be secure.
SAN FRANCISCO – In a direct challenge to Microsoft, Google announced late Tuesday that it is developing an operating system for PCs based on its Chrome Web browser.
The move sharpens the already intense competition between Google and Microsoft, whose Windows operating system controls the basic functions of the vast majority of personal computers. NYT
Speed, simplicity and security are the key aspects of Google Chrome OS. We’re designing the OS to be fast and lightweight, to start up and get you onto the web in a few seconds. The user interface is minimal to stay out of your way, and most of the user experience takes place on the web. And as we did for the Google Chrome browser, we are going back to the basics and completely redesigning the underlying security architecture of the OS so that users don’t have to deal with viruses, malware and security updates. It should just work.
Google Chrome OS will run on both x86 as well as ARM chips and we are working with multiple OEMs to bring a number of netbooks to market next year. The software architecture is simple: Google Chrome running within a new windowing system on top of a Linux kernel. For application developers, the web is the platform. All web-based applications will automatically work and new applications can be written using your favorite web technologies. And of course, these apps will run not only on Google Chrome OS, but on any standards-based browser on Windows, Mac and Linux thereby giving developers the largest user base of any platform. Press release
Looks like the revenge of Plan9 is on.
Here’s Edsger Dijkstra discussing the birth of the use of axiomatics in computer science – the start of “formal methods” research. What’s striking is the assumed choice between “axiomatic” and “mechanistic”, as if there was no other way. In a later note he writes:
And now we are back at our old dilemma. Either we take by definition all properties of the model as relevant, or we specify in one way or another which of its properties are the relevant ones. In the first case we have failed to introduce in the name of “divide et impera” an interface that would allow us to divide and rule and the study of how we could build upon the (only implicitly defined) interface seems bound to deteriorate into a study of the model itself; in the second case we are again close to the axiomatic method….
Or, to put it in another way: if the traditional automata theory tends to make us insensitive to the role interfaces could and should play in coping with complex designs, should it then (continue to) occupy a central position in computing science curricula?
And I’m struck by the idea, which seems utterly wrong to me, that one either uses the methods of formal logic or one is stuck without any ability to abstract or underspecify.
The reason mathematics has advanced so much was not because of the Euclidean axioms-lemma-theorem straitjacket, but in spite of it. Luckily, when we actually discover mathematics, we do it the Babylonian way, empirically and algorithmically. It is only when it is time to present it, that we put on the stifling Greek formal attire.
so says Doron Zeilberger
UPDATE: I have a draft of the “process algebras considered harmful” note up.
Working on a “process algebra” post, I had to look up previous posts on microkernels where I wrote this:
Information hiding is only good design when the hidden information is not needed by the software it is hidden from! If you hide information that you need to share you’re just wasting time. A great example of real modularity is the splitting off of the command interpreter (first in CTSS (corrected)) from the kernel. This split is possible because the designers recognized that there is very little information exchange between kernel and interpreter. An example of a fake module is the attempt of most microkernels to split virtual memory paging and storage caching. The most elegant and efficient message passing interface that can be imagined cannot fix the problem that too much information must be shared to make the resulting monstrosity run properly.
Which applies to process algebras in that the basic “primitive” of process algebra is the communicating process – dumb idea. The one thing I’d take back from the para above is the overuse of exclamation points. I was more enthusiastic in those days. A related point was made here:
The Le Corbusier fallacy is the mistaken belief that the purpose of engineering works is to impress passersby with a show or spectacle or to exhibit the aesthetic sensibility of the “artist”. In architecture, this fallacy produces high-rise slums that look striking from the highway but that are uninhabitable. In programming, the fallacy produces microkernel operating systems, “formal methods”, functional programming languages, and other “simple”, “clean”, “elegant” designs that do nothing interesting. De Botton compares the Salginatobel Bridge in Switzerland to Brunel’s Clifton Suspension Bridge in England and finds the first to be “elegant” and the second to be clunky and “ponderous”. Of course the Brunel bridge is bulkier than the Swiss toy: it is far larger, carries much more and much heavier traffic, has been adaptable, and was built 100 years earlier with much less capable materials. But one might as well argue that clipper ships are ponderous compared to jet-skis, or that 727s are not as lightweight as paper airplanes, or that UNIX systems are more “bloated” than some academic proof of concept. The Salginatobel bridge is cute and features a bravura use of what was new technology in the 1930s when it was designed, but the Clifton bridge has been an engineering and aesthetic triumph since it was completed in the 1860s and it is a far more powerful work.
This post has a perhaps unmerited slap at Corbusier who did design some nice buildings.
What does it have to do with process algebra?
I always had the impression that the entire group of “process algebra” people didn’t know much about automata, but this is surprising.
But meanwhile I got somehow interested, and I don’t know how, in concurrency. I remember that, without linking it to verification particularly, I wondered about interacting automata. I had an idea that automata theory was inadequate, because it hadn’t said what it was for two automata to interact with each other. Except for the Krohn-Rhodes Representation Theorem, which said something about feeding the output of one automata into another. But there wasn’t interaction between the automata. (cite)
I don’t want to read too much into this, but Krohn-Rhodes theory provides an algebraic result about efforts to simplify state machines by “dividing” them into simpler machines connected in a cascade (with information flowing one way). It has nothing to do with networks of communicating automata, a subject that was not particularly of interest during the early 1960s when the minimization of state machines was a focus for research. In fact, it is not unusual to find in papers of that period a definition of very general automata products in which the machines “interact”, followed by a definition of the special case of “loop free products” which were seen as critical in an age where reducing a design from 100 flip flops to 70 flip flops was a tough and important engineering task. The “network” case where there is arbitrary communication between machines was apparently seen as obvious.
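A loop-free (cascade) product of the kind those papers describe is easy to write down. The following Python sketch is my own toy illustration, not drawn from any particular paper: the state of the product is a pair, and information flows strictly from machine 1 to machine 2 (machine 2's input alphabet is machine 1's output alphabet).

```python
def cascade(delta1, gamma1, delta2):
    """Loop-free (cascade) product of two state machines:
    machine 2 consumes machine 1's output, never vice versa."""
    def delta(state, a):
        s1, s2 = state
        s1 = delta1(s1, a)                   # machine 1 steps on the raw input
        return (s1, delta2(s2, gamma1(s1)))  # machine 2 sees only machine 1's output
    return delta

# Toy components: machine 1 tracks the parity of 'a's seen so far;
# machine 2 counts how often that parity has been odd.
d1 = lambda s, a: s ^ 1 if a == "a" else s
g1 = lambda s: s
d2 = lambda s, x: s + x

d = cascade(d1, g1, d2)
state = (0, 0)
for a in "aab":
    state = d(state, a)
print(state)  # -> (0, 1)
```

The "general product" those papers define first is the same construction except that each component's transition may depend on every other component's output, which is exactly the arbitrary-communication network case treated as obvious.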
I show an application of such a product in the recursion and state paper.
Despite some deep results, algebraic automata theory has fallen out of favor in theoretical computer science. Reasons include the discipline’s failings, such as a love of over-generality, the weak mathematical background of people working on “formal methods”, and the gap between theoreticians and engineers. But perhaps the key reason is that traditional state machine presentations in terms of state sets and transition maps are awkward and carry too much irrelevant information.
The standard presentation of a Moore machine is M=(A,X,S,st,δ,γ) where “A” is the alphabet of events, “X” is the set of outputs, “S” is the set of states, “st” is the start state, δ: S × A → S is the transition function and γ: S → X is the output function. But we really don’t care about the state set in all but the simplest state machines. Interesting systems have ridiculously large numbers of states. Names of states don’t matter. Extra states don’t matter. Suppose “s” is an element of “S” that is unreachable – no sequence of inputs will drive M to “s” from “st”. In that case what difference does removing “s” from “S” make? Or suppose s and s’ are both reachable and not only is γ(s)=γ(s’) but, for any sequence of inputs that can drive the machine from s to a new state, the output of that new state is identical to the output of the state reached by following the same sequence of inputs from s’. In that case, we could simplify S and δ by removing one of the duplicative states and merging paths.
What’s more interesting is to consider what output a Moore machine will produce from a sequence of inputs: thinking of the Moore machine as a map from input strings to outputs. This is a function that can be constructed by primitive recursion on strings. First, extend δ to strings to define a new function Δ. The empty string of zero length doesn’t cause any state change, Δ(s,empty)=s, and if “w” is a string and “wa” is the string obtained by appending “a” to “w”, then Δ(s,wa)=δ(Δ(s,w),a). Define M*(w) = γ(Δ(st,w)). The idea is that M*(w) is the output produced by M in the state reached by following “w” from the initial state. If you look at M* you see it contains all the interesting information in M, because the names of states, unreachable states, and duplicate states are not interesting. In fact, in the 1950s Myhill and Nerode described how to generate a state machine (not necessarily a finite-state one) from any function on the set of strings over an alphabet.
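The construction of M* is just a fold of δ over the input string. Here is a short Python sketch; the parity machine at the bottom is an illustrative example of mine, not from the text.

```python
def make_moore_function(delta, gamma, start):
    """Build M*: strings -> outputs from a Moore machine (A, X, S, st, delta, gamma).
    M*(w) = gamma(Delta(start, w)), where Delta extends delta to strings by
    Delta(s, empty) = s and Delta(s, wa) = delta(Delta(s, w), a)."""
    def m_star(w):
        s = start
        for a in w:          # primitive recursion on the string, one event at a time
            s = delta(s, a)
        return gamma(s)
    return m_star

# Toy machine: report the parity of 'a's in the input.
delta = lambda s, a: s ^ 1 if a == "a" else s
gamma = lambda s: "odd" if s else "even"
m = make_moore_function(delta, gamma, 0)
print(m(""))     # -> even
print(m("aba"))  # -> even
print(m("a"))    # -> odd
```

Note that `m_star` never exposes the state set: renaming states, adding unreachable states, or merging duplicate states all leave M* unchanged, which is exactly the point.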
These functions are much more useful for describing complex state systems than Moore machines are, not least because they can be partially specified when we don’t know or care about all the possible behaviors of the system, and because they can be composed to represent any interconnection.
In the next post, I’m going to discuss composition of message passing processes and compare to the “formal methods” approach which is based on the incorrect assumption that automata need to be “enhanced” in some way and that replaces automata with things that have very little mathematical structure at all.