What happens when you do not use enterprise-quality technology.

News from last year.
Aug. 26, 2013 11:52 a.m. ET

A glitch in time synchronization caused German exchange operator Deutsche Boerse AG‘s Eurex Exchange arm to halt trading for slightly more than an hour early Monday, the latest in a string of technical issues at global exchanges.

The Eurex derivatives trading market was halted at 0620 GMT, 20 minutes after trading started, “in order to protect the integrity of the market,” Deutsche Boerse said.

“An incorrect time synchronization within the system” triggered the market halt. The problem was solved, pre-trading started at 0720 GMT and, as of 0730 GMT all products were again tradable, the exchange said.

A person close to the matter said the glitch was caused by a problem with the system clock, not extreme data load, noting that current trading volumes are far below previous peaks.

Wall Street Journal

From what we understand, the problem was due to a number of GPS time servers that dropped leap second adjustments, jumping their time back 36 seconds. PTPv2 software then dutifully adjusted server time backward by 36 seconds. It’s not the first failure of this sort (or the last). The legacy technology for time synchronization lacks the cross-checking, failover, and alerting that are built into TimeKeeper. That technology was not designed with the enterprise in mind, and although it has been heavily modified over time, it is very difficult to engineer around a basic architectural mismatch.

For example, TimeKeeper is architected to treat all time sources the same, but software like PTPd and NTPd is designed around a specific protocol. TimeKeeper can use a feed from one protocol to cross-check another; doing the same in one of the protocol-specific time synchronization programs requires a clumsy grafting process.

As another example, the PTP standard has a completely inappropriate fault-tolerance technique baked into the standard – a holdover from the standard’s origin in device control. The standard was designed for systems with really simple networks and time consumers; a single cable with some welding machines on the end is perhaps a typical intended use case. The idea was that the time sources would advertise how good they were and the consumers would simply pick the one that said it was the best. This is an absurd policy for an enterprise network with powerful server computers receiving time packets across complex networks dotted with routers and switches. TimeKeeper was designed to ignore this “best master” protocol for fault tolerance and instead to use the smarts of the consumers to detect problems in the feed and to select among alternatives.
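The cross-check idea is easy to sketch. Here is a toy illustration (my own, not TimeKeeper code – the function names and thresholds are invented): rather than trusting whichever source advertises itself as “best”, compare the offsets reported by several independent feeds and discard any source that disagrees with the consensus or jumps suddenly.

```python
import statistics

# Toy sketch of time-source cross-checking (not TimeKeeper's actual
# code; names and thresholds are invented for illustration).

def jumped(prev_offset, new_offset, threshold=1.0):
    """Flag a source whose reported offset moves by more than
    `threshold` seconds between samples -- e.g. a 36-second leap."""
    return abs(new_offset - prev_offset) > threshold

def consensus_sources(sources, tolerance=1.0):
    """Keep only the feeds that agree with the median of all feeds,
    instead of believing whichever source claims to be 'best'."""
    median = statistics.median(offset for _, offset in sources)
    return [name for name, offset in sources
            if abs(offset - median) <= tolerance]

# A GPS-fed server that just dropped 36 seconds stands out at once:
feeds = [("gps-1", -36.0), ("gps-2", 0.002), ("ntp-1", 0.003)]
```

A single-protocol daemon cannot do even this much, because it only ever sees one feed.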

 

 

Fischer, Lynch, Paterson and timeouts

There is a widely cited (over 1400 cites in CiteSeerX) result called the Fischer-Lynch-Paterson theorem about consensus – a key issue in distributed databases or any system where data is distributed, replicated, or both.

In this paper, we show the surprising result that no completely asynchronous
consensus protocol can tolerate even a single unannounced process death.

As far as I can tell, the “surprising” result is that unbounded delays are not bounded. Without timeouts, we cannot distinguish between a site B that will eventually send a message to site A and a site B that has failed (crashed) and will never send a message.

We do not consider Byzantine failures, and we assume that the message system is reliable – it delivers all messages correctly and exactly once. Nevertheless, even with these assumptions, the stopping of a single process at an inopportune time can cause any distributed commit protocol to fail to reach agreement. Thus, this important problem has no robust solution without further assumptions about the computing environment or still greater restrictions on the kind of failures to be tolerated!
Crucial to our proof is that processing is completely asynchronous; that is, we make no assumptions about the relative speeds of processes or about the delay time in delivering a message. We also assume that processes do not have access to synchronized clocks, so algorithms based on time-outs, for example, cannot be used. (In particular, the solutions in [6] are not applicable.) Finally, we do not postulate the ability to detect the death of a process, so it is impossible for one process to tell whether another has died (stopped entirely) or is just running very slowly.

The Wikipedia summary is similar:

In a fully asynchronous system there is no consensus solution that can tolerate one or more crash failures even when only requiring the non-triviality property.[11] This result is sometimes called the FLP impossibility proof. The authors Michael J. Fischer, Nancy Lynch, and Mike Paterson were awarded a Dijkstra Prize for this significant work. The FLP result does not state that consensus can never be reached: merely that under the model’s assumptions, no algorithm can always reach consensus in bounded time. In practice it is highly unlikely to occur.

I don’t get what is “significant” about this. It’s just a tautology wrapped in a complex formalism. If there is no upper bound on how long it can take for a node to send a message, and the only information passed between nodes is via message, then there is no way for one node to tell the difference between a delayed response and a response that will never come because the corresponding node has failed.
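The tautology can be made concrete in a few lines (a toy illustration of the point, not code from the paper): a failure detector is trivial once a delay bound exists, and undefinable when it does not.

```python
# Toy illustration: with a known upper bound on message delay, a simple
# timeout separates "slow" from "crashed"; the FLP model removes the
# bound, so no finite wait is ever conclusive.

def peer_has_crashed(seconds_waited, max_delay):
    """Sound and complete *only because* max_delay is a real bound:
    any message still in flight must arrive within max_delay."""
    return seconds_waited > max_delay

# In the fully asynchronous model there is no max_delay to pass in --
# every call site of this function is exactly where the FLP assumptions
# bite.
```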

 

Why is clock synchronization so important in big data?

Distributed transactions have historically been implemented by the database community in the manner pioneered by the architects of System R* [22] in the 1980s. The primary mechanism by which System R*-style distributed transactions impede throughput and extend latency is the requirement of an agreement protocol between all participating machines at commit time to ensure atomicity and durability. To ensure isolation, all of a transaction’s locks must be held for the full duration of this agreement protocol, which is typically two-phase commit.

The problem with holding locks during the agreement protocol is that two-phase commit requires multiple network round-trips between all participating machines, and therefore the time required to run the protocol can often be considerably greater than the time required to execute all local transaction logic. If a few popularly accessed records are frequently involved in distributed transactions, the resulting extra time that locks are held on these records can have an extremely deleterious effect on overall transactional throughput. [ Calvin: Fast Distributed Transactions for Partitioned Database Systems. Alexander Thomson et al]

This is one reason why we have all the new databases without transaction support – high transaction rates in a distributed environment (e.g. web or IoT applications in the cloud) cannot be scaled in the face of lock overhead.
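A back-of-the-envelope model (my own illustration, not from the Calvin paper) shows how badly the round trips dominate: with sub-millisecond transaction logic and a one-millisecond network round trip, two-phase commit multiplies the lock hold time by roughly 20x.

```python
# Toy model of lock hold time under two-phase commit: the coordinator
# holds the transaction's locks through a prepare round and a commit
# round, each costing one network round trip (the phases run in
# parallel across participants, so the cost is two RTTs, not 2*n RTTs).

def lock_hold_seconds(local_work, rtt, two_phase):
    if not two_phase:
        return local_work
    return local_work + 2 * rtt

local_only = lock_hold_seconds(0.0001, 0.001, two_phase=False)  # 100 us
with_2pc   = lock_hold_seconds(0.0001, 0.001, two_phase=True)   # ~2.1 ms
```

On a hot record, that 20x longer hold time translates directly into a 20x lower ceiling on transactional throughput.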

 

Patents considered harmful, by some.

That’s not how it looks from here, but I think part of the muddiness in the software patent argument is the result of arguments that really attack the entire idea of patents but are advanced as being specific to software patents. The whole idea of a patent is that you do foster innovation and competition by “handing out monopolies”. Maybe that idea is wrong, but if you think it is wrong you are likely opposed to patents – period. Tim Lee and many of the other

Famous anti-innovation patenter.

people who dislike software patents confuse the issue by simultaneously claiming that (1) software patents are inherently worse/different than other patents and (2) that software patents choke off innovation because of properties that turn out to apply to all patents. Opposing patents in general, however, is a more radical proposition than opposing software patents – perhaps a more radical proposition than people feel comfortable making.

My position is that many patents are wrongly granted for “inventions” that are neither novel nor non-obvious, and that the system for adjudicating patents is way too slow, error-prone, and expensive. But patents themselves serve a useful purpose. The obvious example is Excel, which is effectively a monopoly without the benefit of any patents at all. The work of the innovators was, without patent protection, rapidly copied by companies with better marketing and greater resources. Innovation stopped. End of story.

And “business method” patents, in general, are not really software or computer patents at all. Usually they are efforts to patent a well-known method of doing business, with some technology added to the mix to buttress a claim of novelty. One could have similarly claimed a hundred years ago that making sales calls via telephone was an invention, or that delivering adverts by TV instead of by radio was an invention.

 


A claimed validated operating system.

The claim: we have demonstrated the comprehensive formal verification of the seL4 microkernel, with a complete proof chain from precise, formal statements of high-level security and safety properties to the binary executable code.

The L4 base is useful – we advocated a similar approach with RTLinux, which was, um, very similar to L4. It looks like the L4 version here uses the interrupt emulation method at the heart of RTLinux (of course, without any attribution or reference). Just for the record, here’s a much earlier RTLinux-based effort.

As for the verification, I am highly skeptical of the claim. Here’s the claimed proof and the research paper. It’s not at all clear exactly what was validated, but from the paper it looks like the 8,000-odd lines of the seL4 microkernel were shown to provide the functionality described in the Hoare-logic specification. No device drivers appear to have been validated.

Cassandra

Cassandra’s unfortunate end.

Cassandra is quite interesting – and time sync seems increasingly critical to correct operation. Here are some resources:

A paper on the storage model from developers at Facebook.

A reasonably clear introduction from IBM Developer Works.

A big difficulty for people who are, like me, more familiar with relational databases is that Cassandra and similar DBs are aimed at a very different kind of computation/storage platform and IT environment (and a different kind of technical user/programmer).
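One concrete reason time sync matters so much here (a simplified illustration, not Cassandra source code): Cassandra reconciles conflicting writes by last-write-wins on supplied timestamps, so a node whose clock runs fast can make a stale write permanently beat a newer one.

```python
# Simplified last-write-wins reconciliation: the replica keeps the
# version with the highest timestamp. If node B's clock is 50 seconds
# fast, B's *earlier* write carries the larger timestamp and wins.

def reconcile(versions):
    """versions: list of (timestamp, value); highest timestamp wins."""
    return max(versions, key=lambda v: v[0])

versions = [
    (100.0, "written second, correct clock"),
    (120.0, "written first, clock 50s fast"),
]
```

No amount of database cleverness recovers the lost write afterward; keeping the clocks honest is the only defense.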

 

A hard theorem becomes easy over time.

The basic result of the incompleteness proofs is pretty simple. The proofs are complicated because the idea was literally unthinkable until programming was invented. The heroic efforts of Gödel, Skolem, Post, and Church to create a mathematics in which doing mathematics could be represented as some sort of calculation were an incredible feat of imagination. But today, programming is kind of obvious, so the general result should be simpler.

Suppose we have a computer with an infinite memory that holds arbitrary integers and  can execute a simple programming language that consists of numbers. The key thing about the computer is that numbers in memory can be interpreted as integers or instructions, as we please. It’s certainly possible that the number encodes a program that makes no sense, but every program is just an integer and every integer encodes some program. Suppose that the computer has the usual arithmetic operations and conditionals plus recursion. Because numbers are programs,  we can have a number, say x, and a second number y and compute x(y) because x “encodes” some program.

Programs as values means that we can pass programs that compute functions as arguments to other functions. For example, we should be able to write “evaluate” so that “evaluate(x,y) = x(y)”. If f(x) = x*x then evaluate(f,5) = 25. It would be useful to have a function “S” which computes a “solution” to a function, so that “S(f,x) = the smallest number y ≥ x so that f(y)=0”. In fact, such a function is easy to define in our language: S(f,x) = ( if f(x)=0 then x; else S(f,x+1) ). It is easy to see that S(f,x) will never terminate if there is no y ≥ x so that f(y)=0. For example, if g(x) = x+1 then S(g,1) never terminates. The question is whether we can write a function E(f,x) so that E(f,x) always terminates, with E(f,x)=1 if S(f,x) terminates and E(f,x)=0 if S(f,x) does not terminate. E doesn’t find the solution, it just has to tell us whether S will find the solution. However, we can prove that we cannot write E with these specifications.
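In a modern language the definitions above are almost one-liners (a sketch in Python, with Python functions standing in for the integer encodings; E, of course, is precisely the function that cannot be written):

```python
# 'evaluate' applies one program to another value; S searches upward
# from x for the smallest y with f(y) == 0, looping forever if no such
# y exists -- exactly the behavior the impossibility argument relies on.

def evaluate(x, y):
    return x(y)

def S(f, x):
    return x if f(x) == 0 else S(f, x + 1)

square = lambda x: x * x
# evaluate(square, 5) == 25
# S(lambda y: y - 7, 1) == 7, found by counting up from 1
# S(lambda y: y + 1, 1) would recurse forever: y + 1 is never 0
```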

The proof is by contradiction. Suppose we could write E. Recall that g(x) = x+1 has no zeros, so S(g,1) never terminates. Then someone could write a “diagonal” program

D(f) = { if E(f,f)=0 then 0; else S(g,1) }.

So D(f) returns 0 whenever E predicts that S(f,f) will never terminate, and D(f) itself never terminates otherwise. Now ask what E(D,D) can possibly answer. If E(D,D)=0 then D(D)=0, so S(D,D) finds a solution immediately at y=D and terminates – but E(D,D)=0 claims that S(D,D) does not terminate. If E(D,D)=1 then D(D) never terminates, and since S(D,D) must first evaluate D(D), S(D,D) never terminates either – but E(D,D)=1 claims that it does. So

S(D,D) terminates if and only if E says it does not.
Whichever answer E gives about S(D,D) is wrong.

Assuming that the world makes logical sense (a big leap), then to get to this paradoxical result we must have assumed something that is false: namely, that we could somehow define E(f,x) so that it always terminates and correctly computes the answer to whether S(f,x) terminates.

For mathematics, this result has all sorts of interesting implications. For example, if it were not true, we could, in principle, write a function that tested numbers to see if they were odd perfect numbers and marched up the positive integers until it found one. If that function could be evaluated by E, we’d know whether there was such a number – something nobody seems to know right now. Any theorem Exists x, P(x), where P can be represented in our language, could be tested by evaluating E on a function that tests P on consecutive integers until it finds a solution. Any theorem Forall x, P(x) could be evaluated by seeing if a program that looks for a counter-example ever terminates. All that should indicate that E is implausible as well as impossible. For computer science the implications are more subtle, because the fact that we don’t really have computers with infinite memory matters in computer science. That’s also interesting in pure mathematics: see Gödel’s Lost Letter.