Telecom is 10 years behind Wall Street on time synchronization

This is an interesting paper, but telecom has not yet come to grips with the problems and advantages of fast, shared, commodity Ethernet interconnect.

North American service providers are in the process of upgrading their radio access networks with next-generation LTE equipment. They are finalizing a 4G rollout that involves highly stringent timing requirements, but in many cases they are relying on sole-source synchronization using the Global Navigation Satellite System (GNSS). Naturally occurring disturbances, as well as unintentional radio frequency jamming, intentional jamming, and spoofing, make GNSS vulnerable to interference.

This article presents a novel approach for addressing the issue of GNSS vulnerability by introducing a standard means of providing a redundant packet-based synchronization source for LTE base stations. It also describes how this new approach can mitigate noise caused by asymmetry and transit delay variation in packet networks.


Project Roseline

Accurate and reliable knowledge of time is fundamental to cyber-physical systems for sensing, control, performance, and energy-efficient integration of computing and communications. This simple statement underlies the Roseline project. Emerging CPS (cyber-physical systems) applications depend on precise knowledge of time to infer location and control communication. There is a diversity of semantics used to describe time, and quality of time varies as we move up and down the system stack. System designs tend to overcompensate for these uncertainties, and the result is systems that may be overdesigned, inefficient, and fragile. The intellectual merit derives from the new and fundamental concept of time and the holistic measure of quality of time (QoT) that captures metrics including resolution, accuracy, and stability.

The project will build a system stack that enables new ways for clock hardware, the OS, network services, and applications to learn, maintain, and exchange information about time, influence component behavior, and robustly adapt to dynamic QoT requirements, as well as to benign and adversarial changes in operating conditions. Application areas that will benefit from quality of time include smart grid, networked and coordinated control of aerospace systems, underwater sensing, and industrial automation. The broader impact of the proposal is due to the foundational nature of the work, which builds a robust and tunable quality of time that can be applied across a broad spectrum of applications that pervade modern life.
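To make the QoT idea concrete, here is a minimal sketch in Python of how an application might express a quality-of-time requirement and check it against what the clock stack can deliver. The field names follow the metrics listed above (resolution, accuracy, stability), but the record and the satisfies() check are purely illustrative assumptions, not the Roseline interface.

# Hypothetical sketch: a QoT record an application could compare against a requirement.
from dataclasses import dataclass

@dataclass
class QualityOfTime:
    resolution_ns: float   # smallest distinguishable time step
    accuracy_ns: float     # bound on offset from the reference timescale
    stability_ppb: float   # frequency stability, parts per billion

    def satisfies(self, required: "QualityOfTime") -> bool:
        """True if this clock's QoT is at least as good as the requirement."""
        return (self.resolution_ns <= required.resolution_ns
                and self.accuracy_ns <= required.accuracy_ns
                and self.stability_ppb <= required.stability_ppb)

# Example: a control loop needing 1 us accuracy asks whether the platform
# clock is good enough before trusting its timestamps.
platform = QualityOfTime(resolution_ns=10, accuracy_ns=500, stability_ppb=50)
needed = QualityOfTime(resolution_ns=1_000, accuracy_ns=1_000, stability_ppb=100)
print(platform.satisfies(needed))  # True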

Cassandra

Cassandra’s unfortunate end.

Cassandra is quite interesting – and time sync seems increasingly critical to correct operation. Here are some resources:

A paper on the storage model from developers at Facebook.

A reasonably clear introduction from IBM Developer Works.

A big difficulty for people who, like me, are more familiar with relational databases is that Cassandra and similar DBs target a very different kind of computation/storage platform and IT environment (and a different kind of technical user/programmer).

 

From Jersey to Wall Street – or the equivalent

cartaretA common configuration for FSMLabs TimeKeeper customers is to cross-couple time sources in New Jersey and New York City or London and Slough or Chicago and Aurora or Singapore and Sidney- any two trading locations that are connected with high quality network. Sometimes the network connection does not even have to be that great. TimeKeeper will cross-check time sources, complain when things look wrong, and failover when needed.  Multiple time sources also produces a timestamp provenance. Trading applications will have a record showing that the  timestamps they produce were in agreement with two or more independent sources. A number of firms scale this basic design to multiple sites: increasing the depth of fault-tolerance and the strength of the provenance. Cross-coupling time feeds also  provides early warning on a number of network problems. Several customers saw TimeKeeper warnings about secondary sync  and found on investigation that their network providers were changing equipment or rerouting without notice.

 

 

 

Cassandra Cluster Synchronization

Cassandra is a highly distributable NoSQL database with tunable consistency. The same property that makes it highly distributable also makes it, in part, vulnerable: the whole deployment must run on synchronized clocks.

It is quite surprising that, given how crucial this is, it is not covered sufficiently in the literature. Where it is covered, the advice is simply to install an NTP daemon on each node, which, if followed blindly, leads to really bad consequences. You will find blog posts by users who got burned by clock drift. [Holub]

Cassandra labels updates with times and then uses those times to figure out the freshest update. Suppose machines A and B are both working with a database of records. B updates record R, then A updates record R, and 3 out of 5 machines holding copies of R get this second update. Then B asks for record R and the read operation gets 5 responses: it can throw away the stale copies of R and forward the fresh copy, because the timestamp on the update from A is more recent than those on the older copies. But this only works if the times are right, which means you need solid clock synchronization. Time synchronization clients like NTPd and PTPd (and variants) will silently fail, so the system can lose synchronization without notice. This will silently corrupt data. One of the most sophisticated parts of TimeKeeper is the algorithm to detect incorrect times, to fail over where possible, and to alarm where not possible.
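Here is a much-simplified Python sketch of the reconciliation just described, not Cassandra's code: the coordinator keeps the replica copy with the highest write timestamp and discards the rest. The Copy record and the example timestamps are assumptions chosen to show how a writer with a slow clock can assign the smaller timestamp to the newer value, so the stale copy silently wins.

# Simplified sketch: last-write-wins reconciliation by write timestamp.
from dataclasses import dataclass

@dataclass
class Copy:
    value: str
    write_timestamp_us: int   # assigned by the writing node's clock

def freshest(copies):
    return max(copies, key=lambda c: c.write_timestamp_us)

# B writes first, then A writes; A's clock runs 5 ms behind, so A's newer
# value carries the *smaller* timestamp and the read returns B's stale value.
responses = [
    Copy("R=from_B", write_timestamp_us=1_700_000_010_000),
    Copy("R=from_A_newer", write_timestamp_us=1_700_000_005_000),
]
print(freshest(responses).value)   # "R=from_B" - stale data wins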

See: TimeKeeper, Clouds, Big Data

Time Maps in Action

This shows a time map for a client (center) that is getting NTP time from a variety of standard internet NTP servers and via a customer site (to the left). A TimeKeeper Pocket GrandMaster (bottom left) is serving multiple Stratum servers within the customer network, and we are connecting our server to use those as part of our fault-tolerant multi-source configuration.

Efficient Optimistic Concurrency Control Using Loosely Synchronized Clocks

This paper describes an efficient optimistic concurrency control scheme for use in distributed database systems in which objects are cached and manipulated at client machines while persistent storage and transactional support are provided by servers. The scheme provides both serializability and external consistency for committed transactions; it uses loosely synchronized clocks to achieve global serialization. It stores only a single version of each object, and avoids maintaining any concurrency control information on a per-object basis; instead, it tracks recent invalidations on a per-client basis, an approach that has low in-memory space overhead and no per-object disk overhead. In addition to its low space overheads, the scheme also performs well. The paper presents a simulation study that compares the scheme to adaptive callback locking, the best concurrency control scheme for client-server object-oriented database systems studied to date. The study shows that our scheme outperforms adaptive callback locking for low to moderate contention workloads, and scales better with the number of clients. For high contention workloads, optimism can result in a high abort rate; the scheme presented here is a first step toward a hybrid scheme that we expect to perform well across the full range of workloads.
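As a rough illustration of the flavor of timestamp-based optimistic validation, here is a much-simplified Python sketch. It is not the paper's algorithm: it omits the per-client invalidation tracking, external consistency machinery, and all distribution. The data structures, function names, and abort rule are assumptions made up for the example: a committing transaction takes a timestamp from the (loosely synchronized) local clock and aborts if any transaction that already committed during its execution window wrote an object it read.

# Hypothetical sketch of backward validation with loosely synchronized clocks.
import time
from dataclasses import dataclass, field

@dataclass
class Committed:
    commit_ts: float
    write_set: set

@dataclass
class Transaction:
    start_ts: float
    read_set: set = field(default_factory=set)
    write_set: set = field(default_factory=set)

recently_committed = []   # transactions validated and committed so far

def try_commit(txn):
    commit_ts = time.time()            # loosely synchronized local clock
    for other in recently_committed:
        # "other" overlapped txn if it committed after txn started; if it
        # wrote something txn read, txn saw a now-stale value and must abort.
        if other.commit_ts > txn.start_ts and other.write_set & txn.read_set:
            return False
    recently_committed.append(Committed(commit_ts, set(txn.write_set)))
    return True

t = Transaction(start_ts=time.time(), read_set={"x"}, write_set={"y"})
print(try_commit(t))   # True when no conflicting commit overlapped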