Time out of joint

Financial trading venues and trading systems operate so quickly, and rely so deeply on clocks, that events like the one noted in this FINRA report are more common than many people realize:

The findings stated that the firm transmitted to OATS New Order Reports and related subsequent reports where the timestamp for the related subsequent report occurred prior to the receipt of the order.

In electronic trading such errors are easy to make. Two servers split the work in some data center, and the clock on one runs 10 milliseconds faster than the clock on the other. The faster server sends an order to a market and stamps it with the time. The slower server receives the response from the market and stamps it with the time.

Real time     Server One                        Server Two
12:00.000     Send order. Clock = 12:00.010     Clock = 12:00.000
12:00.005     Clock = 12:00.015                 Get confirmation. Clock = 12:00.005

In fact, for many trading organizations this scenario does not even require two servers, because a single server's clock can jump backward.
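To make the inversion concrete, here is a minimal Python sketch (purely illustrative, with made-up times and offsets matching the table above) that stamps the order on the fast server and the confirmation on the accurate one:

    from datetime import datetime, timedelta

    # Illustrative only: server one's clock runs 10 ms fast, server two's is accurate.
    SERVER_ONE_OFFSET = timedelta(milliseconds=10)
    SERVER_TWO_OFFSET = timedelta(milliseconds=0)

    def server_one_clock(real_time):
        return real_time + SERVER_ONE_OFFSET

    def server_two_clock(real_time):
        return real_time + SERVER_TWO_OFFSET

    # Real (wall-clock) times of the two events, 5 ms apart.
    t_send = datetime(2016, 1, 4, 12, 0, 0)          # order leaves server one
    t_confirm = t_send + timedelta(milliseconds=5)   # confirmation reaches server two

    order_stamp = server_one_clock(t_send)        # 12:00:00.010
    confirm_stamp = server_two_clock(t_confirm)   # 12:00:00.005

    print("order stamped:  ", order_stamp.time())
    print("confirm stamped:", confirm_stamp.time())
    # The log now shows the confirmation arriving before the order it answers --
    # the same inversion the FINRA finding describes.
    assert confirm_stamp < order_stamp

A single server whose clock steps backward between the two stamps produces the same kind of record.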

 

MiFID II timestamp regulations

[Image: Hans Holbein the Younger, Der Kaufmann Georg Gisze (The Merchant Georg Gisze)]

There are a number of places in the new guidelines that increase the rigor required for timestamping data. One key part covers SIs (systematic internalizers), which operate kind of like private exchanges. TimeKeeper’s ability to produce a traceable audit trail and to use multiple time sources is designed for precisely this kind of application.

Moreover, the inclusion of the timestamp in the pre-trade information published by the SI is a key information for the client to better analyse ex-post the quality of prices quoted by SIs, and in particular to assess with accuracy the responsiveness of the SI and the validity periods of quotes. Without a timestamp assigned by the SI itself, market participants would need to rely on the information potentially provided by data vendors, the timestamps of which would be less accurate, especially when quotes are published through a website as pointed out by some respondents to the question on access to the quotes of SIs.

Image is by Hans Holbein the Younger (1497/1498 – 1543).

Data tiedowns with reliable time stamping

Management teams are growing more reliant on the ability to immediately access and quickly sort through massive amounts of data to find the information they need – Data Governance for Financial Institutions

A “data tiedown” is a reliable, cross-checked timestamp that secures a data item or group of data items to one or more fixed times – such as the time of collection, generation, or storage. Data analysis cannot compensate for bad data, and distributed processing makes it easy to generate bad data, whether from new inputs or by corrupting existing data. As transaction rates increase and data sets grow, traditional locking techniques become both more expensive to implement and easier to get wrong. Timestamps can be an important part of data integrity assurance because they impose order on data, even when it is generated or stored by multiple processing elements that are only loosely coupled.

But timestamping is itself a fragile process. One common error mode on networks that rely on IEEE 1588 Precision Time Protocol (PTP) for clock synchronization is a loss or incorrect calculation of the number of leap seconds since the epoch. If data integrity depends on timestamps, a sudden jump of 35 or 16 seconds can have a devastating impact. So the integrity of the timestamps themselves also needs to be safeguarded. One way to do that is what TimeKeeper does – track multiple reference time sources so that the clock can be continuously cross-checked against other clocks. The timestamp is then provided with a record of how well it matches secondary, tertiary, or deeper sources. When these all match, the timestamp has high reliability. Errors can also be traced – there is a forensic methodology for validating time and locating problems. The data is tied to a timestamp and the timestamp is tied to a log that verifies its accuracy – the combination is an example of a data tiedown.
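The following Python sketch is only a rough illustration of that idea, not TimeKeeper’s implementation; the source names, tolerance, and record layout are invented. It stamps a data item, records how far the local clock disagreed with each reference source at that moment, and flags large disagreements such as a mishandled leap-second offset:

    import time
    from dataclasses import dataclass, field

    @dataclass
    class Tiedown:
        item_id: str
        timestamp: float        # local clock reading at stamping time
        offsets: dict = field(default_factory=dict)  # reference name -> seconds of disagreement
        max_disagreement: float = 0.0

    def stamp_with_tiedown(item_id, reference_readings, tolerance=0.001):
        """Stamp a data item and cross-check the local clock against references.

        reference_readings: dict mapping source name -> that source's current time
        tolerance: how much disagreement (in seconds) we accept before flagging
        """
        local_now = time.time()
        offsets = {name: local_now - ref for name, ref in reference_readings.items()}
        worst = max(abs(o) for o in offsets.values()) if offsets else float("inf")
        record = Tiedown(item_id, local_now, offsets, worst)
        if worst > tolerance:
            # A large jump (for example a mishandled leap-second offset of 16 or
            # 35 seconds) is caught here instead of silently corrupting ordering.
            print(f"WARNING: {item_id} timestamp disagrees with references by {worst:.3f}s")
        return record

    # Example: two references agree with the local clock; a third has lost leap seconds.
    now = time.time()
    readings = {"gps": now - 0.0002, "ptp_grandmaster": now + 0.0003, "ntp_pool": now - 16.0}
    tiedown = stamp_with_tiedown("trade-42", readings)

The data item is then stored together with its Tiedown record, so anyone auditing it later can see not just when it was stamped but how trustworthy that stamp was.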

 

Data integrity depends on time synchronization

In a distributed compute system, such as any multi-device transaction system or database, time synchronization is essential to data integrity. The simplest case is a multi-step transaction spread over multiple compute devices – something common to a wide range of applications. Consider a financial trading application in which machine A gets a tick containing a price change, machine B sends a bid out to an exchange via machine X, which performs a sanity/safety check, machine D gets the confirmation of the trade, and machine E reconciles the book. We distribute this computation over five machines because we need the compute and I/O bandwidth (both network and storage I/O), because the system needs to keep operating even if machines go down, and because different machines may have different advantages. Without authoritative timestamps, we cannot serialize this single transaction in the record or log. We can’t analyze performance to see where the bottlenecks are. We can’t catch emerging problems before failure. We don’t have a sensible forensic log.
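For a sense of what goes wrong, here is a small, purely illustrative Python sketch (the machine names follow the example above; the clock offsets are invented): each machine stamps its step with its own skewed clock, and sorting the merged log by timestamp no longer reproduces the causal order of the transaction.

    from datetime import datetime, timedelta

    # Invented per-machine clock errors, in milliseconds.
    clock_offset_ms = {"A": +4, "B": -3, "X": +7, "D": -6, "E": +2}

    # True causal order of one transaction, with 2 ms between steps.
    steps = [
        ("A", "tick received"),
        ("B", "bid prepared"),
        ("X", "safety check, order sent"),
        ("D", "confirmation received"),
        ("E", "book reconciled"),
    ]

    t0 = datetime(2016, 1, 4, 12, 0, 0)
    log = []
    for i, (machine, what) in enumerate(steps):
        real_time = t0 + timedelta(milliseconds=2 * i)
        stamped = real_time + timedelta(milliseconds=clock_offset_ms[machine])
        log.append((stamped, machine, what))

    # Reconstructing the transaction by sorting the merged log on timestamps:
    for stamped, machine, what in sorted(log):
        print(stamped.time(), machine, what)
    # Output order: B, D, A, E, X -- the confirmation (D) appears before the
    # order was even sent (X), so the log cannot serialize the transaction.

With clocks held in sync and cross-checked against multiple references, the same merged log reproduces the true order and can support performance analysis and forensics.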