Friday, February 4, 2011

FX business flow instrumentation

Like Formula 1 in automobile racing, the drive for real-time latency monitoring is coming directly from the technical teams here, as they are being put under intense pressure by the business to sort it out. The FX business sees a P&L issue and assumes that a technology problem is the root cause.
With the advent of real-time latency monitoring, the business and technology teams can track the behaviour of individual event histories and find the causes of a P&L problem. In short, it is this explanatory power that is the initial benefit to an organisation. Once you have this capability, other benefits can be achieved.

Anyone who has sat on the support desk of an FX trading environment can see that there is a vast array of computing technology being utilised to monitor and report on systems. Can there be a new way of monitoring these complex, rapidly changing architectures without impacting the highly sensitive low-latency business flows? Is there a need for a new way of monitoring these systems?

What has been available until recently has been two generations of technology monitoring systems. The first generation was built around hardware and operating system capability, measuring the utilisation of individual components within boxes, such as memory and CPU usage.

Then in the last five years we have seen the advent of a second generation of products and approaches that started to take some of the business logic out of the price, quote and trade data to understand the individual flows, the routes they take and the time they take, normally at best a few minutes after the flow events. These are typically focussed on network traffic or on intra-application performance testing environments.

These first two generations have focussed on the root-cause technology space: is the machine working at 99.9% efficiency, where are the bottlenecks inside my application, and so on.

Co-location and highly specified hardware and network provision are available to the FX purchaser today: millisecond response times, microsecond latencies, and sometimes even nanosecond targets can be offered. But, as in the Formula 1 scenario, is our instrumentation keeping pace with the underlying technology?
The new generation of tools looks at the problem from the business perspective, where the focus is on understanding the trade flows as they pass across different systems. The instrumentation of the business flows is done by tracking the messages, abstracting a mixture of business and technical data to provide real-time feedback. These tools enable the business to track back quickly from a loss-making trade and re-assemble all the price and quote inputs. Post-trade analysis can even be performed algorithmically, providing a live feedback loop into the quoting engines.
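
To give a flavour of what "tracking the messages" means in practice, the sketch below shows a minimal flow envelope: each price, quote or trade message is stamped with a correlation id and a hop timestamp as it crosses a system boundary, so the full event history can be re-assembled and queried downstream. This is an illustration only, not any particular vendor's product; the class names, field names and event labels are hypothetical.

```python
import time
import uuid
from dataclasses import dataclass, field
from typing import Optional

# Hypothetical hop record: one entry per system boundary the message crosses.
@dataclass
class Hop:
    system: str   # e.g. "price-feed", "pricing-engine", "client-gateway"
    event: str    # e.g. "market_data_in", "quote_generated", "quote_out"
    ts_ns: int    # wall-clock timestamp in nanoseconds

@dataclass
class FlowEnvelope:
    correlation_id: str = field(default_factory=lambda: uuid.uuid4().hex)
    business_keys: dict = field(default_factory=dict)  # e.g. {"ccy_pair": "EURUSD"}
    hops: list = field(default_factory=list)

    def stamp(self, system: str, event: str) -> None:
        """Record a hop as the message passes a system boundary."""
        self.hops.append(Hop(system, event, time.time_ns()))

    def latency_ms(self, from_event: str, to_event: str) -> Optional[float]:
        """Elapsed time between two named events in this flow, if both were seen."""
        start = next((h.ts_ns for h in self.hops if h.event == from_event), None)
        end = next((h.ts_ns for h in self.hops if h.event == to_event), None)
        if start is None or end is None:
            return None
        return (end - start) / 1e6

# Example: a market data tick flows through to a client quote.
env = FlowEnvelope(business_keys={"ccy_pair": "EURUSD"})
env.stamp("price-feed", "market_data_in")
env.stamp("pricing-engine", "quote_generated")
env.stamp("client-gateway", "quote_out")
print("tick-to-quote latency:", env.latency_ms("market_data_in", "quote_out"), "ms")
```

In a real deployment the stamping and publishing would sit off the latency-critical path, but the principle is the same: carry enough business and technical context on each message to answer flow questions in real time.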

The FX business has a number of crucial questions. In a low-latency world it wants to know how long particular market data is taking to reach its pricing engines, how quickly quotes are being generated, and how quickly clients are trading against those prices. It would like alerts that can tell it when individual prices are stale or old. It wants to see whether price-to-quote or price-to-trade times are taking longer than expected. It is asking whether live latency data can be fed into its algorithms to drive more aggressive margins. And it wants to know when its trading systems are likely to fail so that corrective action can be taken in time.
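
To make one of these questions concrete, a stale-price alert is little more than a comparison of each instrument's last price timestamp against a threshold, evaluated continuously rather than forensically. The sketch below assumes illustrative instrument names and thresholds, not production values.

```python
import time
from typing import Dict, Optional

# Illustrative thresholds: how old a price may be before it is considered stale.
STALE_THRESHOLD_MS: Dict[str, float] = {
    "EURUSD": 50.0,    # liquid pair: expect a refresh well inside 50ms
    "USDTRY": 500.0,   # less liquid pair: allow a longer gap
}

last_price_ts_ns: Dict[str, int] = {}  # instrument -> timestamp of last price update

def on_price_update(instrument: str) -> None:
    """Called whenever a new price for the instrument is observed."""
    last_price_ts_ns[instrument] = time.time_ns()

def stale_instruments(now_ns: Optional[int] = None) -> Dict[str, float]:
    """Return instruments whose last price is older than their threshold, with age in ms."""
    now_ns = now_ns or time.time_ns()
    stale = {}
    for instrument, threshold_ms in STALE_THRESHOLD_MS.items():
        ts = last_price_ts_ns.get(instrument)
        age_ms = float("inf") if ts is None else (now_ns - ts) / 1e6
        if age_ms > threshold_ms:
            stale[instrument] = age_ms
    return stale

# Example usage: poll on a timer and raise an alert for anything stale.
on_price_update("EURUSD")
time.sleep(0.1)  # simulate a 100ms gap with no further EURUSD updates
for instrument, age in stale_instruments().items():
    print(f"ALERT: {instrument} price is stale ({age:.1f} ms old)")
```

The same pattern extends naturally to price-to-quote and price-to-trade thresholds, with the alert feeding the quoting engines or the support desk rather than a log file.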

Today most of this analysis is being done forensically, by analysing log files and then ascribing business information to those flows, all after the fact. This type of analysis can sometimes take days to identify the individual price data feeds that were impacting the quote and trade engines. We are in the middle of a microsecond battle for margin, and we take days to carry out an investigation! As in the Formula 1 scenario, this is just not sustainable.
