Friday, February 4, 2011

The new instrumentation world in forex technology

New instrumentation techniques allow individual FX trade flows to be monitored across networks and inside applications, and then correlated into a coherent view of the whole trade cycle, all in close to real time.
The real-time element allows alerts and live latency data, by hop, by venue or end-to-end, to be directed to the trading desks and operational support desks.
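To make that concrete, here is a minimal sketch in Python of how per-hop latency figures might be turned into desk alerts. The hop names, thresholds and alert format are illustrative assumptions, not any particular vendor's product.

```python
# Per-hop latency thresholds in microseconds for a hypothetical quote path.
HOP_THRESHOLDS_US = {
    "feed_handler -> quote_engine": 250,
    "quote_engine -> client_gateway": 400,
    "end_to_end": 800,
}

def check_latencies(measured_us: dict[str, float]) -> list[str]:
    """Return an alert line for each hop whose latency breaches its threshold."""
    alerts = []
    for hop, limit in HOP_THRESHOLDS_US.items():
        observed = measured_us.get(hop)
        if observed is not None and observed > limit:
            alerts.append(f"ALERT {hop}: {observed:.0f}us > {limit}us")
    return alerts

# Example: the feed-handler hop is slow, so the desk sees one alert line.
print(check_latencies({
    "feed_handler -> quote_engine": 310.0,
    "quote_engine -> client_gateway": 120.0,
    "end_to_end": 430.0,
}))
```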

The principle of this flow-based approach allows these new technologies to look at the FX trading environment from a business perspective. Data is collected unobtrusively through a series of probes or agents, whose sole aim is to capture a passing message or transaction, off the network or from within processes inside an application, take a time stamp and pass it back to the analysis centre.
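The probe idea can be sketched in a few lines. The event fields, the UDP transport and the collector address below are assumptions for illustration; the essential points are the time stamp and the fire-and-forget hand-off to the analysis centre.

```python
import json
import socket
import time

# Hypothetical address of the central analysis engine.
ANALYSIS_CENTRE = ("analysis.example.com", 9999)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

def on_message_observed(flow_id: str, hop: str) -> None:
    """Called each time the probe sees a message pass its measurement point."""
    event = {
        "flow_id": flow_id,       # correlation key carried by the message
        "hop": hop,               # which measurement point this probe sits at
        "ts_ns": time.time_ns(),  # local time stamp, nanosecond resolution
    }
    # Fire and forget over UDP: the probe must never block the business flow.
    sock.sendto(json.dumps(event).encode(), ANALYSIS_CENTRE)
```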

This heterogeneous approach to collecting the data, time-stamped to microsecond accuracy, allows the analysis engine to correlate events within and across different business flows. For example, a market data flow may be measured from the point at which it is initiated by the price supplier, all the way through the feed handler and into the pricing or quote engine. The quote flow may then be correlated from the quote engine to the client gateway.
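The correlation step might then look something like the following sketch, which groups time-stamped probe events by flow id and derives per-hop and end-to-end latencies. The path and event shape match the probe sketch above and are, again, assumptions for illustration.

```python
from collections import defaultdict

# Expected order of measurement points along a quote flow.
PATH = ["price_supplier", "feed_handler", "quote_engine", "client_gateway"]

def correlate(events: list[dict]) -> dict[str, dict[str, float]]:
    """Group probe events by flow id and compute latencies in microseconds."""
    by_flow = defaultdict(dict)
    for e in events:
        by_flow[e["flow_id"]][e["hop"]] = e["ts_ns"]

    results = {}
    for flow_id, stamps in by_flow.items():
        hops = {}
        # Latency of each consecutive hop where both ends were observed.
        for a, b in zip(PATH, PATH[1:]):
            if a in stamps and b in stamps:
                hops[f"{a} -> {b}"] = (stamps[b] - stamps[a]) / 1_000
        # End-to-end: first measurement point to last.
        if PATH[0] in stamps and PATH[-1] in stamps:
            hops["end_to_end"] = (stamps[PATH[-1]] - stamps[PATH[0]]) / 1_000
        results[flow_id] = hops
    return results
```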

FX business flow instrumentation

As in Formula 1 motor racing, the drive for real-time latency monitoring is coming directly from the technical teams, who are being put under intense pressure by the business to sort latency out. The FX business sees a P&L issue and assumes that a technology problem is the root cause.
With the advent of real-time latency monitoring, the business and the technologists can track the behaviour of individual event histories and find the causes of a P&L problem. In short, it is this explanatory power that is the initial benefit to an organisation; once this capability is in place, other benefits follow.

Anyone who has sat on the support desk of an FX trading environment knows that a vast array of computing technology is used to monitor and report on these systems. Can there be a new way of monitoring these complex, rapidly changing architectures without impacting the highly sensitive low-latency business flows? Is there a need for a new way of monitoring these systems?

Until recently, what has been available falls into two generations of technology monitoring. The first generation was built around hardware and operating system capabilities, and measured the utilisation of individual components within boxes, such as memory and CPU usage.
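A first-generation check is easy to picture: sample the box-level counters and report them. The sketch below uses the third-party psutil package; the interval and output format are illustrative. Note that nothing in it says anything about an individual trade flow, which is precisely the limitation.

```python
import time
import psutil

while True:
    cpu_pct = psutil.cpu_percent(interval=1)   # CPU utilisation over one second
    mem_pct = psutil.virtual_memory().percent  # physical memory in use
    print(f"cpu={cpu_pct:.0f}% mem={mem_pct:.0f}%")
    time.sleep(4)  # sample roughly every five seconds
```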

Then, in the last five years, we have seen the advent of a second generation of products and approaches that started to extract some of the business logic from the price, quote and trade data in order to understand individual flows: the routes they take and the time they take, normally at best a few minutes after the flow events. These are typically focused on network traffic or on intra-application performance testing.
