New instrumentation techniques allow individual FX trade flows to be monitored across networks and inside applications, and then correlated into a coherent view of the whole trade cycle, all in close to real time.
The real-time element allows alerts and live latency data, by hop, by venue or end-to-end, to be directed to the trading and operational support desks.
The principle of this flow approach allows these new technologies to look at the FX trading environment from a business perspective. Data is collected unobtrusively through a series of probes or agents, whose sole aim is to capture a passing message or transaction off the network or from within an application process, take a time stamp and pass it back to the analysis centre.
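As a minimal sketch only, the following shows the shape of such an in-process agent: it stamps a passing message and forwards a small record to a collection endpoint. The endpoint address, probe and message identifiers are hypothetical, and the fire-and-forget UDP transport is an assumption, not any particular vendor's implementation.

    import json
    import socket
    import time

    ANALYSIS_CENTRE = ("127.0.0.1", 9999)  # hypothetical collection endpoint

    class FlowProbe:
        """In-process agent: observe a message, time-stamp it, forward the record."""

        def __init__(self, probe_id):
            self.probe_id = probe_id
            self.sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

        def observe(self, message_id, flow_name):
            record = {
                "probe": self.probe_id,
                "flow": flow_name,
                "message": message_id,
                "ts_us": time.time_ns() // 1_000,  # microsecond time stamp
            }
            # Fire-and-forget UDP keeps the probe off the critical path.
            self.sock.sendto(json.dumps(record).encode(), ANALYSIS_CENTRE)

    # Usage: stamp a quote as it leaves the quote engine.
    probe = FlowProbe("quote-engine-egress")
    probe.observe(message_id="Q-20240314-000123", flow_name="quote")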
The heterogeneous approach to collecting the data, with time stamps to microsecond accuracy, allows the analysis engine to correlate within and across different business flows. For example, a market data flow may be measured from the point it was initiated by the price supplier, all the way through the feed handler and into the pricing or quote engine. The quote flow may then be correlated from the quote engine to the client gateway.
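The sketch below illustrates the correlation step under the assumption that every probe record carries a flow name and a correlation key; the record values and key names are invented for the example.

    from collections import defaultdict

    # Hypothetical probe records: (flow, correlation_key, probe_point, ts_us)
    records = [
        ("market-data", "EURUSD-tick-881", "price-supplier", 1_000_000),
        ("market-data", "EURUSD-tick-881", "feed-handler",   1_000_450),
        ("market-data", "EURUSD-tick-881", "quote-engine",   1_000_900),
        ("quote",       "Q-000123",        "quote-engine",   1_001_200),
        ("quote",       "Q-000123",        "client-gateway", 1_001_650),
    ]

    def correlate(records):
        """Group records by (flow, key) and order each item's hops by time stamp."""
        flows = defaultdict(list)
        for flow, key, point, ts in records:
            flows[(flow, key)].append((ts, point))
        return {k: sorted(v) for k, v in flows.items()}

    for (flow, key), hops in correlate(records).items():
        end_to_end_us = hops[-1][0] - hops[0][0]
        print(f"{flow}/{key}: {end_to_end_us} us across {len(hops)} probe points")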
The analysis engine can then correlate multiple different flows and associate the links across them, joining the individual item flows across the whole FX business transaction lifecycle, even into the hedging and settlement processes. Doing this requires powerful tools and processing capability that until recently have not been easily available. With that power comes the ability to provide real-time latency data by individual item and by item hop.
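To make the per-hop idea concrete, here is a small sketch that joins two already-correlated flows into one lifecycle view and reports latency hop by hop. The link between the quote and the market data tick, and the timings themselves, are illustrative assumptions.

    # Illustrative (timestamp_us, probe_point) hops for two correlated items.
    market_data_hops = [(1_000_000, "price-supplier"),
                        (1_000_450, "feed-handler"),
                        (1_000_900, "quote-engine")]
    quote_hops = [(1_001_200, "quote-engine"),
                  (1_001_650, "client-gateway")]

    def hop_latencies(hops):
        """Per-hop latency for one correlated item: (from, to, microseconds)."""
        return [(a_pt, b_pt, b_ts - a_ts)
                for (a_ts, a_pt), (b_ts, b_pt) in zip(hops, hops[1:])]

    # A hypothetical link (quote Q-000123 was priced off tick EURUSD-881)
    # lets the two flows be conjoined into one end-to-end lifecycle view.
    lifecycle = sorted(market_data_hops + quote_hops)
    for src, dst, latency_us in hop_latencies(lifecycle):
        print(f"{src} -> {dst}: {latency_us} us")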
There are some technical hurdles to overcome: the system clocks need to be synchronised, the probes must not add to the latency, the volume of data generated can be immense, and correlation strategies across the flows have to be built and maintained. These hurdles are now surmountable. Experience is showing that, using these light-touch systems, FX trading architectures can be instrumented and benchmarked within a matter of days. Where technical teams were once in the dark about when, where and why latency was being experienced, the principal problem is now being solved: latency is measured consistently and accurately, down to the business data, quote, request-for-quote and transaction flows, and into the individual event histories.
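On the clock-synchronisation hurdle, the crude sketch below shows the basic idea of estimating the offset between two hosts from a round trip, assuming symmetric network delay; the remote-clock call is a stand-in, and a real deployment would rely on NTP or PTP rather than anything like this.

    import time

    def estimate_offset(remote_now):
        """Crude round-trip offset estimate between this host and a remote probe host."""
        t1 = time.time_ns()          # local send time
        t_remote = remote_now()      # remote clock reading
        t2 = time.time_ns()          # local receive time
        round_trip = t2 - t1
        # Assume symmetric delay; offset = remote reading minus local midpoint.
        offset_ns = t_remote - (t1 + round_trip // 2)
        return offset_ns, round_trip

    # Stand-in for the remote clock (sketch only): local clock plus a fixed skew.
    def fake_remote_now(skew_ns=250_000):
        return time.time_ns() + skew_ns

    offset_ns, rtt_ns = estimate_offset(fake_remote_now)
    print(f"estimated offset {offset_ns} ns, round trip {rtt_ns} ns")
    # Time stamps from that host would be normalised (raw_ts - offset_ns)
    # before correlation.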
With the complete instrumentation of the business flows, other more dynamic benefits can be realised. The advantage of spanning both software and network/hardware is that there are multiple options for where the probes can be placed. Probes can be turned on and off dynamically, so that additional data is gathered only while latency in a particular segment of the business flow needs to be analysed in more depth, after which the probes are switched off again. The probes themselves do not need to know where the other probes are; they only need to be put in the places that are to be monitored. Once enabled, as messages pass a probe, data is sent back to the analysis centre. This dynamic probe capability can also be used to control the data volumes and processing demand that the monitoring environment itself creates.
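One simple way to picture that on/off behaviour is a central registry of enabled probes that each probe consults before emitting anything. The registry and probe names below are assumptions for illustration only.

    class ProbeRegistry:
        """Central switchboard: a probe checks its flag before emitting any data."""

        def __init__(self):
            self._enabled = {}

        def enable(self, probe_id):
            self._enabled[probe_id] = True

        def disable(self, probe_id):
            self._enabled[probe_id] = False

        def is_enabled(self, probe_id):
            return self._enabled.get(probe_id, False)

    registry = ProbeRegistry()

    def observe(probe_id, message_id):
        # Only time-stamp and emit while this segment is under investigation;
        # otherwise the cost is a single dictionary lookup.
        if registry.is_enabled(probe_id):
            print(f"probe {probe_id} stamped message {message_id}")

    observe("client-gateway-ingress", "Q-000123")   # disabled: nothing emitted
    registry.enable("client-gateway-ingress")
    observe("client-gateway-ingress", "Q-000124")   # enabled: record emitted
    registry.disable("client-gateway-ingress")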
The data is immediate and, once gathered, is available for manipulation through the alerting, aggregation, historical and charting engines you would expect.
There are some interesting spin-offs from this data collection. There is a new set of business flow data that can be analysed in real time and then stored in appropriate historical databases for further analysis. Collecting the flow data allows business and technical data to be brought together, so alerts can be raised when simple business rules are broken, for example a customer using your FX flow business consistently hitting the quote at the extreme end of the quote validity window.
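A rule of that kind might be sketched as follows; the validity window, the "extreme end" threshold and the alerting count are all assumed figures for illustration, not values from any real system.

    QUOTE_VALIDITY_MS = 2_000        # hypothetical quote validity window
    LATE_HIT_THRESHOLD = 0.9         # "extreme end" = final 10% of the window
    ALERT_AFTER_N_LATE_HITS = 5      # how many late hits before alerting

    late_hits = {}

    def check_late_hit(customer, quote_sent_ts_ms, hit_received_ts_ms):
        """Flag customers who consistently deal in the final moments of validity."""
        age_ms = hit_received_ts_ms - quote_sent_ts_ms
        if age_ms >= LATE_HIT_THRESHOLD * QUOTE_VALIDITY_MS:
            late_hits[customer] = late_hits.get(customer, 0) + 1
            if late_hits[customer] >= ALERT_AFTER_N_LATE_HITS:
                print(f"ALERT: {customer} repeatedly hitting quotes at "
                      f"{age_ms} ms of a {QUOTE_VALIDITY_MS} ms window")

    # Example: hits arriving 1,950 ms into a 2,000 ms window count as late.
    for _ in range(5):
        check_late_hit("client-42", quote_sent_ts_ms=0, hit_received_ts_ms=1_950)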