Setup and Problem

Algorithmic execution is presented by the sell side as a solution to best execution. However, given the large number of liquidity providers (LPs), each offering a significant number of algos, selecting the right algo for the job is not a trivial task. The bullet points below describe a typical situation:

  • A Market Participant (MP) uses a number of Broker Algos to execute relatively large orders
  • The MP's objective is to find an optimal balance between minimizing market impact and accepting time risk. Different benchmarks can be considered, but implementation shortfall is an obvious starting point
  • Each Broker provides a generic description of its algos and all of the individual executions, but not the original orders or rejects
  • The MP would like to rank the performance of different algos and develop a quantitative framework for applying different algos to different situations.

Path to a solution:

  • Load all execution data into the Tradefeedr Platform
  • Use the Tradefeedr GUI standard statistics to do a basic algo ranking
  • Use the Tradefeedr Smart API to create further metrics and visualizations and develop a proprietary Broker Algo usage framework

     

Implementation Shortfall

We will start with Implementation Shortfall (IS) as the benchmark. This is one of the oldest benchmarks, going back to the 1980s (see Perold, 1988). The IS logic comes from the portfolio management industry: a portfolio manager records a theoretical price before sending orders to the execution desk, and the execution desk then does the actual execution. The performance difference between the paper portfolio and the real portfolio is called the implementation shortfall.

In the context of algo trading, a "fair" mid-price prevailing at the time the execution is decided serves as the benchmark. The algo is then executed and its size-weighted average price is compared with the benchmark price. The difference between the benchmark and the actual execution price, in basis points (or $/M, as is more customary in FX), is the implementation shortfall.
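As a minimal sketch (not the Tradefeedr API), the shortfall of a single algo run can be computed from its child fills and the arrival mid; the column names and inputs below are illustrative assumptions:

```python
import pandas as pd

def implementation_shortfall(fills: pd.DataFrame, arrival_mid: float, side: str) -> dict:
    """Shortfall of one algo run versus the mid prevailing when execution was decided.

    fills: one row per child execution with 'price' and 'quantity' columns (assumed layout).
    side:  'BUY' or 'SELL'; positive shortfall = cost, negative = saving.
    """
    # Size-weighted average execution price of the run
    avg_px = (fills["price"] * fills["quantity"]).sum() / fills["quantity"].sum()
    sign = 1.0 if side.upper() == "BUY" else -1.0
    cost_frac = sign * (avg_px - arrival_mid) / arrival_mid
    return {
        "shortfall_bps": cost_frac * 1e4,        # basis points
        "shortfall_usd_per_m": cost_frac * 1e6,  # $ per million traded
    }

# Example: a BUY order filled in three child trades, benchmarked against arrival mid 1.1000
fills = pd.DataFrame({"price": [1.1001, 1.1003, 1.1002], "quantity": [3e6, 5e6, 2e6]})
print(implementation_shortfall(fills, arrival_mid=1.1000, side="BUY"))
```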

Algos are used to improve execution. The main tradeoff in algo trading is paying less spread (or minimizing market impact) in exchange for taking more time (and hence price) risk. In a simple time slicer algo, a fraction of the total order is executed every time slice. Even if the execution is deterministic, the saving in spread paid (because each executed quantity is smaller) may compensate for the additional price risk and justify the use of the algo. Whether the spread/time tradeoff is acceptable or can be improved is exactly the question an algo grading framework must answer.
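To make the tradeoff concrete, here is a toy sketch (illustrative assumptions only, not a Tradefeedr calculation): splitting the parent order into smaller clips reduces the spread paid per clip, while the price risk grows with the horizon. For an order executed evenly over a Brownian mid, the variance of the average price relative to arrival is sigma^2 * T / 3.

```python
import numpy as np

def slicer_tradeoff(total_qty, n_slices, horizon_min, half_spread_fn, vol_per_sqrt_min):
    """Return (expected spread cost, price risk) in $/M for an even time slicer.

    half_spread_fn(q): assumed half-spread paid for a clip of size q, in $/M.
    vol_per_sqrt_min:  assumed mid-price volatility in $/M per sqrt(minute).
    """
    clip = total_qty / n_slices
    spread_cost = half_spread_fn(clip)                          # paid on every clip, $/M
    price_risk = vol_per_sqrt_min * np.sqrt(horizon_min / 3.0)  # std-dev of shortfall, $/M
    return spread_cost, price_risk

# Example: 50m parent order, 10 clips over 30 minutes, toy square-root spread-vs-size curve
half_spread_fn = lambda q: 5.0 * np.sqrt(q / 1e6)   # hypothetical spread curve
print(slicer_tradeoff(50e6, 10, 30, half_spread_fn, vol_per_sqrt_min=40.0))
```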

The results of a standard implementation shortfall algo analysis are shown in Figure 1. We consider five typical algo types, as per the descriptions below:

  • Sweep – trade the whole order against all available FX liquidity at once. It is equivalent to "sweeping" the book, except that in FX there is no central book, but rather an aggregation of individual LP quotes.
  • Aggressive TWAP – split the time horizon into equal intervals and cross the spread (aggress) in small size, thus trading spread for time risk.
  • Passive TWAP – same as above, but orders are placed in a resting fashion (trying to earn the spread rather than pay it), switching to aggressive only when behind schedule.
  • Passive Splitter – randomize the time intervals to avoid the signalling risk of a TWAP. Execution is also done passively, switching to aggressive only when behind schedule.
  • Liquidity Seeker – stays passive most of the time. Targets a percentage of visible liquidity with passive orders on one side to accumulate the required position. Can operate in stealth (hidden orders) mode if the required position is big.

Figure 1 shows typical Implementation Shortfall results. Implementation shortfall is the cost of execution, while time is the risk taken by the algo. If an algo takes more time risk, it should be expected to deliver overall savings in implementation shortfall; otherwise the risk is not worth it. This is not dissimilar to the risk-reward tradeoff in investment. As Figure 1 shows, in this example the overall trend in IS is down as the time taken by an algo increases. This is the correct trend and direction: the more risk we take, the higher the return (that is, the cost saving, or lower shortfall) should be. The next step is typically to analyse individual algo executions and compare specific algos against each other.

Figure 1: Implementation Shortfall

 

 

As described above, in algo trading the major source of a decrease in implementation shortfall is the saving on spread paid. In fact, the execution price of the algo can be decomposed into spread and price evolution components:

Algo Execution Price = Size-Weighted Spread + Size-Weighted Mid Price

From the expression above it is obvious that there are two ways to improve the execution price: pay less spread or forecast the mid-price well. Figure 2 shows the spread paid as a function of time taken. It is clear that as algos become less aggressive they pay less spread, either because they trade in smaller size or because they try to collect the spread instead of paying it.
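A minimal sketch of this decomposition, assuming each child fill also carries the mid prevailing at fill time (column names such as 'mid_at_fill' are illustrative, not Tradefeedr fields):

```python
import pandas as pd

def decompose_execution_price(fills: pd.DataFrame, side: str) -> dict:
    """Split the size-weighted execution price into spread-paid and mid components.

    fills: child executions with 'price', 'mid_at_fill' and 'quantity' columns (assumed layout).
    """
    w = fills["quantity"] / fills["quantity"].sum()
    sign = 1.0 if side.upper() == "BUY" else -1.0
    size_weighted_mid = (w * fills["mid_at_fill"]).sum()
    # Spread paid per fill: signed distance of the fill price from the prevailing mid
    size_weighted_spread = (w * sign * (fills["price"] - fills["mid_at_fill"])).sum()
    return {
        "size_weighted_mid": size_weighted_mid,
        "size_weighted_spread": size_weighted_spread,
        # For a BUY: price = mid + spread; for a SELL: price = mid - spread
        "reconstructed_price": size_weighted_mid + sign * size_weighted_spread,
    }
```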

Figure 2: Spread Paid

 

Comparing Algos

In the example above we have seen the spread/price tradeoff embedded in algo execution. The implementation shortfall metric reduces algo performance to one number per algo execution (the shortfall in $/M). However, the variability of this number across different algo runs gives an idea of the risks involved.

Suppose we now would like to compare two classes of algo. For example, we would like to compare the Aggressive and Passive TWAP and understand whether the additional risk taken by the Passive version is worth taking (you can substitute more colourful algo names from your Broker's marketing material). There will be a difference in shortfall across these algo types, but:

  • Is the difference between algo performances statistically significant?
  • Is the difference between algo performances economically significant?

The first question can be answered purely by comparing the execution data of the two algo types. The crucial assumption is that within one algo the distribution of Spread Paid is the same across executions, so that averaging it over algo runs makes sense statistically.

A relatively standard statistical test, such as the Wilcoxon test, can then be applied to see whether Spread Paid differs across algo classes. Figure 3 presents an example analysis. It makes pairwise Spread Paid comparisons between algos, starting from the most aggressive one and picking the pairs in order of decreasing aggressiveness. Figure 3 shows that in 3 out of 4 pairwise comparisons patience pays off: taking longer results in statistically lower spread paid. In one case, though, the p-value is 0.15 (Passive versus Aggressive TWAP), so the difference in Spread Paid is not statistically significant.
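As an illustrative sketch, the pairwise tests behind Figure 3 could be run with SciPy's rank-sum (Mann-Whitney/Wilcoxon) test; the DataFrame layout and algo names below are assumptions:

```python
import pandas as pd
from scipy.stats import mannwhitneyu

# runs: one row per algo run with 'algo' and 'spread_paid' ($/M) columns -- assumed layout
def pairwise_spread_tests(runs: pd.DataFrame, order: list) -> pd.DataFrame:
    """Test whether Spread Paid differs between consecutive algo types.

    'order' lists algo names from most to least aggressive; each adjacent pair
    is compared with a two-sided rank-sum test.
    """
    rows = []
    for fast, slow in zip(order, order[1:]):
        x = runs.loc[runs["algo"] == fast, "spread_paid"]
        y = runs.loc[runs["algo"] == slow, "spread_paid"]
        stat, pval = mannwhitneyu(x, y, alternative="two-sided")
        rows.append({"pair": f"{fast} vs {slow}",
                     "median_diff": x.median() - y.median(),
                     "p_value": pval})
    return pd.DataFrame(rows)

# Hypothetical ordering from most to least aggressive (4 pairwise comparisons)
order = ["Sweep", "AggressiveTWAP", "PassiveTWAP", "PassiveSplitter", "LiquiditySeeker"]
```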

Figure 3: Algo Spread Paid Pairwise comparison

 

The economic comparison is much more subtle, as it depends heavily on the Trader's objective function: how many units of risk the Trader is happy to accept in order to decrease the expected cost (implementation shortfall) by a given amount. Figure 4 below illustrates the point. As the Trader moves away from the Aggressive TWAP, the overall cost drops from over 200 $/M to negative, all for additional volatility of around 200 $/M. On the surface this is a good improvement. The Liquidity Seeker, on the other hand, stands out: it is the algo which takes more risk while delivering no reduction in cost.
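A minimal sketch of the numbers behind a Figure 4 style chart, taking the mean shortfall per algo type as the cost and its standard deviation across runs as the risk (column names are assumed):

```python
import pandas as pd

# runs: one row per algo run with 'algo' and 'is_usd_per_m' columns (assumed layout)
def risk_cost_summary(runs: pd.DataFrame) -> pd.DataFrame:
    """Mean implementation shortfall (cost) and its dispersion (risk) per algo type."""
    return (runs.groupby("algo")["is_usd_per_m"]
                .agg(cost_mean="mean",   # expected shortfall, $/M
                     risk_std="std",     # volatility of shortfall across runs, $/M
                     n_runs="count")
                .sort_values("risk_std"))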

Figure 4: Risk vs Cost Tradeoff

 

Figure 5 highlights the potential problem with looking only at point estimates. The distribution of Implementation Shortfall is much more volatile than that of Spread Paid, because market direction plays a strong role. The only statistically significant difference is the one between Aggressive TWAP and Sweep. Interestingly, while the Liquidity Seeker looks terrible in Figure 4, it is NOT statistically different from the Passive Splitter. Hence point estimates can be very misleading.

Figure 5: Implementation Shortfall Statistical Tests

 

Luck and Implementation Shortfall

Implementation shortfall as a benchmark has many good properties. Possibly the best one is that it cannot be "tricked" by the executing brokers, unlike benchmarks which include prices observed after the execution start. However, it is very dependent on market price dynamics. In other words, as the algo time horizon increases, the element of luck plays a much larger role. It is very easy to beat the inception price for a BUY order in a falling market.

For example, Figure 6 below highlights how much the implementation shortfall depends on market direction. The result is interpreted as follows: if the order is a BUY and the market is moving UP, we record a positive market move and expect a positive shortfall, since costs are high for a BUY order in a rising market; for a SELL order the logic is reversed. Typically the slope of the curve is above 0.5, which is what is observed in Figure 6. A coefficient of exactly 0.5 corresponds to execution spread relatively evenly over the duration of the algo; if the execution is skewed towards the end, the coefficient increases towards 1.
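A sketch of the Figure 6 relationship, regressing shortfall on the signed market move over the algo horizon; the column names and sign convention are assumptions:

```python
import numpy as np
import pandas as pd

# runs: one row per algo run, with assumed columns:
#   'is_usd_per_m'   - implementation shortfall, $/M
#   'mkt_move_usd_m' - market move over the algo horizon in $/M, signed so that an
#                      adverse move (up for a BUY, down for a SELL) is positive
def shortfall_vs_market_move(runs: pd.DataFrame):
    """Regress shortfall on the signed market move; returns (slope, intercept)."""
    slope, intercept = np.polyfit(runs["mkt_move_usd_m"], runs["is_usd_per_m"], deg=1)
    return slope, intercept
```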

Figure 6: Market Move and Implementation Shortfall

 

 

Aggregation

The Tradefeedr platform allows easy aggregation of the Implementation Shortfall statistic and makes it possible to track it over time. As Figure 7 demonstrates, the ISs are aggregated by symbol. The comparison with spreadPerChild is also very useful. For example, the results for GBPUSD in Figure 7 (see the chart) show that algo execution overall has been very costly in GBPUSD (106 $/M, versus 38 $/M for outright spread crossing). Therefore algo usage potentially costs the trader 106 - 38 = 68 $/M. Looking at the aggregated statistics can quickly highlight potential problems. However, all the differences should still be checked with the statistical tests described above.
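A hedged sketch of the symbol-level aggregation behind Figure 7, on an assumed per-run DataFrame layout, with spreadPerChild taken as the cost of crossing the spread outright:

```python
import pandas as pd

# runs: one row per algo run with 'symbol', 'is_usd_per_m' and 'spreadPerChild' columns (assumed)
def aggregate_by_symbol(runs: pd.DataFrame) -> pd.DataFrame:
    """Average shortfall versus outright spread cost, per currency pair."""
    agg = runs.groupby("symbol").agg(
        avg_shortfall=("is_usd_per_m", "mean"),
        avg_spread_per_child=("spreadPerChild", "mean"),
        n_runs=("is_usd_per_m", "count"),
    )
    # Positive values suggest algo usage cost more than simply crossing the spread,
    # e.g. 106 - 38 = 68 $/M for GBPUSD in the example above
    agg["excess_cost_vs_spread"] = agg["avg_shortfall"] - agg["avg_spread_per_child"]
    return agg.sort_values("excess_cost_vs_spread", ascending=False)
```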

Figure 7: Aggregation

 

 

 

Visual Comparison

The Tradefeedr platform provides a number of tools to understand algo execution. Sometimes the most important one is the ability to track the performance of an algo visually, down to tick level. Figure 8 presents an example of how the execution of an individual algo can be visualized. The algo below is a TWAP algo: it does a large number of small trades (the size of each dot is proportional to the size of the trade) over time on a fixed schedule.
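An illustrative matplotlib sketch of a Figure 8 style chart (not Tradefeedr's own visualization), plotting child fills over time against the mid-price path with marker size scaled by trade size; the column names are assumptions:

```python
import matplotlib.pyplot as plt
import pandas as pd

def plot_algo_run(fills: pd.DataFrame, mids: pd.DataFrame) -> None:
    """fills: 'time', 'price', 'quantity' per child trade; mids: 'time', 'mid' tick series."""
    fig, ax = plt.subplots(figsize=(10, 4))
    ax.plot(mids["time"], mids["mid"], lw=0.8, label="mid price")
    ax.scatter(fills["time"], fills["price"],
               s=100 * fills["quantity"] / fills["quantity"].max(),  # dot area scaled by trade size
               alpha=0.6, label="child fills")
    ax.set_xlabel("time")
    ax.set_ylabel("price")
    ax.legend()
    plt.show()
```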

Figure 8: Visual Representation of Algo performance

 

 

Conclusion

The Tradefeedr platform provides a number of data processing and statistical tools which greatly simplify the comparison of algo performance. Which algo to choose is very much a personal choice of the algo user: it depends on their objective function and frequently on an external benchmark. However, the ability to quickly analyze different aspects of algo execution is paramount to efficient algo implementation.

References

André F. Perold, "The Implementation Shortfall: Paper versus Reality", The Journal of Portfolio Management, Spring 1988, 14 (3), 4-9.