A strategy suffers a drawdown whenever it has lost money recently. A drawdown at a given time t is defined as the difference between the current equity value of the portfolio (assuming no redemption or cash infusion) and the global maximum of the equity curve occurring on or before time t. The maximum drawdown is the difference between the global maximum of the equity curve and the global minimum of the curve after the occurrence of the global maximum (time order matters here: the global minimum must occur later than the global maximum). The global maximum is called the “high watermark.” The maximum drawdown duration is the longest time it has taken for the equity curve to recover its losses.

More often, drawdowns are measured in percentage terms, with the denominator being the equity at the high watermark and the numerator being the loss of equity since reaching the high watermark. Figure 2.1 illustrates a typical drawdown, the maximum drawdown, and the maximum drawdown duration of an equity curve. I will include a tutorial on how to compute these quantities from a table of daily profits and losses using either Excel or MATLAB. One thing to keep in mind: the maximum drawdown and the maximum drawdown duration do not typically overlap over the same period.

Defined mathematically, drawdown seems abstract and remote. In real life, however, there is nothing more gut-wrenching and emotionally disturbing for a trader to suffer than a drawdown. (This is as true for independent traders as for institutional ones. When an institutional trading group is suffering a drawdown, everybody seems to feel that life has lost meaning, and everyone spends their days dreading the eventual shutdown of the strategy or maybe even of the group as a whole.) It is therefore something we would want to minimize. You have to ask yourself, realistically, how deep and how long a drawdown you will be able to tolerate without liquidating your portfolio and shutting down your strategy.
Would it be 20 percent and three months, or 10 percent and one month? Comparing your tolerance with the numbers obtained from the backtest of a candidate strategy determines whether that strategy is for you. Even if the author of the strategy you read about did not publish the precise numbers for drawdowns, you should still be able to make an estimate from a graph of its equity curve. From the equity curve shown, you can see that the longest drawdown runs from around February 2001 to around October 2002, so the maximum drawdown duration is about 20 months. Also, at the beginning of the maximum drawdown the equity was about $2.3 × 10^4, and at the end about $0.5 × 10^4, so the maximum drawdown is about $1.8 × 10^4.
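These quantities are simple to compute directly from an equity curve. The sketch below is a minimal Python version (not the Excel/MATLAB tutorial mentioned above); the equity numbers are made up for illustration, and the drawdown duration is counted as consecutive bars spent below the high watermark.

```python
import numpy as np

def drawdown_stats(equity):
    """Compute the percentage drawdown series, maximum drawdown,
    and maximum drawdown duration (in bars) from an equity curve."""
    equity = np.asarray(equity, dtype=float)
    highwatermark = np.maximum.accumulate(equity)        # running maximum
    drawdown = (highwatermark - equity) / highwatermark  # percentage drawdown
    max_dd = drawdown.max()
    # duration: longest run of consecutive bars below the high watermark
    duration, max_duration = 0, 0
    for dd in drawdown:
        duration = duration + 1 if dd > 0 else 0
        max_duration = max(max_duration, duration)
    return drawdown, max_dd, max_duration

# toy equity curve (made-up numbers)
equity = [100, 110, 105, 95, 100, 112, 108, 115]
dd, max_dd, max_dur = drawdown_stats(equity)
print(round(max_dd, 4), max_dur)  # → 0.1364 3
```

The maximum drawdown here is the 110 → 95 decline (15/110 ≈ 13.6 percent), and the longest stretch below the high watermark lasts three bars, which illustrates why the two statistics need not come from the same episode.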
Every time a strategy buys and sells a security, it incurs a transaction cost. The more frequently it trades, the larger the impact of transaction costs will be on the profitability of the strategy. These transaction costs are not just due to commission fees charged by the broker. There will also be the cost of liquidity—when you buy and sell securities at their market prices, you are paying the bid-ask spread. If you buy and sell securities using limit orders, however, you avoid the liquidity costs but incur opportunity costs. This is because your limit orders may not be executed, and therefore you may miss out on the potential profits of your trade. Also, when you buy or sell a large chunk of securities, you will not be able to complete the transaction without impacting the prices at which this transaction is done. (Sometimes just displaying a bid to buy a large number of shares of a stock can move the prices higher without your having bought a single share yet!) This effect on the market prices due to your own order is called market impact, and it can contribute a large part of the total transaction cost when the security is not very liquid. Finally, there can be a delay between the time your program transmits an order to your brokerage and the time it is executed at the exchange, due to delays on the Internet or various software-related issues. This delay can cause a “slippage,” the difference between the price that triggers the order and the execution price. Of course, this slippage can be of either sign, but on average it will be a cost rather than a gain to the trader. (If you find that it is a gain on average, you should change your program to deliberately delay the transmission of the order by a few seconds!) Transaction costs vary widely for different kinds of securities.
You can typically estimate this cost by taking half the average bid-ask spread of a security and then adding the commission, provided your order size is not much bigger than the average sizes of the best bid and offer. If you are trading S&P 500 stocks, for example, the average transaction cost (excluding commissions, which depend on your brokerage) would be about 5 basis points (that is, five-hundredths of a percent). Note that I count a round-trip transaction of a buy and then a sell as two transactions—hence, a round trip will cost 10 basis points in this example. If you are trading ES, the E-mini S&P 500 futures, the transaction cost will be about 1 basis point. Sometimes the authors whose strategies you read about will disclose that they have included transaction costs in their backtest performance, but more often they will not. If they haven’t, then you just have to assume that the results are before transaction costs, and apply your own judgment to their validity. As an example of the impact of transaction costs on a strategy, consider this simple mean-reverting strategy on ES. It is based on Bollinger bands: that is, every time the price exceeds plus or minus 2 moving standard deviations of its moving average, short or buy, respectively. Exit the position when the price reverts back to within 1 moving standard deviation of the moving average. If you allow yourself to enter and exit every five minutes, you will find that the Sharpe ratio is about 3 without transaction costs—excellent indeed! Unfortunately, the Sharpe ratio is reduced to –3 if we subtract 1 basis point as transaction costs, making it a very unprofitable strategy. For another example of the impact of transaction costs,
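To see the mechanics of the cost adjustment, here is a schematic Python version of such a backtest. The prices are a synthetic random walk, so unlike real ES data there is no genuine mean reversion to exploit and no Sharpe ratio of 3 to be found; the point is only how a 1-basis-point cost per transaction is subtracted from the gross P&L. The lookback and band widths are illustrative choices.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# synthetic five-minute bar prices (a real test would use actual ES data)
price = pd.Series(100 + np.cumsum(rng.normal(0, 0.05, 5000)))

lookback = 20
ma = price.rolling(lookback).mean()
sd = price.rolling(lookback).std()
z = (price - ma) / sd

# Bollinger-band mean reversion: short above +2 sd, long below -2 sd,
# flat once price reverts to within 1 sd of the moving average
pos = pd.Series(np.nan, index=price.index)
pos[z > 2] = -1.0
pos[z < -2] = 1.0
pos[z.abs() < 1] = 0.0
pos = pos.ffill().fillna(0.0)

ret = price.pct_change()
gross = pos.shift() * ret               # P&L before costs
trades = pos.diff().abs().fillna(0.0)   # each unit change = one transaction
cost = 1e-4                             # 1 basis point per transaction
net = gross - trades * cost             # P&L after costs

ann = np.sqrt(252 * 78)                 # ~78 five-minute bars per day
print("gross Sharpe:", round(ann * gross.mean() / gross.std(), 2))
print("net Sharpe:  ", round(ann * net.mean() / net.std(), 2))
```

Because the cost term is subtracted on every position change, a strategy that trades nearly every bar sees its mean return eroded far faster than its volatility, which is exactly how a Sharpe of 3 can flip to –3.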
A historical database of stock prices that does not include stocks that have disappeared due to bankruptcies, delistings, mergers, or acquisitions suffers from the so-called survivorship bias, because only “survivors” of those often unpleasant events remain in the database. (The same term can be applied to mutual fund or hedge fund databases that do not include funds that went out of business.) Backtesting a strategy using data with survivorship bias can be dangerous because it may inflate the historical performance of the strategy. This is especially true if the strategy has a “value” bent; that is, it tends to buy stocks that are cheap. Some stocks were cheap because their companies were shortly going bankrupt. So if your strategy includes only those cases where the stocks were very cheap but eventually survived (and maybe prospered) and neglects those cases where the stocks finally did get delisted, the backtest performance will, of course, be much better than what a trader would actually have suffered at that time. So when you read about a “buy on the cheap” strategy that has great performance, ask the author of that strategy whether it was tested on survivorship bias–free (sometimes called “point-in-time”) data. If not, be skeptical of its results.
Most strategies performed much better 10 years ago than now, at least in a backtest. There weren’t as many hedge funds running quantitative strategies then. Also, bid-ask spreads were much wider then: so if you assumed today’s transaction cost was applicable throughout the backtest, the earlier period would show unrealistically high returns. Survivorship bias in the data might also contribute to the good performance in the early period. The reason that survivorship bias mainly inflates the performance of an earlier period is that the further back we go in our backtest, the more missing stocks we will have. Since some of those stocks are missing because they went out of business, a long-only strategy would have looked better in the early period of the backtest than what the actual profit and loss (P&L) would have been at that time. Therefore, when judging the suitability of a strategy, one must pay particular attention to its performance in the most recent few years, and not be fooled by the overall performance, which inevitably includes some rosy numbers from the old days. Finally, “regime shifts” in the financial markets can mean that financial data from an earlier period simply cannot be fitted to the same model that is applicable today. Major regime shifts can occur because of changes in securities market regulation (such as the decimalization of stock prices or the elimination of the short-sale rule) or because of macroeconomic events (such as the subprime mortgage meltdown). This point may be hard to swallow for many statistically minded readers. Many of them may think that the more data there is, the more statistically robust the backtest should be. This is true only when the financial time series is generated by a stationary process. Unfortunately, financial time series are famously nonstationary, due to all of the reasons given earlier.
It is possible to incorporate such regime shifts into a sophisticated “super”-model, but it is much simpler if we just demand that our model deliver good performance on recent data.
If you build a trading strategy that has 100 parameters, it is very likely that you can optimize those parameters in such a way that the historical performance will look fantastic. It is also very likely that the future performance of this strategy will look nothing like its historical performance and will turn out to be very poor. By having so many parameters, you are probably fitting the model to historical accidents that will not repeat themselves in the future. Actually, this so-called data-snooping bias is very hard to avoid even if you have just one or two parameters (such as entry and exit thresholds). But, in general, the more rules the strategy has, and the more parameters the model has, the more likely it is to suffer from data-snooping bias. Simple models are often the ones that will stand the test of time. (See the sidebar on my views on artificial intelligence and stock picking.)
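Here is a toy demonstration of data-snooping bias, under the assumption that the “returns” are pure noise: a 16-parameter polynomial fits the in-sample noise far better than a 2-parameter line, and then fails far worse out of sample, because the extra parameters memorized historical accidents.

```python
import numpy as np

rng = np.random.default_rng(42)
# pure noise: by construction there is nothing real to "discover"
x = np.linspace(-1.0, 1.0, 200)
y = rng.normal(0.0, 1.0, 200)
train_x, test_x = x[:100], x[100:]
train_y, test_y = y[:100], y[100:]

def r_squared(yhat, ytrue):
    ss_res = np.sum((ytrue - yhat) ** 2)
    ss_tot = np.sum((ytrue - ytrue.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

results = {}
for degree in (1, 15):  # 2 parameters vs. 16 parameters
    coeffs = np.polyfit(train_x, train_y, degree)
    results[degree] = (
        r_squared(np.polyval(coeffs, train_x), train_y),   # in-sample
        r_squared(np.polyval(coeffs, test_x), test_y),     # out-of-sample
    )
    print(degree, results[degree])
```

The degree-15 fit reports a much higher in-sample R² but a wildly negative out-of-sample R², which is the data-snooping trap in miniature: complexity rewarded in the backtest, punished in live trading.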
Algorithmic trading (or algo trading) is the execution of trades on stock exchanges by computer programs, based on predefined criteria and without any human intervention. High-frequency trading is a subset of algorithmic trading that involves buying and selling thousands of shares in fractions of a second.
While it has its detractors, the general consensus is that algorithmic trading is an inevitable evolution of the trading process, and markets around the world have implemented various measures to provide a seamless experience to investors. In the US and other developed markets, high-frequency and algorithmic trading account for an estimated 70% of equity trading volume. In India, their share of total turnover has risen to 49.8%.
- Algorithmic Trading in India: Past, Present and Future
- Regulations in Indian Stock Markets
- Algorithmic Trading Platforms
- How to Start your Algorithmic Trading Journey
- Frequently Asked Questions about the Future of Algorithmic Trading
Algorithmic Trading in India: Past, Present and Future
On April 3, 2008, the Securities and Exchange Board of India (SEBI) introduced algorithmic trading by allowing the Direct Market Access (DMA) facility for institutional clients. In short, DMA allows brokers to provide their infrastructure to clients, giving them access to the exchange trading system without any intervention on the broker’s part. Initially, it was provided only to institutional clients and not to retail traders.
Nevertheless, the facility brought down costs for institutional investors and helped improve execution by cutting down the time spent routing the order to the broker and issuing the necessary instructions.
By April 29, 2008, this facility had already become popular, with some of the top global players signing up. FIs and FIIs such as UBS, Morgan Stanley, JP Morgan and DSP Merrill Lynch were awaiting approval, while Edelweiss Capital, India Infoline and Motilal Oswal Securities were among others who had submitted their requests to the stock exchanges. It is worth noting that Foreign Institutional Investors (FIIs) were allowed to use the DMA facility through investment managers nominated by them from February 24, 2009.
By July 31st 2008, leading brokerages along with stock exchanges were preparing the ground for operationalising Direct Market Access (DMA). Brokerages such as Citi, Merrill Lynch, Morgan Stanley, JP Morgan, Goldman Sachs, CLSA and Deutsche Equities had started holding test runs of their DMA software, in an attempt to synchronise it with the systems at the stock exchange.
NSE’s Contribution To The Industry
The National Stock Exchange (NSE) started offering additional 54 colocation server ‘racks’ on lease to broking firms in June 2010 in an effort to improve the speed in trading.
Deutsche Bank, Citi, Morgan Stanley, Goldman Sachs, and MF Global were among the foreign broking firms which availed of the facility. Motilal Oswal Securities, JM Financial and Edelweiss Capital figured among the prominent domestic firms who signed up for the racks.
Local brokerages like Globe Capital, SMC, Global Vision, East India and iRageCapital had also opted for the facility. Not surprisingly, within a few weeks of the facility being offered, the waiting list for a space on the server racks stretched to as long as six months!
It was clear to the Indian exchanges and regulatory bodies that Algorithmic Trading is well-received by the institutional clients and banks in the country and its demand would continue to rise. This was the time when exchanges started improving their offerings in the automated trading domain, financial technology companies started offering automated trading platforms and SEBI continued to regulate the markets.
On May 12, 2010, NSE moved to enable the Financial Information Exchange (FIX) protocol on its trading platform, boosting transaction speed for overseas investors using direct market access.
In simple terms, the FIX protocol converts the orders given by Foreign Institutional Investors (FIIs) into a standard language understood by the NSE, in effect reducing the time taken for a transaction to be executed.
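For a feel of what that standard language looks like on the wire, here is a toy sketch of assembling a FIX-style message: tag=value pairs separated by the SOH byte, with a body-length field and a modulo-256 checksum. The symbol and prices are made up, and a real session would also require sequence numbers, timestamps, and a logon exchange.

```python
SOH = "\x01"  # FIX field delimiter

def fix_message(fields):
    """Assemble a FIX message: 8=BeginString|9=BodyLength|...|10=CheckSum."""
    body = SOH.join(f"{tag}={val}" for tag, val in fields) + SOH
    head = f"8=FIX.4.2{SOH}9={len(body)}{SOH}"
    # checksum: sum of all bytes before the checksum field, modulo 256
    checksum = sum((head + body).encode()) % 256
    return head + body + f"10={checksum:03d}{SOH}"

# hypothetical new-order-single (35=D): buy 100 shares at a limit of 2500
order = fix_message([
    ("35", "D"),         # MsgType = NewOrderSingle
    ("55", "RELIANCE"),  # Symbol (illustrative)
    ("54", "1"),         # Side = Buy
    ("38", "100"),       # OrderQty
    ("40", "2"),         # OrdType = Limit
    ("44", "2500"),      # Price
])
print(order.replace(SOH, "|"))  # '|' shown in place of the SOH byte
```

Because every field is a flat tag=value pair, gateways can parse and route such messages with very little work, which is where the speed gain described above comes from.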
Changes to the Brokerage Industry
Broker commissions had started shrinking as a result of an increasing number of institutional clients warming up to the Direct Market Access (DMA) concept. To keep up with the times, they started offering automated software to the clients.
The new entrants to this space are discount brokers who are essentially brokers who provide facilities at very low brokerage charges. They are able to do this by providing only minimal facilities, unlike full-service brokers who usually provide support as well as training programs for their clients.
Regulations In Indian Stock Markets
Every year, SEBI comes up with regulations to be followed by traders and brokers to keep the trading industry safe and risk-controlled.
Risk management is critical in algorithmic trading. That is why exchanges require a firm to undergo a series of stringent tests before any algorithm is approved for trading. These tests cover the number of orders that would be placed per second, the maximum value of any single order, and the maximum quantity traded during a particular trading day.
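Those same three checks are often mirrored client-side before an order ever leaves the trading system. The sketch below is a simplified illustration with made-up limits, not an exchange-certified implementation.

```python
import time
from collections import deque

class PreTradeRiskCheck:
    """Simplified client-side mirror of exchange risk checks:
    orders per second, maximum order value, and daily traded quantity."""

    def __init__(self, max_orders_per_sec=10, max_order_value=1_000_000,
                 max_daily_qty=50_000):
        self.max_orders_per_sec = max_orders_per_sec
        self.max_order_value = max_order_value
        self.max_daily_qty = max_daily_qty
        self.timestamps = deque()  # send times within the last second
        self.traded_qty = 0

    def allow(self, qty, price, now=None):
        now = time.monotonic() if now is None else now
        # drop order timestamps older than one second
        while self.timestamps and now - self.timestamps[0] > 1.0:
            self.timestamps.popleft()
        if len(self.timestamps) >= self.max_orders_per_sec:
            return False  # throttle: too many orders this second
        if qty * price > self.max_order_value:
            return False  # single-order value cap
        if self.traded_qty + qty > self.max_daily_qty:
            return False  # daily quantity cap
        self.timestamps.append(now)
        self.traded_qty += qty
        return True

check = PreTradeRiskCheck()
print(check.allow(100, 2500, now=0.0))   # True: within all limits
print(check.allow(1000, 2500, now=0.1))  # False: value 2.5m exceeds 1m cap
```

Running all three checks before transmission means an errant strategy is stopped at the firm’s own gateway rather than at the exchange.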
A brief summary of the latest SEBI circular (SEBI/HO/MRD/DP/CIR/P/2018/62) dated April 09, 2018 is given below:
Managed colocation service
It is suggested that exchanges change the pricing structure of colocation rentals to make them accessible to small and medium-sized members, as the current practice of renting an entire server rack to one entity leads to high costs.
To provide greater transparency in reporting latency for colocation and proximity hosting, it has been suggested that exchanges provide the minimum, maximum, and mean latencies, along with the latencies at the 50th and 99th percentiles.
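Producing that report from a sample of measured latencies is straightforward; the numbers below are synthetic, generated only to illustrate the computation.

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic one-way latencies in microseconds (illustrative distribution)
latencies_us = rng.lognormal(mean=4.0, sigma=0.5, size=10_000)

report = {
    "min": latencies_us.min(),
    "max": latencies_us.max(),
    "mean": latencies_us.mean(),
    "p50": np.percentile(latencies_us, 50),   # median latency
    "p99": np.percentile(latencies_us, 99),   # tail latency
}
for k, v in report.items():
    print(f"{k}: {v:.1f} us")
```

Reporting the 99th percentile alongside the mean matters because latency distributions are heavily right-skewed: the mean can look healthy while the tail, which is what a colocated trader actually competes on, is much worse.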
Tick-by-tick data feed
SEBI has suggested providing tick-by-tick data feed free to the members of the exchanges.
Unique identifiers for algorithms
SEBI has instructed that all algorithmic orders reaching their platform should be tagged with the unique identifier which is assigned when the specific algorithm was submitted for approval.
Future Of Algorithmic Trading In India
With several amendments over the years, India provides a good opportunity for algorithmic trading due to a number of factors such as colocation facilities and sophisticated technology at both the major exchanges; a smart order routing system; and stock exchanges that are well-established and liquid.
Given the rapidly growing trend and demand of HFT and Algorithmic Trading in developing economies & emerging markets, there have been efforts by various exchanges to educate their members and develop the skill sets required for this technology-driven field.
With many different trading platforms and tools available in the market, each claiming to be better than the rest, a person testing the waters in the field of algo trading may be spoilt for choice and confused. Therefore, we have compiled a list of some of the most popular platforms and algo trading software used in the market today (specifically for Indian equity markets), so as to level the playing field and give users a clear picture.
To keep up with these racing times, you need to keep yourself abreast of the latest skills and technology that will help pave your way to success in algo trading, and you need to be in the fast lane for that: like Naoya Ohara, who experienced success with the Executive Programme in Algorithmic Trading (EPAT).
The obvious advantage is that individual traders can create their algorithmic trading strategies in another environment but use the broker’s API to place live orders in the market. At the same time, one should consider the cost of using the API, as well as any downtime it may have.
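As a sketch of the idea, the snippet below builds an order request for an entirely hypothetical broker REST endpoint. The URL, field names, and authentication scheme are invented for illustration; every real broker API differs, so consult your broker’s documentation before writing any of this.

```python
import json
import urllib.request

BROKER_API = "https://api.example-broker.com/orders"  # hypothetical endpoint

def build_order(symbol, side, qty, price, api_key):
    """Build (but do not send) a limit-order request for a hypothetical
    broker REST API. All field names are illustrative, not a real API."""
    payload = json.dumps({
        "symbol": symbol,
        "side": side,
        "quantity": qty,
        "price": price,
        "order_type": "LIMIT",
    }).encode()
    return urllib.request.Request(
        BROKER_API,
        data=payload,
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )

req = build_order("RELIANCE", "BUY", 10, 2500.0, api_key="demo-key")
print(req.full_url, req.get_method())
# actually sending it would be urllib.request.urlopen(req) -- which is
# where API downtime and per-call costs, noted above, come into play
```

Keeping strategy logic separate from the thin order-placement layer, as here, makes it easy to swap brokers without touching the strategy itself.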
As you can see, depending on your requirements and level of expertise, you have a plethora of options to choose from. But how do you get started in algorithmic trading? In the next section, we will try to understand this.
How to Start your Algorithmic Trading Journey
Coming up with a hypothesis

Start by gaining an in-depth understanding of the financial market or instrument so you can come up with a hypothesis on which to base your trades. You need to have, or develop, some knowledge-based edge in any market in which you wish to win over the rest of the participants.
Coding your strategy
Once you have a hypothesis, translate it into precise, unambiguous rules and implement them in code. Choose a language and environment you are comfortable with, and keep the implementation simple enough to test and debug easily.
Backtesting your hypothesis/strategy on historical data
Getting hold of quality data is important, and it is often not free (especially tick-by-tick data). You can try paid sources like Quandl, or check whether your broker provides historical data. You can also use a third-party backtesting engine, such as those provided by Quantra Blueshift or Quantiacs, to make your life simpler.
The natural result of backtesting and validation is that either you completely discard your hypothesis (90 percent of the time or more!) or you manage to extract actionable signals from the pool of data you started with. You can then optimize your strategy parameters, keeping in mind that the strategy should also work well on out-of-sample data to avoid overfitting/data-snooping bias.
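Here is the backtest-then-validate loop in miniature: optimize a simple moving-average crossover on an in-sample window only, then check the chosen parameters on the held-out window. The prices are synthetic and the parameter grid is arbitrary; the point is the train/test discipline, not the strategy itself.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(7)
# synthetic daily prices with a small drift (illustrative only)
price = pd.Series(100 * np.exp(np.cumsum(rng.normal(0.0002, 0.01, 1000))))

def annualized_sharpe(px, fast, slow):
    # long when the fast moving average is above the slow one, else flat
    pos = (px.rolling(fast).mean() > px.rolling(slow).mean()).astype(float)
    pnl = (pos.shift() * px.pct_change()).dropna()
    return np.sqrt(252) * pnl.mean() / pnl.std()

train, test = price.iloc[:600], price.iloc[600:]

# optimize parameters on the in-sample window ONLY
grid = [(f, s) for f in (5, 10, 20) for s in (30, 60)]
best = max(grid, key=lambda p: annualized_sharpe(train, *p))

print("best params:", best)
print("in-sample Sharpe:", round(annualized_sharpe(train, *best), 2))
print("out-of-sample Sharpe:", round(annualized_sharpe(test, *best), 2))
```

A large gap between the in-sample and out-of-sample Sharpe ratios is exactly the overfitting warning sign the text describes, and on pure noise like this you should expect one.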
Choosing the right broker and platform
It is very important to do thorough research on this beforehand, as your overall effort should make business sense once all overhead costs are taken into account. Make sure you only pay for the features you need to execute your strategy efficiently. In short, keep trading costs low and operations nimble.
Going Live & Risk management
Once you are satisfied with your algorithm, let it do its job in live markets! Manage risk efficiently using position limits, stop-losses, and VaR/expected-shortfall monitoring. Keep an eye on the larger economy and sector for structural shifts or regime changes, in which case you might have to alter or scrap your strategy altogether. Remember that every strategy has a limited lifetime.
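The VaR and expected-shortfall monitoring mentioned above can be as simple as the historical-simulation sketch below, run on the strategy’s own daily returns (synthetic here for illustration).

```python
import numpy as np

rng = np.random.default_rng(3)
# one year of synthetic daily strategy returns (illustrative)
returns = rng.normal(0.0005, 0.01, 252)

alpha = 0.95
# historical VaR: loss not exceeded on 95% of days
var = -np.percentile(returns, 100 * (1 - alpha))
# expected shortfall: average loss on the days beyond the VaR threshold
es = -returns[returns <= -var].mean()

print(f"1-day 95% VaR: {var:.2%}, expected shortfall: {es:.2%}")
```

Expected shortfall is always at least as large as VaR, since it averages the tail beyond the VaR cutoff; monitoring both gives a sense not just of how often you lose badly, but of how badly you lose when you do.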
Keep learning and developing new skills
As they say, the best investment is investing in yourself. Look to enhance and update both the domain knowledge and the technical skills required to act on that knowledge and information. For example, pick up a book by the likes of Ernie Chan, or do an online course to beef up your coding skills.
Granted, you might have a lot of questions now, with respect to algorithmic trading. Let’s try to preemptively answer them.
Is Algorithmic Trading legal in India?
Definitely yes! SEBI allowed algorithmic trading in India on April 3, 2008, and it has been legal ever since.
How tedious is it to get legal approval for automation? How confidential and secure will a strategy be after approval? Are the approval process and infrastructure costs affordable for retail traders?
The approval process is not that costly. The infrastructure, however, can be a big burden for a retail or individual trader if you are going for HFT; plain automation, on the other hand, does not carry a huge cost.
Assuming this is from an Indian market perspective: India has a peculiar regulation which says that you have to get each and every strategy approved before you take it live. This differs from most developed-market regulations, under which you get the platform approved and can then code any strategy you want on that platform. The same goes for other developing markets like Thailand, where every algorithmic trading strategy must be approved before you can automate it. The regulation demands that the broker take the approval on your behalf; as a retail trader, you cannot go to the exchange and ask for approval yourself. The cost depends on the broker, but technically it is not that high.
What are the approvals you need before going algo?
We touched on this briefly in one of the previous questions, but it depends on the geography in which you are trading. If you are trading on the CME, SGX or Eurex, the approval required is more of a conformance test, which means you take approval for your trading platform. Once it is approved, you can code whatever strategy you want on it and send out orders.
In geographies like India or Thailand, you need to get each strategy approved; for that, you create a document for the strategy and send it to the exchange for approval. If you are a member of the exchange, you can send it directly; if you are not, you send it through a broker. The process in India (which can vary between exchanges) involves getting the strategy signed off by an auditor, participating in a mock trading session, and then demonstrating it to the exchange; after that you receive the exchange’s approval and can start trading. That is the rule you have to follow for each strategy.
How is a strategy confidential if it is going through the approval process?
The exchanges generally do not focus much on the strategy itself, but rather on risk management. Their key concern is that your strategy should not create havoc for the market or for them, not what your strategy does. They will ask about the strategy at a broad level, but I don’t think it goes to a level where your IP is threatened.
How risky is algorithmic trading towards manipulation such as colocation?
Colocation is not manipulation; it is just a facility provided to you. It is like asking how risky it is to travel by air, spending more, compared with someone travelling by train to the same destination: you reach faster, but you pay for it and you get what you pay for, so it is a fair market.
For those to whom colocation matters, most exchanges across the globe price it reasonably, and the exchanges have been pretty responsible here. Even in India you can get half racks (21 units); you can place a good number of servers in a half rack, and that comes to around 50,000 rupees a month. I am not saying it is very cheap, but it is not prohibitive if you are trading strategies that depend on colocation, for which every millisecond matters.
Disclaimer: All data and information provided in this article are for informational purposes only.
A traditional trading system consists primarily of two blocks: one that receives the market data and one that sends the order request to the exchange. An algorithmic trading system, however, can be broken down into three parts:
- The exchange(s), i.e., the external world
- The server
- The application
Exchange(s) provide data to the system, typically consisting of the latest order book, traded volumes, and the last traded price (LTP) of a scrip. The server receives the data and simultaneously acts as a store for the historical database. The data is analyzed on the application side, where trading strategies are fed in by the user and can be viewed on the GUI. Once an order is generated, it is sent to the order management system (OMS), which in turn transmits it to the exchange.
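The data flow just described (exchange → server/store → strategy → OMS → exchange) can be caricatured in a few lines of Python. Everything here is a stand-in for illustration, not a production design: the tick fields, the threshold rule, and the order tuple are all invented.

```python
from dataclasses import dataclass

@dataclass
class Tick:
    symbol: str
    ltp: float    # last traded price, as received from the exchange
    volume: int

class OrderManagementSystem:
    """Collects generated orders; in reality it would transmit them
    to the exchange and track their state."""
    def __init__(self):
        self.sent = []
    def send(self, order):
        self.sent.append(order)

class Strategy:
    """Toy application block: store history, emit an order on a condition."""
    def __init__(self, oms, threshold):
        self.oms, self.threshold = oms, threshold
        self.history = []  # the server's historical store, much simplified
    def on_tick(self, tick):
        self.history.append(tick)
        if tick.ltp < self.threshold:  # illustrative buy-the-dip rule
            self.oms.send(("BUY", tick.symbol, 100, tick.ltp))

oms = OrderManagementSystem()
strat = Strategy(oms, threshold=99.0)
for px in (100.2, 99.5, 98.7):  # market data arriving from the exchange
    strat.on_tick(Tick("ABC", px, 10))
print(oms.sent)  # one order, triggered by the 98.7 tick
```

The separation of blocks is the important part: market data handling, strategy logic, and order transmission are isolated so each can be replaced or scaled independently.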
Gradually, the old-school, high-latency architecture of algorithmic systems is being replaced by newer, state-of-the-art, low-latency networks. The complex event processing (CEP) engine, the heart of decision making in algo-based trading systems, is used for order routing and risk management.
With the emergence of the FIX (Financial Information Exchange) protocol, connecting to different destinations has become easier, and the time to market for connecting to a new destination has been reduced. With the standard protocol in place, integrating third-party vendors for data feeds is no longer cumbersome.
Algorithmic trading has been shown to substantially improve market liquidity among other benefits. However, improvements in productivity brought by algorithmic trading have been opposed by human brokers and traders facing stiff competition from computers.
Technological advances in finance, particularly those relating to algorithmic trading, have increased financial speed, connectivity, reach, and complexity while simultaneously reducing its humanity. Computers running software based on complex algorithms have replaced humans in many functions in the financial industry. Finance is essentially becoming an industry where machines and humans share the dominant roles – transforming modern finance into what one scholar has called “cyborg finance.”
While many experts laud the benefits of innovation in computerized algorithmic trading, other analysts have expressed concern with specific aspects of computerized trading.
“The downside with these systems is their black box-ness,” Mr. Williams said. “Traders have intuitive senses of how the world works. But with these systems you pour in a bunch of numbers, and something comes out the other end, and it’s not always intuitive or clear why the black box latched onto certain data or relationships.”
“The Financial Services Authority has been keeping a watchful eye on the development of black box trading. In its annual report the regulator remarked on the great benefits of efficiency that new technology is bringing to the market. But it also pointed out that ‘greater reliance on sophisticated technology and modelling brings with it a greater risk that systems failure can result in business interruption’.”
UK Treasury minister Lord Myners has warned that companies could become the “playthings” of speculators because of automatic high-frequency trading. Lord Myners said the process risked destroying the relationship between an investor and a company.
Other issues include the technical problem of latency (the delay in getting quotes to traders), security, and the possibility of a complete system breakdown leading to a market crash.
“Goldman spends tens of millions of dollars on this stuff. They have more people working in their technology area than people on the trading desk…The nature of the markets has changed dramatically.”
On August 1, 2012 Knight Capital Group experienced a technology issue in their automated trading system, causing a loss of $440 million.
This issue was related to Knight’s installation of trading software and resulted in Knight sending numerous erroneous orders in NYSE-listed securities into the market. This software has been removed from the company’s systems. … Clients were not negatively affected by the erroneous orders, and the software issue was limited to the routing of certain listed stocks to NYSE. Knight has traded out of its entire erroneous trade position, which has resulted in a realized pre-tax loss of approximately $440 million.
Algorithmic and high-frequency trading were shown to have contributed to volatility during the May 6, 2010 Flash Crash, when the Dow Jones Industrial Average plunged about 600 points only to recover those losses within minutes. At the time, it was the second largest point swing, 1,010.14 points, and the biggest one-day point decline, 998.5 points, on an intraday basis in Dow Jones Industrial Average history.
Financial market news is now being formatted by firms such as Need To Know News, Thomson Reuters, Dow Jones, and Bloomberg, to be read and traded on via algorithms.
“Computers are now being used to generate news stories about company earnings results or economic statistics as they are released. And this almost instantaneous information forms a direct feed into other computers which trade on the news.”
The algorithms do not simply trade on simple news stories but also interpret harder-to-understand news. Some firms are also attempting to automatically assign sentiment (deciding whether the news is good or bad) to news stories so that automated trading can act directly on the news story.
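At its crudest, assigning sentiment can be a keyword count, as in the toy sketch below. Production systems use trained language models rather than hand-made word lists; the lists and headlines here are invented for illustration.

```python
# illustrative word lists, not a real sentiment lexicon
POSITIVE = {"beat", "surge", "record", "profit", "upgrade"}
NEGATIVE = {"miss", "plunge", "loss", "downgrade", "fraud"}

def naive_sentiment(headline):
    """Toy keyword-count sentiment: > 0 reads as good news, < 0 as bad.
    Real news-analytics systems use trained models, not word lists."""
    words = headline.lower().split()
    return (sum(w in POSITIVE for w in words)
            - sum(w in NEGATIVE for w in words))

print(naive_sentiment("ACME profit beats estimates, shares surge"))  # 2
print(naive_sentiment("Company warns of loss after downgrade"))      # -2
```

Even this crude score shows the appeal of the approach: a machine can turn a headline into a tradable number in microseconds, long before a human has finished reading it.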
“Increasingly, people are looking at all forms of news and building their own indicators around it in a semi-structured way,” said Rob Passarella, global director of strategy at Dow Jones Enterprise Media Group, as traders constantly seek out new trading advantages. His firm provides both a low-latency news feed and news analytics for traders. Passarella also pointed to new academic research being conducted on the degree to which frequent Google searches on various stocks can serve as trading indicators, the potential impact of various phrases and words that may appear in Securities and Exchange Commission statements, and the latest wave of online communities devoted to stock trading topics.
“Markets are by their very nature conversations, having grown out of coffee houses and taverns,” he said. So the way conversations get created in a digital society will be used to convert news into trades, as well, Passarella said.
“There is a real interest in moving the process of interpreting news from the humans to the machines” says Kirsti Suutari, global business manager of algorithmic trading at Reuters. “More of our customers are finding ways to use news content to make money.”
An example of the importance of news reporting speed to algorithmic traders was an advertising campaign by Dow Jones (appearances included page W15 of The Wall Street Journal, on March 1, 2008) claiming that their service had beaten other news services by two seconds in reporting an interest rate cut by the Bank of England.
In July 2007, Citigroup, which had already developed its own trading algorithms, paid $680 million for Automated Trading Desk, a 19-year-old firm that trades about 200 million shares a day. Citigroup had previously bought Lava Trading and OnTrade Inc.
In late 2010, the UK Government Office for Science initiated a Foresight project investigating the future of computer trading in the financial markets, led by Dame Clara Furse, ex-CEO of the London Stock Exchange. In September 2011 the project published its initial findings in the form of a three-chapter working paper available in three languages, along with 16 additional papers providing supporting evidence. All of these findings were authored or co-authored by leading academics and practitioners, and were subjected to anonymous peer review. Released in 2012, the Foresight study acknowledged issues related to periodic illiquidity, new forms of manipulation, and potential threats to market stability due to errant algorithms or excessive message traffic. However, the report was also criticized for adopting “standard pro-HFT arguments,” and advisory panel members were linked to the HFT industry.
Network-induced latency, a synonym for delay, measured as one-way delay or round-trip time, is normally defined as how much time it takes for a data packet to travel from one point to another. Low-latency trading refers to the algorithmic trading systems and network routes used by financial institutions connecting to stock exchanges and electronic communication networks (ECNs) to rapidly execute financial transactions. Most HFT firms depend on low-latency execution of their trading strategies. Joel Hasbrouck and Gideon Saar (2013) measure latency based on three components: the time it takes for
- Information to reach the trader,
- The trader’s algorithms to analyze the information, and
- The generated action to reach the exchange and get implemented.

In a contemporary electronic market (circa 2009), low-latency trade processing time was qualified as under 10 milliseconds, and ultra-low latency as under 1 millisecond.
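The three-component decomposition above can be sketched directly. This is a minimal illustration, assuming hypothetical timestamps are available for each stage; the names are my own, not from Hasbrouck and Saar:

```python
from dataclasses import dataclass

@dataclass
class TradeTimestamps:
    event: float     # time the market event occurred (seconds)
    received: float  # time the information reached the trader
    decided: float   # time the algorithm finished analyzing it
    executed: float  # time the resulting order was implemented at the exchange

def latency_breakdown(t: TradeTimestamps) -> dict:
    """Split total latency into the three components named in the text."""
    return {
        "information": t.received - t.event,
        "analysis": t.decided - t.received,
        "action": t.executed - t.decided,
        "total": t.executed - t.event,
    }
```

With timestamps of 0 ms, 4 ms, 6 ms, and 9 ms, the total of 9 ms would qualify as low latency but not ultra-low latency by the circa-2009 thresholds.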
Low-latency traders depend on ultra-low latency networks. They profit by providing information, such as competing bids and offers, to their algorithms microseconds faster than their competitors. The revolutionary advance in speed has led to the need for firms to have a real-time, colocated trading platform to benefit from implementing high-frequency strategies. Strategies are constantly altered to reflect the subtle changes in the market as well as to combat the threat of the strategy being reverse engineered by competitors. This is due to the evolutionary nature of algorithmic trading strategies – they must be able to adapt and trade intelligently, regardless of market conditions, which involves being flexible enough to withstand a vast array of market scenarios. As a result, a significant proportion of net revenue from firms is spent on the R&D of these autonomous trading systems.
High-frequency trading
As noted above, high-frequency trading (HFT) is a form of algorithmic trading characterized by high turnover and high order-to-trade ratios. Although there is no single definition of HFT, among its key attributes are highly sophisticated algorithms, specialized order types, co-location, very short-term investment horizons, and high cancellation rates for orders. In the U.S., HFT firms represent 2% of the approximately 20,000 firms operating today, but account for 73% of all equity trading volume. As of the first quarter of 2009, total assets under management for hedge funds with HFT strategies were US$141 billion, down about 21% from their high. The HFT strategy was first made successful by Renaissance Technologies.
High-frequency funds started to become especially popular in 2007 and 2008. Many HFT firms are market makers and provide liquidity to the market, which has lowered volatility and helped narrow bid-offer spreads, making trading and investing cheaper for other market participants. HFT has been a subject of intense public focus since the U.S. Securities and Exchange Commission and the Commodity Futures Trading Commission stated that both algorithmic trading and HFT contributed to volatility in the 2010 Flash Crash. Among the major U.S. high-frequency trading firms are Chicago Trading, Virtu Financial, Timber Hill, ATD, GETCO, and Citadel LLC.
There are four key categories of HFT strategies: market-making based on order flow, market-making based on tick data information, event arbitrage and statistical arbitrage. All portfolio-allocation decisions are made by computerized quantitative models. The success of computerized strategies is largely driven by their ability to simultaneously process volumes of information, something ordinary human traders cannot do.
Market making involves placing a limit order to sell (or offer) above the current market price or a buy limit order (or bid) below the current price on a regular and continuous basis to capture the bid-ask spread. Automated Trading Desk, which was bought by Citigroup in July 2007, has been an active market maker, accounting for about 6% of total volume on both NASDAQ and the New York Stock Exchange.
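The spread-capture mechanics can be illustrated with a deliberately simplified simulation. This toy sketch assumes exactly one fill on each side of the quote per tick, which is wildly optimistic; real market making carries inventory risk, adverse selection, and fees, all ignored here:

```python
def simulate_market_maker(mid_prices, half_spread=0.05):
    """Toy market maker: at each tick, quote bid = mid - half_spread and
    ask = mid + half_spread, and assume one buy fill and one sell fill.
    Returns (cash_pnl, ending_inventory)."""
    cash, inventory = 0.0, 0
    for mid in mid_prices:
        bid, ask = mid - half_spread, mid + half_spread
        cash -= bid           # our bid is hit: we buy one share
        inventory += 1
        cash += ask           # our offer is lifted: we sell one share
        inventory -= 1
    return cash, inventory
```

Over 10 ticks with a $0.10 spread, the matched fills net roughly $1.00 with zero ending inventory, showing why the strategy depends on continuously quoting both sides.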
Another set of HFT strategies are classical arbitrage strategies, which might involve several securities, as in covered interest rate parity in the foreign exchange market: it gives a relation between the prices of a domestic bond, a bond denominated in a foreign currency, the spot price of the currency, and the price of a forward contract on the currency. If the market prices are sufficiently different from those implied by the model to cover transaction costs, then four transactions can be made to guarantee a risk-free profit. HFT allows similar arbitrages using models of greater complexity involving many more than four securities. The TABB Group estimates that annual aggregate profits of low-latency arbitrage strategies currently exceed US$21 billion.
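The covered interest parity relation can be made concrete. This is a minimal sketch under simple compounding, with the spot rate quoted as domestic units per foreign unit; function names and conventions are my own:

```python
def cip_fair_forward(spot, r_dom, r_for, t=1.0):
    """Forward FX rate implied by covered interest rate parity
    (simple compounding over period t)."""
    return spot * (1 + r_dom * t) / (1 + r_for * t)

def cip_arbitrage_profit(spot, fwd, r_dom, r_for):
    """Profit per unit of domestic currency borrowed for one period:
    convert to foreign at spot, invest at r_for, sell the proceeds
    forward at fwd, and repay the domestic loan at r_dom.
    Zero when the market forward sits exactly at parity; a negative
    value means the reverse four-leg trade would be profitable."""
    return (1 + r_for) * fwd / spot - (1 + r_dom)
```

A trade is only worth doing when this quantity exceeds the combined transaction costs of the four legs, which is the condition stated in the text.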
A wide range of statistical arbitrage strategies have been developed whereby trading decisions are made on the basis of deviations from statistically significant relationships. Like market-making strategies, statistical arbitrage can be applied in all asset classes.
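A minimal sketch of such a deviation-based rule, assuming a spread series between two historically related instruments is already available (the entry threshold and signal names are illustrative, not from any particular firm's strategy):

```python
import statistics

def pairs_signal(spread_history, entry_z=2.0):
    """Signal from the z-score of the latest spread relative to its
    historical mean: sell the spread when it is unusually wide, buy it
    when unusually narrow, otherwise stay flat."""
    mean = statistics.fmean(spread_history)
    sd = statistics.pstdev(spread_history)
    z = (spread_history[-1] - mean) / sd
    if z > entry_z:
        return "short_spread"
    if z < -entry_z:
        return "long_spread"
    return "flat"
```

The bet is mean reversion: a statistically significant deviation is expected to close, at which point the position is unwound.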
Event arbitrage is a subset of risk, merger, convertible, or distressed-securities arbitrage that counts on a specific event, such as a contract signing, regulatory approval, or judicial decision, to change the price or rate relationship of two or more financial instruments and permit the arbitrageur to earn a profit.
Merger arbitrage, also called risk arbitrage, is an example of this. Merger arbitrage generally consists of buying the stock of a company that is the target of a takeover while shorting the stock of the acquiring company. Usually the market price of the target company is less than the price offered by the acquiring company. The spread between these two prices depends mainly on the probability and timing of the takeover being completed, as well as the prevailing level of interest rates. The bet in a merger arbitrage is that such a spread will eventually be zero, if and when the takeover is completed. The risk is that the deal “breaks” and the spread massively widens.
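The economics of the long leg can be sketched with a simple calculation. This is an illustrative simplification that assumes the deal closes on schedule at the offer price and ignores the short hedge leg, financing costs, and break risk:

```python
def merger_arb_metrics(target_price, offer_price, days_to_close):
    """Gross spread and a simple (non-compounded) annualized return
    for buying the target and holding until the deal closes."""
    spread = offer_price - target_price
    gross_return = spread / target_price
    annualized = gross_return * 365.0 / days_to_close
    return spread, annualized
```

For example, a target trading at $48 against a $50 offer expected to close in 90 days carries a $2 spread; the narrower the spread and the longer the expected timeline, the thinner the annualized return, which is why perceived deal certainty drives the pricing.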
One strategy that some traders have employed, which has been proscribed yet likely continues, is called spoofing. It is the act of placing orders to give the impression of wanting to buy or sell shares, without ever intending to let the order execute, in order to temporarily manipulate the market and buy or sell shares at a more favorable price. This is done by creating limit orders outside the current bid or ask price to change the price reported to other market participants. The trader can then place trades based on the artificial change in price and cancel the limit orders before they are executed.
Suppose a trader desires to sell shares of a company with a current bid of $20 and a current ask of $20.20. The trader would place a buy order at $20.10, still some distance from the ask so it will not be executed, and the $20.10 bid is reported as the National Best Bid and Offer best bid price. The trader then executes a market order to sell the shares they wished to sell. Because the best bid price is the trader's own artificial bid, a market maker fills the sale order at $20.10, allowing for a $0.10 higher sale price per share. The trader subsequently cancels the limit order on the purchase he never intended to complete.
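The arithmetic in the example above is simply the bid improvement times the position size; a one-line sketch, ignoring the risk that the spoof order itself gets filled before it can be canceled:

```python
def spoof_gain(true_best_bid, spoofed_bid, shares):
    """Extra sale proceeds from selling into an artificially raised best bid."""
    return (spoofed_bid - true_best_bid) * shares
```

Selling 100 shares into the $20.10 spoofed bid instead of the genuine $20.00 bid yields roughly $10 of extra proceeds in this example.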
Quote stuffing is a tactic employed by malicious traders that involves quickly entering and withdrawing large quantities of orders in an attempt to flood the market, thereby gaining an advantage over slower market participants. The rapidly placed and canceled orders cause the market data feeds that ordinary investors rely on to delay price quotes while the stuffing is occurring. HFT firms benefit from proprietary, higher-capacity feeds and the most capable, lowest-latency infrastructure. Researchers have shown that high-frequency traders are able to profit from the artificially induced latencies and arbitrage opportunities that result from quote stuffing.