Latency and delays among exchanges


3

I recently came across the paper by Battalio et al., "Can Brokers Have It All? On the Relation between Make-Take Fees and Limit Order Execution Quality", and realized I know very little about the "plumbing" that connects exchanges (I have US equities in mind).

After reading this paper, I've been wondering what kind of latency/delays arise when trading on multiple exchanges simultaneously. Suppose I've already determined the sizes of my limit orders and the target venues I want to send them to, and suppose I send the limit orders to the various exchanges at the same time. What kind of delays should I expect between the timestamps of the "limit order submitted" messages reported in each exchange's data feed? I assume this depends on my connectivity to the exchanges, the time of day, the type of stock, and so on, but I'm still curious about the likely range of values.

0

Assuming you are not doing HFT and a seconds scale is acceptable, you could measure it yourself: place a limit order and monitor when it appears in the Level 2 market-depth quotes. Do this during a quiet market, with the limit price away from the spread and at a price level that is not crowded.


0

I'm no expert on this topic, but I'm not sure people will be willing to share this kind of data openly, given that many HFT shops use such "trade secrets" to gain a competitive edge. Incidentally, I've been reading the book "Flash Boys", and there are some numbers related to your query in there. For instance, when you submit a trade from downtown Manhattan, it reaches BATS first before going elsewhere...


5

The round-trip latency from point A to a matching engine at point B can be thought of as comprising two components:

$RTT_{total,A \rightarrow B} = RTT_{network\_transit,A \rightarrow B} + MPL_{matching\_engine,B}$

Where $RTT$ is the round-trip time and $MPL$ is the message processing latency (how long it takes to receive a message and produce an event). The total round-trip time, $RTT_{total}$, would be measured from the moment a message leaves your network interface (i.e., excluding whatever internal processing delay your software may have) to the moment your network interface receives the message indicating the action has been processed. You might find my other answer regarding latency informative: HFT - How to define and measure latency?.

Now, with respect to $RTT_{total}$: when communicating with an exchange in a different data center, transit latency will almost always dominate. Matching engines are very fast, but moving packets can be relatively slow in comparison. For example, Carteret (where NASDAQ hosts its matching engines) to Mahwah (where NYSE hosts its own) is approximately 45 miles (a rough measurement from Google Maps). The best-case round-trip transit latency is about 500 $\mu s$ (approximately 250 $\mu s$ one way), but this is not achievable due to frictions related to network transit: (1) fiber doesn't run as the crow flies; (2) switches and routers add latency along the way; and (3) packets don't travel at the speed of light through fiber (I believe it is about 70% of $c$).
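The back-of-the-envelope figures above can be sketched in a few lines. The 45-mile distance is the rough Google Maps measurement quoted above, and the 70%-of-$c$ fiber speed is the assumption from the same paragraph:

```python
# Best-case transit latency estimate from straight-line distance.
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def one_way_latency_us(distance_km: float, speed_fraction: float = 1.0) -> float:
    """One-way transit time in microseconds at a given fraction of c."""
    return distance_km / (C_KM_PER_S * speed_fraction) * 1e6

# Carteret (NASDAQ) -> Mahwah (NYSE): ~45 miles, converted to km.
d_km = 45 * 1.609344

print(round(one_way_latency_us(d_km)))         # ~242 us one way at c
print(round(2 * one_way_latency_us(d_km)))     # ~483 us round trip, best case
print(round(one_way_latency_us(d_km, 0.7)))    # ~345 us one way at 0.7c in fiber
```

This is how the "about 500 $\mu s$ round trip" figure falls out of the 45-mile distance, before any penalty for the actual fiber path.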

Now, Mahwah and Carteret are the two extreme cases, as they are the two data centers furthest away from each other. The BATS data center is located in Weehawken at NY5. That data center is roughly at the halfway point between Mahwah and Carteret, so you would be looking at a best-case (unachievable) round-trip latency of about 250 $\mu s$.

Transit latency is also the only component directly under the control of the participant. $MPL_{matching\_engine}$ is largely a constant for all participants (although not always, due to matching engine architecture and the loading of order entry gateways). $RTT_{network\_transit}$, on the other hand, can be manipulated by reducing two of the three frictions above. The revelation made in "Flash Boys" was well known throughout The Street (it seems Brad and the IEX guys were somewhat late to the party): there was a lot of low-hanging fruit in minimizing the impact of the first friction, the path that the fiber takes. By doing this, savvy participants can be closer, on a relative basis, to other matching engines even if they are located directly next to their competitors in real space.

Remember, this discussion is almost exclusively limited to cross-data-center messaging. There is very little ability to minimize the impact of $RTT_{network\_transit}$ within a single data center because: (1) the lengths of fiber interconnects are largely normalized for all cross-connected participants; and (2) there are very few network elements between participant and matching engine. This means that latency internal to a specific participant matters most (in other words, faster software).

All that being said, answering your question specifically is not something many folks will do, as latency numbers are held close to the vest. However, the point of all of the above is to demonstrate that you can easily estimate it by simply estimating $RTT_{network\_transit}$. I think you'll find that, for example, $RTT_{total,NASDAQ \rightarrow ARCA}$ is going to be between 600 $\mu s$ and 1.5 $ms$, assuming that packets travel through fiber at $0.7c$ and then depending on what penalty you want to add for the path of the fiber.
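As a sketch of that final estimate: the 0.7$c$ fiber speed and the path-penalty multipliers below are assumptions (the penalty models fiber not running as the crow flies), and the $MPL$ term is treated as negligible relative to transit:

```python
# Rough RTT_total estimate for a cross-data-center pair such as
# Carteret (NASDAQ) <-> Mahwah (NYSE/ARCA).
C_KM_PER_S = 299_792.458  # speed of light in vacuum, km/s

def rtt_us(distance_km: float, speed_fraction: float = 0.7,
           path_penalty: float = 1.0, mpl_us: float = 0.0) -> float:
    """Round-trip time in microseconds: transit at a fraction of c over a
    fiber path path_penalty times the straight-line distance, plus MPL."""
    transit = 2 * distance_km * path_penalty / (C_KM_PER_S * speed_fraction) * 1e6
    return transit + mpl_us

d_km = 45 * 1.609344  # straight-line Carteret -> Mahwah, km

print(round(rtt_us(d_km, path_penalty=1.0)))  # ~690 us: fiber as the crow flies
print(round(rtt_us(d_km, path_penalty=2.0)))  # ~1380 us: fiber path twice as long
```

With these assumptions the estimate lands in roughly the 600 $\mu s$ to 1.5 $ms$ range quoted above, with the path penalty driving most of the spread.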