FAQ

A layer 1 switch takes data in from one source and serves it up to multiple servers. In the exchange datacenter, you typically get one exchange handoff (physically a cable dropping into your cabinet) delivering market data. But you’ve probably got five or more servers that you want to get that data into. You can use a layer 1 switch to electrically replicate the market data arriving on your handoff and deliver it to all of the servers that need it.

In short: xPort gives you vastly reduced latency and vastly increased determinism. Full service (layer 2/3) switches can accomplish this task, of course. But they are built to solve much bigger problems than simple, straight data replication. Full service switches put a processor between data in and data out, and that has a terrible impact on latency. The fastest full service switches on the market right now can switch at around 300ns, and that’s in quiet times. The processor runs at a fixed speed, and it can’t cope with the full rate at which data arrives on a 10G link. When data arrives faster than the processor can consume it, a queue of messages builds up waiting to be processed, and that has a terrible impact on determinism. Your switch might be quite capable of 300ns latency at lunch time. But at the open, xCelor has measured switching delays alone at 6 microseconds or more – on a very high-end switch.
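
To put rough numbers on the queueing argument, here is a minimal back-of-envelope sketch. It is our own toy model, not a measurement of any particular switch: it assumes minimum-size frames arriving back-to-back on a 10G link and treats the quoted 300ns figure as an effective per-frame cost during the burst.

```cpp
// Toy model of queue buildup in a store-and-forward switch during a burst.
// Assumptions (ours, for illustration only): minimum-size 10G Ethernet frames
// arrive back-to-back (~67ns apart on the wire) while the switch needs an
// effective 300ns of processing per frame.
#include <cstdio>

int main() {
    const double interarrival_ns = 67.2;   // 84 bytes on the wire * 8 bits / 10 Gb/s
    const double service_ns      = 300.0;  // assumed effective per-frame cost
    double queue_delay_ns = 0.0;

    for (int frame = 1; frame <= 30; ++frame) {
        // Every frame that arrives faster than it can be serviced adds to the backlog.
        queue_delay_ns += service_ns - interarrival_ns;
        if (frame % 10 == 0)
            std::printf("after %2d back-to-back frames: ~%.0f ns of queueing delay\n",
                        frame, queue_delay_ns);
    }
    return 0;
}
```

Under those assumptions the backlog grows by roughly 230ns per frame, so a burst of a couple of dozen frames is already enough to push switching delay into the microseconds.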

The xCelor xPort series of switches take data in and copy it, electrically, to where it is needed. No processor. No smarts. Just simplicity. And simplicity equals speed: we disseminate the data at exactly the speed at which it arrives. If it’s arriving at a 10G speed, we send it out to all of your servers at 10G speed. It’s just pulses of electricity ultimately – we just get those pulses where they need to be as they arrive.

That’s a good analogy. If you have a fiber handoff, you could use an optical splitter to accomplish something very similar.

There are a few problems with an optical splitter, though. First, each fan-out necessarily decreases the light power being transmitted: if you have a large number of servers, you won’t be able to split the light and maintain the power you need for data integrity. Second, an optical splitter gives you no scope for monitoring or manageability. If you’re splitting out eight ways, and seven of your servers have link but the eighth doesn’t, you’re getting on a plane to New Jersey, physically disconnecting some fiber and using your eye to check for red light. With an xPort switch, you can connect to the CLI and ask the device for the light strength of every inserted module. And the xPort switch will send you an SNMP trap if something goes down.

Finally, if your exchange handoff is singlemode fiber, a splitter means you need to deliver singlemode fiber to all your servers. And in case you’ve forgotten, singlemode SFP+ modules are expensive. xPort series switches let you deliver the singlemode handoff into the device, and use multimode or even twinax modules and media to deliver the data to your servers. More on this later.

Just like with an optical splitter, the communication is one-way: any data sent down the handoff is replicated exactly and completely, but a replicated port can’t talk back to the handoff. To allow talk-back, xPort series devices let you designate one of the replicated ports as a crossover port. Traffic arriving on this port will be delivered to the exchange on the handoff. So you put your existing switch between your servers and the xPort crossover port. Your servers place orders through that switch, which handles the multiplexing and gets all traffic to the xPort switch, and the xPort gets it onto the handoff. An example setup can be seen on the XPM product page.

While all of the xPort series devices do similar things, they’re aimed at different use cases, port densities and price points.

The XPR and XPP are the basic devices in the family. The XPM is the daddy.

The XPR (xPort Replicator) replicates data with a fixed mesh. That is, if you have fifteen servers or fewer and one handoff, and your handoff is (and always will be) connected to port 1, and your switch (crossover) is (and always will be) in port 2, and your servers are (and always will be) in ports 3-16, then the XPR 16 is for you. It’s simpler and therefore priced very competitively. You can’t reconfigure the mesh (you can’t suddenly decide you want your handoff in another port), and you can only replicate one stream. But the replication, monitoring and manageability are all there. The XPR 32 and 48 are similar: the locations of your handoff, switch and servers are fixed.

The XPP (xPort Patch Panel) is slightly different. It is what it says on the tin – an electronic patch panel. Let’s take an XPP 32 for example. It has 16 ingress ports, each mapped to one and only one egress port. Let’s say that ports 1, 3, 5, 7…15 are the ingress ports, and ports 2, 4, 6, 8…16 are egress. Port 1 is mapped (bidirectionally) to port 2, port 3 to port 4, and so on. Just like with a physical patch panel, you can decide that instead of port 1 mapping to port 2, you’d like to map it to port 5 instead. You can do this through the CLI, without having to get up and physically move any cabling (handy if the device is in Tokyo and you’re in Chicago). You can also isolate any given pair through the CLI, which means it can be used as an ultra-low latency kill switch, as sketched below. So if port 1 is connected to one of your OSE handoffs and port 2 is connected to one of your switches, you can trade with a typical 2.5ns penalty (excluding media/modules). Regulations state that it must be possible to isolate your trading at the flick of a switch, and that your broker must be in charge of the procedure. You probably care far less about giving your broker CLI access to your XPP than to your full service switch. Your broker can kill your connection if necessary.
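
As a conceptual illustration of those patch-panel semantics (remap a pair, isolate a pair), here is a small sketch. The class and method names are our own; the real XPP is driven through its CLI, not through code like this.

```cpp
// Conceptual model of an electronic patch panel: bidirectional port pairs
// that can be remapped or isolated. Names are illustrative only.
#include <cstdio>
#include <map>

class PatchPanelModel {
    std::map<int, int> peer_;  // port -> the port it is currently crossed to
public:
    void map_ports(int a, int b) {   // like re-patching a cable, but via CLI
        isolate(a);
        isolate(b);
        peer_[a] = b;
        peer_[b] = a;
    }
    void isolate(int p) {            // the "kill switch" for a pair
        auto it = peer_.find(p);
        if (it != peer_.end()) {
            peer_.erase(it->second);
            peer_.erase(p);
        }
    }
    bool is_mapped(int p) const { return peer_.count(p) != 0; }
};

int main() {
    PatchPanelModel xpp;
    xpp.map_ports(1, 2);   // handoff on port 1 crossed to your switch on port 2
    xpp.map_ports(1, 5);   // later, re-patch port 1 to port 5 without moving cables
    xpp.isolate(1);        // broker pulls the kill switch: the pair goes dark
    std::printf("port 1 mapped: %s\n", xpp.is_mapped(1) ? "yes" : "no");
    return 0;
}
```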

The XPM is the superset of all of these devices. You can create any mesh you like: multiple port groups, each mapping a single ingress port to one or more egress ports, with each group having one nominated crossover for talk-back to the exchange and zero or more ports for straight replication. So you can handle any combination of multiple exchange handoffs and multiple servers on a single device. Each port group can run at a different speed. So if, in one datacenter, you have three handoffs (one 10G, one 1G and one 100M), you can create three port groups on the single device, with each group running at a different speed.
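
As a sketch of how that three-handoff example might be expressed as port groups: the structure, field names and port numbers below are our own illustration, not the XPM’s actual configuration syntax.

```cpp
// Illustrative data model for XPM-style port groups (fields and port numbers
// are our own invention, not the device's real configuration format).
#include <optional>
#include <vector>

enum class Speed { M100, G1, G10, G40 };

struct PortGroup {
    int ingress;                    // the exchange handoff lands here
    std::optional<int> crossover;   // optional talk-back port toward the exchange
    std::vector<int> replication;   // servers receiving the replicated feed
    Speed speed;                    // every port in a group runs at the same speed
};

int main() {
    // The example from the text: 10G, 1G and 100M handoffs on one device.
    std::vector<PortGroup> config = {
        { 1,  2,            {3, 4, 5, 6}, Speed::G10  },
        { 9,  10,           {11, 12},     Speed::G1   },
        { 17, std::nullopt, {18},         Speed::M100 },
    };
    (void)config;
    return 0;
}
```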

We offer XPRs in 16, 32 and 48 port flavors. On an XPR 48, 32 ports are SFP(+) connections, and 16 are made up using fan-out from the four QSFP+ ports on the unit.

We offer XPPs in a 32 port density.

We offer XPMs in a 48 port density. This comprises 32 SFP(+) ports and four QSFP+ ports. The QSFP+ ports can be used to replicate a full 40G feed to three destinations, or they can be used as sixteen regular 10G ports with a fan-out cable.

On the XPM, yes. The CLI allows you to experiment with different port group configurations, and the changes take effect atomically when you commit them with a single command.


That’s right. In order to achieve the latency and determinism profile that the devices exhibit, there is no processing between ingress and egress. All that we ask is that the port speeds match. If you have a 10G handoff, we need to deliver to 10G ports. If you have a 100M handoff, we need to deliver to 100M ports.

No. The device is completely media agnostic. As long as speeds match, a singlemode handoff can deliver to twinax or multimode. Indeed, different replication ports can use different media and modules in the same group. We don’t care. This means you can use cost-effective cabling and modules of your choice within your cabinet, without having to be at the mercy of whatever expensive media the exchange chose to deliver your data.

We’ve put a lot of thought into redundancy. We offer dual fully-redundant power supplies in all devices. The CLI runs on a Linux kernel (which, of course, you can drop into from the CLI and back again). Should the motherboard fail, the CLI will stop working, but the device will not: port configurations remain the same and no connections are dropped. If you’re on our 4-hour support plan, you’ll have a new device by the end of the day with no interruption to your trading.

Absolutely not! We’ve tested against a great many transceiver brands and they all work just fine. Even the cheap ones.

In short, yes it will. It’s a little complicated, but as long as a device on the crossover port is acking those packets as quickly as they arrive, and as long as the exchange is not waiting for you to ack each packet before it sends on the next, it will work just fine. Give us a call. The answer may confound you, but you’ll be happy with the results.


MFH is our range of FPGA feed handler products. Physically, it is a PCIe card that you insert into your trading servers. There’s a very simple C++ API for integrating with your software.

It’s a problem of “Big Data”. Take the NASDAQ TotalView ITCH feed. It’s a 10G feed broadcast on a single multicast group. There are around 10,000 symbols in there, and it’s unlikely that you’re interested in all 10,000. More likely you’re interested in perhaps 50 ETFs or 100 stocks. The traditional approach to processing market data is to write a software feed handler that works through all packets as they arrive, discarding updates on symbols that are not interesting. The problem with this is that all of the data must be delivered up to userspace before your software can get its hands on it. That’s a lot of data, and you just can’t jam that much data up to your software at 10G rates. A typical server configuration (with or without kernel bypass) will start buffering at various points between the network and userspace at sustained rates anywhere north of about 1.5G. When things start buffering, determinism goes out of the window. Not to mention that your CPU is spending precious cycles in a constant loop of “am I interested in IBM? No. Am I interested in MSFT? No. Am I interested in GOOG? No. Am I interested in BAC?…” That’s all CPU time better spent running your trading strategy.
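
Schematically, the traditional software approach looks like the loop below: every decoded update reaches userspace and burns a membership test before most of it is thrown away. The feed source and handler here are trivial placeholders of ours, not real ITCH decoding code.

```cpp
// Sketch of a software feed handler's filter loop: the CPU inspects every
// update in the full feed just to discard the symbols it doesn't care about.
// The feed source below is a canned placeholder, not a real ITCH decoder.
#include <cstddef>
#include <functional>
#include <string>
#include <unordered_set>

struct Update { std::string symbol; /* ...price, size, side... */ };

void software_filter_loop(const std::unordered_set<std::string>& watchlist,
                          const std::function<bool(Update&)>& next_update,
                          const std::function<void(const Update&)>& handle) {
    Update u;
    while (next_update(u)) {
        // "Am I interested in IBM? No. MSFT? No. GOOG? No. ..."
        if (watchlist.count(u.symbol) == 0)
            continue;               // the vast majority of 10,000 symbols end here
        handle(u);                  // only the interesting minority reaches the strategy
    }
}

int main() {
    const std::unordered_set<std::string> watchlist = {"IBM", "MSFT", "GOOG", "BAC"};
    Update canned[] = {{"AAPL"}, {"MSFT"}, {"XYZ"}};  // stand-in for a 10G feed
    std::size_t i = 0;
    int handled = 0;
    software_filter_loop(
        watchlist,
        [&](Update& u) { if (i >= 3) return false; u = canned[i++]; return true; },
        [&](const Update&) { ++handled; });
    return handled == 1 ? 0 : 1;   // only MSFT survives the filter
}
```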

An FPGA can eat the data at link speed. Our FPGA cards can run at 10G maxed out all day long, with spare cycles on the card. When you start the card, you tell it what symbols you’re interested in and how many levels of depth you want. The massively-parallel nature of FPGA makes it perfect for eating big data at the rate at which it comes in, building its own book on the card and delivering downstream only the data that the application is interested in. That means only a fraction of the full feed has to be transported to userspace, which means less buffering, which means fantastic determinism even right at the open and right at the close. It also means your CPU isn’t wasting precious cycles filtering the data itself.
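
As a rough illustration of the “tell the card up front” step: the types and the commented-out call below are placeholders of ours, not the actual MFH API.

```cpp
// Hypothetical sketch of configuring an FPGA feed handler at startup:
// the card is told which symbols to build books for and how deep.
// Types and the commented-out call are illustrative placeholders only.
#include <string>
#include <vector>

struct Subscription {
    std::string symbol;
    int depth_levels;   // levels of book depth maintained on the card
};

int main() {
    std::vector<Subscription> subs = {
        {"SPY", 10}, {"QQQ", 10}, {"IWM", 5},
    };
    // card.configure(subs);   // placeholder: after this, filtering and book
    //                         // building happen on the card, not on the CPU
    (void)subs;
    return 0;
}
```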

We pride ourselves on making optimal use of the hardware. We do as much as is sensible on the card itself: subscribing, normalizing, filtering, even re-requesting missing packets. The only thing that travels across the PCIe bus is normalized data that you are interested in.

Many solutions on the market do use a hardware card for some of these operations. But some also install kernel drivers that just move that same old processing from userspace into the kernel. It still needs to be done by the CPU that your algorithm is trying to use. And it cannot be done deterministically.

We currently offer cards for NASDAQ, NASDAQ BX, NYSE Arca, BATS BZX, BYX, EDGX and EDGA, and CME.

We can typically turn around a new exchange in around 90 days from customer commitment. Contact us to find out more.


One card can handle two sub-exchanges of a single “exchange family”. For example, a single BATS card can simultaneously handle any two of BZX, BYX, EDGX and EDGA.


One card can handle 12,000 symbols, split however you like across exchanges. For example, you could have a BATS card with 10,000 symbols allocated to BZX and 2,000 symbols allocated to BYX. Or 6,000 symbols on each of BZX and BYX.

Yes. We build our own book, and honestly we’re pretty proud of our performance, so you can consume the fully normalized book as a C++ array if you wish. But if you want to build your own book, we also offer an incremental interface where we deliver order add, order modify and order delete in the usual way.
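
To make the two consumption styles concrete, here is a hedged C++ sketch. Every type and function name below is a placeholder of ours, not the real MFH interface.

```cpp
// Hypothetical sketch of the two consumption styles described above:
// (1) read the card's fully normalized book as a plain array of levels, or
// (2) receive incremental add/modify/delete callbacks and build your own book.
// All names are placeholders, not the actual MFH API.
#include <cstdint>
#include <cstdio>

struct Level { double price; std::uint32_t size; };

struct NormalizedBook {        // style 1: consume the book as a C++ array
    Level bids[10];
    Level asks[10];
};

struct IncrementalHandler {    // style 2: build your own book from increments
    virtual void on_order_add(std::uint64_t id, char side, double px, std::uint32_t qty) = 0;
    virtual void on_order_modify(std::uint64_t id, std::uint32_t new_qty) = 0;
    virtual void on_order_delete(std::uint64_t id) = 0;
    virtual ~IncrementalHandler() = default;
};

struct MyBookBuilder : IncrementalHandler {
    void on_order_add(std::uint64_t, char, double, std::uint32_t) override { /* update own book */ }
    void on_order_modify(std::uint64_t, std::uint32_t) override { /* ... */ }
    void on_order_delete(std::uint64_t) override { /* ... */ }
};

int main() {
    NormalizedBook book{};     // in reality this would be filled by the card
    MyBookBuilder builder;     // in reality the card would invoke these callbacks
    std::printf("top of book: %.2f x %u\n",
                book.bids[0].price, static_cast<unsigned>(book.bids[0].size));
    (void)builder;
    return 0;
}
```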