We’ve recently joined AMS-IX (the Amsterdam Internet Exchange) and LINX (the London Internet Exchange), allowing our network to connect directly to hundreds of other networks. This means even better network performance for your BHost services.
What is peering?
Until now, most traffic between BHost’s network and other networks has crossed the internet via a transit provider. Transit providers charge us to take responsibility for ensuring packets arrive at their destination network. We use multiple tier 1 transit providers for resiliency.
Peering with another network means traffic destined for that network is handed directly to it – by avoiding any intermediary networks we reduce latency and increase reliability. Essentially, it means a better quality network for both sides.
Peering between two networks can involve a physical interconnect – running an optical cable between BHost’s network equipment and that of the other network. Packets then cross this link directly, travelling between the two networks at close to the speed of light. Thousands of links like this, between thousands of networks, make up the internet.
However, establishing physical interconnects is costly and complicated, which is why Internet Exchange Points (IXPs) emerged. An IXP is a large layer 2 network where many networks exchange traffic. The smallest IXPs might consist of a single network switch, like the one you might find in a home or office; the largest have multiple interconnected, high-capacity switches spread across several buildings in a city. BHost plugs into the exchange, as do the other member networks, and we can then exchange traffic over this very extensive peering LAN. For context, AMS-IX and LINX are among the biggest IXPs in the world by traffic: AMS-IX carries around 5,500 gigabits per second (Gbps) at peak and LINX about 3,800 Gbps.
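The benefit of peering can be sketched as a toy model: with transit only, packets cross intermediary networks, while a peering session adds a direct adjacency. The network names and topology below are entirely hypothetical, for illustration only.

```python
from collections import deque

def as_path_length(graph, src, dst):
    """Shortest path (in network hand-offs) through a toy network graph, via BFS."""
    seen = {src}
    queue = deque([(src, 0)])
    while queue:
        node, dist = queue.popleft()
        if node == dst:
            return dist
        for neighbour in graph.get(node, ()):
            if neighbour not in seen:
                seen.add(neighbour)
                queue.append((neighbour, dist + 1))
    return None

# Transit only: packets cross two intermediary networks to reach the target.
transit_only = {
    "BHost": ["Transit1"],
    "Transit1": ["Transit2"],
    "Transit2": ["Target"],
}

# With a peering session at an IXP, BHost hands traffic straight to the target.
with_peering = {key: list(value) for key, value in transit_only.items()}
with_peering["BHost"].append("Target")

print(as_path_length(transit_only, "BHost", "Target"))  # 3 hand-offs
print(as_path_length(with_peering, "BHost", "Target"))  # 1 hand-off
```

Fewer hand-offs means fewer points where latency, congestion, or failures can be introduced.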
So what’s the benefit for me?
All BHost services in London and Amsterdam rely on our extensive and highly resilient network. Traffic to and from your services passes through our core router in each city. Our core router in London is now connected to LINX, and our core router in Amsterdam to AMS-IX, and we establish BGP peering sessions with other networks on those exchanges.
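The effect of those sessions on routing can be sketched as a toy model: routes learned from IXP peers are typically assigned a higher BGP local preference than transit routes, so the direct path wins. The prefixes, next hops, and preference values below are illustrative assumptions, not BHost’s actual policy.

```python
from dataclasses import dataclass

@dataclass
class Route:
    prefix: str
    next_hop: str
    learned_from: str   # "peer" (IXP session) or "transit"
    as_path_len: int

def local_pref(route: Route) -> int:
    # Hypothetical policy: prefer routes learned over peering.
    return 200 if route.learned_from == "peer" else 100

def best_route(routes):
    # Simplified BGP decision process: highest local preference wins,
    # then the shortest AS path breaks ties.
    return max(routes, key=lambda r: (local_pref(r), -r.as_path_len))

candidates = [
    Route("203.0.113.0/24", "transit-provider-a", "transit", as_path_len=3),
    Route("203.0.113.0/24", "ixp-peer", "peer", as_path_len=1),
]
print(best_route(candidates).next_hop)  # ixp-peer
```

Real BGP implementations apply many more tie-breakers, but the principle is the same: once a peering session is up, traffic prefers the direct hand-off.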
For example, if you want to connect to Google from your BHost VPS, it’s just one hop between BHost’s router and Google’s router:
traceroute to 18.104.22.168 (22.214.171.124), 30 hops max, 60 byte packets
1 hv305.nl1.bhost.net (126.96.36.199) 0.063 ms 0.018 ms 0.029 ms
2 core1.ams.net.google.com (188.8.131.52) 0.521 ms 0.506 ms 0.692 ms
Or if you use our content delivery and DDoS protection service from Cloudflare, it’s just one hop from Cloudflare’s edge location to our network, so traffic moves between BHost and Cloudflare very quickly and efficiently:
traceroute to www.cloudflare.com (184.108.40.206), 30 hops max, 60 byte packets
1 hv100.uk1.bhost.net (220.127.116.11) 0.050 ms 0.017 ms 0.016 ms
2 linx-lon1.as13335.net (18.104.22.168) 4.541 ms 4.482 ms 4.464 ms
3 22.214.171.124 (126.96.36.199) 4.158 ms 4.155 ms 4.113 ms
DNS is a critical part of internet infrastructure, and many of the root name servers (E, F, K and L) are just one network away over peering.
For example, k.root-servers.net (operated by the RIPE NCC) is extremely close:
traceroute to k.root-servers.net (188.8.131.52), 30 hops max, 60 byte packets
1 hv305.nl1.bhost.net (184.108.40.206) 0.066 ms 0.017 ms 0.037 ms
2 router.nl-ams.k.ripe.net (220.127.116.11) 3.813 ms 3.762 ms 3.749 ms
3 k.root-servers.net (18.104.22.168) 3.585 ms 3.511 ms 3.454 ms
As is f.root-servers.net (this instance is served from Cloudflare’s network):
traceroute to f.root-servers.net (22.214.171.124), 30 hops max, 60 byte packets
1 hv100.uk1.bhost.net (126.96.36.199) 0.048 ms 0.017 ms 0.016 ms
2 linx-lon1.as13335.net (188.8.131.52) 0.614 ms 0.595 ms 0.575 ms
3 f.root-servers.net (184.108.40.206) 0.576 ms 0.557 ms 0.538 ms
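Outputs like those above can be summarised programmatically, for example to count how many hops a path takes. A minimal sketch, using invented sample output with documentation IPs rather than real BHost addresses:

```python
import re

def hop_count(output: str) -> int:
    """Count the numbered hop lines in traceroute output."""
    return sum(1 for line in output.splitlines() if re.match(r"\s*\d+\s", line))

sample = """traceroute to f.root-servers.net (192.5.5.241), 30 hops max, 60 byte packets
 1  gateway (203.0.113.1)  0.048 ms
 2  peer (203.0.113.2)  0.614 ms
 3  f.root-servers.net (192.5.5.241)  0.576 ms
"""
print(hop_count(sample))  # 3
```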
How to peer with BHost
If your network is present at AMS-IX or LINX, we operate an open peering policy – find out more on our peering page. We also have exciting plans for new data centers around the world, so follow us on Twitter and Facebook to hear the news first.