Why the public internet is no longer fit for purpose


10 May 2016


Interxion’s Vincent in’t Veld challenges the security and performance capabilities of the public internet.

In early March 2015, something strange happened at the Atomic Weapons Establishment. Instead of making its normal journey from Houston, Texas, to the UK, data intended for this manufacturer of nuclear warheads was re-routed via Ukraine.

You’d be forgiven for missing the news. Sure, the Belfast Telegraph picked it up and the US business site Quartz ran the story under the banner headline, ‘The mysterious internet mishap that sent data for the UK’s nuclear program to Ukraine’. But beyond that there was little coverage, which is odd given the sensitivity of the organisation involved and Ukraine’s instability and geopolitical significance.

More generally, the story tells us something about the public internet. Namely that it is no longer fit for purpose, no longer capable of supporting business-critical – to say nothing of defence-critical – workloads.

The public internet doesn’t just lack security guarantees; its performance is inconsistent at best. It offers no latency commitments, no bandwidth reservation and no control over network paths.

Dyn Research, the organisation that discovered the Atomic Weapons Establishment re-routing, continuously monitors internet outages. Its monitoring suggests that, at any one time, there are around 16,000 outages across the net.

When senior IT professionals were asked to share their biggest concerns about cloud computing, 66pc cited security and 60pc cited performance. This helps explain why 41pc of organisations are already planning to bypass the public internet, and why there would be an estimated 69pc increase in IT workloads moved to the cloud if network issues were resolved.

Lionel Marie, a network architect at Schneider Electric, speaks for many of his peers when he says: “As a network guy, I have a problem. Moving workloads to the cloud can create problems.”

Describing the issues he faces, he told a networking summit last year: “On the internet we have no control. Even if we choose Tier 1 ISPs, you still don’t control the way the packet will go upstream and downstream. Asymmetry is a huge problem on the internet. For a large company like ours, the internet is just a commodity. [It’s not there] to host business critical applications.”

Does this mean the public internet is now defunct for business use? Of course not. Workloads that are less sensitive to security and performance can still be served over the public internet. But it does mean thinking differently when it comes to business-critical applications.

So, what’s the alternative? It lies in using private, dedicated connections to access the public cloud, or multiple public clouds. By combining services that provide these connections with the performance and uptime offered by colocation data centres, companies can begin to tackle one of the biggest challenges of migrating enterprise-grade workloads to the cloud: the public internet.

After all, if the UK’s nuclear program can fall foul of the public internet, so can the rest of us.

By Vincent in’t Veld

Vincent in’t Veld is strategy and marketing director for the cloud segment of Interxion’s business. He has 15 years of experience in the international telecommunications industry, and has driven and overseen the implementation of market segmentation strategy at Interxion.

This article originally appeared on the Interxion blog.