Raising the standards in content delivery networks

Dom Robinson

In something of a continuing countdown toward Content Delivery World next week (which I am proud to say I will be chairing for the 8th year!), I am extracting some sections from my recent book, Content Delivery Networks: Fundamentals, Design and Evolution, which I think will help get the grey cells warmed up for the conference discussions.

So far, in parts one and two, we have looked at Formats and Service Velocity as icebreaking topics. This means we have a feel for some of the issues relating to the ‘what’ of delivery and the ‘how’ of delivering it.

In this third extract from my book I want to bring up the topic of Standards. With numerous emerging standards having fundamental effects on the market in which CDNs operate, I think it is useful to reflect on the value standards bring, and on the problems they may introduce.

We live in an ever more agile world, where technology that requires interoperation can be updated in seconds. Even with hardware, firmware updates offer the option to replace core functionality at will. So are standards still important?

Standards, Standards, Standards ...

As is so often repeated, “the great thing about standards is that there are so many of them”! It is a very true statement. However, that is more the case in the application space than in the network space.

While the complex and varied nature of applications talking over a network is essentially infinite, the two ends of a network have to be able to communicate and send binary data, and since we moved to the commoditized IP world, all layers of the network stack ultimately lead to doing that transfer over IP these days. I think it is useful to understand some of the macroeconomic conditions that influenced the emergence of the Internet, and to use these to help underpin what has driven the evolution of the successful protocols we see in use today.

There are an infinite number of ways to set up a private or managed network, and this heterogeneity is a central facet of the network’s success in so many use cases. Successful network protocols are often highly optimized for one particular function and relatively transparent to all other use cases of the network. But when something is highly tuned for communications over its own signaling system, that setup (configuration/protocol) is most likely not something that can be “copy and pasted” to other network setups – simply because no two networks are identical.

Function-led proprietary network design was a natural evolution of the specific telegraphy and voice communications business models of telecoms networks over the century or so since they had emerged. Deal making between the International Telecommunications Union members in 1988 saw the International Telecommunications Regulations changed so that information services were classified as “data” rather than “telecoms.” By doing this, the ITU members protected their then-primary and strong voice-telecoms market. At the time this voice-telecoms market was quantified largely in “minutes” – a quotient that was a legacy of the circuit-switched, pre-packet-switched network era, where network provisioning was primarily concerned with how long a circuit would be tied up exclusively between two points, and one that, at the time, was what the sales reps knew how to sell.

Even in the much more efficient packet network era, those same sales teams carried on selling minutes as if data didn’t exist, since in some ways, back in 1988, it didn’t: unrecognized for its potential, the “information baby” was thrown out with the 1988 “deal-water,” and the general sentiment at the time was that “information services” were just a small sideshow, barely worth putting any investment into. Frankly, if opening their networks up to transit a small amount of third-party information services traffic, without charging, was the price they had to pay to maintain their own autonomy in pricing their valuable voice minutes, then at the time it seemed worth giving away.

With all these proprietary Telco networks now secure in their core voice businesses, a few small Internet service provider start-ups came to them seeking to buy wholesale quantities of minutes so they could offer dial-up Internet services to consumers. Indeed, many of my generation will recall that the 1990s were taken up with engaged landlines and huge phone bills for long calls to information services.

Some of these ISPs grew significantly, and as subscriber revenues gave them more negotiating power, many of them moved to deploy their own termination technology, meaning that the dial-up connection was terminated on their own switches and then aggregated behind a routing infrastructure, which itself was connected into not one but several other backbone networks – ensuring that, by having some vendor diversity, the ISP could maintain cost pressure on its largest suppliers. Indeed, some of these ISPs grew so significantly that they took over their own telecoms infrastructure entirely.

This in turn has meant that traditional switchgear vendors providing technology to the telecoms sector have found new customers in the ISPs. This is where the standards story comes back on track!

Switch vendors such as Bell, Alcatel, Nortel, Nokia, Motorola, and Huawei have long produced technology stacks for the traditional frame relay and ATM telecoms networks. They have a long history of finding a niche, often historically with nationalized/state telecoms companies as customers, in highly regulated (easy to monopolize) markets. Combining patent law, huge capital resources, and complex licensing and regulatory frameworks (which undeniably created cartels), the voice-telecoms operators and their supplier ecosystem were supercharged as we entered the 1990s.

Small, relatively misunderstood start‐ups like Cisco and Juniper were seen as operating outside of the Telcos’ core business. They were thought (by the larger Telcos) to service ISPs and a few large enterprises. Indeed VoIP took until the second half of the 1990s to emerge in any usable sense, so these “Internet protocol” focused vendors were considered to be a nonthreatening minor eccentricity by the rest of the telecoms sector for a long time.

Now, IP itself had been around since 1973 (see the earlier chapters on the history of the sector), and had superseded earlier protocols as the main network protocol adopted, initially, in academic and research information services (I first used a Starlink IP node on Sussex University’s VAX machines in 1991). However, with economic and regulatory conditions right (or, more to the point, the monopolies looking the wrong way), the widespread and low-cost availability of IP-capable technologies, and the rapid commoditization of the personal computer, a variety of unstoppable standard ways to do things emerged. As so many competing stakeholders created “versions” of email systems or of webpage servers, movements such as the W3C and the IETF became increasingly important in helping vendors’ customers ensure that technologies would interoperate – ensuring that there was no vendor lock-in (something that maintains price pressure/control).

However, as those committees and groups evolved (and not just in the network stack but also in the application space – think about codecs such as MP3 and H.264, and document formats such as .doc, etc.), an interesting, unending battle mounted between open and proprietary standards.

There is no right answer. Most technology stacks require an ability to work with third party technologies, and for this reason standards are critical to interoperability. Developing proprietary interfaces between parties extends the lock‐in, complicating sales and service velocity and requiring continuous redevelopment each time a client mandates a different third party interface.
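As a purely illustrative sketch (mine, not from the book), the contrast might look like this in code: one client written against the standard HTTP protocol works with any compliant origin, whereas each proprietary interface demands its own bespoke adapter that has to be rebuilt whenever a client mandates a different third party. The vendor classes below are hypothetical.

```python
# Illustrative only: standard protocol vs. proprietary per-vendor adapters.
import urllib.request


def fetch_over_standard_http(url: str) -> bytes:
    """One implementation works against any origin that speaks standard HTTP."""
    with urllib.request.urlopen(url) as response:
        return response.read()


class VendorAAdapter:
    """Hypothetical proprietary interface: a bespoke adapter per vendor."""

    def pull(self, asset_id: str) -> bytes:
        raise NotImplementedError("vendor A's private wire format goes here")


class VendorBAdapter:
    """A second vendor means a second, incompatible adapter to build and maintain."""

    def pull(self, asset_id: str) -> bytes:
        raise NotImplementedError("a different wire format, doing the same job")
```

The standard path scales to any number of counterparties; the proprietary path grows one adapter at a time, which is exactly the lock-in and continuous redevelopment cost described above.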

Of course, standards are not a panacea. As any implementer will tell you, each implementation of a “standard” will have nuances and variations that depend on the implementer’s skills and, to some extent, their approach to programming. Certainly in some cases this can render the standard a failure in its own right.
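To make that concrete with a toy example (again mine, not the book’s): two implementations of the same nominal list-valued header can disagree on something as small as whitespace handling, and that is often all it takes for “standard-compliant” systems to fail to interoperate.

```python
# Toy illustration: two readings of the same nominal "standard" list syntax.

def strict_parse_list(header_value: str) -> list:
    """Reads the grammar literally: items separated by a bare comma."""
    return header_value.split(",")

def lenient_parse_list(header_value: str) -> list:
    """Tolerates optional whitespace around items, as many real parsers do."""
    return [item.strip() for item in header_value.split(",")]

value = "gzip, br"
print(strict_parse_list(value))   # ['gzip', ' br']  -- note the stray space
print(lenient_parse_list(value))  # ['gzip', 'br']
```

Multiply that kind of divergence across dozens of fields and edge cases, and it is easy to see how an implementation can be “conformant” on paper and still broken in practice.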

Content Delivery Networks: Fundamentals, Design and Evolution

Dom Robinson – Chair of Content Delivery World

Co-Founder, Director and Creative Firestarter @ www.id3as.co.uk

Virtualised, Orchestrated and Automated Video Workflows

For Carriers, Broadcasters and Enterprise IP Network Operators

http://www.linkedin.com/in/domrobinson
