With audiences continuing to drive growth in network traffic (by some estimates as much as 35-40% CAGR), the challenges continue to mount for those providing the technology infrastructure that underpins its delivery.
Meanwhile, decision making for those publishing the content and driving the traffic is splitting into two lines of thought: choosing the most scalable, highest-availability providers from a range of very similar commoditised services, while at the same time ensuring they can meet the demand for new features, higher quality, lower latency and better economics that innovation offers and audiences demand.
‘Edge’ is a very topical term in the sector, but of course every operator’s ‘edge’ is defined by where that operator chooses to scope the boundary of their role, so few ‘edges’ are truly equivalent. One thing is certain: the edge is a key component in the Content Delivery chain. But for those tasked with the specific challenge of Content Delivery, a singular focus on ‘edge’ has the downside of losing sight of the end-to-end function of those services.
Arguably the only real ‘edge’ is at the end of a workflow. Continuity of service and operation from source to destination is key. The boundaries are increasingly movable, and so resilient, that from the CSP’s point of view the only ‘edge’ they want to know about on a day-to-day basis is their audience’s target device of choice.
Virtualisation is a central strategy for keeping those boundaries movable, as the success of Public Cloud service providers amply validates; they have without doubt disrupted the Content Delivery ecosystem over the past decade. They have, however, also found a strong place within that ecosystem, and today there is rarely a simple binary choice between a Public Cloud and a CDN for the final delivery of content to an audience.
The ‘right’ strategy today is almost universally emerging as a fully redundant hybrid of multiple providers at every stage of the delivery workflow. The resulting diversity of infrastructure providers also increases the availability of service.
Most Content Delivery is ‘mission critical’ to one business imperative or another - be that honouring an SLA, complying with regulations or protecting life with critical communications - so you have to calculate the risk of failure in the workflow. It is that risk which drives multi-Cloud and multi-CDN strategies, and given that many of these are essentially pay-as-you-go service models (where costs vary with use), they are changing the economics both of de-risking redundancy/availability strategies and of innovation.
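As a rough illustration of that risk calculation, combined availability across redundant providers can be estimated with simple probability arithmetic. This is a simplified sketch assuming provider failures are statistically independent (real outages are often correlated, e.g. by shared transit or DNS dependencies), and the 99.9% SLA figures are hypothetical:

```python
# Sketch: estimated availability of a delivery workflow with redundant
# providers, assuming failures are statistically independent
# (a simplification - real-world outages can be correlated).

def combined_availability(availabilities):
    """Probability that at least one redundant provider is up."""
    all_down = 1.0
    for a in availabilities:
        all_down *= (1.0 - a)  # chance this provider is down too
    return 1.0 - all_down

# Two hypothetical CDNs, each offering a 99.9% SLA:
dual_cdn = combined_availability([0.999, 0.999])
print(f"{dual_cdn:.6f}")  # -> 0.999999, i.e. 'six nines' in theory
```

The point of the sketch is the shape of the trade-off, not the numbers: under the independence assumption, each additional provider multiplies the residual failure probability down, which is what makes pay-as-you-go redundancy economically attractive.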
Content Service Providers (CSPs) delivering through Online Video Platforms (OVPs) and Application Service Providers (ASPs) - sometimes termed PaaS and SaaS operators respectively - are increasingly freed from considerations about the delivery infrastructure, beyond the compute resources needed to run the platforms and applications (‘Cloud’) and the connectivity required to deliver the data (CDN).
The separation between these groups, and the roles they can play alongside each other while also addressing other markets such as AI, content management, security and encoding, is enriching the ecosystem, and this brings with it further innovation opportunities that CSPs and publishers incorporate into their Content Delivery strategies.
Of course, the physical layer of carrier telecoms and physical compute resources is probably the most important scaling factor for any aggressive CSP to be mindful of. While Clouds and CDNs may deal with the detailed operational and deployment challenges here, their costs will be a function of that scaling, and these costs are heavily interrelated.
While many talk about the ‘Internet’ as having a finite capacity, one increasingly congested by Internet content (particularly video), it is often easier to increase the capacity of a network link than it is for a telecoms carrier to host an application that optimises the utilisation of that link.
While it may seem logical to put a cache at the remote end of a bandwidth-constrained Internet connection, it makes far less sense to put a cache at the remote end of a dark fibre that can easily have more wavelengths lit, without needing to provide any new power or rack any more hardware.
And that, of course, raises the spectre of power. Power is today a major limiting factor for Public Clouds and CDNs alike. Finding the confluence of dark fibre landing station, power, demand for service, and often water (for cooling) is a telecoms challenge that any network architect will face today, and the economics of the solutions all directly impinge on the economics and geopolitics (in terms of regulatory issues) of modern Content Delivery.
The above range of thinking forms the backbone of the narrative that the various presentations, fireside chats and panel sessions will explore at this year's Content Delivery World conference in London (November 27th - 28th). Join us - take part in the discussion.