In something of a continuing countdown toward Content Delivery World this autumn (which I am proud to say I will be chairing for the 8th year!) I am extracting some sections from my recent book ‘Content Delivery Networks - Fundamentals, Design and Evolution’ which I think will help to get the grey cells warmed up for the conference discussions.
Last week, I touched on Formats. Debates rage about formats and codecs. Ultimately, most of the technology available today is ‘good enough’ to achieve most practical use-cases. However, the eternal search for differentiation continues to drive the emergence of new formats and codecs.
It will be interesting to see whether, for example, Apple’s decision to support HEVC shakes the market in their favour or instead drives more diversification toward other codecs, or even toward formats such as DASH. One thing is for sure – that debate will likely never come to a conclusion.
So in my second extract I want to look at my favourite term. No, it’s not AI, or IoT, or any of the other buzzword-bingo high-scorers: the term is ‘Service Velocity’.
For me, Service Velocity is the argument to give to the CFO of your company for virtualization. And it is not in increasing revenue that Service Velocity has its main impact, but in increasing profitability by controlling costs.
Here is an extract from the book:
2.9 “Service Velocity” and the Operator
Service velocity is explored in depth in a recent StreamingMedia.com article I wrote and have included in Chapter 5. However, as a conclusion to the context and orientation section, I want to stress that all these technical solutions will only find success where they address the commercial strategies of the users who deploy them. For this reason it is important to note that service velocity is key to understanding why one should adopt the techniques I have been advocating above.
Essentially service velocity refers to the speed with which a new service can be provisioned across an operator network in response to either a customer or a business requirement to innovate and bring something new to market.
In the traditional Gen1 appliance‐led technology mode, service velocity could be measured to account for the time taken to order and supply the appliances, to train installers how to install them, to test the appliances, and to activate the service. In an extreme example a satellite operator may measure its service velocity in units of years, or possibly even decades. The planning for such rollouts has to be meticulous, since once a rocket is launched, there is little chance to change the satellite’s design!
As Gen2 arrived, it was assumed that a Gen1 network of routers and servers based on IP and COTS hardware would still be in place, but from that stage it became possible to commission infrastructure within minutes and to deploy services in the time it took to distribute a virtual machine to the commissioned servers within the infrastructure. If “hot‐spares” were set up in a redundant mode, failover for disaster recovery became possible, which meant that SaaS operators could deploy new services to customers, or add new services to their offering, relatively quickly. Typically the business continued to plan and execute much as before, but without needing to wait for physical installation every time a new service was introduced. This meant SaaS operators could measure their service velocity in days or hours. (An interesting legacy of this is that Amazon EC2 still typically measures its IaaS service utilization by the hour.)
Gen3 shrank the size of the virtual processes that delivered the services once again, meaning that complete networks can now be delivered “just in time.”
Indeed it is now possible to instantiate a service in response to a request: for example, a user could request a chunk of HTTP‐delivered video from a server that doesn’t exist at the time the request is made, yet that HTTP service can be deployed and respond to the user without the user being aware. This is a heady concept and leads to all sorts of conjecture about the future of computing as a whole; more importantly, it means that service velocity in a Gen3 world can be measured in milliseconds. That makes it possible to always say yes to clients, to provide disaster recovery on the fly, and to scale or, more interestingly, to move entire SaaS platforms “on‐the‐fly” while millions of clients may be using the service.
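To make the idea concrete, here is a minimal sketch, in Python, of “just in time” instantiation: the service that will answer a request does not exist until the moment the first request for it arrives. All the names here (ServiceRegistry, VideoChunkService and so on) are illustrative assumptions, not any real orchestration API – in a real Gen3 platform the lazy step would spin up a lightweight container or process in milliseconds.

```python
class VideoChunkService:
    """Stands in for a virtualised HTTP video-chunk server (hypothetical)."""

    def __init__(self, name):
        self.name = name

    def handle(self, chunk_id):
        # Serve one chunk of video; here just a string for illustration.
        return f"{self.name}: chunk {chunk_id}"


class ServiceRegistry:
    """Instantiates services lazily, on first request."""

    def __init__(self):
        self._services = {}
        self.instantiations = 0  # how many "deployments" have happened

    def request(self, service_name, chunk_id):
        # If the service does not yet exist, deploy it now, transparently,
        # before answering -- the client never knows it wasn't running.
        if service_name not in self._services:
            self._services[service_name] = VideoChunkService(service_name)
            self.instantiations += 1
        return self._services[service_name].handle(chunk_id)


registry = ServiceRegistry()
print(registry.request("edge-cache-01", 1))  # service is created here
print(registry.request("edge-cache-01", 2))  # reused: no second deployment
print(registry.instantiations)               # only one instantiation occurred
```

The point of the sketch is simply that provisioning has moved inside the request path: service velocity is now bounded by how fast one process can be started, not by how fast hardware can be shipped.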
It is through this architecture that my company ensured continuity for Nasdaq, delivering hundreds of live financial news broadcasts online through a well‐known public cloud infrastructure even when that infrastructure failed and all the other Gen2 operators suffered a significant outage. We automatically and instantly relocated the service orchestration to an entirely different part of the cloud, and did so between chunks of video. Indeed, we only discovered the outage when we saw it reported in the news: we did not receive a support call.
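The reason this kind of relocation is possible is that HTTP‐delivered video is requested as discrete chunks, so the platform has a natural seam between one request and the next at which to move. The sketch below illustrates that idea only; the Origin and Orchestrator classes and the region names are hypothetical stand-ins, not a description of the actual system used for Nasdaq.

```python
class Origin:
    """A stand-in for an origin deployment in one part of a public cloud."""

    def __init__(self, region, healthy=True):
        self.region = region
        self.healthy = healthy

    def serve(self, chunk_id):
        if not self.healthy:
            raise ConnectionError(f"{self.region} is down")
        return f"chunk {chunk_id} from {self.region}"


class Orchestrator:
    """Relocates service to a standby region between chunk requests."""

    def __init__(self, origins):
        self.origins = list(origins)  # preference order

    def fetch(self, chunk_id):
        # Each chunk is an independent request, so failover can happen
        # between chunks without the viewer ever seeing an error.
        for origin in self.origins:
            try:
                return origin.serve(chunk_id)
            except ConnectionError:
                continue  # relocate: try the next region
        raise RuntimeError("all regions down")


east = Origin("us-east")
west = Origin("us-west")
orch = Orchestrator([east, west])

print(orch.fetch(1))   # served from us-east
east.healthy = False   # the primary cloud region fails...
print(orch.fetch(2))   # ...the next chunk transparently comes from us-west
```

Because the relocation happens inside the gap between two chunk requests, the client-side player sees nothing but an unbroken sequence of chunks – which is why, in the anecdote above, the first news of the outage came from the press rather than from a support call.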
Service velocity obviously changes the competitive landscape – using the right technology for the task in hand means that small agile companies can deliver service levels and times to market that have traditionally been the preserve of very large capital‐rich companies. This increases the pace of innovation significantly and will continue to transform not only the content delivery market but many other sectors as well.
I will be interested to hear comments and thoughts on this in the social feedback attached to the blog posts – so please do feed in your input and we can use this to fuel the panel sessions and Q&A during the conference!
Dom Robinson – Chair of Content Delivery World
Co-Founder, Director and Creative Firestarter @ www.id3as.co.uk
Virtualised, Orchestrated and Automated Video Workflows
For Carriers, Broadcasters and Enterprise IP Network Operators