Following on from Content Delivery World, which I am proud to say I chaired for the eighth year, I am continuing to extract sections from my recent book Content Delivery Networks – Fundamentals, Design and Evolution.
Obviously CDNs can only output content to a Quality of Service that equals the input – in older computer speak they are ‘Garbage In/Garbage Out’ systems. So contribution feeds (particularly for live webcasts/streaming) are very important, and for larger live events they can be very stressful, both in terms of technology and of people!
In my next extract I want to give some context to the discussions we will have at the conference, and to look at Backhaul/Contribution and Acquisition. If you are in the field producing a live video, or in a studio preparing for a linear playout on your IP platform, this section should touch on some of the issues that will concern you. I wrote it from a decade of experience on the front line producing live webcasts, and it touches on some of the human aspects of that role as well as the technical considerations (the rest of the chapter from which it is extracted goes into much more detail!)
3.2 Backhaul/Contribution and Acquisition
Let us look in some detail at the most important link in the live webcasting workflow: the link between the event and the core origin of the distribution network. Depending on your own operational role, the terminology for this link varies a little (it’s all about perspective), but generally field engineers will talk about “backhaul” (hauling the signal back to the origin), whereas the network operations teams will talk about the “contribution feed.” Those overseeing the event may also talk about “signal acquisition from the field.”
The most important thing to be aware of is that any problems on this link will be replicated throughout the distribution network, affecting all viewers. If the link works, you probably won’t need to think about it beyond provisioning it at the start of the event; but if it starts to introduce problems, it will take all your energy, and your stress level will begin to rise.
There are two key bits of practical advice for webcasting live events:
First, ensure that you have several ways to log the quality of the line, set up and running through the day (and make sure that these logging systems are not themselves at risk of causing problems). This accountability is extremely important when you have an outage or need to provide a postmortem. Simple tools like tracert and ping (or GUI‐driven versions like PingPlotter) are extremely valuable: they repeatedly log data about the link, and assuming that data can be correlated with the causes, any one of these tools can really help when, as does happen, faults need to be accounted for.
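As a minimal sketch of that kind of logging, the Python below samples round-trip time to the origin and appends it to a CSV file. It times a TCP handshake rather than an ICMP ping (handy where ICMP is filtered); the host name, port, interval, and file name are all illustrative assumptions, not anything prescribed in the text.

```python
import csv
import socket
import time

def tcp_rtt(host, port=80, timeout=2.0):
    """Time a TCP handshake to the host; a rough stand-in for ping
    where ICMP is filtered. Returns seconds, or None on failure."""
    start = time.monotonic()
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return time.monotonic() - start
    except OSError:
        return None  # log the failure too: gaps matter in a postmortem

def log_sample(writer, host, rtt):
    """Append one timestamped sample; an empty RTT field marks an outage."""
    writer.writerow([
        time.strftime("%Y-%m-%dT%H:%M:%S"),
        host,
        "" if rtt is None else f"{rtt * 1000:.1f}",  # milliseconds
    ])

def run_logger(host, samples, interval=10.0, path="link_log.csv"):
    """Sample the contribution link `samples` times, `interval` s apart."""
    with open(path, "a", newline="") as f:
        writer = csv.writer(f)
        for _ in range(samples):
            log_sample(writer, host, tcp_rtt(host))
            f.flush()  # keep the log intact even if the machine dies
            time.sleep(interval)

# usage (host is illustrative):
#   run_logger("origin.example.com", samples=360)  # one hour at 10 s spacing
```

The timestamped CSV is exactly the sort of evidence that makes a postmortem credible: it shows when the link degraded, for how long, and that your monitoring itself kept running.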
Second, and most important, keep calm. Seriously – when the link begins to fail, there may be tens of thousands of people watching. Everyone will turn to you, be they onsite or remote. You will be the only person who can in any way “tell” what is going on. There will be many people clamoring to know what is happening and when the link will be fixed, and this can be extremely stressful – particularly if you are trying to read packet traces or diagnose deep network problems. You will have to politely assert that you will provide updates every 30 minutes, but that for now you need to be left alone. Then put your headphones on, call someone offsite to help (they are not caught up in the stress onsite, and this can be very focusing), and try to ignore all the commotion. It may even be worth establishing this as your working protocol before the event, so that other members of the production team can ring‐fence you and ensure that you can concentrate on trying to fix the issue.
Particularly in the early days of webcasting, when we were bonding multiple ISDN lines in order to get a high‐bandwidth feed back to the distribution origin, or using prototype satellite IP connections, issues were frequent. Sometimes they were simply due to the transient nature of the Internet, sometimes a third party was making changes midstream within the IP route, and sometimes it was a mixture of many variables changing uncontrollably. The conditions were almost impossible to recreate, and naturally enough they appeared during the live event and not during testing.
Over the past decade most of these issues have been smoothed away. As we will see in the sections below, which look at various types of contribution feed in more detail, many are now commoditized, Internet service providers are experienced in supporting live streams, and with video now central to every IP provider’s operations, most networks are well provisioned to handle live streaming, particularly on a single contribution link.
That said, there is nothing so complex as trying to debug a fault while a live event is counting on you. For this reason I also have one key piece of advice to help you fault‐find quickly: work inward from both ends!
When debugging a live stream problem, try to work from both the video origin and the end user’s player toward the middle, until you have a continuous picture of what is going on. The more traditional debug methodology of working from one end through to the other almost inevitably starts at the wrong end, meaning you have to check nearly the whole link before you find the fault. If you start at each end at the same time – checking a little from one end, then the other, then returning, and so on – you are far more likely to isolate the fault quickly. This may seem scattergun to those around you, particularly the production team, but they forget that as a webcaster you are often managing more offsite kit than onsite kit; while, in the hour of panic, it may look to onlookers as if you are doing little more than sitting at a terminal window, you may actually be debugging across an infrastructure far larger than the one they are managing.
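That alternating search can be sketched in a few lines of Python. This is only an illustration of the idea, not anything from the book: `hops` stands for whatever ordered list of checkpoints you have between origin and player (encoder, uplink, transit, CDN edge, and so on), and `is_healthy` is any probe you can run against a checkpoint – a ping, an HTTP fetch, a log check.

```python
def isolate_fault(hops, is_healthy):
    """Work inward from both ends of the path until the checks meet.

    `hops` is the path ordered origin -> player; `is_healthy(hop)` is
    whatever test you can run against that checkpoint. Returns the
    first unhealthy hop found, or None if everything passes.
    """
    lo, hi = 0, len(hops) - 1
    from_origin = True  # alternate: one check from each end in turn
    while lo <= hi:
        hop = hops[lo] if from_origin else hops[hi]
        if not is_healthy(hop):
            return hop
        if from_origin:
            lo += 1
        else:
            hi -= 1
        from_origin = not from_origin
    return None

# usage sketch with illustrative checkpoint names:
path = ["encoder", "uplink", "transit", "cdn-edge", "player"]
suspect = isolate_fault(path, lambda h: h != "transit")
```

Compared with walking the path from one end, this halves the worst case: wherever the fault sits, you approach it from the nearer end, which matters when every probe takes minutes and the event is live.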
Webcasting well is a much broader skill than simply turning up with a box and pressing go. Managing those around you assertively and clearly is a key skill. The Internet and IP networks in general are volatile, and in a sector where broadcasters, and those used to working with broadcasters, expect the network service to be private, secure, and of an extremely high level, acclimatizing them to the variability and adaptability needed in the IP space can be tooth‐sucking and tedious. It is nonetheless essential, and if done well it can be extremely rewarding.
I am interested to hear comments and thoughts on contributing streams into CDNs and of course on Standards, Service Velocity and Formats in the social feedback attached to the blog posts – so please do feed in your input!
Dom Robinson – Chair of Content Delivery World
Co-Founder, Director and Creative Firestarter @ www.id3as.co.uk
Virtualised, Orchestrated and Automated Video Workflows
For Carriers, Broadcasters and Enterprise IP Network Operators