Top Video Predictions in 2016: Codecs and Standards

Part 4 of The Top 16 Predictions for the Video World in 2016.

There will be a flurry of video codec and standards activity in 2016.

HEVC: Are All Passengers on Board?
You may have read about the HEVC Advance initial licensing terms that were massively rejected by the industry at IBC 2015. This rejection forced HEVC Advance to propose new terms, much more in line with what the market was expecting: in particular, no fees for free-to-air (FTA) and free Internet streaming, and caps for paid usage. Some patent holders have still not joined either the MPEG LA or HEVC Advance pools, and we believe 2016 will bring further clarification of licensing terms. This will enable HEVC to take off, which is good news for UHD, HD adaptive streaming on mobile networks, OTT delivery, terrestrial broadcast, DTH transmission in emerging countries and LTE broadcast.

The Codec War in Perspective
Following the HEVC Advance situation, we saw the creation of the Alliance for Open Media. Formed by Netflix, Google, Amazon, Microsoft, Cisco, Intel and Mozilla, the group acts as a countermeasure to the toll requested by HEVC Advance. The Alliance has stated that its codec will be ready by 2017, so we should see some elements of the technology in 2016. Since the creation of the Alliance, HEVC Advance has offered free licensing for Internet streaming (even ad-based), as well as a cap on license fees for pay services, which might make the Alliance's work less relevant. Note that this only applies to Internet delivery and does not cover the broadcast space. 2016 will be full of activity.

The Rise of MPEG-DASH
When the DASH Industry Forum was created in 2011, many skeptics asked why we would need yet another ABR format. Fast-forward to 2015, and DASH has established itself as the adaptive streaming format, with the industry successfully federated around it (thanks to the DASH Industry Forum). Google, Microsoft and Adobe have all adopted DASH (in its ISO BMFF flavor) as a replacement for their own technologies: WebM, Smooth Streaming and HDS, respectively. DASH is now specified in HbbTV (the connected TV standard), by CableLabs (the TS flavor), in adaptive streaming standards such as DECE for the streaming of HD and UHD, in eMBMS/LTE broadcast, as well as in ATSC 3.0 (via the ROUTE mechanism). DASH is now the standards-based solution for adaptive streaming, sitting alongside HLS, which is still an IETF Internet draft (version 18!). 2016 should see more commercial deployments of DASH, and the question outside of the iOS ecosystem is: who will still use HLS?
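The MPD manifest at the heart of DASH is plain XML. As a rough illustration, a minimal static MPD can be generated in a few lines of Python; the bitrates, resolutions and profile values below are invented for the example, not taken from any real deployment.

```python
# Sketch: generate a minimal static DASH MPD (illustrative values only).
import xml.etree.ElementTree as ET

def build_mpd(ladder):
    """ladder: list of (bitrate_bps, width, height) tuples."""
    mpd = ET.Element("MPD", {
        "xmlns": "urn:mpeg:dash:schema:mpd:2011",
        "type": "static",
        "profiles": "urn:mpeg:dash:profile:isoff-on-demand:2011",
        "mediaPresentationDuration": "PT30S",
    })
    period = ET.SubElement(mpd, "Period")
    aset = ET.SubElement(period, "AdaptationSet",
                         mimeType="video/mp4", segmentAlignment="true")
    for bps, w, h in ladder:
        # One Representation per rung of the ABR ladder.
        ET.SubElement(aset, "Representation", id=f"v{h}p",
                      bandwidth=str(bps), width=str(w), height=str(h),
                      codecs="avc1.4d401f")
    return ET.tostring(mpd, encoding="unicode")

print(build_mpd([(800_000, 640, 360), (2_400_000, 1280, 720)]))
```

A real MPD would also carry SegmentTemplate or BaseURL elements pointing at the media; this sketch only shows the skeleton that players parse to pick a Representation.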

Stay tuned for the last installment of predictions for 2016: IP Technologies.

– Thierry Fautier, Vice President, Video Strategy at Harmonic and President of the Ultra HD Forum

Top Video Predictions in 2016: Adaptive Streaming Technologies

Part 3 of The Top 16 Predictions for the Video World in 2016.

Adaptive streaming has certainly been a popular method of video consumption over the last few years. For 2016 we see a few areas of enhancement.

The Adaptive Streaming World is Getting Smarter

We have all implemented (stupidly?) the recommended profiles provided by either Apple or Microsoft. Those encoding profiles were the same for any type of content. After studying the problem, Netflix revisited the issue and defined profiles that vary depending on the content. At stake is roughly a 20 percent bitrate savings versus the conventional approach. This is currently done for offline encoding, and we can expect a similar scheme to be applied to live in 2016. It will reduce network traffic and cloud DVR storage, and improve the overall user experience.
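The content-aware idea can be sketched in a few lines; the fixed ladder, complexity scale and savings math below are invented for illustration and are not Netflix's actual per-title algorithm.

```python
# Sketch of a content-aware bitrate ladder: easy-to-encode content
# (cartoons, talking heads) gets lower bitrates than complex content
# (sports, film grain). All numbers are illustrative only.

# One-size-fits-all ladder (height, kbps), in the spirit of the old
# recommended profiles:
FIXED_LADDER = [(1080, 5800), (720, 3500), (480, 1750), (360, 1050)]

def per_title_ladder(complexity):
    """complexity: 0.0 (static scene) .. 1.0 (high-motion sports)."""
    # Scale each rung: easy content needs far less than the fixed ladder.
    scale = 0.6 + 0.4 * complexity          # 60%..100% of the fixed rate
    return [(h, int(kbps * scale)) for h, kbps in FIXED_LADDER]

def savings(complexity):
    """Percent of bits saved vs. the fixed ladder at the top rung."""
    fixed = FIXED_LADDER[0][1]
    tuned = per_title_ladder(complexity)[0][1]
    return round(100 * (fixed - tuned) / fixed, 1)

print(per_title_ladder(0.2))   # cartoon-like content: a cheaper ladder
print(savings(0.2))            # double-digit percentage savings
```

The real per-title work measures quality (e.g., via convex-hull analysis over trial encodes) rather than using a single complexity scalar, but the payoff is the same: bits are spent only where the content needs them.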


Multicast is Not Dead, Even Less With OTT

The power of IP multicast has been demonstrated, particularly how it could help to scale live delivery of video over DSL and fiber networks to support IPTV. New methods of delivery have appeared, and some say adaptive streaming is so strong in terms of market adoption that it might eventually replace IP multicast on telco networks. This works fine until you reach the scalability of first screen and mass events. The so-called “World Cup or Olympics effect,” where there are several hundred million viewers, can significantly overload the network, given that it is not provisioned for that purpose, especially since these events only take place once every four years. In order to address that issue, CableLabs released an IP Multicast Server Client Interface Specification in 2015, and DVB has started the standardization effort for live adaptive streaming over managed and unmanaged networks. U.S. MSOs have already conducted several trials using the CableLabs specification, and Broadpeak has developed its own nanoCDN technology that is now commercially deployed, so we should see a wide adoption of ABR multicast in the next two years.


Cloud DVR is Getting More Economical

If you’re entrenched in the video industry you may suffer from “copy per subscriber” syndrome, in which a separate copy must be written for each subscriber. This results in a storage price per subscriber higher than the cost of the HDD in the STB, to which the cost of the delivery traffic must be added. We now see operators getting smarter and asking for ways to work around the problem. Of course, this starts by convincing content providers that a properly managed shared copy (i.e., you can only watch what you have recorded) is a win-win solution. The second way to decrease the cost of cloud DVR is “transcoding on the fly” (TOTF): after a period of time, say seven days, only the highest profile is kept in storage, and the lower profiles are transcoded on the fly from that top stored profile. How is this possible? With the new Intel i7 processors we can now achieve massive scalability affordably. Cloud DVR is a necessary evil that complements the VOD and catch-up offering, especially in a connected-device world where clients will have no storage for recording.
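The storage arithmetic behind TOTF is easy to sketch; the ABR ladder, seven-day cutoff and footprint math below are purely illustrative assumptions, not an operator's real numbers.

```python
# Sketch of the "transcode on the fly" (TOTF) storage policy described
# above: recordings older than a cutoff keep only the top ABR profile in
# storage; lower profiles are produced on demand. Numbers are illustrative.
from dataclasses import dataclass

PROFILES_MBPS = [8.0, 4.0, 2.0, 1.0]   # hypothetical ABR ladder
TOTF_CUTOFF_DAYS = 7

@dataclass
class Recording:
    age_days: int
    duration_hours: float

def stored_gigabytes(rec: Recording) -> float:
    """Storage footprint of one recording under the TOTF policy."""
    rates = (PROFILES_MBPS if rec.age_days < TOTF_CUTOFF_DAYS
             else PROFILES_MBPS[:1])        # keep only the top profile
    total_mbps = sum(rates)
    return total_mbps * rec.duration_hours * 3600 / 8 / 1000  # Mb -> GB

fresh = Recording(age_days=1, duration_hours=2)
old = Recording(age_days=30, duration_hours=2)
print(stored_gigabytes(fresh), stored_gigabytes(old))
```

With this hypothetical ladder, a two-hour recording shrinks by roughly half once it ages past the cutoff, which is the whole economic argument: pay the full-ladder storage cost only while a recording is likely to be watched.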


OTT Invites Itself to the First Screen

In recent years OTT services have appeared in Europe, first with Zattoo in Switzerland, Magine in Sweden and MolotovTV in France. In 2015, the United States saw the launch of new OTT services such as PlayStation Vue by Sony. Skinny bundles launched by OTT operators wanting to address a new audience have also emerged, including Sling TV in the U.S. (owned by Dish), Now TV in the UK (from Sky) and Canal+ OTT in France. In 2016 we expect to see more of these live OTT services, the most anticipated being Apple's; however, that one has been put on pause for the time being. Delivering live TV to mass audiences is not without challenges, from a content licensing perspective but also from a business model perspective (i.e., someone has to pay for the traffic). What's more, the user experience should never be forgotten: buffering or pixelated video during a live TV event will not keep consumers around for long.


Targeted Advertisements Are Coming

We have now witnessed the success of Sky AdSmart, which should not overshadow the work done by Invidi with both Dish and DirecTV in the United States. In the last 24 months Comcast has been frantically acquiring: Visible World (addressable and programmatic advertising), This Technology (dynamic ad insertion) and FreeWheel (online ad network). As content providers can now reach consumers directly and sell ads efficiently in an OTT environment, service providers are under threat and are starting to collaborate with broadcasters, sharing analytics to serve better ads. This is a big shift, as the current system is antiquated (inherited from the analog world) and quite inefficient. 2016 should see the world of unicast, targeted advertising emerge and provide more ROI for advertisers, as well as for consumers, who will finally start to see relevant ads.

Stay tuned for the next edition of predictions: Codecs and Standards.

– Thierry Fautier, Vice President, Video Strategy at Harmonic and President of the Ultra HD Forum

Top Video Predictions in 2016: Cloud Technologies

Part 2 of The Top 16 Predictions for the Video World in 2016.

You’d have to be living under a rock if you haven’t heard about the benefits and possible applications of cloud technology in the video world. Here are a couple of ways we see the cloud being used in 2016.

Overcast Clouds

If you liked virtualization of the whole video processing chain, from middleware to DRM, encoding and distribution, then you will love the next move to a fully cloud-based architecture. What are the benefits? First, you have a much lighter and more optimized system that natively runs on Linux, so there is no expensive and complex IT/VM stack to support. Everything is optimized for the cloud (public or private), and you can launch your service virtually, even from your mobile phone! On the compression side, think of more servers readily available for encoding complex content, always offering the best quality at the lowest bitrate. The cloud will play a more prominent role in the video space in 2016, with more private clouds being used for live services on managed networks and public clouds for OTT services as an add-on to existing broadcast services.

TV as a Cloud Service

Recently, technology and business models have evolved beyond classical white-label video services with a brick-and-mortar approach (e.g., Quickplay, MobiTV, Abertis-Nagra, Alpha Networks, Divitel), while OVPs (e.g., Ooyala, Brightcove, Kaltura) have moved from the enterprise into the service provider space with private cloud offerings (e.g., Accenture). And let’s not forget Amazon’s acquisition of Elemental for its cloud technology.

In 2016 there will be more TV cloud services running either on a public cloud (e.g., AWS, Azure) or on the operator’s private cloud. Initially these services will be limited to OTT, but nothing prevents them from scaling to the first screen.

– Thierry Fautier, Vice President, Video Strategy at Harmonic and President of the Ultra HD Forum

The Top 16 Predictions for the Video World in 2016

Now that 2016 is underway, there’s no better time than the present to start thinking about where our industry is headed in terms of technology advancements. Below are 16 topics we believe will illuminate Harmonic’s world this year. Each topic will be discussed in more detail in a series of five blog posts, with this week’s focusing on future technologies.

Future Technologies

  • The Dynamic of High Dynamic Range
  • Virtual Reality Becomes Reality
  • The Rise of the Forums

Cloud Technologies

  • Overcast Clouds
  • TV as a Cloud Service

Adaptive Streaming Technologies

  • The Adaptive Streaming World is Getting Smarter
  • Multicast is Not Dead, Even Less With OTT
  • Cloud DVR is Getting More Economical
  • OTT Invites Itself to the First Screen
  • Targeted Advertisements Are Coming

Codecs and Standards

  • HEVC: Are All Passengers on Board?
  • The Codec War in Perspective
  • The Rise of MPEG-DASH

IP Technologies

  • LTE Broadcast Coming Eventually?
  • Next-Generation Broadcast is Coming to Life
  • The Broadcast World Goes IP


Future Technologies

The first changes we see having a major impact on the video industry in 2016 are high dynamic range, virtual reality, and forums.

The Dynamic of High Dynamic Range

The road to high dynamic range (HDR) is starting to get clearer: Dolby Vision has deployed HDR for OTT, HDR10 is available in all HDR TV sets, and Hybrid Log-Gamma (HLG) is making its way onto the market, having already been adopted by ARIB and being a strong contender at the ITU-R. With chipsets supporting more and more HDR flavors, we believe 2016 is the year of commercial deployments of UHD with HDR for BDA/DECE, OTT VOD and some broadcasts. We will also see some great new content in UHD this year, including the Olympic Games in Rio (only NHK will shoot in UHD) and UEFA Euro 2016 in France, where eight matches will be transmitted in UHD. You can count on the Ultra HD Forum to line up best-of-breed technology at those events, and we are all looking forward to seeing live HDR production and transmission to TVs.

Virtual Reality Becomes Reality

Everyone has heard of Oculus, HTC Vive and Sony Morpheus. Well, this year you will be able to see them in action, as we expect all of them to be commercially available in the first half of 2016. This is going to be big. We have already had a taste of virtual reality (VR) with the Samsung Gear VR and Google Cardboard in the unwired category (you can watch with just your phone). The wired category assumes a powerful PC or console connected to the head-mounted display, which is the case for Oculus, Vive and Morpheus. As many analysts say, VR today is where video streaming was 10 years ago: a postage-stamp-sized picture at 100 kbps in a corner of your desktop. Applications will be games, of course, and on the video side we have already seen documentaries, live sports, short-form movies, adult content and more. Expect more of this, along with a new way of communicating and consuming video. Millennials and gaming addicts will be the early adopters.

The Rise of the Forums

Whenever a significant new technology appears, there seems to be industry chaos: little coordination and no clear industry direction. I would point to 3D, adaptive streaming and targeted advertising as a few examples. In 2015 we saw the creation of several major organizations designed to reduce the disarray that usually surrounds a new advancement. These include the UHD Alliance, whose main goal is to define the best UHD experience; the Ultra HD Forum, which advocates industry consensus around common technical standards; the Streaming Video Alliance, which develops, publishes and promotes open standards to allow the video streaming ecosystem to flourish; and, more recently, the Alliance for IP Media Solutions, which fosters the adoption of standards for the broadcast and media industry as it transitions from SDI to IP. This is proof that the industry wants to take hold of its own future, and we should expect tangible deliverables from all of those groups in 2016.

– Thierry Fautier, Vice President, Video Strategy at Harmonic and President of the Ultra HD Forum

Codec du Jour: What’s Next?

Video Codecs

You may have noticed that video codecs are a hot topic these days. It all started when Google announced that its VP9 codec would be licensed royalty-free as an alternative to the MPEG HEVC standard for Internet use. VP9 has not gained much traction outside of the YouTube ecosystem, but this did not stop Google from starting work on its next codec iteration, VP10.

At the 2015 NAB Show we saw a new type of codec, Perseus, from V-Nova, which claims to fit HD in SD bandwidth and UHD in HD bandwidth. The V-Nova website says that Perseus has 3x better compression efficiency than state-of-the-art codecs such as HEVC, which sounds truly revolutionary. We have seen a few promising trade show demonstrations, a few awards and some industry endorsements, but no broadcast or Internet deployments of Perseus yet.

Then came the HEVC Advance announcement, under which users of the HEVC/H.265 patent pool would be charged a steep license fee. This led to the founding of the Alliance for Open Media, formed by Netflix, Google, Amazon, Microsoft, Cisco, Intel and Mozilla as a countermeasure to the toll requested by HEVC Advance. The DNA of the Alliance founders is clearly the web, and the codec they are developing will solely cover the needs of Internet delivery, meaning we still need a codec for the good old broadcast world.

Is that all? Not really. Tveon, a Canadian TV everywhere company, came out of the blue with a brand new codec that it claims can deliver UHD at 2 Mbps and HD at 200 kbps, or about a 10x gain vs. any current technology, including HEVC. Too good to be true? Perhaps.

Now, some of the questions you might have are: where does MPEG go from here, and is there still room for a new video compression standard? First, MPEG began working a few years ago on a “royalty-free” codec (as much as one can be before being thoroughly reviewed by patent experts) called IVC (Internet Video Codec). It is today at the Committee Draft stage, and we can expect the standard in the 2016 time frame.

The second challenge MPEG is addressing concerns software-based codecs and compression efficiency, which is why it is launching the Future Video Coding initiative, aimed at delivering a new codec before 2020. As a result, we might see many new tools that will not only improve compression efficiency but also enable highly scalable cloud-based encoding. In addition, as more software-based decoders reach the market, we can expect more flexible schemes for codec upgrades (e.g., not having to wait 10 years for a new codec).

What is Harmonic’s position in this new codec world? Our company has always followed standards and has already deployed several HEVC services in OTT, DTT and DTH applications. On the legacy codec side, such as MPEG-4 AVC, we continue to improve encoding efficiency with our software-based Harmonic PURE Compression Engine. Using PURE, we can now demonstrate 25-30 percent better efficiency than with our hardware-based Electra 9200 encoder, at the same video quality (subjective testing) and density (number of channels per RU).

If we utilize PURE in a cloud-based architecture, such as with our VOS virtualized media processing platform, even more compute capacity can be made available, resulting in even greater compression gains. The ultimate result will be that “AVC by Harmonic” may soon challenge HEVC as the codec of choice.

Of course, Harmonic will continue to monitor the progress of any new video codec standard, and with our cloud-based VOS architecture, we’re confident that we will be at the forefront of compression innovation, just as we’ve been for the last 25 years. The game has changed from the old days, when a single SD MPEG-2 channel could barely fit in a complete rack. Times are changing!

– Thierry Fautier, VP, Video Strategy

VR 360 Video: An Immersive Video Experience

VR 360, or virtual reality video, has been in the limelight since the Oculus acquisition by Facebook for $2B, and more recently with the HTC Vive and Sony Morpheus announcements, all of which are wired experiences where a second device (PC or game console) is needed to power the experience. The alternative approach, from Samsung and Google, favors a powerful smartphone and a head-mounted device ($99 for Samsung, $20 for Google) to enable a full VR 360 video experience. Analyst firm Piper Jaffray estimates that by 2020 the unwired virtual reality market will be 10x the size of the wired one.

IBC2015 saw several demonstrations of VR 360 video technologies:

  • GoPro showed its new camera rig together with the stitching capability of its Kolor acquisition
  • JauntVR (which has since received a $65M investment led by Disney) demonstrated its new 360 NEO camera rig, used for cinematic content
  • Elemental was showing a live Oculus Rift demonstration shooting its own booth in VR 360
  • Fraunhofer was demonstrating capture, stitching and encoding with soccer footage
  • The BBC illustrated how immersive VR 360 can be for events like the migrant crisis in Europe

But the most notable demonstration, because it was the only end-to-end (E/E) demo with an immersive UHD experience, was the Viaccess-Orca, Harmonic and VideoStitch E/E system, which showed live 360-degree video whose navigation was driven by head movement on a Samsung Gear VR Innovator Edition with a Galaxy S6. The power of this platform is that the demo was not pushing the limits of the technology and will therefore remain technologically valid for some time to come.

In terms of the workflow, the content is captured with multiple cameras and stitched in real time by VideoStitch. It is then encoded at UHD resolution (3840 x 2160) using the HEVC Main 10 codec on the Harmonic VOS platform. The stream is then sent to a Galaxy S6, which can decode it natively, impressive in itself given that last year we could barely decode HD HEVC on a high-end tablet! The result is a truly immersive experience never before demonstrated on a consumer device.
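For readers who want to experiment, a rough offline equivalent of the encode step can be sketched with the open-source ffmpeg/x265 toolchain; this is not the Harmonic VOS pipeline, and the file names and bitrate below are hypothetical.

```python
# Sketch: an offline stand-in for the UHD HEVC Main 10 encode step,
# using open-source ffmpeg/x265 (NOT Harmonic's proprietary encoder).
# File names and bitrate are hypothetical.

def hevc_main10_cmd(src, dst, bitrate="15M"):
    """Build an ffmpeg command line for a UHD HEVC Main 10 encode."""
    return [
        "ffmpeg", "-i", src,
        "-c:v", "libx265",            # HEVC encoder
        "-pix_fmt", "yuv420p10le",    # 10-bit pixels -> Main 10 profile
        "-s", "3840x2160",            # UHD resolution
        "-b:v", bitrate,
        dst,
    ]

cmd = hevc_main10_cmd("stitched_360.mp4", "vr360_uhd.mp4")
print(" ".join(cmd))
```

In a real pipeline this would run live with per-title tuning and packaging for streaming; the sketch only shows which knobs (codec, bit depth, resolution, rate) define the encode described above.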

This innovative platform therefore provides an end-to-end solution, covering all the elements of a VR 360 video chain: capture and stitching (VideoStitch), live and VOD encoding, encryption and streaming (Harmonic), and a secure player with a 360 UI (Viaccess-Orca).

The possibilities are endless, and the demonstration caught the interest of leading content and service providers looking for new ways to deliver immersive content to their customers. A representative from Samsung who saw the demo remarked: “If this is what a live feed will look like, then this will be a breakthrough for live sports.”

The Viaccess Orca, Harmonic and VideoStitch solution will continue to participate in live trials in 2015.

Harmonic would like to thank the Viaccess Orca and VideoStitch teams for their active contribution to this endeavor.

-Thierry Fautier, VP, Video Strategy

VidTech InFocus: ProView 7100 IRD and an End-to-End HEVC Capable Ecosystem

HEVC is really just starting to ramp up. One reason is the availability of professional-quality HEVC-capable receivers, such as the ProView 7100 integrated receiver-decoder, transcoder and stream processor. In this episode of VidTech InFocus, the team talks about industry adoption of HEVC and Harmonic’s end-to-end HEVC-capable ecosystem.

Want to learn more about the future of video encoding? Feel free to download our white paper on encoding with the HEVC standard or guidelines for HEVC deployment.


The Top 4 Topics at IBC2015 – HDR, IP, COTS and HEVC


And so we return from another busy IBC, a show that mostly consolidated previously launched technology, accompanied by lots of rain!

My time was divided between the Media over IP showcase in the Harmonic theatre and various 4K/UHD presentations, amongst a wealth of customers trying to make sense of a very complex media landscape. Good content always wins, though, and having compelling Ultra HD NASA material encoded by Harmonic certainly attracted a lot of attention.

Another topic generating huge interest was high dynamic range (HDR). With many IBC attendees cautiously endorsing the picture quality, the concern is what is practical on consumer-grade screens and how such a feature will coexist alongside existing HD services and the already significant standard dynamic range (SDR) UHD install base.

Backwards compatibility is the trickiest of issues and is certainly exercising the best brains in the business. It deserves a dedicated explanation of the latest thinking; stand by for my next blog, where I’ll try to scope out the key issues!

The BBC’s Hybrid Log-Gamma paper quite rightly won the Best Conference Paper Award and drew a whole host of interest from broadcasters, who view it as the solution to a dilemma that has dimmed enthusiasm for HDR amongst those contemplating the launch of a UHD channel.

The almost universal support for SMPTE 2022 amongst vendors prompted lots of discussion about how IP will emerge in the production environment. To date, IP has dominated the distribution and file storage arenas; tackling synchronous switching in a COTS network domain will herald IP being applied universally across media workflows. COTS-based switching sounds easy and has obvious appeal, but it is a tall order, especially if you want attractive TCO comparisons with existing SDI infrastructures. For now, proof-of-concept demonstrations tantalized forward-thinking visitors to IBC, but expect these to transition to full-blown software-based products, ticking all the software-defined networking, virtualization and layer-based processing boxes, at future shows.

Solutions were split between true COTS-based video switches (albeit probably high-end multilayer network gear), SDI switches reworked with IP inputs, and hybrid interim solutions for those needing to purchase now. Timing and control differentiated the various methods, with some adopting network-based Precision Time Protocol (PTP) feeding the SMPTE 2059 epoch and profiles, while others were distinctly old school, with black and burst!

What was clear from IBC was that once in the IP domain, processing the audio, video and metadata of individually time-stamped streams makes for a superior solution compared with dealing with an MPEG multiplexed stream. This is not to decry SMPTE 2022, which comprehensively addresses IP carriage of compressed and uncompressed video away from legacy ASI and SDI connectivity. The point is that once in the asynchronous IP domain, the need to support MPEG TS and SDI techniques lessens to such an extent that alternative methods, more appropriate for a production environment, are actively being considered. Of course any workflow will eventually have to interface with existing SDI infrastructure, as legacy equipment cannot be ignored in the short term. However, there is significant momentum behind the drive towards all-IP workflows, and so, in the fullness of time, IP from ingest to screen will become a reality.

Clarification on HEVC licensing and royalty payments was sought by many at IBC2015. I had expected to see at least the beginning of the end for quad-SDI interfacing and the emergence of 25 or 40G interfacing for baseband UHD. Instead, 10G was definitely the emphasis for IP interfacing, allowing for multiple HD streams or lightly compressed UHD, which is hardly surprising given the cost associated with these next-generation high-bandwidth Ethernet interfaces.

For those wishing to see more about media over IP, feel free to view the Theatre presentation I used on the Harmonic Booth.

– Ian Trow, Sr. Director, Emerging Technology & Strategy, Harmonic

What Will be Featured at IBC2015?

Media over IP, the state of the emerging 4K/UHD market and workflow optimization will continue to be the headline issues. Undoubtedly many attending the show will be trying to assess to what extent IP has continued to advance on the few remaining broadcast-specific islands of functionality, and how this will influence future purchasing decisions. For those with file-based workflows it is clear that IP already dominates, with capture and ingest the last stand for SDI.

The main areas of interest for those involved with video infrastructure at IBC this year concern live or hybrid scenarios, which require IP networks specifically configured for non-blocking behavior, with reasonable solution density and bandwidth utilization, to truly make the CAPEX and OPEX arguments stack up. Many approaches will be demonstrated, showing significant variation in implementation, reliance on existing SDI technologies, and use of COTS network infrastructure. There is some way to go before a workflow is devoid of bespoke broadcast kit, but SMPTE 2022, the Precision Time Protocol (PTP) and significant thought given to organizing the various control, media and timing planes bring the end goal much closer.

As well as the move from SDI to IP, the industry is absorbing the need to separate video applications from underlying processing capability. This manifests itself in Virtualization and Software Defined Networks (SDNs) being actively considered for inclusion in future video workflows. The approaches shown will range from evolutionary to revolutionary, where the suitability of specific Virtualization and SDN techniques depends on how much legacy infrastructure exists, the adherence to open standards and expertise in commissioning and running datacenters.

The pace of 4K/UHD adoption is running at different rates depending on where you are within the video workflow. For production, the desire to commission in 4K is key in order to future-proof content. This is driving the industry to rapidly reassess network storage and the role of more widespread compression in an environment where quality, editability and visually lossless post production are essential. For 4K storage, those seeking solutions are holding back until a truly network-based solution exists. This has as much to do with which high-bandwidth Ethernet variant triumphs in the long term for uncompressed video interfacing as with how lightly compressed content is transported over current 10G links.

25, 40 and 100G may be regarded by some as too far removed from what’s realistic, given the 10G restriction imposed by current networks. Many IBC attendees will be keenly evaluating TICO and LLVC as challengers to AVC and HEVC. The criteria for such comparisons will undoubtedly be suitability for processing and encapsulation within IP networks, availability of both software and hardware codec implementations, and breadth of industry adoption. Fundamental to all the technology on show at IBC will be the desire to further rationalize workflows and to gauge when UHD will transition from broadcast novelty to mainstream viewer expectation. I look forward to seeing you at IBC; my next blog will provide a post-IBC analysis!

Harmonic will be at IBC2015, Stand 1.B20.

– Ian Trow, Sr. Director, Emerging Technology & Strategy, Harmonic