There are many challenges in making the shift to UHD in the production environment. While your UHD workflows may not differ much, if at all, from their HD counterparts, the gains offered by UHD are considerable. The increase in resolution and bit depth, together with the ability to leverage high dynamic range (HDR), results in a format that places much greater strain on the bandwidth and storage capacity of a media facility's infrastructure and workflows. It is therefore important to consider the impact of much larger codec data rates, the demand for higher read and write throughput, and the need to keep performance consistent while users are working on the storage system, regardless of how much capacity is in use or whether drive expansions or drive rebuilds are taking place.
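To put some numbers on that strain, here is a back-of-envelope sketch. The parameters are my own illustrative choices (4:2:2 chroma, hence two samples per pixel); real codec data rates are far lower than uncompressed, but the HD-to-UHD ratio scales the same way.

```python
# Back-of-envelope data rates for uncompressed video. Illustrative parameters
# only; compressed codec rates are far lower but scale similarly.
def uncompressed_gbps(width, height, fps, bit_depth, samples_per_pixel):
    """Raw video data rate in gigabits per second."""
    return width * height * bit_depth * samples_per_pixel * fps / 1e9

hd = uncompressed_gbps(1920, 1080, 30, 8, 2)    # 1080p30, 8-bit, 4:2:2
uhd = uncompressed_gbps(3840, 2160, 60, 10, 2)  # 2160p60, 10-bit, 4:2:2

print(f"HD  1080p30:  {hd:.2f} Gb/s")
print(f"UHD 2160p60:  {uhd:.2f} Gb/s ({uhd / hd:.0f}x HD)")
print(f"One hour of uncompressed UHD: {uhd / 8 * 3600 / 1000:.1f} TB")
```

Roughly a tenfold jump over a common HD baseline, which is why read/write throughput and capacity planning dominate UHD storage discussions.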
Andy Warman, Director, Production and Playout Strategy and Market Development at Harmonic, provides an overview of what is needed on the storage front to enable practical UHD production in his recent article, "Building Out Storage for UHD: Challenges and Considerations," in KitPlus – The TV-Bay Magazine.
While HDR and UHD may have grabbed all the headlines at NAB 2016, for those commissioning video solutions the fundamental issues concerned virtualization, cloud and the increased adoption of IP networking. Put like this, the two topics appear totally disconnected, when in fact the move to UHD with HDR is quite likely to be what drives widespread adoption of IP methods and techniques. The reality is that while IP networking has been embraced for SD, HD and multiscreen infrastructure, UHD is still at the proof-of-concept stage, and accommodating the bandwidth required, especially for live, is a significant challenge. IP has had its part to play, but mostly in 4K streaming OTT services, which to date have been the only distribution application to see even moderate deployment, albeit with ambitions to scale soon.
The benefits of IP have been embraced over the last 10 years to such an extent that the infrastructure now relies on a sea of IP network gear with the occasional bespoke broadcast island, a significant reversal from the last time most video delivery solutions were refreshed. This increases the pressure on the remaining islands to conform and adopt IP. NAB this year certainly reflected the trend, with most interest given over to frame-accurate switching and seamless redundancy. Visitors to Vegas were keen to get to the bottom of the respective switching approaches and how they would be implemented on standard network hardware. Closely related to COTS-based switching is the issue of seamless redundancy. To date, the majority of interest in SMPTE 2022 has been in the -6 variant dealing with uncompressed carriage, but the number of visitors who wished to explore the -7 variant, which adds seamless redundancy to the functionality catered for by SMPTE 2022, was notable.
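The seamless-redundancy idea behind the -7 variant can be pictured with a small sketch (an illustration of the principle only, not an implementation of the standard): the receiver is fed the same RTP stream over two diverse network paths and accepts the first copy of each sequence number, so a packet lost on one path is covered by its twin on the other.

```python
# Illustrative sketch of seamless (hitless) redundancy in the spirit of
# SMPTE 2022-7: reconstruct one stream from two redundant copies, dropping
# duplicates by sequence number. Not an implementation of the standard.
def merge_redundant(path_a, path_b):
    """Each path is a list of (sequence_number, payload) tuples as received."""
    seen, output = set(), []
    # Interleave arrivals; a real receiver is driven by packet timing and a
    # small alignment buffer rather than an offline sort.
    for seq, payload in sorted(path_a + path_b, key=lambda p: p[0]):
        if seq not in seen:
            seen.add(seq)
            output.append((seq, payload))
    return output

# Path A drops packet 2, path B drops packet 4; the merged stream is complete.
a = [(1, "p1"), (3, "p3"), (4, "p4")]
b = [(1, "p1"), (2, "p2"), (3, "p3")]
print(merge_redundant(a, b))
```

The point is that protection requires no retransmission and no switchover event, which is why the scheme is attractive for live uncompressed carriage.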
Solutions for switching exist and are far more viable as applications hosted on COTS-based hardware, especially if the most stringent requirements for live source switching are relaxed to scheduled switching. Combined with seamless redundancy, these COTS-based switching advances pave the way for network infrastructure to further diminish the role of bespoke broadcast interfaces and approaches deployed in video infrastructure. The debate at NAB was over the extent to which traditional broadcast approaches could be eradicated in favor of an all-IP approach. The most ardent IP fans would have you believe that all is done and dusted in this area and that SDI has no place in a video workflow. In the long term this is probably the case, especially for delivery infrastructures, which have the scale and demand to warrant video applications being developed. However, there will always be niche video applications requiring functionality that cannot be economically implemented on COTS-based infrastructure, or applications where demand is not yet strong enough to justify the investment. One example takes us back to live UHD workflows, where SDI baseband interfacing has been given a new lease on life, albeit through unwieldy quad interfaces or enhanced-throughput variants of SDI with limited cable length.
For many, the viability of UHD is inextricably linked with a push towards IP infrastructure. No doubt there are hurdles ahead, and in the short term proofs of concept will retain a broadcast feel, adopting interfacing techniques that rely more on SDI than IP. However, for me the key issue is whether light compression will be embraced further up the production workflow, relegating baseband video to ingest and acquisition. This was certainly the approach favored by those who wish to see 10G network infrastructure cater for video production needs, including mezzanine-compressed, future-proofed UHD production. For this to become commonplace, mild compression schemes like TICO, LLVC and VC-2 are needed, and these were certainly the discussion points for future codecs at the show. The alternative is the adoption of Ethernet interfacing above 10G, previously only seen at enterprise cores providing backbone connectivity of 40 or 100G. There was awareness of this possibility at the show, but for most, 10G with light compression was the only feasible short-term option for UHD production.
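A rough calculation shows why light compression keeps coming up in the 10G discussion. The usable-payload figure below is my own assumption for illustration, not a measurement.

```python
# Why light compression matters for UHD over 10G Ethernet. The usable payload
# figure is an assumption for illustration, not a measured value.
LINK_GBPS = 9.5  # assumed usable 10 GbE payload after Ethernet/IP/RTP overhead
UHD_GBPS = 3840 * 2160 * 10 * 2 * 60 / 1e9  # ~9.95 Gb/s, 2160p60 10-bit 4:2:2

def streams_per_link(compression_ratio):
    """UHD streams that fit on one link at a given light-compression ratio."""
    return int(LINK_GBPS // (UHD_GBPS / compression_ratio))

for ratio in (1, 2, 4):
    print(f"{ratio}:1 compression -> {streams_per_link(ratio)} UHD stream(s) per 10G link")
```

Uncompressed UHD alone overflows the link, while even a mild 4:1 mezzanine scheme of the kind discussed (TICO, LLVC, VC-2) leaves room for several streams, which is exactly the trade-off driving interest in these codecs.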
Putting IP interfacing aside, this leaves virtualization and cloud-based processing. At NAB, virtualization appeared a given, with many attendees wanting to discuss the technology in the context of cloud-based services and the level of assurance required before relinquishing responsibility to a third party. For the bigger video distribution operators, the advantages lie in service and support, commonality of inventory and the ability to abstract video-specific applications from the underlying server-based infrastructure. For emerging distribution operators, or those seeking service provision on an ad hoc basis, commissioning fully formed services was of prime interest. In both instances Harmonic's VOS technology stood out, undoubtedly a main draw at the show outside of the glare created by UHD.
– Ian Trow, Sr. Director, Emerging Technology & Strategy, Harmonic
We would like to answer “definitely, yes” but it is not as simple as it appears…
Introducing Videomenthe, a France-based company that started eight years ago as a value-added reseller specializing in media processing solutions and that has developed a long-standing partnership with Harmonic over the years, particularly around file-based transcoding.
Three years ago we decided to launch our own solution, Eolementhe™, a multi-vendor, cloud-based portal for media processing that includes Harmonic's file-based transcoding solutions – the WFS™ workflow engine controlling ProMedia® Carbon or ProMedia Xpress transcoding nodes.
When we launched Eolementhe, we understood that new players were about to rush into media processing operations, but many broadcasters were wondering how and why outsourcing transcoding operations to a cloud facility would work. Does it solve today's challenges, such as the multiplication of formats and standards and the fast flow of file-based content, with its new storage needs? As we all know, providing video for a wide array of platforms is a challenge in itself, making scalability a leitmotif for all video content providers.
We designed Eolementhe as a solution to these new market requirements. From a single, easy-to-use interface, customers, whether journalists, technicians, post-production professionals or broadcasters, can automate their media file workflows in the cloud.
However, moving all file-based transcoding operations to a cloud platform is not always what our customers need. By offering a hybrid approach through the SaaS model, Eolementhe lets customers keep their in-house solutions and add a plug-in with ready-to-use presets for scalability. The benefit of this approach is the ability to switch the plug-in off when it is not needed, providing all the agility and elasticity required by a business of any size.
In the end, cloud platforms used alongside on-premises file-based transcoding platforms help video owners maintain their workflows and achieve a common and vital goal – providing high-quality video, whatever the screen.
This time last year, our ViBE 4K HEVC encoder was brand-new – having just been launched at the 2015 NAB Show – and was already in use for groundbreaking 4K coverage of the French Open tennis championship. For this year’s French Open, the ViBE 4K, alongside other equipment, provided terrestrial viewers with an even more astonishing experience through a powerful combination of high dynamic range (HDR) and hybrid log-gamma (HLG) technologies. It was also used to provide a backup for the satellite broadcast.
In partnership with Eutelsat and TDF, the French Tennis Federation and France Télévisions used the ViBE 4K to broadcast the French Open live in UHD with HDR according to HLG specifications. The UHD broadcast was available on channel 81 over the TDF terrestrial network to Paris residents equipped with 4K televisions compatible with DVB-T2 and HEVC. The UHD channel was also available on Fransat. The channel was live during the last four days of the tournament to cover the semi-final and final matches for the men’s and women’s singles, as well as the final matches for the men’s and women’s doubles.
The ViBE 4K's HLG-HDR output has been successfully tested for decoding on Samsung's and LG's latest-model UHD televisions. The French Open UHD channel was broadcast at around 20 Mb/s on the terrestrial network and at 30 Mb/s via satellite.
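For readers unfamiliar with hybrid log-gamma, its transfer characteristic is simple enough to sketch; the segment forms and constants below follow the ITU-R BT.2100 definition of the HLG OETF (a square-root law for darker scene light, which is what keeps HLG watchable on SDR displays, and a logarithmic law for highlights).

```python
import math

# Sketch of the hybrid log-gamma (HLG) opto-electrical transfer function,
# with constants as specified in ITU-R BT.2100.
A = 0.17883277
B = 1 - 4 * A                  # 0.28466892
C = 0.5 - A * math.log(4 * A)  # 0.55991073

def hlg_oetf(e):
    """Map normalized scene linear light e in [0, 1] to the HLG signal in [0, 1]."""
    if e <= 1 / 12:
        return math.sqrt(3 * e)          # square-root segment (SDR-like)
    return A * math.log(12 * e - B) + C  # logarithmic segment (highlights)

print(hlg_oetf(0.0))     # 0.0
print(hlg_oetf(1 / 12))  # 0.5, where the two segments join continuously
print(hlg_oetf(1.0))     # very close to 1.0
```

The constants are chosen precisely so the two segments meet smoothly at e = 1/12 and peak scene light maps to full signal.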
Once again, the ViBE 4K has proven its power not only as the industry's most compact live 4K encoder, but also as the most leading-edge encoding solution available for terrestrial UHD broadcasts. In addition to supporting the main HDR standards, including HLG, the ViBE 4K offers HEVC compression efficiency that enables UHD encoding at a bit rate compatible with terrestrial networks. Broadcasters are able to encode two channels in a terrestrial mux and take advantage of a truly impressive audio feature set, including support for Dolby AC-4. Consumers are the real winners, with the ability to experience their favorite sports in stunning UHD and HDR.
The cloud. We’re hearing a lot about its applications for media processing, but is a cloud-based service ready for prime time? Can it really help you launch a new broadcast-quality live streaming channel in a matter of hours, at a fraction of the cost of building out a new on-premise data center? With Harmonic’s new VOS 360, the answer is definitely yes.
In this short episode of VidTech Insider, we cover the benefits VOS 360 brings to content creators and owners.
Simple user interface controls the complete video workflow
Easy content contribution from sources
Transcoding, origination, packaging and encryption on the fly
Global content delivery to consumer connected devices
Agility and Quality
Build or remove services in minutes while compute, network, storage and delivery schemas are automatically and seamlessly provisioned
Best-of-breed video quality with Harmonic’s PURE Compression Engine
Subscription and usage-based pricing to manage market volatility
Hosted and maintained 24/7 by Harmonic's globally distributed DevOps team
RESTful APIs for rapid integration with technology partners and the introduction of video workflows to existing applications
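As an illustration of what RESTful integration can look like in practice, here is a hypothetical sketch. The endpoint, payload fields and token are invented for the example; they are not the actual VOS 360 API, which Harmonic documents separately.

```python
import json
import urllib.request

# Hypothetical sketch of driving a cloud media-processing service through a
# RESTful API. Endpoint, fields and token are invented for illustration and
# are NOT the actual VOS 360 interface.
BASE_URL = "https://api.example.com/v1"  # placeholder host
TOKEN = "YOUR_API_TOKEN"                 # placeholder credential

def build_create_channel_request(name, source_url, profile):
    """Build (but do not send) a POST request that would create a channel."""
    payload = json.dumps({
        "name": name,          # channel name
        "source": source_url,  # contribution source
        "profile": profile,    # transcoding/packaging preset
    }).encode("utf-8")
    return urllib.request.Request(
        f"{BASE_URL}/channels",
        data=payload,
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

req = build_create_channel_request("news-uhd", "srt://example.com:9000", "uhd-hevc")
print(req.get_method(), req.full_url)
```

The pattern, a token-authenticated JSON POST against a resource URL, is the point: any existing application that can make such a request can introduce a video workflow programmatically.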
As mentioned in part 1, High Dynamic Range (HDR) featured prominently at both CES and NAB. Not surprisingly, CES concentrated purely on the latest screens, with most demos staged to emphasize HDR capability. Even from a consumer perspective, results were variable, with many screens displaying content that certainly did show HDR but failed to remain faithful to the original scene. Artifacts aplenty were evident, particularly when content not intended for HDR was pushed to the limit! Highlighting such shortcomings isn't meant to detract from the potential of HDR, but it does illustrate the significant challenge facing those seeking early adoption.
Roll on a few months to NAB, and the same HDR screens featured, though this time in a different context. Attendees wanted to probe how practicable a workflow could be deployed and how it would co-exist with SDR services. The good news is that it is certainly possible to roll out HDR, particularly for Video On Demand (VOD) services; assessing how viable the approach would be for other types of service, however, highlights the aspects demanding attention if debutant UHD channels are to scale in quality beyond 4K resolution alone. Whereas VOD services consist predominantly of cinematic content that allows for HDR grading as part of the workflow, current DTH schedules consist of material from a wide variety of sources that is, crucially, often live.
HDR10 is now established for cinematic content and has the advantage of being the scheme implemented in first-generation HDR-capable screens. But how will SDR content, or HDR material graded by a different scheme, be catered for? This is where the difference between a staged demo at a trade show and a viable workflow with coherent metadata progression from ingest to display becomes apparent. The vast majority of NAB demonstrations consisted of beautiful content graded for a specific screen, all signaled by static metadata, not at all representative of a typical channel workflow, let alone the mixed variety of screen technologies on show at CES.
The solution to this signaling dilemma partially exists in the production domain, but anyone with demo responsibility will be only too aware that signaling, support for schemes other than HDR10 and the vagaries of tone mapping have yet to be commonly available on the current crop of HDR-capable 4K screens. Tone mapping is supposed to be the magic ingredient that allows content graded for one maximum luminance to be displayed correctly on all screen types, i.e. graded for the greater luminance range of quantum dot yet correctly displayed on OLED!
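A toy example makes the tone-mapping problem concrete. This is a simple knee-and-roll-off curve of my own, not any vendor's or standard's actual algorithm: luminance graded for a bright master must be squeezed onto a display with a lower peak without clipping highlights or disturbing shadows.

```python
# Toy tone-mapping curve (illustrative only, not a standardized algorithm):
# pass shadows through unchanged, then compress highlights so the mastering
# peak lands exactly on the display peak instead of clipping.
def tone_map(l, master_peak=1000.0, display_peak=500.0):
    """Map luminance l (cd/m^2) graded for master_peak onto a dimmer display."""
    knee = 0.75 * display_peak  # below this, luminance passes through
    if l <= knee:
        return l
    # Ease-out curve compressing [knee, master_peak] into [knee, display_peak].
    t = (l - knee) / (master_peak - knee)
    return knee + (display_peak - knee) * (2 * t - t * t)

for nits in (100.0, 375.0, 700.0, 1000.0):
    print(f"{nits:6.0f} nit master -> {tone_map(nits):6.1f} nit display")
```

Every display vendor picks its own curve and its own knee, which is exactly why the same graded content can look so different from one screen to the next.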
In so many of these scenarios I found the HDR content looked ghostly, with odd color shifts and lacking in detail. Sure, it was HDR, but it lacked faithful reproduction of the original scene! Another interim iteration of HDMI is surely needed, particularly to allow alternate HDR schemes like Hybrid Log Gamma and Dolby Vision to be implemented correctly alongside HDR10, truly opening up the prospect of HDR for live events! Even though the oddities of tone mapping made HDR occasionally look surreal, much of the really impressive material owed a great deal to the skill of colorists and graders, who more than ever prove that while the engineers have signaling issues to address, in the end post-production is truly a craft that will make or break HDR material!
So, having comprehensively dealt with HDR, my next blog will be the final post-NAB installment, aimed at issues pivotal to the business end of our industry, namely IP, virtualization, cloud services and compression tuned for broadband delivery.
– Ian Trow, Sr. Director, Emerging Technology & Strategy, Harmonic
The recently launched video "Aurora Borealis from Space" was praised by Al Roker on NBC's The Today Show as "amazing," while the BBC called it "spectacular" and CNN claimed that the video "may be the most beautiful thing you'll see today." But the accolades were just the tip of the iceberg of the story behind this video.
The video was launched simultaneously at the Harmonic booth at NAB 2016, on NASA's YouTube channel and on the NASA TV UHD channel.
The video was produced by Harmonic exclusively for NASA TV UHD, the first non-commercial consumer UHD channel in North America, which was launched at IBC last September. Leveraging the resolution of ultra high definition video (UHD 2160p60), the channel provides viewers with a front-row seat to gorgeous views captured from the International Space Station (ISS), in addition to other current and classic NASA missions.
Harmonic provides the end-to-end UHD video delivery system and post-production services while also managing operations. Creating this much high-quality content while working exclusively in uncompressed UHD workflows was the ideal challenge for Harmonic's digital media team, which uses the company's own equipment and solutions throughout the production and delivery workflow.
Joel Marsden, the Executive Producer of NASA TV UHD for Harmonic, supervised the construction of the channel infrastructure and content from scratch.
"We had to set up a veritable factory to create ongoing new episodes for the eight original new series featured on the channel: 'ISS Life,' 'Liftoff,' 'Earth View,' 'NASA Classics,' 'Solar System,' 'Development,' 'Deep Space' and 'Mars,'" said Marsden. "This meant that we had four edit and render stations working non-stop from the same shared media storage solution."
"We were extremely fortunate to get our hands on the new Harmonic MediaGrid 5840, which gave us an instant 'half a petabyte in a box' with its distributed, scale-out architecture, something you can't live without if you are generating over 8 TB a day of new media and files during peak production times."
Between editing, archiving and heavy renders, the pipeline was pushed to the very limit, with rock-solid results. But MediaGrid is not the only aspect of Harmonic's portfolio that was instrumental in bringing NASA TV UHD to the public.
All the finished shows are transferred to the Harmonic technical team in Atlanta, led by Scott Woods, where the content is loaded onto Harmonic's revolutionary Spectrum X playout server, which interfaces with the Electra X3 encoder and ProStream processors hosted at a NASA facility managed by Encompass Digital Media, home to the agency's satellite and NASA TV hubs.
As the video went live on NASA TV UHD and lit up Harmonic's booth at NAB 2016, it also went onto NASA's YouTube channel, where it became one of the agency's top ten most-watched videos in under a week.
The video and accompanying stories have been published in over 550 online publications in every region of the planet so far, to rave reviews. The Aurora video and NASA TV UHD prove that when the best combine, amazing video happens!
As expected, 4K / UHD was rejuvenated by the addition of High Dynamic Range (HDR) and a host of carefully crafted demos showing off the latest crop of debutant screens, unveiled earlier at CES in all their glory. The results were certainly impressive, but considerable technical insight was needed to unravel what was behind "the screen" and how applicable it would be to real-world services. At its best, HDR content shown on the latest 2016 screens looked amazing, but most attending NAB this year were trying to visualize what the route to an HDR upgrade would be.
Fundamental to understanding the likely workflow is an appreciation of the kind of service being developed, Video on Demand (VoD) or live, as well as the source and format of the content. Addressing these issues was the primary focus of my presentation at the Harmonic theatre this year, always an interesting litmus test of what is challenging the industry. For those not at NAB, a copy can be downloaded using the following link.
So, what are the main conclusions now that we've all returned from the show and had time to digest the news and events? It needs to be made clear from the outset that 4K / UHD is here to stay, certainly no flash in the pan like 3D! With such strong consumer adoption of 4K / UHD screens, and an industry keen to future-proof content, the onus is on delivery ecosystems to match the confidence in 4K / UHD shown upstream and downstream within the overall workflow.
While bandwidth is becoming more readily available, we are not yet at the stage where a full-scale conversion to 4K / UHD delivery is viable. First, the recent investment in HD infrastructure means HD is the starting point for many broadcasters contemplating how to meet consumers' expectations and justify their investment in UHD screens. Consequently, demos showing the benefits of acquiring content in 4K / UHD, down-converting to HD, and then relying on a set-top box or 4K / UHD screen to up-convert are very convincing. Many visitors asked whether HDR should be applied to HD. In my view, native 4K / UHD delivery will eventually happen, but in the short term HD delivery has to be factored in, especially if it is 1080p. This necessitates dealing with awkward backwards-compatibility issues concerning signaling and metadata, which to date have not been a strong point of current workflows.
The most convincing demos at NAB and CES all consisted of highly engaging content displayed on the most recent screens, which, while showing HDR at its best, in no way paved the way for an HDR service being universally rolled out in the near term. Certainly, there will continue to be 4K VoD streaming services for those fortunate enough to have top-end broadband provision, but for these to trigger a widespread shift to native UHD services requires the latest standardization decisions to be commonly available on consumer screens, viable signaling to exist throughout the production and delivery workflow, and a clear lift in quality compared with current HD services.
I'll drill down into what this means in reality for 4K / UHD workflows in part 2, and then move on from all the hype surrounding HDR to deal with the business end of service deployment, namely IP, virtualization, cloud services and compression tuned for broadband delivery, in part 3.
Can’t wait until then? Download our primer to HDR and why it’s a necessary part of the 4K ecosystem.
– Ian Trow, Sr. Director, Emerging Technology & Strategy, Harmonic