
Updated August 6, 2021: Auto-Offload offers up to 80% bandwidth savings over CPU-Offload for CAS 2020.10, 2021.01 and 2021.03. CAS 2021.07 offers Auto-Offload at increased image quality for video workloads and typically 50% bandwidth savings. Additional bandwidth savings can be achieved by setting a lower image quality if needed.

I’m extremely excited about this latest update to Teradici Cloud Access Software – as of release 2020.10 (October 2020), the PCoIP Ultra protocol enhancements introduce “Auto-Offload,” a powerful new feature for many enterprises, with particular relevance to media and entertainment studios grappling with work-from-home configuration challenges.

PCoIP Ultra previously offered two codec offload modes, as described in the first posting of my PCoIP Ultra Technical series:

  • CPU-Offload relies on Intel AVX2 instructions to accelerate display pixel compression across multi-core CPU architectures while preserving the hallmark Teradici technology benefits – namely build-to-lossless color accuracy and unrivalled image fidelity, even at 4K/UHD display resolutions and high frame rates.
  • GPU-Offload leverages NVIDIA NVENC technology as a system-efficient mode optimized for network bandwidth and CPU savings, especially for high-frame-rate content.

The challenge many of our high-performance customers faced in the past was the static nature of these two modes, which required IT admins to pre-configure user sessions on a case-by-case basis depending on factors such as workload, infrastructure specs and, since the onset of the pandemic, individual home network capabilities.

Auto-Offload makes that extra configuration work unnecessary. With the new Auto-Offload mode, PCoIP Ultra uses CPU-Offload by default but dynamically switches to GPU-Offload in-session during periods of high display activity. Auto-Offload optimizes performance by switching between the highest-quality mode needed for content accuracy and the system-efficient H.264 mode supported by NVENC.
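For reference, on a Linux PCoIP agent the offload mode is selected in the agent configuration file. The fragment below is a sketch only – the `pcoip.ultra` setting name and the value mapping shown (1 = CPU-Offload, 2 = GPU-Offload, 3 = Auto-Offload) should be verified against the Cloud Access Software admin guide for your release:

```ini
# /etc/pcoip-agent/pcoip-agent.conf (sketch -- verify the setting name and
# values against the Teradici admin guide for your CAS release)
#   pcoip.ultra = 0   PCoIP Ultra disabled
#   pcoip.ultra = 1   CPU-Offload (AVX2)
#   pcoip.ultra = 2   GPU-Offload (NVENC)
#   pcoip.ultra = 3   Auto-Offload
pcoip.ultra = 3
```

With this in place, no per-user pre-configuration is needed; the protocol picks the offload path in-session based on display activity.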

[Figure: Auto-Offload diagram]

The result is that tasks like reading text, using a Wacom device, making color decisions or doing delicate CAD work are executed on accurate display representations, eliminating the user frustration, guesswork and eye fatigue associated with blurry text. Meanwhile, tasks such as video editorial work and manipulation of CAD structures remain highly responsive even over home network connections.

I’ve run some benchmarks on my RTX Server (Intel Xeon 6248 24-core CPU, configured with eight vCPUs, paired with an NVIDIA RTX 6000 GPU) and I have to say, the efficiency benefits are pretty awesome!

[Figure: High-performance LAN diagram and CPU saving]

For a Big Buck Bunny video playback benchmark (looping intro scene, 1080p, 24 fps, default image quality), we observe Auto-Offload using ~5 Mbps versus ~30 Mbps for CPU-Offload – about 80% less network bandwidth. This is great for home network use cases: users get the benefits of lossless text and fine lines when reading or reviewing such content, yet can still play full-frame-rate video over a wide range of last-mile networks without loss of frame rate or suppression of image quality. The benefits extend to LAN use cases too – when playing high-frame-rate video content, PCoIP Ultra automatically switches to GPU-Offload which, in the 1080p example, used 3.7% CPU compared to 9% with CPU-Offload alone, a saving of over 50%.
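To make the percentages above concrete, the savings are simple reductions relative to the CPU-Offload baseline; a quick sketch of the arithmetic:

```python
def pct_saving(baseline: float, optimized: float) -> float:
    """Percentage reduction relative to the baseline figure."""
    return (baseline - optimized) / baseline * 100

# Bandwidth: CPU-Offload ~30 Mbps vs Auto-Offload ~5 Mbps
bw_saving = pct_saving(30, 5)       # ~83%, i.e. "about 80% less bandwidth"

# CPU load: CPU-Offload 9% vs GPU-Offload 3.7%
cpu_saving = pct_saving(9.0, 3.7)   # ~59%, i.e. "a saving of over 50%"

print(f"bandwidth saving: {bw_saving:.0f}%, CPU saving: {cpu_saving:.0f}%")
```

Your own numbers will vary with content, resolution, frame rate and image-quality settings, but the same comparison applies.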

The first release of PCoIP Ultra Auto-Offload has been optimized for Linux desktops but will benefit video editorial workloads under Windows too. Expect additional benefits for GL-based Windows applications such as Autodesk 3ds Max and others slated for our Cloud Access Software 2021.01 release. I plan to run additional benchmarks in conjunction with our development team’s fine-tuning of that upcoming release – so stay tuned for an update on that!