Optimizing video streaming with lossy compression

November 18, 2024
7 Min
Video Education

To demonstrate the power of compression, we started with a 70.2 MB source video. By applying lossy compression, the file size dropped to an impressive 31.6 MB. Despite the much lower bitrate, the visual quality remains nearly identical to the original, enabling efficient streaming with reduced buffering and faster load times. The comparison video above highlights the visual impact, showing how lossy compression can drastically cut bandwidth usage without compromising the viewing experience.

Why compress video files?

Uncompressed video preserves the original picture exactly, but it is enormous in size and practically unmanageable for streaming, storage, and everyday use. To understand the scale, let's look at a standard 1080p video running at 60 frames per second:

  • Resolution: 1920 x 1080 pixels
  • Frame Rate: 60 frames per second
  • Data per Pixel: 3 bytes (one each for red, green, and blue)

This translates to a data rate of about 370 MB per second:

1920 × 1080 pixels × 3 bytes per pixel × 60 frames per second = 373,248,000 bytes per second ≈ 370 MB/s

Even high-speed storage like SSDs would struggle to continuously write this volume of data, and a 50GB Blu-ray disc could hold only about 2 minutes of footage at this rate! Without compression, the sheer data volume limits how and where these videos can be stored, moved, and streamed.
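
For developers who prefer to see the numbers, here is the same arithmetic as a short Python snippet (the 50 GB Blu-ray capacity is the figure used above):

width, height = 1920, 1080      # 1080p resolution
bytes_per_pixel = 3             # one byte each for red, green, and blue
fps = 60                        # frames per second

bytes_per_second = width * height * bytes_per_pixel * fps
print(f"Raw data rate: {bytes_per_second / 1e6:.0f} MB/s")                # ~373 MB/s

disc_capacity = 50e9            # 50 GB Blu-ray disc, in bytes
print(f"Minutes per disc: {disc_capacity / bytes_per_second / 60:.1f}")   # ~2.2 minutes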

A developer's guide to using FFmpeg for lossy video compression

FFmpeg is a powerful open-source tool for processing video and audio files. Here’s a basic example of how to compress a video using FFmpeg:

FFmpeg command:

ffmpeg -i Video.mp4 -vcodec libx264 -crf 30 -preset medium Video_compressed.mp4

Here’s a breakdown of the command:

  • -i Video.mp4: This specifies your input file.
  • -vcodec libx264: This sets the video codec to H.264, which is efficient for lossy compression.
  • -crf 30: This sets the Constant Rate Factor, which controls the quality/size trade-off. A lower value yields better quality at a larger file size; common values range from 18 to 28, and 30 here leans toward a smaller file.
  • -preset medium: This option controls the encoding speed. Medium is a good balance between speed and compression efficiency.
  • Video_compressed.mp4: This is the name of your output file, where the compressed video will be saved.

Tips:

The CRF scale typically ranges from 0 to 51, where:

  • Lower CRF values (0-18) yield higher quality and larger file sizes.
  • Higher CRF values (25-35) produce lower quality and smaller file sizes.
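
To see this trade-off on your own footage, you can encode the same clip at a few CRF values and compare the resulting file sizes. The sketch below shells out to FFmpeg from Python; Video.mp4 is just the example input used earlier, so substitute your own file.

import subprocess
from pathlib import Path

source = "Video.mp4"                      # example input from the command above
for crf in (18, 23, 28, 33):
    output = f"Video_crf{crf}.mp4"
    subprocess.run(
        ["ffmpeg", "-y", "-i", source,
         "-vcodec", "libx264", "-crf", str(crf), "-preset", "medium",
         output],
        check=True,
    )
    print(f"CRF {crf}: {Path(output).stat().st_size / 1e6:.1f} MB")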

Key results

The H.264 codec preserves visual quality while cutting the file size by more than half, making it ideal for streaming. The table below compares the original and compressed files:

Attribute              Original                     Compressed
Video Codec            H.264                        H.264
File Size              70.2 MB                      31.6 MB
Overall Bitrate        19.6 Mb/s                    8.8 Mb/s
Encoding Settings      -                            crf=30
VMAF Score             -                            90.459
Resolution             3840 x 2160                  3840 x 2160
Frame Rate             50 fps                       50 fps
Bit Depth              10 bits                      10 bits
Chroma Subsampling     4:2:0                        4:2:0
Bits/(Pixel*Frame)     0.047                        0.021
Duration               30 s                         30 s
Container Format       MPEG-4 (Base Media)          MPEG-4 (Base Media)
Codec ID               isom (isom/iso2/avc1/mp41)   isom (isom/iso2/avc1/mp41)
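
The VMAF score in the table was measured by comparing the compressed file against the original. If your FFmpeg build includes the libvmaf filter, you can reproduce that kind of measurement with a call like the sketch below (file names match the earlier example).

import subprocess

# The first input is the distorted (compressed) file, the second is the reference.
subprocess.run(
    ["ffmpeg",
     "-i", "Video_compressed.mp4",
     "-i", "Video.mp4",
     "-lavfi", "libvmaf",
     "-f", "null", "-"],
    check=True,
)
# FFmpeg prints the aggregate VMAF score at the end of its log output.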

Understanding lossy compression

Lossy compression reduces file size by permanently removing some data from the original video. This technique takes advantage of our visual perception, allowing for the elimination of less significant information. For instance, subtle details that our eyes might overlook can be discarded without affecting the viewing experience.

  • Transform coding: Often implemented with the Discrete Cosine Transform (DCT), transform coding converts spatial data into frequency data, making it easier to identify and discard less important components.
  • Quantization: By reducing the precision of frequency components, quantization reduces data but results in a slight loss of detail.
  • Entropy coding: Using methods like Huffman coding, entropy coding reduces redundancy in the data, further compressing the file size without additional quality loss.
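
To make these three steps concrete, here is an illustrative Python sketch that applies them to a single 8×8 block of luma samples. The constant quantization step is made up for illustration; real encoders use carefully tuned quantization matrices and arithmetic or Huffman entropy coders.

import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
block = rng.integers(100, 140, size=(8, 8)).astype(float)   # one 8x8 block of luma samples

# 1. Transform coding: the 2D DCT concentrates the block's energy in a few low frequencies.
coeffs = dctn(block, norm="ortho")

# 2. Quantization: dividing by a step size and rounding discards fine detail (the lossy step).
step = 16.0
quantized = np.round(coeffs / step)
print("zero coefficients:", int(np.count_nonzero(quantized == 0)), "of 64")

# 3. Entropy coding would now pack those long runs of zeros compactly without further loss.

# Decoding reverses the transform; the result is close to, but not exactly, the original block.
reconstructed = idctn(quantized * step, norm="ortho")
print("max reconstruction error:", float(np.abs(reconstructed - block).max()))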

Lossy vs. Lossless compression: Which is best for your streaming needs?

With lossy compression, data is permanently removed to reduce file size, making it ideal for streaming, where bandwidth efficiency matters more than perfect quality. Techniques like reducing resolution or dropping frames may slightly degrade quality, but they enable smoother playback, quicker load times, and less buffering.

  • Smaller file sizes, reduced bandwidth usage, faster loading times.
  • Permanent quality loss; artifacts can accumulate with repeated re-encoding.

With lossless compression, there is no loss of data or quality, but file sizes remain large. It's mainly used in professional environments where maintaining the original quality is essential. Techniques like eliminating temporal and spatial redundancies reduce file size, but they cannot compete with lossy methods in terms of size reduction.

  • Retains original quality, ideal for professional editing.
  • Larger file sizes, requiring more bandwidth for streaming.
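
The difference shows up immediately if you encode the same clip both ways. The sketch below uses x264's lossless mode (-qp 0) next to a typical lossy CRF encode; expect the lossless file to be several times larger.

import subprocess
from pathlib import Path

source = "Video.mp4"                  # example input
jobs = {
    "Video_lossless.mp4": ["-vcodec", "libx264", "-qp", "0"],    # x264 lossless mode
    "Video_lossy.mp4":    ["-vcodec", "libx264", "-crf", "23"],  # typical lossy setting
}
for output, args in jobs.items():
    subprocess.run(["ffmpeg", "-y", "-i", source, *args, output], check=True)
    print(output, f"{Path(output).stat().st_size / 1e6:.1f} MB")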

Top codecs for lossy video compression: H.264, HEVC, and AV1

Several codecs are widely used for lossy compression, each with its unique benefits:

  • H.264/AVC (Advanced Video Coding): Efficient and compatible across devices, balancing quality and file size effectively, which makes it a go-to for streaming services.
  • H.265/HEVC (High Efficiency Video Coding): Offers superior compression compared to H.264, allowing higher quality at lower bitrates, which is ideal for 4K content.
  • AV1: An open, royalty-free codec developed by the Alliance for Open Media and designed with web delivery in mind, AV1 provides excellent compression efficiency but demands more processing power.
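
With FFmpeg, switching between these codecs is mostly a matter of changing the encoder name, provided your build includes libx265 and libaom-av1. The CRF values below are illustrative starting points taken from FFmpeg's encoding guides, not tuned recommendations.

import subprocess

source = "Video.mp4"
jobs = [
    ("out_h264.mp4", ["-c:v", "libx264",    "-crf", "23"]),
    ("out_hevc.mp4", ["-c:v", "libx265",    "-crf", "28"]),               # HEVC
    ("out_av1.mp4",  ["-c:v", "libaom-av1", "-crf", "30", "-b:v", "0"]),  # constant-quality AV1
]
for output, args in jobs:
    subprocess.run(["ffmpeg", "-y", "-i", source, *args, output], check=True)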

How to balance quality and compression for seamless video streaming

Achieving the right balance between video quality and file size involves several strategies:

  • Bitrate management: Select a bitrate that maintains quality while minimizing data usage. Variable bitrate (VBR) encoding optimizes quality by adjusting based on content.
  • Resolution and frame rate: Reducing these parameters can significantly lower file size. Adaptive streaming techniques adjust quality based on user bandwidth.
  • Testing and optimization: Use A/B testing to evaluate how different compression settings impact perceived quality, with tools like FFmpeg to automate the process.
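
A common way to combine these ideas is a constrained-quality encode: CRF handles quality, maxrate and bufsize cap bitrate spikes, and a scale filter drops the resolution. The values in this sketch are illustrative starting points.

import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "Video.mp4",
     "-vf", "scale=-2:720",                 # downscale to 720p, preserving aspect ratio
     "-c:v", "libx264", "-crf", "23",
     "-maxrate", "4M", "-bufsize", "8M",    # cap bitrate spikes for smoother delivery
     "-c:a", "aac", "-b:a", "128k",
     "Video_720p.mp4"],
    check=True,
)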

How lossy compression affects user experience

Lossy compression influences user experience in various ways:

  • Buffering and latency: Smaller file sizes lead to quicker loading times and less buffering, enhancing satisfaction.
  • Quality perception: Excessive compression can introduce artifacts like blurriness or pixelation, which can frustrate viewers.
  • Adaptive streaming: Implementing adaptive bitrate streaming helps adjust video quality in real-time based on bandwidth conditions, ensuring a smoother experience.
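
Adaptive bitrate streaming starts with a bitrate ladder: the same source encoded at several resolutions and bitrates that the player can switch between. The sketch below produces a deliberately simplified two-rung ladder; a packager (or FFmpeg's HLS and DASH muxers) would then expose the renditions in a single manifest.

import subprocess

source = "Video.mp4"
ladder = [
    ("rendition_1080p.mp4", "scale=-2:1080", "5M"),
    ("rendition_480p.mp4",  "scale=-2:480",  "1M"),
]
for output, scale, bitrate in ladder:
    subprocess.run(
        ["ffmpeg", "-y", "-i", source,
         "-vf", scale,
         "-c:v", "libx264", "-b:v", bitrate, "-maxrate", bitrate, "-bufsize", bitrate,
         "-c:a", "aac", "-b:a", "128k",
         output],
        check=True,
    )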

Key methods in lossy compression: Resolution, frames, and audio optimization

Lossy compression achieves smaller file sizes by permanently removing less significant information. While the quality is reduced to some extent, it’s done in a way that minimizes the impact on human perception. Below are the different methods used in lossy compression:

  • Reduced resolution: The resolution of a video refers to the number of pixels used to display the image. Reducing resolution is one of the most common ways to shrink file size in lossy compression. For example, converting a 4K video to 1080p reduces the number of pixels, making the file smaller while still offering decent quality for many users. This technique is often used in adaptive streaming, where the video quality is adjusted based on the user's bandwidth.
  • Reduced RGB/Pixels: Each pixel in a video contains color information, typically represented in RGB (Red, Green, Blue) channels. Reducing the number of pixels or the precision of the RGB values can significantly lower the file size. This might result in a loss of some fine details or colors, but in many cases, the difference is barely noticeable to viewers. For example, less vibrant colors or fewer gradients might be used in areas that aren't the focus of attention.
  • Reduced frames: The frame rate refers to the number of frames displayed per second (fps). Reducing the frame rate (e.g., from 60 fps to 30 fps) can lead to a more compact file, but it might make fast motion scenes look choppy. Lossy compression can also drop some frames that are considered redundant or blend them together, thus saving space while maintaining fluid motion.
  • Reduced audio samples: Audio compression works similarly by reducing the precision of audio data or removing frequencies that are less noticeable to human ears. In lossy video compression, lower bitrates for audio (e.g., reducing from 320 kbps to 128 kbps) or sample rates can reduce the file size. While it lowers audio fidelity, the impact on the average listener is often negligible.
  • Reduced motion vectors: Motion vectors are part of the temporal compression in video, used to describe the movement of objects between frames. In lossy compression, some precision in these vectors can be sacrificed, resulting in smaller file sizes but possibly minor inaccuracies in motion rendering.
  • Reduced metadata: Metadata includes information like subtitles, captions, codecs used, or video tags. While metadata can be essential for editing or organizing videos, it increases file size without affecting the visual or auditory quality of the video itself. Lossy compression may reduce or remove excess metadata to further shrink the file.
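
Several of these methods can be combined in a single FFmpeg pass: lower resolution, lower frame rate, a reduced audio bitrate, and stripped metadata. The settings in this sketch are illustrative, not a recommended profile.

import subprocess

subprocess.run(
    ["ffmpeg", "-y", "-i", "Video.mp4",
     "-vf", "scale=-2:1080,fps=30",     # reduced resolution and frame rate
     "-c:v", "libx264", "-crf", "26",
     "-c:a", "aac", "-b:a", "128k",     # reduced audio bitrate
     "-map_metadata", "-1",             # drop global metadata carried over from the input
     "Video_small.mp4"],
    check=True,
)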

Exploring lossless compression: Temporal, spatial, and color redundancy explained

Lossless compression aims to reduce file size without losing any data or quality. It focuses on removing redundancies in video content while preserving every bit of information, making it ideal for professional editing or archival purposes.

  • Temporal redundancy elimination: Temporal redundancy refers to repetitive content across consecutive video frames. For example, in a video of a person talking, the background remains the same for several frames while only the person's face or mouth moves. By detecting these similarities, lossless compression algorithms store only the differences between frames, described with motion vectors, instead of each frame in full, significantly reducing file size without losing quality.
  • Spatial redundancy elimination: Spatial redundancy refers to redundancies within a single video frame. For example, a large part of a video frame might have the same color or texture, such as a clear blue sky. Compression algorithms use spatial redundancy elimination techniques, like run-length encoding (RLE) or block-based encoding, to reduce the size by grouping similar areas of the frame and storing them more efficiently.
  • Color redundancy elimination: Neighboring pixels often carry very similar color information. Lossless compression exploits this by predicting color values from their neighbors and storing only the exactly reversible differences. Related techniques such as chroma subsampling, which keeps less color detail in regions where the human eye is less sensitive, shrink files further but are strictly speaking lossy.
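
Spatial redundancy elimination is the easiest of these to demonstrate. The toy run-length encoder below stores runs of identical values as (value, count) pairs and reverses them exactly, which is what makes the approach lossless; real codecs apply far more sophisticated prediction on top of this idea.

def rle_encode(samples):
    """Collapse runs of identical values into (value, count) pairs."""
    encoded, run_value, run_length = [], samples[0], 1
    for value in samples[1:]:
        if value == run_value:
            run_length += 1
        else:
            encoded.append((run_value, run_length))
            run_value, run_length = value, 1
    encoded.append((run_value, run_length))
    return encoded

def rle_decode(encoded):
    """Expand (value, count) pairs back into the original sequence."""
    return [value for value, count in encoded for _ in range(count)]

row = [17] * 12 + [42] * 3 + [17] * 5    # one row from a mostly flat image region
packed = rle_encode(row)
assert rle_decode(packed) == row         # perfectly reversible, i.e. lossless
print(packed)                            # [(17, 12), (42, 3), (17, 5)]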

How FastPix can help optimize compression for video streaming  

FastPix simplifies the implementation of compression and ensures efficient streaming with its video processing features:

  1. Context-aware encoding: FastPix uses context-aware encoding to dynamically adjust compression settings based on content type. For example, high-motion sports footage and static lecture videos can have tailored compression for optimal quality and size.
  2. Support for modern codecs: FastPix supports H.264, H.265 (HEVC), and AV1 codecs, allowing developers to achieve higher compression ratios without compromising visual quality.
  3. Adaptive bitrate streaming: Lossy compression works hand-in-hand with adaptive bitrate streaming, a feature built into FastPix. It ensures smooth playback across varying network conditions, dynamically adjusting video quality to reduce buffering and latency.
  4. Batch processing and automated encoding: With FastPix, developers can automate video compression workflows for large libraries, saving time and ensuring consistent results across all content.
  5. Real-time compression for live streaming: FastPix offers instant live encoding, allowing for real-time compression of live video feeds. This ensures that live events reach viewers quickly and efficiently, even under bandwidth constraints.

By using FastPix's API-driven solutions, developers can implement these advanced techniques with ease, reducing file sizes, improving quality, and delivering seamless streaming experiences to users.

Conclusion

By utilizing advanced techniques such as transform coding, quantization, and entropy coding, developers can optimize video for seamless playback and minimal bandwidth consumption.

Striking the right balance between compression and quality is key to ensuring an exceptional viewing experience. With the right tools and configurations, lossy compression can deliver smooth, fast-loading streams that meet the demands of modern audiences without compromising too much on quality.
