NAVA: A Network-Adaptive View-Aware Volumetric Video Streaming Framework
Main Authors:
Format: Article
Language: English
Published: IEEE, 2025-01-01
Series: IEEE Access
Subjects:
Online Access: https://ieeexplore.ieee.org/document/10870276/
Summary: Volumetric video is the emerging format for representing real-world dynamic objects, such as humans, in Extended Reality (XR) applications. However, real-time streaming of volumetric video to user devices is challenging due to the extremely high data rate and low latency requirements. This paper introduces NAVA, a novel network-adaptive view-aware volumetric video streaming framework for XR scenes consisting of multiple volumetric sequences. The proposed framework dynamically adapts the quality of individual volumetric sequences based on network conditions and the user's viewpoint to optimize streaming performance under network constraints. In our framework, multiple versions of each volumetric video at different quality levels are prepared and stored on the server in advance. The rate allocation problem is formulated as an optimization problem that takes into account the visible area of each sequence as well as the network constraint. We then present two solutions to decide the quality of each volumetric video in real time. Extensive evaluation shows that the proposed framework can increase viewport quality by 0.5–1.1 dB compared to existing methods. The outcome of this study is expected to accelerate the adoption of real-time interactive XR applications, enabling users to experience and interact with dynamic virtual environments seamlessly.
ISSN: 2169-3536
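The summary describes a rate-allocation step that weighs each sequence's visible area against a network budget. The sketch below is only an illustrative greedy heuristic under assumed inputs (pre-encoded quality levels per sequence, an estimated visible area per sequence, and a bandwidth budget); it is not the paper's actual formulation or either of its two solution methods, and all identifiers (QualityLevel, allocate_rates, visible_area) are hypothetical.

```python
# Illustrative sketch: pick one quality level per volumetric sequence so that
# visibility-weighted quality is high while total bitrate stays within budget.
# This greedy marginal-utility rule is an assumption, not the NAVA algorithm.
from dataclasses import dataclass
from typing import List


@dataclass
class QualityLevel:
    bitrate_mbps: float   # cost of streaming this pre-encoded version
    quality_db: float     # e.g., PSNR of this version


def allocate_rates(
    levels_per_seq: List[List[QualityLevel]],  # per sequence, sorted by ascending bitrate
    visible_area: List[float],                 # viewport-visible area of each sequence
    budget_mbps: float,                        # estimated network budget
) -> List[int]:
    """Greedily upgrade the sequence with the best visibility-weighted
    quality gain per extra Mbps until the budget is exhausted."""
    chosen = [0] * len(levels_per_seq)          # start every sequence at lowest quality
    used = sum(levels[0].bitrate_mbps for levels in levels_per_seq)

    while True:
        best_seq, best_gain = -1, 0.0
        for i, levels in enumerate(levels_per_seq):
            nxt = chosen[i] + 1
            if nxt >= len(levels):
                continue
            extra = levels[nxt].bitrate_mbps - levels[chosen[i]].bitrate_mbps
            if extra <= 0 or used + extra > budget_mbps:
                continue
            # Marginal utility: visible area times quality improvement per extra Mbps.
            gain = visible_area[i] * (levels[nxt].quality_db - levels[chosen[i]].quality_db) / extra
            if gain > best_gain:
                best_seq, best_gain = i, gain
        if best_seq < 0:
            break                               # no affordable upgrade remains
        used += (levels_per_seq[best_seq][chosen[best_seq] + 1].bitrate_mbps
                 - levels_per_seq[best_seq][chosen[best_seq]].bitrate_mbps)
        chosen[best_seq] += 1
    return chosen
```

In this sketch, sequences that occupy more of the viewport attract upgrades first, which mirrors the view-aware weighting described in the summary; the paper itself formulates the problem as a constrained optimization and offers two real-time solution strategies whose details are available via the Online Access link above.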