Zhi LIU Heng WANG Yuan LI Hongyun LU Hongyuan JING Mengmeng ZHANG
In video-based point cloud compression (V-PCC), Coding Unit (CU) partitioning has extremely high computational complexity, and the Just Noticeable Difference (JND) model is an effective metric for guiding this process. However, this paper finds that the performance of the traditional JND model degrades in V-PCC. For the attribute video, the pixel-filling operation reduces the brightness perception capability of the JND model. For the geometry video, the depth-filling operation degrades the depth perception capability of depth-based JND (JNDD) models in boundary areas. In this paper, a joint JND model (J_JND) is proposed for the attribute video to improve brightness perception, and an occupancy-map-guided JNDD model (O_JNDD) is proposed for the geometry video to improve the accuracy of depth-difference estimation at boundaries. Based on the two improved JND models, a fast CU partitioning algorithm with adaptive CU depth prediction is proposed for V-PCC. Experimental results show that the proposed algorithm saves 27.46% of total coding time at the cost of only 0.36% and 0.75% Bjontegaard Delta rate increments under the geometry Point-to-Point (D1) error and the attribute luma Peak Signal-to-Noise Ratio (PSNR), respectively.
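The core idea of masking out padded pixels when estimating JND can be sketched as follows. This is a minimal illustration, not the paper's actual model: it uses a classic luminance-adaptation JND curve (Chou-and-Li style) and assumes a hypothetical `occupancy_masked_jnd` helper that restricts the background-luminance estimate to occupied pixels, so that V-PCC's filled pixels do not bias the threshold.

```python
import numpy as np

def luminance_jnd(bg):
    # Classic background-luminance visibility threshold (Chou & Li style):
    # higher thresholds in dark and bright regions, minimum near mid-gray.
    bg = np.asarray(bg, dtype=np.float64)
    return np.where(bg <= 127,
                    17.0 * (1.0 - np.sqrt(bg / 127.0)) + 3.0,
                    3.0 / 128.0 * (bg - 127.0) + 3.0)

def occupancy_masked_jnd(block, occupancy):
    # Illustrative helper: estimate background luminance only from occupied
    # pixels, ignoring the padded pixels of the V-PCC attribute video.
    occ = occupancy.astype(bool)
    if not occ.any():
        return 0.0  # fully padded block: no visible content to protect
    bg = block[occ].mean()
    return float(luminance_jnd(bg))
```

A block whose occupied pixels sit near mid-gray yields the minimum threshold (about 3), while dark or bright occupied regions yield higher thresholds, i.e. more tolerance to distortion.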
Mengmeng ZHANG Zeliang ZHANG Yuan LI Ran CHENG Hongyuan JING Zhi LIU
Point cloud video contains not only color information but also spatial position information, and usually has a large volume of data. Typical rate-distortion optimization algorithms based on the Human Visual System consider only the color information, which limits coding performance. In this paper, a Coding Tree Unit (CTU)-level quantization parameter (QP) adjustment algorithm based on Just Noticeable Difference (JND) and spatial complexity is proposed to improve the subjective and objective quality of Video-Based Point Cloud Compression (V-PCC). First, it is found that the JND model degrades at the CTU level for attribute video due to the pixel-filling strategy of V-PCC, and an improved JND model is designed using the occupancy map. Second, a spatial complexity metric is designed to measure the visual importance of each CTU. Finally, a CTU-level QP adjustment scheme based on both the JND level and visual importance is proposed for the geometry and attribute videos. Experimental results show that, compared with the latest V-PCC (TMC2-18.0) anchor, the BD-rate is reduced by 2.8% and 3.2% for the D1 and D2 metrics, respectively, and the subjective quality is improved significantly.
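The CTU-level QP adjustment described above can be sketched as a simple rule: raise the QP where distortion is perceptually masked (high JND, high spatial complexity) and lower it where artifacts would be visible. This is a hedged illustration only; the thresholds, offset, and the use of luma variance as the complexity metric are assumptions for the sketch, not the paper's actual parameters.

```python
import numpy as np

def spatial_complexity(ctu):
    # Luma variance as a simple stand-in for the paper's complexity metric.
    return float(np.var(ctu))

def adjust_qp(base_qp, jnd_level, complexity,
              jnd_thresh=5.0, cplx_thresh=100.0, delta=2):
    # Hypothetical thresholds and offset, for illustration only.
    qp = base_qp
    if jnd_level > jnd_thresh and complexity > cplx_thresh:
        qp += delta      # perceptually insensitive CTU: save bits
    elif jnd_level < jnd_thresh and complexity < cplx_thresh:
        qp -= delta      # smooth, sensitive CTU: protect quality
    return int(np.clip(qp, 0, 51))  # clamp to the HEVC QP range
```

CTUs that are neither clearly masked nor clearly sensitive keep the base QP, so the scheme only spends or saves bits where the perceptual evidence is consistent.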