3D video content depends on the shooting conditions, in particular the camera positions. Controlling the depth range in post-processing is difficult but essential, because video from arbitrary camera positions must be generated. If full light field information is available, video from any viewpoint can be generated exactly, making such post-processing possible; however, a light field contains a huge amount of data and is not easy to capture. To reduce the data volume, we previously proposed the visually equivalent light field (VELF), which exploits the characteristics of human vision. Although a number of cameras are required, a VELF can be captured with a camera array. Since camera interpolation is performed by linear blending, the computation is simple enough that the ray distribution field of a VELF can be constructed by optical interpolation in the VELF3D display, which achieves high image quality thanks to its high pixel-usage efficiency. In this paper, we summarize the relationship between the characteristics of human vision, VELF, and the VELF3D display. We then propose a method to control the depth range of the image observed on the VELF3D display, and discuss the effectiveness and limitations of showing the processed image on that display. Our method can also be applied to other 3D displays, and since the calculation is just a weighted average, it is suitable for real-time applications.
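As a rough illustration of the linear blending the abstract refers to, the following Python sketch synthesizes an intermediate viewpoint as a per-pixel weighted average of two neighboring camera images. This is only a minimal sketch of the general technique: the function name, the 0-to-1 blending parameter, and the 8-bit image convention are our own assumptions, not details taken from the paper.

import numpy as np

def interpolate_view(img_left: np.ndarray, img_right: np.ndarray, t: float) -> np.ndarray:
    """Synthesize an intermediate view between two neighboring cameras.

    t = 0.0 reproduces the left camera image, t = 1.0 the right one;
    values in between give a per-pixel weighted average (linear blending).
    """
    if not 0.0 <= t <= 1.0:
        raise ValueError("t must lie between the two camera positions (0 to 1)")
    # Per-pixel weighted average of the two neighboring camera images.
    blended = (1.0 - t) * img_left.astype(np.float64) + t * img_right.astype(np.float64)
    return np.clip(blended, 0.0, 255.0).astype(np.uint8)

# Example: a viewpoint 30% of the way from the left camera to the right one.
# left = np.asarray(...); right = np.asarray(...)  # two 8-bit camera images
# middle = interpolate_view(left, right, 0.3)

Because each output pixel is just a weighted sum of two input pixels, this kind of interpolation can be realized optically in the display itself, which is what makes the approach attractive for real-time use.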
Munekazu DATE, Shinya SHIMIZU, Hideaki KIMATA, Dan MIKAMI, Yoshinori KUSACHI
Nippon Telegraph and Telephone Corporation
Munekazu DATE, Shinya SHIMIZU, Hideaki KIMATA, Dan MIKAMI, Yoshinori KUSACHI, "Depth Range Control in Visually Equivalent Light Field 3D" in IEICE TRANSACTIONS on Electronics,
vol. E104-C, no. 2, pp. 52-58, February 2021, doi: 10.1587/transele.2020DII0003.
URL: https://global.ieice.org/en_transactions/electronics/10.1587/transele.2020DII0003/_p
@ARTICLE{e104-c_2_52,
author={Munekazu DATE and Shinya SHIMIZU and Hideaki KIMATA and Dan MIKAMI and Yoshinori KUSACHI},
journal={IEICE TRANSACTIONS on Electronics},
title={Depth Range Control in Visually Equivalent Light Field 3D},
year={2021},
volume={E104-C},
number={2},
pages={52-58},
doi={10.1587/transele.2020DII0003},
ISSN={1745-1353},
month={February},}
TY - JOUR
TI - Depth Range Control in Visually Equivalent Light Field 3D
T2 - IEICE TRANSACTIONS on Electronics
SP - 52
EP - 58
AU - Munekazu DATE
AU - Shinya SHIMIZU
AU - Hideaki KIMATA
AU - Dan MIKAMI
AU - Yoshinori KUSACHI
PY - 2021
DO - 10.1587/transele.2020DII0003
JO - IEICE TRANSACTIONS on Electronics
SN - 1745-1353
VL - E104-C
IS - 2
Y1 - 2021/02//
ER -