In this paper, we present a novel photometric compensation network named CASEformer, which is built upon the Swin module. For the first time, we combine coordinate attention and channel attention mechanisms to extract rich features from input images. Employing a multi-level encoder-decoder architecture with skip connections, we establish multiscale interactions between projection surfaces and projection images, achieving precise inference and compensation. Furthermore, through an attention fusion module, which simultaneously leverages both coordinate and channel information, we enhance the global context of feature maps while preserving fine texture and coordinate details. The experimental results demonstrate the superior compensation effectiveness of our approach compared to current state-of-the-art methods. Additionally, we propose a method for multi-surface projection compensation.
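The abstract describes fusing channel attention with coordinate attention. As a rough illustration only (this is not the authors' CASEformer code, and the paper's fusion module is more elaborate), the sketch below shows the basic idea in NumPy: a squeeze-and-excitation-style channel gate combined multiplicatively with height/width positional gates.

```python
# Hedged sketch: minimal NumPy illustration of combining channel attention
# (squeeze-and-excitation style) with coordinate attention. All function
# names and the multiplicative fusion are illustrative assumptions, not
# the paper's actual architecture.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x):
    # x: (C, H, W). Global average pool -> per-channel gate in (0, 1).
    pooled = x.mean(axis=(1, 2))               # (C,)
    return sigmoid(pooled)[:, None, None]      # (C, 1, 1), broadcasts over H, W

def coordinate_attention(x):
    # Pool along width and along height separately, so each gate keeps
    # positional (coordinate) information along one spatial axis.
    h_pool = x.mean(axis=2)                    # (C, H)
    w_pool = x.mean(axis=1)                    # (C, W)
    return sigmoid(h_pool)[:, :, None] * sigmoid(w_pool)[:, None, :]  # (C, H, W)

def fuse(x):
    # Multiplicative fusion of the two gates over the input feature map.
    return x * channel_attention(x) * coordinate_attention(x)

feat = np.random.rand(8, 16, 16).astype(np.float32)  # toy (C, H, W) feature map
out = fuse(feat)
print(out.shape)  # (8, 16, 16): gating preserves the feature-map shape
```

Because every gate lies in (0, 1), the fused output never exceeds the non-negative input; a learned version would replace the raw pooling with small convolution/MLP layers before the sigmoids.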
Yuqiang ZHANG
Changchun University of Science and Technology
Huamin YANG
Changchun University of Science and Technology
Cheng HAN
Changchun University of Science and Technology
Chao ZHANG
Changchun University of Science and Technology
Chaoran ZHU
Jilin University
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Yuqiang ZHANG, Huamin YANG, Cheng HAN, Chao ZHANG, Chaoran ZHU, "CASEformer — A Transformer-Based Projection Photometric Compensation Network" in IEICE TRANSACTIONS on Information and Systems,
vol. E107-D, no. 1, pp. 13-28, January 2024, doi: 10.1587/transinf.2023MUP0001.
Abstract: In this paper, we present a novel photometric compensation network named CASEformer, which is built upon the Swin module. For the first time, we combine coordinate attention and channel attention mechanisms to extract rich features from input images. Employing a multi-level encoder-decoder architecture with skip connections, we establish multiscale interactions between projection surfaces and projection images, achieving precise inference and compensation. Furthermore, through an attention fusion module, which simultaneously leverages both coordinate and channel information, we enhance the global context of feature maps while preserving enhanced texture coordinate details. The experimental results demonstrate the superior compensation effectiveness of our approach compared to the current state-of-the-art methods. Additionally, we propose a method for multi-surface projection compensation, further enriching our contributions.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2023MUP0001/_p
@ARTICLE{e107-d_1_13,
author={Yuqiang ZHANG and Huamin YANG and Cheng HAN and Chao ZHANG and Chaoran ZHU},
journal={IEICE TRANSACTIONS on Information and Systems},
title={CASEformer — A Transformer-Based Projection Photometric Compensation Network},
year={2024},
volume={E107-D},
number={1},
pages={13-28},
abstract={In this paper, we present a novel photometric compensation network named CASEformer, which is built upon the Swin module. For the first time, we combine coordinate attention and channel attention mechanisms to extract rich features from input images. Employing a multi-level encoder-decoder architecture with skip connections, we establish multiscale interactions between projection surfaces and projection images, achieving precise inference and compensation. Furthermore, through an attention fusion module, which simultaneously leverages both coordinate and channel information, we enhance the global context of feature maps while preserving enhanced texture coordinate details. The experimental results demonstrate the superior compensation effectiveness of our approach compared to the current state-of-the-art methods. Additionally, we propose a method for multi-surface projection compensation, further enriching our contributions.},
doi={10.1587/transinf.2023MUP0001},
ISSN={1745-1361},
month={January},
}
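For reuse in scripts, the BibTeX entry above can be read with a tiny regex-based parser. This is a hedged sketch for flat `field={value}` records only (no nested braces, no string macros, no multiple entries); the embedded entry is a lightly cleaned copy of the one above.

```python
# Hedged sketch: extract flat field={value} pairs from a single BibTeX
# entry. Assumes values contain no nested braces; a real workflow would
# use a dedicated BibTeX parser instead.
import re

BIBTEX = r"""@ARTICLE{e107-d_1_13,
  author={Yuqiang ZHANG and Huamin YANG and Cheng HAN and Chao ZHANG and Chaoran ZHU},
  journal={IEICE Transactions on Information and Systems},
  title={CASEformer -- A Transformer-Based Projection Photometric Compensation Network},
  year={2024},
  volume={E107-D},
  number={1},
  pages={13-28},
  doi={10.1587/transinf.2023MUP0001},
}"""

def parse_fields(entry: str) -> dict:
    # Each match pairs a field name with its brace-delimited value.
    return dict(re.findall(r"(\w+)\s*=\s*\{([^{}]*)\}", entry))

fields = parse_fields(BIBTEX)
print(fields["doi"])  # 10.1587/transinf.2023MUP0001
```

The same dictionary then drives any citation style, e.g. `f'{fields["author"]}, "{fields["title"]}," {fields["year"]}.'`.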
TY  - JOUR
TI  - CASEformer — A Transformer-Based Projection Photometric Compensation Network
T2  - IEICE TRANSACTIONS on Information and Systems
SP  - 13
EP  - 28
AU  - Yuqiang ZHANG
AU  - Huamin YANG
AU  - Cheng HAN
AU  - Chao ZHANG
AU  - Chaoran ZHU
PY  - 2024
DO  - 10.1587/transinf.2023MUP0001
JO  - IEICE TRANSACTIONS on Information and Systems
SN  - 1745-1361
VL  - E107-D
IS  - 1
JA  - IEICE TRANSACTIONS on Information and Systems
Y1  - 2024/01//
AB  - In this paper, we present a novel photometric compensation network named CASEformer, which is built upon the Swin module. For the first time, we combine coordinate attention and channel attention mechanisms to extract rich features from input images. Employing a multi-level encoder-decoder architecture with skip connections, we establish multiscale interactions between projection surfaces and projection images, achieving precise inference and compensation. Furthermore, through an attention fusion module, which simultaneously leverages both coordinate and channel information, we enhance the global context of feature maps while preserving enhanced texture coordinate details. The experimental results demonstrate the superior compensation effectiveness of our approach compared to the current state-of-the-art methods. Additionally, we propose a method for multi-surface projection compensation, further enriching our contributions.
ER  -