Jinhua WANG Xuewei LI Hongzhe LIU
Generative adversarial networks (GANs) now play an important role in many learning tasks. The basic idea of a GAN is to train the discriminator and generator simultaneously. A GAN-based inverse tone mapping method can generate a high dynamic range (HDR) image of a scene from a sequence of images of that scene captured at different exposures. However, a subsequent tone mapping step is then required before the result can be displayed on an ordinary device. This paper proposes an end-to-end multi-exposure image fusion algorithm based on a relativistic average GAN (called RaGAN-EF), which fuses an image sequence with different exposures directly into a single high-quality image that can be displayed on an ordinary device without further processing. The RaGAN formulation is used to design the loss function, which retains more of the detail in the source images. In addition, the number of input images in multi-exposure fusion is often variable, which limits the applicability of many existing GANs. This paper therefore proposes a convolutional layer whose weights are shared between channels, solving the problem of variable input length. Experimental results demonstrate that the proposed method performs better in terms of both objective evaluation and visual quality.
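As a rough illustration of how such a channel-shared layer might handle a variable number of exposures, here is a minimal PyTorch sketch. It assumes the shared weights amount to applying one single-channel kernel bank to every input exposure and then merging across exposures; the max-based aggregation is our assumption, not the paper's design.

```python
import torch
import torch.nn as nn

class SharedChannelConv(nn.Module):
    """Convolution whose weights are shared across input channels,
    so the layer accepts any number of exposure images (a sketch)."""
    def __init__(self, out_channels=16, kernel_size=3):
        super().__init__()
        # One single-channel kernel bank, reused for every input exposure.
        self.conv = nn.Conv2d(1, out_channels, kernel_size,
                              padding=kernel_size // 2)

    def forward(self, x):                       # x: (B, N, H, W), N variable
        b, n, h, w = x.shape
        x = x.reshape(b * n, 1, h, w)           # treat each exposure separately
        x = self.conv(x)                        # same weights for every exposure
        x = x.reshape(b, n, -1, h, w)
        return x.max(dim=1).values              # merge exposures -> (B, C, H, W)

# The same layer works for 3 or 5 input exposures:
layer = SharedChannelConv()
print(layer(torch.randn(2, 3, 64, 64)).shape)   # torch.Size([2, 16, 64, 64])
print(layer(torch.randn(2, 5, 64, 64)).shape)   # torch.Size([2, 16, 64, 64])
```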
Jinhua WANG Weiqiang WANG Guangmei XU Hongzhe LIU
In this paper, we describe the direct learning of an end-to-end mapping between under-/over-exposed images and well-exposed images. The mapping is represented as a deep convolutional neural network (CNN) that takes multiple-exposure images as input and outputs a high-quality fused image. Our CNN has a lightweight structure, yet gives state-of-the-art fusion quality. Furthermore, for a given pixel, the influence of surrounding pixels increases as their distance decreases; if only the pixels within the convolution kernel's neighborhood are considered, the final result suffers. To overcome this problem, the kernel size is often increased, but doing so also increases the complexity of the network (too many parameters) and the training time. In this paper, we instead extract a number of sub-images from the source image and process them with the same CNN model, providing more neighborhood information for the convolution operation. Experimental results demonstrate that the proposed method achieves better performance in terms of both objective evaluation and visual quality.
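One plausible reading of this sub-image scheme is sketched below in PyTorch: the source image is split into shifted, subsampled sub-images, each processed by the same lightweight CNN, so that a small kernel effectively covers a wider neighborhood of the original image. The 2x2 subsampling scheme and the toy network shape here are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

def strided_subimages(img):
    """Split an image into four half-resolution sub-images by 2x2
    subsampling at the four possible offsets (a hypothetical reading
    of the sub-image scheme)."""
    return [img[..., i::2, j::2] for i in (0, 1) for j in (0, 1)]

cnn = nn.Sequential(                 # one shared lightweight CNN
    nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
    nn.Conv2d(8, 1, 3, padding=1),
)

img = torch.randn(1, 1, 64, 64)
# On a subsampled grid, each 3x3 kernel spans a 5x5 neighborhood of the
# original image, enlarging the receptive field without bigger kernels.
feats = [cnn(s) for s in strided_subimages(img)]
print([f.shape for f in feats])      # four (1, 1, 32, 32) feature maps
```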
Hongzhe LIU Ningwei WANG Xuewei LI Cheng XU Yaze LI
In the neck of a two-stage object detection network, feature fusion is generally carried out in either a top-down or a bottom-up manner. However, two types of imbalance may arise: feature imbalance in the neck of the model, and gradient imbalance in the region-of-interest extraction layer caused by the scale changes of objects. The deeper the network, the more abstract the learned features, that is, the more semantic information they carry; however, they retain less background, spatial location, and other high-resolution information. Shallow layers, in contrast, learn little semantic information but a great deal of spatial location information. We propose the Both Ends to Centre to Multiple Layers (BEtM) feature fusion method to solve the feature imbalance problem in the neck, and a Multi-level Region of Interest Feature Extraction (MRoIE) layer to solve the gradient imbalance problem. In combination with the Region-based Convolutional Neural Network (R-CNN) framework, our Balanced Feature Fusion (BFF) method offers significantly improved network performance compared with the Faster R-CNN architecture. On the MS COCO 2017 dataset, it achieves an average precision (AP) that is 1.9 points and 3.2 points higher than those of the Feature Pyramid Network (FPN) Faster R-CNN framework and the Generic Region of Interest Extractor (GRoIE) framework, respectively.
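The abstract does not spell out the extraction layer, but the core idea behind a multi-level RoI extractor, pooling each RoI from every pyramid level so that all levels receive gradients for every object, can be sketched as follows. The shapes, strides, and sum-based fusion are assumptions for illustration; `roi_align` is torchvision's standard operator, not the paper's exact MRoIE layer.

```python
import torch
from torchvision.ops import roi_align

def multi_level_roi_features(fpn_feats, strides, boxes, out_size=7):
    """Pool each RoI from *every* pyramid level and sum the results,
    so no level is starved of gradients by per-RoI level assignment
    (a sketch of an MRoIE-style extractor, not the paper's layer)."""
    pooled = 0
    for feat, stride in zip(fpn_feats, strides):
        pooled = pooled + roi_align(feat, [boxes], output_size=out_size,
                                    spatial_scale=1.0 / stride, aligned=True)
    return pooled                     # (num_rois, C, out_size, out_size)

# Toy pyramid: P2-P5 with strides 4, 8, 16, 32 over a 256x256 image.
feats = [torch.randn(1, 256, 256 // s, 256 // s) for s in (4, 8, 16, 32)]
boxes = torch.tensor([[32., 32., 96., 96.]])    # one RoI, (x1, y1, x2, y2)
print(multi_level_roi_features(feats, (4, 8, 16, 32), boxes).shape)
# torch.Size([1, 256, 7, 7])
```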