Yuma KINOSHITA Sayaka SHIOTA Hitoshi KIYA
This paper proposes a novel pseudo multi-exposure image fusion method based on a single image. Multi-exposure image fusion produces images without saturated regions by using photos taken with different exposures. However, it is difficult to capture photos suited for multi-exposure image fusion when shooting dynamic scenes or recording video. In addition, multi-exposure image fusion cannot be applied to existing single-exposure images or videos. The proposed method enables us to produce pseudo multi-exposure images from a single image. To produce the multi-exposure images, the proposed method utilizes the relationship between exposure values and pixel values, which is obtained by assuming that a digital camera has a linear response function. Moreover, it is shown that the use of a local contrast enhancement method allows us to produce pseudo multi-exposure images with higher quality. Most conventional multi-exposure image fusion methods are also applicable to the produced pseudo multi-exposure images. Experimental results show the effectiveness of the proposed method by comparing it with conventional methods.
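As a rough illustration of the exposure-value/pixel-value relationship assumed above, the following sketch scales pixel values by 2^ΔEV under a linear camera response; the function name, exposure offsets, and random test image are illustrative assumptions, and the paper's local contrast enhancement step is not included.

```python
import numpy as np

def pseudo_exposure(img, delta_ev):
    """Simulate an image whose exposure differs by delta_ev stops.

    Assumes a linear camera response, so pixel values scale by 2**delta_ev;
    the result is clipped back to the valid range [0, 1].
    """
    scaled = img.astype(np.float64) * (2.0 ** delta_ev)
    return np.clip(scaled, 0.0, 1.0)

# Build an illustrative pseudo multi-exposure stack from one image.
img = np.random.rand(64, 64, 3)                      # stand-in for a single input photo
stack = [pseudo_exposure(img, ev) for ev in (-2, -1, 0, 1, 2)]
```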
Yuma KINOSHITA Sayaka SHIOTA Masahiro IWAHASHI Hitoshi KIYA
A number of successful tone mapping operators (TMOs) for contrast compression have been proposed due to the need to visualize high dynamic range (HDR) images on low dynamic range (LDR) devices. This paper proposes a novel inverse tone mapping (TM) operation and a new remapping framework built on it. Existing inverse TM operations require either storing parameters calculated during forward TM or performing data-dependent operations. The proposed inverse TM operation makes it possible to estimate HDR images from LDR ones mapped by Reinhard's global operator, without keeping any parameters and without any data-dependent calculation. The proposed remapping framework consists of two TM operations. The first TM operation is carried out by Reinhard's global operator, and the generated LDR image is stored. When an LDR image with different quality is needed, the proposed inverse TM operation is applied to the stored LDR image to generate an HDR one, and the second TM operation is then applied to that HDR image, using an arbitrary TMO, to generate an LDR image with the desired quality. This framework makes it possible not only to visualize an HDR image on low dynamic range devices at low computational cost, but also to efficiently store an HDR image as an LDR one. Simulations show that the proposed inverse TM operation has lower computational cost than conventional ones. Furthermore, it is confirmed that the proposed framework can remap the stored LDR image to another LDR image whose quality is the same as that of an LDR image remapped by a conventional inverse TMO that requires stored parameters.
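For context, a minimal sketch of Reinhard's global operator in its simplified form (without the white-point term) and its algebraic inverse is given below; note that this textbook inverse still requires the key value and the geometric mean, which is exactly what the proposed parameter-free inverse operation is designed to avoid. Function names and the test luminance are illustrative.

```python
import numpy as np

def reinhard_forward(L_w, a=0.18):
    """Simplified Reinhard global operator: world luminance -> display luminance."""
    L_bar = np.exp(np.mean(np.log(L_w + 1e-6)))   # geometric mean (log-average) luminance
    L_s = (a / L_bar) * L_w                       # scaled luminance
    return L_s / (1.0 + L_s), L_bar               # display luminance in [0, 1)

def reinhard_inverse(L_d, a, L_bar):
    """Algebraic inverse of the operator above (still needs a and L_bar)."""
    L_s = L_d / (1.0 - L_d)                       # invert x / (1 + x)
    return (L_bar / a) * L_s                      # undo the scaling

L_w = np.random.rand(64, 64) * 1e3 + 1e-3         # stand-in HDR luminance
L_d, L_bar = reinhard_forward(L_w)
assert np.allclose(reinhard_inverse(L_d, 0.18, L_bar), L_w, rtol=1e-3)
```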
Chihiro GO Yuma KINOSHITA Sayaka SHIOTA Hitoshi KIYA
This paper proposes a novel multi-exposure image fusion (MEF) scheme for single-shot high dynamic range imaging with spatially varying exposures (SVE). Single-shot imaging with SVE enables us not only to produce images without color-saturated regions from a single shot, but also to avoid ghost artifacts in the produced images. However, the number of exposures is generally limited to two, and it is difficult to determine the optimum exposure values before shooting. In the proposed scheme, a scene segmentation method is applied to the input multi-exposure images, and then the luminance of the input images is adjusted according to both the number of scenes and the relationship between exposure values and pixel values. This luminance adjustment allows us to address the above two issues. In this paper, we focus on dual-ISO imaging as one form of single-shot imaging. In an experiment, the proposed scheme is demonstrated to be effective for single-shot high dynamic range imaging with SVE, compared with conventional MEF schemes with exposure compensation.
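A hedged sketch of one possible per-scene luminance adjustment based on the 2^ΔEV relationship is shown below; the mid-grey target, the toy segmentation, and the function name are simplifying assumptions and do not reproduce the paper's exact procedure.

```python
import numpy as np

def adjust_per_scene(lum, labels):
    """Return one luminance image per scene, each scaled toward mid-grey (0.18).

    Uses the linear exposure-value/pixel-value relationship, i.e. scaling by
    2**delta_ev corresponds to shifting the exposure by delta_ev stops.
    """
    adjusted = []
    for k in np.unique(labels):
        mean_k = lum[labels == k].mean()                 # average luminance of scene k
        delta_ev = np.log2(0.18 / max(mean_k, 1e-6))     # stops needed to reach mid-grey
        adjusted.append(np.clip(lum * 2.0 ** delta_ev, 0.0, 1.0))
    return adjusted

lum = np.random.rand(64, 64)                             # stand-in luminance image
labels = (lum > 0.5).astype(int)                         # toy two-scene segmentation
scaled_images = adjust_per_scene(lum, labels)            # inputs for a subsequent MEF step
```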
Yuma KINOSHITA Sayaka SHIOTA Hitoshi KIYA
This paper proposes a new inverse tone mapping operator (TMO) with estimated parameters. The proposed inverse TMO is based on Reinhard's global operator, which is a well-known TMO. Inverse TM operations have two applications: generating an HDR image from an existing LDR one, and reconstructing the original HDR image from a mapped LDR image. The proposed operator can be applied to both. For the latter application, two parameters used in Reinhard's TMO, i.e., the key value α, which controls the brightness of a mapped LDR image, and the geometric mean $\overline{L}_w$ of the original HDR image, are generally required to carry out the Reinhard-based inverse TMO. In this paper, we show that it is possible to estimate $\overline{L}_w$ from α under some conditions, while α can also be estimated from $\overline{L}_w$, and a new inverse TMO with an estimated parameter is thus proposed. Experimental results show that the proposed method outperforms conventional ones in both applications, in terms of high structural similarity and low computational cost.
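For reference, the standard automatic key estimation for Reinhard's operator ties α to the geometric mean luminance; the relation below (assuming the minimum and maximum world luminances are available) suggests why one parameter can be estimated from the other, although it is not necessarily the exact relation derived in the paper.

```latex
% Reinhard's automatic key estimation, relating the key value alpha to the
% geometric mean \overline{L}_w, assuming L_min and L_max are known:
\[
  \alpha = 0.18 \times 4^{\bigl(2\log_2 \overline{L}_w - \log_2 L_{\min} - \log_2 L_{\max}\bigr)
                          / \bigl(\log_2 L_{\max} - \log_2 L_{\min}\bigr)} .
\]
% Since this relation is monotone in \overline{L}_w, it can be inverted to
% estimate \overline{L}_w from a given alpha, and vice versa.
```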
Kouki SEO Chihiro GO Yuma KINOSHITA Hitoshi KIYA
We propose a novel hue-correction scheme for multi-exposure image fusion (MEF). Various MEF methods have been studied to generate higher-quality images. However, unlike in other fields of image processing, few MEF methods consider hue distortion, because there is no reference image with correct hue. In the proposed scheme, we generate an HDR image from the input multi-exposure images and use it as a reference for hue correction. After that, hue distortion in images fused by an MEF method is removed by using the hue information of the HDR image, on the basis of the constant-hue plane in the RGB color space. In simulations, the proposed scheme is demonstrated to be effective in correcting hue distortion caused by conventional MEF methods. Experimental results also show that the proposed scheme can generate high-quality images regardless of the exposure conditions of the input multi-exposure images.
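A minimal sketch of hue replacement on the constant-hue plane is given below: each fused pixel keeps its own maximum and minimum channel values while its maximally saturated color component is taken from the reference. The function names and random test arrays are illustrative, not the paper's implementation.

```python
import numpy as np

def max_saturated_color(rgb):
    """Maximally saturated color of each RGB pixel (per-pixel range-normalized color)."""
    mx = rgb.max(axis=-1, keepdims=True)
    mn = rgb.min(axis=-1, keepdims=True)
    return (rgb - mn) / np.maximum(mx - mn, 1e-6)   # values in [0, 1], max channel = 1

def correct_hue(fused, reference):
    """Keep the fused pixel's max/min channel values; take its hue from `reference`."""
    mx = fused.max(axis=-1, keepdims=True)
    mn = fused.min(axis=-1, keepdims=True)
    return mn + (mx - mn) * max_saturated_color(reference)

fused = np.random.rand(8, 8, 3)        # stand-in fused (MEF) image
reference = np.random.rand(8, 8, 3)    # stand-in hue reference (e.g. generated HDR image)
corrected = correct_hue(fused, reference)
```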
Ayana KAWAMURA Yuma KINOSHITA Takayuki NAKACHI Sayaka SHIOTA Hitoshi KIYA
We propose a privacy-preserving machine learning scheme with encryption-then-compression (EtC) images, where EtC images are images encrypted with a block-based encryption method proposed for EtC systems with JPEG compression. In this paper, a novel property of EtC images is discussed for the first time; previously, EtC images were only shown to be compressible. The novel property allows us to directly apply EtC images to machine learning algorithms that are not specialized for computing on encrypted data. In addition, the proposed scheme is demonstrated to cause no degradation in the performance of some typical machine learning algorithms, including support vector machines with the kernel trick and random forests, under the use of z-score normalization. A number of facial recognition experiments are carried out to confirm the effectiveness of the proposed scheme.
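As a hedged illustration of why such encryption can leave learning performance unchanged, the sketch below shows that a fixed secret permutation of feature dimensions preserves inner products and Euclidean distances, so linear and RBF kernels computed on scrambled vectors match those computed on plain ones; this covers only the permutation component, not the full block-based EtC encryption.

```python
import numpy as np

# A fixed permutation applied consistently to all feature vectors preserves
# dot products and distances, which is what kernel-based learners rely on.
rng = np.random.default_rng(0)
x, y = rng.random(1024), rng.random(1024)     # two plain feature vectors
perm = rng.permutation(1024)                  # secret key: a permutation

assert np.isclose(x @ y, x[perm] @ y[perm])                               # inner product preserved
assert np.isclose(np.linalg.norm(x - y), np.linalg.norm(x[perm] - y[perm]))  # distance preserved
```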
Yuma KINOSHITA Kouki SEO Artit VISAVAKITCHAROEN Hitoshi KIYA
We propose a novel hue-preserving tone mapping scheme. Various tone mapping operations have been studied, but there are very few works on the color distortion caused by tone mapping. First, we point out that LDR images produced from HDR ones by conventional tone mapping operators (TMOs) have distorted hue values due to clipping and rounding in quantization. Next, we propose a novel method that allows LDR images to have the same maximally saturated color values as the corresponding HDR ones. LDR images generated by the proposed method have smaller hue degradation than those generated by conventional TMOs. Moreover, the proposed method is applicable to any TMO. In an experiment, the proposed method is demonstrated not only to produce images with small hue degradation but also to maintain well-mapped luminance, in terms of three objective metrics: TMQI, hue value in CIEDE2000, and the maximally saturated color on the constant-hue plane in the RGB color space.
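The constant-hue-plane representation underlying such hue preservation can be sketched as follows; this is the standard decomposition, stated here as background rather than the paper's exact formulation.

```latex
% Constant-hue-plane decomposition of an RGB pixel x in [0,1]^3 in terms of
% white w = (1,1,1)^T and its maximally saturated color c(x):
\[
  x = \min(x)\,w + \bigl(\max(x) - \min(x)\bigr)\,c(x), \qquad
  c(x) = \frac{x - \min(x)\,w}{\max(x) - \min(x)},
\]
% so a tone-mapped pixel \hat{x} keeps the hue of x whenever c(\hat{x}) = c(x).
```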
Hitoshi KIYA Ryota IIJIMA Aprilpyone MAUNGMAUNG Yuma KINOSHITA
In this paper, we propose a combined use of transformed images and vision transformer (ViT) models transformed with a secret key. We show for the first time that models trained with plain images can be directly transformed, on the basis of the ViT architecture, into models equivalent to those trained with encrypted images, and that the performance of the transformed models is the same as that of models trained with plain images when test images encrypted with the key are used. In addition, the proposed scheme does not require any specially prepared data for training models or any network modification, so it also allows us to easily update the secret key. In an experiment, the effectiveness of the proposed scheme is evaluated in terms of performance degradation and model protection in an image classification task on the CIFAR-10 dataset.
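A hedged sketch of the intuition behind this key-based transformation is shown below: when test images are encrypted by permuting their patches with a secret key, permuting the rows of the position embedding with the same key keeps every patch paired with its original positional information, so the token set fed to the (permutation-equivariant) transformer is unchanged. Shapes and names are illustrative and do not reproduce the paper's implementation.

```python
import numpy as np

num_patches, dim = 196, 768                       # e.g. 14x14 patches of a ViT input
rng = np.random.default_rng(42)
key = rng.permutation(num_patches)                # secret key: a patch permutation

patches = rng.random((num_patches, dim))          # embedded patches of a plain image
pos_embed = rng.random((num_patches, dim))        # learned position embedding

plain_tokens = patches + pos_embed                # plain model applied to a plain image
enc_tokens = patches[key] + pos_embed[key]        # transformed model, encrypted image

# The token sequences differ only by a consistent reordering, so self-attention
# (which treats tokens as a set) produces equivalent results.
assert np.allclose(enc_tokens, plain_tokens[key])
```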