
Keyword Search Results

[Keyword] image fusion (17 hits)

Hits 1-17 of 17
  • Prohibited Item Detection Within X-Ray Security Inspection Images Based on an Improved Cascade Network Open Access

    Qingqi ZHANG  Xiaoan BAO  Ren WU  Mitsuru NAKATA  Qi-Wei GE  

     
    PAPER

    Publicized: 2024/01/16
    Vol: E107-A No:5
    Page(s): 813-824

    Automatic detection of prohibited items is vital in helping security staff work more efficiently while improving public safety. However, prohibited item detection within X-ray security inspection images is limited by various factors, including the imbalanced distribution of categories, the diversity of prohibited item scales, and overlap between items. In this paper, we propose to leverage the Poisson blending algorithm with the Canny edge operator to maximally alleviate the imbalanced category distribution in the X-ray image dataset. Based on this, we improve the cascade network to deal with the other two difficulties. To address the scale diversity of prohibited items, we propose the Re-BiFPN feature fusion method, which includes a coordinate attention atrous spatial pyramid pooling (CA-ASPP) module and a recursive connection. The CA-ASPP module can implicitly extract direction-aware and position-aware information from the feature map. The recursive connection feeds the multi-scale feature map processed by the CA-ASPP module back to the bottom-up backbone layers for further multi-scale feature extraction. In addition, a Rep-CIoU loss function is designed to address the overlapping problem in X-ray images. Extensive experimental results demonstrate that our method can successfully identify ten types of prohibited items, such as Knives, Scissors, and Pressure, and achieves 83.4% mAP, 3.8% higher than the original cascade network. Moreover, our method outperforms other mainstream methods by a significant margin.
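
    As an illustration of the data-balancing step, the sketch below composites a prohibited-item patch into a background X-ray image with OpenCV's Poisson (seamless) cloning, using a Canny-derived mask. The file names, thresholds, and paste location are assumptions, and this is not the authors' exact pipeline.

```python
import cv2
import numpy as np

# Hypothetical inputs: a cropped prohibited-item patch and a background X-ray image.
item = cv2.imread("knife_patch.png")          # assumed file name
background = cv2.imread("xray_background.png")

# Build a rough mask from Canny edges so the blend follows the item outline.
gray = cv2.cvtColor(item, cv2.COLOR_BGR2GRAY)
edges = cv2.Canny(gray, 50, 150)              # thresholds chosen arbitrarily
mask = cv2.dilate(edges, np.ones((5, 5), np.uint8), iterations=3)
mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, np.ones((15, 15), np.uint8))

# Poisson blending places the item at a chosen position in the background.
h, w = background.shape[:2]
center = (w // 2, h // 2)                     # paste location (assumption)
augmented = cv2.seamlessClone(item, background, mask, center, cv2.NORMAL_CLONE)
cv2.imwrite("augmented_sample.png", augmented)
```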

  • Infrared and Visible Image Fusion via Hybrid Variational Model Open Access

    Zhengwei XIA  Yun LIU  Xiaoyun WANG  Feiyun ZHANG  Rui CHEN  Weiwei JIANG  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2023/12/11
    Vol: E107-D No:4
    Page(s): 569-573

    Infrared and visible image fusion combines thermal radiation information and visible textures to provide a high-quality fused image. In this letter, we propose a hybrid variational fusion model to this end. Specifically, an ℓ0 term is adopted to preserve the highlighted targets with salient gradient variation in the infrared image, an ℓ1 term is used to suppress noise in the fused image, and an ℓ2 term is employed to keep the textures of the visible image. Experimental results demonstrate the superiority of the proposed variational model; our results have sharper textures with less noise.
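
    One plausible shape of such a hybrid energy, written here only as an illustration (the letter's exact terms and weights are not reproduced), with F the fused image, I_ir and I_vis the infrared and visible inputs, and λ_i trade-off weights:

```latex
E(F) \;=\; \lambda_0 \,\bigl\| \nabla F - \nabla I_{\mathrm{ir}} \bigr\|_0
      \;+\; \lambda_1 \,\bigl\| \nabla F \bigr\|_1
      \;+\; \lambda_2 \,\bigl\| \nabla F - \nabla I_{\mathrm{vis}} \bigr\|_2^{2}
```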

  • Fusion-Based Edge and Color Recovery Using Weighted Near-Infrared Image and Color Transmission Maps for Robust Haze Removal

    Onhi KATO  Akira KUBOTA  

     
    PAPER

    Publicized: 2023/05/23
    Vol: E106-D No:10
    Page(s): 1661-1672

    Various haze removal methods based on the atmospheric scattering model have been presented in recent years. Most methods target strong-haze images in which light is scattered equally in all color channels. This paper presents a haze removal method that uses near-infrared (NIR) images for relatively weak haze. To recover lost edges, the presented method first extracts edges from an appropriately weighted NIR image and fuses them with the color image. By introducing a wavelength-dependent scattering model, our method then estimates the transmission map for each color channel and recovers color more naturally from the edge-recovered image. Finally, the edge-recovered and color-recovered images are blended. In this blending process, regions with high lightness, such as sky and clouds, where unnatural color shifts are likely to occur, are effectively estimated, and an optimal weighting map is obtained. Qualitative and quantitative evaluations using 59 pairs of color and NIR images demonstrate that our method recovers edges and colors more naturally in weak-haze images than conventional methods.
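
    The per-channel recovery can be sketched with the standard atmospheric scattering model I_c(x) = J_c(x) t_c(x) + A_c (1 − t_c(x)). The channel-dependent transmission below is modeled, for illustration only, as a power of a base transmission; the exponents and variable names are assumptions, not the paper's estimates.

```python
import numpy as np

def recover_scene(I, A, t_base, gamma=(1.0, 0.95, 0.85), t_min=0.1):
    """Invert I_c = J_c * t_c + A_c * (1 - t_c) channel by channel.

    I:      hazy image, float array of shape (H, W, 3) in [0, 1]
    A:      atmospheric light per channel, shape (3,)
    t_base: base transmission map, shape (H, W)
    gamma:  assumed wavelength-dependent exponents per channel
    """
    J = np.empty_like(I)
    for c in range(3):
        t_c = np.clip(t_base ** gamma[c], t_min, 1.0)   # channel-wise transmission
        J[..., c] = (I[..., c] - A[c]) / t_c + A[c]
    return np.clip(J, 0.0, 1.0)
```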

  • Single Image Dehazing Based on Sky Area Segmentation and Image Fusion

    Xiangyang CHEN  Haiyue LI  Chuan LI  Weiwei JIANG  Hao ZHOU  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2023/04/24
    Vol: E106-D No:7
    Page(s): 1249-1253

    Since dark channel prior (DCP)-based dehazing is ineffective in the sky area and tends to make the image too dark and color-distorted, we propose a novel dehazing method based on sky area segmentation and image fusion. We first segment the image according to the characteristics of its sky and non-sky areas, then estimate the atmospheric light and transmission map according to the DCP and correct them, and finally fuse the result with the original image after contrast-adaptive histogram equalization to enhance image detail. Experiments illustrate that our method performs well in dehazing and can reduce image distortion.
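
    As a rough sketch of two of the building blocks (not the letter's full segmentation-and-correction pipeline), the dark channel and a contrast-equalized image could be computed as follows; the window size and clip limits are arbitrary choices.

```python
import cv2
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over color channels, then a min filter (erosion)."""
    min_rgb = np.min(img, axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def clahe_enhance(img_bgr, clip=2.0, tiles=(8, 8)):
    """Contrast-limited adaptive histogram equalization on the L channel."""
    lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    lab = cv2.merge((clahe.apply(l), a, b))
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)
```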

  • Image Adjustment for Multi-Exposure Images Based on Convolutional Neural Networks

    Isana FUNAHASHI  Taichi YOSHIDA  Xi ZHANG  Masahiro IWAHASHI  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2021/10/21
    Vol: E105-D No:1
    Page(s): 123-133

    In this paper, we propose an image adjustment method for multi-exposure images based on convolutional neural networks (CNNs). We refer to image regions that lack information due to saturation or object motion in multi-exposure images as lacking areas. Lacking areas cause ghosting artifacts in images fused from multi-exposure sets by conventional fusion methods, which attempt to suppress this artifact. To avoid this problem, the proposed method estimates the information of lacking areas via adaptive inpainting. The proposed CNN consists of three networks: a warp-and-refinement network, a detection network, and an inpainting network. The second and third networks detect lacking areas and estimate their pixel values, respectively. In the experiments, a simple fusion method combined with the proposed method outperforms state-of-the-art fusion methods in peak signal-to-noise ratio. Moreover, when the proposed method is applied as pre-processing for various fusion methods, the results show clearly reduced artifacts.
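
    A minimal skeleton of such a three-network arrangement is sketched below; the layer sizes, channel counts, and the way the sub-networks are wired together are made up for illustration and do not reproduce the paper's architecture.

```python
import torch
import torch.nn as nn

def conv_block(cin, cout):
    return nn.Sequential(nn.Conv2d(cin, cout, 3, padding=1), nn.ReLU(inplace=True))

class MultiExposureAdjust(nn.Module):
    """Sketch: warp/refine a source exposure toward a reference, detect lacking
    areas, and inpaint them. All sizes here are illustrative only."""
    def __init__(self):
        super().__init__()
        self.warp_refine = nn.Sequential(conv_block(6, 32), conv_block(32, 32),
                                         nn.Conv2d(32, 3, 3, padding=1))
        self.detect = nn.Sequential(conv_block(3, 16), nn.Conv2d(16, 1, 3, padding=1))
        self.inpaint = nn.Sequential(conv_block(4, 32), conv_block(32, 32),
                                     nn.Conv2d(32, 3, 3, padding=1))

    def forward(self, src, ref):
        warped = self.warp_refine(torch.cat([src, ref], dim=1))
        mask = torch.sigmoid(self.detect(warped))          # 1 = lacking area
        filled = self.inpaint(torch.cat([warped, mask], dim=1))
        return mask * filled + (1.0 - mask) * warped
```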

  • Semantic Guided Infrared and Visible Image Fusion

    Wei WU  Dazhi ZHANG  Jilei HOU  Yu WANG  Tao LU  Huabing ZHOU  

     
    LETTER-Image

    Publicized: 2021/06/10
    Vol: E104-A No:12
    Page(s): 1733-1738

    In this letter, we propose a semantic guided infrared and visible image fusion method, which trains a network to fuse different semantic objects with different fusion weights according to their own characteristics. First, we design appropriate fusion weights for each semantic object instead of the whole image. Second, we employ semantic segmentation to obtain the semantic region of each object and generate weight maps for the infrared and visible images from the pre-designed fusion weights. Third, we feed the weight maps into the loss function to guide the image fusion process. The trained fusion network can generate fused images with better visual effect and more comprehensive scene representation. Moreover, we can enhance the modal features of various semantic objects, benefiting subsequent tasks and applications. Experimental results demonstrate that our method outperforms the state-of-the-art in terms of both visual effect and quantitative metrics.
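
    The idea of feeding per-object weight maps into the loss can be illustrated with a simple pixel-wise weighted fidelity term; this is a sketch only, not the paper's loss, and the weight maps w_ir and w_vis are assumed to come from the semantic segmentation.

```python
import torch

def weighted_fusion_loss(fused, ir, vis, w_ir, w_vis):
    """Pixel-wise weighted L1 fidelity toward each modality.

    w_ir and w_vis are weight maps built from the semantic segmentation,
    e.g. a higher w_ir over heat-emitting objects and a higher w_vis over
    background (the actual per-class weights are the paper's design choices).
    """
    return (w_ir * (fused - ir).abs() + w_vis * (fused - vis).abs()).mean()
```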

  • Hue-Correction Scheme Considering Non-Linear Camera Response for Multi-Exposure Image Fusion

    Kouki SEO  Chihiro GO  Yuma KINOSHITA  Hitoshi KIYA  

     
    PAPER-Image

    Vol: E103-A No:12
    Page(s): 1562-1570

    We propose a novel hue-correction scheme for multi-exposure image fusion (MEF). Various MEF methods have been studied to generate higher-quality images. However, unlike in other fields of image processing, few MEF methods consider hue distortion, owing to the lack of a reference image with correct hue. In the proposed scheme, we generate an HDR image from the input multi-exposure images as a reference for hue correction. After that, hue distortion in images fused by an MEF method is removed by using the hue information of the HDR image, on the basis of the constant-hue plane in the RGB color space. In simulations, the proposed scheme is demonstrated to be effective in correcting hue distortion caused by conventional MEF methods. Experimental results also show that the proposed scheme can generate high-quality images regardless of the exposure conditions of the input multi-exposure images.
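
    One simple way to impose a reference hue within the RGB constant-hue-plane picture is to keep the fused pixel's maximum and minimum components and reset the middle component so that its relative position matches the reference. The sketch below is this simplified idea only, not the paper's full scheme; the function name and the epsilon guard are assumptions.

```python
import numpy as np

def match_hue(fused, ref, eps=1e-6):
    """For each pixel, keep max/min of `fused` and move the middle channel so
    that (mid - min) / (max - min) equals that of the reference pixel.
    Inputs are float RGB arrays of shape (H, W, 3) in [0, 1]."""
    out = fused.copy()
    f_max, f_min = fused.max(axis=2), fused.min(axis=2)
    r_max, r_min = ref.max(axis=2), ref.min(axis=2)
    ratio = (np.median(ref, axis=2) - r_min) / np.maximum(r_max - r_min, eps)
    mid_idx = np.argsort(fused, axis=2)[..., 1]        # index of the middle channel
    new_mid = f_min + ratio * (f_max - f_min)
    np.put_along_axis(out, mid_idx[..., None], new_mid[..., None], axis=2)
    return out
```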

  • An Image Fusion Scheme for Single-Shot High Dynamic Range Imaging with Spatially Varying Exposures

    Chihiro GO  Yuma KINOSHITA  Sayaka SHIOTA  Hitoshi KIYA  

     
    PAPER-Image

    Vol: E102-A No:12
    Page(s): 1856-1864

    This paper proposes a novel multi-exposure image fusion (MEF) scheme for single-shot high dynamic range imaging with spatially varying exposures (SVE). Single-shot imaging with SVE enables us not only to produce images without color-saturated regions from a single shot, but also to avoid ghost artifacts in the produced images. However, the number of exposures is generally limited to two, and it is difficult to decide the optimum exposure values before shooting. In the proposed scheme, a scene segmentation method is applied to the input multi-exposure images, and the luminance of the input images is then adjusted according to both the number of scenes and the relationship between exposure values and pixel values. The proposed luminance adjustment allows us to mitigate these two issues. In this paper, we focus on dual-ISO imaging as one form of single-shot imaging. In an experiment, the proposed scheme is demonstrated to be effective for single-shot high dynamic range imaging with SVE, compared with conventional MEF schemes with exposure compensation.
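
    The luminance-adjustment idea (scale each input so that every scene region is properly exposed, assuming a linear camera response) can be sketched as below. The 1-D k-means on log-luminance, the 0.18 mid-gray target, and the function name are all assumptions standing in for the paper's scene segmentation and adjustment rule.

```python
import numpy as np

def adjust_by_scene(luminance, n_scenes=3, target=0.18, eps=1e-6):
    """Cluster a luminance map into `n_scenes` brightness groups (a stand-in
    for scene segmentation) and return one scaled copy per scene, each
    exposing that scene around mid-gray under a linear-response assumption."""
    logl = np.log(luminance + eps)
    # Simple 1-D k-means on log-luminance.
    centers = np.quantile(logl, np.linspace(0.1, 0.9, n_scenes))
    for _ in range(10):
        labels = np.argmin(np.abs(logl[..., None] - centers), axis=-1)
        centers = np.array([logl[labels == k].mean() if np.any(labels == k)
                            else centers[k] for k in range(n_scenes)])
    adjusted = []
    for k in range(n_scenes):
        sel = labels == k
        if not sel.any():
            continue
        g_mean = np.exp(logl[sel].mean())          # geometric mean of scene k
        adjusted.append(np.clip(luminance * (target / g_mean), 0.0, 1.0))
    return adjusted, labels
```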

  • Cauchy Aperture and Perfect Reconstruction Filters for Extending Depth-of-Field from Focal Stack Open Access

    Akira KUBOTA  Kazuya KODAMA  Asami ITO  

     
    PAPER

    Publicized: 2019/08/16
    Vol: E102-D No:11
    Page(s): 2093-2100

    A pupil function of the aperture in image capturing systems is theoretically derived such that one can perfectly reconstruct an all-in-focus image through linear filtering of the focal stack. The perfect reconstruction filters are also designed based on the derived pupil function. The designed filters are space-invariant; hence the presented method does not require region segmentation. Simulation results using synthetic scenes show the effectiveness of the derived pupil function and the filters.
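
    The "all-in-focus image by linear filtering of the focal stack" structure can be sketched in the Fourier domain. In the sketch below the defocus OTFs are assumed Gaussian and the filters are a simple least-squares inversion satisfying sum_k H_k G_k = 1; these are not the paper's Cauchy-aperture perfect-reconstruction filters.

```python
import numpy as np

def all_in_focus(stack, sigmas, eps=1e-3):
    """stack: array (K, H, W) of focal-stack slices; sigmas: assumed Gaussian
    blur radius of each slice. Reconstructs sum_k H_k * F_k with
    H_k = G_k / sum_j G_j^2, so that sum_k H_k G_k = 1 (up to eps)."""
    K, H, W = stack.shape
    fy = np.fft.fftfreq(H)[:, None]
    fx = np.fft.fftfreq(W)[None, :]
    r2 = fx ** 2 + fy ** 2
    G = np.stack([np.exp(-2 * (np.pi * s) ** 2 * r2) for s in sigmas])  # Gaussian OTFs
    Hf = G / (np.sum(G ** 2, axis=0, keepdims=True) + eps)
    F = np.fft.fft2(stack, axes=(-2, -1))
    return np.real(np.fft.ifft2(np.sum(Hf * F, axis=0)))
```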

  • A Pseudo Multi-Exposure Fusion Method Using Single Image

    Yuma KINOSHITA  Sayaka SHIOTA  Hitoshi KIYA  

     
    PAPER-Image

    Vol: E101-A No:11
    Page(s): 1806-1814

    This paper proposes a novel pseudo multi-exposure image fusion method based on a single image. Multi-exposure image fusion is used to produce images without saturated regions by using photos with different exposures. However, it is difficult to take photos suited for multi-exposure image fusion when shooting dynamic scenes or recording video. In addition, multi-exposure image fusion cannot be applied to existing single-exposure images or videos. The proposed method enables us to produce pseudo multi-exposure images from a single image. To produce them, the proposed method utilizes the relationship between exposure values and pixel values, which is obtained by assuming that the digital camera has a linear response function. Moreover, it is shown that the use of a local contrast enhancement method allows us to produce pseudo multi-exposure images with higher quality. Most conventional multi-exposure image fusion methods are also applicable to the proposed pseudo multi-exposure images. Experimental results show the effectiveness of the proposed method in comparison with conventional ones.
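
    Under the stated linear-response assumption, pseudo exposures can be generated simply by scaling pixel values by 2^EV and clipping, optionally followed by local contrast enhancement. The EV set and CLAHE parameters below are illustrative, not the paper's settings.

```python
import cv2
import numpy as np

def pseudo_exposures(img, evs=(-2, -1, 0, 1, 2)):
    """img: float BGR array in [0, 1]. Returns one pseudo exposure per EV,
    assuming a linear camera response (pixel value proportional to exposure)."""
    stack = [np.clip(img * (2.0 ** ev), 0.0, 1.0) for ev in evs]
    # Optional local contrast enhancement of the luminance (illustrative).
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    enhanced = []
    for s in stack:
        lab = cv2.cvtColor((s * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)
        l, a, b = cv2.split(lab)
        lab = cv2.merge((clahe.apply(l), a, b))
        enhanced.append(cv2.cvtColor(lab, cv2.COLOR_LAB2BGR).astype(np.float32) / 255)
    return enhanced
```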

  • Multi-Focus Image Fusion Based on Multiple Directional LOTs

    Zhiyu CHEN  Shogo MURAMATSU  

     
    LETTER-Image

    Vol: E98-A No:11
    Page(s): 2360-2365

    This letter proposes an image fusion method which adopts a union of multiple directional lapped orthogonal transforms (DirLOTs). DirLOTs are used to generate symmetric orthogonal discrete wavelet transforms and then to construct a union of unitary transforms as a redundant dictionary with a multiple-directional property. The multiple DirLOTs can overcome a disadvantage of separable wavelets in representing images that contain slanted textures and edges. We analyze the characteristics of local luminance contrast and propose a fusion rule based on the interscale relation of wavelet coefficients. Relying on the above, a novel image fusion method is proposed. Experimental results show that the proposed method significantly improves fusion performance over that of conventional discrete wavelet transforms.
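
    A simplified version of coefficient-domain fusion is sketched below, using a standard separable DWT from PyWavelets in place of the DirLOT dictionary and a plain maximum-absolute-value rule instead of the letter's interscale rule; the wavelet and level are arbitrary.

```python
import numpy as np
import pywt

def fuse_multifocus(img_a, img_b, wavelet="db2", level=3):
    """Fuse two grayscale images of equal size: average the approximation band,
    take the larger-magnitude coefficient in each detail band."""
    ca = pywt.wavedec2(img_a, wavelet, level=level)
    cb = pywt.wavedec2(img_b, wavelet, level=level)
    fused = [(ca[0] + cb[0]) / 2.0]
    for (ha, va, da), (hb, vb, db) in zip(ca[1:], cb[1:]):
        fused.append(tuple(np.where(np.abs(x) >= np.abs(y), x, y)
                           for x, y in ((ha, hb), (va, vb), (da, db))))
    return pywt.waverec2(fused, wavelet)
```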

  • Multi-Modality Image Fusion Using the Nonsubsampled Contourlet Transform

    Cuiyin LIU  Shu-qing CHEN  Qiao FU  

     
    PAPER-Image Processing and Video Processing

    Vol: E96-D No:10
    Page(s): 2215-2223

    In this paper, an efficient multi-modal medical image fusion approach is proposed based on local feature contrast and a bilateral sharpness criterion in the nonsubsampled contourlet transform (NSCT) domain. Compared with other multiscale decomposition tools, the nonsubsampled contourlet transform not only eliminates the "block effect" and the "pseudo effect", but also represents the source image in multiple directions and captures its geometric structure in the transform domain. When used in a fusion algorithm, these advantages of the NSCT help attain more visual information in the fused image and improve fusion quality. At the same time, to improve the robustness of the fusion algorithm and the quality of the fused image, two selection rules are considered. First, a new bilateral sharpness criterion, which exploits both strength and phase coherence, is proposed to select the lowpass coefficients. Second, a modified SML (sum-modified Laplacian) is introduced into the local contrast measurement, which suits the human visual system and extracts more useful detail from the source images. Experimental results demonstrate that the proposed method performs better than conventional fusion algorithms in terms of both visual quality and objective evaluation criteria.
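
    The sum-modified-Laplacian focus measure underlying the local contrast rule can be sketched as follows; the window size and the unit step are assumptions, and this is the standard SML rather than the paper's modified variant.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def sum_modified_laplacian(img, window=5):
    """Modified Laplacian |2I - I(x-1,y) - I(x+1,y)| + |2I - I(x,y-1) - I(x,y+1)|,
    accumulated over a local window (here via a mean filter times the window area)."""
    ml = (np.abs(2 * img - np.roll(img, 1, axis=1) - np.roll(img, -1, axis=1)) +
          np.abs(2 * img - np.roll(img, 1, axis=0) - np.roll(img, -1, axis=0)))
    return uniform_filter(ml, size=window) * window * window
```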

  • Multi-Layer Virtual Slide Scanning System with Multi-Focus Image Fusion for Cytopathology and Image Diagnosis Open Access

    Hiroyuki NOZAKA  Tomisato MIURA  Zhongxi ZHENG  

     
    PAPER-Diagnostic Systems

    Vol: E96-D No:4
    Page(s): 856-863

    Objective: Virtual slides are high-magnification whole-slide digital images of histopathological tissue sections. Existing virtual slide systems are optimized for scanning flat, smooth slides such as paraffin-embedded histopathological tissue sections, but are unsuitable for scanning slides with irregular surfaces such as cytological smears. This study aims to develop a virtual slide system suitable for cytopathology slide scanning and to evaluate the effectiveness of multi-focus image fusion (MF) in cytopathological diagnosis. Study Design: We developed a multi-layer virtual slide scanning system with MF technology. Tumors were collected from 21 patients diagnosed with primary breast cancer. After surgical extraction, smear slides for cytopathological diagnosis were prepared by the conventional stamp method, the fine needle aspiration (FNA) method, and the tissue washing method. The stamp slides were fixed in 95% ethanol; the FNA and tissue washing samples were fixed in CytoRich RED Preservative Fluid, a liquid-based cytology (LBC) medium. The slides were stained with Papanicolaou stain and scanned by the virtual slide system. To evaluate the suitability of MF technology for cytopathological diagnosis, we compared single-focus (SF) virtual slides with MF virtual slides. Cytopathological evaluation was carried out by five pathologists and cytotechnologists. Results: The virtual slide system with MF provided better results than the conventional SF virtual slide system with regard to viewing inside cell clusters and image file size. Liquid-based cytology was more suitable than the stamp method for virtual slides with MF. Conclusion: The virtual slide system with MF is a useful technique for digitization in cytopathology, and the technology could be applied to tele-cytology and e-learning via virtual slides.
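
    The multi-focus fusion step itself can be illustrated by a generic focus-stacking rule: per pixel, pick the focal layer with the largest local Laplacian energy. This is a sketch of the general technique, not the scanner's algorithm; the blur kernel size is an assumption.

```python
import cv2
import numpy as np

def focus_stack(layers, blur=5):
    """layers: list of grayscale images (same size) captured at different focal
    planes. Returns an all-in-focus composite by per-pixel sharpness selection."""
    stack = np.stack(layers, axis=0).astype(np.float32)
    sharpness = np.stack([
        cv2.GaussianBlur(np.abs(cv2.Laplacian(l, cv2.CV_32F)), (blur, blur), 0)
        for l in stack
    ], axis=0)
    best = np.argmax(sharpness, axis=0)                 # index of sharpest layer
    return np.take_along_axis(stack, best[None, ...], axis=0)[0]
```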

  • An Illumination Invariant Bimodal Method Employing Discriminant Features for Face Recognition

    JiYing WU  QiuQi RUAN  Gaoyun AN  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E92-D No:2
    Page(s): 365-368

    A novel bimodal method for face recognition under low-light conditions is proposed. It fuses an enhanced gray-level image and an illumination-invariant geometric image at the feature level. To further improve recognition performance under large variations in attributes such as pose and expression, discriminant features are extracted from the source images using a wavelet-transform-based method. The features are adaptively fused to reconstruct the final face sample. FLD is then used to generate a supervised discriminant space for the classification task. Experiments show that the bimodal method outperforms conventional methods under complex conditions.
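
    Feature-level fusion followed by Fisher discriminant classification can be sketched with PyWavelets and scikit-learn; the equal-weight concatenation below stands in for the letter's adaptive fusion, and the wavelet, level, and function names are assumptions.

```python
import numpy as np
import pywt
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def wavelet_features(img, wavelet="haar"):
    """Approximation coefficients of a 2-level DWT, flattened as a feature vector."""
    coeffs = pywt.wavedec2(img, wavelet, level=2)
    return coeffs[0].ravel()

def fuse_and_classify(gray_imgs, geom_imgs, labels):
    """gray_imgs / geom_imgs: lists of aligned, equally sized enhanced-gray and
    illumination-invariant geometric images; labels: identity per sample."""
    feats = np.array([np.concatenate([wavelet_features(g), wavelet_features(m)])
                      for g, m in zip(gray_imgs, geom_imgs)])
    return LinearDiscriminantAnalysis().fit(feats, labels)
```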

  • A New Fusion Based Blind Logo-Watermarking Algorithm

    Gui XIE  Hong SHEN  

     
    PAPER-Application Information Security

    Vol: E89-D No:3
    Page(s): 1173-1180

    We propose a novel blind watermarking algorithm, called XFuseMark, which, based on multiresolution fusion principles, hides a small, visually meaningful grayscale logo in a host image instead of a random-noise-like sequence, and extracts a recognizable version of the embedded logo at the receiving end even without reference to the original host data. XFuseMark is not only secure, in that only authorized users holding a private key are able to conduct the logo extraction operation, but also robust against noise addition and image compression. Experiments verify the practical performance of XFuseMark.
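
    A generic wavelet-domain logo embedding (not XFuseMark itself) can be sketched as adding a scaled logo to a detail subband of the host; the strength alpha, the choice of subband, and the energy scaling are assumptions.

```python
import cv2
import numpy as np
import pywt

def embed_logo(host, logo, alpha=0.05, wavelet="haar"):
    """host: grayscale float image; logo: small grayscale logo in [0, 1].
    Adds the resized logo into the horizontal detail subband with strength alpha."""
    cA, (cH, cV, cD) = pywt.dwt2(host, wavelet)
    logo_r = cv2.resize(logo.astype(np.float32), (cH.shape[1], cH.shape[0]))
    cH_marked = cH + alpha * logo_r * np.abs(cH).mean()   # scale to subband energy
    return pywt.idwt2((cA, (cH_marked, cV, cD)), wavelet)
```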

  • Automated Detection and Removal of Clouds and Their Shadows from Landsat TM Images

    Bin WANG  Atsuo ONO  Kanako MURAMATSU  Noboru FUJIWARA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

    Vol: E82-D No:2
    Page(s): 453-460

    In this paper, a scheme is proposed to remove clouds and their shadows from remotely sensed Landsat TM images over land. The scheme uses image fusion to automatically recognize and remove contamination by clouds and their shadows, and to integrate complementary information from multitemporal images into the composite image. Cloud regions can be detected on the basis of their reflectance differences from other regions. Based on the fact that shadows smooth the brightness changes of the ground, shadow regions can be detected by means of the wavelet transform. Further, an area-based detection rule is developed, and the multispectral characteristics of Landsat TM images are used to alleviate the computational load. Because the wavelet transform is adopted for the image fusion, artifacts are invisible in the fused images. Finally, the performance of the proposed scheme is demonstrated experimentally.
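
    The multitemporal compositing step can be illustrated with a simple mask-based replacement from a second acquisition date; the reflectance threshold is an arbitrary placeholder, not the paper's detection rule, and the wavelet-domain blending is omitted.

```python
import numpy as np

def composite_cloud_free(img_t1, img_t2, band_blue_t1, threshold=0.3):
    """Replace pixels flagged as cloud in the date-1 image with date-2 pixels.

    img_t1, img_t2: co-registered multiband arrays of shape (H, W, B)
    band_blue_t1:   a reflectance band used for crude cloud detection
    """
    cloud_mask = band_blue_t1 > threshold          # bright pixels flagged as cloud
    out = img_t1.copy()
    out[cloud_mask] = img_t2[cloud_mask]
    return out, cloud_mask
```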

  • Monochromatic Visualization of Multimodal Images by Projection Pursuit

    Seiji HOTTA  Kiichi URAHAMA  

     
    LETTER-Image Theory

    Vol: E81-A No:12
    Page(s): 2715-2718

    A method for visualizing multimodal images as a single monochromatic image is presented, based on a projection pursuit formulation of the inverse process of anisotropic diffusion, an image restoration method that enhances contrast at edges. Extending the projection from a linear one to nonlinear sigmoidal functions enhances the contrast further. A deterministic annealing technique is also incorporated into the optimization process to improve the contrast enhancement ability of the projection. An application of this method to a pair of brain MRI images demonstrates its promising performance in visualizing tissues.
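
    As a crude linear stand-in for the projection pursuit (ignoring the sigmoidal nonlinearity and the annealing), the modalities can be projected onto the first principal component of the per-pixel intensity vectors; the normalization below is an assumption.

```python
import numpy as np

def pca_monochrome(modalities):
    """modalities: list of aligned grayscale images (same size). Projects each
    pixel's vector of modality values onto the leading principal component."""
    X = np.stack([m.ravel().astype(np.float64) for m in modalities], axis=1)
    X -= X.mean(axis=0)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    proj = X @ vt[0]                                     # first principal direction
    proj = (proj - proj.min()) / (np.ptp(proj) + 1e-9)   # normalize to [0, 1]
    return proj.reshape(modalities[0].shape)
```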