
Keyword Search Result

[Keyword] illumination (50 hits)

Showing 1-20 of 50 hits

  • A Night Image Enhancement Algorithm Based on MDIFE-Net Curve Estimation

    Jing ZHANG  Dan LI  Hong-an LI  Xuewen LI  Lizhi ZHANG  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2022/11/04
    Vol: E106-D No:2
    Page(s): 229-239

    To address the low quality of night images, such as low brightness, poor contrast, noise interference, and color imbalance, a night image enhancement algorithm based on MDIFE-Net curve estimation is presented. The algorithm consists of three parts. First, we design an illumination estimation curve (IEC) that adjusts pixel levels in the low-illumination image domain through a non-linear fitting function, maps them to the enhanced image domain, and effectively compensates for the loss of illumination. Second, we improve DCE-Net by replacing the original ReLU activation function with the smoother Mish activation function so that the parameters can be updated more effectively. Finally, an illumination estimation loss function that combines image attributes with fidelity is designed to drive the no-reference enhancement, preserving more image detail while enhancing the night image. Experimental results show that our method not only improves image contrast but also makes target details more prominent, yielding better visual quality. Compared with four representative low-illumination image enhancement algorithms, our method achieves better NIQE and STD scores, verifying its feasibility and validity, and ablation experiments confirm the rationality and necessity of each component.
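    A minimal sketch of the curve-estimation idea follows. It applies the iterative quadratic enhancement curve used by the DCE-Net family this work builds on; it is not the authors' MDIFE-Net, and the constant alpha map below is a hand-picked stand-in for the per-pixel parameters a network would predict.

```python
# Generic curve-based low-light enhancement in the DCE-Net family (sketch).
import numpy as np

def apply_enhancement_curve(image, alpha, iterations=8):
    """Iteratively apply the quadratic curve LE(x) = x + a*x*(1 - x).

    image : float array in [0, 1], shape (H, W, 3)
    alpha : per-pixel curve parameters in [-1, 1], same shape as image
    """
    x = image.astype(np.float64)
    for _ in range(iterations):
        x = x + alpha * x * (1.0 - x)
    return np.clip(x, 0.0, 1.0)

if __name__ == "__main__":
    night = np.random.rand(120, 160, 3) * 0.2   # stand-in for a dark night image
    alpha = np.full_like(night, 0.6)            # placeholder for a learned alpha map
    enhanced = apply_enhancement_curve(night, alpha)
    print(f"mean brightness: {night.mean():.3f} -> {enhanced.mean():.3f}")
```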

  • Improved Resolution Enhancement Technique for Broadband Illumination in Flat Panel Display Lithography [Open Access]

    Kanji SUZUKI  Manabu HAKKO  

     
    INVITED PAPER

    Publicized: 2021/08/17
    Vol: E105-C No:2
    Page(s): 59-67

    In flat panel display (FPD) lithography, a high resolution and large depth of focus (DOF) are required. The demands for high throughput have necessitated the use of large glass plates and exposure areas, thereby increasing focal unevenness and reducing process latitude. Thus, a large DOF is needed, particularly for high-resolution lithography. To manufacture future high-definition displays, 1.0μm line and space (L/S) is predicted to be required, and a technique to achieve this resolution with adequate DOF is necessary. To improve the resolution and DOF, resolution enhancement techniques (RETs) have been introduced. RETs such as off-axis illumination (OAI) and phase-shift masks (PSMs) have been widely used in semiconductor lithography, which utilizes narrowband illumination. To effectively use RETs in FPD lithography, modification for broadband illumination is required because FPD lithography utilizes such illumination as exposure light. However, thus far, RETs for broadband illumination have not been studied. This study aimed to develop techniques to achieve 1.0μm L/S resolution with an acceptable DOF. To this end, this paper proposes a method that combines our previously developed RET, namely, divided spectrum illumination (DSI), with an attenuated PSM (Att. PSM). Theoretical observations and simulations present the design of a PSM for broadband illumination. The transmittance and phase shift, whose degree varies according to the wavelength, are determined in terms of aerial image contrast and resist loss. The design of DSI for an Att. PSM is also discussed considering image contrast, DOF, and illumination intensity. Finally, the exposure results of 1.0μm L/S using DSI and PSM techniques are shown, demonstrating that a PSM greatly improves the resist profile, and DSI enhances the DOF by approximately 30% compared to conventional OAI. Thus, DSI and PSMs can be used in practical applications for achieving 1.0μm L/S with sufficient DOF.

  • Entropy Based Illumination-Invariant Foreground Detection

    Karthikeyan PANJAPPAGOUNDER RAJAMANICKAM  Sakthivel PERIYASAMY  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2019/04/18
    Vol: E102-D No:7
    Page(s): 1434-1437

    Background subtraction algorithms generate a background model of the monitored scene and compare it with the current video frame to detect foreground objects. In general, most background subtraction algorithms fail to detect foreground objects when the scene illumination changes. An entropy-based background subtraction algorithm is proposed to address this problem. The proposed method adapts to illumination changes by updating the background model according to the difference in entropy between the current frame and the previous frame. This entropy-based background modeling can efficiently handle both sudden and gradual illumination variations. The proposed algorithm is tested on six video sequences and compared with four algorithms to demonstrate its efficiency in terms of F-score, similarity, and frame rate.
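    The core entropy-driven update can be sketched in a few lines; the blending rates and threshold below are illustrative values, not the ones used in the letter.

```python
# Entropy-driven background updating for background subtraction (sketch).
import numpy as np

def frame_entropy(gray):
    """Shannon entropy of an 8-bit grayscale frame."""
    hist, _ = np.histogram(gray, bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def update_background(background, frame, prev_entropy,
                      slow=0.01, fast=0.2, entropy_thresh=0.3):
    """Blend the frame into the background model; blend faster when a large
    entropy change suggests an illumination change."""
    e = frame_entropy(frame)
    rate = fast if abs(e - prev_entropy) > entropy_thresh else slow
    return (1.0 - rate) * background + rate * frame, e

def foreground_mask(background, frame, diff_thresh=30):
    """Pixels that differ strongly from the background model are foreground."""
    return np.abs(frame.astype(np.int16) - background.astype(np.int16)) > diff_thresh
```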

  • Pedestrian Detectability Estimation Considering Visual Adaptation to Drastic Illumination Change

    Yuki IMAEDA  Takatsugu HIRAYAMA  Yasutomo KAWANISHI  Daisuke DEGUCHI  Ichiro IDE  Hiroshi MURASE  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2018/02/20
    Vol: E101-D No:5
    Page(s): 1457-1461

    We propose a method for estimating pedestrian detectability that considers the driver's visual adaptation to drastic illumination changes, which has not been studied in previous works. We assume that the driver's visual characteristics change in proportion to the time elapsed after an illumination change. As a solution, we construct multiple estimators corresponding to different elapsed periods and estimate the detectability by switching among them according to the elapsed period. To evaluate the proposed method, we construct an experimental setup that presents a participant with illumination changes and conduct a preliminary simulated experiment to measure and estimate pedestrian detectability as a function of the elapsed period. Results show that the proposed method can estimate detectability accurately after a drastic illumination change.
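    The switching scheme itself is simple; a sketch with a hypothetical bank of three estimators (early, middle, and late adaptation periods) is shown below. The period boundaries and the estimators are placeholders.

```python
# Selecting a detectability estimator by elapsed time since an illumination change (sketch).
from bisect import bisect_right

class SwitchedDetectabilityEstimator:
    def __init__(self, period_bounds, estimators):
        # period_bounds: sorted upper bounds (seconds) of each adaptation period
        # estimators: one callable per period, mapping features -> detectability
        assert len(period_bounds) + 1 == len(estimators)
        self.period_bounds = period_bounds
        self.estimators = estimators

    def estimate(self, features, elapsed_seconds):
        idx = bisect_right(self.period_bounds, elapsed_seconds)
        return self.estimators[idx](features)

# Dummy estimators for illustration only.
est = SwitchedDetectabilityEstimator(
    period_bounds=[1.0, 5.0],
    estimators=[lambda f: 0.2 * f, lambda f: 0.5 * f, lambda f: 0.9 * f],
)
print(est.estimate(1.0, elapsed_seconds=0.5))   # early-period estimator is used
```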

  • Illumination Normalization for Face Recognition Using Energy Minimization Framework

    Xiaoguang TU  Feng YANG  Mei XIE  Zheng MA  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2017/03/10
    Vol: E100-D No:6
    Page(s): 1376-1379

    Numerous methods have been developed to handle lighting variations in the preprocessing step of face recognition. However, most of them use only high-frequency information (edges, lines, corners, etc.) for recognition, as pixels in these areas have higher local variance and are therefore less sensitive to illumination variations. In this case, low-frequency information may be discarded and some features that are helpful for recognition may be ignored. In this paper, we present a new and efficient method for illumination normalization using an energy minimization framework. The proposed method aims to remove the illumination field of the observed face images while simultaneously preserving the intrinsic facial features. The normalized face image and the illumination field are obtained by a reciprocal iteration scheme. Experiments on the CMU-PIE and Extended Yale B databases show that the proposed method preserves very good visual quality even for images with deep shadows and high-brightness regions, and obtains promising illumination normalization results for better face recognition performance.
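    A rough sketch of an alternating ("reciprocal iteration") decomposition is given below. It uses simple Gaussian low-pass smoothing as the illumination prior rather than the paper's energy functional; sigma and the iteration count are illustrative.

```python
# Alternating estimation of a smooth illumination field and a reflectance image (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def decompose(image, sigma=15.0, iterations=5, eps=1e-3):
    """image: float array in (0, 1]; returns (reflectance, illumination)."""
    illumination = gaussian_filter(image, sigma)
    for _ in range(iterations):
        reflectance = image / np.maximum(illumination, eps)   # detail component
        # Re-estimate the illumination as the smooth part of what remains.
        illumination = gaussian_filter(image / np.maximum(reflectance, eps), sigma)
    return np.clip(reflectance, 0.0, None), illumination
```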

  • A Novel Illumination Estimation for Face Recognition under Complex Illumination Conditions

    Yong CHENG  Zuoyong LI  Yuanchen HAN  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2017/01/06
    Vol: E100-D No:4
    Page(s): 923-926

    Building on the classic Lambertian reflectance model, this paper proposes an effective illumination estimation model for extracting illumination invariants for face recognition under complex illumination conditions. The illumination estimated by our method not only matches the actual lighting conditions of facial images but also conforms to the imaging principle. Experimental results on the combined Yale B database show that the proposed method extracts more robust illumination invariants, which improves the face recognition rate.

  • An Efficient Soft Shadow Mapping for Area Lights in Various Shapes and Colors

    Youngjae CHUN  Kyoungsu OH  

     
    LETTER-Computer Graphics

    Publicized: 2016/11/11
    Vol: E100-D No:2
    Page(s): 396-400

    Shadow is an important effect that makes virtual 3D scenes more realistic. In this paper, we propose a fast and correct soft shadow generation method for area lights of various shapes and colors. To conduct efficient as well as accurate visibility tests, we exploit the complexity of shadow and area light color.

  • Proposal of Multiscale Retinex Using Illumination Adjustment for Digital Images

    Yi RU  Go TANAKA  

     
    LETTER-Image

    Vol: E99-A No:11
    Page(s): 2003-2007

    In this letter, we propose a method for obtaining a clear and natural output image by tuning the illumination component of an input image. The proposed method is based on the retinex process and is suitable for improving the quality of images whose illumination is insufficient.
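    A minimal multiscale retinex sketch with a simple gamma adjustment of the estimated illumination is shown below; the scales and the gamma value are illustrative and not the letter's settings.

```python
# Multiscale retinex with a gamma-adjusted illumination component (sketch).
import numpy as np
from scipy.ndimage import gaussian_filter

def msr_with_illumination_adjustment(image, sigmas=(15, 80, 250), gamma=0.7, eps=1e-6):
    """image: float array in (0, 1]; returns an enhanced image rescaled to [0, 1]."""
    log_i = np.log(image + eps)
    out = np.zeros_like(image)
    for sigma in sigmas:
        illumination = gaussian_filter(image, sigma)      # smooth illumination estimate
        adjusted = np.power(illumination + eps, gamma)    # tune the illumination component
        out += log_i - np.log(adjusted + eps)
    out /= len(sigmas)
    return (out - out.min()) / (out.max() - out.min() + eps)
```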

  • Illumination-Invariant Face Representation via Normalized Structural Information

    Wonjun KIM  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2016/06/21
    Vol: E99-D No:10
    Page(s): 2661-2663

    A novel method for illumination-invariant face representation is presented based on the orthogonal decomposition of the local image structure. One important advantage of the proposed method is that image gradients and corresponding intensity values are simultaneously used with our decomposition procedure to preserve the original texture while yielding the illumination-invariant feature space. Experimental results demonstrate that the proposed method is effective for face recognition and verification even with diverse lighting conditions.

  • A Novel Lambertian-RBFNN for Office Light Modeling

    Wa SI  Xun PAN  Harutoshi OGAI  Katsumi HIRAI  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2016/04/18
    Vol: E99-D No:7
    Page(s): 1742-1752

    In lighting control systems, accurate artificial-light data (lighting coefficients) are essential for illumination control accuracy and energy-saving efficiency. This research proposes a novel Lambertian-Radial Basis Function Neural Network (L-RBFNN) to model both the lighting coefficients and the illumination environment of an office. By adding a Lambertian neuron to represent the rough theoretical illuminance distribution of the lamp and modifying the RBF neurons to regulate the distribution shape, L-RBFNN solves the instability problem of the conventional RBFNN and achieves higher modeling accuracy. Simulations of both single-light and multiple-light modeling are performed and compared with other methods such as the Lambertian function, cubic spline interpolation, and the conventional RBFNN. The results show that: 1) L-RBFNN is a successful modeling method for artificial light with imperceptible modeling error; 2) compared with other existing methods, L-RBFNN provides better performance with lower modeling error; 3) the number of training sensors can be reduced to the same number as the lamps, making the modeling method easier to apply in real-world lighting systems.
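    The idea of combining one physically motivated basis with RBF corrections can be sketched as a plain least-squares fit; the lamp height, RBF centers, and widths below are hypothetical, and the paper's network training procedure is not reproduced.

```python
# Fitting illuminance with a Lambertian basis term plus Gaussian RBF corrections (sketch).
import numpy as np

def lambertian_basis(sensor_xy, lamp_xy, height=2.5):
    # cos(theta) / d^2 falloff for a downward-facing lamp at the given height.
    d2 = np.sum((sensor_xy - lamp_xy) ** 2, axis=1) + height ** 2
    return height / d2 ** 1.5

def rbf_basis(sensor_xy, centers, width=1.0):
    d2 = np.sum((sensor_xy[:, None, :] - centers[None, :, :]) ** 2, axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def fit_light_model(sensor_xy, measured_lux, lamp_xy, rbf_centers):
    design = np.column_stack([lambertian_basis(sensor_xy, lamp_xy),
                              rbf_basis(sensor_xy, rbf_centers)])
    weights, *_ = np.linalg.lstsq(design, measured_lux, rcond=None)
    return weights, design   # predicted illuminance at the sensors: design @ weights
```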

  • Dense Light Transport for Relighting Computation Using Orthogonal Illumination Based on Walsh-Hadamard Matrix

    Isao MIYAGAWA  Yukinobu TANIGUCHI  

     
    PAPER

    Publicized: 2016/01/28
    Vol: E99-D No:4
    Page(s): 1038-1051

    We propose a practical method that acquires dense light transports from unknown 3D objects by employing orthogonal illumination based on a Walsh-Hadamard matrix for relighting computation. We assume the presence of color crosstalk, which represents color mixing between projector pixels and camera pixels, and then describe the light transport matrix by using sets of the orthogonal illumination and the corresponding camera response. Our method handles not only direct reflection light but also global light radiated from the entire environment. Tests of the proposed method using real images show that orthogonal illumination is an effective way of acquiring accurate light transports from various 3D objects. We demonstrate a relighting test based on acquired light transports and confirm that our method outputs excellent relighting images that compare favorably with the actual images observed by the system.
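    The orthogonality that makes this work is easy to demonstrate in a noise-free simulation: because a Walsh-Hadamard matrix H satisfies H Hᵀ = nI, projecting the camera responses back onto H recovers the transport matrix exactly. Real projectors cannot emit negative light, so practical systems use complementary pattern pairs; the toy sketch below ignores that detail.

```python
# Recovering a light transport matrix from Walsh-Hadamard illumination (toy simulation).
import numpy as np
from scipy.linalg import hadamard

n_proj, n_cam = 64, 100                     # projector pixels (power of two), camera pixels
rng = np.random.default_rng(0)

T_true = rng.random((n_cam, n_proj))        # unknown light transport
H = hadamard(n_proj).astype(float)          # each column is one illumination pattern
responses = T_true @ H                      # camera responses to the n_proj patterns
T_est = responses @ H.T / n_proj            # reconstruction via orthogonality

print(np.allclose(T_est, T_true))           # True
```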

  • Analysis of Noteworthy Issues in Illumination Processing for Face Recognition

    Min YAO  Hiroshi NAGAHASHI  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E98-D No:3
    Page(s): 681-691

    Face recognition under variable illumination conditions is a challenging task. A number of approaches have been developed to solve the illumination problem. In this paper, we summarize and analyze some noteworthy issues in illumination processing for face recognition by reviewing various representative approaches. These issues include a principle that associates the various approaches with a commonly used reflectance model, and shared considerations such as the contribution of basic processing methods, the processing domain, the feature scale, and a common problem. We also address a more essential question: what should actually be normalized? Through the discussion of these issues, we provide suggestions on potential directions for future research. In addition, we conduct evaluation experiments on 1) the contribution of fundamental illumination correction to illumination-insensitive face recognition and 2) the comparative performance of various approaches. Experimental results show that approaches that include fundamental illumination correction are less sensitive to extreme illumination than those without it. Tan and Triggs' method (TT) using the L1 norm achieves the best results among the nine tested approaches.
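    For reference, the Tan & Triggs (TT) preprocessing chain evaluated above is widely documented: gamma correction, difference-of-Gaussians filtering, and two-stage contrast equalization. The sketch below uses the commonly cited default parameters, which may differ from the exact settings used in this paper.

```python
# Tan & Triggs illumination normalization chain (sketch with common defaults).
import numpy as np
from scipy.ndimage import gaussian_filter

def tan_triggs(gray, gamma=0.2, sigma0=1.0, sigma1=2.0, a=0.1, tau=10.0):
    x = np.power(gray.astype(np.float64) + 1e-6, gamma)            # gamma correction
    x = gaussian_filter(x, sigma0) - gaussian_filter(x, sigma1)    # DoG filtering
    x /= np.mean(np.abs(x) ** a) ** (1.0 / a) + 1e-6               # contrast equalization, stage 1
    x /= np.mean(np.minimum(np.abs(x), tau) ** a) ** (1.0 / a) + 1e-6  # stage 2
    return tau * np.tanh(x / tau)                                  # compress extreme values
```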

  • Light Source Estimation in Mobile Augmented Reality Scenes by Using Human Face Geometry

    Emre KOC  Selim BALCISOY  

     
    PAPER-Augmented Reality

    Vol: E97-D No:8
    Page(s): 1974-1982

    Light source estimation and virtual lighting must be believable in terms of appearance and correctness in augmented reality scenes. As a result of illumination complexity in an outdoor scene, realistic lighting for augmented reality is still a challenging problem. In this paper, we propose a framework based on an estimation of environmental lighting from well-defined objects, specifically human faces. The method is tuned for outdoor use, and the algorithm is further enhanced to illuminate virtual objects exposed to direct sunlight. Our model can be integrated into existing mobile augmented reality frameworks to enhance visual perception.

  • Illumination Normalization-Based Face Detection under Varying Illumination

    Min YAO  Hiroshi NAGAHASHI  Kota AOKI  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E97-D No:6
    Page(s): 1590-1598

    A number of well-known learning-based face detectors achieve extraordinary performance in controlled environments, but face detection under varying illumination is still challenging. Possible solutions to this illumination problem are to create illumination-invariant features or to utilize skin color information; however, such features and skin colors are not sufficiently reliable under difficult lighting conditions. Another possible solution is to apply illumination normalization (e.g., Histogram Equalization (HE)) before running the face detectors, but applications of normalization to face detection have not been widely studied in the literature. This paper applies and evaluates various existing normalization methods within a framework that combines illumination normalization with two learning-based face detectors (a Haar-like face detector and an LBP face detector). These methods were initially proposed for different purposes (face recognition or image quality enhancement), but some of them significantly improve the original face detectors and lead to better performance than HE according to comparative experiments on two databases. Meanwhile, we propose a new normalization method called segmentation-based half histogram stretching and truncation (SH) for face detection under varying illumination. It first employs Otsu's method to segment the histogram (intensities) of the input image into several spans and then redistributes the intensities within the segmented spans. In this way, non-uniform illumination can be efficiently compensated and local facial structures can be appropriately enhanced. Our method achieves good performance in the experiments.
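    The evaluation framework (normalize, then detect) is straightforward to reproduce with off-the-shelf components; the sketch below pairs plain histogram equalization with OpenCV's bundled Haar cascade as the baseline, and does not implement the proposed SH method.

```python
# Illumination normalization before a learning-based face detector (baseline sketch).
import cv2

def detect_faces_with_normalization(bgr_image):
    gray = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    normalized = cv2.equalizeHist(gray)                  # baseline HE normalization
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(normalized, scaleFactor=1.1, minNeighbors=5)
```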

  • A New Face Relighting Method Based on Edge-Preserving Filter

    Lingyu LIANG  Lianwen JIN  

     
    LETTER-Computer Graphics

    Vol: E96-D No:12
    Page(s): 2904-2907

    We propose a new face relighting method using an illumination template generated from a single reference portrait. First, the reference is warped according to the shape of the target. Second, we employ a new spatially variant edge-preserving smoothing filter to remove the facial identity and texture details of the warped reference and obtain the illumination template. Finally, we relight the target with the template in the CIELAB color space. Experiments show the effectiveness of our method for both grayscale and color faces taken from different databases, and comparisons with previous works demonstrate that our method produces a better relighting effect.
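    The pipeline can be approximated with an off-the-shelf edge-preserving filter; the sketch below uses OpenCV's bilateral filter in place of the paper's spatially variant filter, assumes the reference has already been warped to the target's shape and size, and uses illustrative filter parameters and blend weight.

```python
# Relighting a target face with an illumination template extracted in CIELAB (sketch).
import cv2
import numpy as np

def relight_with_template(target_bgr, warped_reference_bgr, blend=0.6):
    ref_lab = cv2.cvtColor(warped_reference_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    tgt_lab = cv2.cvtColor(target_bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    # Smooth away identity and texture details; keep the large-scale shading.
    template_l = cv2.bilateralFilter(np.ascontiguousarray(ref_lab[:, :, 0]),
                                     d=15, sigmaColor=40, sigmaSpace=15)
    tgt_lab[:, :, 0] = (1.0 - blend) * tgt_lab[:, :, 0] + blend * template_l
    return cv2.cvtColor(np.clip(tgt_lab, 0, 255).astype(np.uint8), cv2.COLOR_LAB2BGR)
```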

  • Spatially Adaptive Logarithmic Total Variation Model for Varying Light Face Recognition

    Biao WANG  Weifeng LI  Zhimin LI  Qingmin LIAO  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E96-D No:1
    Page(s): 155-158

    In this letter, we propose an extension of the classical logarithmic total variation (LTV) model for face recognition under varying illumination conditions. LTV treats all facial areas with the same regularization parameters, which inevitably results in the loss of useful facial details and is harmful to recognition. To address this problem, we propose to assign the regularization parameters that balance the large-scale (illumination) and small-scale (reflectance) components in a spatially adaptive scheme. Face recognition experiments on both the Extended Yale B and the large-scale FERET databases demonstrate the effectiveness of the proposed method.
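    A baseline (non-adaptive) LTV decomposition can be approximated with an off-the-shelf total variation solver: TV-smooth the log image to get the large-scale illumination part u and keep the residual v as the illumination-insensitive detail. The spatially adaptive weighting that is the letter's contribution is not reproduced; the constant weight below is illustrative.

```python
# Baseline logarithmic total variation (LTV) decomposition (sketch).
import numpy as np
from skimage.restoration import denoise_tv_chambolle

def ltv_decompose(gray, weight=0.1, eps=1e-6):
    """gray: float array in (0, 1]; returns (large_scale_u, small_scale_v)."""
    log_i = np.log(gray + eps)
    u = denoise_tv_chambolle(log_i, weight=weight)   # large-scale (illumination) component
    v = log_i - u                                    # small-scale (reflectance) component
    return u, v
```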

  • A Composite Illumination Invariant Color Feature and Its Application to Partial Image Matching

    Masaki KOBAYASHI  Keisuke KAMEYAMA  

     
    PAPER-Image Recognition, Computer Vision

    Vol: E95-D No:10
    Page(s): 2522-2532

    In camera-based object recognition and classification, surface color is one of the most important characteristics. However, apparent object color may differ significantly according to the illumination and surface conditions, and such variation can be an obstacle to utilizing color features. Geusebroek et al.'s color invariants are a powerful tool for characterizing object color regardless of illumination and surface conditions. In this work, we analyze the estimation process of the color invariants from RGB images and propose a novel invariant color feature, based on the elementary invariants, that respects the circular continuity of the mapping between colors and their invariants. Experiments show that using the proposed invariant in combination with luminance improves the retrieval performance of partial object image matching under varying illumination conditions.

  • A Contrast Enhancement Method for HDR Image Using a Modified Image Formation Model

    Byoung-Ju YUN  Hee-Dong HONG  Ho-Hyoung CHOI  

     
    PAPER-Image Processing and Video Processing

    Vol: E95-D No:4
    Page(s): 1112-1119

    Poor illumination and viewing conditions negatively influence the quality of an image, especially the contrast of dark and bright regions, so captured and displayed images usually need contrast enhancement. Histogram-based or gamma correction-based methods are generally used for this. However, these are global contrast enhancement methods, and since the sensitivity of the human eye changes locally according to the position of objects and the illumination in the scene, global methods have limitations. Spatially adaptive methods are needed to overcome these limitations, which has led to the development of the integrated surround retinex (ISR) and estimation of dominant chromaticity (EDC) methods. However, these methods are based on the gray-world assumption and use a general image formation model, so their color constancy is known to be poor, resulting in graying-out, halo artifacts (ringing effects), and dominant-color casts. This paper presents a contrast enhancement method using a modified image formation model in which the image is divided into three components: global illumination, local illumination, and reflectance. After applying a power-law constant to control the contrast, the output image is obtained from the product of these components, based on the sRGB color representation, to avoid or minimize color distortion. The experimental results show that the proposed method yields better performance than conventional methods.

  • Psychological Effects of Ambient Illumination Control and Illumination Layout While Viewing Various Video Images

    Takuya IWANAMI  Ayano KIKUCHI  Keita HIRAI  Toshiya NAKAGUCHI  Norimichi TSUMURA  Yoichi MIYAKE  

     
    PAPER-Vision

    Vol: E94-A No:2
    Page(s): 493-499

    Recently, enhancing the visual experience of the user has become a new trend for TV displays. This trend stems from the fact that changes in ambient illumination while viewing a liquid crystal display (LCD) significantly affect human impressions. However, the psychological effects caused by the combination of the displayed video image and the ambient illumination have not been investigated. In the present research, we clarify the relationship between ambient illumination and psychological effects while viewing video images displayed on an LCD, using a questionnaire-based semantic differential (SD) method and factor analysis. Six kinds of video images were displayed under different illumination colors and layouts and rated by 15 observers. The analysis shows that with illumination control around the LCD displaying the video image, the feelings of 'activity' and 'evaluating' were rated higher than under the fluorescent ceiling condition. In particular, simultaneous illumination control around the display and the ceiling enhanced the feelings of 'activity' and 'evaluating' while maintaining 'comfort.' Moreover, the feeling of 'activity' under illumination control around the LCD and the ceiling was rated clearly higher while viewing a music video than while viewing a natural-scene video.

  • Block-Based Bag of Words for Robust Face Recognition under Variant Conditions of Facial Expression, Illumination, and Partial Occlusion

    Zisheng LI  Jun-ichi IMAI  Masahide KANEKO  

     
    PAPER-Processing

    Vol: E94-A No:2
    Page(s): 533-541

    In many real-world face recognition applications, only one training image per person may be available. Moreover, the test images may vary in facial expression and illumination, or may be partially occluded. However, most classical face recognition techniques assume that multiple images per person are available for training, and they have difficulty dealing with extreme expressions, illumination, and occlusions. This paper proposes a novel block-based bag of words (BBoW) method to solve these problems. In our approach, a face image is partitioned into multiple blocks, dense SIFT features are calculated and vector-quantized into visual words within each block, and finally the histograms of codeword distributions over the local blocks are concatenated to represent the face image. Our method captures local features in each block while maintaining the holistic spatial information of the different facial components. Without any illumination compensation or image alignment, the proposed method achieves excellent face recognition results on the AR and XM2VTS databases. Experimental results show that, using only one neutral-expression frame per person for training, our method obtains the best performance reported so far on AR face images with extreme expressions, varying illumination, and partial occlusions. We also test our method on the standard and darkened sets of the XM2VTS database, achieving average rates of 100% and 96.10%, respectively.
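    A rough sketch of the pipeline is shown below: a grid of dense SIFT keypoints is described and quantized against a learned codebook, and per-block histograms are concatenated. The grid size, SIFT step, and codebook size are illustrative choices, not the paper's settings.

```python
# Block-based bag of words over dense SIFT descriptors (sketch).
import cv2
import numpy as np
from sklearn.cluster import KMeans

def dense_sift(gray, step=8, size=8):
    """Compute SIFT descriptors on a regular grid of an 8-bit grayscale image."""
    sift = cv2.SIFT_create()
    kps = [cv2.KeyPoint(float(x), float(y), size)
           for y in range(step, gray.shape[0] - step, step)
           for x in range(step, gray.shape[1] - step, step)]
    kps, desc = sift.compute(gray, kps)
    return kps, desc

def bbow_descriptor(gray, codebook, grid=(4, 4)):
    """Concatenate per-block codeword histograms into one face descriptor."""
    kps, desc = dense_sift(gray)
    words = codebook.predict(desc.astype(np.float64))
    h, w = gray.shape
    hists = []
    for gy in range(grid[0]):
        for gx in range(grid[1]):
            in_block = [i for i, kp in enumerate(kps)
                        if gy * h / grid[0] <= kp.pt[1] < (gy + 1) * h / grid[0]
                        and gx * w / grid[1] <= kp.pt[0] < (gx + 1) * w / grid[1]]
            hist = np.bincount(words[in_block], minlength=codebook.n_clusters)
            hists.append(hist / max(len(in_block), 1))
    return np.concatenate(hists)

# Codebook training on descriptors pooled from training faces, e.g.:
#   codebook = KMeans(n_clusters=128, n_init=4).fit(all_training_descriptors)
```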

Showing 1-20 of 50 hits