
Keyword Search Result

[Keyword] image quality (44 hits)

Results 1-20 of 44 hits

  • No Reference Quality Assessment of Contrast-Distorted SEM Images Based on Global Features

    Fengchuan XU  Qiaoyue LI  Guilu ZHANG  Yasheng CHANG  Zixuan ZHENG  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2023/07/28  Vol: E106-D No:11  Page(s): 1935-1938

    This letter presents a global feature-based method for no-reference quality assessment of contrast-distorted scanning electron microscopy (SEM) images. Based on the characteristics of SEM images and the human visual system, global features of the SEM image are extracted and used as the score for evaluating image quality. The texture information of the SEM image is first extracted using an oriented low-pass filter, and the amount of information in the texture component is measured by its entropy, which reflects the complexity of the texture. The singular values of the original image are then computed at four scales, and the amount of structural change between scales is calculated and averaged. Finally, the texture information and the structural change are pooled to generate the final quality score of the SEM image. Experimental results show that the method can effectively evaluate the quality of contrast-distorted SEM images.
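
    As a rough illustration of this pipeline (texture entropy plus multi-scale singular-value change, pooled into one score), the Python sketch below follows the steps described above. The Gaussian filter used to separate the texture component, the four down-sampling scales, and the pooling weight are assumptions made for illustration, not the authors' exact settings.

      # Minimal sketch: texture entropy pooled with multi-scale singular-value change.
      # Filter choice, scales, and pooling weight are illustrative assumptions.
      import numpy as np
      from scipy import ndimage

      def texture_entropy(img):
          # Separate a "texture" component with a simple Gaussian low-pass filter
          # (a stand-in for the oriented low-pass filter described in the letter).
          smooth = ndimage.gaussian_filter(img, sigma=2.0)
          texture = img - smooth
          hist, _ = np.histogram(texture, bins=256, density=True)
          p = hist[hist > 0]
          p = p / p.sum()
          return -np.sum(p * np.log2(p))          # Shannon entropy of the texture part

      def structural_change(img, scales=(1, 2, 4, 8)):
          # Singular values at four scales; average change between adjacent scales.
          svs = []
          for s in scales:
              small = img[::s, ::s]
              sv = np.linalg.svd(small, compute_uv=False)
              svs.append(sv[:10] / np.linalg.norm(sv[:10]))   # keep leading values
          diffs = [np.linalg.norm(svs[i] - svs[i + 1]) for i in range(len(svs) - 1)]
          return float(np.mean(diffs))

      def quality_score(img, alpha=0.5):
          # Pool the two quantities; alpha is a hypothetical weight.
          return alpha * texture_entropy(img) + (1 - alpha) * structural_change(img)

      img = np.random.rand(256, 256)               # placeholder for an SEM image
      print(quality_score(img))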

  • Evaluating the Stability of Deep Image Quality Assessment with Respect to Image Scaling

    Koki TSUBOTA  Hiroaki AKUTSU  Kiyoharu AIZAWA  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2022/07/25  Vol: E105-D No:10  Page(s): 1829-1833

    Image quality assessment (IQA) is a fundamental metric for image processing tasks (e.g., compression). For full-reference IQA, traditional metrics such as PSNR and SSIM have long been used; more recently, metrics based on deep neural networks (deep IQAs), such as LPIPS and DISTS, have also been adopted. Image scaling is handled inconsistently among deep IQAs: some perform down-scaling as pre-processing, whereas others use the original image size. In this paper, we show that image scale is an influential factor that affects deep IQA performance. We comprehensively evaluate four deep IQAs on the same five datasets, and the experimental results show that image scale significantly influences IQA performance. We find that the most appropriate image scale is often neither the default nor the original size, and that the choice differs depending on the method and dataset. We visualize the stability and find that PieAPP is the most stable of the four deep IQAs.
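
    The scale-sweep protocol the paper describes can be sketched as below. SSIM from scikit-image is used here only as a stand-in for a deep IQA such as LPIPS or DISTS, and the scale factors and images are placeholders.

      # Sketch of evaluating a full-reference metric at several image scales.
      # SSIM here is only a stand-in for a deep IQA; scales are illustrative.
      import numpy as np
      from skimage.transform import resize
      from skimage.metrics import structural_similarity as ssim

      def score_at_scales(ref, dist, scales=(0.25, 0.5, 1.0)):
          scores = {}
          for s in scales:
              shape = (int(ref.shape[0] * s), int(ref.shape[1] * s))
              r = resize(ref, shape, anti_aliasing=True)
              d = resize(dist, shape, anti_aliasing=True)
              scores[s] = ssim(r, d, data_range=1.0)
          return scores

      ref = np.random.rand(256, 256)                                  # placeholder reference
      dist = np.clip(ref + 0.05 * np.random.randn(256, 256), 0, 1)    # noisy version
      print(score_at_scales(ref, dist))   # scores typically vary with the chosen scale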

  • Energy Efficient Approximate Storing of Image Data for MTJ Based Non-Volatile Flip-Flops and MRAM

    Yoshinori ONO  Kimiyoshi USAMI  

     
    PAPER

    Publicized: 2021/01/06  Vol: E104-C No:7  Page(s): 338-349

    A non-volatile memory (NVM) employing MTJ has many strong points, such as read/write performance, endurance, and operating-voltage compatibility with standard CMOS. However, it consumes a large amount of energy when writing data, which becomes an obstacle when it is applied to battery-operated mobile devices. To solve this problem, we propose an approach that augments the precision scaling technique for the write operation in NVM. Precision scaling is an approximate computing technique that reduces the bit width of data (i.e., precision) to save energy. When image data are written to NVM with precision scaling, the write energy and the image quality change according to the write time and the target bit range. We propose an energy-efficient approximate storing scheme for non-volatile flip-flops and a magnetic random-access memory (MRAM) that writes the data while optimizing the bit positions at which the data are split and the write time for each bit range. Using a statistical model, we obtain optimal values for the write time and the target bit range under the trade-off between write-energy reduction and image-quality degradation. Simulation results demonstrate that with these optimal values the write energy can be reduced by up to 50% while maintaining acceptable image quality. We also investigate in detail the relationship between the input images and the output image quality under this approach. In addition, we evaluate the energy benefits of applying our approach to nine types of image processing, including linear filters and edge detectors. The results show that the write energy is reduced by a further 12.5% at most.
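
    The precision-scaling idea (approximating or dropping low-order bits when writing image data) can be illustrated with the short sketch below. The 8-bit pixel assumption, the kept-bit counts, and the linear energy model are hypothetical placeholders, not the MTJ device model used in the paper.

      # Illustrative precision scaling on 8-bit image data: keep only the upper
      # `keep_bits` bits when "writing", and compare quality/energy to a full write.
      import numpy as np

      def approximate_store(pixels, keep_bits=5):
          mask = 0xFF & ~((1 << (8 - keep_bits)) - 1)      # zero the low-order bits
          return pixels & mask

      def psnr(a, b):
          mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
          return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

      img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)   # placeholder image
      for keep in (8, 6, 5, 4):
          stored = approximate_store(img, keep)
          energy = keep / 8.0   # hypothetical: energy proportional to bits written
          print(f"keep {keep} bits: relative energy {energy:.2f}, "
                f"PSNR {psnr(img, stored):.1f} dB")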

  • SEM Image Quality Assessment Based on Texture Inpainting

    Zhaolin LU  Ziyan ZHANG  Yi WANG  Liang DONG  Song LIANG  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2020/10/30  Vol: E104-D No:2  Page(s): 341-345

    This letter presents an image quality assessment (IQA) metric for scanning electron microscopy (SEM) images based on texture inpainting. Inspired by the observation that the texture information of SEM images is quite sensitive to distortions, a texture inpainting network is first trained to extract texture features. Then the weights of the trained texture inpainting network are transferred to the IQA network to help it learn an effective texture representation of the distorted image. Finally, supervised fine-tuning is conducted on the IQA network to predict the image quality score. Experimental results on the SEM image quality dataset demonstrate the advantages of the presented method.

  • Discriminative Convolutional Neural Network for Image Quality Assessment with Fixed Convolution Filters

    Motohiro TAKAGI  Akito SAKURAI  Masafumi HAGIWARA  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2019/08/09  Vol: E102-D No:11  Page(s): 2265-2266

    Current image quality assessment (IQA) methods require the original images for evaluation. However, recently, IQA methods that use machine learning have been proposed. These methods learn the relationship between the distorted image and the image quality automatically. In this paper, we propose an IQA method based on deep learning that does not require a reference image. We show that a convolutional neural network with distortion prediction and fixed filters improves the IQA accuracy.

  • Binary Sparse Representation Based on Arbitrary Quality Metrics and Its Applications

    Takahiro OGAWA  Sho TAKAHASHI  Naofumi WADA  Akira TANAKA  Miki HASEYAMA  

     
    PAPER-Image, Vision

    Vol: E101-A No:11  Page(s): 1776-1785

    Binary sparse representation based on arbitrary quality metrics and its applications are presented in this paper. The novelties of the proposed method are twofold. First, the method derives a sparse representation whose representation coefficients are binary values, which enables the use of arbitrary image quality metrics. This new sparse representation can generate quality-metric-independent subspaces while simplifying the calculation procedures. Second, visual saliency is used to pool the quality values obtained for all parts of the target images, which enables more visually pleasing approximation of the target images. By introducing these two novel approaches, image approximation that takes human perception into account becomes feasible. Since the proposed method can provide lower-dimensional subspaces obtained with better image quality metrics, it is expected to benefit several image reconstruction tasks. Experimental results showed high performance of the proposed method on two image reconstruction tasks: image inpainting and super-resolution.

  • A Fully-Blind and Fast Image Quality Predictor with Convolutional Neural Networks

    Zhengxue CHENG  Masaru TAKEUCHI  Kenji KANAI  Jiro KATTO  

     
    PAPER-Image

    Vol: E101-A No:9  Page(s): 1557-1566

    Image quality assessment (IQA) is a fundamental problem in the field of image processing. Recently, deep learning-based image quality assessment has attracted increased attention owing to its high prediction accuracy. In this paper, we propose a fully-blind and fast image quality predictor (FFIQP) using convolutional neural networks, incorporating two strategies. First, we propose a distortion clustering strategy based on the distribution function of intermediate-layer results in the convolutional neural network (CNN) to make IQA fully blind. Second, by analyzing the relationship between image saliency information and CNN prediction error, we utilize a pre-saliency map to skip non-salient patches and thereby accelerate IQA. Experimental results verify that our method achieves high accuracy (0.978) against subjective quality scores, outperforming existing IQA methods. Moreover, the proposed method is computationally appealing, achieving a flexible complexity-performance trade-off by assigning different thresholds in the saliency map.

  • Estimating the Quality of Fractal Compressed Images Using Lacunarity

    Megumi TAKEZAWA  Hirofumi SANADA  Takahiro OGAWA  Miki HASEYAMA  

     
    LETTER

    Vol: E101-A No:6  Page(s): 900-903

    In this paper, we propose a highly accurate method for estimating the quality of images compressed using fractal image compression. Using an iterated function system, fractal image compression compresses images by exploiting their self-similarity, thereby achieving high levels of performance; however, we cannot always use fractal image compression as a standard compression technique because some compressed images are of low quality. Generally, sufficient time is required for encoding and decoding an image before it can be determined whether the compressed image is of low quality or not. Therefore, in our previous study, we proposed a method to estimate the quality of images compressed using fractal image compression. Our previous method estimated the quality using image features of a given image without actually encoding and decoding the image, thereby providing an estimate rather quickly; however, estimation accuracy was not entirely sufficient. Therefore, in this paper, we extend our previously proposed method for improving estimation accuracy. Our improved method adopts a new image feature, namely lacunarity. Results of simulation showed that the proposed method achieves higher levels of accuracy than those of our previous method.
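
    Lacunarity itself is a standard texture statistic; a minimal gliding-box estimate for a grayscale image is sketched below. The binarization at the image mean, the box sizes, and the test image are illustrative assumptions, not necessarily the feature as computed in the paper.

      # Gliding-box lacunarity estimate for a grayscale image.  The image is first
      # binarized at its mean (illustrative); box "mass" is the number of foreground
      # pixels, and lacunarity is the normalized second moment of the mass.
      import numpy as np
      from scipy.ndimage import uniform_filter

      def lacunarity(img, box_sizes=(2, 4, 8, 16)):
          binary = (img > img.mean()).astype(np.float64)
          result = {}
          for r in box_sizes:
              # Mean mass per r-by-r box at every position (gliding box).
              mass = uniform_filter(binary, size=r, mode="constant") * (r * r)
              m1 = mass.mean()
              m2 = (mass ** 2).mean()
              result[r] = m2 / (m1 ** 2) if m1 > 0 else float("nan")
          return result

      img = np.random.rand(128, 128)        # placeholder for a decoded image
      print(lacunarity(img))                # values near 1 indicate homogeneous texture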

  • Image Quality Assessment Based on Multi-Order Local Features Description, Modeling and Quantification

    Yong DING  Xinyu ZHAO  Zhi ZHANG  Hang DAI  

     
    PAPER-Pattern Recognition

    Publicized: 2017/03/16  Vol: E100-D No:6  Page(s): 1303-1315

    Image quality assessment (IQA) plays an important role in quality monitoring, evaluation and optimization for image processing systems. However, current quality-aware feature extraction methods for IQA can hardly balance accuracy and complexity. This paper introduces multi-order local description into image quality assessment for feature extraction. The first-order structure derivative and high-order discriminative information are integrated into a local pattern representation that serves as the quality-aware feature. Joint distributions of the local pattern representation are then modeled by a spatially enhanced histogram. Finally, the image quality degradation is estimated by quantifying the divergence between such distributions of the reference image and those of the distorted image. Experimental results demonstrate that the proposed method outperforms other state-of-the-art approaches not only in accuracy, i.e., consistency with human subjective evaluation, but also in robustness and stability across different distortion types and various public databases. It provides a promising choice for image quality assessment development.

  • Naturalization of Screen Content Images for Enhanced Quality Evaluation

    Xingge GUO  Liping HUANG  Ke GU  Leida LI  Zhili ZHOU  Lu TANG  

     
    LETTER-Information Network

    Publicized: 2016/11/24  Vol: E100-D No:3  Page(s): 574-577

    Quality assessment of screen content images (SCIs) has attracted much attention recently. Unlike natural images, an SCI is usually a mixture of pictures and text. Traditional quality metrics are mainly designed for natural images and do not fit SCIs well. Motivated by this, this letter presents a simple and effective method to naturalize SCIs so that traditional quality models can be applied to SCI quality prediction. Specifically, bicubic interpolation-based up-sampling is proposed to achieve this goal. Extensive experiments and comparisons demonstrate the effectiveness of the proposed method.
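
    The naturalization step itself is straightforward; a possible sketch using Pillow's bicubic resampling is given below. The 2x up-sampling factor and the file names are placeholders, and the follow-up quality metric is left to the reader.

      # Sketch of "naturalizing" a screen content image by bicubic up-sampling,
      # so that quality metrics designed for natural images can then be applied.
      from PIL import Image

      def naturalize(path, scale=2):
          img = Image.open(path)
          new_size = (img.width * scale, img.height * scale)
          return img.resize(new_size, resample=Image.BICUBIC)

      # Usage (hypothetical file names):
      # naturalized = naturalize("screen_content.png", scale=2)
      # naturalized.save("screen_content_naturalized.png")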

  • Sparse Representation for Color Image Super-Resolution with Image Quality Difference Evaluation

    Zi-wen WANG  Guo-rui FENG  Ling-yan FAN  Jin-wei WANG  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2016/10/19  Vol: E100-D No:1  Page(s): 150-159

    Sparse representation models have been widely applied to image super-resolution. The underlying optimization problem is typically solved by an iterative shrinkage algorithm. During the iterations, the dictionaries and similar patches must be updated to obtain the prior knowledge needed to solve an ill-conditioned problem such as image super-resolution. However, both the iteration and the update processes are time-consuming, which becomes a bottleneck in practice. To address this, we introduce the concept of image quality difference, based on a generalized Gaussian distribution feature that follows the same trend as the peak signal-to-noise ratio (PSNR), and we terminate the update of dictionaries or similar patches according to an adaptive threshold on the image quality difference. On this basis, we present two sparse representation algorithms for image super-resolution: one further improves image quality, and the other reduces running time while preserving image quality. Experimental results show that our quantitative results on several test datasets are in line with expectations.

  • Revisiting the Regression between Raw Outputs of Image Quality Metrics and Ground Truth Measurements

    Chanho JUNG  Sanghyun JOO  Do-Won NAM  Wonjun KIM  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2016/08/08  Vol: E99-D No:11  Page(s): 2778-2787

    In this paper, we aim to investigate the potential usefulness of machine learning in image quality assessment (IQA). Most previous studies have focused on designing effective image quality metrics (IQMs), and significant advances have been made in the development of IQMs over the last decade. Here, our goal is to improve prediction outcomes of “any” given image quality metric. We call this the “IQM's Outcome Improvement” problem, in order to distinguish the proposed approach from the existing IQA approaches. We propose a method that focuses on the underlying IQM and improves its prediction results by using machine learning techniques. Extensive experiments have been conducted on three different publicly available image databases. Particularly, through both 1) in-database and 2) cross-database validations, the generality and technological feasibility (in real-world applications) of our machine-learning-based algorithm have been evaluated. Our results demonstrate that the proposed framework improves prediction outcomes of various existing commonly used IQMs (e.g., MSE, PSNR, SSIM-based IQMs, etc.) in terms of not only prediction accuracy, but also prediction monotonicity.
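
    The core idea, learning a regression from a given IQM's raw outputs to ground-truth subjective scores, could be sketched as below. The support vector regressor and the synthetic scores are placeholder assumptions, not the authors' setup or data.

      # Sketch: improve the prediction of an existing IQM by regressing its raw
      # outputs onto ground-truth subjective scores (MOS).  SVR and the synthetic
      # data below stand in for the learned mapping and a real IQA database.
      import numpy as np
      from sklearn.svm import SVR
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      iqm_scores = rng.uniform(10, 50, size=(500, 1))             # e.g. raw PSNR values
      mos = 5 / (1 + np.exp(-(iqm_scores[:, 0] - 30) / 5))        # synthetic MOS curve
      mos += rng.normal(0, 0.2, size=mos.shape)                   # observer noise

      x_train, x_test, y_train, y_test = train_test_split(iqm_scores, mos, random_state=0)
      model = SVR(kernel="rbf", C=10.0).fit(x_train, y_train)
      pred = model.predict(x_test)
      print("correlation with ground truth:", np.corrcoef(pred, y_test)[0, 1])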

  • Color-Enriched Gradient Similarity for Retouched Image Quality Evaluation

    Leida LI  Yu ZHOU  Jinjian WU  Jiansheng QIAN  Beijing CHEN  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2015/12/09  Vol: E99-D No:3  Page(s): 773-776

    Image retouching is fundamental in photography and is widely used to improve the perceptual quality of a low-quality image. Traditional image quality metrics are designed for degraded images, so they are limited in evaluating the quality of retouched images. This letter presents a RETouched Image QUality Evaluation (RETIQUE) algorithm that measures structure and color changes between the original and retouched images. Structure changes are measured by gradient similarity, while colorfulness and saturation are utilized to measure color changes. The overall quality score of a retouched image is computed as a linear combination of gradient similarity and color similarity. The performance of RETIQUE is evaluated on the public Digitally Retouched Image Quality (DRIQ) database. Experimental results demonstrate that the proposed metric outperforms state-of-the-art methods.
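
    A rough sketch of the two ingredients (gradient similarity for structure, colorfulness and saturation statistics for color) and their linear pooling is given below. The stability constant, the colorfulness definition, and the pooling weight are illustrative, not the RETIQUE parameters.

      # Sketch of a gradient-similarity plus color-change score for a retouched image.
      import numpy as np
      from scipy import ndimage

      def gradient_similarity(a, b, c=0.001):
          ga = np.hypot(ndimage.sobel(a, 0), ndimage.sobel(a, 1))
          gb = np.hypot(ndimage.sobel(b, 0), ndimage.sobel(b, 1))
          sim = (2 * ga * gb + c) / (ga ** 2 + gb ** 2 + c)
          return sim.mean()

      def colorfulness(rgb):
          # Hasler-Susstrunk style colorfulness on an RGB image in [0, 1].
          rg = rgb[..., 0] - rgb[..., 1]
          yb = 0.5 * (rgb[..., 0] + rgb[..., 1]) - rgb[..., 2]
          return np.hypot(rg.std(), yb.std()) + 0.3 * np.hypot(rg.mean(), yb.mean())

      def retouched_quality(orig_rgb, ret_rgb, w=0.5):
          g = gradient_similarity(orig_rgb.mean(-1), ret_rgb.mean(-1))
          color_change = colorfulness(ret_rgb) - colorfulness(orig_rgb)
          return w * g + (1 - w) * color_change     # linear pooling (illustrative)

      orig = np.random.rand(64, 64, 3)              # placeholder original image
      ret = np.clip(orig * 1.1, 0, 1)               # placeholder retouched image
      print(retouched_quality(orig, ret))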

  • An Image Quality Assessment Using Mean-Centered Weber Ratio and Saliency Map

    Soyoung CHUNG  Min Gyo CHUNG  

     
    LETTER

    Publicized: 2015/10/21  Vol: E99-D No:1  Page(s): 138-140

    Chen proposed an image quality assessment method that evaluates image quality from the ratio of noise in an image. However, Chen's method has some drawbacks: unnoticeable noise is reflected in the evaluation, and noise positions are not accurately detected. In this paper, we therefore propose a new image quality measurement scheme using a mean-centered WLNI (Weber's Law Noise Identifier) and a saliency map. The experimental results show that the proposed method outperforms Chen's and agrees more consistently with human visual judgment.

  • Moiré Reduction Using Inflection Point and Color Variation in Digital Camera of No Optical Low Pass Filter

    Dae-Chul KIM  Wang-Jun KYUNG  Ho-Gun HA  Yeong-Ho HA  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2015/09/10  Vol: E98-D No:12  Page(s): 2290-2298

    The role of an optical low-pass filter (OLPF) in a digital still camera is to remove the high spatial frequencies that cause aliasing, thereby enhancing the image quality. However, this also causes some loss of detail. Yet, when an image is captured without the OLPF, moiré generally appears in the high spatial frequency region of the image. Accordingly, this paper presents a moiré reduction method that allows omission of the OLPF. Since most digital still cameras use a CCD or a CMOS with a Bayer pattern, moiré patterns and color artifacts are simultaneously induced by aliasing at high spatial frequencies. Therefore, in this study, moiré reduction is performed in both the luminance channel to remove the moiré patterns and the color channel to reduce color smearing. To detect the moiré patterns, the spatial frequency response (SFR) of the camera is first analyzed. The moiré regions are identified using patterns related to the SFR of the camera and then analyzed in the frequency domain. The moiré patterns are reduced by removing their frequency components, represented by the inflection point between the high-frequency and DC components in the moiré region. To reduce the color smearing, color changing regions are detected using the color variation ratios for the RGB channels and then corrected by multiplying with the average surrounding colors. Experiments confirm that the proposed method is able to reduce the moiré in both the luminance and color channels, while also preserving the detail.

  • Reduced-Reference Image Quality Assessment Based on Discrete Cosine Transform Entropy

    Yazhong ZHANG  Jinjian WU  Guangming SHI  Xuemei XIE  Yi NIU  Chunxiao FAN  

     
    PAPER-Digital Signal Processing

    Vol: E98-A No:12  Page(s): 2642-2649

    A reduced-reference (RR) image quality assessment (IQA) algorithm aims to evaluate the quality of a distorted image automatically using only partial reference data. The goal of an RR IQA metric is to achieve higher prediction accuracy using less reference information. In this paper, we introduce a new RR IQA metric that quantifies the difference in discrete cosine transform (DCT) entropy features between the reference and distorted images. Neurophysiological evidence indicates that the human visual system exhibits different sensitivities to different frequency bands, and distortions in different bands result in different quality degradations. Therefore, we calculate the information degradation on each band separately for quality assessment. The information degradation is first measured by the entropy difference of reorganized DCT coefficients; the entropy differences on all bands are then pooled to obtain the quality score. Experimental results on the LIVE, CSIQ, TID2008, Toyama and IVC databases show that the proposed method is highly consistent with human perception while using very limited reference data (8 values).
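
    A rough sketch of the band-wise DCT-entropy comparison could look like the following. The 8x8 block DCT, the grouping of coefficients into bands by diagonal index, the histogram binning, and the uniform pooling are assumptions made for illustration, not the paper's exact reorganization or pooling.

      # Sketch of a reduced-reference comparison of DCT-coefficient entropy per band.
      import numpy as np
      from scipy.fft import dctn

      def band_entropies(img, block=8, n_bands=4):
          h, w = (img.shape[0] // block) * block, (img.shape[1] // block) * block
          coeffs = [[] for _ in range(n_bands)]
          for i in range(0, h, block):
              for j in range(0, w, block):
                  c = dctn(img[i:i + block, j:j + block], norm="ortho")
                  for u in range(block):
                      for v in range(block):
                          band = min((u + v) * n_bands // (2 * block - 1), n_bands - 1)
                          coeffs[band].append(c[u, v])
          ents = []
          for band in coeffs:
              hist, _ = np.histogram(band, bins=64, density=True)
              p = hist[hist > 0]
              p = p / p.sum()
              ents.append(-np.sum(p * np.log2(p)))
          return np.array(ents)

      def rr_score(ref, dist):
          # Pool the per-band entropy differences (uniform pooling, illustrative).
          return np.abs(band_entropies(ref) - band_entropies(dist)).mean()

      ref = np.random.rand(64, 64)
      dist = np.clip(ref + 0.1 * np.random.randn(64, 64), 0, 1)
      print(rr_score(ref, dist))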

  • Selective Attention Mechanisms for Visual Quality Assessment

    Ulrich ENGELKE  

     
    INVITED PAPER

    Vol: E98-A No:8  Page(s): 1681-1688

    Selective visual attention is an integral mechanism of the human visual system that is often neglected when designing perceptually relevant image and video quality metrics. Disregarding attention mechanisms assumes that all distortions in the visual content impact equally on the overall quality perception, which is typically not the case. Over the past years we have performed several experiments to study the effect of visual attention on quality perception. In addition to gaining a deeper scientific understanding of this matter, we were also able to use this knowledge to further improve various quality prediction models. In this article, I review our work with the aim to increase awareness on the importance of visual attention mechanisms for the effective design of quality prediction models.

  • Estimation of Subjective Image Quality for Combinations of Display Physical Factors Based on the Mahalanobis-Taguchi System

    Yusuke AMANO  Gosuke OHASHI  Shogo MORI  Kazuya SAWADA  Takeshi HOSHINO  Yoshifumi SHIMODAIRA  

     
    LETTER

    Vol: E98-A No:8  Page(s): 1743-1746

    The present study proposes a method for estimating subjective image quality for combinations of display physical factors, based on the Mahalanobis-Taguchi system from the field of quality engineering. The proposed method estimates subjective image quality with an equation derived from the Mahalanobis-Taguchi system and from subjective evaluation experiments, using the method of successive categories, on images whose parameters are combinations of gamma, maximum luminance and minimum luminance. The estimated image quality is in good agreement with the experimentally obtained subjective image quality.
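
    The Mahalanobis-Taguchi system builds a reference (unit) space from normal samples and scores new samples by their Mahalanobis distance to it. A minimal sketch of that distance computation is shown below with placeholder factor data (gamma, maximum luminance, minimum luminance); the data and thresholds are illustrative, not the study's measurements.

      # Minimal sketch of the Mahalanobis-distance part of the Mahalanobis-Taguchi
      # system: build a reference space from "good" samples and score new factor
      # combinations by their distance to it.  All data here are placeholders.
      import numpy as np

      def mahalanobis_distances(reference, samples):
          mean = reference.mean(axis=0)
          cov = np.cov(reference, rowvar=False)
          cov_inv = np.linalg.inv(cov)
          diff = samples - mean
          # Squared Mahalanobis distance for each sample row.
          return np.einsum("ij,jk,ik->i", diff, cov_inv, diff)

      rng = np.random.default_rng(0)
      # Columns: gamma, maximum luminance (cd/m^2), minimum luminance (cd/m^2).
      reference = np.column_stack([
          rng.normal(2.2, 0.1, 50),
          rng.normal(400, 30, 50),
          rng.normal(0.5, 0.1, 50),
      ])
      candidates = np.array([[2.2, 400.0, 0.5],
                             [1.8, 250.0, 1.5]])     # hypothetical display settings
      print(mahalanobis_distances(reference, candidates))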

  • An Edge Dependent Weighted Filter for Video Deinterlacing

    Hao ZHANG  Mengtian RONG  Tao LIU  

     
    LETTER-Image

    Vol: E98-A No:2  Page(s): 788-791

    In this letter, we propose a new intra-field deinterlacing algorithm based on an edge dependent weighted filter (EDWF). The proposed algorithm consists of three steps: 1) calculating the gradients in three directions (45°, 90°, and 135°) within the local working window; 2) obtaining the weights of the neighboring pixels by exploiting the edge information in the pixel gradients; 3) interpolating the missing pixel using the proposed EDWF interpolator. Compared with existing deinterlacing methods on different images and video sequences, the proposed algorithm improves the peak signal-to-noise ratio (PSNR) while achieving better subjective quality.
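
    A minimal sketch of edge-dependent weighted intra-field interpolation for one missing pixel is shown below: directional differences between the lines above and below stand in for the three gradients, each direction is weighted inversely to its gradient magnitude, and the directional averages are blended. The epsilon constant and the test data are illustrative assumptions, not the EDWF formulation.

      # Sketch of edge-dependent weighted interpolation of a missing pixel in an
      # interlaced field, using the 45/90/135 degree directions described above.
      import numpy as np

      def interpolate_pixel(field, i, j):
          up, down = field[i - 1], field[i + 1]       # existing lines above/below
          pairs = {
              45:  (up[j + 1], down[j - 1]),
              90:  (up[j],     down[j]),
              135: (up[j - 1], down[j + 1]),
          }
          grads = {d: abs(float(a) - float(b)) for d, (a, b) in pairs.items()}
          weights = {d: 1.0 / (g + 1e-3) for d, g in grads.items()}  # inverse weighting
          total = sum(weights.values())
          return sum(weights[d] * (float(a) + float(b)) / 2.0
                     for d, (a, b) in pairs.items()) / total

      frame = np.random.rand(8, 8)
      field = frame.copy()
      field[1::2] = 0.0                       # pretend the odd lines are missing
      print(interpolate_pixel(field, 3, 4))   # estimate one missing pixel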

  • Hybrid Integration of Visual Attention Model into Image Quality Metric

    Chanho JUNG  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2014/08/22  Vol: E97-D No:11  Page(s): 2971-2973

    Integrating a visual attention (VA) model into an objective image quality metric is a rapidly evolving area in modern image quality assessment (IQA) research, owing to the significant opportunities that VA information presents. So far, the literature has suggested using either a task-free saliency map or a quality-task one for integration into the quality metric. This paper presents a hybrid integration approach that takes advantage of both saliency maps. We compare our hybrid integration scheme with existing integration schemes using simple quality metrics. Results show that the proposed method performs better than the previous techniques in terms of prediction accuracy.
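
    One simple way to read "hybrid integration" is to weight a local quality map by a blend of a task-free saliency map and a quality-task one. The sketch below shows that kind of saliency-weighted pooling with placeholder maps and a placeholder blend factor; the paper's actual integration scheme may differ.

      # Sketch of pooling a local quality map with a blend of two saliency maps.
      import numpy as np

      def hybrid_pooling(quality_map, sal_task_free, sal_quality_task, beta=0.5):
          hybrid = beta * sal_task_free + (1 - beta) * sal_quality_task
          hybrid = hybrid / hybrid.sum()                 # normalize to a weight map
          return float((quality_map * hybrid).sum())     # saliency-weighted score

      h, w = 64, 64
      quality_map = np.random.rand(h, w)     # e.g. a local SSIM map (placeholder)
      sal_free = np.random.rand(h, w)        # task-free saliency (placeholder)
      sal_task = np.random.rand(h, w)        # quality-task saliency (placeholder)
      print(hybrid_pooling(quality_map, sal_free, sal_task))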

Results 1-20 of 44 hits