
Keyword Search Results

[Keyword] artifact (37 hits)

Showing 1-20 of 37 hits

  • Image Adjustment for Multi-Exposure Images Based on Convolutional Neural Networks

    Isana FUNAHASHI, Taichi YOSHIDA, Xi ZHANG, Masahiro IWAHASHI

    PAPER-Image Processing and Video Processing

    Publicized: 2021/10/21  Vol: E105-D No:1  Page(s): 123-133

    In this paper, we propose an image adjustment method for multi-exposure images based on convolutional neural networks (CNNs). We refer to image regions in multi-exposure images that lack information due to saturation or object motion as lacking areas. Lacking areas cause ghosting artifacts in images fused from multi-exposure sets, which conventional fusion methods attempt to suppress. To avoid this problem, the proposed method estimates the information of lacking areas via adaptive inpainting. The proposed CNN consists of three networks: a warp-and-refinement network, a detection network, and an inpainting network. The second and third networks detect lacking areas and estimate their pixel values, respectively. In experiments, a simple fusion method combined with the proposed method outperforms state-of-the-art fusion methods in terms of peak signal-to-noise ratio. Moreover, applying the proposed method as pre-processing for various fusion methods clearly reduces artifacts.
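    The paper's three-network CNN cannot be reproduced from the abstract alone, but the overall flow (detect lacking areas, inpaint them, then fuse) can be illustrated. Below is a minimal non-learned sketch in Python that uses a saturation mask as a crude stand-in for the detection network and OpenCV's exemplar-based inpainting as a stand-in for the inpainting network; the thresholds, file names, and function choices are illustrative assumptions, not the authors' method.

```python
# Minimal sketch: detect "lacking areas" (here: saturated pixels) in each
# exposure, inpaint them, then fuse with a simple average. This is NOT the
# paper's CNN; the saturation threshold and cv2.inpaint are stand-ins.
import cv2
import numpy as np

def detect_lacking_area(img, low=5, high=250):
    """Crude stand-in for the detection network: mark under/over-exposed pixels."""
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    return ((gray <= low) | (gray >= high)).astype(np.uint8) * 255

def adjust_exposure(img):
    """Stand-in for the inpainting network: fill lacking areas from surroundings."""
    mask = detect_lacking_area(img)
    return cv2.inpaint(img, mask, inpaintRadius=5, flags=cv2.INPAINT_TELEA)

def simple_fusion(images):
    """Plain average fusion of the adjusted exposures."""
    stack = np.stack([img.astype(np.float32) for img in images], axis=0)
    return np.clip(stack.mean(axis=0), 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Hypothetical file names; replace with an actual multi-exposure set.
    exposures = [cv2.imread(p) for p in ["under.png", "mid.png", "over.png"]]
    adjusted = [adjust_exposure(img) for img in exposures]
    cv2.imwrite("fused.png", simple_fusion(adjusted))
```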

  • Compressed Sensing Framework Applying Independent Component Analysis after Undersampling for Reconstructing Electroencephalogram Signals Open Access

    Daisuke KANEMOTO, Shun KATSUMATA, Masao AIHARA, Makoto OHKI

    PAPER-Biometrics

    Publicized: 2020/06/22  Vol: E103-A No:12  Page(s): 1647-1654

    This paper proposes a novel compressed sensing (CS) framework for reconstructing electroencephalogram (EEG) signals. A feature of this framework is the application of independent component analysis (ICA) to remove interference from artifacts after undersampling, in a data processing unit; consequently, the ICA processing block can be removed from the sensing unit. In this framework, we used a random undersampling measurement matrix to suppress Gaussian noise. The developed framework, which uses a discrete cosine transform basis and orthogonal matching pursuit, was evaluated on raw EEG signals with a pseudo-model of an eye-blink artifact. The normalized mean square error (NMSE) and correlation coefficient (CC), obtained as the average of 2,000 results, were compared to quantitatively demonstrate the effectiveness of the proposed framework. The NMSE and CC results showed that the proposed framework can remove interference from artifacts under a high compression ratio.
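    As a rough illustration of the reconstruction stage described above (random undersampling, a discrete cosine transform basis, and orthogonal matching pursuit), the following Python sketch recovers a DCT-sparse test signal from randomly selected samples. The test signal, sampling ratio, and sparsity level are assumptions for illustration, and the framework's ICA-based artifact removal step is not shown.

```python
# Sketch of CS recovery with random undersampling + DCT basis + OMP.
# The test signal and parameters are illustrative, not the paper's EEG data.
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, m = 256, 64                      # signal length, number of kept samples

# Synthetic signal that is sparse in the DCT domain (stand-in for EEG).
coeffs = np.zeros(n)
coeffs[[3, 17, 40]] = [5.0, -3.0, 2.0]
x = idct(coeffs, norm="ortho")

# Random undersampling: keep m of the n samples.
keep = np.sort(rng.choice(n, size=m, replace=False))
y = x[keep]

# Sensing matrix expressed in the DCT basis, restricted to the kept rows.
Psi = idct(np.eye(n), axis=0, norm="ortho")   # columns = DCT basis vectors
A = Psi[keep, :]

# Orthogonal matching pursuit recovers the sparse DCT coefficients.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=10, fit_intercept=False)
omp.fit(A, y)
x_hat = Psi @ omp.coef_

nmse = np.sum((x - x_hat) ** 2) / np.sum(x ** 2)
print(f"NMSE: {nmse:.2e}")
```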

  • Techniques of Reducing Switching Artifacts in Chopper Amplifiers Open Access

    Yoshinori KUSUDA

    INVITED PAPER-Electronic Circuits

    Publicized: 2020/04/09  Vol: E103-C No:10  Page(s): 458-465

    The chopping technique up-modulates an amplifier's offset and low-frequency noise to its switching frequency, and can therefore achieve low offset and low temperature drift. On the other hand, it generates unwanted AC and DC errors due to switching artifacts such as up-modulated ripple and glitches. This paper summarizes various circuit techniques for reducing such switching artifacts and then discusses the advantages and disadvantages of each technique. The comparison shows that newer designs with advanced circuit techniques can achieve lower DC and AC errors at higher chopping frequencies.

  • Combining Parallel Adaptive Filtering and Wavelet Threshold Denoising for Photoplethysmography-Based Pulse Rate Monitoring during Intensive Physical Exercise

    Chunting WAN, Dongyi CHEN, Juan YANG, Miao HUANG

    PAPER-Human-computer Interaction

    Publicized: 2019/12/03  Vol: E103-D No:3  Page(s): 612-620

    Real-time pulse rate (PR) monitoring based on photoplethysmography (PPG) has drawn much attention in recent years. However, PPG signals acquired during movement are easily corrupted by random noise, especially motion artifacts (MA), which degrade the accuracy of PR estimation. In this paper, a parallel method structure is proposed that effectively combines wavelet threshold denoising with recursive least squares (RLS) adaptive filtering to remove interference, and uses a spectral peak tracking algorithm to estimate PR in real time. Furthermore, we propose a parallel-structure RLS adaptive filter that increases the amplitude of the spectral peak associated with PR. The method is evaluated on the PPG datasets of the 2015 IEEE Signal Processing Cup. Experimental results on the 12 training datasets, recorded while subjects walked or ran, show an average absolute error (AAE) of 1.08 beats per minute (BPM) and a standard deviation (SD) of 1.45 BPM. In addition, the AAE of PR on the 10 testing datasets, recorded during fast running accompanied by wrist movements, reaches 2.90 BPM. The results indicate that the proposed approach maintains high PR estimation accuracy even in the presence of strong MA.
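    To make the RLS component concrete, here is a minimal single-channel RLS noise canceller in Python that uses an accelerometer channel as the motion reference, a common setup for MA cancellation. The synthetic signals, filter order, and forgetting factor are illustrative assumptions; the paper's parallel structure, wavelet denoising, and spectral peak tracking stages are not shown.

```python
# Minimal RLS adaptive filter sketch: cancel a motion artifact from a PPG-like
# signal using an accelerometer channel as the noise reference. Signals and
# parameters are synthetic and illustrative, not the paper's configuration.
import numpy as np

def rls_filter(d, x, order=8, lam=0.99, delta=0.01):
    """RLS noise canceller: predict the artifact in d from reference x and
    return the error signal e = d - w^T u, which approximates the clean signal."""
    n = len(d)
    w = np.zeros(order)
    P = np.eye(order) / delta
    e = np.zeros(n)
    for k in range(order - 1, n):
        u = x[k - order + 1:k + 1][::-1]   # current and past reference samples
        e[k] = d[k] - w @ u
        Pu = P @ u
        g = Pu / (lam + u @ Pu)            # gain vector
        w = w + g * e[k]
        P = (P - np.outer(g, Pu)) / lam
    return e

if __name__ == "__main__":
    fs = 125
    t = np.arange(0, 30, 1 / fs)
    rng = np.random.default_rng(0)
    pulse = np.sin(2 * np.pi * 1.8 * t)           # ~108 BPM pulse component
    accel = np.sin(2 * np.pi * 2.5 * t + 0.3)     # accelerometer (motion) reference
    ppg = pulse + 0.8 * accel + 0.05 * rng.standard_normal(len(t))
    cleaned = rls_filter(ppg, accel)
    print("residual error power:", np.mean((cleaned[500:] - pulse[500:]) ** 2))
```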

  • Automatic Stop Word Generation for Mining Software Artifact Using Topic Model with Pointwise Mutual Information

    Jung-Been LEE, Taek LEE, Hoh Peter IN

    PAPER-Software Engineering

    Publicized: 2019/05/27  Vol: E102-D No:9  Page(s): 1761-1772

    Mining software artifacts is a useful way to understand the source code of software projects. Topic modeling in particular has been widely used to discover meaningful information from software artifacts. However, software artifacts are unstructured and contain a mix of textual types within natural-language text, and these characteristics worsen the performance of topic modeling. Among natural language pre-processing tasks, removing stop words to reduce meaningless and uninteresting terms is an efficient way to improve the quality of topic models. Although many approaches have been used to generate effective stop word lists, existing lists are outdated or too general to apply to mining software artifacts. In addition, topic model performance is sensitive to the datasets used for training in each approach. To resolve these problems, we propose an automatic stop word generation approach for topic models of software artifacts. By measuring topic coherence among words in each topic using Pointwise Mutual Information (PMI), we add words with low PMI scores to the stop word list in every topic modeling loop. Our experiments show that the resulting stop word list yields better topic model performance than lists from other approaches.
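    A rough sketch of the core idea (flag low-PMI words among a topic's top words as stop word candidates) is shown below. The PMI estimate from document co-occurrence counts, the averaging over word pairs, and the cut-off threshold are illustrative assumptions, since the abstract does not give the exact formulation.

```python
# Sketch: score each top word of a topic by its average PMI with the topic's
# other top words (estimated from document co-occurrence), and flag low-PMI
# words as stop word candidates. Threshold and toy corpus are illustrative.
import math
from itertools import combinations

def pmi(w1, w2, docs, eps=1e-12):
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum(w1 in d and w2 in d for d in docs) / n
    return math.log((p12 + eps) / (p1 * p2 + eps))

def low_pmi_words(topic_words, docs, threshold=0.0):
    """Return words whose average PMI with the other topic words is low."""
    scores = {w: 0.0 for w in topic_words}
    for w1, w2 in combinations(topic_words, 2):
        s = pmi(w1, w2, docs)
        scores[w1] += s
        scores[w2] += s
    k = len(topic_words) - 1
    return [w for w, s in scores.items() if s / k < threshold]

# Toy corpus of "software artifacts" (each document is a set of tokens).
docs = [set("fix crash in parser when input is empty".split()),
        set("parser crash on empty input reported again".split()),
        set("update readme and license file".split()),
        set("license header added to parser file".split())]
topic_top_words = ["parser", "crash", "empty", "file"]
print("stop word candidates:", low_pmi_words(topic_top_words, docs))
```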

  • Efficient Methods of Inactive Regions Padding for Segmented Sphere Projection (SSP) of 360 Video

    Yong-Uk YOON, Yong-Jo AHN, Donggyu SIM, Jae-Gon KIM

    LETTER-Image Processing and Video Processing

    Publicized: 2018/08/20  Vol: E101-D No:11  Page(s): 2836-2839

    In this letter, methods for padding the inactive regions of Segmented Sphere Projection (SSP) for 360 video are proposed. A 360 video is projected onto a 2D plane to be coded with one of several projection formats. Some projection formats, such as SSP, leave inactive regions in the converted 2D plane. These inactive regions may cause visual artifacts as well as reduced coding efficiency due to the discontinuous boundaries between active and inactive regions. To improve coding efficiency and reduce visual artifacts, the inactive regions are padded using two types of adjacent pixels at either rectangular-face or circle-face boundaries. Padding the inactive regions with highly correlated adjacent pixels reduces the discontinuities between active and inactive regions. Experimental results show that, in terms of end-to-end Weighted to Spherically uniform PSNR (WS-PSNR), the proposed methods achieve a 0.3% BD-rate reduction over the existing padding method for SSP. In addition, the visual artifacts along the borders between discontinuous faces are noticeably reduced.
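    The letter's two specific padding methods are not given in the abstract, but the general idea of filling inactive regions from adjacent active pixels can be sketched as below. Here every inactive pixel simply copies its nearest active pixel via a distance transform, which is an assumed, simpler variant of boundary-based padding rather than the letter's exact procedure.

```python
# Sketch: pad inactive regions of a projected 360-video face by copying the
# nearest active pixel. A circular active face inside a square frame is used
# as a stand-in for an SSP circle face; this is not the letter's exact method.
import numpy as np
from scipy.ndimage import distance_transform_edt

def pad_inactive(face, active_mask):
    """Replace pixels where active_mask is False with the nearest active pixel."""
    _, (iy, ix) = distance_transform_edt(~active_mask, return_indices=True)
    return face[iy, ix]

if __name__ == "__main__":
    h = w = 64
    yy, xx = np.mgrid[0:h, 0:w]
    active = (yy - h / 2) ** 2 + (xx - w / 2) ** 2 <= (h / 2 - 1) ** 2
    face = ((128 + xx % 32) * active).astype(np.uint8)   # zeros outside the circle
    padded = pad_inactive(face, active)
    assert (padded[active] == face[active]).all()        # active pixels unchanged
```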

  • Single Image Haze Removal Using Hazy Particle Maps

    Geun-Jun KIM, Seungmin LEE, Bongsoon KANG

    LETTER-Image

    Vol: E101-A No:11  Page(s): 1999-2002

    Haze with various properties spreads widely across flat areas with depth continuities and corner areas with depth discontinuities. Removing haze from a single hazy image is difficult due to the ill-posed nature of the problem. To address this, the study proposes a modified hybrid median filter that applies a median filter to preserve edges in flat areas and a hybrid median filter to preserve corners at depth discontinuities. The recovered scene radiance, obtained by removing hazy particles, restores image visibility using adaptive nonlinear curves for dynamic range expansion. Through comparative studies and quantitative evaluations, the study shows that the proposed method achieves results similar to or better than those of other state-of-the-art methods.
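    A standard hybrid median filter (the median of the "+"-neighborhood median, the "x"-neighborhood median, and the center pixel) can be sketched as below. The study's modification that switches between a plain median and the hybrid median by area type is not reproduced, and the footprints are the common textbook choice rather than the authors' exact design.

```python
# Sketch of a hybrid median filter: output = median of
# (median over the "+" neighborhood, median over the "x" neighborhood,
#  the center pixel). Footprints follow the common textbook definition.
import numpy as np
from scipy.ndimage import median_filter

def hybrid_median(img, size=5):
    k = size
    plus = np.zeros((k, k), dtype=bool)
    plus[k // 2, :] = True            # horizontal arm
    plus[:, k // 2] = True            # vertical arm
    cross = np.eye(k, dtype=bool) | np.fliplr(np.eye(k, dtype=bool))  # diagonals
    m_plus = median_filter(img, footprint=plus)
    m_cross = median_filter(img, footprint=cross)
    return np.median(np.stack([m_plus, m_cross, img], axis=0), axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    img = rng.integers(0, 256, size=(128, 128)).astype(np.float32)
    out = hybrid_median(img)
    print(out.shape, out.dtype)
```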

  • Learners' Self Checking and Its Effectiveness in Conceptual Data Modeling Exercises

    Takafumi TANAKA, Hiroaki HASHIURA, Atsuo HAZEYAMA, Seiichi KOMIYA, Yuki HIRAI, Keiichi KANEKO

    PAPER

    Publicized: 2018/04/20  Vol: E101-D No:7  Page(s): 1801-1810

    Conceptual data modeling is an important activity in database design, but it is difficult for novice learners to master. In conceptual data modeling, learners are required to detect and correct errors in their artifacts by themselves because modeling tools do not support these activities. We call such activities self checking, which is also an important process. However, previous research has not focused on self checking or on collecting data about it. Collecting data on self checks is difficult because self checking is an internal activity and self checks are not usually expressed. Therefore, we developed a method to help learners express their self checks by reflecting on their artifact-making processes. In addition, we developed a system, KIfU3, which implements this method. We conducted an evaluation experiment and showed the effectiveness of the method. From the experimental results, we found that (1) novice learners conduct self checks during conceptual data modeling tasks; (2) it is difficult for them to detect errors in their artifacts; (3) they cannot necessarily correct the errors even when they identify them; and (4) there is no relationship between the number of self checks a learner performs and the quality of the resulting artifact.

  • Deblocking Artifact of Satellite Image Based on Adaptive Soft-Threshold Anisotropic Filter Using Wavelet

    RISNANDAR, Masayoshi ARITSUGI

    PAPER-Image Processing and Video Processing

    Publicized: 2018/02/26  Vol: E101-D No:6  Page(s): 1605-1620

    New deblocking (blocking artifact reduction) algorithms based on a nonlinear adaptive soft-threshold anisotropic filter in the wavelet domain are proposed. Our deblocking algorithm uses soft-thresholding, adaptive wavelet directions, an adaptive anisotropic filter, and estimation. The novelties of this paper are an adaptive soft-threshold for deblocking and an optimal intersection of confidence intervals (OICI) method for deblocking estimation. The soft-threshold values adapt to different thresholds for flat areas, texture areas, and blocking artifacts. OICI is a reconstruction technique for the estimated deblocking result that improves its quality and reduces the execution time of the estimation compared with other methods. Our adaptive OICI method outperforms other adaptive deblocking methods. Our algorithms achieve up to 98% improvement in MSE, up to 89% in RMSE, and up to 99% in MAE, and reduce the computational time of deblocking estimation by up to 77.98% compared with other methods. We also evaluated shift-and-add implementations using Euler++ (E++) and fourth-order Runge-Kutta++ (RK4++) algorithms, which iterate one step of an ordinary differential equation integration method. Experimental results showed that the E++ and RK4++ algorithms reduce computational time in terms of shifts and adds, and that RK4++ is superior to E++.
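    The adaptive thresholds, anisotropic filtering, and OICI estimation cannot be reconstructed from the abstract, but the basic building block, soft-thresholding of wavelet detail coefficients, can be sketched with PyWavelets. The wavelet, decomposition level, and fixed threshold below are illustrative assumptions.

```python
# Sketch: soft-threshold the wavelet detail coefficients of a blocky image.
# Fixed threshold and 'db2' wavelet are illustrative; the paper's adaptive
# thresholds, anisotropic filtering, and OICI estimation are not reproduced.
import numpy as np
import pywt

def wavelet_soft_deblock(img, wavelet="db2", level=2, thr=8.0):
    coeffs = pywt.wavedec2(img.astype(np.float64), wavelet, level=level)
    out = [coeffs[0]]                                    # keep the approximation
    for (cH, cV, cD) in coeffs[1:]:
        out.append(tuple(pywt.threshold(c, thr, mode="soft")
                         for c in (cH, cV, cD)))
    rec = pywt.waverec2(out, wavelet)
    return rec[:img.shape[0], :img.shape[1]]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.kron(rng.integers(0, 256, (16, 16)), np.ones((8, 8)))  # blocky image
    smoothed = wavelet_soft_deblock(img)
    print(float(np.abs(smoothed - img).mean()))
```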

  • Detecting Motor Learning-Related fNIRS Activity by Applying Removal of Systemic Interferences

    Isao NAMBU, Takahiro IMAI, Shota SAITO, Takanori SATO, Yasuhiro WADA

    LETTER-Biological Engineering

    Publicized: 2016/10/04  Vol: E100-D No:1  Page(s): 242-245

    Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique suitable for measurement during motor learning. However, the effects of contamination by systemic artifacts originating in the scalp layer on learning-related fNIRS signals remain unclear. Here we used fNIRS to measure the activity of sensorimotor regions while participants performed a visuomotor task. Comparing results from a general linear model with and without systemic artifact removal shows that artifact removal improves the detection of learning-related activity in sensorimotor regions, suggesting the importance of removing systemic artifacts when examining learning-related cerebral activity.

  • An Improved Single Image Haze Removal Algorithm Using Image Segmentation

    Hanhoon PARK

    LETTER-Image Processing and Video Processing

    Vol: E97-D No:9  Page(s): 2554-2558

    In this letter, we propose an improved single image haze removal algorithm using image segmentation. It effectively resolves two common problems of conventional algorithms based on the dark channel prior: halo artifacts and incorrect estimation of the atmospheric light. The process flow of our algorithm is as follows. First, the input hazy image is over-segmented. The segmentation results are then used to improve the conventional dark channel computation, which relies on fixed local patches, and to estimate the atmospheric light accurately. Finally, an accurate transmission map is computed from the improved dark channel and atmospheric light, allowing us to recover a high-quality haze-free image.
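    To make the segmentation-based dark channel idea concrete, here is a short sketch that computes a per-segment dark channel over SLIC superpixels and estimates the atmospheric light from the brightest dark-channel pixels. The SLIC parameters and the top-1% selection rule are illustrative assumptions, not the letter's exact procedure.

```python
# Sketch: segment-wise dark channel and atmospheric light estimation.
# SLIC parameters and the top-1% rule are illustrative assumptions.
import numpy as np
from skimage.segmentation import slic

def segment_dark_channel(img, n_segments=300):
    """img: float RGB image in [0, 1]. Dark channel is computed over
    superpixels instead of fixed local patches."""
    labels = slic(img, n_segments=n_segments, compactness=10, start_label=0)
    per_pixel_min = img.min(axis=2)
    dark = np.zeros_like(per_pixel_min)
    for lab in np.unique(labels):
        mask = labels == lab
        dark[mask] = per_pixel_min[mask].min()
    return dark, labels

def estimate_atmospheric_light(img, dark, top_fraction=0.01):
    """Average the pixels with the largest dark-channel values."""
    flat = dark.ravel()
    n_top = max(1, int(top_fraction * flat.size))
    idx = np.argpartition(flat, -n_top)[-n_top:]
    return img.reshape(-1, 3)[idx].mean(axis=0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    hazy = rng.random((120, 160, 3))          # stand-in for a real hazy image
    dark, _ = segment_dark_channel(hazy)
    print("atmospheric light estimate:", estimate_atmospheric_light(hazy, dark))
```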

  • No-Reference Quality Metric of Blocking Artifacts Based on Color Discontinuity Analysis

    Leida LI, Hancheng ZHU, Jiansheng QIAN, Jeng-Shyang PAN

    LETTER-Image Processing and Video Processing

    Vol: E97-D No:4  Page(s): 993-997

    This letter presents a no-reference blocking artifact measure based on an analysis of color discontinuities in the YUV color space. Color shift and color disappearance are first analyzed in JPEG images. For color-shifting and color-disappearing areas, blocking artifact scores are obtained by computing the gradient differences across block boundaries in the U and Y components, respectively. An overall quality score is then produced as the average of the local scores. Extensive simulations and comparisons demonstrate the effectiveness of the proposed method.
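    A bare-bones version of the boundary-gradient idea is sketched here for the Y component only: the score compares gradients across assumed 8x8 block boundaries with gradients inside blocks. The 8x8 grid and the normalization are assumptions, and the letter's color shift/disappearance analysis on the U component is not included.

```python
# Sketch: no-reference blockiness score from gradient differences across
# assumed 8x8 block boundaries of the luma (Y) channel. Normalization by
# in-block gradients is an illustrative choice, not the letter's exact metric.
import numpy as np

def blockiness_score(y, block=8):
    """y: 2-D luma image (float). Higher score = stronger blocking."""
    dh = np.abs(np.diff(y, axis=1))             # horizontal gradients
    dv = np.abs(np.diff(y, axis=0))             # vertical gradients
    cols = np.arange(dh.shape[1])
    rows = np.arange(dv.shape[0])
    h_boundary = dh[:, cols % block == block - 1].mean()
    h_inside = dh[:, cols % block != block - 1].mean()
    v_boundary = dv[rows % block == block - 1, :].mean()
    v_inside = dv[rows % block != block - 1, :].mean()
    eps = 1e-6
    return 0.5 * (h_boundary / (h_inside + eps) + v_boundary / (v_inside + eps))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    smooth = rng.random((64, 64)) * 5 + 128
    blocky = smooth + np.kron(rng.integers(-10, 10, (8, 8)), np.ones((8, 8)))
    print("smooth:", round(blockiness_score(smooth), 2),
          "blocky:", round(blockiness_score(blocky), 2))
```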

  • A Jointly Optimized Predictive-Adaptive Partitioned Block Transform for Video Coding

    Di WU, Xiaohai HE

    PAPER-Image Processing

    Vol: E96-A No:11  Page(s): 2161-2168

    In this paper, we propose a jointly optimized predictive-adaptive partitioned block transform to exploit the spatial characteristics of intra residuals and improve video coding performance. Under the assumptions of traditional Markov representations, the asymmetric discrete sine transform (ADST) can be combined with the discrete cosine transform (DCT) for video coding. In comparison, the interpolative Markov representation has a lower mean-square error for images or regions with relatively high contrast, and is insensitive to changes in image statistics. Hence, we derive an even discrete sine transform (EDST) from the interpolative Markov model and use a coding scheme that switches between EDST and DCT depending on the prediction direction and boundary information. To obtain a multiplier-free implementation, we also propose an orthogonal 4-point integer EDST that consists solely of additions and bit-shifts. We implement our hybrid transform coding scheme within the H.264/AVC intra-mode framework. Experimental results show that the proposed scheme significantly outperforms standard DCT and ADST. It also greatly reduces the blocking artifacts typically observed around block edges, because the new transform is better adapted to the characteristics of intra-prediction residuals.

  • A Spatially Adaptive Gradient-Projection Algorithm to Remove Coding Artifacts of H.264

    Kwon-Yul CHOI, Min-Cheol HONG

    PAPER-Image Processing and Video Processing

    Vol: E94-D No:5  Page(s): 1073-1081

    In this paper, we propose a spatially adaptive gradient-projection algorithm for the H.264 video coding standard that removes coding artifacts using local statistics. A hybrid method combining a new weighted constrained least squares (WCLS) approach and the projection onto convex sets (POCS) approach is introduced, where the weighting components are determined on the basis of the human visual system (HVS) and the projection set is defined by the difference between adjacent pixels and the quantization index (QI). A new visual function is defined to determine the weighting matrices controlling the degree of global smoothness, and the projection set is used to obtain a solution satisfying local smoothing constraints, so that coding artifacts such as blocking and ringing can be removed simultaneously. The experimental results show the capability and efficiency of the proposed algorithm.
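    As a small illustration of the POCS component, the sketch below projects an image onto a convex set that bounds the luminance difference between adjacent pixels across block boundaries by a quantization-derived limit. The bound, the 8x8 grid, and the handling of only vertical boundaries are assumptions; the WCLS/HVS weighting part of the hybrid method is not shown.

```python
# Sketch: one POCS-style projection that clips luminance differences across
# assumed 8x8 block boundaries to a bound derived from the quantization step.
# The bound and grid are illustrative; the paper's WCLS/HVS weighting is omitted.
import numpy as np

def project_block_boundaries(img, qstep, block=8):
    """Clip horizontal differences across vertical block boundaries to +/- qstep/2."""
    out = img.astype(np.float64).copy()
    h, w = out.shape
    bound = qstep / 2.0
    for j in range(block - 1, w - 1, block):      # columns just left of a boundary
        diff = out[:, j + 1] - out[:, j]
        excess = np.clip(diff, -bound, bound) - diff
        # Split the correction between the two boundary pixels.
        out[:, j] -= excess / 2.0
        out[:, j + 1] += excess / 2.0
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.kron(rng.integers(0, 255, (8, 8)).astype(float), np.ones((8, 8)))
    smoothed = project_block_boundaries(img, qstep=20)
    j = 7                                          # first vertical boundary
    print(np.abs(img[:, j + 1] - img[:, j]).max(),
          np.abs(smoothed[:, j + 1] - smoothed[:, j]).max())
```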

  • Linearity Improvement of Mosquito Noise Level Estimation from Decoded Picture

    Naoya SAGARA, Yousuke KASHIMURA, Kenji SUGIYAMA

    LETTER-Evaluation

    Vol: E94-A No:2  Page(s): 548-551

    DCT encoding of images introduces block artifacts and mosquito noise in the decoded pictures. We have proposed an estimation method that determines mosquito noise blocks and their level; however, this technique lacks sufficient linearity. To improve its performance, we use sub-divided blocks to suppress edge effects. The resulting estimates are mostly linear with respect to the quantization.

  • Improvement of Ringing Artifact Reduction Using a K-Means Method for Color Moving Pictures

    Wonwoo JANG, Hagyong HAN, Wontae CHOI, Gidong LEE, Bongsoon KANG

    LETTER-Image

    Vol: E93-A No:1  Page(s): 348-353

    This paper proposes an improved method that uses K-means clustering to effectively reduce ringing artifacts in color moving pictures. To apply the improved K-means method, we set the number of groups to two (K=2) in the three-dimensional R, G, B color space. We then adjust the R, G, B values of the pixels by moving the current R, G, B value of each pixel to the calculated center values, which reduces the ringing artifacts. The results were verified by calculating the overshoot and the slope of the luminance around the edges of test images processed by the new algorithm, and comparing them with the overshoot and luminance slope of the unprocessed images.
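    A toy version of the K=2 clustering step can be sketched as follows: pixels in a window around strong edges are clustered into two groups in RGB space and moved to their cluster centers. The window size, the assumption that processing is applied locally around edges, and the use of a known edge mask are illustrative choices, not necessarily the paper's procedure.

```python
# Sketch: reduce ringing around edges by clustering local RGB values into
# K=2 groups and snapping each pixel to its cluster center. Window size,
# the edge mask, and full replacement are illustrative choices.
import numpy as np
from sklearn.cluster import KMeans

def kmeans_deringing(img, edge_mask, half=4):
    out = img.astype(np.float64).copy()
    h, w, _ = img.shape
    for y, x in zip(*np.nonzero(edge_mask)):
        y0, y1 = max(0, y - half), min(h, y + half + 1)
        x0, x1 = max(0, x - half), min(w, x + half + 1)
        patch = out[y0:y1, x0:x1].reshape(-1, 3)
        km = KMeans(n_clusters=2, n_init=3, random_state=0).fit(patch)
        out[y0:y1, x0:x1] = km.cluster_centers_[km.labels_].reshape(
            y1 - y0, x1 - x0, 3)
    return np.clip(out, 0, 255).astype(np.uint8)

if __name__ == "__main__":
    # Toy image: a dark/bright step edge with ringing-like noise near it.
    rng = np.random.default_rng(0)
    img = np.zeros((32, 32, 3), np.uint8)
    img[:, 16:] = 200
    img[:, 14:18] = np.clip(img[:, 14:18].astype(int)
                            + rng.integers(-40, 40, (32, 4, 3)), 0, 255)
    edges = np.zeros((32, 32), bool)
    edges[:, 15:17] = True            # assume the edge location is known
    print(kmeans_deringing(img, edges).shape)
```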

  • Estimation of Mosquito Noise Level from Decoded Picture

    Kenji SUGIYAMA, Naoya SAGARA, Yohei KASHIMURA

    PAPER-Evaluation

    Vol: E92-A No:12  Page(s): 3291-3296

    With DCT coding, block artifacts and mosquito noise appear in decoded pictures. Controlling post filtering is important for reducing these degradations without causing side effects. Decoding information is useful if the filter is inside or close to the encoder; however, it is difficult to use for controlling independent post filtering, such as in a display. In this case, control requires estimating the artifact from the decoded picture alone. In this work, we describe an estimation method that determines mosquito noise blocks and their level. In this method, the ratio of spatial activity is taken between a mosquito block and a neighboring flat block. We test the proposed method using reconstructed pictures coded with different quantization scales and find that the results are mostly reasonable across the different quantizations.
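    A simplified rendering of the activity-ratio idea is given below: per-8x8-block spatial activity is measured (here as the mean absolute Laplacian), and each block is scored by the ratio of its activity to that of its flattest neighboring block. The activity measure and neighborhood rule are assumptions; the paper's block classification details are not reproduced.

```python
# Sketch: per-8x8-block spatial activity (mean |Laplacian|) and a mosquito
# score = activity ratio between a block and its flattest 8-neighbor.
# The activity measure and neighbor rule are illustrative assumptions.
import numpy as np
from scipy.ndimage import laplace

def block_activity(img, block=8):
    act = np.abs(laplace(img.astype(np.float64)))
    h, w = act.shape
    hb, wb = h // block, w // block
    return act[:hb * block, :wb * block].reshape(hb, block, wb, block).mean(axis=(1, 3))

def mosquito_scores(img, block=8, eps=1e-6):
    A = block_activity(img, block)
    padded = np.pad(A, 1, mode="edge")
    neighbors = np.stack([padded[1 + dy:1 + dy + A.shape[0],
                                 1 + dx:1 + dx + A.shape[1]]
                          for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                          if not (dy == 0 and dx == 0)])
    flattest = neighbors.min(axis=0)          # flattest neighboring block
    return A / (flattest + eps)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = np.full((64, 64), 128.0)
    img[24:32, 24:32] += rng.normal(0, 10, (8, 8))   # one "noisy" block
    scores = mosquito_scores(img)
    print(np.unravel_index(scores.argmax(), scores.shape), scores.max().round(1))
```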

  • Design of a Deblocking Filter for Both Objective and Subjective Metrics

    Ying-Wen CHANG, Yen-Yu CHEN

    LETTER

    Vol: E91-A No:8  Page(s): 2038-2040

    Blocking artifacts are a major limitation of DCT-based codecs at low bit rates, and this degradation is likely to influence the judgment of the end user. This work presents a powerful post-processing filter in the DCT frequency domain. The proposed algorithm adopts a shift block spanning four adjacent DCT blocks to reduce computational complexity. The artifacts resulting from the quantization and de-quantization process are eliminated by slightly modifying several DCT coefficients in the shift block. Simulation results indicate that the proposed method produces the best image quality in terms of both objective and subjective metrics.

  • Natural Object/Artifact Image Classification Based on Line Features

    Johji TAJIMA, Hironori KONO

    LETTER-Image Recognition, Computer Vision

    Vol: E91-D No:8  Page(s): 2207-2211

    Three features for classifying images into natural objects and artifacts are investigated: 'line length ratio,' 'line direction distribution,' and 'edge coverage.' According to experimental results on digital camera images, the 'line length ratio' feature shows superior classification accuracy (above 90%), exceeding the performance of conventional features. Since the development of this feature was motivated by the fact that the magnitude of edge sharpening in image-quality improvement must be controlled based on image content, the classification algorithm should be especially suitable for image-quality improvement applications.

  • The Development of BCI Using Alpha Waves for Controlling the Robot Arm

    Shinsuke INOUE, Yoko AKIYAMA, Yoshinobu IZUMI, Shigehiro NISHIJIMA

    PAPER

    Vol: E91-B No:7  Page(s): 2125-2132

    A highly accurate BCI using alpha waves was developed for controlling a robot arm, and real-time operation was achieved using noninvasive electrodes. The significant components of the alpha wave were identified by spectral analysis and by confirming the amplitude of the alpha wave. When an alpha wave was observed, the subject was instructed to select among multiple decision branches corresponding to seven motions (including "STOP") of the robot arm. As a result, high accuracy (70-95%) was obtained, and the subject succeeded in transferring a small box by controlling the robot arm. Since high accuracy was obtained with this method, it can be applied to the control of equipment such as a robot arm. Because alpha waves are easy to generate, a BCI using alpha waves does not require more training than BCIs using other signals. Moreover, we reduced false positive errors by detecting artifacts using spectral analysis and by detecting signals of 50 µV or more; as a result, the false positive error rate was reduced from 25% to 0%. This technique therefore shows great promise for communication and for the control of other external equipment, and will contribute greatly to improving the Quality of Life (QOL) of people with mobility impairments.
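    The detection logic described above (spectral confirmation of alpha activity plus a 50 µV amplitude check to reject artifacts) can be sketched roughly as follows. The band limits, dominance threshold, and window length are illustrative assumptions, not the authors' calibrated values.

```python
# Sketch: decide whether an EEG window contains usable alpha activity.
# Reject windows whose peak amplitude reaches 50 uV (assumed artifact),
# then require the 8-13 Hz band to dominate the spectrum. Thresholds are
# illustrative, not the paper's calibrated values.
import numpy as np
from scipy.signal import welch

def alpha_detected(eeg_uv, fs=256, band=(8.0, 13.0),
                   artifact_uv=50.0, dominance=0.4):
    if np.max(np.abs(eeg_uv)) >= artifact_uv:        # artifact rejection
        return False
    f, pxx = welch(eeg_uv, fs=fs, nperseg=min(len(eeg_uv), 2 * fs))
    in_band = (f >= band[0]) & (f <= band[1])
    total = (f >= 1.0) & (f <= 40.0)                 # ignore DC and high-freq noise
    return pxx[in_band].sum() / pxx[total].sum() >= dominance

if __name__ == "__main__":
    fs = 256
    t = np.arange(0, 2, 1 / fs)
    rng = np.random.default_rng(0)
    alpha = 10 * np.sin(2 * np.pi * 10 * t) + 2 * rng.standard_normal(len(t))
    blink = alpha + 80 * np.exp(-((t - 1.0) ** 2) / 0.01)   # pseudo eye-blink
    print(alpha_detected(alpha, fs), alpha_detected(blink, fs))
```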
