Isana FUNAHASHI Taichi YOSHIDA Xi ZHANG Masahiro IWAHASHI
In this paper, we propose an image adjustment method for multi-exposure images based on convolutional neural networks (CNNs). We refer to image regions that lack information due to saturation or object motion in multi-exposure images as lacking areas. Lacking areas cause ghosting artifacts in images fused from sets of multi-exposure images, even with conventional fusion methods that attempt to suppress the artifact. To avoid this problem, the proposed method estimates the information of lacking areas via adaptive inpainting. The proposed CNN consists of three networks: a warp-and-refinement network, a detection network, and an inpainting network. The second and third networks detect lacking areas and estimate their pixel values, respectively. In the experiments, a simple fusion method combined with the proposed method outperforms state-of-the-art fusion methods in peak signal-to-noise ratio. Moreover, applying the proposed method as pre-processing for various fusion methods clearly reduces artifacts.
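The detection/inpainting split described above can be illustrated with a minimal numpy sketch. This is not the paper's CNN; `detect_lacking` is a hypothetical stand-in that flags saturated pixels by thresholding, and `fill_lacking` shows only how a binary mask selects between original and inpainted content.

```python
import numpy as np

def detect_lacking(image, low=0.02, high=0.98):
    # Toy stand-in for the detection network: flag saturated pixels.
    return ((image <= low) | (image >= high)).astype(float)

def fill_lacking(image, inpainted, mask):
    # Keep original content where valid; substitute the inpainting
    # network's estimate inside the detected lacking areas (mask == 1).
    return mask * inpainted + (1.0 - mask) * image

image = np.array([[0.0, 0.5], [1.0, 0.3]])   # 0.0 / 1.0 are saturated
inpainted = np.full((2, 2), 0.4)             # pretend network output
mask = detect_lacking(image)
adjusted = fill_lacking(image, inpainted, mask)
```

In the actual method the mask and the fill values come from trained networks; the masking arithmetic is the only part shown here.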
Daisuke KANEMOTO Shun KATSUMATA Masao AIHARA Makoto OHKI
This paper proposes a novel compressed sensing (CS) framework for reconstructing electroencephalogram (EEG) signals. A feature of this framework is the application of independent component analysis (ICA) to remove interference from artifacts after undersampling, in a data processing unit; the ICA processing block can therefore be removed from the sensing unit. In this framework, we used a random undersampling measurement matrix to suppress Gaussian noise. The developed framework, in which the discrete cosine transform basis and orthogonal matching pursuit were used, was evaluated using raw EEG signals with a pseudo-model of an eye-blink artifact. The normalized mean square error (NMSE) and correlation coefficient (CC), obtained as the average of 2,000 results, were compared to quantitatively demonstrate the effectiveness of the proposed framework. The evaluation results of the NMSE and CC showed that the proposed framework could remove the interference from the artifacts under a high compression ratio.
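The reconstruction stage named above (random undersampling, DCT basis, orthogonal matching pursuit) can be sketched as follows. This is a generic OMP recovery of a DCT-sparse toy signal, not the authors' EEG pipeline, and the sizes and sparsity level are arbitrary assumptions for illustration.

```python
import numpy as np

def dct_basis(n):
    # Orthonormal DCT-II basis; columns are the basis vectors.
    i = np.arange(n)[:, None]
    k = np.arange(n)[None, :]
    B = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    B[:, 0] /= np.sqrt(2.0)
    return B

def omp(A, y, sparsity):
    # Orthogonal matching pursuit: greedily pick the atom most correlated
    # with the residual, then re-fit the coefficients on the support.
    residual = y.astype(float)
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(sparsity):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

rng = np.random.default_rng(1)
n, m = 128, 64
B = dct_basis(n)
signal = 3.0 * B[:, 5] + 1.5 * B[:, 20]      # sparse in the DCT domain
rows = rng.choice(n, size=m, replace=False)  # random undersampling
y = signal[rows]                             # compressed measurements
coeffs = omp(B[rows, :], y, sparsity=2)
reconstructed = B @ coeffs
```

The artifact-removal step of the framework (ICA applied after undersampling) would operate on the measurements before this recovery; it is omitted here.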
The chopping technique up-modulates an amplifier's offset and low-frequency noise to its switching frequency, and can therefore achieve low offset and low temperature drift. On the other hand, it generates unwanted AC and DC errors due to switching artifacts such as up-modulated ripple and glitches. This paper summarizes various circuit techniques for reducing such switching artifacts, and then discusses the advantages and disadvantages of each technique. The comparison shows that newer designs with advanced circuit techniques can achieve lower DC and AC errors at higher chopping frequencies.
Chunting WAN Dongyi CHEN Juan YANG Miao HUANG
Real-time pulse rate (PR) monitoring based on photoplethysmography (PPG) has drawn much attention in recent years. However, a PPG signal acquired during movement is easily corrupted by random noise, especially motion artifacts (MA), which affect the accuracy of PR estimation. In this paper, a method is proposed that effectively combines wavelet threshold denoising with recursive least squares (RLS) adaptive filtering to remove interference, and uses a spectral peak tracking algorithm to estimate real-time PR. Furthermore, we propose a parallel-structure RLS adaptive filter to increase the amplitude of the spectral peak associated with PR. The method is evaluated using the PPG datasets of the 2015 IEEE Signal Processing Cup. Experimental results on the 12 training datasets recorded during subjects' walking or running show an average absolute error (AAE) of 1.08 beats per minute (BPM) and a standard deviation (SD) of 1.45 BPM. In addition, the AAE of PR on the 10 testing datasets recorded during fast running accompanied by wrist movements reaches 2.90 BPM. The results indicate that the proposed approach maintains high PR estimation accuracy even under strong MA.
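The core of the MA-removal step, RLS adaptive filtering against a motion reference, can be sketched in a few lines. This is a single plain RLS canceller (not the parallel structure or the wavelet stage), and the sampling rate, filter order, and synthetic coupling filter are illustrative assumptions.

```python
import numpy as np

def rls_cancel(desired, reference, order=8, lam=0.999):
    # RLS adaptive filter: estimate the part of `desired` predictable from
    # the motion `reference` and subtract it, leaving the pulse component.
    w = np.zeros(order)
    P = np.eye(order) * 1000.0            # inverse-correlation estimate
    x = np.zeros(order)                   # tapped delay line of the reference
    out = np.zeros(len(desired))
    for i in range(len(desired)):
        x = np.roll(x, 1)
        x[0] = reference[i]
        k = P @ x / (lam + x @ P @ x)     # gain vector
        e = desired[i] - w @ x            # a priori error = cleaned sample
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
        out[i] = e
    return out

fs = 125.0                                 # Hz, as in the SP Cup PPG data
t = np.arange(1000) / fs
pulse = np.sin(2 * np.pi * 1.8 * t)        # ~108 BPM pulse component
rng = np.random.default_rng(0)
accel = rng.standard_normal(1000)          # motion reference (accelerometer)
motion = np.convolve(accel, [0.8, -0.5, 0.3])[:1000]  # unknown coupling
ppg = pulse + motion                       # PPG corrupted by motion artifact
cleaned = rls_cancel(ppg, accel)
```

After convergence the a priori error tracks the pulse waveform; the spectral peak tracking described in the abstract would then run on `cleaned`.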
Jung-Been LEE Taek LEE Hoh Peter IN
Mining software artifacts is a useful way to understand the source code of software projects. Topic modeling in particular has been widely used to discover meaningful information from software artifacts. However, software artifacts are unstructured and contain a mix of textual types within the natural text, which worsens the performance of topic modeling. Among the natural language pre-processing tasks, removing stop words to reduce meaningless and uninteresting terms is an efficient way to improve the quality of topic models. Although many approaches are used to generate effective stop words, the resulting lists are outdated or too general to apply to mining software artifacts. In addition, the performance of a topic model is sensitive to the training datasets used by each approach. To resolve these problems, we propose an automatic stop word generation approach for topic models of software artifacts. Measuring topic coherence among words in a topic using Pointwise Mutual Information (PMI), we add words with a low PMI score to our stop word list in every topic modeling loop. Our experiment shows that the resulting stop word list yields higher topic model performance than lists from other approaches.
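The PMI-based filtering step can be sketched as follows: for each word in a topic, average its PMI with the other topic words over document-level co-occurrence, and flag low scorers as stop words. The toy corpus, the words, and the zero threshold are illustrative assumptions, not the paper's data.

```python
import math

def pmi(w1, w2, docs, eps=1e-12):
    # Pointwise mutual information from document-level co-occurrence.
    n = len(docs)
    p1 = sum(w1 in d for d in docs) / n
    p2 = sum(w2 in d for d in docs) / n
    p12 = sum((w1 in d) and (w2 in d) for d in docs) / n
    return math.log((p12 + eps) / (p1 * p2 + eps))

def low_coherence_words(topic_words, docs, threshold=0.0):
    # Words whose average PMI with the rest of the topic falls below the
    # threshold are added to the stop word list.
    stops = []
    for w in topic_words:
        others = [v for v in topic_words if v != w]
        score = sum(pmi(w, v, docs) for v in others) / len(others)
        if score < threshold:
            stops.append(w)
    return stops

docs = [  # each document as a set of terms
    {"parse", "token", "grammar", "stuff"},
    {"parse", "token", "grammar"},
    {"parse", "token"},
    {"merge", "branch", "stuff"},
    {"commit", "branch", "stuff"},
    {"commit", "merge"},
]
topic = ["parse", "token", "grammar", "stuff"]
stop_words = low_coherence_words(topic, docs)  # "stuff" co-occurs incoherently
```

In the described approach this filtering is repeated on every topic modeling loop, growing the stop word list iteratively.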
Yong-Uk YOON Yong-Jo AHN Donggyu SIM Jae-Gon KIM
In this letter, methods for padding the inactive regions of Segmented Sphere Projection (SSP) of 360 videos are proposed. A 360 video is projected onto a 2D plane to be coded with one of diverse projection formats. Some projection formats, such as SSP, have inactive regions in the converted 2D plane. The inactive regions may cause visual artifacts as well as a decrease in coding efficiency due to discontinuous boundaries between active and inactive regions. In this letter, to improve coding efficiency and reduce visual artifacts, the inactive regions are padded using two types of adjacent pixels, at either rectangular-face or circle-face boundaries. Padding the inactive regions with highly correlated adjacent pixels reduces the discontinuities between active and inactive regions. The experimental results show that, in terms of end-to-end Weighted to Spherically uniform PSNR (WS-PSNR), the proposed methods achieve a 0.3% BD-rate reduction over the existing padding method for SSP. In addition, the visual artifacts along the borders between discontinuous faces are noticeably reduced.
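The circle-face case can be sketched as follows: each pixel outside the circular active region is filled from a nearby active pixel found by shrinking its position radially toward the circle centre. This is a simplified illustration of boundary-pixel padding, not the letter's exact scheme, and the shrink-by-one-pixel margin is an assumption to keep the source sample strictly inside the circle.

```python
import numpy as np

def pad_circle_face(face, radius):
    # Fill pixels outside the circle (inactive region) with the value of a
    # nearby active pixel found along the ray toward the circle centre.
    h, w = face.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices(face.shape)
    dy, dx = ys - cy, xs - cx
    dist = np.hypot(dy, dx)
    inactive = dist > radius
    scale = (radius - 1.0) / np.maximum(dist, 1e-9)  # land safely inside
    src_y = np.clip(np.round(cy + dy * scale).astype(int), 0, h - 1)
    src_x = np.clip(np.round(cx + dx * scale).astype(int), 0, w - 1)
    out = face.copy()
    out[inactive] = face[src_y[inactive], src_x[inactive]]
    return out
```

Because every inactive pixel is filled from a highly correlated pixel near the circle boundary, the discontinuity at the active/inactive border disappears.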
Geun-Jun KIM Seungmin LEE Bongsoon KANG
Haze with various properties spreads widely across flat areas with depth continuities and corner areas with depth discontinuities. Removing haze from a single hazy image is difficult due to its ill-posed nature. To solve this problem, this study proposes a modified hybrid median filter that applies a median filter to preserve the edges of flat areas and a hybrid median filter to preserve corners at depth discontinuities. The recovered scene radiance, obtained by removing haze particles, restores image visibility using adaptive nonlinear curves for dynamic range expansion. Through comparative studies and quantitative evaluations, this study shows that the proposed method achieves similar or better results than other state-of-the-art methods.
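For reference, the classic 3x3 hybrid median filter that the proposed modification builds on can be sketched as below: the output is the median of the cross-neighbour median, the diagonal-neighbour median, and the centre pixel, which preserves corners better than a plain median. This is the textbook filter, not the paper's modified version.

```python
import numpy as np

def hybrid_median_3x3(img):
    # Hybrid median: median of (cross median, diagonal median, centre).
    pad = np.pad(img, 1, mode="edge")
    h, w = img.shape
    out = np.empty_like(img, dtype=float)
    for y in range(h):
        for x in range(w):
            win = pad[y:y + 3, x:x + 3]
            cross = np.median([win[0, 1], win[1, 0], win[1, 1],
                               win[1, 2], win[2, 1]])
            diag = np.median([win[0, 0], win[0, 2], win[1, 1],
                              win[2, 0], win[2, 2]])
            out[y, x] = np.median([cross, diag, win[1, 1]])
    return out
```

Like a plain median it removes impulse noise, but the separate cross/diagonal sub-medians keep right-angle corners from being rounded off.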
Takafumi TANAKA Hiroaki HASHIURA Atsuo HAZEYAMA Seiichi KOMIYA Yuki HIRAI Keiichi KANEKO
Conceptual data modeling is an important activity in database design, but it is difficult for novice learners to master its skills. In conceptual data modeling, learners are required to detect and correct errors in their artifacts by themselves, because modeling tools do not assist these activities. We call such activities self checking, which is also an important process. Previous research, however, did not focus on self checking or on collecting data about it. Collecting such data is difficult because self checking is an internal activity and self checks are not usually expressed. Therefore, we developed a method that helps learners express their self checks by reflecting on their artifact-making processes, and a system, KIfU3, which implements this method. We conducted an evaluation experiment and showed the effectiveness of the method. From the experimental results, we found that (1) novice learners conduct self checks during their conceptual data modeling tasks; (2) it is difficult for them to detect errors in their artifacts; (3) they cannot necessarily correct the errors even when they identify them; and (4) there is no relationship between the number of self checks by a learner and the quality of the learner's artifacts.
New deblocking (blocking artifact reduction) algorithms based on a nonlinear adaptive soft-threshold anisotropic filter in the wavelet domain are proposed. Our deblocking algorithm uses a soft threshold, adaptive wavelet direction, an adaptive anisotropic filter, and estimation. The novelties of this paper are an adaptive soft threshold for deblocking and an optimal intersection of confidence intervals (OICI) method for deblocking artifact estimation. The soft-threshold values adapt to the different thresholds of flat areas, texture areas, and blocking artifacts. The OICI is a reconstruction technique for the estimated deblocking artifact that improves the quality of the estimate and reduces the execution time of the estimation compared to other methods. Our adaptive OICI method outperforms other adaptive deblocking methods: the estimated deblocking artifact algorithms achieve up to 98% MSE improvement, up to 89% RMSE improvement, and up to 99% MAE improvement, and the computational time of the deblocking artifact estimation is reduced by up to 77.98% compared to other methods. We also evaluated shift-and-add algorithms using Euler++ (E++) and fourth-order Runge-Kutta++ (RK4++) algorithms, which iterate one step of an ordinary differential equation integration method. Experimental results showed that our E++ and RK4++ algorithms reduce computational time in terms of shifts and adds, and that the RK4++ algorithm is superior to the E++ algorithm.
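The soft-thresholding operation with region-dependent thresholds can be sketched as follows. This shows only the generic shrinkage rule with a per-region threshold lookup; the region classification itself (flat / texture / blocking artifact) and the wavelet decomposition are assumed to be done elsewhere, and the threshold values are illustrative.

```python
import numpy as np

def soft_threshold(coeffs, t):
    # Soft shrinkage: pull wavelet coefficients toward zero by t,
    # setting coefficients with magnitude below t to exactly zero.
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - t, 0.0)

def adaptive_soft_threshold(coeffs, region, thresholds):
    # Per-region thresholds, e.g. index 0 = flat, 1 = texture, 2 = blocking.
    t = np.asarray(thresholds, dtype=float)[region]
    return soft_threshold(coeffs, t)

coeffs = np.array([3.0, -2.0, 0.5, -0.2])
region = np.array([0, 1, 2, 0])               # hypothetical region labels
shrunk = adaptive_soft_threshold(coeffs, region, [0.1, 1.5, 0.4])
```

A larger threshold in blocking-artifact regions suppresses the artifact energy harder than in texture regions, which is the point of making the threshold adaptive.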
Isao NAMBU Takahiro IMAI Shota SAITO Takanori SATO Yasuhiro WADA
Functional near-infrared spectroscopy (fNIRS) is a noninvasive neuroimaging technique suitable for measurement during motor learning. However, the effects of contamination by systemic artifacts derived from the scalp layer on learning-related fNIRS signals remain unclear. Here we used fNIRS to measure activity of sensorimotor regions while participants performed a visuomotor task. A comparison of results using a general linear model with and without systemic artifact removal shows that removal can improve the detection of learning-related activity in sensorimotor regions, suggesting the importance of removing systemic artifacts when examining learning-related cerebral activity.
In this letter, we propose an improved single-image haze removal algorithm using image segmentation. It effectively resolves two common problems of conventional algorithms based on the dark channel prior: halo artifacts and incorrect estimation of the atmospheric light. The process flow of our algorithm is as follows. First, the input hazy image is over-segmented. The segmentation results are then used to improve the conventional dark channel computation, which uses fixed local patches, and to accurately estimate the atmospheric light. Finally, an accurate transmission map is computed from the improved dark channel and atmospheric light, allowing us to recover a high-quality haze-free image.
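The two segmentation-guided steps can be sketched in numpy: the dark channel is taken as the channel-wise minimum followed by a minimum over each segment (rather than over a fixed square patch), and the atmospheric light is the mean colour of the brightest dark-channel pixels. This is a simplified illustration assuming segment labels are already available; the over-segmentation itself is not shown.

```python
import numpy as np

def dark_channel_segmented(img, labels):
    # Dark channel per segment: min over colour channels, then min within
    # each segment instead of within a fixed local patch.
    per_pixel_min = img.min(axis=2)
    dark = np.empty_like(per_pixel_min)
    for seg in np.unique(labels):
        mask = labels == seg
        dark[mask] = per_pixel_min[mask].min()
    return dark

def atmospheric_light(img, dark, top_frac=0.1):
    # Atmospheric light: mean colour of the brightest dark-channel pixels.
    flat = dark.ravel()
    n = max(1, int(flat.size * top_frac))
    idx = np.argsort(flat)[-n:]
    return img.reshape(-1, 3)[idx].mean(axis=0)
```

Because segment boundaries follow object boundaries, the per-segment minimum avoids the halo that fixed patches produce when they straddle a depth edge.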
Leida LI Hancheng ZHU Jiansheng QIAN Jeng-Shyang PAN
This letter presents a no-reference blocking artifact measure based on an analysis of color discontinuities in the YUV color space. Color shift and color disappearance are first analyzed in JPEG images. For color-shifting and color-disappearing areas, the blocking artifact scores are obtained by computing the gradient differences across the block boundaries in the U component and the Y component, respectively. An overall quality score is then produced as the average of the local scores. Extensive simulations and comparisons demonstrate the effectiveness of the proposed method.
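The idea of scoring blockiness from gradient differences at block boundaries can be sketched on a single channel. This is a generic boundary-versus-interior gradient ratio for vertical 8-pixel boundaries, not the letter's exact U/Y formulation.

```python
import numpy as np

def blockiness(channel, block=8):
    # Ratio of the mean absolute gradient across block boundaries to the
    # mean absolute gradient inside blocks (vertical boundaries shown).
    diff = np.abs(np.diff(channel.astype(float), axis=1))
    cols = np.arange(diff.shape[1])
    at_boundary = (cols % block) == (block - 1)
    boundary = diff[:, at_boundary].mean()
    interior = diff[:, ~at_boundary].mean()
    return boundary / (interior + 1e-9)

blocky = np.tile(np.arange(32) // 8 * 10.0, (8, 1))   # steps at 8-pixel grid
smooth = np.tile(np.arange(32, dtype=float), (8, 1))  # uniform ramp
```

A score near 1 means boundary gradients look like interior gradients (no blocking); a large score flags discontinuities aligned to the 8x8 JPEG grid.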
In this paper, we propose a jointly optimized predictive-adaptive partitioned block transform to exploit the spatial characteristics of intra residuals and improve video coding performance. Under the assumptions of traditional Markov representations, the asymmetric discrete sine transform (ADST) can be combined with a discrete cosine transform (DCT) for video coding. In comparison, the interpolative Markov representation has a lower mean-square error for images or regions that have relatively high contrast, and is insensitive to changes in image statistics. Hence, we derive an even discrete sine transform (EDST) from the interpolative Markov model, and use a coding scheme to switch between EDST and DCT, depending on the prediction direction and boundary information. To obtain an implementation independent of multipliers, we also propose an orthogonal 4-point integer EDST, which consists solely of adds and bit-shifts. We implement our hybrid transform coding scheme within the H.264/AVC intra-mode framework. Experimental results show that the proposed scheme significantly outperforms standard DCT and ADST. It also greatly reduces the blocking artifacts typically observed around block edges, because the new transform is more adaptable to the characteristics of intra-prediction residuals.
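The DCT/sine-transform switching described above can be sketched as follows. As a stand-in for the paper's EDST (whose exact definition and integer approximation are not reproduced here), this sketch uses the orthonormal DST-I; the switching rule, choosing the sine transform along a direction that was intra-predicted and the DCT otherwise, is the part being illustrated.

```python
import numpy as np

def dct_mat(n):
    # Orthonormal DCT-II matrix (rows are basis functions).
    k = np.arange(n)[:, None]
    i = np.arange(n)[None, :]
    M = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * i + 1) * k / (2 * n))
    M[0] /= np.sqrt(2.0)
    return M

def dst_mat(n):
    # Orthonormal DST-I matrix, standing in for the paper's EDST.
    k = np.arange(1, n + 1)[:, None]
    i = np.arange(1, n + 1)[None, :]
    return np.sqrt(2.0 / (n + 1)) * np.sin(np.pi * k * i / (n + 1))

def hybrid_transform(block, pred_from_top, pred_from_left):
    # Use the sine transform along a predicted direction (residual energy
    # grows away from the predictor boundary), the DCT otherwise.
    n = block.shape[0]
    Tv = dst_mat(n) if pred_from_top else dct_mat(n)
    Th = dst_mat(n) if pred_from_left else dct_mat(n)
    return Tv @ block @ Th.T
```

Both matrices are orthonormal, so the inverse is just the transpose, mirroring how the decoder undoes whichever transform pair the prediction mode selected.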
In this paper, we propose a spatially adaptive gradient-projection algorithm for the H.264 video coding standard to remove coding artifacts using local statistics. A hybrid method combining a new weighted constrained least squares (WCLS) approach and the projection onto convex sets (POCS) approach is introduced, where the weighting components are determined on the basis of the human visual system (HVS) and the projection set is defined by the difference between adjacent pixels and the quantization index (QI). A new visual function is defined to determine the weighting matrices controlling the degree of global smoothness, and a projection set is used to obtain a solution satisfying local smoothing constraints, so that coding artifacts such as blocking and ringing can be removed simultaneously. The experimental results show the capability and efficiency of the proposed algorithm.
Naoya SAGARA Yousuke KASHIMURA Kenji SUGIYAMA
DCT encoding of images leads to block artifact and mosquito noise degradations in the decoded pictures. We propose an estimation method to determine the mosquito noise block and its level; however, the basic technique lacks sufficient linearity. To improve its performance, we use sub-divided blocks for edge-effect suppression. The resulting estimates are mostly linear with respect to the quantization.
Wonwoo JANG Hagyong HAN Wontae CHOI Gidong LEE Bongsoon KANG
This paper proposes an improved method that uses K-means clustering to effectively reduce the ringing artifacts in a color moving picture. To apply this method, we set the number of clusters to two (K=2) in the three-dimensional R, G, B color space. We then adjust the R, G, B value of every pixel by moving it to the computed cluster center, which reduces the ringing artifacts. The results were verified by calculating the overshoot and the slope of the luminance around the edges of test images processed by the new algorithm, and comparing them with the overshoot and luminance slope of the unprocessed images.
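The K=2 clustering step can be sketched as follows: cluster the RGB pixels of an edge neighbourhood into two groups and snap each pixel to its cluster centre, flattening the over/undershoot oscillations on both sides of the edge. The deterministic darkest/brightest initialization is an assumption made here so the sketch is reproducible; it is not taken from the paper.

```python
import numpy as np

def kmeans2_rgb(pixels, iters=20):
    # K-means with K=2 in R,G,B space; initialized from the darkest and
    # brightest pixels for a deterministic result (illustrative choice).
    centers = pixels[[pixels.sum(1).argmin(), pixels.sum(1).argmax()]].astype(float)
    labels = np.zeros(len(pixels), dtype=int)
    for _ in range(iters):
        d = np.linalg.norm(pixels[:, None, :] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        for k in (0, 1):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    return labels, centers

def suppress_ringing(patch):
    # Move each pixel to its cluster centre, flattening ringing
    # oscillations on either side of an edge.
    pixels = patch.reshape(-1, 3).astype(float)
    labels, centers = kmeans2_rgb(pixels)
    return centers[labels].reshape(patch.shape)
```

After processing, the patch contains at most two colours, so the luminance overshoot around the edge, the quantity the paper measures, is gone.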
Kenji SUGIYAMA Naoya SAGARA Yohei KASHIMURA
With DCT coding, block artifact and mosquito noise degradations appear in decoded pictures. Controlling post filtering is important to reduce these degradations without causing side effects. Decoding information is useful if the filter is inside or close to the encoder; however, it is difficult to exploit with independent post filtering, such as in a display. In this case, control requires estimating the artifact from the decoded picture alone. In this work, we describe an estimation method that determines the mosquito noise block and its level. In this method, the ratio of spatial activity is taken between the mosquito block and the neighboring flat block. We test the proposed method using reconstructed pictures coded with different quantization scales, and find the results mostly reasonable across the different quantizations.
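The activity-ratio idea can be sketched as follows: measure spatial activity per 8x8 block as the mean absolute gradient, then rate each block by its activity relative to its flattest 4-neighbour. The exact activity measure and neighbour selection here are illustrative assumptions, not the paper's definitions.

```python
import numpy as np

def block_activity(img, bs=8):
    # Mean absolute gradient inside each block as a spatial-activity measure.
    h, w = img.shape
    acts = np.zeros((h // bs, w // bs))
    for by in range(h // bs):
        for bx in range(w // bs):
            b = img[by * bs:(by + 1) * bs, bx * bs:(bx + 1) * bs].astype(float)
            acts[by, bx] = (np.abs(np.diff(b, axis=0)).mean() +
                            np.abs(np.diff(b, axis=1)).mean())
    return acts

def mosquito_level(acts):
    # Ratio of each block's activity to its flattest 4-neighbour:
    # high activity next to a flat block suggests mosquito noise.
    padded = np.pad(acts, 1, mode="edge")
    neigh = np.minimum.reduce([padded[:-2, 1:-1], padded[2:, 1:-1],
                               padded[1:-1, :-2], padded[1:-1, 2:]])
    return acts / (neigh + 1e-6)
```

A busy block adjacent to a flat block scores far higher than a uniformly textured area, which is the signature of mosquito noise around edges.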
Blocking artifacts are a major limitation of DCT-based codecs at low bit rates, and this degradation is likely to influence the judgment of the end user. This work presents a powerful post-processing filter in the DCT frequency domain. The proposed algorithm adopts a shift block within four adjacent DCT blocks to reduce computational complexity. The artifacts resulting from the quantization and de-quantization process are eliminated by slightly modifying several DCT coefficients in the shift block. Simulation results indicate that the proposed method produces the best image quality in terms of both objective and subjective metrics.
Three features for classifying images into natural objects and artifacts are investigated: 'line length ratio', 'line direction distribution', and 'edge coverage'. Among the three, the 'line length ratio' feature shows superior classification accuracy (above 90%), exceeding the performance of conventional features in experiments on digital camera images. As the development of this feature was motivated by the fact that the edge-sharpening magnitude in image quality improvement must be controlled based on image content, the classification algorithm should be especially suitable for image quality improvement applications.
Shinsuke INOUE Yoko AKIYAMA Yoshinobu IZUMI Shigehiro NISHIJIMA
A highly accurate brain-computer interface (BCI) using alpha waves was developed for controlling a robot arm, and real-time operation was achieved using noninvasive electrodes. The significant components of the alpha wave were identified by spectral analysis and confirmation of the alpha wave amplitude. When an alpha wave was observed, the subjects were instructed to select among multiple decision branches covering seven motions (including "STOP") of a robot arm. As a result, high accuracy (70-95%) was obtained, and the subjects succeeded in transferring a small box by controlling the robot arm. Since high accuracy was obtained with this method, it can be applied to controlling equipment such as a robot arm. Since the alpha wave can be generated easily, a BCI using alpha waves does not require more training than one using other signals. Moreover, we tried to reduce false positive errors by detecting artifacts using spectral analysis and by detecting signals of 50 µV or more; as a result, the false positive rate was reduced from 25% to 0%. This technique therefore shows great promise for communication and the control of other external equipment, and will contribute greatly to improving the quality of life (QOL) of people with mobility impairments.