
Keyword Search Result

[Keyword] SI (16314 hits)

Showing results 2381-2400 of 16314

  • An Extreme Learning Machine Architecture Based on Volterra Filtering and PCA

    Li CHEN  Ling YANG  Juan DU  Chao SUN  Shenglei DU  Haipeng XI  

     
    PAPER-Information Network

    Publicized: 2017/08/02
    Vol: E100-D No:11
    Page(s): 2690-2701

    Extreme learning machine (ELM) has recently attracted many researchers' interest due to its very fast learning speed, good generalization ability, and ease of implementation. However, its linear output layer may limit the capability of exploring the available information, since higher-order statistics of the signals are not taken into account. To address this, we propose a novel ELM architecture in which the linear output layer is replaced by a Volterra filter structure. Additionally, the principal component analysis (PCA) technique is used to reduce the number of effective signals transmitted to the output layer. This idea not only improves the processing capability of the network but also preserves the simplicity of the training process. We then evaluate the proposed architecture on supervised classification and unsupervised equalization tasks. The results, obtained on publicly available datasets and various channels and compared against previously proposed ELM versions and a state-of-the-art algorithm, the support vector machine (SVM), highlight the adequacy and advantages of the proposed architecture and characterize it as a promising tool for signal processing tasks.
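
    The following is a minimal, self-contained sketch of the idea described above: a standard ELM hidden layer, PCA on the hidden activations, and a second-order Volterra (polynomial) expansion solved in closed form. All names and hyperparameters (elm_volterra_fit, n_hidden, n_components) are illustrative assumptions, not the authors' implementation.

```python
# Sketch: ELM whose linear output layer is replaced by a second-order
# Volterra expansion of PCA-reduced hidden activations (illustrative only).
import numpy as np

def elm_volterra_fit(X, y, n_hidden=100, n_components=10, rng=None):
    rng = np.random.default_rng(rng)
    W = rng.standard_normal((X.shape[1], n_hidden))   # random input weights
    b = rng.standard_normal(n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                            # hidden activations

    # PCA: keep the leading principal components of the hidden outputs
    H_mean = H.mean(axis=0)
    _, _, Vt = np.linalg.svd(H - H_mean, full_matrices=False)
    P = Vt[:n_components].T                           # projection matrix
    Z = (H - H_mean) @ P

    # Second-order Volterra expansion: linear terms + pairwise products
    iu = np.triu_indices(n_components)
    Phi = np.hstack([Z, Z[:, iu[0]] * Z[:, iu[1]]])

    beta, *_ = np.linalg.lstsq(Phi, y, rcond=None)    # closed-form solve
    return W, b, H_mean, P, iu, beta
```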

  • Detail Preserving Mixed Noise Removal by DWM Filter and BM3D

    Takuro YAMAGUCHI  Aiko SUZUKI  Masaaki IKEHARA  

     
    PAPER-Image

    Vol: E100-A No:11
    Page(s): 2451-2457

    Mixed noise removal is a major problem in image processing. Different noises have different properties, and each requires an appropriate removal method; removing mixed noise therefore requires combining removal algorithms for each contained noise. We aim at removing mixed noise composed of Additive White Gaussian Noise (AWGN) and Random-Valued Impulse Noise (RVIN). Many conventional methods cannot remove this mixed noise effectively and may lose image details. In this paper, we propose a new mixed noise removal method utilizing the Direction Weighted Median filter (DWM filter) and Block Matching and 3D filtering (BM3D). Although the combination of the DWM filter for RVIN and BM3D for AWGN removes almost all the mixed noise, it still loses some image details. We trace the cause to image details being misdetected as RVIN, and solve the problem by re-detection based on the difference between the input noisy image and the output of the combination. The re-detection process removes only the salient noise that BM3D cannot remove and therefore preserves image details. These processes lead to high-performance removal of the mixed noise while preserving image details. Experimental results show our method obtains denoised images with clearer edges and textures than conventional methods.
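
    A rough sketch of the re-detection idea, assuming dwm_filter and bm3d_denoise are available implementations of the two component filters; the threshold tau and the blending rule are illustrative assumptions, not the paper's exact procedure.

```python
# Sketch: re-detect salient impulses from the input/output difference and
# keep the combined DWM+BM3D result only there, preserving detail elsewhere.
import numpy as np

def mixed_noise_removal(noisy, dwm_filter, bm3d_denoise, tau=30.0):
    impulse_free = dwm_filter(noisy)           # remove RVIN first
    combined = bm3d_denoise(impulse_free)      # then remove AWGN
    diff = np.abs(noisy.astype(float) - combined.astype(float))
    salient_impulse = diff > tau               # re-detected RVIN positions
    # trust the combination only at salient impulses; elsewhere use BM3D
    # on the original input so misdetected details are restored
    return np.where(salient_impulse, combined, bm3d_denoise(noisy))
```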

  • Weighted Voting of Discriminative Regions for Face Recognition

    Wenming YANG  Riqiang GAO  Qingmin LIAO  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2017/08/04
    Vol: E100-D No:11
    Page(s): 2734-2737

    This paper presents a strategy, Weighted Voting of Discriminative Regions (WVDR), to improve face recognition performance, especially in Small Sample Size (SSS) and occlusion situations. In WVDR, we extract discriminative regions according to facial key points and abandon the remaining parts. Considering that different regions of the face contribute differently to recognition, we assign weights to regions for weighted voting. We construct a decision dictionary from the recognition results of the selected regions in the training phase, and this dictionary is used in a self-defined loss function to obtain the weights. The final identity of a test sample is determined by weighted voting over the selected regions. In this paper, we combine the WVDR strategy with CRC and SRC separately, and extensive experiments show that our method outperforms the baseline and some representative algorithms.
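
    A minimal sketch of the final weighted-voting step; the per-region scores and weights are assumed to be given (the paper obtains the weights from a decision dictionary and a self-defined loss, which is not reproduced here).

```python
# Sketch: fuse per-region class scores by weighted voting.
import numpy as np

def weighted_vote(region_scores, weights):
    # region_scores[r][c]: similarity score of region r for class c
    # (e.g., negative SRC/CRC reconstruction residuals, higher is better)
    region_scores = np.asarray(region_scores, dtype=float)  # shape (R, C)
    weights = np.asarray(weights, dtype=float)              # shape (R,)
    fused = weights @ region_scores                         # shape (C,)
    return int(np.argmax(fused))                            # predicted class
```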

  • Fault Analysis and Diagnosis of Coaxial Connectors in RF Circuits

    Rui JI  Jinchun GAO  Gang XIE  Qiuyan JIN  

     
    PAPER-Electromechanical Devices and Components

    Vol: E100-C No:11
    Page(s): 1052-1060

    Coaxial connectors are extensively used in electrical systems, and the degradation of a connector can alter the transmitted signal and lead to faults, which is a major cause of low communication quality. In this work, the failure features caused by a degraded connector contact surface were studied. The relationship between DC resistance and decreased real contact area was derived. Considering the inductive properties and capacitive coupling at high frequencies, the impedance characteristics of the degraded connector were discussed. Based on transmission line theory and experimental measurement, an equivalent lumped circuit of the coaxial connector was developed. For the degraded contact surface, the capacitance was analyzed and the frequency effect was investigated. According to the high-frequency characteristics of the degraded connector, a fault detection and location method for coaxial connectors in RF systems was developed using a neural network. For connectors suffering from different levels of pollution, the impedance modulus varies continuously; considering the range of the connector's impedance parameters, the fault modes were determined. Based on scattering-parameter simulation of an RF receiver front-end circuit, the S11 and S21 parameters were obtained as feature parameters, and Monte Carlo simulations were conducted to generate training and testing samples. Based on the BP neural network algorithm, the fault modes were classified, and the results show a diagnosis accuracy of 97.33%.
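
    A minimal sketch of the classification stage, using scikit-learn's MLPClassifier as a backpropagation (BP) network; the feature layout and split ratio are illustrative assumptions, not the paper's exact setup.

```python
# Sketch: BP neural network fault classification from S-parameter features.
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

def train_fault_classifier(X, y):
    # X: rows of [|S11(f1)|, ..., |S11(fk)|, |S21(f1)|, ..., |S21(fk)|]
    # y: fault-mode labels generated by Monte Carlo circuit simulations
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25,
                                              random_state=0)
    clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000,
                        random_state=0)
    clf.fit(X_tr, y_tr)
    return clf, clf.score(X_te, y_te)   # classifier and test accuracy
```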

  • Surface Height Change Estimation Method Using Band-Divided Coherence Functions with Fully Polarimetric SAR Images

    Ryo OYAMA  Shouhei KIDERA  Tetsuo KIRIMOTO  

     
    PAPER-Sensing

    Publicized: 2017/05/19
    Vol: E100-B No:11
    Page(s): 2087-2093

    Microwave imaging techniques, in particular synthetic aperture radar (SAR), are promising tools for terrain surface measurement, irrespective of weather conditions. The coherent change detection (CCD) method is widely applied to detect surface changes by comparing multiple complex SAR images captured from the same scanning orbit. However, for general damage assessment after a natural disaster such as an earthquake or mudslide, additional information about surface change, such as surface height change, is strongly required. Given this background, the current study proposes a novel height change estimation method using a CCD model based on the Pauli decomposition of fully polarimetric SAR images. The notable feature of this method is that it can offer accurate height change estimates beyond the assumed wavelength by introducing a frequency band-divided approach, and so is significantly better than InSAR-based approaches. Experiments in an anechoic chamber on a 1/100 scale model of an X-band SAR system show that our proposed method outputs more accurate height change estimates than a similar method that uses single-polarimetric data, even when the height change exceeds the assumed wavelength.
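
    For reference, a minimal sketch of the windowed complex coherence that underlies CCD; band division and Pauli decomposition are assumed to have been applied beforehand, and the function name and window size are illustrative.

```python
# Sketch: local complex coherence between two co-registered SAR images.
import numpy as np
from scipy.ndimage import uniform_filter

def local_coherence(s1, s2, win=5):
    # s1, s2: complex SAR images (e.g., one Pauli channel of one sub-band)
    cross = s1 * np.conj(s2)
    num = uniform_filter(cross.real, win) + 1j * uniform_filter(cross.imag, win)
    den = np.sqrt(uniform_filter(np.abs(s1) ** 2, win) *
                  uniform_filter(np.abs(s2) ** 2, win))
    gamma = num / np.maximum(den, 1e-12)
    # magnitude indicates change; phase carries height-related information
    return np.abs(gamma), np.angle(gamma)
```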

  • KL-UCB-Based Policy for Budgeted Multi-Armed Bandits with Stochastic Action Costs

    Ryo WATANABE  Junpei KOMIYAMA  Atsuyoshi NAKAMURA  Mineichi KUDO  

     
    PAPER-Mathematical Systems Science

    Vol: E100-A No:11
    Page(s): 2470-2486

    We study the budgeted multi-armed bandit problem with stochastic action costs. In this problem, a player not only receives a reward but also pays a cost for an action of his/her choice. The goal of the player is to maximize the cumulative reward he/she receives before the total cost exceeds the budget. In the classical multi-armed bandit problem, a policy called KL-UCB is known to perform well. We propose KL-UCB-SC, an extension of this policy for the budgeted bandit problem. We prove that KL-UCB-SC is asymptotically optimal for the case of Bernoulli costs and rewards. To the best of our knowledge, this is the first result that shows asymptotic optimality in the study of the budgeted bandit problem. In fact, our regret upper bound is at least four times better than that of BTS, the best known upper bound for the budgeted bandit problem. Moreover, an empirical simulation we conducted shows that the performance of a tuned variant of KL-UCB-SC is comparable to that of state-of-the-art policies such as PD-BwK and BTS.
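
    A minimal sketch of the classical Bernoulli KL-UCB index computed by bisection, the building block the paper extends; KL-UCB-SC's handling of stochastic costs is not reproduced here.

```python
# Sketch: Bernoulli KL-UCB index, the largest plausible mean of an arm.
import math

def kl_bernoulli(p, q):
    eps = 1e-12
    p = min(max(p, eps), 1 - eps)
    q = min(max(q, eps), 1 - eps)
    return p * math.log(p / q) + (1 - p) * math.log((1 - p) / (1 - q))

def kl_ucb_index(mean, n_pulls, t, c=0.0):
    # largest q >= mean with n_pulls * KL(mean, q) <= log t + c*log(log t)
    bound = (math.log(t) + c * math.log(max(math.log(t), 1.0))) / n_pulls
    lo, hi = mean, 1.0
    for _ in range(50):                  # bisection on the KL constraint
        mid = (lo + hi) / 2
        if kl_bernoulli(mean, mid) <= bound:
            lo = mid
        else:
            hi = mid
    return lo
```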

  • Hue-Preserving Color Image Processing with a High Arbitrariness in RGB Color Space

    Minako KAMIYAMA  Akira TAGUCHI  

     
    PAPER-Image Processing

    Vol: E100-A No:11
    Page(s): 2256-2265

    Preserving hue is an important issue in color image processing. In order to preserve hue, color image processing is often carried out in the HSI or HSV color space, translated from RGB color space. However, transforming from RGB to another color space and processing there usually generates a gamut problem. In this paper, we propose image enhancement methods that preserve hue as well as the range (gamut) of the R, G, and B channels. First, we present an intensity processing method that preserves hue and saturation, in which arbitrary gray-scale transformation functions can be applied to the intensity component. Next, a saturation processing method that preserves hue and intensity is proposed; arbitrary gray-scale transformation functions can also be applied to the saturation component. The two processing methods are completely independent and are therefore easily combined by applying them in succession. The combined method realizes hue-preserving color image processing with high arbitrariness and without the gamut problem. Furthermore, a concrete enhancement algorithm based on the proposed processing methods is presented. Numerical results confirm our theoretical results and show that our algorithm performs much better than conventional hue-preserving methods.
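
    A minimal sketch of one classical gamut-safe, hue-preserving intensity transform in RGB (in the spirit of such methods, not necessarily the paper's construction); f is an arbitrary, vectorized gray-scale transformation applied to the intensity.

```python
# Sketch: change intensity while keeping hue and staying inside the gamut.
import numpy as np

def hue_preserving_intensity(rgb, f, max_val=255.0):
    rgb = rgb.astype(float)
    I = rgb.mean(axis=-1, keepdims=True)             # intensity per pixel
    target = np.clip(f(I), 0.0, max_val)             # desired intensity
    alpha = np.divide(target, I, out=np.ones_like(I), where=I > 0)
    # darken by scaling toward black; brighten by scaling toward white;
    # both keep the R:G:B proportions that define the hue, and never
    # leave the [0, max_val] range
    shrink = rgb * alpha
    beta = np.divide(max_val - target, max_val - I,
                     out=np.ones_like(I), where=I < max_val)
    stretch = max_val - (max_val - rgb) * beta
    return np.where(alpha <= 1.0, shrink, stretch).astype(np.uint8)
```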

  • A Scaling and Non-Negative Garrote in Soft-Thresholding

    Katsuyuki HAGIWARA  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2017/07/27
    Vol: E100-D No:11
    Page(s): 2702-2710

    Soft-thresholding is a sparse modeling method typically applied to wavelet denoising in statistical signal processing. It is also important in machine learning, since it is an essential ingredient of the well-known LASSO (Least Absolute Shrinkage and Selection Operator). Soft-thresholding, and thus LASSO, is known to suffer from a dilemma between sparsity and generalization, caused by excessive shrinkage at sparse representations. Several methods for mitigating this problem exist in signal processing and machine learning. In this paper, we extend and analyze a method of scaling soft-thresholding estimators. In the setting of a non-parametric orthogonal regression problem including the discrete wavelet transform, we introduce a component-wise, data-dependent scaling that is in fact identical to the non-negative garrote. We consider the case where the soft-thresholding parameter is chosen from the absolute values of the least squares estimates, by which the model selection problem reduces to determining the number of non-zero coefficient estimates. In this case, we first derive the risk and construct SURE (Stein's unbiased risk estimator), which can be used for determining the number of non-zero coefficient estimates. We also analyze some properties of the risk curve and find that our scaling method with the derived SURE can yield a model with lower risk and higher sparsity than a naive soft-thresholding method with SURE. This theoretical finding was verified by a simple numerical experiment on wavelet denoising.
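
    A minimal sketch of the two estimators and the standard SURE formula for soft-thresholding (given noise variance); the paper's SURE is derived for the scaled (garrote) estimator and is not reproduced here.

```python
# Sketch: soft-thresholding, its non-negative-garrote scaling, and
# Donoho-Johnstone-style SURE for soft-thresholding.
import numpy as np

def soft_threshold(x, lam):
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def nonneg_garrote(x, lam):
    # component-wise, data-dependent scaling of the LS estimate x
    return x * np.maximum(1.0 - (lam ** 2) / np.maximum(x ** 2, 1e-300), 0.0)

def sure_soft(x, lam, sigma2=1.0):
    # SURE(lam) = n*sigma2 - 2*sigma2*#{|x_i| <= lam} + sum(min(|x_i|, lam)^2)
    n = x.size
    return (n * sigma2 - 2.0 * sigma2 * np.sum(np.abs(x) <= lam)
            + np.sum(np.minimum(np.abs(x), lam) ** 2))
```

    Evaluating sure_soft at each candidate lam taken from the sorted absolute LS estimates, and keeping the minimizer, implements the "choose the number of non-zero coefficients" selection described above.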

  • Off-Grid Frequency Estimation with Random Measurements

    Xushan CHEN  Jibin YANG  Meng SUN  Jianfeng LI  

     
    LETTER-Digital Signal Processing

    Vol: E100-A No:11
    Page(s): 2493-2497

    In order to significantly reduce the required time and space, compressive sensing builds upon the fundamental assumption of sparsity under a suitable discrete dictionary. However, in many signal processing applications there exists a mismatch between the assumed and the true sparsity bases, so that the actual representative coefficients do not lie on the finite grid discretized by the assumed dictionary. Unlike previous work, this paper introduces a unified compressive measurement operator into atomic norm denoising and investigates the problem of recovering the frequency support of a combination of multiple sinusoids from sub-Nyquist samples. We provide some useful properties to ensure the optimality of the unified framework via semidefinite programming (SDP). We also provide a sufficient condition to guarantee the uniqueness of the optimizer with high probability. Theoretical results demonstrate that the proposed method can locate the nonzero coefficients on an infinitely dense grid over a wide range of SNRs.

  • JPEG Image Steganalysis from Imbalanced Data

    Jia FU  Guorui FENG  Yanli REN  

     
    LETTER-Information Theory

    Vol: E100-A No:11
    Page(s): 2518-2521

    Image steganalysis determines whether an image contains secret messages. In practice, the number of cover images is far greater than that of stego images, so it is very important to solve the detection problem on imbalanced image sets. SMOTE, Borderline-SMOTE and ADASYN are three important synthetic oversampling algorithms used to address the imbalance problem; in these methods, new sample points are synthesized from the minority class samples. Such techniques have seldom been studied in image steganalysis. In this paper, based on the distribution of image features in steganalysis, we find that the features of majority class samples are similar to those of minority class samples, so both the majority and minority class samples are used to synthesize the new sample points. In experiments, compared with SMOTE, Borderline-SMOTE and ADASYN, this approach improves detection accuracy using the FLD ensemble classifier.
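
    A minimal SMOTE-style sketch in which a synthetic point is interpolated between a minority sample and a nearby majority sample, reflecting the paper's observation that the two classes are close in feature space; the interpolation range is an illustrative assumption.

```python
# Sketch: synthesize minority-class points using both classes.
import numpy as np

def synthesize(minority, majority, n_new, rng=None):
    rng = np.random.default_rng(rng)
    new_points = []
    for _ in range(n_new):
        x = minority[rng.integers(len(minority))]
        # nearest majority-class neighbor of the chosen minority sample
        z = majority[np.argmin(np.linalg.norm(majority - x, axis=1))]
        t = rng.uniform(0.0, 0.5)       # stay closer to the minority side
        new_points.append(x + t * (z - x))
    return np.vstack(new_points)
```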

  • Formation of Polymer Walls by Monomer Aggregation Control Utilizing Substrate-Surface Wettability for Flexible LCDs Open Access

    Seiya KAWAMORITA  Yosei SHIBATA  Takahiro ISHINABE  Hideo FUJIKAKE  

     
    INVITED PAPER

    Vol: E100-C No:11
    Page(s): 1005-1011

    We examined a novel aggregation control of the LC and monomer during the formation of polymer walls from an LC/monomer mixture, in order to suppress residual monomers and polymer networks in the pixel areas. The method utilizes the differing wettabilities of LC and monomer molecules on a substrate surface. We patterned a substrate surface with a fluororesin and a polyimide film, and promoted phase separation of the LC and monomer by a cooling process. As a result, the LC and monomer aggregates primarily existed in the pixel areas and non-pixel areas, respectively. Moreover, the polymer-wall structure formed by this method partitioned the cell into individual pixels in a lattice pattern and prevented the LC from flowing. This polymer-wall formation technique will be useful for developing high-quality flexible LCDs.

  • An Incremental Simulation Technique Based on Delta Model for Lifetime Yield Analysis

    Nguyen Cao QUI  Si-Rong HE  Chien-Nan Jimmy LIU  

     
    PAPER-VLSI Design Technology and CAD

    Vol: E100-A No:11
    Page(s): 2370-2378

    As devices continue to shrink, the parameter shifts due to process variation and aging effects have an increasing impact on circuit yield and reliability. However, predicting how long a circuit can keep its yield above the design specification is difficult because the design yield changes during the aging process, and performing Monte Carlo (MC) simulation iteratively during aging analysis is infeasible. Therefore, most existing approaches ignore the continuity between simulations to obtain high speed, which may result in the accumulation of extrapolation errors over time. In this paper, an incremental simulation technique is proposed for lifetime yield analysis to improve simulation speed while maintaining accuracy. Because aging is often a gradual process, the proposed incremental technique is effective in reducing simulation time. For yield analysis with degraded performance, the technique also reduces simulation time because each sample in the MC analysis is the same circuit with small parameter changes. When the proposed dynamic aging sampling technique is employed, a 50× speedup can be obtained with almost no loss of accuracy, which considerably improves the efficiency of lifetime yield analysis.

  • Improvements on Security Evaluation of AES against Differential Bias Attack

    Haruhisa KOSUGE  Hidema TANAKA  

     
    PAPER-Cryptography and Information Security

    Vol: E100-A No:11
    Page(s): 2398-2407

    In ASIACRYPT 2015, a new model for the analysis of block ciphers against side-channel attacks and a dedicated attack, the differential bias attack, were proposed by Bogdanov et al. The model assumes an adversary who has leaked values whose positions are unknown and randomly chosen from internal states (random leakage model). This paper improves the security analysis of AES under the random leakage model. In the previous method, the adversary requires at least 2^34 chosen plaintexts; therefore, it is hard to recover a secret key with a small amount of data. To assess the security against an adversary given a small amount of data, we reestimate the complexity. We propose another hypothesis-testing method which minimizes the number of required data. The proposed method requires time complexity t > 2^60 because of the time-data tradeoff, and some attacks are tractable under t ≤ 2^80. Therefore, the attack is a threat to long-term security, though not to short-term security. In addition, we apply key enumeration to the differential bias attack and propose two evaluation methods: an information-theoretic evaluation and an experimental one with rank estimation. From the evaluations on AES, we show that the attack is a practical threat to long-term security.

  • AIGIF: Adaptively Integrated Gradient and Intensity Feature for Robust and Low-Dimensional Description of Local Keypoint

    Songlin DU  Takeshi IKENAGA  

     
    PAPER-Vision

    Vol: E100-A No:11
    Page(s): 2275-2284

    Establishing local visual correspondences between images taken under different conditions is an important and challenging task in computer vision. A common solution is to detect keypoints in the images and then match them with a feature descriptor. This paper proposes a robust and low-dimensional local feature descriptor named Adaptively Integrated Gradient and Intensity Feature (AIGIF). The proposed AIGIF descriptor partitions the support region surrounding each keypoint into sub-regions and classifies the sub-regions into two categories: edge-dominated ones and smoothness-dominated ones. For edge-dominated sub-regions, gradient magnitude and orientation features are extracted; for smoothness-dominated sub-regions, an intensity feature is extracted. The gradient and intensity features are integrated to generate the descriptor. Image matching experiments were conducted to evaluate the performance of the proposed AIGIF. Compared with SIFT, AIGIF achieves a 75% reduction in feature dimension (from 128 bytes to 32 bytes); compared with SURF, it achieves an 87.5% reduction (from 256 bytes to 32 bytes); compared with the state-of-the-art ORB descriptor, which has the same feature dimension as AIGIF, AIGIF achieves higher accuracy and robustness. In summary, AIGIF combines the advantages of gradient and intensity features, achieving relatively high accuracy and robustness with a low feature dimension.
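
    A minimal sketch of the adaptive sub-region classification described above; the grid size, threshold, and per-category statistics are illustrative assumptions, not the exact AIGIF layout or quantization.

```python
# Sketch: describe edge-dominated sub-regions by gradient statistics and
# smoothness-dominated ones by intensity.
import numpy as np

def aigif_like_descriptor(patch, grid=4, edge_thresh=10.0):
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    h, w = patch.shape
    desc = []
    for i in range(grid):
        for j in range(grid):
            sl = (slice(i * h // grid, (i + 1) * h // grid),
                  slice(j * w // grid, (j + 1) * w // grid))
            if mag[sl].mean() > edge_thresh:   # edge-dominated sub-region
                desc.extend([mag[sl].mean(),
                             np.arctan2(gy[sl].mean(), gx[sl].mean())])
            else:                              # smoothness-dominated
                desc.extend([patch[sl].mean(), 0.0])
    return np.asarray(desc)
```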

  • Robust Ghost-Free High-Dynamic-Range Imaging by Visual Salience Based Bilateral Motion Detection and Stack Extension Based Exposure Fusion

    Zijie WANG  Qin LIU  Takeshi IKENAGA  

     
    PAPER-Image Processing

    Vol: E100-A No:11
    Page(s): 2266-2274

    High-dynamic-range imaging (HDRI) technologies aim to extend the dynamic range of luminance beyond the limitation of camera sensors. The irradiance information of a scene can be reconstructed by fusing multiple low-dynamic-range (LDR) images with different exposures. The key issue is removing the ghost artifacts caused by moving objects and handheld cameras. This paper proposes a robust ghost-free HDRI algorithm based on visual-salience-based bilateral motion detection and stack-extension-based exposure fusion. For ghost area detection, visual salience is introduced to measure the differences between multiple images, and bilateral motion detection is employed to improve the accuracy of labeling motion areas. For exposure fusion, the proposed algorithm reduces brightness discontinuity by stack extension and rejects the information of ghost areas via fusion masks to avoid artifacts. Experimental results show that the proposed algorithm removes ghost artifacts accurately for both static and handheld cameras, remains robust in scenes with complex motion, and has lower complexity than recent advances, including a rank-minimization-based method and a patch-based method, with average time savings of 63.6% and 20.4%, respectively.
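
    A minimal sketch of mask-based exposure fusion: a common well-exposedness weight plus a precomputed ghost mask that falls back to a reference frame in motion areas. The weight function and fallback rule are illustrative assumptions, not the paper's stack-extension scheme.

```python
# Sketch: fuse an exposure stack while excluding detected ghost pixels.
import numpy as np

def fuse_exposures(stack, ghost_masks, ref=0, sigma=0.2):
    # stack: (N, H, W) aligned luminance images scaled to [0, 1]
    # ghost_masks: (N, H, W) booleans, True where motion was detected
    w = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))  # well-exposedness
    w[ghost_masks] = 0.0                    # reject ghost-area information
    # wherever any frame saw motion, trust only the reference frame
    w[ref] = np.where(ghost_masks.any(axis=0), 1.0, w[ref])
    w = w / np.maximum(w.sum(axis=0, keepdims=True), 1e-12)
    return (w * stack).sum(axis=0)
```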

  • Ball State Based Parallel Ball Tracking and Event Detection for Volleyball Game Analysis

    Xina CHENG  Norikazu IKOMA  Masaaki HONDA  Takeshi IKENAGA  

     
    PAPER-Vision

    Vol: E100-A No:11
    Page(s): 2285-2294

    Ball state tracking and detection technology plays a significant role in volleyball game analysis, but its performance is limited by several challenges: 1) inaccurate ball trajectories; 2) the large number of ball event categories; 3) large intra-class differences within a single event. With the goal of supporting volleyball broadcasts, which requires a real-time system, this paper proposes a ball state based parallel ball tracking and event detection method built on sequential estimation with a particle filter. The method runs 3D ball tracking and event detection in parallel, making it amenable to real-time implementation. The 3D ball tracking process uses the same models as past work [8]. For the event detection process, we propose a multiple system model based on ball event change estimation, a hit-point likelihood that refers to past trajectories, and event type detection based on a court-line distance feature. First, the multiple system model transitions the ball event state, which consists of the event starting time and the event type, through three models dealing with different ball motion situations in a volleyball game, such as maintained and changing motion; the mixture of these models is decided by the ball event change estimation. Second, the past-trajectory-referred hit-point likelihood avoids the processing delay between ball tracking and event detection by evaluating the probability of the ball being hit at a certain time without using future ball trajectories. Third, the distance between the ball and specific court lines is extracted as a feature to detect the ball event type. Experimental results based on multi-view HDTV video sequences (2014 Inter High School Men's Volleyball Games, Japan), containing 606 events in total, show that the detection rate reaches 88.61% while the success rate of 3D ball tracking remains above 99%.
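
    A minimal sketch of one predict/update/resample step of a particle filter for a 3D ball state under gravity; the paper's multiple system model and hit-point likelihood are far richer, and observe_likelihood is a placeholder for the multi-view measurement model.

```python
# Sketch: generic particle filter step for a 3D ball (position + velocity).
import numpy as np

def pf_step(particles, weights, observe_likelihood, dt=1/60.0, g=9.8,
            noise=0.05, rng=None):
    rng = np.random.default_rng(rng)
    pos, vel = particles[:, :3].copy(), particles[:, 3:].copy()
    vel[:, 2] -= g * dt                      # gravity acts on the z axis
    pos += vel * dt                          # constant-velocity prediction
    particles = np.hstack([pos, vel])
    particles += rng.normal(0.0, noise, particles.shape)  # process noise
    weights = weights * observe_likelihood(particles[:, :3])
    weights = weights / weights.sum()
    # multinomial resampling when the effective sample size collapses
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(weights):
        idx = rng.choice(len(weights), size=len(weights), p=weights)
        particles = particles[idx]
        weights = np.full(len(weights), 1.0 / len(weights))
    return particles, weights
```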

  • Distortion Control and Optimization for Lossy Embedded Compression in Video Codec System

    Li GUO  Dajiang ZHOU  Shinji KIMURA  Satoshi GOTO  

     
    PAPER-Coding Theory

    Vol: E100-A No:11
    Page(s): 2416-2424

    For mobile video codecs, the huge energy dissipation of external memory traffic is a critical challenge under battery power constraints. Lossy embedded compression (EC) is considered in this paper as a solution to this challenge. While previous studies of lossy EC mostly focused on algorithm optimization to reduce distortion, this work, to the best of our knowledge, is the first to address distortion control. First, from both theoretical analysis and experiments on distortion optimization, we conclude that, at the frame level, allocating memory traffic evenly is a reliable approximation to the optimal solution for minimizing quality loss. Then, to avoid the complexity of decoding twice, the distortion between two sequences is estimated by a linear function of the distortion calculated within one sequence. Finally, on the basis of even allocation, a distortion control scheme is proposed that determines the amount of memory traffic according to a given distortion limit. With adaptive target setting and estimation-function updating in each group of pictures (GOP), scene changes in the video stream are supported without adding a detector or a retraining process. Experimental results show that the proposed distortion control accurately fixes the quality loss to the target. Compared to the baseline of negative feedback on non-referenced B frames, it achieves about twice the memory traffic reduction.

  • Robustly Tracking People with LIDARs in a Crowded Museum for Behavioral Analysis

    Md. Golam RASHED  Ryota SUZUKI  Takuya YONEZAWA  Antony LAM  Yoshinori KOBAYASHI  Yoshinori KUNO  

     
    PAPER-Vision

    Vol: E100-A No:11
    Page(s): 2458-2469

    This paper introduces a method that uses LIDARs to identify humans and track their positions, body orientations, and movement trajectories in any public space, in order to read their various behavioral responses to the surroundings. We use a network of LIDAR poles installed at the shoulder level of typical adults to reduce potential occlusion between persons and/or objects, even in large-scale social environments. With this arrangement, a simple but effective human tracking method is proposed that combines multiple sensors' data so that large-scale areas can be covered. The effectiveness of this method is evaluated in the art gallery of a real museum. The results reveal good tracking performance and provide valuable behavioral information related to the art gallery.

  • Forecasting Network Traffic at Large Time Scales by Using Dual-Related Method

    Liangrui TANG  Shiyu JI  Shimo DU  Yun REN  Runze WU  Xin WU  

     
    PAPER-Network

    Publicized: 2017/04/24
    Vol: E100-B No:11
    Page(s): 2049-2059

    Network traffic forecasts, as is well known, can be useful for network resource optimization. In order to minimize the forecast error by maximizing information utilization at low complexity, this paper considers the differing traffic trends at large time scales and fits a dual-related model to predict them. First, by analyzing traffic trends based on user behavior, we find both hour-to-hour and day-to-day patterns, which means that models based on either single trend cannot offer precise predictions. We therefore propose a prediction method that considers both daily and hourly traffic patterns, called the dual-related forecasting method. Finally, the correlation of the traffic data is analyzed based on the model parameters. Simulation results demonstrate that the proposed model is more effective in reducing forecasting error than other models.
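
    A minimal sketch of a forecast that blends the hour-to-hour profile with a day-to-day trend; the blending weight alpha and the linear level extrapolation are illustrative assumptions, not the paper's fitted dual-related model.

```python
# Sketch: combine hourly and daily traffic patterns for a next-day forecast.
import numpy as np

def dual_forecast(traffic, alpha=0.5):
    # traffic: (n_days, 24) matrix of hourly volumes; predict the next day
    hourly_profile = traffic.mean(axis=0)    # hour-to-hour pattern
    daily_level = traffic.mean(axis=1)       # day-to-day pattern
    # extrapolate tomorrow's overall level with a one-step linear trend
    next_level = daily_level[-1] + (daily_level[-1] - daily_level[-2])
    scale = next_level / daily_level.mean()
    return alpha * hourly_profile * scale + (1 - alpha) * traffic[-1]
```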

  • Real-Time Object Tracking via Fusion of Global and Local Appearance Models

    Ju Hong YOON  Jungho KIM  Youngbae HWANG  

     
    LETTER-Image Recognition, Computer Vision

    Publicized: 2017/08/07
    Vol: E100-D No:11
    Page(s): 2738-2743

    In this letter, we propose a robust and fast tracking framework that combines local and global appearance models to cope with partial occlusion and pose variations. The global appearance model is represented by a correlation filter to efficiently estimate the movement of the target, and the local appearance model is represented by local feature points to handle partial occlusion and scale variations. The global and local appearance models are then unified via Bayesian inference in our tracking framework. We experimentally demonstrate the effectiveness of the proposed method in terms of both accuracy and time complexity, taking 12 ms per frame on average on benchmark datasets.
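
    A minimal sketch of a MOSSE-style correlation filter, a standard choice for the global appearance model (the letter does not specify this exact filter); the local keypoint model and the Bayesian fusion are not shown.

```python
# Sketch: train a correlation filter against a Gaussian target response and
# locate the target at the peak of the filter response (MOSSE-style).
import numpy as np

def train_filter(patch, sigma=2.0, lam=1e-2):
    h, w = patch.shape
    yy, xx = np.mgrid[0:h, 0:w]
    g = np.exp(-((yy - h // 2) ** 2 + (xx - w // 2) ** 2) / (2 * sigma ** 2))
    F, G = np.fft.fft2(patch), np.fft.fft2(g)
    # closed-form solution in the Fourier domain, lam regularizes
    return (G * np.conj(F)) / (F * np.conj(F) + lam)

def locate(H, patch):
    resp = np.real(np.fft.ifft2(H * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(resp), resp.shape)  # peak = target
```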
