
Author Search Result

[Author] Yen-wei CHEN (24 hits)

Hits 1-20 of 24

  • A Fast Kinoform Optimization Algorithm Based on Simulated Annealing

    Yen-Wei CHEN, Shinichiro YAMAUCHI, Ning WANG, Zensho NAKAO
    LETTER-Image
    Vol: E83-A No:4, Page(s): 774-776

    Several methods have been proposed or used to optimize the phase distribution of a kinoform. In this paper, we propose a fast algorithm for kinoform optimization based on simulated annealing, which reduces the large computational cost. The method uses a simplified equation to calculate the energy function after each perturbation.
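
    A minimal 1-D sketch of the idea: simulated annealing over the kinoform phase, where a single-pixel perturbation updates the far field incrementally instead of recomputing a full transform (this incremental update stands in for the paper's simplified energy equation). The array size, target pattern and cooling schedule are illustrative assumptions, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                              # 1-D kinoform size (toy value)

target = np.zeros(N); target[N // 4:N // 2] = 1.0   # desired far-field amplitude pattern
target /= np.linalg.norm(target)

phase = rng.uniform(0.0, 2 * np.pi, N)              # kinoform phase distribution
# Pre-computed DFT kernel: a single-pixel perturbation then updates the field in O(N)
kernel = np.exp(-2j * np.pi * np.outer(np.arange(N), np.arange(N)) / N) / np.sqrt(N)
field = kernel @ np.exp(1j * phase)                 # far field of the current kinoform

def energy(f):
    amp = np.abs(f)
    return np.sum((amp / np.linalg.norm(amp) - target) ** 2)

E, T = energy(field), 1.0
for step in range(20000):
    x = rng.integers(N)                             # perturb one phase sample
    new_phi = rng.uniform(0.0, 2 * np.pi)
    # Incremental far-field update: only the perturbed pixel's contribution changes
    new_field = field + (np.exp(1j * new_phi) - np.exp(1j * phase[x])) * kernel[:, x]
    dE = energy(new_field) - E
    if dE < 0 or rng.random() < np.exp(-dE / T):    # Metropolis acceptance rule
        phase[x], field, E = new_phi, new_field, E + dE
    T *= 0.9997                                     # geometric cooling schedule

print("final energy:", E)
```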

  • Attenuation Correction for X-Ray Emission Computed Tomography of Laser-Produced Plasma

    Yen-Wei CHEN, Zensho NAKAO, Shinichi TAMURA
    LETTER-Image Theory
    Vol: E79-A No:8, Page(s): 1287-1290

    An attenuation correction method is proposed for laser-produced plasma emission computed tomography (ECT), based on a relation between the attenuation coefficient and the emission coefficient in the plasma. Simulation results show that the reconstructed images are dramatically improved in comparison with reconstructions without attenuation correction.

  • A Quantitative Evaluation of Neutron Penumbral Imaging with a Toroidal-Segment Aperture

    Yen-Wei CHEN, Zensho NAKAO, Ikuo NAKAMURA
    PAPER-Electromagnetic Theory
    Vol: E80-C No:2, Page(s): 346-351

    A quantitative study is made of the performance of neutron penumbral imaging with a toroidal-segment aperture, focusing on the isoplanaticity of the aperture point spread function and the effect of non-isoplanaticity on the reconstructed images. The results show that the aperture point spread function is satisfactorily isoplanatic for a small field of view, while for a large field of view it is not, resulting in some distortion in the reconstructed image and a reduction of resolution.

  • Independent Component Analysis for Color Indexing

    Xiang-Yan ZENG, Yen-Wei CHEN, Zensho NAKAO, Jian CHENG, Hanqing LU
    PAPER-Pattern Recognition
    Vol: E87-D No:4, Page(s): 997-1003

    Color histograms are effective for representing color visual features. However, the high dimensionality of feature vectors results in high computational cost. Several transformations, including singular value decomposition (SVD) and principal component analysis (PCA), have been proposed to reduce the dimensionality. In PCA, the dimensionality reduction is achieved by projecting the data onto a subspace that contains most of the variance. As a common observation, the PCA basis function with the lowest frequency accounts for the highest variance. Therefore, the PCA subspace may not be the optimal one to represent the intrinsic features of the data. In this paper, we apply independent component analysis (ICA) to extract the features in color histograms. PCA is applied to reduce the dimensionality, and ICA is then performed on the low-dimensional PCA subspace. The experimental results show that the proposed method (1) significantly reduces the feature dimensions compared with the original color histograms and (2) outperforms other dimension reduction techniques, namely SVD of the quadratic matrix and PCA, in terms of retrieval accuracy.
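
    A hedged sketch of the pipeline described above, using scikit-learn: PCA reduces the color-histogram dimensionality and FastICA is then run on the PCA subspace; retrieval is a nearest-neighbour search in the ICA feature space. The histogram size, component counts and random data are placeholders, not the paper's configuration.

```python
import numpy as np
from sklearn.decomposition import PCA, FastICA

rng = np.random.default_rng(0)
# Stand-in for a database of color histograms: n_images x n_bins, rows sum to one
histograms = rng.random((200, 512))
histograms /= histograms.sum(axis=1, keepdims=True)

# Step 1: PCA reduces the histogram dimensionality
pca = PCA(n_components=16)
low_dim = pca.fit_transform(histograms)

# Step 2: ICA on the low-dimensional PCA subspace extracts independent features
ica = FastICA(n_components=16, random_state=0)
features = ica.fit_transform(low_dim)

# Retrieval: nearest neighbours of a query in the ICA feature space (L2 distance)
query = features[0]
distances = np.linalg.norm(features - query, axis=1)
print("closest matches:", np.argsort(distances)[:5])
```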

  • 3D Multiple-Contextual ROI-Attention Network for Efficient and Accurate Volumetric Medical Image Segmentation

    He LI, Yutaro IWAMOTO, Xianhua HAN, Lanfen LIN, Akira FURUKAWA, Shuzo KANASAKI, Yen-Wei CHEN
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2023/02/21, Vol: E106-D No:5, Page(s): 1027-1037

    Convolutional neural networks (CNNs) have become popular in medical image segmentation. The widely used deep CNNs are customized to extract multiple representative features for two-dimensional (2D) data, generally called 2D networks. However, 2D networks are inefficient in extracting three-dimensional (3D) spatial features from volumetric images. Although most 2D segmentation networks can be extended to 3D networks, the naively extended 3D methods are resource-intensive. In this paper, we propose an efficient and accurate network for fully automatic 3D segmentation. Specifically, we designed a 3D multiple-contextual extractor to capture rich global contextual dependencies from different feature levels. Then we leveraged an ROI-estimation strategy to crop the ROI bounding box. Meanwhile, we used a 3D ROI-attention module to improve the accuracy of in-region segmentation in the decoder path. Moreover, we used a hybrid Dice loss function to address the issues of class imbalance and blurry contour in medical images. By incorporating the above strategies, we realized a practical end-to-end 3D medical image segmentation with high efficiency and accuracy. To validate the 3D segmentation performance of our proposed method, we conducted extensive experiments on two datasets and demonstrated favorable results over the state-of-the-art methods.
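
    The hybrid Dice loss is only named in the abstract; one common form combines soft Dice with binary cross-entropy, sketched below in PyTorch as an assumption about the general idea (the paper's exact formulation and weighting may differ).

```python
import torch
import torch.nn.functional as F

def hybrid_dice_loss(logits, target, weight=0.5, eps=1e-6):
    """Soft Dice loss combined with binary cross-entropy (one possible hybrid form).

    logits: (B, 1, D, H, W) raw network outputs; target: binary masks of the same shape.
    """
    prob = torch.sigmoid(logits)
    dims = (2, 3, 4)                                   # sum over the volume
    intersection = (prob * target).sum(dims)
    union = prob.sum(dims) + target.sum(dims)
    dice = (2.0 * intersection + eps) / (union + eps)
    dice_loss = 1.0 - dice.mean()
    bce_loss = F.binary_cross_entropy_with_logits(logits, target)
    return weight * dice_loss + (1.0 - weight) * bce_loss

# Toy check on random volumes
logits = torch.randn(2, 1, 8, 16, 16)
target = (torch.rand(2, 1, 8, 16, 16) > 0.9).float()
print(hybrid_dice_loss(logits, target).item())
```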

  • An Intra- and Inter-Emotion Transformer-Based Fusion Model with Homogeneous and Diverse Constraints Using Multi-Emotional Audiovisual Features for Depression Detection

    Shiyu TENG, Jiaqing LIU, Yue HUANG, Shurong CHAI, Tomoko TATEYAMA, Xinyin HUANG, Lanfen LIN, Yen-Wei CHEN
    PAPER
    Publicized: 2023/12/15, Vol: E107-D No:3, Page(s): 342-353

    Depression is a prevalent mental disorder affecting a significant portion of the global population, leading to considerable disability and contributing to the overall burden of disease. Consequently, designing efficient and robust automated methods for depression detection has become imperative. Recently, deep learning methods, especially multimodal fusion methods, have been increasingly used in computer-aided depression detection. Importantly, individuals with depression and those without respond differently to various emotional stimuli, providing valuable information for detecting depression. Building on these observations, we propose an intra- and inter-emotional stimulus transformer-based fusion model to effectively extract depression-related features. The intra-emotional stimulus fusion framework aims to prioritize different modalities, capitalizing on their diversity and complementarity for depression detection. The inter-emotional stimulus model maps each emotional stimulus onto both invariant and specific subspaces using individual invariant and specific encoders. The emotional stimulus-invariant subspace facilitates efficient information sharing and integration across different emotional stimulus categories, while the emotional stimulus-specific subspace seeks to enhance diversity and capture the distinct characteristics of individual emotional stimulus categories. Our proposed intra- and inter-emotional stimulus fusion model effectively integrates multimodal data under various emotional stimulus categories, providing a comprehensive representation that allows accurate task predictions in the context of depression detection. We evaluate the proposed model on the Chinese Soochow University students dataset, and the results outperform state-of-the-art models in terms of concordance correlation coefficient (CCC), root mean squared error (RMSE) and accuracy.

  • Robust Edge Detection by Independent Component Analysis in Noisy Images

    Xian-Hua HAN, Yen-Wei CHEN, Zensho NAKAO
    PAPER-Image Processing and Video Processing
    Vol: E87-D No:9, Page(s): 2204-2211

    We propose a robust edge detection method based on independent component analysis (ICA). It is known that most of the basis functions extracted from natural images by ICA are sparse and similar to localized and oriented receptive fields, and in the proposed edge detection method, a target image is first transformed by ICA basis functions and then the edges are detected or reconstructed with sparse components only. Furthermore, by applying a shrinkage algorithm to filter out the components of noise in the ICA domain, we can readily obtain the sparse components of the original image, resulting in a kind of robust edge detection even for a noisy image with a very low SN ratio. The efficiency of the proposed method is demonstrated by experiments with some natural images.
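
    A rough sketch of the ICA-domain shrinkage idea: learn ICA basis functions from image patches, soft-threshold the small (noise-dominated) coefficients, reconstruct from the surviving sparse components, and read edges off the reconstruction. Patch size, component count, threshold and the final gradient-based edge map are illustrative choices, not the paper's.

```python
import numpy as np
from sklearn.decomposition import FastICA
from sklearn.feature_extraction.image import extract_patches_2d, reconstruct_from_patches_2d

rng = np.random.default_rng(0)
image = rng.random((64, 64))                      # stand-in for a noisy natural image
patch_size = (8, 8)

# Learn ICA basis functions from local patches (per-patch mean removed)
patches = extract_patches_2d(image, patch_size)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)
ica = FastICA(n_components=32, random_state=0, max_iter=500)
codes = ica.fit_transform(X)                      # sparse components of each patch

# Shrinkage: soft-threshold small, noise-dominated coefficients
t = 0.7 * np.std(codes)
codes = np.sign(codes) * np.maximum(np.abs(codes) - t, 0.0)

# Reconstruct the structure image from the surviving sparse components
patches_rec = ica.inverse_transform(codes).reshape(-1, *patch_size)
structure = reconstruct_from_patches_2d(patches_rec, image.shape)

# Simple gradient magnitude on the sparse reconstruction as the edge map
gy, gx = np.gradient(structure)
edges = np.hypot(gx, gy)
print(edges.shape, edges.max())
```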

  • Blind Deconvolution Based on Genetic Algorithms

    Yen-Wei CHEN, Zensho NAKAO, Kouichi ARAKAKI, Shinichi TAMURA
    LETTER-Neural Networks
    Vol: E80-A No:12, Page(s): 2603-2607

    A genetic algorithm (GA) is presented for the blind-deconvolution problem of image restoration without any a priori information about the object image or the blurring function. The restoration problem is modeled as an optimization problem whose cost function is minimized based on the mechanics of natural selection and natural genetics. The applicability of the GA to the blind-deconvolution problem is demonstrated.
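
    A toy sketch of GA-based blind deconvolution: candidate PSF parameters are evolved by selection and mutation, and each candidate is scored by a cost computed from its deconvolution result. The cost used here (a normalized gradient-sparsity measure on a Wiener estimate, with a parametric Gaussian PSF) is an assumption for illustration; the paper's cost function and encoding are different.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(1)

# Toy observation: a sharp grid degraded by an unknown Gaussian PSF plus noise
sharp = np.zeros((64, 64)); sharp[::8, :] = 1.0; sharp[:, ::8] = 1.0
observed = gaussian_filter(sharp, sigma=1.5) + 0.005 * rng.standard_normal(sharp.shape)

def wiener_deconv(img, sigma, k=1e-2):
    """Wiener deconvolution assuming a centered Gaussian PSF of the given width."""
    psf = np.zeros_like(img); psf[img.shape[0] // 2, img.shape[1] // 2] = 1.0
    psf = gaussian_filter(psf, sigma)
    H, G = np.fft.fft2(np.fft.ifftshift(psf)), np.fft.fft2(img)
    return np.real(np.fft.ifft2(np.conj(H) * G / (np.abs(H) ** 2 + k)))

def cost(sigma):
    """Stand-in blind criterion: normalized gradient sparsity of the deconvolved estimate."""
    gy, gx = np.gradient(wiener_deconv(observed, sigma))
    g = np.hypot(gx, gy).ravel()
    return np.sum(g) / (np.linalg.norm(g) + 1e-12)

# Real-coded GA: truncation selection plus Gaussian mutation over the PSF width
pop = rng.uniform(0.3, 4.0, size=20)
for generation in range(40):
    costs = np.array([cost(s) for s in pop])
    parents = pop[np.argsort(costs)[:10]]             # keep the ten lowest-cost candidates
    children = rng.choice(parents, size=10) + rng.normal(0.0, 0.15, size=10)
    pop = np.concatenate([parents, np.clip(children, 0.1, 6.0)])

best = pop[np.argmin([cost(s) for s in pop])]
print(f"estimated PSF width: {best:.2f} (true value 1.5)")
```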

  • A New Texture Feature Based on PCA Pattern Maps and Its Application to Image Retrieval

    Xiang-Yan ZENG, Yen-Wei CHEN, Zensho NAKAO, Hanqing LU
    PAPER-Pattern Recognition
    Vol: E86-D No:5, Page(s): 929-936

    We propose a novel pixel pattern-based approach for texture classification that is invariant to illumination changes. Grayscale images are first transformed into pattern maps in which edges and lines, used to characterize texture information, are classified by pattern matching. We employ principal component analysis (PCA), which is widely applied to feature extraction, and use the basis functions learned through PCA as templates for pattern matching. Using the PCA pattern maps, the feature vector comprises the numbers of pixels belonging to each pattern. The effectiveness of the new feature is demonstrated by application to image retrieval on the Brodatz texture database. Comparisons with multichannel and multiresolution features indicate that the new feature is time-saving, free of the influence of illumination, and of comparable accuracy.
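
    A small sketch of the PCA pattern-map feature, assuming scikit-learn: PCA basis functions learned from local patches serve as templates, each patch is labeled by its best-matching template, and the texture feature is the histogram of labels. Patch size and the number of templates are illustrative.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_extraction.image import extract_patches_2d

rng = np.random.default_rng(0)
texture = rng.random((96, 96))                 # stand-in for a Brodatz texture image
patch_size = (5, 5)

# PCA basis functions learned from local patches act as edge/line templates
patches = extract_patches_2d(texture, patch_size)
X = patches.reshape(len(patches), -1)
X = X - X.mean(axis=1, keepdims=True)          # drop local brightness for illumination invariance
templates = PCA(n_components=8).fit(X).components_

# Pattern map: label each pixel (patch) by the template it matches most strongly
responses = X @ templates.T
labels = np.argmax(np.abs(responses), axis=1)

# Texture feature: histogram of pattern labels (pixel counts per pattern)
feature = np.bincount(labels, minlength=len(templates)).astype(float)
feature /= feature.sum()
print(feature.round(3))
```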

  • High-Resolution Penumbral Imaging of 14-MeV Neutrons

    Yen-Wei CHEN, Noriaki MIYANAGA, Minoru UNEMOTO, Masanobu YAMANAKA, Tatsuhiko YAMANAKA, Sadao NAKAI, Tetsuo IGUCHI, Masaharu NAKAZAWA, Toshiyuki IIDA, Shinichi TAMURA
    PAPER-Opto-Electronics
    Vol: E78-C No:12, Page(s): 1787-1792

    We have developed a neutron imaging system based on the penumbral imaging technique. The system consists of a penumbral aperture and a sensitive neutron detector. The aperture was made from a thick (6 cm) tungsten block with a toroidal taper. It can effectively block 14-MeV neutrons and provides a satisfactorily sharp, isoplanatic (space-invariant) point spread function (PSF). A two-dimensional scintillator array, coupled with a gated two-stage image intensifier system and a CCD camera, was used as the sensitive neutron detector. It can record the neutron image with high sensitivity and a high signal-to-noise ratio. The reconstruction was performed with a Wiener filter. The spatial resolution of the reconstructed neutron image was estimated to be 31 µm by computer simulation. Experimental demonstration has been achieved by imaging 14-MeV deuterium-tritium neutrons emitted from a laser-imploded target.
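
    The reconstruction step (Wiener filtering of the penumbral image with the known aperture PSF) can be sketched as below; the disc-shaped PSF, image size and regularization constant are toy assumptions, not the experimental parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 128

# Toy aperture PSF: a flat-topped disc standing in for the toroidal-aperture response
y, x = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
psf = (x ** 2 + y ** 2 <= 20 ** 2).astype(float)
psf /= psf.sum()

# Simulated penumbral (coded) image: source convolved with the PSF plus noise
source = np.zeros((n, n)); source[60:68, 60:68] = 1.0
coded = np.real(np.fft.ifft2(np.fft.fft2(source) * np.fft.fft2(np.fft.ifftshift(psf))))
coded += 1e-4 * rng.standard_normal(coded.shape)

# Wiener-filter reconstruction with the known aperture PSF
H = np.fft.fft2(np.fft.ifftshift(psf))
K = 1e-3                                          # noise-to-signal regularization constant
W = np.conj(H) / (np.abs(H) ** 2 + K)
reconstructed = np.real(np.fft.ifft2(np.fft.fft2(coded) * W))
print(reconstructed.shape, reconstructed.max())
```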

  • Color Independent Components Based SIFT Descriptors for Object/Scene Classification

    Dan-ni AI, Xian-hua HAN, Xiang RUAN, Yen-wei CHEN
    PAPER-Pattern Recognition
    Vol: E93-D No:9, Page(s): 2577-2586

    In this paper, we present a novel color independent components based SIFT descriptor (termed CIC-SIFT) for object/scene classification. We first learn an efficient color transformation matrix based on independent component analysis (ICA), which is adaptive to each category in a database. The ICA-based color transformation can enhance contrast between the objects and the background in an image. Then we compute CIC-SIFT descriptors over all three transformed color independent components. Since the ICA-based color transformation can boost the objects and suppress the background, the proposed CIC-SIFT can extract more effective and discriminative local features for object/scene classification. The comparison is performed among seven SIFT descriptors, and the experimental classification results show that our proposed CIC-SIFT is superior to other conventional SIFT descriptors.

  • Quantitative Assessment of Facial Paralysis Based on Spatiotemporal Features

    Truc Hung NGO, Yen-Wei CHEN, Naoki MATSUSHIRO, Masataka SEO
    PAPER-Pattern Recognition
    Publicized: 2015/10/01, Vol: E99-D No:1, Page(s): 187-196

    Facial paralysis is a common clinical condition, occurring in 30 to 40 patients per 100,000 people per year. A quantitative tool to support medical diagnosis is necessary. This paper proposes a simple, visual and robust method that objectively measures the degree of facial paralysis using spatiotemporal features. The main contribution of this paper is an effective spatiotemporal feature extraction method based on landmark tracking. Our method overcomes the drawbacks of other techniques, such as the influence of irrelevant regions, noise, illumination change and time-consuming processing. In addition, the method is simple and visual: its simplicity reduces processing time, and the movements of the landmarks, which relate to muscle movement ability, are visualized, helping to reveal regions of serious facial paralysis. In terms of recognition rate, experimental results show that the proposed method outperformed the other techniques tested on a dynamic facial expression image database.

  • Multilinear Supervised Neighborhood Embedding with Local Descriptor Tensor for Face Recognition

    Xian-Hua HAN, Xu QIAO, Yen-Wei CHEN
    LETTER-Pattern Recognition
    Vol: E94-D No:1, Page(s): 158-161

    Subspace learning based face recognition methods have attracted considerable interest in recent years, including Principal Component Analysis (PCA), Independent Component Analysis (ICA), Linear Discriminant Analysis (LDA), and some extensions for 2D analysis. However, a disadvantage of all these approaches is that they perform subspace analysis directly on the reshaped vector or matrix of pixel-level intensities, which is usually unstable under illumination or pose variation. In this paper, we propose to represent a face image as a local descriptor tensor, which is a combination of the descriptors of local regions (K*K-pixel patches) in the image and is more efficient than the popular Bag-Of-Feature (BOF) model for local descriptor combination. Furthermore, we propose to use a multilinear subspace learning algorithm (Supervised Neighborhood Embedding, SNE) for discriminant feature extraction from the local descriptor tensor of face images, which can preserve local sample structure in the feature space. We validate the proposed algorithm on the benchmark Yale and PIE databases, and experimental results show that the recognition rate of our method is greatly improved compared with conventional subspace analysis methods, especially for small numbers of training samples.

  • An ICA-Domain Shrinkage Based Poisson-Noise Reduction Algorithm and Its Application to Penumbral Imaging

    Xian-Hua HAN, Zensho NAKAO, Yen-Wei CHEN, Ryosuke KODAMA
    PAPER-Image Processing and Video Processing
    Vol: E88-D No:4, Page(s): 750-757

    Penumbral imaging is a technique which exploits the fact that spatial information can be recovered from the shadow or penumbra that an unknown source casts through a simple large circular aperture. Since the technique is based on linear deconvolution, it is sensitive to noise. In this paper, a two-step method is proposed for decoding penumbral images: first, a noise-reduction algorithm based on ICA-domain (independent component analysis-domain) shrinkage is applied to suppress the noise; second, the conventional linear deconvolution follows. The simulation results show that the reconstructed image is dramatically improved in comparison to that obtained without noise removal, and the proposed method is successfully applied to real experimental X-ray imaging.

  • Parzen-Window Based Normalized Mutual Information for Medical Image Registration

    Rui XU, Yen-Wei CHEN, Song-Yuan TANG, Shigehiro MORIKAWA, Yoshimasa KURUMI
    PAPER-Biological Engineering
    Vol: E91-D No:1, Page(s): 132-144

    Image registration can be seen as an optimization problem: design a cost function and then use an optimization method to find its minimum. Normalized mutual information is a widely used, robust choice of cost function in medical image registration. Its calculation is based on the joint histogram of the fixed and transformed moving images. Usually, only a discrete joint histogram is considered in the calculation of normalized mutual information. The discrete joint histogram does not allow the cost function to be explicitly differentiated, so only non-gradient-based optimization methods, such as Powell's method, can be used to seek the minimum. In this paper, a Parzen-window based method is proposed to estimate the continuous joint histogram, which makes it possible to derive a closed-form solution for the derivative of the cost function. With this, we successfully apply a gradient-based optimization method to registration. We also design a new kernel for the Parzen-window based method: a second-order polynomial kernel with a width of two. Because of its good theoretical characteristics, this kernel works better than other kernels, such as the cubic B-spline kernel and the first-order B-spline kernel, which are widely used in Parzen-window based estimation. Both rigid and non-rigid registration experiments are performed to show the improved behavior of the designed kernel. Additionally, the proposed method is successfully applied to a clinical CT-MR non-rigid registration that assists magnetic resonance (MR) guided microwave thermocoagulation of liver tumors.
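
    A minimal sketch of a Parzen-window joint histogram feeding normalized mutual information. The window below is a second-order polynomial of width two (an Epanechnikov-type kernel), which is an assumption about the exact kernel form; the bin count and test images are placeholders, and the closed-form derivative is not shown.

```python
import numpy as np

def window(t):
    """Second-order polynomial window of width two (Epanechnikov-type); an assumed form."""
    t = np.abs(t)
    return np.where(t < 1.0, 0.75 * (1.0 - t ** 2), 0.0)

def parzen_nmi(fixed, moving, bins=32):
    """Normalized mutual information from a Parzen-window joint histogram."""
    f = (fixed - fixed.min()) / (np.ptp(fixed) + 1e-12) * (bins - 1)
    m = (moving - moving.min()) / (np.ptp(moving) + 1e-12) * (bins - 1)
    centers = np.arange(bins)
    wf = window(f.ravel()[:, None] - centers[None, :])   # (n_pixels, bins) kernel weights
    wm = window(m.ravel()[:, None] - centers[None, :])
    joint = wf.T @ wm                                    # continuous joint histogram
    joint /= joint.sum()
    pf, pm = joint.sum(axis=1), joint.sum(axis=0)
    hf = -np.sum(pf[pf > 0] * np.log(pf[pf > 0]))
    hm = -np.sum(pm[pm > 0] * np.log(pm[pm > 0]))
    hfm = -np.sum(joint[joint > 0] * np.log(joint[joint > 0]))
    return (hf + hm) / hfm                               # Studholme's normalized MI

rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = 0.8 * a + 0.2 * rng.random((64, 64))                 # roughly "registered" pair
print(parzen_nmi(a, b), parzen_nmi(a, rng.random((64, 64))))
```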

  • Sparse and Low-Rank Matrix Decomposition for Local Morphological Analysis to Diagnose Cirrhosis

    Junping DENG, Xian-Hua HAN, Yen-Wei CHEN, Gang XU, Yoshinobu SATO, Masatoshi HORI, Noriyuki TOMIYAMA
    PAPER-Biological Engineering
    Publicized: 2014/08/26, Vol: E97-D No:12, Page(s): 3210-3221

    Chronic liver disease is a major worldwide health problem, and the diagnosis and staging of chronic liver diseases are important issues. In this paper, we propose a quantitative method of analyzing local morphological changes for accurate and practical computer-aided diagnosis of cirrhosis. Our method is based on sparse and low-rank matrix decomposition, since the matrix of liver shapes can be decomposed into two parts: a low-rank matrix, which can be considered similar to that of a normal liver, and a sparse error term that represents the local deformation. Compared with the previous global morphological analysis strategy based on the statistical shape model (SSM), the proposed method improves the accuracy of both normal and abnormal classifications. We also propose using the norm of the sparse error term as a simple measure for classification as normal or abnormal. The experimental results of the proposed method are better than those of the state-of-the-art SSM-based methods.
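
    A hedged sketch of sparse plus low-rank decomposition via a standard ADMM-style scheme (singular value thresholding for the low-rank part, soft-thresholding for the sparse error), with the norm of the sparse term as a per-subject score. The lambda and mu choices follow common RPCA defaults and the shape matrix is synthetic; neither is taken from the paper.

```python
import numpy as np

def shrink(X, tau):
    """Element-wise soft-thresholding (proximal operator of the L1 norm)."""
    return np.sign(X) * np.maximum(np.abs(X) - tau, 0.0)

def svt(X, tau):
    """Singular value thresholding (proximal operator of the nuclear norm)."""
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def sparse_low_rank(M, lam=None, mu=None, n_iter=200):
    """Split M into a low-rank part L and a sparse error S (ADMM-style iterations)."""
    m, n = M.shape
    lam = lam if lam is not None else 1.0 / np.sqrt(max(m, n))
    mu = mu if mu is not None else 0.25 * m * n / (np.abs(M).sum() + 1e-12)
    L, S, Y = np.zeros_like(M), np.zeros_like(M), np.zeros_like(M)
    for _ in range(n_iter):
        L = svt(M - S + Y / mu, 1.0 / mu)
        S = shrink(M - L + Y / mu, lam / mu)
        Y = Y + mu * (M - L - S)
    return L, S

# Toy shape matrix: each column is one subject's shape vector (low rank plus sparse bumps)
rng = np.random.default_rng(0)
normal_part = rng.standard_normal((100, 3)) @ rng.standard_normal((3, 30))
local_deform = (rng.random((100, 30)) < 0.02) * 5.0
L, S = sparse_low_rank(normal_part + local_deform)

# Norm of the sparse error term per subject as a simple normal/abnormal score
scores = np.linalg.norm(S, ord=1, axis=0)
print(scores.round(2))
```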

  • An Adaptive Backpropagation Algorithm for Limited-Angle CT Image Reconstruction

    Fath El Alem F. ALI, Zensho NAKAO, Yen-Wei CHEN, Kazunori MATSUO, Izuru OHKAWA
    PAPER
    Vol: E83-A No:6, Page(s): 1049-1058

    This paper presents a neural backpropagation algorithm for reconstructing two-dimensional CT images from a small number of projection data. The paper extends the work in [1], in which a backpropagation algorithm is applied to the CT image reconstruction problem. The delta rule of the ordinary backpropagation algorithm is modified using a 'secondary' teaching signal and the 'resilient backpropagation' scheme. The results obtained are presented along with those of two well-known conventional methods, MART and EMML. A quantitative evaluation reveals the effectiveness of the proposed algorithm.

  • Segmentation of Liver in Low-Contrast Images Using K-Means Clustering and Geodesic Active Contour Algorithms Open Access

    Amir H. FORUZAN, Yen-Wei CHEN, Reza A. ZOROOFI, Akira FURUKAWA, Yoshinobu SATO, Masatoshi HORI, Noriyuki TOMIYAMA
    PAPER-Medical Image Processing
    Vol: E96-D No:4, Page(s): 798-807

    In this paper, we present an algorithm to segment the liver in low-contrast CT images. As the first step of our algorithm, we define a search range for the liver boundary. Then, the EM algorithm is utilized to estimate the parameters of a Gaussian mixture model that conforms to the intensity distribution of the liver. Using the statistical parameters of the intensity distribution, we introduce a new thresholding technique to classify image pixels. We assign a distance feature vector to each pixel and segment the liver by a K-means clustering scheme. This initial boundary of the liver is conditioned by the Fourier transform. Then, a geodesic active contour algorithm uses the boundaries to find the final surface. The novelty of our method is the proper selection and combination of sub-algorithms to find the border of an object in a low-contrast image. The number of parameters in the proposed method is low, and the parameters have a low range of variation. We applied our method to 30 datasets including normal and abnormal cases with low-contrast/high-contrast images, and it was extensively evaluated both quantitatively and qualitatively. The minimum Dice similarity measure of the results is 0.89. Assessment of the results proves the potential of the proposed method for segmentation of low-contrast images.
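
    A small sketch of the statistical part of the pipeline, assuming scikit-learn: EM fits a Gaussian mixture to (toy, 1-D) liver intensities, a threshold is derived from the dominant mode, and K-means refines the classification on simple distance features. The Fourier-based conditioning and geodesic active contour steps are omitted.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy 1-D intensity samples from a search region (liver mode plus a brighter tissue mode)
intensities = np.concatenate([rng.normal(100, 8, 4000), rng.normal(140, 10, 1000)])[:, None]

# Step 1: EM fits a Gaussian mixture to the intensity distribution
gmm = GaussianMixture(n_components=2, random_state=0).fit(intensities)
dominant = int(np.argmax(gmm.weights_))
mean = gmm.means_[dominant, 0]
std = float(np.sqrt(gmm.covariances_[dominant, 0, 0]))

# Step 2: statistical thresholding keeps samples near the dominant (liver) mode
mask = np.abs(intensities[:, 0] - mean) < 2.5 * std

# Step 3: K-means on simple distance features refines the initial classification
features = np.column_stack([intensities[:, 0], np.abs(intensities[:, 0] - mean)])
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(features)
print(mask.mean(), np.bincount(labels))
```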

  • An Efficient Deep Learning Based Coarse-to-Fine Cephalometric Landmark Detection Method

    Yu SONG, Xu QIAO, Yutaro IWAMOTO, Yen-Wei CHEN, Yili CHEN
    PAPER-Image Processing and Video Processing
    Publicized: 2021/05/14, Vol: E104-D No:8, Page(s): 1359-1366

    Accurate and automatic quantitative cephalometric analysis is of great importance in orthodontics. The fundamental step of cephalometric analysis is to annotate anatomical landmarks of interest on X-ray images, and computer-aided automatic annotation remains an open topic. In this paper, we propose an efficient deep learning-based coarse-to-fine approach to realize accurate landmark detection. In the coarse detection step, we train a deep learning-based deformable transformation model on the training samples. We register test images to the reference image (one training image) using the trained model to predict coarse landmark locations on the test images; thus, regions of interest (ROIs) that include the landmarks can be located. In the fine detection step, we use trained deep convolutional neural networks (CNNs) to detect landmarks in the ROI patches. For each landmark there is one corresponding network, which directly regresses the landmark's coordinates. The fine step can be considered a refinement or fine-tuning based on the coarse detection step. We validated the proposed method on the public dataset from the 2015 International Symposium on Biomedical Imaging (ISBI) grand challenge. Compared with the state-of-the-art method, we not only achieved comparable detection accuracy (a mean radial error of about 1.0-1.6 mm) but also greatly shortened the computation time (4 seconds per image).
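
    The fine step can be sketched as a per-landmark CNN that regresses coordinates from an ROI patch; the architecture, patch size and normalization below are assumptions for illustration, not the paper's network.

```python
import torch
import torch.nn as nn

class LandmarkRegressor(nn.Module):
    """Small CNN that regresses one landmark's (x, y) coordinates from an ROI patch."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)              # direct regression to (x, y)

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

# One regressor per landmark, trained on ROI patches cropped around the coarse estimate
model = LandmarkRegressor()
patches = torch.randn(8, 1, 64, 64)               # batch of ROI patches
targets = torch.rand(8, 2)                        # normalized ground-truth coordinates
loss = nn.functional.mse_loss(model(patches), targets)
loss.backward()
print(loss.item())
```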

  • View-Based Object Recognition Using ND Tensor Supervised Neighborhood Embedding

    Xian-Hua HAN, Yen-Wei CHEN, Xiang RUAN
    PAPER-Pattern Recognition
    Vol: E95-D No:3, Page(s): 835-843

    In this paper, we propose N-Dimensional (ND) Tensor Supervised Neighborhood Embedding (ND TSNE) for discriminant feature representation, which is used for view-based object recognition. ND TSNE uses a general Nth-order tensor discriminant and neighborhood-embedding analysis approach for object representation. The benefits of ND TSNE include: (1) a natural way of representing data without losing structure information, i.e., the information about the relative positions of pixels or regions; (2) a reduction in the small-sample-size problem, which occurs in conventional supervised learning because the number of training samples is much smaller than the dimensionality of the feature space; (3) preservation of a neighborhood structure in the tensor feature space for object recognition and a good convergence property in the training procedure. With the tensor-subspace features, a random forest is used as a multi-way classifier for object recognition, which is much easier to train and test than a multi-way SVM. We demonstrate the performance advantages of the proposed approach over existing techniques by experiments on the COIL-100 and ETH-80 datasets.
