
Keyword Search Result

[Keyword] ATI (18,690 hits)

Results 9841-9860 of 18,690

  • Improving Accuracy of Recommender System by Item Clustering

    KhanhQuan TRUONG  Fuyuki ISHIKAWA  Shinichi HONIDEN  

     
    PAPER

      Vol:
    E90-D No:9
      Page(s):
    1363-1373

A Recommender System (RS) predicts users' ratings of items and recommends highly rated items to them. In recent years, RS has played an increasingly important role in the agent research field, and a great deal of work has tried to apply agent technology to RS. Collaborative Filtering, one of the most widely used approaches to rating prediction in Recommender Systems, predicts a user's rating of an item by aggregating the ratings given by users with similar preferences. In existing approaches, user similarity is usually computed over the whole set of items. However, because the number of items is often very large, and so is their diversity, users with similar preferences in one category may judge items of another kind quite differently. To deal with this problem, we propose a method of clustering items so that, within a cluster, the similarity between users does not change significantly from item to item. After the item clustering phase, when predicting a user's rating of an item, we aggregate only the ratings of users who have similar preferences to that user within the cluster of that item. Experiments evaluating our approach were carried out on a real dataset taken from MovieLens, a movie recommendation web site. The results suggest that our approach improves prediction accuracy compared to existing approaches.
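
The cluster-restricted prediction step described above can be sketched as follows. The rating matrix, the similarity measure (cosine over co-rated items), and the cluster map are illustrative stand-ins, not the paper's exact formulation:

```python
import numpy as np

def predict_in_cluster(ratings, user, item, clusters):
    """Predict ratings[user, item] from users similar to `user`, with
    similarity computed only over items in the same cluster as `item`
    (a rough sketch of cluster-restricted collaborative filtering)."""
    cluster = clusters[item]                      # item indices in this cluster
    target = ratings[user, cluster]
    sims, contribs = [], []
    for other in range(ratings.shape[0]):
        if other == user:
            continue
        vec = ratings[other, cluster]
        mask = (target > 0) & (vec > 0)           # co-rated items only
        if mask.sum() < 2:
            continue
        a, b = target[mask], vec[mask]
        sim = a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
        if sim > 0 and ratings[other, item] > 0:
            sims.append(sim)
            contribs.append(sim * ratings[other, item])
    return sum(contribs) / sum(sims) if sims else 0.0
```

A full system would mean-center ratings and learn the clusters from data; here the cluster map is given by hand.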

  • A Method of Measuring Gain in Liquids Based on the Friis Transmission Formula in the Near-Field Region

    Nozomu ISHII  Takuhei AKAGAWA  Ken-ichi SATO  Lira HAMADA  Soichi WATANABE  

     
    PAPER-Measurements

      Vol:
    E90-B No:9
      Page(s):
    2401-2407

In the 300 MHz to 3 GHz range, probes used to measure the specific absorption rate (SAR) of mobile communication devices are usually calibrated using a rectangular waveguide filled with tissue-equivalent liquid. Above 3 GHz, however, this conventional calibration can be inaccurate because the diameter of the probe is comparable to the cross-sectional dimension of the waveguide, so the authors have developed an alternative SAR probe calibration method based on a different principle. In the proposed method, the gain of a reference antenna in the liquid is first evaluated using the two-antenna method based on the Friis transmission formula in a conducting medium. The electric field intensity radiated by the reference antenna is then related to the output voltage of the SAR probe at a given point in the liquid. However, the fields are strongly attenuated in the liquid, making gain calibration in the far-field region impossible. To overcome this difficulty, the Friis transmission formula in a conducting medium must be extended to the near-field region. Here, we report simulation and experimental results on gain estimation based on the extended Friis transmission formula, which holds in the near-field region, and test the validity of the new formula.
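
In the far field, the two-antenna method the abstract mentions amounts to inverting the Friis formula with an extra attenuation term for the lossy medium. A sketch, assuming two identical antennas; the paper's actual contribution is the near-field extension, which adds further distance-dependent terms not modelled here:

```python
import math

def gain_two_antenna(pr_over_pt, d, lam, alpha):
    """Two-antenna gain estimate in a lossy medium from the far-field
    relation Pr/Pt = G^2 * (lam / (4*pi*d))^2 * exp(-2*alpha*d),
    solved for the common gain G of two identical antennas.
    d: separation [m], lam: wavelength in the medium [m],
    alpha: attenuation constant [Np/m]."""
    return math.sqrt(pr_over_pt) * (4 * math.pi * d / lam) * math.exp(alpha * d)
```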

  • A Mathematical Analysis on Error Propagation of EREC and Its Application to Optimal Channel-Matched Searching Pattern for Robust Transmission of Coded Video

    Yong-Goo KIM  Yungho CHOI  Yoonsik CHOE  

     
    PAPER-Multimedia Systems for Communications

      Vol:
    E90-B No:9
      Page(s):
    2571-2587

Error resilient entropy coding (EREC) provides an efficient resynchronization method for coded bitstreams that may be corrupted by transmission errors. The technique has recently gained prominence because it achieves fast resynchronization without sizable overhead, thereby providing graceful quality degradation as network conditions worsen. This paper presents a novel framework for analyzing the performance of EREC in terms of the error probability in decoding a basic resynchronization unit (RU) over various error-prone networks. To show the feasibility of the proposed framework, this paper also proposes a novel EREC algorithm based on a slightly modified H.263 bitstream syntax. The proposed scheme minimizes the effect of errors on low-frequency DCT coefficients and incorporates a near-optimal channel-matched searching pattern (SP), which guarantees the best possible quality of the reproduced video. Given the number of bits generated for each RU, the near-optimal SP is produced by the proposed iterative deterministic partial SP update method, which reduces the complexity of finding the optimal solution from O((N-1)!) to O(m·N^2). The proposed EREC algorithm significantly improves decoded video quality, especially when the bit error rate is in the range of 10^-3 to 10^-4. Up to 5 dB improvement in PSNR was observed in a single video frame.
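
The basic EREC mechanism, which the paper's searching pattern optimizes, packs variable-length blocks into equal fixed-length slots. A minimal sketch with the simplest (+1 offset) search pattern; the symbols and slot size are illustrative:

```python
def erec_pack(blocks, slot_len):
    """Sketch of EREC slot filling: each variable-length block first
    fills its own fixed-length slot; residual symbols are then placed
    in the free space of other slots, visited with a simple +1 offset
    search pattern (the paper optimizes this search pattern)."""
    n = len(blocks)
    slots, rest = [], []
    for blk in blocks:
        take = min(len(blk), slot_len)
        slots.append(list(blk[:take]))
        rest.append(list(blk[take:]))
    for off in range(1, n):                # widening offset search
        for i in range(n):
            j = (i + off) % n
            room = slot_len - len(slots[j])
            if room > 0 and rest[i]:
                moved, rest[i] = rest[i][:room], rest[i][room:]
                slots[j].extend(moved)
    return slots
```

Decoding reverses the same deterministic search, so a decoder regains synchronization at every slot boundary even after bit errors.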

  • Extraction of Finger-Vein Patterns Using Maximum Curvature Points in Image Profiles

    Naoto MIURA  Akio NAGASAKA  Takafumi MIYATAKE  

     
    PAPER

      Vol:
    E90-D No:8
      Page(s):
    1185-1194

A biometric system for identifying individuals using the pattern of veins in a finger was previously proposed. The system has the advantage of being resistant to forgery because the pattern is inside the finger. Infrared light is used to capture an image of the finger showing the vein patterns, whose widths and brightnesses vary over time as a result of fluctuations in the amount of blood in the veins, depending on temperature, physical condition, etc. To robustly extract the precise details of the depicted veins, we developed a method of calculating local maximum curvatures in cross-sectional profiles of a vein image. This method extracts the centerlines of the veins consistently, without being affected by fluctuations in vein width and brightness, so pattern matching is highly accurate. Experimental results show that our method extracted patterns robustly when vein width and brightness fluctuated, and that the equal error rate for personal identification was 0.0009%, much better than that of conventional methods.
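
The core of the method is the curvature of a 1-D cross-sectional intensity profile, with vein centerlines at positions of locally maximal positive curvature (valleys). A sketch of that step only, using finite differences; the full method also scores and links candidates across many profiles:

```python
import numpy as np

def profile_curvature(profile):
    """Curvature kappa = f'' / (1 + f'^2)^(3/2) along a 1-D
    cross-sectional intensity profile, via finite differences."""
    f = np.asarray(profile, dtype=float)
    d1 = np.gradient(f)
    d2 = np.gradient(d1)
    return d2 / (1.0 + d1 ** 2) ** 1.5

def centre_candidates(profile):
    """Interior positions where curvature is positive and locally
    maximal: candidate vein centerline points."""
    k = profile_curvature(profile)
    return [i for i in range(1, len(k) - 1)
            if k[i] > 0 and k[i] >= k[i - 1] and k[i] >= k[i + 1]]
```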

  • Lighting Independent Skin Tone Detection Using Neural Networks

    Marvin DECKER  Minako SAWAKI  

     
    LETTER

      Vol:
    E90-D No:8
      Page(s):
    1195-1198

Skin tone detection under varying illuminant intensity and/or chromaticity typically suffers from either high computational cost or low accuracy. Here a technique is presented that integrates chromaticity and intensity normalization with a neural skin-tone classification network to achieve robust classification faster than other approaches.
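
The intensity-normalization step the letter builds on is the standard reduction of RGB to chromaticity coordinates, which cancels a uniform scaling of the illumination; the feature choice here is illustrative, not the letter's exact network input:

```python
import numpy as np

def normalised_chromaticity(rgb):
    """Map RGB values to (r, g) chromaticity coordinates,
    r = R/(R+G+B), g = G/(R+G+B): scaling all channels by the
    same factor (brighter/dimmer light) leaves the result unchanged."""
    rgb = np.asarray(rgb, dtype=float)
    s = rgb.sum(axis=-1, keepdims=True)
    s[s == 0] = 1.0                       # avoid division by zero on black pixels
    return (rgb / s)[..., :2]
```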

  • Building Systolic Messy Arrays for Infinite Iterative Algorithms

    Makio ISHIHARA  

     
    LETTER-General Fundamentals and Boundaries

      Vol:
    E90-A No:8
      Page(s):
    1719-1723

The size-dependent array problem is a limitation of systolic arrays: the size of the array bounds the size of the calculation it can perform (in a do-loop structure, how many times the loop repeats and how deeply loops are nested), so a systolic array cannot handle larger calculations. To address this problem, spiral systolic arrays have been studied; they use non-adjacent connections between PEs, such as loop paths that send data back, so that data flows over the array independently of the array's size. This paper takes an approach that requires no non-adjacent connections, discussing systolic messy arrays for infinite iterative algorithms that are independent of the size of the calculation. First, a systolic messy array called the two-square shape is introduced; then its properties are summarized: memory function, cyclic addition, and cyclic multiplication. Finally, a way of building systolic messy arrays that compute infinite iterative algorithms is illustrated with concrete examples: an arithmetic progression, a geometric progression, N factorial, and the Fibonacci numbers.

  • An Approximation Method of the Quadratic Discriminant Function and Its Application to Estimation of High-Dimensional Distribution

    Shinichiro OMACHI  Masako OMACHI  Hirotomo ASO  

     
    PAPER

      Vol:
    E90-D No:8
      Page(s):
    1160-1167

In statistical pattern recognition, it is important to estimate the distribution of patterns precisely in order to achieve high recognition accuracy. In general, precise estimation of the distribution parameters requires a great number of sample patterns, especially when the feature vector obtained from the pattern is high-dimensional. For some pattern recognition problems, such as face recognition or character recognition, very high-dimensional feature vectors are necessary, and there are never enough sample patterns for estimating the parameters. In this paper, we focus on estimating the distribution of high-dimensional feature vectors from a small number of sample patterns. First, we define a function called the simplified quadratic discriminant function (SQDF). SQDF can be estimated from a small number of sample patterns and approximates the quadratic discriminant function (QDF), with fewer parameters and less computational time. The effectiveness of SQDF is confirmed by three types of experiments. Next, as an application of SQDF, we propose an algorithm for estimating the parameters of a normal mixture. The proposed algorithm is applied to face recognition and character recognition problems, which require high-dimensional feature vectors.
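
One common way to simplify the QDF, sketched below, is to keep only the dominant eigenvalues of the covariance and flatten the poorly estimated tail of the spectrum to a single constant. The choice of constant (the tail mean) is an assumption for illustration, not necessarily the paper's exact definition:

```python
import numpy as np

def sqdf(x, mu, cov, k):
    """Simplified quadratic discriminant: keep the k largest
    eigenvalues of the covariance and replace the remainder by one
    constant (their mean), reducing both the parameter count and the
    sensitivity to small-sample eigenvalue estimates.
    With k equal to the dimension, this is the full QDF:
    (x-mu)^T cov^{-1} (x-mu) + ln det(cov)."""
    vals, vecs = np.linalg.eigh(cov)
    vals, vecs = vals[::-1], vecs[:, ::-1]    # descending eigenvalue order
    lam = vals.copy()
    if k < lam.size:
        lam[k:] = vals[k:].mean()             # flatten the tail spectrum
    d = vecs.T @ (x - mu)                     # principal-axis coordinates
    return float(np.sum(d ** 2 / lam) + np.sum(np.log(lam)))
```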

  • A Novel Elliptic Curve Dynamic Access Control System

    Jyh-Horng WEN  Ming-Chang WU  Tzer-Shyong CHEN  

     
    PAPER-Fundamental Theories for Communications

      Vol:
    E90-B No:8
      Page(s):
    1979-1987

This study employs secret codes and secret keys based on the elliptic curve to construct an elliptic curve cryptosystem with a dynamic access control system. Consequently, the storage space needed for a secret key generated by the elliptic curve dynamic access control system is smaller than that needed for a secret key generated by the exponential operations of the secure filter (SF) based dynamic access control system. Using the elliptic curve to encrypt and decrypt on the secure filter improves both the efficiency and the security of using exponential operations on the secure filter in a dynamic access control system. With the proposed dynamic elliptic curve access control system, the trusted central authority (CA) can add or delete classes and relationships and change secret keys at any time, achieving efficient control and management. Furthermore, various possible attacks are used to analyze the security risks. Since attackers can obtain only the general equations of the elliptic curve dynamic access control system, they are unable to effectively perform an elliptic curve polynomial (ECP) conversion or to solve the elliptic curve discrete logarithm problem (ECDLP). Thus, the proposed elliptic curve dynamic access control system is secure.

  • A Recursive Data Least Square Algorithm and Its Channel Equalization Application

    Jun-Seok LIM  Jea-Soo KIM  Koeng-Mo SUNG  

     
    LETTER-Fundamental Theories for Communications

      Vol:
    E90-B No:8
      Page(s):
    2143-2146

Using the recursive generalized eigendecomposition method, we develop a recursive solution to the data least squares (DLS) problem, in which the error is assumed to lie in the data matrix only, and apply it to a linear channel equalizer. Simulations show that the DLS-based equalizer outperforms the ordinary least squares based one in a channel equalization problem.

  • A Dual-Mode Bluetooth Transceiver with a Two-Point-Modulated Polar-Loop Transmitter and a Frequency-Offset-Compensated Receiver

    Takashi OSHIMA  Masaru KOKUBO  

     
    PAPER-Circuit Theory

      Vol:
    E90-A No:8
      Page(s):
    1669-1678

A complete dual-mode transceiver supporting both conventional GFSK-modulated Bluetooth and Medium-Rate π/4-DQPSK-modulated Bluetooth is investigated and reported. The transmitter introduces a novel two-point-modulated polar-loop technique without global feedback to achieve reduced power consumption, small chip area, and high modulation accuracy. The receiver shares all circuits between the two operating modes except the demodulators and features a newly proposed cancellation technique for carrier-frequency offset. System and circuit simulations confirm that the transceiver meets all dual-mode Bluetooth specifications. The simulation results show that the transmitting power can exceed 10 dBm while achieving a total power efficiency above 30% and an RMS DEVM of 0.050. Simulation also confirms that the receiver is expected to attain a sensitivity of -85 dBm in both modes while satisfying the image-rejection and blocker-suppression specifications. The proposed transceiver will provide a low-cost, low-power, single-chip RF-IC solution for next-generation Bluetooth communication.

  • Low Complexity Resource Allocation Algorithm by Multiple Attribute Weighing and User Ranking for OFDMA Systems

    Maduranga LIYANAGE  Iwao SASASE  

     
    PAPER-Transmission Systems and Transmission Equipment for Communications

      Vol:
    E90-B No:8
      Page(s):
    2006-2015

We propose an effective low-complexity subcarrier allocation scheme for multiuser orthogonal frequency division multiple access (OFDMA) systems in downlink transmission. The proposed scheme takes multiple attributes of a user's channel, such as the carrier gain decrease rate and the variation from the mean channel gain of the system, to determine a rank for the user; subcarriers are then allocated according to each user's rank. Different channel characteristics are used to better understand a user's need for subcarriers and hence determine the user's priority. We also adopt an attribute weighting scheme to enhance performance. The scheme is computationally efficient, since it avoids both iterations for algorithm convergence and the water-filling calculations that become more complex as system parameters grow; low complexity is achieved by allocating subcarriers to users according to their determined rank. Our proposed scheme is simulated in comparison with other computationally efficient subcarrier allocation schemes as well as a conventional greedy allocation scheme, and is shown to achieve competitive results.
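
The rank-then-allocate idea can be sketched as below. The two attributes (mean gain and gain variability) and their weights are illustrative stand-ins for the paper's attributes and weighting scheme:

```python
import numpy as np

def rank_and_allocate(gains, n_sub_per_user):
    """Rank users by a weighted score of channel attributes, then let
    users pick their best remaining subcarriers in rank order: one
    pass, no iterations or water-filling.
    gains: (n_users, n_subcarriers) channel gain matrix."""
    n_users, n_sub = gains.shape
    mean_gain = gains.mean(axis=1)
    variability = gains.std(axis=1)
    # illustrative weighting: weaker / choppier channels get priority
    score = -0.7 * mean_gain + 0.3 * variability
    order = np.argsort(-score)                 # highest priority first
    free = set(range(n_sub))
    alloc = {}
    for u in order:
        best = sorted(free, key=lambda s: -gains[u, s])[:n_sub_per_user]
        alloc[u] = best
        free -= set(best)
    return alloc
```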

  • A Novel Defected Elliptical Pore Photonic Crystal Fiber with Ultra-Flattened Dispersion and Low Confinement Losses

    Nguyen Hoang HAI  Yoshinori NAMIHIRA  Feroza BEGUM  Shubi KAIJAGE  S.M. Abdur RAZZAK  Tatsuya KINJO  Nianyu ZOU  

     
    PAPER-Optoelectronics

      Vol:
    E90-C No:8
      Page(s):
    1627-1633

This paper reports a novel design of photonic crystal fiber (PCF) with nearly zero ultra-flattened dispersion characteristics. We describe chromatic dispersion controllability, taking non-uniform air-hole structures into consideration; by optimizing these structures, ultra-flattened zero-dispersion PCFs can be designed efficiently. We show numerically that the proposed non-uniform air-cladding structures achieve flat dispersion characteristics as well as extremely low confinement losses. As an example, we present a PCF with a flattened dispersion of 0.27 ps/(nm·km) over the 1.5 µm to 1.8 µm wavelength range and confinement losses of less than 10^-11 dB/m. Finally, we point out that full controllability of the chromatic dispersion and confinement losses, together with the fabrication technique, are the main advantages of the proposed PCF structure.

  • Framework for PCE Based Multi-Layer Service Networks

    Mallik TATIPAMULA  Eiji OKI  Ichiro INOUE  Kohei SHIOMOTO  Zafar ALI  

     
    SURVEY PAPER-Traffic Engineering and Multi-Layer Networking

      Vol:
    E90-B No:8
      Page(s):
    1903-1911

Implementing fast-responding multi-layer service network (MLSN) functionality allows the IP/MPLS service network logical topology and the optical virtual network topology to be reconfigured dynamically according to the traffic pattern on the network. Direct links can be created or removed in the logical IP/MPLS service network topology when extra capacity is needed in the MLSN core or when existing capacity is no longer required. Reconfiguring the logical and virtual network topologies constitutes a new way in which traffic engineering (TE) can solve or avoid network congestion and service degradation. As both the IP and optical network layers are involved, this is called multi-layer traffic engineering. We previously proposed a border-model-based MLSN architecture in [5]. In this paper, we define the realization of multi-layer TE functions using a Path Computation Element (PCE) for the border-model-based MLSN, define nodal requirements for multi-layer TE, and introduce requirements for the communication protocol between the PCC (Path Computation Client) and the PCE. We present Virtual Network Topology (VNT) scenarios and the steps involved, with examples of PCE-based VNT reconfiguration triggered by network failure, where a VNT is an accumulation of network resources from different layers.

  • Improvement of the Stability and Cancellation Performance for the Active Noise Control System Using the Simultaneous Perturbation Method

    Yukinobu TOKORO  Yoshinobu KAJIKAWA  Yasuo NOMURA  

     
    PAPER

      Vol:
    E90-A No:8
      Page(s):
    1555-1563

In this paper, we propose introducing frequency-domain variable perturbation control and a leaky algorithm into the frequency domain time difference simultaneous perturbation (FDTDSP) method, in order to improve the cancellation performance and stability of active noise control (ANC) systems based on the perturbation method. Since an ANC system using the perturbation method does not need a model of the secondary path, it has the advantage of being able to track secondary-path changes. However, the conventional perturbation method suffers degraded cancellation performance over the entire frequency band when the frequency response of the secondary path has dips, because the magnitude of the perturbation is controlled in the time domain; the stability of the method also deteriorates as a consequence of the dips. The proposed method, in contrast, improves cancellation performance by providing an appropriate perturbation magnitude over the entire frequency band and by stabilizing system operation. The effectiveness of the proposed method is demonstrated through simulation and experimental results.
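
The model-free property the abstract relies on comes from simultaneous-perturbation gradient estimation: the gradient of the cost is approximated from two perturbed evaluations, with no model of the path between filter and cost. A minimal SPSA-style sketch on a generic cost function; step sizes and the quadratic test cost are illustrative:

```python
import numpy as np

def spsa_minimise(f, w, a=0.1, c=0.1, iters=300, seed=0):
    """Simultaneous-perturbation descent: estimate the gradient of
    f at w from f(w + c*delta) and f(w - c*delta) with a random
    +/-1 perturbation delta, then take a gradient step. Only cost
    evaluations are needed, no model of the system."""
    rng = np.random.default_rng(seed)
    for _ in range(iters):
        delta = rng.choice([-1.0, 1.0], size=w.shape)
        # for +/-1 components, dividing by delta_i equals multiplying by it
        grad = (f(w + c * delta) - f(w - c * delta)) / (2 * c) * delta
        w = w - a * grad
    return w
```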

  • Bandwidth Extension with Hybrid Signal Extrapolation for Audio Coding

    Chatree BUDSABATHON  Akinori NISHIHARA  

     
    PAPER

      Vol:
    E90-A No:8
      Page(s):
    1564-1569

In this paper, we propose a blind method using hybrid signal extrapolation at the decoder to regenerate the lost high-frequency components removed by encoders. First, the spectral resolution of the decoded signal is enhanced by time-domain linear predictive extrapolation, and the cut-off frequency of each frame is estimated to avoid a spectral gap between the end of the original low-frequency spectrum and the beginning of the reconstructed high-frequency spectrum. Then, exploiting the correlation between the high- and low-frequency spectra, the low-frequency spectral components are used to reconstruct the high-frequency components by frequency-domain linear predictive extrapolation. Experimental results show the effectiveness of the proposed method in terms of SNR and human listening tests. The method can reconstruct lost high-frequency components to improve the perceptual quality of audio independently of the compression method.
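
The building block used in both domains is linear predictive extrapolation: fit an autoregressive predictor to the known samples and run it forward. A self-contained sketch (least-squares AR fit; the paper applies this to time samples and to spectral coefficients):

```python
import numpy as np

def ar_extrapolate(x, order, n_extra):
    """Fit an AR(order) predictor to x by least squares and
    extrapolate n_extra further samples."""
    x = np.asarray(x, dtype=float)
    # each row holds `order` past samples; the target is the next sample
    rows = np.array([x[i:i + order] for i in range(len(x) - order)])
    target = x[order:]
    a, *_ = np.linalg.lstsq(rows, target, rcond=None)
    out = list(x)
    for _ in range(n_extra):
        out.append(float(np.dot(a, out[-order:])))
    return np.array(out)
```

A noise-free sinusoid is exactly AR(2), so the extrapolation continues it almost perfectly, which is what makes the approach attractive for tonal spectra.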

  • A New Adaptive Filter Algorithm for System Identification Using Independent Component Analysis

    Jun-Mei YANG  Hideaki SAKAI  

     
    PAPER

      Vol:
    E90-A No:8
      Page(s):
    1549-1554

This paper proposes a new adaptive filter algorithm for system identification using an independent component analysis (ICA) technique, which separates the signal from the noisy observation under the assumption that the signal and noise are independent. We first introduce an augmented state-space expression of the observed signal, casting the problem in terms of ICA. Using a nonparametric Parzen window density estimator and the stochastic information gradient, we derive an adaptive algorithm that separates the noise from the signal. The proposed ICA-based algorithm does not suppress the noise in the least-mean-square sense; instead, it maximizes the independence between the signal part and the noise. The computational complexity of the proposed algorithm is compared with that of the standard NLMS algorithm, and the stationary point of the algorithm is analyzed using an averaging method. The new ICA-based algorithm can be used directly in an acoustic echo canceller without a double-talk detector. Simulations demonstrate the superiority of the ICA-based method over the conventional NLMS algorithm.

  • Adaptive Processing over Distributed Networks

    Ali H. SAYED  Cassio G. LOPES  

     
    INVITED PAPER

      Vol:
    E90-A No:8
      Page(s):
    1504-1510

The article describes recent adaptive estimation algorithms over distributed networks. The algorithms rely on local collaboration and exploit the space-time structure of the data: each node communicates with its neighbors to exploit the spatial dimension, while it also evolves locally to account for the time dimension. Algorithms of the least-mean-squares and least-squares types are described, and both incremental and diffusion strategies are considered.
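
A diffusion strategy of the kind surveyed can be sketched as an adapt-then-combine loop: each node takes a local LMS step on its own data, then averages its estimate with its neighbors'. The network, step size, and uniform combination weights below are illustrative:

```python
import numpy as np

def diffusion_lms(U, d, neighbors, mu=0.01, iters=200):
    """Adapt-then-combine diffusion LMS over a network.
    U: (n_nodes, n_samples, m) regressors; d: (n_nodes, n_samples)
    desired signals; neighbors[k]: node indices averaged by node k
    (including k itself). Returns per-node estimates (n_nodes, m)."""
    n_nodes, n_samples, m = U.shape
    w = np.zeros((n_nodes, m))
    for t in range(iters):
        i = t % n_samples
        # adapt: local LMS step at every node
        for k in range(n_nodes):
            u = U[k, i]
            w[k] = w[k] + mu * u * (d[k, i] - u @ w[k])
        # combine: uniform averaging over the neighbourhood
        w = np.array([w[list(neighbors[k])].mean(axis=0)
                      for k in range(n_nodes)])
    return w
```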

  • Speech Enhancement by Overweighting Gain with Nonlinear Structure in Wavelet Packet Transform

    Sung-il JUNG  Younghun KWON  Sung-il YANG  

     
    LETTER-Fundamental Theories for Communications

      Vol:
    E90-B No:8
      Page(s):
    2147-2150

A speech enhancement method is proposed that can be implemented efficiently thanks to its use of the wavelet packet transform. The method uses a modified spectral subtraction with noise estimated by a least-squares line method, and applies an overweighting gain with a nonlinear structure per subband: the overweighting gain suppresses residual musical noise, and the subbands allow the weights to follow changes in the signal. Speech enhanced by our method has the following properties: 1) speech intelligibility is reliably preserved; 2) musical noise is efficiently reduced. Various assessments confirmed that the proposed method outperforms the compared methods under various noise-level conditions; in particular, it showed good results even at low SNR.
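
The generic mechanism behind such schemes is over-subtraction with a spectral floor, sketched below per bin; the paper's actual gains are computed per wavelet-packet subband with a nonlinear rule, so the fixed parameters here are illustrative:

```python
import numpy as np

def overweighted_subtraction(mag, noise_mag, alpha=2.0, floor=0.02):
    """Spectral subtraction with an over-subtraction (overweighting)
    factor alpha and a spectral floor: subtract alpha times the noise
    power, then clamp each bin to a small fraction of its input
    magnitude so musical-noise spikes cannot appear from negatives."""
    sub = mag ** 2 - alpha * noise_mag ** 2
    return np.sqrt(np.maximum(sub, (floor * mag) ** 2))
```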

  • Generation of Training Data by Degradation Models for Traffic Sign Symbol Recognition

    Hiroyuki ISHIDA  Tomokazu TAKAHASHI  Ichiro IDE  Yoshito MEKADA  Hiroshi MURASE  

     
    PAPER

      Vol:
    E90-D No:8
      Page(s):
    1134-1141

We present a novel training method for recognizing traffic sign symbols. Symbol images captured by a car-mounted camera suffer from various forms of image degradation, so similarly degraded images should be used as training data. Our method artificially generates such training data from the original templates of traffic sign symbols: degradation models and a GA-based algorithm that simulates actual captured images are established. The proposed method enables us to obtain training data for all categories without exhaustively collecting real images. Experimental results show the effectiveness of the proposed method for traffic sign symbol recognition.
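
The generation step can be sketched with generic degradations; the box blur and Gaussian noise below are stand-ins for the paper's degradation models, whose parameters are actually tuned by a GA against real captured images:

```python
import numpy as np

def degrade(template, blur=1, noise_sigma=0.1, seed=0):
    """Generate one degraded training image from a clean template:
    an edge-padded box blur of radius `blur` followed by additive
    Gaussian noise, clipped back to [0, 1]."""
    rng = np.random.default_rng(seed)
    img = template.astype(float)
    k = 2 * blur + 1
    pad = np.pad(img, blur, mode='edge')
    out = np.zeros_like(img)
    h, w = img.shape
    for dy in range(k):                 # direct (k x k) box blur
        for dx in range(k):
            out += pad[dy:dy + h, dx:dx + w]
    out /= k * k
    return np.clip(out + rng.normal(0, noise_sigma, img.shape), 0, 1)
```

Varying the seed and the degradation parameters yields an arbitrarily large labelled training set from a single template.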

  • Image Magnification by a Compact Method with Preservation of Preferential Components

    Akira HIRABAYASHI  

     
    PAPER

      Vol:
    E90-A No:8
      Page(s):
    1534-1541

Bicubic interpolation is one of the standard approaches to image magnification, since it is easy to compute and requires neither a priori knowledge nor a complicated model. Despite this convenience, images enlarged by bicubic interpolation are blurry, particularly at large magnification factors. This can be explained by four constraints underlying bicubic interpolation; by relaxing or replacing these constraints, we propose a new magnification method that performs better than bicubic interpolation while retaining its compactness. One of the constraints concerns the optimization criterion, which we replace with one requiring that all pixel values be reproduced and that preferential components of the input image be perfectly reconstructed. We show that, by choosing low-frequency or edge-enhancing components of the DCT basis as the preferential components, the proposed method performs better than bicubic interpolation with the same, or even less, computation.
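
The idea of magnifying while preserving chosen frequency components exactly can be illustrated in 1-D with Fourier zero-padding, where every original frequency component is preserved and the original samples are reproduced; the paper plays the analogous game with preferential DCT components in 2-D, so this is an assumption-laden analogy, not its method:

```python
import numpy as np

def fourier_magnify(x, factor):
    """Magnify a 1-D signal by zero-padding its spectrum: the original
    low-frequency components are carried over unchanged, and the
    result passes exactly through the original samples."""
    n = len(x)
    X = np.fft.rfft(x)
    Xp = np.zeros(factor * n // 2 + 1, dtype=complex)
    Xp[:len(X)] = X
    # scale by `factor` because irfft normalises by the new length
    return np.fft.irfft(Xp, n=factor * n) * factor
```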
