
Keyword Search Result

[Keyword] CRI(505hit)

321-340hit(505hit)

  • Integrated Development Environment for Knowledge-Based Systems and Its Practical Application

    Keiichi KATAMINE  Masanobu UMEDA  Isao NAGASAWA  Masaaki HASHIMOTO  

     
    PAPER-Knowledge Engineering and Robotics
    Vol: E87-D No:4  Page(s): 877-885

    The modeling of an application domain and a domain-specific knowledge description language are important for developing knowledge-based systems. A rapid-prototyping approach is suitable for such development, since the modeling and the language development proceed simultaneously. However, the programming languages and supporting environments usually used for prototyping are not necessarily adequate for developing practical applications. We have been developing an integrated development environment for knowledge-based systems that supports all development phases, from early prototyping to final commercial development. The environment, called INSIDE, is based on a Prolog abstract machine and provides all of the functions required for the development of practical applications in addition to the standard Prolog features. This enables both prototypes and practical applications to be developed in the same environment, allowing efficient development and maintenance. The effectiveness of INSIDE is illustrated by examples of its practical application.

  • A Compact 16-Channel Integrated Optical Subscriber Module for Economical Optical Access Systems

    Tomoaki YOSHIDA  Hideaki KIMURA  Shuichiro ASAKAWA  Akira OHKI  Kiyomi KUMOZAKI  

     
    PAPER-Fiber-Optic Transmission
    Vol: E87-B No:4  Page(s): 816-825

    We developed a compact, 16-channel integrated optical subscriber module for one-fiber bi-directional optical access systems; it can support more subscribers in a limited mounting space. For maximum compactness, we created 8-channel integrated super-compact optical modules, 4-channel integrated limiting amplifiers, and 4-channel integrated LD drivers for Fast Ethernet. We introduce a new simulation method to analyze the electrical crosstalk that degrades the sensitivity of the optical module, and apply a new IC architecture to reduce this crosstalk. We manufactured the optical subscriber module with these optical modules and ICs. Experiments confirm that the module offers a sensitivity of -27.3 dBm under 16-channel 125 Mbit/s simultaneous operation.

  • Using Nearest Neighbor Rule to Improve Performance of Multi-Class SVMs for Face Recognition

    Sung-Wook PARK  Jong-Wook PARK  

     
    LETTER-Multimedia Systems
    Vol: E87-B No:4  Page(s): 1053-1057

    The classification time required by conventional multi-class SVMs greatly increases with the number of pattern classes, because the needed set of binary-class SVMs becomes quite large. In this paper, we propose a method to reduce the number of classes by using the nearest neighbor rule (NNR) in the principal component analysis and linear discriminant analysis (PCA+LDA) feature subspace. The proposed method reduces the number of face classes by selecting the few classes closest to the test data projected into the PCA+LDA feature subspace. Experimental results show that our proposed method has a lower error rate than the nearest neighbor classification (NNC) method. Although our error rate is comparable to that of conventional multi-class SVMs, the classification process of our method is much faster.
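The class-reduction step described above can be sketched as follows. The function name, the toy class means, and the choice k=3 are illustrative assumptions, not the paper's data or exact pipeline; the feature vectors are assumed to already live in a PCA+LDA-style subspace.

```python
import numpy as np

def select_candidate_classes(x, class_means, k):
    # Distances from the test sample to each class mean in the feature
    # subspace; the k nearest classes are kept as SVM candidates.
    dists = np.linalg.norm(class_means - x, axis=1)
    return np.argsort(dists)[:k]

# Toy example: 5 class means in a 2-D feature subspace.
means = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [5.0, 5.0], [6.0, 6.0]])
candidates = select_candidate_classes(np.array([0.2, 0.1]), means, k=3)
# Binary SVMs then need to be evaluated only among these k classes:
# k*(k-1)/2 classifiers instead of N*(N-1)/2.
```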

  • On the Descriptional Complexity of Iterative Arrays

    Andreas MALCHER  

     
    PAPER
    Vol: E87-D No:3  Page(s): 721-725

    The descriptional complexity of iterative arrays (IAs) is studied. Iterative arrays are a parallel computational model with sequential processing of the input. It is shown that IAs, when compared to deterministic finite automata or pushdown automata, may provide savings in size that are not bounded by any recursive function, so-called non-recursive trade-offs. Additional non-recursive trade-offs are proven to exist between IAs working in linear time and IAs working in real time. Furthermore, the descriptional complexity of IAs is compared with that of cellular automata (CAs), and non-recursive trade-offs are proven between two restricted classes. Finally, it is shown that many decidability questions for IAs are undecidable and not semidecidable.

  • Superconducting Properties of EuBa2Cu3O7 Thin Films Deposited on R-Plane Sapphires with CeO2/Sm2O3 Buffer Layers Using Magnetron Sputtering

    Osamu MICHIKAMI  Yasuyuki OTA  Shinji KIKUCHI  

     
    PAPER
    Vol: E87-C No:2  Page(s): 197-201

    In order to improve the critical current density (Jc) of c-axis-oriented EuBa2Cu3O7 (c-EBCO) thin films deposited on R-plane sapphires (R-Al2O3) with a CeO2 buffer layer, insertion of an Sm2O3 buffer layer and optimization of its deposition conditions were attempted. The effects of the substrate temperature and film thickness of the Sm2O3 buffer layer on the orientation, crystallinity, surface morphology and superconducting properties of EBCO thin films were examined. As a result, EBCO thin films with Jc = 5.7 MA/cm2 at 77.3 K were obtained on a sapphire with a CeO2(80 Å)/Sm2O3(200 Å) buffer layer. The epitaxial relations of the sputter-deposited films were also clarified.

  • A Novel Contour Description with Expansion Ability Using Extended Fractal Interpolation Functions

    Satoshi UEMURA  Miki HASEYAMA  Hideo KITAJIMA  

     
    PAPER-Image Processing, Image Pattern Recognition
    Vol: E87-D No:2  Page(s): 453-462

    In this paper, a novel description method of the contour of a shape using extended fractal interpolation functions (EFIFs) is presented. Although the scope of application of traditional FIFs has been limited to cases in which a given signal is represented by a single-valued function, the EFIFs derived by the introduction of a new parameter can describe a multiple-valued signal such as the contour of a shape with a high level of accuracy. Furthermore, the proposed description method possesses the useful property that once a given contour has been modeled by the proposed description method, the shape can be easily expanded at an arbitrary expansion rate. Experimental results show the effectiveness and usefulness of the proposed description method for representing contours.
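The classic single-valued FIF that the paper extends is built from affine maps w_i(x, y) = (a_i x + e_i, c_i x + d_i y + f_i) pinned to the interpolation points. The sketch below constructs those map coefficients from given points and vertical scaling factors d_i; it is a generic FIF construction under that standard formulation, not the paper's multiple-valued extension.

```python
def fif_maps(xs, ys, d):
    """Affine maps of a fractal interpolation function through the points
    (xs[i], ys[i]); d[i] is the vertical scaling factor (|d[i]| < 1).
    Each map sends the whole interval [x0, xN] onto [x_{i-1}, x_i] while
    pinning the endpoint values, so iterating the maps traces the FIF."""
    x0, xN, y0, yN = xs[0], xs[-1], ys[0], ys[-1]
    span = xN - x0
    maps = []
    for i in range(1, len(xs)):
        a = (xs[i] - xs[i - 1]) / span
        e = (xN * xs[i - 1] - x0 * xs[i]) / span
        c = (ys[i] - ys[i - 1] - d[i - 1] * (yN - y0)) / span
        f = (xN * ys[i - 1] - x0 * ys[i]
             - d[i - 1] * (xN * y0 - x0 * yN)) / span
        maps.append((a, e, c, d[i - 1], f))
    return maps
```

The endpoint conditions (w_i maps (x0, y0) to (x_{i-1}, y_{i-1}) and (xN, yN) to (x_i, y_i)) can be checked directly on the returned coefficients.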

  • New Stopping Criterion of Turbo Decoding for Turbo Processing

    Dong-Kyoon CHO  Byeong-Gwon KANG  Yong-Seo PARK  Keum-Chan WHANG  

     
    LETTER-Fundamental Theories
    Vol: E87-B No:1  Page(s): 161-164

    A new stopping criterion for turbo decoding based on the maximum a posteriori (MAP) decoder is proposed and applied to the turbo processing system. The new criterion, termed the combined parity check (CPC), counts the sign differences between the combined parity bits and the parity bits re-encoded from the decoded information bits. The CPC requires decoded parity bits and a turbo encoder, but it performs better in terms of the bit error rate and the average number of iterations in the turbo processing system.
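The sign-difference count at the heart of such a criterion can be sketched as follows. The helper names and the LLR sign convention (negative LLR means bit 1) are illustrative assumptions; the paper's exact parity-combining rule is not reproduced here.

```python
import numpy as np

def cpc_sign_differences(combined_parity_llr, reencoded_parity_bits):
    # Hard-decide the combined parity values (LLR < 0 -> bit 1) and count
    # disagreements with the parity bits re-encoded from the current
    # hard-decided information bits.
    hard = (np.asarray(combined_parity_llr) < 0).astype(int)
    return int(np.sum(hard != np.asarray(reencoded_parity_bits)))

def should_stop(combined_parity_llr, reencoded_parity_bits):
    # Stop iterating once no sign differences remain.
    return cpc_sign_differences(combined_parity_llr, reencoded_parity_bits) == 0
```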

  • Analyzing the Impact of Data Errors in Safety-Critical Control Systems

    Orjan ASKERDAL  Magnus GAFVERT  Martin HILLER  Neeraj SURI  

     
    PAPER-Verification and Dependability Analysis
    Vol: E86-D No:12  Page(s): 2623-2633

    Computers are increasingly used for implementing control algorithms in safety-critical embedded applications, such as engine control, braking control and flight surface control. Consequently, computer errors can have severe impact on the safety of such systems. Addressing the coupling of control performance with computer related errors, this paper develops a methodology for analyzing the impacts data errors have on control system dependability. The impact of a data error is measured as the resulting control error. We use maximum bounds on this measure as the criterion for control system failure (i.e., if the control error exceeds a certain threshold, the system has failed). In this paper we a) develop suitable models of computer faults for analysis of control level effects and related analysis methods, and b) apply traditional control theory analysis methods for understanding the impacts of data errors on system dependability. An automobile slip-control brake-system is used as an example showing the viability of our approach.
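The failure criterion described above (a data error counts as a failure only if the resulting control error exceeds a threshold) can be illustrated with a toy closed loop. The first-order plant, gain, and single corrupted sensor sample below are assumptions for illustration, not the paper's automobile brake-system model.

```python
def simulate(steps, error_step=None, error_value=0.0, kp=0.5, setpoint=1.0):
    # Simple proportional loop: x advances by kp * (setpoint - measurement).
    # Optionally corrupt one sensor reading to model a transient data error.
    x, trace = 0.0, []
    for t in range(steps):
        measurement = x + (error_value if t == error_step else 0.0)
        x = x + kp * (setpoint - measurement)
        trace.append(x)
    return trace

def control_error_exceeds(threshold, **fault):
    # Control error = deviation of the faulty run from the error-free run;
    # the system "fails" only if that deviation exceeds the threshold.
    nominal = simulate(20)
    faulty = simulate(20, **fault)
    return max(abs(a - b) for a, b in zip(nominal, faulty)) > threshold
```

With feedback, a small sensor error is absorbed by the loop while a large one drives the control error past the bound, which is exactly the distinction the failure criterion captures.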

  • Critical Path Selection for Deep Sub-Micron Delay Test and Timing Validation

    Jing-Jia LIOU  Li-C. WANG  Angela KRSTIĆ  Kwang-Ting (Tim) CHENG  

     
    PAPER-Timing Verification and Test Generation
    Vol: E86-A No:12  Page(s): 3038-3048

    Critical path selection is an indispensable step for AC delay test and timing validation. Traditionally, this step relies on the construction of a set of worst-case paths based upon discrete timing models. However, the assumption of discrete timing models can be invalidated by timing defects and process variation in the deep sub-micron domain, which are often continuous in nature. As a result, critical paths defined by a traditional timing analysis approach may not be truly critical in reality. In this paper, we propose using a statistical delay evaluation framework for estimating the quality of a path set. Based upon the new framework, we demonstrate how the traditional definition of a critical path set may deviate from the true critical path set in the deep sub-micron domain. To remedy the problem, we discuss improvements to the existing path selection strategies by including new objectives. We then compare statistical approaches with traditional approaches based upon experimental analysis of both defect-free and defect-injected cases.
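The gap between a discrete timing model and a statistical one can be sketched with a Monte Carlo path-delay estimate. The Gaussian per-gate variation, 10% sigma, and 99th-percentile target below are assumptions for illustration, not the paper's delay model.

```python
import random

def statistical_path_delay(gate_means, sigma_frac=0.1, trials=2000,
                           quantile=0.99, seed=0):
    # Sample the path delay with each gate delay varying (Gaussian around
    # its nominal value) and return a high quantile. A discrete timing
    # model would instead just sum the nominal delays.
    rng = random.Random(seed)
    samples = sorted(
        sum(rng.gauss(m, sigma_frac * m) for m in gate_means)
        for _ in range(trials)
    )
    return samples[int(quantile * (trials - 1))]

nominal = sum([1.0, 2.0, 1.5])                 # discrete-model path delay
q99 = statistical_path_delay([1.0, 2.0, 1.5])  # 99th-percentile delay
# Ranking paths by nominal delay alone can therefore miss the path that is
# statistically most likely to violate timing.
```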

  • Electro-Optical Properties of OCB Mode for Multi-Media Application

    Changhun LEE  Haksun CHANG  Seonhong AHN  Kunjong LEE  

     
    PAPER-LCD Technology
    Vol: E86-C No:11  Page(s): 2249-2252

    We have obtained a high-performance, low-driving-voltage OCB panel by reducing the critical voltage and by matching the retardation between the liquid crystal layer and the compensation films. Flattening the color filter layer and optimizing the rubbing process minimized the critical voltage in the panel. In addition, the retardation of the film and the LC layer was scanned to achieve a low driving voltage and high transmission. In particular, by adopting a new driving scheme, we considerably reduced the initial bend-transition time, which is known as one of the drawbacks of OCB mode. As a result, we developed a prototype 17" WXGA OCB panel with less than 5 V drive, over 90% of TN light efficiency, and a viewing angle of over 80 degrees in all directions except the rubbing direction, including color shift, as well as a high-speed response time.

  • Adaptive Rekeying for Secure Multicast

    Sandeep KULKARNI  Bezawada BRUHADESHWAR  

     
    PAPER-Security
    Vol: E86-B No:10  Page(s): 2957-2965

    In this paper, we focus on the problem of secure multicast in dynamic groups. In this problem, a group of users communicate using a shared group key. Due to the dynamic nature of these groups, to preserve secrecy, it is necessary to change the group key whenever the group membership changes. While the group key is being changed, the group communication needs to be interrupted until the rekeying is complete. This interruption is especially necessary if the rekeying is done because a user has left (or is removed). We split the rekeying cost into two parts: the cost of the critical path--where each user receives the new group key, and the cost of the non-critical path--where each user receives any other keys that it needs to obtain. We present a family of algorithms that show the tradeoff between the cost of the critical path and the cost of the non-critical path. Our solutions allow the group controller to choose the appropriate algorithm for key distribution by considering the requirements on critical and non-critical cost. In our solutions, the group controller can dynamically change the algorithm for key distribution to adapt to changing application requirements. Moreover, we argue that our solutions allow the group controller to effectively manage heterogeneous groups where users have different requirements/capabilities.
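The critical/non-critical split above can be made concrete with a simplified key-tree accounting. The balanced binary tree and the one-key critical path below are assumptions for illustration; the paper presents a whole family of algorithms trading these two costs off, not this single rule.

```python
import math

def rekey_costs(n_users):
    # In a balanced binary key tree, a user leaving forces every key on
    # its leaf-to-root path to change. Delivering the new group (root)
    # key is the critical path; the remaining changed path keys can be
    # distributed afterwards (non-critical).
    depth = math.ceil(math.log2(n_users))  # keys on a leaf-to-root path
    critical = 1                           # the new group key itself
    noncritical = depth - critical         # keys that can be sent later
    return critical, noncritical
```

The point of the trade-off is that shrinking the critical cost (so communication resumes sooner) typically pushes more key deliveries into the non-critical phase.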

  • RFI Cancellation in DMT VDSL: A Digital Frequency Domain Scheme

    Riccardo LOCATELLI  Silvia BRINI  Luca FANUCCI  Christophe Del TOSO  

     
    PAPER
    Vol: E86-A No:8  Page(s): 1993-2000

    In this paper a digital frequency domain RFI (Radio Frequency Interference) cancellation scheme for DMT (Discrete Multitone) based VDSL (Very high speed Digital Subscriber Line) systems is presented. The proposed algorithm has been optimized and characterized in terms of complexity and performance. Optimizations were also performed from an implementation point of view by deducing key dependencies among our RFI model coefficients, which let us drastically reduce the size of the memories involved. System simulations showed the effectiveness of the canceller: in terms of VDSL performance parameters such as bit rate, the optimized cancellation scheme almost totally recovers the performance degradation due to RFI.

  • A High Throughput Pipelined Architecture for Blind Adaptive Equalizer with Minimum Latency

    Masashi MIZUNO  James OKELLO  Hiroshi OCHI  

     
    PAPER
    Vol: E86-A No:8  Page(s): 2011-2019

    In this paper, we propose a pipelined architecture for an equalizer based on the Multilevel Modified Constant Modulus Algorithm (MMCMA). We also provide the correction factor that mathematically converts the proposed pipelined adaptive equalizer into an equivalent non-pipelined conventional MMCMA based equalizer. The proposed method of pipelining uses modules with 6 filter coefficients, resulting in an overall latency of a single sampling period, along the main transmission line. The basic concept of the proposed architecture is to implement the Finite Impulse Response (FIR) filter and the algorithm portion of the adaptive equalizer, such that the critical path of the whole circuit has a maximum of three complex multipliers and three adders.
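For reference, one tap-weight update of the plain (non-pipelined) constant modulus algorithm, the family MMCMA belongs to, looks as follows. The scalar toy channel and step size are illustrative assumptions; the paper's multilevel, pipelined form and its correction factor are not reproduced here.

```python
import numpy as np

def cma_update(w, x, mu=0.01, r2=1.0):
    # Equalizer output y = w^T x; the CMA error term pushes |y|^2 toward
    # the target modulus r2, and the gradient step updates the taps.
    y = np.dot(w, x)
    e = y * (np.abs(y) ** 2 - r2)
    return w - mu * e * np.conj(x), y

# Scalar toy case: repeated updates drive |y|^2 to the unit modulus.
w = np.array([2.0 + 0.0j])
for _ in range(200):
    w, y = cma_update(w, np.array([1.0 + 0.0j]), mu=0.1)
```

The pipelining problem the paper addresses is that this update uses the output of the very filter being updated, which is what creates the long critical path a direct implementation would have.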

  • Multi-Code Multi-Carrier CDMA Modulation with Adaptive Bit-Loading for VDSL Modems

    Massimo ROVINI  Giovanni VANINI  Luca FANUCCI  

     
    PAPER
    Vol: E86-A No:8  Page(s): 1985-1992

    This paper presents a new modulation scheme for Very-High-Speed Digital Subscriber Line (VDSL) modems, featuring Multi-Code Multi-Carrier Code Division Multiple Access (MC2-CDMA) modulation. The system takes advantage of both CDMA modulation and multi-carrier transmission, and the channel throughput is further increased by adopting a multi-code approach. Starting from an overview of this novel scheme, encompassing the transmitter, channel and receiver description, a brief review of the equalization techniques is also given, and a proper bit-loading algorithm is derived to find the achievable overall channel rate. The aim of this paper, besides introducing this novel scheme, is to demonstrate its suitability for a VDSL environment, where the achievable channel rate represents a real challenge. By means of a further optimisation, a general improvement of the system performance with respect to the standardized Discrete Multi Tone (DMT) modulation is also demonstrated.
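A generic greedy bit-loading pass, of the kind such a derivation builds on, can be sketched as follows. The incremental-power rule (power for b bits on a subchannel of gain g taken as (2^b - 1)·gamma/g) and the function name are textbook-style assumptions, not the paper's MC2-CDMA-specific algorithm.

```python
def greedy_bitloading(snr_gains, total_bits, gamma=1.0):
    # Hughes-Hartogs-style greedy allocation: repeatedly grant one more
    # bit to the subchannel where that bit costs the least extra power.
    bits = [0] * len(snr_gains)
    for _ in range(total_bits):
        # Incremental power to go from b to b+1 bits is 2**b * gamma / g.
        costs = [(2 ** bits[i]) * gamma / g for i, g in enumerate(snr_gains)]
        j = min(range(len(costs)), key=costs.__getitem__)
        bits[j] += 1
    return bits
```

Subchannels with better SNR naturally end up carrying more bits, which is how bit-loading raises the achievable overall channel rate.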

  • Theoretical Consideration of a Nonlinear Compensation Method for Minimizing High-Order Intermodulation Distortion in a Direct Optical FM RoF System

    Akihiko MURAKOSHI  Katsutoshi TSUKAMOTO  Shozo KOMAKI  

     
    PAPER-Photonic Links for Wireless Communications
    Vol: E86-C No:7  Page(s): 1167-1174

    This paper considers an optical FM system using an optical FM LD (laser diode) and an optical frequency discriminator (OFD), in which a nonlinear compensation scheme based on the interaction between their nonlinearities can minimize intermodulation distortion. The minimization of the third- plus fifth-order intermodulation distortion power is theoretically investigated for an optical FM radio-on-fiber system. The carrier-to-noise-plus-distortion power ratio (CNDR) is theoretically analyzed when employing an OFD whose transmission characteristic is controlled by a phase shifter. The results show that the designed receiver can achieve a higher CNDR in multicarrier transmission applications.

  • Improving Precision of the Subspace Information Criterion

    Masashi SUGIYAMA  

     
    PAPER-Neural Networks and Bioengineering
    Vol: E86-A No:7  Page(s): 1885-1895

    Evaluating the generalization performance of learning machines without using additional test samples is one of the most important issues in the machine learning community. The subspace information criterion (SIC) is one of the methods for this purpose, which is shown to be an unbiased estimator of the generalization error with finite samples. Although the mean of SIC agrees with the true generalization error even in small sample cases, the scatter of SIC can be large under some severe conditions. In this paper, we therefore investigate the causes of the degraded precision of SIC and discuss how its precision can be improved.

  • Model Selection with Componentwise Shrinkage in Orthogonal Regression

    Katsuyuki HAGIWARA  

     
    PAPER-Digital Signal Processing
    Vol: E86-A No:7  Page(s): 1749-1758

    In the problem of determining the major frequency components of a signal disturbed by noise, a model selection criterion has previously been proposed. In this paper, the criterion is extended to cover a penalized cost function that yields a componentwise shrinkage estimator, and the extended criterion is shown to give consistent model selection. A simple numerical simulation was then conducted, and it was found that the proposed criterion with an empirically estimated componentwise shrinkage estimator outperforms the original criterion.
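In orthogonal regression the least-squares coefficient for each basis vector is just an inner product, so componentwise shrinkage is easy to sketch. The nonnegative-garrote-style shrinkage factor below is one plausible choice under an orthonormal basis and known noise variance; it is not necessarily the paper's estimator.

```python
import numpy as np

def shrunken_coefficients(y, basis, noise_var):
    # basis: rows are orthonormal regressors, so the LS coefficients are
    # plain inner products. Each coefficient is then damped by a factor
    # that vanishes when the coefficient is small relative to the noise.
    ls = basis @ y
    shrink = np.maximum(0.0, 1.0 - noise_var / ls ** 2)
    return shrink * ls
```

Coefficients dominated by noise are zeroed out (an implicit model selection), while strong frequency components are only slightly shrunk.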

  • Adaptive Blind Source Separation Using a Risk-Sensitive Criterion

    Junya SHIMIZU  

     
    PAPER-Digital Signal Processing
    Vol: E86-A No:7  Page(s): 1724-1731

    An adaptive blind signal separation filter is proposed using a risk-sensitive criterion framework. This criterion adopts an exponential-type function. Hence, the proposed criterion varies the weight given to an adaptation quantity depending on the errors in the estimates: the adaptation is accelerated when the estimation error is large, and unnecessary acceleration does not occur close to convergence. In addition, since the algorithm derivation process is related to H∞ filtering, the derived algorithm is robust to perturbations and estimation errors. As a result, this method converges faster than conventional least-squares methods. The effectiveness of the new algorithm is demonstrated by simulation.
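The error-dependent acceleration described above can be sketched as an exponential gain schedule. The function, the parameter theta, and the cap are illustrative assumptions: they show the qualitative behavior, not the paper's H∞-derived recursion.

```python
import math

def risk_sensitive_step(base_mu, error, theta=1.0, cap=10.0):
    # Exponential-type weighting: the effective step size grows with the
    # squared estimation error (capped for numerical safety), so the
    # adaptation accelerates when the error is large and relaxes near
    # convergence.
    return base_mu * min(math.exp(theta * error ** 2), cap)
```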

  • An Efficient Multiple Description Coding Using Whitening Transform

    Kwang-Pyo CHOI  Keun-Young LEE  

     
    PAPER
    Vol: E86-A No:6  Page(s): 1382-1389

    This paper proposes an enhanced method for multiple description coding (MDC) using a whitening transform. MDC using a correlating transform is an error-resilient coding technique that explicitly adds correlation between two descriptions to enable the estimation of one description from the other when one is dropped in the channel. This paper proposes a method to overcome the practical problem of the conventional correlating-transform method, namely that the decoder must know the statistics of the original image. MDC using a whitening transform does not need additional statistical information to reconstruct an image, because the whitening-transformed coefficients have unit-variance statistics. Our experimental results show that the proposed method achieves a good trade-off between coding efficiency and reconstruction quality. For the 'Lena' image, the PSNR of the image reconstructed from two descriptions is about 0.93 dB higher than that of the conventional method at 1.0 bpp, and the PSNR from only one description is about 1.88 dB higher at the same rate.
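The pairwise correlating transform that MDC builds on can be sketched with the classic 45-degree rotation: two coefficients are mixed so that each description carries information about both. This is the generic correlating step only; the paper's whitening of the coefficients (which removes the decoder's need for source statistics) is not reproduced here.

```python
import numpy as np

def make_descriptions(a, b):
    # Rotate the coefficient pair by 45 degrees; d1 and d2 go to the two
    # channels, and each alone still says something about both a and b.
    d1 = (a + b) / np.sqrt(2.0)
    d2 = (a - b) / np.sqrt(2.0)
    return d1, d2

def reconstruct(d1, d2):
    # With both descriptions the rotation is simply inverted; with only
    # one, the missing coefficient is estimated from the correlation.
    return (d1 + d2) / np.sqrt(2.0), (d1 - d2) / np.sqrt(2.0)
```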

  • Vector Quantization Codebook Design Using the Law-of-the-Jungle Algorithm

    Hiroyuki TAKIZAWA  Taira NAKAJIMA  Kentaro SANO  Hiroaki KOBAYASHI  Tadao NAKAMURA  

     
    PAPER-Image Processing, Image Pattern Recognition
    Vol: E86-D No:6  Page(s): 1068-1077

    The equidistortion principle [1] has recently been proposed as a basic principle for the design of an optimal vector quantization (VQ) codebook. The equidistortion principle adjusts all codebook vectors such that they contribute equally to the quantization error. This paper introduces a novel VQ codebook design algorithm based on the equidistortion principle. The proposed algorithm is a variant of the law-of-the-jungle algorithm (LOJ), which duplicates useful codebook vectors and removes useless ones. Due to the LOJ mechanism, the proposed algorithm can establish the equidistortion condition without wasting learning steps. This is significantly effective in preventing the performance degradation caused when the initial states of the codebook vectors are ill-suited to finding an optimal codebook. Therefore, even in the case of improper initialization, the proposed algorithm can minimize the quantization error based on the equidistortion principle. The performance of the proposed algorithm is discussed through experimental results.
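The duplicate-useful/remove-useless mechanism can be sketched in one step: measure each code vector's distortion contribution, then replace the smallest contributor with a perturbed copy of the largest, pushing contributions toward the equidistortion condition. This is a simplified sketch of the mechanism only, not the paper's full learning algorithm.

```python
import numpy as np

def loj_step(codebook, data, eps=1e-3):
    # Squared distances from every sample to every code vector, nearest-
    # neighbor assignment, and per-code-vector distortion contributions.
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2) ** 2
    nearest = d.argmin(axis=1)
    contrib = np.array([d[nearest == j, j].sum()
                        for j in range(len(codebook))])
    # Duplicate the biggest contributor (slightly perturbed) in place of
    # the smallest one - the "law of the jungle" move.
    winner, loser = contrib.argmax(), contrib.argmin()
    new_cb = codebook.copy()
    new_cb[loser] = codebook[winner] + eps
    return new_cb, contrib
```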
