
Keyword Search Result

[Keyword] OMP (3945 hits)

3141-3160 hits (3945 hits)

  • A Quadriphase Sequence Pair Whose Aperiodic Auto/Cross-Correlation Functions Take Pure Imaginary Values

    Shinya MATSUFUJI  Naoki SUEHIRO  Noriyoshi KUROYANAGI  

     
    LETTER

      Vol:
    E82-A No:12
      Page(s):
    2771-2773

    This paper presents a quadriphase sequence pair whose aperiodic auto-correlation functions for non-zero shifts, and whose aperiodic cross-correlation function for any shift, take purely imaginary values. Functions for pairs of length 2^n are formulated which map the n-dimensional vector space over GF(2) to Z4. It is shown that these functions are bent for any n, i.e., their Fourier transforms all have unit magnitude.
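
    A minimal sketch of the quantity in question, assuming the standard definition of the aperiodic correlation C(tau) = sum_j a_j conj(b_{j+tau}) and the usual mapping of Z4 symbols to {1, i, -1, -i}; the example pair is hypothetical, not the pair constructed in the letter:

        import numpy as np

        def aperiodic_correlation(a, b):
            """C(tau) = sum_j a[j] * conj(b[j + tau]) over the overlap."""
            n = len(a)
            sa = 1j ** np.asarray(a)   # map Z4 symbols to {1, i, -1, -i}
            sb = 1j ** np.asarray(b)
            return [np.sum(sa[:n - t] * np.conj(sb[t:])) for t in range(n)]

        a = [0, 0, 1, 3]               # hypothetical Z4 sequences,
        b = [0, 2, 1, 1]               # for illustration only
        print(aperiodic_correlation(a, a))   # auto-correlation per shift
        print(aperiodic_correlation(a, b))   # cross-correlation per shift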

  • Chip-by-Chip Turbo Coding for DS/SS Systems

    Chen ZHENG  Takaya YAMAZATO  Masaaki KATAYAMA  Akira OGAWA  

     
    PAPER

      Vol:
    E82-A No:12
      Page(s):
    2751-2757

    Most error-correcting codes applied to DS/SS systems are such that the information data is first encoded (bit by bit) and then spread by a pseudo-noise (PN) sequence. The coding gain achieved by such systems is thus due mainly to the error-correcting code, and the redundancy introduced by the spreading code contributes nothing to it. In this paper, chip-by-chip Turbo coding for DS/SS systems is proposed: the input information data is first spread by a PN sequence and then fed into a Turbo encoder that operates at chip timing. Because the encoder operates at chip timing, a large interleaving size can be obtained, which improves performance. As a result, coding gains of more than 3.0 dB on the AWGN channel and 5.0 dB on the Rayleigh-fading channel were found even with a short information frame size.
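
    A minimal sketch of the ordering the paper proposes (spread first, then encode at chip timing); the PN sequence and the interleaver below are random placeholders standing in for the paper's Turbo encoder:

        import numpy as np

        rng = np.random.default_rng(0)
        N = 8                                # spreading factor (chips/bit)
        data = rng.integers(0, 2, size=16)   # information bits
        pn = rng.integers(0, 2, size=N)      # placeholder PN sequence

        # Spread: each bit becomes N chips (XOR with the PN sequence).
        chips = np.bitwise_xor(data[:, None], pn[None, :]).ravel()

        # The chip stream, N times longer than the bit stream, is what the
        # Turbo encoder sees, so its interleaver can be N times larger.
        interleaved = chips[rng.permutation(len(chips))]
        print(len(data), "bits ->", len(chips), "chips for the encoder")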

  • An Efficient Method for Reconfiguring the 1 1/2 Track-Switch Mesh Array

    Tadayoshi HORITA  Itsuo TAKANAMI  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E82-D No:12
      Page(s):
    1545-1553

    As VLSI technology has developed, interest in implementing an entire parallel computer system, or a significant part of one, using wafer-scale integration has been growing. The major problem in this case is the possibly drastic loss of yield and/or reliability of the system if there is no strategy for coping with faults. Various strategies for restructuring a faulty physical system into a fault-free target logical system are described in the literature [1]-[5]. In this paper, we propose an efficient approximate method that reconstructs 1 1/2 track-switch mesh arrays with faulty PEs in hardware as well as in software. A logic circuit added to each PE, together with a network connecting these circuits, is used to decide which spare PEs compensate for which faulty PEs. The hardware complexity of each circuit is much less than that of a PE, and the size of each additional circuit is constant, independent of the array size. With this dedicated hardware scheme, a built-in self-reconfigurable system that requires no host computer is realizable, and the time for reconfiguring an array becomes very short. Simulation of the method's performance shows that its reconstruction efficiency is a little less than those of the exhaustive and Shigei's methods [6],[7], but much better than that of the neural method [3]. We also compare the time complexities of reconstruction in hardware and in software, and the hardware complexity, in terms of the number of gates in the logic circuit added to each PE, against the other methods.

  • Evaluation of Two Load-Balancing Primary-Backup Process Allocation Schemes

    Heejo LEE  Jong KIM  Sung Je HONG  

     
    PAPER-Fault Tolerant Computing

      Vol:
    E82-D No:12
      Page(s):
    1535-1544

    In this paper, we present two process allocation schemes that tolerate multiple faults when the primary-backup replication method is used. The first, called the multiple backup scheme, runs multiple backup processes for each process. The second, called the regenerative backup scheme, runs only one backup process per process but regenerates backups for processes left without one after a fault, so that each primary-backup pair stays available. For both schemes, we propose heuristic process allocation methods that balance loads despite the occurrence of faults, and we evaluate and compare their performance using simulation. We then analyze the reliability of the two schemes based on their fault-tolerance capability: we find the degree of fault tolerance of each scheme and derive its reliability using Markov chains. The comparison indicates that the regenerative single-backup allocation scheme is more suitable than the multiple-backup allocation scheme.
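
    A minimal sketch of a load-balancing primary-backup placement in the spirit described above; the greedy rule and the backup load weight are our assumptions, not the paper's heuristics:

        def allocate(num_processes, num_nodes, backup_weight=0.5):
            """Place each primary on the least-loaded node and its backup
            on the least-loaded remaining node."""
            load = [0.0] * num_nodes
            placement = []
            for _ in range(num_processes):
                primary = min(range(num_nodes), key=lambda n: load[n])
                load[primary] += 1.0
                backup = min((n for n in range(num_nodes) if n != primary),
                             key=lambda n: load[n])
                load[backup] += backup_weight  # assumed cheaper than a primary
                placement.append((primary, backup))
            return placement, load

        placement, load = allocate(num_processes=7, num_nodes=3)
        print(placement)   # (primary_node, backup_node) per process
        print(load)        # resulting per-node load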

  • Experimental Characterization of the Feedback Induced Noise in Self-Pulsing Lasers

    Minoru YAMADA  Yasuyuki ISHIKAWA  Shunsuke YAMAMURA  Mitsuharu KIDU  Atsushi KANAMORI  Youichi AOKI  

     
    PAPER-Quantum Electronics

      Vol:
    E82-C No:12
      Page(s):
    2241-2247

    The conditions under which optical feedback noise is generated in self-pulsing lasers were experimentally examined. The noise characteristics were determined by changing the operating power, the feedback distance, and the feedback ratio for several types of self-pulsing lasers. The idea of an effective modulation index was introduced to evaluate the generating conditions in a uniform manner based on mode competition theory. The validity of this idea for noise generation was confirmed experimentally.

  • Semi-Automatic Tool for Aligning a Parameterized CAD Model to Stereo Image Pairs

    Chu-Song CHEN  Kuan-Chung HUNG  Yi-Ping HUNG  Lin-Lin CHEN  Chiou-Shann FUH  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E82-D No:12
      Page(s):
    1582-1588

    Fully automatic reconstruction of 3D models from images is well known to be a difficult problem. For many applications, however, a limited amount of human assistance is allowed and can greatly reduce the complexity of the 3D reconstruction problem. In this paper, we present an easy-to-use method for aligning a parameterized 3D CAD model to images taken from different views, from which the shape parameters of the model can be recovered accurately. Our work is composed of two parts. In the first part, we developed an interactive tool that allows the user to associate features in the CAD model with features in the 2D images; the tool is designed for both efficiency and accuracy. In the second part, the 3D information extracted from the different stereo views is integrated by an optimization technique to obtain accurate shape parameters. Experimental results demonstrate the accuracy and usefulness of the recovered CAD model.

  • Precise Write-Time Compensation for Nonlinear Transition Shift in Magnetic Tape Recording Using a d=1 RLL Code

    Toshihiro UEHARA  Keigo MAJIMA  Shoichiro OGAWA  Junji NUMAZAWA  

     
    PAPER

      Vol:
    E82-C No:12
      Page(s):
    2234-2240

    We propose precise write-time compensation for nonlinear transition shift in magnetic tape recording using a d=1 run-length-limited (RLL) code as the channel modulation. In this write-time compensation approach, write-current transitions that are preceded by another transition within the previous 3 bits are shifted in time so as to minimize the transition shift of the readback signal. First, we precisely measured the nonlinear transition shift using a VCR. Next, based on these measurements, we simulated the effects of write-time compensation and verified them in recording experiments with a VCR. The results show that, when optimum read equalization is applied to the readback signal, this write-time compensation increases the eye height and eye width while improving the byte error rate by about two orders of magnitude.
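
    A minimal sketch of write precompensation of this kind: each transition is shifted by an amount looked up from the pattern of transitions in the preceding bit cells. The shift values below are hypothetical; the paper derives its values from VCR measurements:

        def precompensate(transitions, shift_table):
            """transitions: 0/1 flag per bit cell; returns a
            (bit_index, time_shift) pair for each written transition."""
            out = []
            for i, t in enumerate(transitions):
                if not t:
                    continue
                history = tuple(transitions[max(0, i - 3):i])  # last 3 cells
                out.append((i, shift_table.get(history, 0.0)))
            return out

        # Hypothetical shifts, in fractions of a bit cell:
        table = {(1, 0, 1): 0.12, (0, 1, 1): 0.08, (0, 0, 1): 0.05}
        print(precompensate([1, 0, 1, 1, 0, 0, 1], table))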

  • High-Availability Scheme Using Data Partitioning for Cluster Systems

    Yuzuru MAYA  Akira OHTSUJI  

     
    PAPER-Computer Systems

      Vol:
    E82-D No:11
      Page(s):
    1457-1465

    For cluster systems consisting of multiple nodes and a shared server pair (an on-line server and a backup server), we propose a hot-standby scheme for the shared servers. In this scheme, the shared servers hold both user data and control data, and the on-line server sends only the control data to the backup when it receives an update command. When the on-line shared server fails, the backup reconstructs the shared data from the latest control data sent by the on-line server and the user data sent by each node. We evaluated the system recovery time and the performance overhead of the hot-standby scheme: it shortens the system recovery time to 30 seconds and reduces the performance overhead to 2%.
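
    A minimal sketch of the recovery flow described above, under the assumption (ours, for illustration) that a control record tells the backup which node holds the current user data for each item:

        class BackupServer:
            def __init__(self):
                self.control = {}          # mirrored on every update command

            def on_update(self, key, owner_node):
                self.control[key] = owner_node

            def take_over(self, nodes):
                """Rebuild shared data from control data plus the user
                data sent by each node after the on-line server fails."""
                return {key: nodes[owner][key]
                        for key, owner in self.control.items()}

        backup = BackupServer()
        backup.on_update('x', 'node1')     # only control data is mirrored
        nodes = {'node1': {'x': 'user data for x'}}
        print(backup.take_over(nodes))     # {'x': 'user data for x'}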

  • A Compact Smith-Purcell Free-Electron Laser with a Bragg Cavity

    Tipyada THUMVONGSKUL  Akimasa HIRATA  Toshiyuki SHIOZAWA  

     
    PAPER-Electromagnetic Theory

      Vol:
    E82-C No:11
      Page(s):
    2094-2100

    The growth and saturation characteristics of an electromagnetic (EM) wave in a Smith-Purcell free-electron laser (FEL) with a Bragg cavity are investigated in detail with the aid of numerical simulation based on the fluid model of the electron beam. To analyze the problem, a two-dimensional (2-D) model of the Smith-Purcell FEL is considered, consisting of a planar relativistic electron beam and a parallel-plate metallic waveguide with a uniform grating carved on one plate. For confinement and extraction of EM waves, a Bragg cavity is formed by a pair of reflector gratings with proper spatial period and length, connected at the two ends of the waveguide. The results of numerical simulation show that a compact Smith-Purcell FEL can be realized by using a Bragg cavity composed of metallic gratings.

  • A Compositional Approach for Constructing Communication Services and Protocols

    Bhed Bahadur BISTA  Kaoru TAKAHASHI  Norio SHIRATORI  

     
    PAPER

      Vol:
    E82-A No:11
      Page(s):
    2546-2557

    The complexity of designing communication protocols has led researchers to develop various techniques for designing and verifying protocols. One of the most important is the compositional technique, in which a large and complex protocol is designed and verified by composing small and simple protocols that are easy to handle, design, and verify. Unlike other compositional approaches, we propose compositional techniques for simultaneously composing service specifications and protocol specifications, based on the Formal Description Technique (FDT) LOTOS. The proposed techniques cover alternative, sequential, interrupt, and parallel composition of service specifications and protocol specifications. The composite service specification and the composite protocol specification preserve the original behaviour and the correctness properties of the individual specifications. We use weak bisimulation equivalence (≈) to represent the correctness relation between a service specification and a protocol specification: when a protocol specification is weak-bisimulation equivalent to a service specification, the protocol satisfies all the logical properties of a communication protocol and provides the services specified in the service specification.

  • Improving Dictionary-Based Code Compression in VLIW Architectures

    Sang-Joon NAM  In-Cheol PARK  Chong-Min KYUNG  

     
    PAPER

      Vol:
    E82-A No:11
      Page(s):
    2318-2324

    Reducing code size is crucial in embedded systems as well as in high-performance systems to overcome the communication bottleneck between memory and CPU, especially with VLIW (Very Long Instruction Word) processors, which require high-bandwidth instruction prefetching. This paper presents a new approach to dictionary-based code compression in VLIW processor-based systems that uses isomorphism among instruction words. After dividing each instruction word into two groups, an opcode group and an operand group, the proposed compression algorithm is applied to each group for maximal code compression. Frequently used instruction words are extracted from the original code and mapped into two dictionaries, an opcode dictionary and an operand dictionary. On the SPEC95 benchmarks, the proposed technique achieves average code compression ratios of 63%, 69%, and 71% in 4-issue, 8-issue, and 12-issue VLIW architectures, respectively.
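
    A minimal sketch of the split-dictionary idea, with the field encoding and dictionary sizes chosen for illustration only:

        from collections import Counter

        def build_dictionary(fields, size):
            """Keep the `size` most frequent values of one field."""
            return [v for v, _ in Counter(fields).most_common(size)]

        def compress(fields, dictionary):
            """A short dictionary index for frequent values, the raw
            (escaped) value otherwise."""
            index = {v: i for i, v in enumerate(dictionary)}
            return [('idx', index[f]) if f in index else ('raw', f)
                    for f in fields]

        opcodes  = ['add', 'ld', 'add', 'st', 'add', 'ld']
        operands = ['r1,r2', 'r3', 'r1,r2', 'r4', 'r5,r6', 'r3']
        print(compress(opcodes, build_dictionary(opcodes, size=2)))
        print(compress(operands, build_dictionary(operands, size=2)))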

  • Scattered Signal Enhancement Algorithm Applied to Radar Target Discrimination Schemes

    Diego-Pablo RUIZ  Antolino GALLEGO  Maria-Carmen CARRION  

     
    PAPER-Antennas and Propagation

      Vol:
    E82-B No:11
      Page(s):
    1858-1866

    A procedure for radar target discrimination is presented in this paper. The scheme includes enhancement of late-time noisy scattering data by the proposed signal-processing algorithm, followed by a decision procedure using previously known resonance annihilation filters. The signal-processing stage is specifically adapted to scattering signals and makes use of results from the singularity expansion method: the signal is reconstructed from the SVD of a data matrix, with a suitable choice of the number of singular vectors retained. To justify the inclusion of this stage, the procedure is shown to preserve the signal characteristics necessary to identify the scattered response. Simulation results clearly reveal a significant improvement due to the proposed stage; the improvement becomes especially important when the noise level is high or when the targets to be discriminated (five regular polygonal loops) have similar geometry.
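
    A minimal sketch of SVD-based enhancement of this kind (our illustration, with a standard Hankel embedding and a synthetic damped sinusoid standing in for real scattering data):

        import numpy as np

        def svd_enhance(x, rows, rank):
            """Truncate the SVD of a Hankel data matrix built from x and
            average anti-diagonals back into a 1-D signal."""
            n = len(x)
            cols = n - rows + 1
            H = np.array([x[i:i + cols] for i in range(rows)])
            U, s, Vt = np.linalg.svd(H, full_matrices=False)
            Hr = (U[:, :rank] * s[:rank]) @ Vt[:rank]  # rank-r approximation
            y = np.zeros(n)
            count = np.zeros(n)
            for i in range(rows):      # entry (i, j) corresponds to x[i + j]
                y[i:i + cols] += Hr[i]
                count[i:i + cols] += 1
            return y / count

        t = np.linspace(0, 1, 200)
        clean = np.exp(-3 * t) * np.sin(40 * t)        # one damped resonance
        noisy = clean + 0.2 * np.random.default_rng(1).standard_normal(200)
        enhanced = svd_enhance(noisy, rows=50, rank=2)  # rank 2 per resonance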

  • Hardware Synthesis from C Programs with Estimation of Bit Length of Variables

    Osamu OGAWA  Kazuyoshi TAKAGI  Yasufumi ITOH  Shinji KIMURA  Katsumasa WATANABE  

     
    PAPER

      Vol:
    E82-A No:11
      Page(s):
    2338-2346

    In hardware synthesis from high-level languages such as C, the optimization quality of the compiler has a great influence on the area and speed of the synthesized circuits. Among the hardware-oriented optimizations required in such compilers, minimizing the bit length of the data paths is one of the most important. In this paper, we propose an algorithm that estimates the necessary bit length of each variable for this purpose. The algorithm analyzes the control/data-flow graph translated from the C program and decides the bit length of each variable. In several experiments, the bit lengths of variables were reduced by half with respect to their declared lengths. The method is effective not only for reducing circuit area but also for reducing the delay of operation units such as adders.
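
    A minimal sketch (ours, not the paper's algorithm) of range-based bit-length estimation: propagate value intervals along the dataflow and size each variable to the widest value it can hold:

        def bits_for(lo, hi):
            """Conservative bit length for an integer in [lo, hi]."""
            n = max(abs(lo), abs(hi)).bit_length()
            return n + 1 if lo < 0 else max(n, 1)   # sign bit if needed

        def add(a, b):                 # interval addition
            return (a[0] + b[0], a[1] + b[1])

        def mul_const(a, c):           # interval times a constant
            lo, hi = a[0] * c, a[1] * c
            return (min(lo, hi), max(lo, hi))

        # e.g. for C code:  x in [0, 100];  y = 4 * x;  z = y + y;
        x = (0, 100)
        y = mul_const(x, 4)            # [0, 400]
        z = add(y, y)                  # [0, 800]
        for name, iv in [('x', x), ('y', y), ('z', z)]:
            print(name, iv, '->', bits_for(*iv), 'bits')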

  • Time Complexity Analysis of the Minimal Siphon Extraction Problem of Petri Nets

    Masahiro YAMAUCHI  Toshimasa WATANABE  

     
    PAPER

      Vol:
    E82-A No:11
      Page(s):
    2558-2565

    Given a Petri net N = (P, T, E), a siphon is a set S of places such that the set of input transitions to S is included in the set of output transitions from S. Concerning the extraction of one or more minimal siphons containing a given specified set Q of places, the paper shows several results on polynomial-time solvability and NP-completeness, mainly for the case |Q| ≤ 1.
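
    A minimal sketch of the siphon condition, with our own net encoding: S is a siphon iff every transition with an arc into S also has an arc out of S, i.e. the preset of S is contained in its postset:

        def preset(S, net):     # transitions with an arc into a place of S
            return {t for (t, p) in net['t_to_p'] if p in S}

        def postset(S, net):    # transitions with an arc from a place of S
            return {t for (p, t) in net['p_to_t'] if p in S}

        def is_siphon(S, net):
            return preset(S, net) <= postset(S, net)

        # Tiny cyclic net: t1 -> p1 -> t2 -> p2 -> t1
        net = {'t_to_p': {('t1', 'p1'), ('t2', 'p2')},
               'p_to_t': {('p1', 't2'), ('p2', 't1')}}
        print(is_siphon({'p1', 'p2'}, net))  # True
        print(is_siphon({'p1'}, net))        # False: t1 feeds p1 from outside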

  • ECL-Compatible Low-Power-Consumption 10-Gb/s GaAs 8:1 Multiplexer and 1:8 Demultiplexer

    Nobuhide YOSHIDA  Masahiro FUJII  Takao ATSUMO  Keiichi NUMATA  Shuji ASAI  Michihisa KOHNO  Hirokazu OIKAWA  Hiroaki TSUTSUI  Tadashi MAEDA  

     
    PAPER-Low Power-Consumption RF ICs

      Vol:
    E82-C No:11
      Page(s):
    1992-1999

    An emitter-coupled logic (ECL) compatible, low-power GaAs 8:1 multiplexer (MUX) and 1:8 demultiplexer (DEMUX) for 10-Gb/s optical communication systems have been developed. In order to decrease the power consumption and maximize the timing margin, we estimated the power consumption of direct-coupled FET logic (DCFL) and source-coupled FET logic (SCFL) circuits in terms of D-type flip-flop (D-FF) operating speed and duty-ratio variation. Based on these results, we used SCFL circuits in the clock-generating circuit and in the circuits operating at 10 Gb/s, and DCFL circuits in the circuits operating below 5 Gb/s. These ICs, mounted in ceramic packages, operate at up to 10 Gb/s with power consumptions of 1.2 W for the 8:1 MUX and 1.0 W for the 1:8 DEMUX, the lowest yet reported for a 10-Gb/s 8:1 MUX and 1:8 DEMUX.

  • Almost Sure and Mean Convergence of Extended Stochastic Complexity

    Masayuki GOTOH  Toshiyasu MATSUSHIMA  Shigeichi HIRASAWA  

     
    PAPER-Source Coding/Image Processing

      Vol:
    E82-A No:10
      Page(s):
    2129-2137

    We analyze the extended stochastic complexity (ESC) proposed by K. Yamanishi. The ESC can be applied to learning algorithms in both on-line prediction and batch-learning settings. Yamanishi derived an upper bound on ESC holding uniformly over all data sequences, and a bound on the asymptotic expectation of ESC; however, that work concentrates mainly on worst-case performance, and no lower bound had been derived. In this paper, we show some interesting properties of ESC that parallel Bayesian statistics, namely the Bayes rule and asymptotic normality, and use them to derive the asymptotic formula of ESC, within an error of o(1), in the sense of both almost-sure and mean convergence.

  • A Wide-Viewing-Angle π Cell Compensated with a Discotic Film

    Hiroyuki MORI  

     
    PAPER

      Vol:
    E82-C No:10
      Page(s):
    1787-1791

    We have realized excellent viewing-angle characteristics for the π cell by combining a discotic negative-birefringence film, which has a hybrid alignment structure, with a positive a-plate. The negative birefringence of the film completely compensates the positive birefringence of the π-cell liquid crystal in the dark state. The role of a c-plate, which would normally accompany the a-plate to suppress light leakage through the crossed polarizers at oblique incidence angles, is played by the vertically aligned component of the π-cell liquid crystal. Taking into account its fast electrooptical response, which is one order of magnitude faster than that of the twisted-nematic liquid-crystal display, the π cell is one of the most promising liquid-crystal-display modes.

  • A Bit Rate Reduction Technique for Vector Quantization Image Data Compression

    Yung-Gi WU  Shen-Chuan TAI  

     
    PAPER-Source Coding/Image Processing

      Vol:
    E82-A No:10
      Page(s):
    2147-2153

    In this paper, a technique for reducing the bit rate of Vector Quantization (VQ) image coding is developed. Our method exploits the correlation between neighbouring indices to reduce the bits needed to transmit the encoded indices, using the Discrete Cosine Transform (DCT) as the decorrelating tool. The codewords of a codebook generated by the conventional LBG algorithm are in no particular order, so the indices chosen for adjacent blocks are essentially randomly distributed. However, because adjacent regions of a natural image are homogeneous, we rearrange the codebook according to a predefined weighting criterion so that neighbouring indices also reflect this homogeneity. DCT is then used to compress the VQ indices; thanks to the homogeneity among adjacent indices after the codebook permutation, DCT achieves better compression efficiency. Since DCT quantization introduces distortion, which yields erroneously decoded indices, we add an index-residue compensation method that corrects decoded indices with large deviations, reducing the unpleasant visual effects they would otherwise cause. Experiments on Lena and other natural gray-scale images demonstrate these claims: simulation results show that for some images our method saves more than 50% of the bit rate while preserving the same reconstructed image quality as the standard VQ coding scheme.
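
    A minimal sketch of the two steps (our illustration: mean intensity stands in for the paper's weighting criterion, and coefficient rounding stands in for its quantizer):

        import numpy as np
        from scipy.fft import dctn, idctn

        def reorder_codebook(codebook):
            """Sort codewords by mean intensity so that similar blocks
            get nearby indices, and return the index remap."""
            order = np.argsort(codebook.mean(axis=1))
            remap = np.empty(len(order), dtype=int)
            remap[order] = np.arange(len(order))
            return codebook[order], remap

        rng = np.random.default_rng(2)
        codebook = rng.random((64, 16))             # 64 codewords, 4x4 blocks
        indices = rng.integers(0, 64, size=(8, 8))  # VQ index map of an image

        codebook, remap = reorder_codebook(codebook)
        smooth = remap[indices]                     # permuted index map
        coeff = np.round(dctn(smooth.astype(float), norm='ortho'))
        decoded = np.rint(idctn(coeff, norm='ortho')).astype(int)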

  • Simulation Algorithms among Enhanced Mesh Models

    Susumu MATSUMAE  Nobuki TOKURA  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E82-D No:10
      Page(s):
    1324-1337

    In this paper, we present simulation algorithms among enhanced mesh models; the enhanced mesh models here include the reconfigurable mesh and the mesh with multiple broadcasting. A reconfigurable mesh (RM) is a processor array consisting of processors arranged in a 2-dimensional grid with a reconfigurable bus system. The bus system can be used to dynamically obtain various interconnection patterns among the processors during the execution of a program. A horizontal-vertical RM (HV-RM) is obtained from the general RM model by restricting the network topology to patterns in which each bus segment lies along a row or a column. A mesh with multiple broadcasting (MWMB) is an enhanced mesh that has an additional broadcasting bus for every row and column. We present two algorithms: 1) an algorithm that simulates an HV-RM of size n×n, time-optimally in Θ(n) time, on an MWMB of size n×n, and 2) an algorithm that simulates a RM of size n×n in Θ(log² n) time on an HV-RM of size n×n. Both algorithms use a constant amount of storage in each processor. Furthermore, we show that a RM of size n×n can be simulated in Θ((n/m)² log n log m) time on an HV-RM of size m×m, and in Θ((n/m)² m log n log m) time on an MWMB of size m×m (m < n). These simulations use Θ((n/m)²) storage in each processor, which is optimal.

  • Evolutional Design and Training Algorithm for Feedforward Neural Networks

    Hiroki TAKAHASHI  Masayuki NAKAJIMA  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E82-D No:10
      Page(s):
    1384-1392

    In pattern recognition using neural networks, it is very difficult for researchers or users to design an optimal neural network architecture for a specific task. Almost any architecture can reach some level of recognition ratio, but it is difficult to obtain, analytically, an architecture that is optimal for a specific task in recognition ratio and training effectiveness. In this paper, an evolutional method for training and designing feedforward neural networks is proposed. In the proposed method, each neural network is defined as an individual, and networks with the same architecture constitute a species. Individuals are evaluated by normalized M.S.E. (Mean Square Error), which represents the performance of a network on the training patterns, and architectures evolve according to an evolution rule proposed here. Architectures, in other words species, are evaluated by a second set of criteria, distinct from the criteria for individuals: these assess the best individual in the species and the species' speed of evolution, and the population size of a species is increased or decreased accordingly. The evolution rule generates slightly different network architectures from superior species, so the proposed method can generate a variety of neural network architectures. The design and training of networks for classifying simple 3×3 and 4×4 pixel patterns containing vertical, horizontal, and oblique lines, and for handwritten KATAKANA recognition, are presented, and the efficiency of the proposed method is discussed.
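
    A minimal sketch (ours, much simplified) of such an evolutionary loop: an architecture is a tuple of hidden-layer sizes, and the score function is a placeholder for training a network and measuring its normalized MSE:

        import random

        def train_and_score(arch):
            """Placeholder for: train a network with architecture `arch`
            and return its normalized MSE (lower is better)."""
            return random.Random(hash(arch)).random() / sum(arch)

        def mutate(arch):
            """Grow or shrink one hidden layer to get a new architecture."""
            i = random.randrange(len(arch))
            return tuple(max(1, s + random.choice([-2, -1, 1, 2]))
                         if j == i else s for j, s in enumerate(arch))

        species = [(8,), (4, 4)]                 # initial architectures
        for generation in range(10):
            species.sort(key=train_and_score)    # fittest first
            species = species[:2] + [mutate(species[0])]
        print('best architecture:', min(species, key=train_and_score))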
