Jun'ichi SAKAGUCHI Tsutomu HOSHINO Kensaku FUJII Juro OHGA
This paper introduces an acoustic echo canceller system implemented on a 16-bit fixed-point DSP (Analog Devices ADSP-2181). The experimental system uses the tri-quantized-x individually normalized least mean square (INLMS) algorithm, which suffers little degradation of its convergence property under fixed-point processing. The system also applies a small step gain to the algorithm to prevent double-talk from increasing the estimation error. Such a small step gain naturally reduces the convergence speed; the system compensates for this reduction by applying a block length adjustment technique to the algorithm. This technique enables the adaptive filter coefficients to be updated continuously even when the reference signal power is low. The experimental system thus keeps the echo return loss enhancement (ERLE) high under double-talk.
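The mechanism the abstract builds on can be illustrated with a generic normalized-LMS coefficient update using a small step gain. This is a sketch only: it is not the tri-quantized-x INLMS itself, and floating point stands in for the DSP's fixed-point arithmetic.

```python
import numpy as np

def nlms_update(w, x, d, mu=0.05, eps=1e-8):
    """One normalized LMS coefficient update.

    w   : current adaptive-filter coefficients (length-N array)
    x   : most recent N reference samples (newest first)
    d   : microphone sample (echo plus any near-end speech)
    mu  : step gain -- kept small so that double-talk perturbs
          the echo-path estimate only slightly
    eps : regularizer avoiding division by a near-zero power
    Returns the updated coefficients and the residual error.
    """
    e = d - np.dot(w, x)                          # residual echo
    w_new = w + mu * e * x / (np.dot(x, x) + eps) # power-normalized step
    return w_new, e
```

With a small `mu`, convergence is slow but robust; the block length adjustment described in the abstract compensates by continuing to accumulate updates when the reference power is low.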
Ken-ichi KITAYAMA Hideyuki SOTOBAYASHI Naoya WADA
Optical code division multiplexing (OCDM) is a class of multiplexing techniques distinct from time division multiplexing (TDM), wavelength division multiplexing (WDM) and space division multiplexing (SDM). OCDM was first proposed in the mid-1970s. It has long remained outside the mainstream of the optical communications research community; however, the possible scarcity of wavelength resources in future photonic networks, the simple access protocol, and the versatility of optical codes have motivated the recent growth of OCDM research activities. In this paper, first, the fundamentals of the OCDM concept are presented, highlighting optical encoding and optical time-gate detection, which realize time spreading and despreading. Next, current OCDM research activities are reviewed, focusing particularly on optical implementations and proof-of-concept experiments. The review covers three categories: high-bit-rate point-to-point transmission, gigabit multiple access, and optical path networks using optical codes. Finally, future issues are briefly summarized.
Junji HIROKANE Yoshiteru MURAKAMI Akira TAKAHASHI Shigeo TERASHIMA
A standard for Advanced Storage Magneto Optical (AS-MO), providing a 6-Gbyte capacity on a 120-mm-diameter single-sided disk, was established using a magnetically induced super-resolution readout method. The transition from in-plane to perpendicular magnetization in the exchange-coupled readout layer (GdFeCo) and the in-plane magnetization mask layer (GdFe) of the AS-MO disk has been investigated using the noncontinuous model. The readout resolution was sensitive to the thickness of the readout layer. To evaluate the readout characteristics of AS-MO disks, a simulation using a micromagnetics model was performed and the readout layers were designed. The readout characteristics of the AS-MO disk are improved by making the readout layer thinner.
Seng Ghee TAN Thomas LIEW Teck Ee LOH Teck Seng LOW
Both frequency- and time-domain analyses of glide signals from a PZT glide-slider flying over a laser zone-textured (LZT) thin-film disk medium were used to determine the slider vibration at a small disk-slider clearance. Slider vibration was found to be particularly dependent on the uniformly placed laser bumps and on the air-bearing stiffness over the LZT medium. We found that a high density of small, pointed laser bumps (10X) has a more distinct impact on the airflow around the slider than large, jagged-rim, crater-like laser bumps (1X). We therefore investigated the effect of laser bump density on slider vibration, and found that a marginally higher laser bump density (3X versus 2X) results in higher slider vibration. While resonant vibration has been a major glide problem, the effects of laser bump density have also recently become important in the face of ultralow glide heights of 0.5 µ" (12 nm). Its influence can be clearly observed when the disk-slider clearance becomes very small. At such an ultrasmall clearance, even minimal slider vibration can be detrimental to the head-disk interface. Taking into account the various contributions to slider vibration and the possible damage to the head-disk interface, it is clear that the optimization of laser bump design should go beyond just the glide height and the coefficient of stiction. It should take into account the effects of laser bump height, density and spatial distribution on vibration-induced flying height variation while maintaining a low glide height and coefficient of stiction. An ideal LZT medium should therefore have a low bump height to enable a low glide height, i.e., 0.5 µ" (12 nm), but specific bump shapes and sufficient density to achieve low stiction. Laser bump density should, however, be controlled to moderate its effect on slider vibration and possible disk-slider collisions.
Thermal stability of anisotropic and isotropic Co alloy thin-film media is investigated. The orientation ratio of CoCrTa(Pt)/Cr media was controlled by the mechanical texture of the NiP/Al substrates. Bulk magnetic properties, delta M curves and time decay of magnetization in the circumferential and radial directions were measured. The maximum magnetic viscosity coefficient calculated from the time decay of magnetization in the circumferential direction was higher than that in the radial direction for a mechanically textured sample, while it was similar in both directions for a non-textured sample. The magnetic viscosity coefficient in the circumferential direction is smaller than that in the radial direction when the reverse field is in the range of the demagnetization field for thin-film recording media. This implies that an anisotropic sample (namely, a sample with a high orientation ratio) will be more thermally stable when it is not exposed to a large external magnetic field.
As many research works build on previous results, my paper, "The Integrated Scheduling and Allocation of High-Level Test Synthesis," makes use of some techniques by T. Kim. However, I did not state explicitly that parts of my work are based on Kim's approach, although I referred to his paper. I would like to express my deep apology to Kim for not having emphasized his contribution to my work. My intention was not to steal Kim's ideas. I would like to emphasize the following difference.
Taku YOSHIOKA Shin ISHII Minoru ITO
This article discusses automatic strategy acquisition for the game "Othello" based on a reinforcement learning scheme. In our approach, a computer player, which initially knows only the game rules, becomes stronger after playing several thousand games against another player. In each game, the computer player refines its evaluation function for the game state, which is achieved by min-max reinforcement learning (MMRL), a simple learning scheme that uses the min-max strategy. Since the state space of Othello is huge, we employ a normalized Gaussian network (NGnet) to represent the evaluation function. As a result, the computer player becomes strong enough to beat a player employing a heuristic strategy. This article experimentally shows that, in our Othello task, MMRL is better than TD(0) and the NGnet is better than a multi-layered perceptron.
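The min-max backup at the heart of MMRL can be sketched in tabular form. This is an illustrative sketch only: the paper itself represents the evaluation function with an NGnet, not a table, and the state and move names below are hypothetical.

```python
def mmrl_target(V, own_moves):
    """TD target under the min-max strategy: the player picks the move
    maximizing its value, assuming the opponent then replies with the
    move minimizing it.  own_moves maps each of the player's moves to
    the list of states reachable after the opponent's reply."""
    return max(min(V[s] for s in replies) for replies in own_moves.values())

def mmrl_update(V, state, own_moves, alpha=0.1):
    """Move V[state] a fraction alpha toward the min-max target."""
    V[state] += alpha * (mmrl_target(V, own_moves) - V[state])
    return V[state]
```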
A new class of nonlinear filters called Vector Median Rational Hybrid Filters (VMRHF) for multispectral image processing is introduced and applied to color image filtering problems. These filters are based on Rational Functions (RF). The VMRHF is a two-stage filter that exploits the features of the vector median filter and those of the rational operator. The filter output is the result of a vector rational function operating on the outputs of three sub-filters: two vector median (VMF) sub-filters and one center-weighted vector median filter (CWVMF), used here for their desirable properties, such as edge and detail preservation and accurate chromaticity estimation. Experimental results show that the new VMRHF outperforms a number of widely known nonlinear filters for multispectral image processing, such as the Vector Median Filter (VMF) and Distance Directional Filters (DDF), with respect to all criteria used.
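The vector median operation used by the sub-filters, picking the window vector whose summed distance to all other window vectors is smallest, can be sketched as follows; the rational-function combining stage is not shown.

```python
import numpy as np

def vector_median(window, p=2):
    """Vector median filter: return the pixel in the window whose
    summed L_p distance to every other pixel is smallest.

    window : (n, 3) array of color vectors (e.g. RGB)
    p      : norm order for the inter-vector distances
    """
    # pairwise distance matrix between all vectors in the window
    d = np.linalg.norm(window[:, None, :] - window[None, :, :], ord=p, axis=2)
    return window[np.argmin(d.sum(axis=1))]
```

Because the output is always one of the input vectors, the filter rejects impulsive color outliers without introducing new (chromatically wrong) values.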
Kwame Osei BOATENG Hiroshi TAKAHASHI Yuzo TAKAMATSU
In our previous paper, we presented a path-tracing method for diagnosing multiple gate delay faults in combinational circuits. In this paper, we propose an improved method that uses the ambiguous delay model, which makes provision for parameter variations in the IC manufacturing process. To make the method effective, we propose a timed 8-valued simulation and some new diagnostic rules. Furthermore, we introduce a preparatory process that speeds up diagnosis. At the end of diagnosis, additional information from the results of this preparatory process makes it possible to distinguish between non-existent faults and undiagnosed faults.
Tadayoshi HORITA Itsuo TAKANAMI
As VLSI technology has developed, interest in implementing an entire parallel computer system, or a significant part of one, using wafer-scale integration is growing. The major problem in this case is the possibility of drastically low yield and/or reliability of the system if there is no strategy for coping with faults. Various strategies for restructuring a faulty physical system into a fault-free target logical system are described in the literature [1]-[5]. In this paper, we propose an efficient approximate method that can reconstruct 1 1/2 track-switch mesh arrays with faulty PEs in hardware as well as software. A logical circuit added to each PE and a network connecting these circuits are used to decide which spare PEs compensate for faulty PEs. The hardware complexity of each circuit is much less than that of a PE; the size of each additional circuit is constant, independent of the array size. The exclusive hardware scheme makes a built-in self-reconfigurable system realizable without a host computer, and the time for reconfiguring arrays becomes very short. Simulation of the method's performance shows that the reconstruction efficiency of our algorithm is slightly less than those of the exhaustive and Shigei's methods [6],[7], but much better than that of the neural method [3]. We also compare the time complexities of reconstruction in hardware and software, and the hardware complexity in terms of the number of gates in the logical circuit added to each PE, with those of the other methods.
Kazuo TANADA Hiroshi KUBO Atsushi IWASE Makoto MIYAKE
This paper proposes an adaptive list-output Viterbi equalizer (LVE) with fast compare-select operation, in order to achieve a good trade-off between bit error rate (BER) performance and processing speed. An LVE, which keeps several survivors for each state, has good BER performance in the presence of wide-spread intersymbol interference. However, the LVE suffers from large processing delay due to its sorting-based compare-select operation. The proposed adaptive LVE greatly reduces its processing delay, because it simplifies compare-select operation. In addition, computer simulation shows that the proposed LVE causes only slight BER performance degradation due to its simplification of compare-select operation. Thus, the proposed LVE achieves better BER performance than decision-feedback sequence estimation (DFSE) without an increase in processing delay.
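The compare-select bottleneck the abstract describes, keeping the best few survivors per state, can be sketched generically: a full sort of K candidates costs O(K log K), while a partial selection keeps the cost closer to O(K log m). This sketch shows only the generic selection; the proposed LVE's actual simplified rule is a further simplification not reproduced here.

```python
import heapq

def select_survivors(candidates, m):
    """Keep the m best (lowest path-metric) survivors for one state.

    candidates : list of (metric, path) pairs extended into this state
    m          : number of survivors retained per state (list size)
    Uses a partial selection instead of sorting the whole list.
    """
    return heapq.nsmallest(m, candidates, key=lambda c: c[0])
```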
Chou-Chen WANG Chin-Hsing CHEN Chaur-Heh HSIEH
Image coding with vector quantization (VQ) suffers from several defects, including edge degradation and high encoding complexity. This paper presents an edge-preserving coding system based on VQ to overcome these defects. A signal processing unit first classifies image blocks into a low-activity or a high-activity class. A high-activity block is then decomposed into a smoothing factor, a bit-plane and a smoother (lower-variance) block. These outputs can be encoded more efficiently by VQ with lower distortion. A set of visual patterns is used to encode the bit-planes by binary vector quantization. We also develop a modified search-order coding to further reduce the redundancy of the quantization indexes. Simulation results show that the proposed algorithm achieves much better perceptual quality with a higher compression ratio and significantly lower computational complexity than direct VQ.
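The classify-then-decompose front end can be sketched under stated assumptions: variance as the activity measure, the block mean as the smoothing factor, and a sign bit-plane with a folded residual as the smoother block, in the spirit of block truncation coding. The paper's actual definitions and threshold may differ.

```python
import numpy as np

def decompose_block(block, var_threshold=100.0):
    """Classify a block by its variance; decompose a high-activity block.

    The decomposition here (mean as smoothing factor, sign bit-plane,
    mean-removed-and-folded residual) is an illustrative assumption,
    not the paper's definition.
    """
    if block.var() < var_threshold:
        return ('low', block)                     # coded directly by VQ
    mean = block.mean()                           # "smoothing factor"
    bitplane = (block >= mean).astype(np.uint8)   # 1 bit per pixel
    residual = np.abs(block - mean)               # lower-variance block
    return ('high', mean, bitplane, residual)
```

The residual block has lower variance than the original, so it is easier to vector-quantize, while the bit-plane preserves the edge geometry.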
Mitsuo GEN Yinzhen LI Kenichi IDA
In this paper, we present a new spanning tree-based genetic algorithm for solving a multi-objective transportation problem. The transportation problem, a special type of network optimization problem, has a special solution data structure characterized as a transportation graph. To encode the transportation problem, we introduce a node encoding based on a spanning tree, adopted because it can equally and uniquely represent all possible basic solutions. The crossover and mutation operators were designed based on this encoding. We also designed a criterion ensuring that every chromosome can always be converted into a feasible transportation tree. In the evolutionary process, a mixed strategy combining (µ+λ)-selection and roulette wheel selection is used. Numerical experiments show the effectiveness and efficiency of the proposed algorithm.
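The abstract does not name the node encoding, but a classical spanning-tree chromosome for such problems is the Prüfer number, which represents every labeled spanning tree on n nodes equally and uniquely. Assuming that choice, decoding a chromosome into its tree can be sketched as:

```python
import heapq

def prufer_to_tree(prufer, n):
    """Decode a Prufer number (length n-2, labels 0..n-1) into the
    edge list of the unique spanning tree it represents."""
    degree = [1] * n
    for v in prufer:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    # repeatedly join the smallest current leaf to the next entry
    for v in prufer:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    # exactly two leaves remain; they form the last edge
    edges.append(tuple(sorted(heapq.nsmallest(2, leaves))))
    return edges
```

Crossover and mutation then operate directly on the integer string, and every offspring string still decodes to some spanning tree.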
Kazuo MORI Takehiko KOBAYASHI Takaya YAMAZATO Akira OGAWA
This paper examines fairness of service in the up-link of CDMA cellular slotted-ALOHA packet communication systems with site diversity reception. Site diversity mainly rescues packets originating near the edges of the cells, whereas packets originating near the base stations cannot obtain the benefits of diversity reception. This causes an unfairness in packet reception that depends on the location of the mobile station. Two transmission control schemes for reducing this unfairness are proposed. In the first scheme, mobile stations control the target received power for open-loop power control based on the reception level of the pilot signals of the surrounding base stations. In the second, mobile stations control the transmit permission probability. The successful packet reception rate, fairness coefficient and throughput performance are evaluated in fading environments with imperfect power control. Computer simulation shows that both schemes improve fairness of service for all mobile stations as well as throughput performance. A comparison between the two schemes concludes that transmission power control outperforms transmit permission probability control as a simple technique for maintaining fairness of service.
Shietung PENG Stanislav G. SEDUKHIN
The design of array processors for solving linear systems using the two-step division-free Gaussian elimination method is considered. The two-step method can be used to improve systems based on the one-step method in terms of numerical stability as well as the requirement for high precision. In spite of the rather complicated computations needed at each iteration of the two-step method, we develop a parallel algorithm whose data dependency graph meets the requirements of regularity and locality. We then derive two-dimensional array processors by adopting a systematic approach that investigates the set of all admissible solutions, and obtain the optimal array processors under linear time-space scheduling. The array processors are optimal in terms of the number of processing elements used.
Sang Yong MOON Hong Seong PARK Wook Hyun KWON
In this paper, a token-controlled network with an exhaustive service strategy is analyzed. The mean and variance of the service time of a station, and the mean token rotation time on the network, are derived under the condition that the buffer capacity of each station is individually finite. For the analysis, an extended stochastic Petri-net model of a station is presented. By analyzing this model, the mean service time of a station and the mean token rotation time are derived as functions of the given network parameters, such as the total number of stations on the network, the arrival rate of frames, the transmission rate of frames, and the buffer capacity. The variance of the service time of a station is also derived. Examination of the derived results shows that they exactly describe the actual operation of the network. In addition, computer simulations with sufficient confidence intervals validate the results.
Ryoichi KAWAHARA Yuki KAMADO Masaaki OMOTANI Shunsaku NAGATA
This paper proposes implementing guaranteed frame rate (GFR) service using the available bit rate (ABR) control mechanism in large-scale networks. GFR is being standardized as a new ATM service category to provide a minimum cell rate (MCR) guarantee to each virtual channel (VC) at the frame level. Although ABR can also support MCR, a source must adjust its cell emission rate according to the network's congestion indication. In contrast, GFR service is intended for users who are not equipped to comply with the source behavior rules required by ABR; it is expected that many existing users will fall into this category. As one implementation of GFR, weighted round robin (WRR) with per-VC queueing at each switch is well known. However, WRR is hard to implement in a switch supporting a large number of VCs because it must determine, within one cell time, which VC queue should be served. In addition, it may result in ineffective bandwidth utilization at the network level because its control mechanism is closed at the node level. On the other hand, progress in ABR service standardization has led to the development of ABR control algorithms that can handle a large number of connections. We therefore propose implementing GFR using an already developed ABR control mechanism that can cope with many connections. It consists of an explicit rate (ER) control mechanism and a virtual source/virtual destination (VS/VD) mechanism. Allocating VSs/VDs to edge switches and ER control to backbone switches enables us to apply ABR control up to the entrance of a network, which results in effective bandwidth utilization at the network level. Our method also makes it possible to share resources between GFR and ABR connections, which decreases the link cost. Through simulation analysis, we show that our method works better than WRR under various traffic conditions.
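The scalability argument against WRR can be made concrete: with per-VC queueing, each cell time the scheduler must find a VC that has both remaining credit and a waiting cell, and that search grows with the number of VCs. A minimal sketch, with illustrative weights and a simple two-pass credit refill:

```python
from collections import deque

class WeightedRoundRobin:
    """Per-VC queueing with weighted round robin: each VC is granted
    `weight` cell slots per round, but each next_cell() call must scan
    the VC queues -- the per-cell-time cost the paper argues against
    for switches carrying very many VCs."""

    def __init__(self, weights):
        self.weights = dict(weights)           # vc -> cells per round
        self.queues = {vc: deque() for vc in weights}
        self.credits = dict(weights)

    def enqueue(self, vc, cell):
        self.queues[vc].append(cell)

    def next_cell(self):
        for _ in range(2):                     # at most one credit refill
            for vc, q in self.queues.items():  # O(#VCs) scan per cell time
                if q and self.credits[vc] > 0:
                    self.credits[vc] -= 1
                    return q.popleft()
            self.credits = dict(self.weights)  # exhausted: new round
        return None                            # every queue is empty
```

Over a full round the service ratio follows the weights, which is what provides the per-VC MCR guarantee at a single node.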
Di WU Naoki INAGAKI Nobuyoshi KIKUMA
Hallen's integral equation for cylindrical antennas is modified to deal with finite-gap excitation. Because it is based on more realistic modeling, the solution is more accurate and the convergence is guaranteed. The new equation is written with a new excitation function that depends on the gap width. A moment-method analysis is presented in which piecewise sinusoidal surface current functions are used in Galerkin's procedure. The total, external and internal current distributions can be determined. Numerical results for cylindrical antennas with a wide variety of gap widths and radii are shown and compared with numerical results from the Pocklington-type integral equation and with measurements.
This paper proposes a new polling-based dynamic slot assignment (DSA) scheme. With the rapid progress of wireless access systems, wireless data communication will become more and more attractive. In wireless data communication, an efficient DSA scheme is required to enhance system throughput, since the capacity of radio links is often smaller than that of wired links. A polling-based DSA scheme is typically used in centralized slot assignment control systems. However, it is difficult to assign slots to the targeted mobile terminals in a fair-share manner if only a polling-based scheme is used, especially under unbalanced traffic, as shown later. To solve this problem, we propose an exponentially decreasing and proportionally increasing rate control, as employed in the available bit rate (ABR) service in ATM, so that fair slot assignment is achieved even under heavily unbalanced traffic. Moreover, so that an access point (AP) operating with a large number of mobile terminals (MTs) can avoid long transmission delays, the new algorithm also features a polling-based resource request scheme with random access. Simulations verify that the proposed scheme offers fair slot assignment for each user while maintaining high throughput and short delay performance.
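The borrowed ABR source behavior, cutting the rate multiplicatively on congestion and otherwise raising it by a fixed fraction of the peak rate, can be sketched as one update step. The RIF/RDF defaults below are illustrative, not the paper's parameters.

```python
def abr_rate_update(rate, congested, pcr, mcr, rif=1 / 16, rdf=1 / 4):
    """One ABR-style rate adjustment.

    rate      : current allowed rate
    congested : congestion indication fed back to the source
    pcr, mcr  : peak and minimum cell rates bounding the result
    rif, rdf  : rate increase/decrease factors (illustrative values)
    """
    if congested:
        rate -= rate * rdf   # multiplicative (exponential) decrease
    else:
        rate += rif * pcr    # additive increase proportional to PCR
    return min(pcr, max(mcr, rate))
```

Repeated application drives heavily loaded users down quickly while letting lightly loaded users recover linearly, which is what equalizes the slot shares under unbalanced traffic.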
One of the problems in multi-carrier modulation is nonlinear distortion due to the nonlinearity of channel components such as the HPA (High Power Amplifier). This problem also applies to multi-carrier SS (Spread Spectrum) systems. In this paper, an inter-modulation compensation scheme for a multi-carrier M-ary/SS system is proposed. We propose two methods of controlling the sequences transmitted in parallel so as to avoid severe inter-modulation distortion. One is the "package sequence selection" method, which requires slight redundancy. The other is based on an error correction code and requires no additional frequency or power beyond the redundancy for error correction. We confirm the validity of the proposed scheme by computer simulation, and the BER (Bit Error Rate) performance in an AWGN (Additive White Gaussian Noise) channel is presented.
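The selection idea behind "package sequence selection", choosing among candidate sequence packages the one least likely to drive the HPA into severe intermodulation, can be sketched with peak envelope power as the selection metric. Both this metric and the candidate construction below are assumptions for illustration, not the paper's M-ary/SS design.

```python
import numpy as np

def select_package(packages, n_fft=64):
    """Pick the index of the candidate package whose oversampled
    time-domain multicarrier signal has the smallest peak power --
    a proxy for the intermodulation a nonlinear HPA would generate.

    packages : list of arrays of per-carrier symbols
    n_fft    : IFFT size (> number of carriers gives oversampling)
    """
    def peak_power(seq):
        x = np.fft.ifft(seq, n_fft)        # zero-padded IFFT
        return np.max(np.abs(x) ** 2)
    return min(range(len(packages)), key=lambda i: peak_power(packages[i]))
```

The redundancy the abstract mentions corresponds to signalling which package was selected.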