Satoru HORI Tomoaki KUMAGAI Tetsu SAKATA Masahiro MORIKURA
This paper proposes a new vector error measurement scheme for orthogonal frequency division multiplexing (OFDM) systems that is used to define transmit modulation accuracy. Transmit modulation accuracy is defined to guarantee inter-operability among wireless terminals. In OFDM systems, the transmit modulation accuracy measured by the conventional vector error measurement scheme cannot guarantee inter-operability due to the effect of phase noise. To overcome this problem, the proposed vector error measurement scheme utilizes pilot signals in multiple OFDM symbols to compensate for the phase rotation caused by the phase noise. Computer simulation results show that the vector error measured by the proposed scheme uniquely corresponds to the C/N degradation in packet error rate even if phase noise exists in the OFDM signals. This means that the proposed vector error measurement scheme makes it possible to define the transmit modulation accuracy and thus guarantee inter-operability among wireless terminals.
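The pilot-based compensation idea can be illustrated with a toy numerical sketch (the 64 subcarriers, 4 pilot positions, and noise levels below are illustrative assumptions, not the paper's measurement setup): the common phase error of each OFDM symbol is estimated from known pilot subcarriers and removed before the vector error is computed.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 64                                   # subcarriers per OFDM symbol (assumed)
pilot_idx = np.array([11, 25, 39, 53])   # illustrative pilot positions
n_sym = 20                               # OFDM symbols in the measurement burst

# Ideal transmitted QPSK symbols; pilots are known reference values
ref = (rng.choice([-1, 1], (n_sym, N)) +
       1j * rng.choice([-1, 1], (n_sym, N))) / np.sqrt(2)
ref[:, pilot_idx] = 1 + 0j

# Received symbols: a per-symbol common phase rotation (phase noise) plus AWGN
cpe = 0.2 * rng.standard_normal((n_sym, 1))  # radians, one rotation per symbol
rx = ref * np.exp(1j * cpe) + 0.02 * (rng.standard_normal((n_sym, N)) +
                                      1j * rng.standard_normal((n_sym, N)))

def evm(received, reference):
    # root-mean-square vector error, normalized to reference power
    return np.sqrt(np.mean(np.abs(received - reference) ** 2) /
                   np.mean(np.abs(reference) ** 2))

# Estimate each symbol's phase rotation from its pilots, then de-rotate
phase_hat = np.angle(np.sum(rx[:, pilot_idx] * np.conj(ref[:, pilot_idx]),
                            axis=1, keepdims=True))
rx_comp = rx * np.exp(-1j * phase_hat)

print(evm(rx, ref), evm(rx_comp, ref))
```

With compensation the measured vector error reflects only the additive noise, instead of being dominated by the phase-noise rotation.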
Takehiko KOBAYASHI Noriteru SHINAGAWA Yoneo WATANABE
Future cellular communication systems will be called upon to provide multimedia services (voice, data, and video) for various user platforms (pedestrians, cars, and trains) that have a variety of mobility characteristics. Knowledge of mobility characteristics is essential for planning, designing, and operating communication networks. The position data of selected vehicles (taxis) were measured using the Global Positioning System at 1-s intervals. Those data are used for evaluating mobility characteristics, such as probabilistic distributions of speed, cell dwell time, and cell crossover rate of vehicles, assuming that cells are hypothetically laid over the loci of the vehicles. The cell dwell time of vehicles is found to follow a lognormal distribution, rather than the conventionally presumed negative exponential distribution. Assuming a holding-time distribution and random origination of calls along the loci, the cell dwell time and handoff rate of terminals communicating in the hypothetical cellular systems are also estimated from the measured data.
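The lognormal-versus-exponential comparison can be sketched with maximum-likelihood fits on synthetic dwell times (the distribution parameters below are invented for illustration; the paper's values come from the taxi measurements):

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic cell dwell times in seconds, drawn lognormal as the measurements suggest
dwell = rng.lognormal(mean=4.0, sigma=0.8, size=5000)

# Maximum-likelihood parameter estimates for each candidate model
mu, sigma = np.log(dwell).mean(), np.log(dwell).std()  # lognormal fit
lam = 1.0 / dwell.mean()                               # exponential fit

# Average log-likelihood per sample under each fitted model
ll_logn = np.mean(-np.log(dwell * sigma * np.sqrt(2 * np.pi))
                  - (np.log(dwell) - mu) ** 2 / (2 * sigma ** 2))
ll_exp = np.mean(np.log(lam) - lam * dwell)

print(ll_logn, ll_exp)
```

On lognormally distributed dwell times, the lognormal model attains a clearly higher likelihood than the negative exponential model, which is the kind of evidence the measured data provide.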
Tadayoshi HORITA Itsuo TAKANAMI
As VLSI technology has developed, interest in implementing all or a significant part of a parallel computer system using wafer-scale integration has grown. The major problem in this case is the possibility of drastically low yield and/or reliability of the system if there is no strategy for coping with faults. Various strategies to restructure a faulty physical system into a fault-free target logical system are described in the literature [1]-[5]. In this paper, we propose an efficient approximate method which can reconstruct 1 1/2-track-switch mesh arrays with faulty PEs using hardware as well as software. A logic circuit added to each PE and a network connecting the circuits are used to decide the spare PEs that compensate for faulty PEs. The hardware complexity of each circuit is much less than that of a PE, and the size of each additional circuit is constant, independent of the array size. By using this dedicated hardware scheme, a built-in self-reconfigurable system that needs no host computer is realizable, and the time for reconfiguring arrays becomes very short. Simulation results show that the reconstruction efficiency of our algorithm is slightly less than those of the exhaustive and Shigei's methods [6],[7], but much better than that of the neural method [3]. We also compare the time complexities of reconstruction by hardware and by software, and the hardware complexity in terms of the number of gates in the logic circuit added to each PE, with those of the other methods.
Kwame Osei BOATENG Hiroshi TAKAHASHI Yuzo TAKAMATSU
In a previous paper we presented a path-tracing method for multiple gate delay fault diagnosis in combinational circuits. In this paper, we propose an improved method that uses the ambiguous delay model. This delay model makes provision for parameter variations in the manufacturing process of ICs. To make the method effective, we propose a timed 8-valued simulation and some new diagnostic rules. Furthermore, we introduce a preparatory process that speeds up diagnosis. At the end of diagnosis, additional information from the results of the preparatory process also makes it possible to distinguish between non-existent faults and undiagnosed faults.
A new class of nonlinear filters called Vector Median Rational Hybrid Filters (VMRHF) for multispectral image processing is introduced and applied to color image filtering problems. These filters are based on Rational Functions (RF). The VMRHF is a two-stage filter, which exploits the features of the vector median filter and those of the rational operator. The filter output is the result of a vector rational function operating on the outputs of three sub-functions. Two vector median (VMF) sub-filters and one center-weighted vector median filter (CWVMF) are used here due to their desirable properties, such as edge and detail preservation and accurate chromaticity estimation. Experimental results show that the new VMRHF outperforms a number of widely known nonlinear filters for multispectral image processing, such as the Vector Median Filter (VMF) and Distance Directional Filters (DDF), with respect to all criteria used.
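As a point of reference, the vector median operation underlying the VMF sub-filters selects, from each filter window, the color vector that minimizes the summed distance to all other vectors in the window; a minimal sketch (the window contents are invented for illustration):

```python
import numpy as np

def vector_median(window):
    # window: (n, 3) array of color vectors; return the sample that minimizes
    # the sum of L2 distances to all samples in the window
    dist_sums = np.linalg.norm(window[:, None, :] - window[None, :, :],
                               axis=2).sum(axis=1)
    return window[np.argmin(dist_sums)]

# Four RGB samples: three similar colors and one impulse (outlier)
win = np.array([[10, 10, 10], [12, 11, 10], [11, 10, 12], [200, 0, 0]])
print(vector_median(win))
```

Because the output is always one of the input vectors, the vector median suppresses impulses without creating new (false) colors, which is the chromaticity-preserving property the abstract refers to.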
Taku YOSHIOKA Shin ISHII Minoru ITO
This article discusses automatic strategy acquisition for the game "Othello" based on a reinforcement learning scheme. In our approach, a computer player, which initially knows only the game rules, becomes stronger after playing several thousand games against another player. In each game, the computer player refines the evaluation function for the game state, which is achieved by min-max reinforcement learning (MMRL). MMRL is a simple learning scheme that uses the min-max strategy. Since the state space of Othello is huge, we employ a normalized Gaussian network (NGnet) to represent the evaluation function. As a result, the computer player becomes strong enough to beat a player employing a heuristic strategy. This article experimentally shows that MMRL is better than TD(0) and that the NGnet is better than a multi-layered perceptron in our Othello task.
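The min-max backup at the heart of such a scheme can be sketched on a toy two-ply game (this is not the paper's NGnet-based Othello learner; the game, rewards, and learning rate are invented so the learned root value can be checked against the exact minimax value):

```python
# Tiny two-ply game: MAX moves at the root ('a' or 'b'), then MIN moves
# ('x' or 'y') and the game ends with the leaf reward below.
leaf_reward = {('a', 'x'): 3, ('a', 'y'): 1, ('b', 'x'): 4, ('b', 'y'): 0}
# Exact minimax value of the root: max(min(3, 1), min(4, 0)) = 1

V = {'root': 0.0, 'a': 0.0, 'b': 0.0}   # tabular evaluation function
alpha = 0.1                              # learning rate (assumed)

for _ in range(2000):
    # MIN nodes: back up the minimum terminal reward over MIN's moves
    for s in ('a', 'b'):
        target = min(leaf_reward[(s, m)] for m in ('x', 'y'))
        V[s] += alpha * (target - V[s])
    # MAX node: back up the maximum of the children's current values
    V['root'] += alpha * (max(V['a'], V['b']) - V['root'])

print(round(V['root'], 2))
```

The update rule is an ordinary temporal-difference step, but the target is formed with the min-max strategy rather than the opponent's actual move, so the learned values converge to the minimax values of the game tree.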
As many research works build on previous results, my paper, namely "The Integrated Scheduling and Allocation of High-Level Test Synthesis," makes use of some techniques by T. Kim. However, I did not state explicitly that some parts of my work are based on Kim's approach, although I referred to his paper. I would like to express my deep apology to Kim for not having emphasized his contribution to my work. My intention was not to steal Kim's ideas. I would like to emphasize the following difference.
Yoshiyuki SHINKAWA Masao J. MATSUMOTO
Adaptability evaluation of software systems is one of the key concerns in both software engineering and requirements engineering. In this paper, we present a formal and systematic approach to evaluate the adaptability of software systems to requirements in enterprise business applications. Our approach consists of three major parts, namely, a common modeling method for both business realms and software realms, functional adaptability evaluation between the models using Σ algebra, and behavioral adaptability evaluation using process algebra. With our approach, one can rigorously and uniquely determine whether a software system is adaptable to the requirements, either totally or partially. A sample application from order processing is illustrated to show how this approach is effective in solving the adaptability evaluation problem.
Heejo LEE Jong KIM Sung Je HONG
In this paper, we present two process allocation schemes to tolerate multiple faults when the primary-backup replication method is used. The first scheme, called the multiple backup scheme, runs multiple backup processes for each process to tolerate multiple faults. The second scheme, called the regenerative backup scheme, runs only one backup process for each process, but regenerates backup processes for processes that lack one after a fault occurrence, to keep each primary-backup process pair available. For both schemes, we propose heuristic process allocation methods for balancing loads in spite of the occurrence of faults. We then evaluate and compare the performance of the proposed heuristic process allocation methods using simulation. Next, we analyze the reliability of the two schemes based on their fault-tolerance capability. For this analysis, we find the degree of fault tolerance for each scheme, and then derive the reliability of each scheme using Markov chains. The comparison of the two schemes indicates that the regenerative single-backup process allocation scheme is more suitable than the multiple backup allocation scheme.
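The Markov-chain reliability analysis can be sketched for a single primary-backup pair under a regenerative scheme (the failure probability, regeneration probability, and mission length below are assumptions for illustration, not the paper's parameters):

```python
import numpy as np

# Discrete-time Markov chain for one primary-backup pair.
# States: 2 = primary and backup alive, 1 = one process alive
# (regeneration pending), 0 = both lost (absorbing failure).
p = 0.01   # per-step failure probability of a single process (assumed)
r = 0.5    # per-step probability a lost backup is regenerated (assumed)

P = np.array([
    [(1 - p) ** 2,  2 * p * (1 - p),    p ** 2],  # from state 2
    [r * (1 - p),   (1 - r) * (1 - p),  p     ],  # from state 1
    [0.0,           0.0,                1.0   ],  # from state 0 (absorbing)
])

dist = np.array([1.0, 0.0, 0.0])  # start with both processes alive
for _ in range(100):              # 100 time steps (mission length, assumed)
    dist = dist @ P

reliability = 1.0 - dist[2]       # probability the pair never fully failed
print(round(reliability, 4))
```

Regeneration (r > 0) keeps the chain out of the vulnerable one-replica state, which is why the regenerative scheme can match multiple static backups with less standing overhead.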
Hironari MASUI Koichi TAKAHASHI Satoshi TAKAHASHI Kouzou KAGE Takehiko KOBAYASHI
Measurements of delay spread were performed at microwave frequencies of 3.35, 8.45 and 15.75 GHz along quasi-line-of-sight streets in metropolitan Tokyo. It is found that the delay spreads increase with the measurement distance and reach around 600 ns at distances up to 1 km. It is also confirmed that the cumulative probability distribution of the delay spreads follows a log-normal distribution. The gradients of delay spread against distance are greater for the lower mobile antenna height hm = 1.6 m than for hm = 2.7 m in these measurements, because of the blocking effect of vehicle and pedestrian traffic on the road. When the mobile antenna height is 2.7 m, the delay spreads within the range before the break points are relatively small: 90 ns (3.35 GHz), 140 ns (8.45 GHz) and 150 ns (15.75 GHz) at the cumulative probability of 90%. The gradients of delay spread against distance are greater for wider streets in our measurements.
I would like to draw the attention of the editorial board of IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences and its readers to a recent paper, Tianruo Yang, "The integrated scheduling and allocation of high-level test synthesis," vol. E82-A, no. 1, January 1999, pp. 145-158 (hereafter called Yang's paper). Yang did not give correct information about the originality of the paper. I will point out that the text (and accordingly the ideas) of Section 6 of Yang's paper came from papers [1] and [2].
Fumihito SASAMORI Fumio TAKAHATA
The transmission quality in mobile wireless communications is affected not only by thermal noise but also by multipath fading, which drastically changes the amplitude and phase of the received signal. This paper proposes theoretical and approximate methods for deriving the average bit error rate of DS-CDMA systems under a Rician fading environment, on the assumption of frequency non-selective fading, with the number of simultaneously accessing stations, the maximum Doppler frequency, and so on as parameters. The agreement of the theoretical and approximate results with simulation results confirms that the proposed approach is applicable to a variety of system parameters.
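A Monte-Carlo cross-check illustrates the kind of average-BER figure involved (single-user coherent BPSK over frequency-nonselective Rician fading; the K-factor and Eb/N0 below are arbitrary, and this sketch omits the multiple-access interference treated in the paper):

```python
import numpy as np

rng = np.random.default_rng(3)
K = 6.0            # Rician K-factor: LOS-to-scattered power ratio (assumed)
ebno_db = 8.0      # Eb/N0 in dB (assumed)
n = 200_000        # number of simulated bits

# Rician channel gain: deterministic LOS term plus complex Gaussian scatter,
# normalized so that E[|h|^2] = 1
s = np.sqrt(K / (K + 1))
sig = np.sqrt(1.0 / (2 * (K + 1)))
h = (s + sig * rng.standard_normal(n)) + 1j * sig * rng.standard_normal(n)

ebno = 10 ** (ebno_db / 10)
noise_std = np.sqrt(1.0 / (2 * ebno))
bits = rng.integers(0, 2, n)
tx = 2.0 * bits - 1.0                     # BPSK mapping 0 -> -1, 1 -> +1
rx = np.abs(h) * tx + noise_std * rng.standard_normal(n)  # ideal coherent detection
ber = np.mean((rx > 0).astype(int) != bits)
print(ber)
```

The simulated average BER lies between the AWGN-only and pure-Rayleigh extremes, as the LOS component of the Rician channel would suggest; the paper's contribution is obtaining such averages analytically rather than by simulation.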
Kohji TAKEO Shinichi SATO Akira OGAWA
This paper describes the effects of traffic distributions on uplink and downlink communication quality in CDMA cellular systems. Much research has been done from the viewpoint of system capacity under ideal conditions in both the uplink and the downlink. However, there are few studies on traffic distributions that affect uplink and downlink quality concurrently. The characteristics of the two links differ even under a spatially uniform traffic distribution, because the system structures are not symmetric between the links. When non-uniform radio environments are assumed, the two link qualities become very different from each other. It is therefore important to design systems in consideration of link-specific characteristics over the whole service area. This paper clarifies the difference between the uplink and downlink characteristics of CDMA systems with regard to traffic distributions.
Akihito MORIMOTO Masaaki KATAYAMA Takaya YAMAZATO Akira OGAWA
This paper discusses the employment of adaptive array antennas at the base station of a Multi Processing Gain (MPG) CDMA system. It is shown that an adaptive array antenna whose weights are controlled based on the signal before the despreading procedure does not improve but actually degrades performance compared with an omni-directional antenna, and the cause of this serious performance degradation is revealed. It is then shown that the performance with weight control based on the signal after the despreading procedure is always better than that with an omni-directional antenna. Furthermore, the possibility of performance improvement by combining adaptive array antennas with interference cancellation techniques is discussed.
Nasser HAMAD Takeshi HASHIMOTO
In this paper, the system capacity of the reverse link of a wireless multimedia CDMA system with transmission power control is analyzed for receivers with and without CCI cancellers. For N classes of users, system capacity is represented by a point in an N-dimensional space. It is shown that system capacity is improved considerably with CCI cancellers, that the system capacity region is non-convex in general, and that its boundary is well approximated by a unique hyperplane when CCI cancellers are fully employed.
Youhei IKAI Masaaki KATAYAMA Takaya YAMAZATO Akira OGAWA
In this paper, we propose the introduction of space diversity techniques into the code acquisition of a direct-sequence spread-spectrum signal. In this scheme, both the transmitter and the receiver have multiple antennas, and the signals corresponding to all combinations of transmitter and receiver antennas are combined in the acquisition circuit of the receiver. The performance is evaluated for an indoor packet radio communication system from the viewpoints of the average acquisition time, the probability of successful acquisition, and the necessary preamble length. As a result, we show that the proposed scheme yields significant performance improvements under a slow, flat Rayleigh fading environment.
Chen ZHENG Takaya YAMAZATO Masaaki KATAYAMA Akira OGAWA
Most error-correcting codes applied to DS/SS systems are such that the information data is first encoded (bit by bit) and then spread by a pseudo-noise (PN) sequence. Thus, the coding gain achieved by such systems is mainly due to the error-correcting code, and the redundancy introduced by the spreading code contributes nothing to the coding gain. In this paper, chip-by-chip Turbo coding for DS/SS systems is proposed. The input information data is first spread by a PN sequence and then fed into the Turbo encoder, which operates at chip timing. Because the Turbo encoder operates at chip timing, a large interleaving size can be obtained, which improves the performance. As a result, superior performance, with coding gains of more than 3.0 dB and 5.0 dB for the AWGN and Rayleigh-fading channels respectively, was found with a short information frame size.
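The spread-then-encode ordering can be sketched as follows (a simple rate-1/2 convolutional encoder stands in for the Turbo encoder here, and the processing gain and PN sequence are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
G = 8                        # processing gain: chips per data bit (assumed)
pn = rng.integers(0, 2, G)   # one period of the PN spreading sequence

def spread(bits, pn_seq):
    # chip-by-chip spreading: each data bit is XORed with the PN sequence
    return np.array([b ^ c for b in bits for c in pn_seq], dtype=np.uint8)

def conv_encode(chips):
    # rate-1/2 convolutional code, generators (7, 5) octal, operating at
    # chip timing -- a stand-in for the Turbo encoder of the paper
    s1 = s2 = 0
    out = []
    for c in chips:
        out += [c ^ s1 ^ s2, c ^ s2]
        s1, s2 = int(c), s1
    return np.array(out, dtype=np.uint8)

data = rng.integers(0, 2, 16)            # short information frame
coded = conv_encode(spread(data, pn))    # encoder sees G chips per data bit
print(len(coded))
```

Because the encoder input is the chip stream rather than the bit stream, it is G times longer than the data, so the (Turbo) interleaver can be made correspondingly larger even for a short information frame, which is the source of the reported gain.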
Shinya MATSUFUJI Naoki SUEHIRO Noriyoshi KUROYANAGI
This paper presents a quadriphase sequence pair whose aperiodic auto-correlation functions for non-zero shifts and cross-correlation function for any shift take purely imaginary values. Functions for pairs of length 2^n are formulated, which map the vector space of order n over GF(2) to Z4. It is shown that they are bent for any n, i.e., all values of their Fourier transforms have unit magnitude.
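For the smallest case n = 1 the stated correlation property can be verified directly. The quadriphase pair (1, j), (1, -j) below is chosen for illustration (the paper's general construction maps GF(2)^n to Z4); its aperiodic auto-correlations at non-zero shifts, and its cross-correlations at every shift, all have zero real part:

```python
import numpy as np

def acorr(a, b, tau):
    # aperiodic correlation: sum over k of a[k] * conj(b[k + tau])
    n = len(a)
    return sum(a[k] * np.conj(b[k + tau])
               for k in range(max(0, -tau), min(n, n - tau)))

a = np.array([1, 1j])    # illustrative quadriphase pair of length 2^1
b = np.array([1, -1j])

auto_nonzero = ([acorr(a, a, t) for t in (-1, 1)] +
                [acorr(b, b, t) for t in (-1, 1)])
cross_all = [acorr(a, b, t) for t in (-1, 0, 1)]
print(auto_nonzero, cross_all)
```

Every listed value is purely imaginary (counting zero), which is the defining property of the sequence pair for this toy length.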
Jium-Ming LIN Hsiu-Ping WANG Ming-Chang LIN
In this paper, the Linear Exponential Quadratic Gaussian with Loop Transfer Recovery (LEQG/LTR) methodology is employed to design high-performance induction motor servo systems. In addition, we design a speed-sensorless vector-controlled induction motor driver using both the extended Kalman filter and the LEQG/LTR algorithm. An experimental realization of the induction servo system is given. Compared with the traditional PI and LQG/LTR methods, the proposed method significantly reduces the system's output sensitivity to parameter variations and the rise time for large command inputs.
Junji HIROKANE Yoshiteru MURAKAMI Akira TAKAHASHI Shigeo TERASHIMA
A standard for Advanced Storage Magneto-Optical (AS-MO) discs, having a 6-Gbyte capacity on a 120-mm-diameter single-sided disk, was established using a magnetically induced super-resolution readout method. The transition from in-plane to perpendicular magnetization in the exchange-coupled readout layer (GdFeCo) and the in-plane-magnetization mask layer (GdFe) of the AS-MO disk has been investigated using the noncontinuous model. The readout resolution was sensitive to the thickness of the readout layer. To evaluate the readout characteristics of AS-MO disks, simulations using a micromagnetics model were performed and the readout layers were designed. The readout characteristics of the AS-MO disk are improved by making the readout layer thinner.