This paper describes a novel IDDQ sensor circuit that is driven only by an abnormal IDDQ. The sensor circuit has relatively high sensitivity and can operate at a low supply voltage. It is based on a very simple idea, although it requires two additional power supplies. It can operate at either 5-V or 3.3-V VDD with the same design. Simulation results show that it can detect a 16-µA abnormal IDDQ at 3.3-V VDD. This sensor circuit causes a smaller voltage drop and a smaller performance penalty in the circuit under test than other sensor circuits.
Naoki SHIRAMATSU Shuji IWATA Takumi MINEMOTO
Reducing moire is an important consideration in CRT design. This paper investigates how the visibility of the inverse-phase raster moire, a typical raster moire pattern, is influenced by the distribution of the electron beam and the structure of the shadow mask apertures. First, a simple model based on the luminance distribution on the CRT screen and the characteristics of human vision was used to calculate the perceived intensity of the inverse-phase raster moire and to examine the effect of the model parameters. The calculation showed that the inverse-phase raster moire consists of (1,1)-order moire components. It was also found that the perceived intensity increases as the electron beam diameter decreases and as the horizontal aperture pitch increases. In addition, a subjective evaluation test was conducted using an inverse-phase moire pattern reproduced by image simulation; the test results agreed with the calculated results. Finally, it was revealed that when an electron beam with a Gaussian distribution is used, most of the raster moire is the inverse-phase raster moire caused by the (1,1)-order component, while the (2,2)-order moire component is very weak.
Tsutomu KAWABATA Frans M. J. WILLEMS
We propose a variation of the Context Tree Weighting algorithm for tree sources, modified such that the growth of the context tree resembles Lempel-Ziv parsing. We analyze this algorithm, give a concise upper bound on the individual redundancy for any tree source, and prove the asymptotic optimality of the data compression rate for any stationary and ergodic source.
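As a point of reference, the fixed-depth context-tree weighting that the proposed variation modifies can be sketched in a few lines. The binary alphabet, depth bound, and all-zero initial context below are simplifying assumptions, and the paper's LZ-style incremental context growth is not reproduced:

```python
from math import log2

def kt_logp(zeros, ones):
    """log2 of the Krichevsky-Trofimov estimator probability for a
    (zeros, ones) count pair; the KT measure is exchangeable, so only
    the counts matter, not the symbol order."""
    lp, n = 0.0, 0
    for j in range(zeros):
        lp += log2((j + 0.5) / (n + 1))
        n += 1
    for j in range(ones):
        lp += log2((j + 0.5) / (n + 1))
        n += 1
    return lp

def ctw_codelength(seq, depth=2):
    """Ideal CTW code length (bits) of a binary sequence under a
    fixed-depth context tree, with an all-zero past as initial context."""
    hist = [0] * depth + list(seq)
    counts = {}                        # context tuple -> [n_zeros, n_ones]
    for t in range(depth, len(hist)):
        for d in range(depth + 1):
            ctx = tuple(hist[t - d:t])
            counts.setdefault(ctx, [0, 0])[hist[t]] += 1

    def logpw(ctx, d):
        c = counts.get(ctx, [0, 0])
        le = kt_logp(*c)               # log2 Pe: memoryless KT estimate
        if d == depth or c[0] + c[1] == 0:
            return le
        # children extend the context one symbol further into the past
        lc = logpw((0,) + ctx, d + 1) + logpw((1,) + ctx, d + 1)
        m = max(le, lc)                # log2(0.5*(Pe + Pw0*Pw1)), stably
        return m + log2(2 ** (le - m) + 2 ** (lc - m)) - 1

    return -logpw((), 0)
```

Because the weighted assignment is a probability measure, the code lengths of all sequences of a given length correspond to probabilities summing to one, which is a convenient sanity check.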
Ryutaroh MATSUMOTO Tomohiko UYEMATSU
We generalize the construction of quantum error-correcting codes from F4-linear codes by Calderbank et al. to p^m-state systems. Then we show how to determine the error from a syndrome. Finally, we discuss a systematic construction of quantum codes with efficient decoding algorithms.
Ahmad CHELDAVI Gholamali REZAI-RAD
In this paper, we present a simple method based on a genetic algorithm (GA) to extract the distributed circuit parameters of multiple coupled nonuniform microstrip transmission lines from their measured or computed S-parameters. The lines may be lossless or lossy, with frequency-dependent parameters. First, a sufficient amount of information about the system is measured or computed over a specified frequency range. This information is then used as the input to a GA that determines the inductance and capacitance matrices of the system. The theory used for fitness evaluation is based on the steplines approximation of the nonuniform transmission lines and quasi-TEM assumptions. Using the steplines approximation, the system of coupled nonuniform transmission lines is subdivided into an arbitrarily large number of coupled uniform lines (steplines) with different characteristics. Using the modal decomposition method, the system of coupled partial differential equations for each step is then decomposed into a number of uncoupled ordinary wave equations, which are solved in the frequency domain.
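To illustrate the extraction loop in the simplest possible setting, the sketch below fits a single hypothetical line parameter with a real-coded GA. The `reflection` model, search interval, and GA operators are illustrative stand-ins, not the steplines/modal-decomposition fitness described above:

```python
import random

def reflection(z0):
    """Toy 'measured' quantity: reflection coefficient of a line of
    characteristic impedance z0 against a 50-ohm reference."""
    return (z0 - 50.0) / (z0 + 50.0)

def ga_extract(measured, lo=20.0, hi=200.0, pop_size=40, gens=60, seed=1):
    """Real-coded GA recovering one line parameter from measured data.
    Fitness is the squared mismatch between model and measurement."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]

    def fitness(z):
        return (reflection(z) - measured) ** 2

    for _ in range(gens):
        pop.sort(key=fitness)
        elite = pop[:pop_size // 4]                # keep the best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            c = 0.5 * (a + b)                      # arithmetic crossover
            c += rng.gauss(0.0, 0.05 * (hi - lo))  # Gaussian mutation
            children.append(min(hi, max(lo, c)))
        pop = elite + children
    return min(pop, key=fitness)
```

The same elitist loop applies unchanged when the scalar parameter is replaced by the entries of the inductance and capacitance matrices and the fitness compares full S-parameter sweeps.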
Tomohiro SUGIMOTO Kouichi YAMAZAKI
We present numerical results from computer simulations of the secret key reconciliation (SKR) protocol "Cascade" and clarify its properties. Using these properties, we propose improvements to the protocol's performance in terms of the number of publicly exchanged bits, which should be kept as small as possible.
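The bisection step at the core of Cascade, and the count of publicly exchanged parity bits it incurs, can be sketched as follows (one unshuffled pass only; the inter-pass shuffling and back-tracking of the full protocol are omitted):

```python
def parity(bits, idx):
    p = 0
    for i in idx:
        p ^= bits[i]
    return p

def binary_search_error(alice, bob, idx):
    """Cascade's BINARY step: given a block whose parities differ,
    locate one error by bisection.  The located bit is flipped in bob's
    key; returns the number of parity bits publicly announced."""
    leaked = 0
    while len(idx) > 1:
        half = idx[:len(idx) // 2]
        leaked += 1                       # one parity announced per split
        if parity(alice, half) != parity(bob, half):
            idx = half
        else:
            idx = idx[len(idx) // 2:]
    bob[idx[0]] ^= 1
    return leaked

def cascade_pass(alice, bob, block_size):
    """One (unshuffled) Cascade pass: compare block parities and run
    BINARY on every mismatching block.  Returns total disclosed bits."""
    leaked = 0
    for start in range(0, len(alice), block_size):
        idx = list(range(start, min(start + block_size, len(alice))))
        leaked += 1                       # block parity announcement
        if parity(alice, idx) != parity(bob, idx):
            leaked += binary_search_error(alice, bob, idx)
    return leaked
```

Each corrected error costs about log2(block size) announced parities, which is the quantity the proposed improvements aim to reduce.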
This paper concentrates on a model useful for analyzing the error performance of M-estimators of a single unknown signal parameter: the error intensity model. We develop the point process representation for the estimation error, the conditional distribution of the estimator, and the distribution of the error candidate point process. The error intensity function is then defined as the probability density of the estimate, and its general form is derived. We compute the explicit form of the intensity functions based on the local maxima model of the error-generating point process. While the methods described in this paper are applicable to any estimation problem with continuous parameters, our main application is time delay estimation. Specifically, we consider the case where coherent impulsive interference is present in addition to Gaussian noise. Based on numerical simulation results, we compare the error intensity models in terms of the accuracy of both error probability and mean squared error (MSE) predictions, and we also discuss the issue of extendibility to multiple-parameter estimation.
Naofumi HOMMA Takafumi AOKI Tatsuo HIGUCHI
This paper presents an efficient graph-based evolutionary optimization technique called Evolutionary Graph Generation (EGG) and its application to the design of fast constant-coefficient multipliers using a parallel counter-tree architecture. An important feature of EGG is its capability to handle general graph structures directly in the evolution process instead of encoding them into indirect representations, such as bit strings and trees. This paper also addresses the major problem of EGG: the significant computation time required for verifying the function of generated circuits. To solve this problem, a new functional verification technique for arithmetic circuits is proposed. It is demonstrated that the EGG system can create efficient multiplier structures that are comparable or superior to known conventional designs.
Medium noise is the dominant noise in ultrahigh-density disk recording systems. Peak, width, and jitter noise are analyzed by micromagnetic simulations. Four different media, with a fixed grain size of 135 and a coercivity of 2900 Oe, are chosen for medium noise analysis. The linear recording density is increased from 340 KFCI (kilo flux-changes per inch) to 750 KFCI, while the areal density goes up from 14.3 Gb/in² to 31.5 Gb/in². The peak-amplitude noise is studied through the distribution of the peak magnetization Mp in each bit. The distribution of Mp develops from a delta function around the remanence Mr at low densities to a flat distribution at extremely high densities. It is found that the transition a-parameter is no longer proportional to the square root of Mr·δ, as given by the Williams-Comstock approximation. The peak-jitter noise in the read-back voltage is analyzed through the percentage of the transition jitter in a bit length.
SungHo CHO Jeong-Hyon HWANG Kyoung Yul BAE Chong-Sun HWANG
In Optimistic Two-Phase Locking (O2PL), when a transaction requests a commit, the transaction cannot be committed until all requested locks are obtained. For this reason, O2PL leads to unnecessary waits and operations even though it adopts an optimistic approach. This paper suggests an efficient optimistic cache consistency protocol that provides serializability of committed transactions. Our cache consistency scheme, called PCP (Preemptive Cache Protocol), decides whether to commit or abort without waiting when transactions request commits. In PCP, some transactions that have read stale data items need not be aborted, because PCP adopts a re-ordering scheme to enhance performance. In addition, PCP stores only one version of each data item for re-ordering. This paper presents a simulation-based analysis of the performance of PCP against other protocols such as O2PL, Optimistic Concurrency Control, and Caching Two-Phase Locking. The simulation experiments show that PCP performs as well as or better than the other schemes, with low overhead.
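The re-ordering idea can be illustrated with a toy multiversion check. This is a sketch of the general back-ordering principle only, not the PCP protocol itself; the `Store` class and its timestamps are hypothetical:

```python
class Store:
    """Toy multiversion store: a read-only transaction that has read
    stale data may still commit, without waiting, if all of its reads
    were simultaneously current at some past instant."""

    def __init__(self):
        self.history = {}          # item -> [(commit_ts, value), ...]
        self.clock = 0

    def write(self, item, value):
        self.clock += 1
        self.history.setdefault(item, []).append((self.clock, value))
        return self.clock

    def can_commit_reader(self, reads):
        """reads: [(item, commit_ts_of_version_read), ...] -- decide
        commit/abort immediately by back-ordering the transaction to
        the earliest instant at which every read version was current."""
        point = max(ts for _, ts in reads)   # candidate serialization point
        for item, ts in reads:
            versions = self.history[item]
            i = [v[0] for v in versions].index(ts)
            overwritten = (versions[i + 1][0] if i + 1 < len(versions)
                           else float("inf"))
            if overwritten <= point:         # no consistent snapshot exists
                return False
        return True
```

The commit decision is made immediately at commit-request time, which is the behavior PCP aims for, in contrast with O2PL's lock waits.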
In order to observe the temporal distribution of sea clutter, radar echoes were recorded at high sea state 7 at a fixed azimuth angle of 317°. It is shown that the sea-clutter amplitudes obey the Weibull distribution at a grazing angle of 3.9°, and obey both the Weibull distribution and the K-distribution at grazing angles of 7.5° and 61.4°. As the grazing angle increases, the shape parameters of the Weibull distribution and the K-distribution increase, with both distributions tending toward the Rayleigh distribution.
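The link between the Weibull shape parameter and the Rayleigh limit can be checked numerically. The log-moment estimator below uses the identity Var[ln X] = pi²/(6 c²) for Weibull data with shape c, and Rayleigh amplitudes (the c = 2 special case) are simulated as magnitudes of complex Gaussians; the sample sizes and seeds are arbitrary:

```python
import math
import random

def weibull_shape(amplitudes):
    """Estimate the Weibull shape parameter c from amplitude samples
    via log-moments: for Weibull data, Var[ln X] = pi^2 / (6 c^2)."""
    logs = [math.log(a) for a in amplitudes]
    mean = sum(logs) / len(logs)
    var = sum((l - mean) ** 2 for l in logs) / (len(logs) - 1)
    return math.pi / math.sqrt(6.0 * var)

def rayleigh_samples(n, sigma=1.0, seed=7):
    """Rayleigh-distributed amplitudes, i.e. |complex Gaussian| -- the
    Weibull special case with shape c = 2."""
    rng = random.Random(seed)
    return [math.hypot(rng.gauss(0, sigma), rng.gauss(0, sigma))
            for _ in range(n)]
```

A shape estimate near 2 therefore indicates clutter close to the Rayleigh distribution, matching the trend reported at high grazing angles.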
Jianting CAO Noboru MURATA Shun-ichi AMARI Andrzej CICHOCKI Tsunehiro TAKEDA Hiroshi ENDO Nobuyoshi HARADA
Magnetoencephalography (MEG) is a powerful and non-invasive technique for measuring human brain activity with high temporal resolution. The motivation for studying MEG data analysis is to extract the essential features from measured data and represent them in correspondence with human brain functions. In this paper, a novel MEG data analysis method based on the independent component analysis (ICA) approach, with pre-processing and post-processing multistage procedures, is proposed. Moreover, several ICA algorithms are investigated for analyzing MEG single-trial data recorded in a phantom experiment. The results are presented to illustrate the effectiveness and high performance of both source decomposition by the ICA approaches and source localization by the equivalent current dipole fitting method.
Xiaolei GUO Tony T. LEE Hung-Hsiang Jonathan CHAO
A flow control algorithm in high-speed networks is a resource-sharing policy implemented in a distributed manner. This paper introduces the novel concept of backlog balancing and demonstrates its application to network flow control and congestion control by presenting a rate-based flow control algorithm for ATM networks. The aim of flow control is to maximize network utilization, achieving high throughput with tolerable delay for each virtual circuit (VC). In a resource-sharing environment, this objective may also cause network congestion when a cluster of aggressive VCs contends for the same resource at a particular node. The basic idea of our algorithm is to adjust the service rate of each node along a VC according to the backlog discrepancies between neighboring nodes (i.e., to reduce the backlog discrepancy). The handshaking procedure between any two consecutive nodes is carried out by a link-by-link binary feedback protocol. Each node updates its service rate periodically based on a linear projection model of the flow dynamics. The updated per-VC service rate at a node indicates its explicit demand for bandwidth, so a service policy implementing dynamic bandwidth allocation is introduced to enforce such demands. A simulation study has validated the concept and its significance in achieving the goal of flow control while preventing network congestion at the same time.
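A minimal fluid-model sketch of the backlog-balancing rate update is given below. The tandem topology, leaky integral update, and gain values are illustrative assumptions, not the paper's linear projection model:

```python
def backlog_balance(init_backlog, arrival=1.0, gain=0.02, leak=0.9,
                    steps=600):
    """Fluid-model sketch of backlog balancing on a tandem of nodes.
    Each period, every node except the last (which serves at the
    arrival rate) leaks its service rate toward the arrival rate and
    nudges it by the backlog discrepancy with its downstream neighbor.
    Returns (backlogs, total_arrived, total_departed)."""
    n = len(init_backlog)
    backlog = list(init_backlog)
    rate = [arrival] * n
    arrived = departed = 0.0
    for _ in range(steps):
        for i in range(n - 1):        # binary feedback from neighbor i+1
            rate[i] = max(0.0, leak * rate[i] + (1 - leak) * arrival
                          + gain * (backlog[i] - backlog[i + 1]))
        # snapshot of what each node can serve this period
        served = [min(rate[i], backlog[i]) for i in range(n)]
        backlog[0] += arrival - served[0]
        for i in range(1, n):
            backlog[i] += served[i - 1] - served[i]
        arrived += arrival
        departed += served[-1]
    return backlog, arrived, departed
```

Starting from a large backlog at the first node, the discrepancy-driven updates spread the fluid along the tandem until neighboring backlogs agree, which is the balancing behavior the algorithm exploits.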
Akihiko SUGIURA Keiichi YONEMURA Hiroshi HARASHIMA
Cerebral disease has recently become a serious problem in an aging society. However, no method for grading the severity of cerebral disease has been established, which makes rehabilitation difficult. In this study, we attempt to assess slight cerebral disease by focusing on the face recognition mechanism and by synthesizing face images using computer technology. If slight cerebral disease can be detected and graded, the results can be applied to rehabilitation, reducing the burden on both doctors and patients. We report the results obtained in our experiments.
Kei SAKAGUCHI Jun-ichi TAKADA Kiyomichi ARAKI
An optimization of the smoothing preprocessing for correlated signal parameter estimation is considered. Although the smoothing factor (the number of subarrays) is a free parameter in the smoothing preprocessing, a useful strategy for determining it has not yet been established. In this paper, we thoroughly investigate the smoothing factor and propose a new scheme to optimize it. The proposed method, using the smoothed equivalent diversity profile (SED profile), can evaluate the effect of the smoothing preprocessing without any a priori information. Therefore, the method is applicable to real multipath parameter estimation.
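The reason the smoothing factor matters can be seen in a small numerical example: with two fully coherent sources, the array covariance is rank one, and forward smoothing over three subarrays restores the rank to the number of sources. The array size, angles, and amplitudes below are arbitrary illustrative choices:

```python
import cmath
import math

def steering(n, theta):
    """Half-wavelength ULA steering vector for angle theta (radians)."""
    return [cmath.exp(1j * math.pi * m * math.sin(theta)) for m in range(n)]

def outer(v):
    """Rank-one covariance v v^H of a single coherent snapshot."""
    return [[a * b.conjugate() for b in v] for a in v]

def spatial_smooth(R, sub_len):
    """Forward spatial smoothing: average the covariance over all
    overlapping subarrays of length sub_len."""
    n = len(R)
    k = n - sub_len + 1                    # the smoothing factor
    Rs = [[0j] * sub_len for _ in range(sub_len)]
    for s in range(k):
        for i in range(sub_len):
            for j in range(sub_len):
                Rs[i][j] += R[s + i][s + j] / k
    return Rs

def matrix_rank(M, tol=1e-8):
    """Numerical rank by Gauss-Jordan elimination with pivot tolerance."""
    A = [row[:] for row in M]
    n, rank = len(A), 0
    for col in range(n):
        if rank == n:
            break
        piv = max(range(rank, n), key=lambda r: abs(A[r][col]))
        if abs(A[piv][col]) < tol:
            continue
        A[rank], A[piv] = A[piv], A[rank]
        for r in range(n):
            if r != rank:
                f = A[r][col] / A[rank][col]
                A[r] = [x - f * y for x, y in zip(A[r], A[rank])]
        rank += 1
    return rank
```

Increasing the smoothing factor decorrelates the sources but shrinks the effective aperture, which is exactly the trade-off the SED profile is designed to evaluate.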
Seisuke FUKUDA Motoshi BABA Haruto HIROSAWA
Speckle statistically produces runs of consecutive dark pixels, which can be observed as dark line features in synthetic aperture radar (SAR) images. These dark lines have no physical meaning. In this paper, line features of this kind in high-resolution SAR images whose intensity obeys a K-distribution are studied. It is explained stochastically that the dark line features in 1-look K-distributed images can be observed more distinctly than those in exponentially distributed images. It is further revealed that such line features remain clearly detectable even if the K-distributed images are multilooked. Experiments on simulated images as well as on actual SAR images confirm the explanation.
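The claim that 1-look K-distributed intensity produces dark pixels more often than exponentially distributed intensity can be checked by simulation. The gamma-modulated-exponential product model and the unit order parameter below are standard modeling assumptions for K-distributed clutter, not values from the paper:

```python
import math
import random

def exponential_intensity(n, rng):
    """1-look speckle intensity over a homogeneous scene: unit-mean
    exponential distribution."""
    return [rng.expovariate(1.0) for _ in range(n)]

def k_intensity(n, rng, order=1.0):
    """1-look K-distributed intensity: exponential speckle modulated by
    gamma-distributed texture with unit mean and the given order."""
    return [rng.expovariate(1.0) * rng.gammavariate(order, 1.0 / order)
            for _ in range(n)]

def dark_fraction(img, threshold=0.1):
    """Fraction of pixels darker than the threshold (both models have
    unit mean, so the threshold is directly comparable)."""
    return sum(1 for x in img if x < threshold) / len(img)
```

At equal mean intensity, the K-distribution places more probability mass near zero, so runs of connected dark pixels, and hence spurious dark lines, occur more often.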
Tomonori HASEGAWA Masayuki HOSHINO Takashi IWASAKI
A novel method for image reconstruction of a microwave hologram synthesized from one-dimensional data is proposed. In the data acquisition, an emitting antenna is shifted along a line. At every position of the emitting antenna, the amplitude and phase of the diffraction fields are measured with a detecting antenna along a line perpendicular to the shift direction. An equivalent two-dimensional diffraction field is synthesized from the one-dimensional data sets. The conventional reconstruction method applied to this one-dimensional configuration has been the Fresnel approximation method. In this paper, an equivalent diffraction formulation is introduced in order to obtain better images than those given by the Fresnel approximation. An experiment performed at 10 GHz shows the usefulness of the proposed method.
Toru SATO Takuji NAKAMURA Koji NISHIMURA
Meteor storms and showers are now considered a potential hazard in the space environment. Radar observations of meteors have the advantage of much higher sensitivity than optical observations. The MU radar of Kyoto University, Japan, has a unique capability of very fast beam steering as well as high sensitivity to the echoes from the ionization around meteors. We developed a special observation scheme that enables us to determine the orbits of individual meteors. The direction of the target is determined by comparing the echo intensity in three adjacent beams. The Doppler pulse compression technique is applied to improve the signal-to-noise ratio of the echoes from the very fast target and to determine the range accurately. The developed scheme was applied to observations made during the Leonid meteor storm on November 18, 1998 (JST). The estimated orbital distribution suggests that the very weak meteors detected by the MU radar are dominated by sporadic meteors rather than the stream meteors associated with the Leonid storm.
Hiroaki HORIE Toshio IGUCHI Hiroshi HANADO Hiroshi KUROIWA Hajime OKAMOTO Hiroshi KUMAGAI
An airborne cloud profiling radar (SPIDER) with several unique features has been developed at CRL. In this paper, the objectives and design considerations are outlined, and the system is described. The features of SPIDER are summarized as follows. (1) A W-band frequency (95 GHz) is used to provide very high sensitivity to small cloud particles. (2) The radar is carried by a jet aircraft that can fly high above most clouds. (3) Full-polarimetric and Doppler capabilities are incorporated in the unit. (4) Almost all radar operational parameters are under software control, and most processing is performed in real time. (5) The design gives consideration to the study of cloud radiation and microphysics. The system has been completed and is undergoing performance testing. The functions and performance of the SPIDER system so far fulfill the intentions of its design. Several interesting cloud features that had not been seen with previous instruments have already been observed.
We study how to generalize a key agreement and password authentication protocol on the basis of well-known hard problems such as the discrete logarithm problem and the Diffie-Hellman problem. A key agreement and password authentication protocol is necessary in networked or internetworked environments to provide users with knowledge-based authentication and to establish a new cryptographic key for the subsequent secure session. By a generalized protocol we mean, in this paper, one that requires only weak constraints and can be generalized easily to any other cyclic group in which the two hard problems are preserved. The low entropy of passwords has made it difficult to design such a protocol and to prove its security soundness. In this paper, we devise a protocol that is easy to generalize and show its security soundness in the random oracle model. The proposed protocol reduces the constraints to merely avoiding a smooth prime modulus. Our main contribution is in solving the password's low-entropy problem in the multiplicative group for the generalization.
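A toy SPEKE-style sketch of password-based key agreement in a cyclic group is given below; it is not the paper's protocol, and the tiny safe prime is for illustration only and is far too small for real security:

```python
import hashlib

P = 1019          # toy safe prime (P = 2*509 + 1) -- illustration only,
Q = 509           # far too small for any real use

def password_generator(password):
    """SPEKE-style step: derive the group generator from the shared
    password; squaring places it in the prime-order-Q subgroup."""
    h = int.from_bytes(hashlib.sha256(password.encode()).digest(), "big")
    return pow(h % (P - 2) + 2, 2, P)

def agree(password, secret_a, secret_b):
    """Both parties exponentiate the password-derived generator; with
    matching passwords the Diffie-Hellman keys g^(ab) coincide."""
    g = password_generator(password)
    pub_a, pub_b = pow(g, secret_a, P), pow(g, secret_b, P)
    return pow(pub_b, secret_a, P), pow(pub_a, secret_b, P)
```

Because only the hash of the low-entropy password selects the generator, an eavesdropper who sees the exchanged group elements still faces the Diffie-Hellman problem, which is the structure the generalized protocol carries over to other cyclic groups.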