Kanae NAOI Koji NAKAMAE Hiromu FUJIOKA Takao IMAI Kazunori SEKINE Noriaki TAKEDA Takeshi KUBO
We have developed a three-dimensional eye movement simulator. The simulator extracts the instantaneous rotation axes of eye movement from clinical data sequences, calculates the plane formed by those axes, and displays the plane together with the axes on a rendered eyeball. It also extracts the innervations of the eye muscles. The simulator is implemented mainly with the computer graphics API OpenGL. First, the simulator was applied to saccadic eye movement data to show the so-called Listing's plane, on which all hypothetical rotation axes lie. Next, it was applied to clinical data sequences from two patients with benign paroxysmal positional vertigo (BPPV). The instantaneous actual rotation axes and eye-muscle innervations extracted from these data sequences show special characteristics. These results are useful for elucidating the mechanism of vestibular symptoms, particularly vertigo.
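As an illustration of one step such a simulator performs, a best-fit plane through a set of instantaneous rotation axes can be recovered with a singular value decomposition; the axes, noise level, and tolerance below are hypothetical stand-ins for clinical data, not the paper's own pipeline. A minimal sketch:

```python
import numpy as np

def fit_plane_normal(axes):
    """Best-fit plane through the origin for a set of unit rotation axes:
    the plane normal is the right-singular vector associated with the
    smallest singular value of the stacked axis matrix."""
    _, _, vt = np.linalg.svd(np.asarray(axes))
    return vt[-1]

# Hypothetical axes lying close to the y-z plane (true normal along x),
# standing in for axes extracted from clinical data sequences.
rng = np.random.default_rng(0)
axes = np.column_stack([0.01 * rng.standard_normal(50),
                        rng.standard_normal(50),
                        rng.standard_normal(50)])
axes /= np.linalg.norm(axes, axis=1, keepdims=True)
normal = fit_plane_normal(axes)
```

For saccadic data obeying Listing's law, the recovered normal would correspond to the primary gaze direction.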
Sathit INTAJAG Kitti PAITHOONWATANAKIJ
Edge detection is an essential step in image processing, and much work has been undertaken on it to date. This paper investigates fuzzy mathematical morphology as a route to higher-level edge-image processing. The proposed scheme uses a fuzzy morphological gradient to detect object boundaries, where a boundary is roughly defined as a curve or surface separating homogeneous regions. The automatic edge detection algorithm consists of two major steps. First, a new version of anisotropic diffusion is proposed for edge detection and image restoration; all of its improvements use fuzzy mathematical morphology to preserve edge accuracy and to restore the images toward homogeneity. Second, the fuzzy morphological gradient operation detects the step edges between homogeneous regions as object boundaries. This operation uses the geometrical characteristics encoded in the structuring element to extract edge features into an edgeness set, a set consisting of the quality values of the edge pixels. This set is processed with fuzzy logic to decide on and select authentic edge pixels. In experiments, the proposed method was tested successfully on both synthetic and real pictures.
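The morphological gradient at the core of such a scheme is the difference between a grayscale dilation and a grayscale erosion; with the standard max/min operators, grayscale morphology with a flat structuring element coincides with the usual fuzzy (Zadeh) case. A minimal sketch, using a hypothetical 3×3 flat structuring element and a synthetic step image rather than the paper's fuzzy formulation:

```python
import numpy as np

def gray_dilate(img, k=3):
    """Grayscale dilation with a flat k x k structuring element (local max)."""
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    return np.array([[pad[i:i + k, j:j + k].max()
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def gray_erode(img, k=3):
    """Grayscale erosion with a flat k x k structuring element (local min)."""
    p = k // 2
    pad = np.pad(img, p, mode='edge')
    return np.array([[pad[i:i + k, j:j + k].min()
                      for j in range(img.shape[1])]
                     for i in range(img.shape[0])])

def morph_gradient(img, k=3):
    """Morphological gradient: dilation minus erosion; large on step edges."""
    return gray_dilate(img, k) - gray_erode(img, k)

img = np.zeros((8, 8))
img[:, 4:] = 1.0          # synthetic vertical step edge between columns 3 and 4
grad = morph_gradient(img)
```

The gradient responds only along the step, which is exactly the behavior exploited to separate object boundaries from homogeneous regions.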
Shinichi HABATA Mitsuo YOKOKAWA Shigemune KITAWAKI
The Earth Simulator (ES), developed under the Japanese government initiative "Earth Simulator project," is a highly parallel vector supercomputer system. In May 2002, the ES was shown to be the most powerful computer in the world, achieving 35.86 teraflops on the LINPACK benchmark and 26.58 teraflops for a global atmospheric circulation model with the spectral method. Three architectural features enabled these achievements: vector processors, shared memory, and a high-bandwidth non-blocking crossbar interconnection network. This paper gives an overview of the ES, describes the three architectural features and the results of the performance evaluation, and focuses in particular on the hardware realization of the interconnection among the 640 processor nodes.
Bojan CUKIC Erdogan GUNEL Harshinder SINGH Lan GUO
Software certification is a notoriously difficult problem. From a software reliability engineering perspective, the certification process must provide evidence that the program meets or exceeds the required level of reliability. When certifying the reliability of a high assurance system, very few failures, if any, are observed during testing. In statistical estimation theory, the probability of an event is estimated by the proportion of times it occurs in a fixed number of trials. In the absence of failures, the number of required certification tests becomes impractically large. We suggest that subjective reliability estimation from the development lifecycle, based on observed behavior or on one's belief in the system quality, be included in certification. In statistical terms, the presumed failure probability is treated as a hypothesis, which must then be corroborated by statistical testing during the reliability certification phase. As evidence relevant to the hypothesis accumulates, we change the degree of belief in the hypothesis, and depending on the corroborating evidence the system is either certified or rejected. The advantage of the proposed theory is an economically acceptable number of required certification tests, even for high assurance systems so far considered impossible to certify.
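The interplay of subjective belief and certification testing can be illustrated with a standard Bayesian conjugate update (a sketch of the general idea, not the paper's exact procedure): encode the development-phase belief about the failure probability as a Beta prior and update it with the, possibly failure-free, certification test outcomes.

```python
def posterior_failure_belief(a, b, tests, failures):
    """Conjugate Beta-binomial update: Beta(a, b) prior on the failure
    probability p, updated with observed certification test outcomes.
    Returns the posterior parameters and the posterior mean of p."""
    a_post = a + failures
    b_post = b + tests - failures
    return a_post, b_post, a_post / (a_post + b_post)

# Hypothetical development-phase belief: roughly 1 failure per 1000 runs.
a0, b0 = 1.0, 999.0
a1, b1, mean = posterior_failure_belief(a0, b0, tests=5000, failures=0)
```

Even with zero observed failures, the posterior mean shrinks below the prior mean, quantifying how the failure-free tests corroborate the presumed reliability without requiring an impractically large test campaign.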
Tadashi DOHI Kazuki IWAMOTO Hiroyuki OKAMURA Naoto KAIO
Software rejuvenation is a proactive fault management technique that has been studied extensively in the recent literature. In this paper, we focus on the telecommunication billing application considered in Huang et al. (1995) and develop discrete-time stochastic models to estimate the optimal software rejuvenation schedule. More precisely, two software availability models with rejuvenation are formulated via discrete semi-Markov processes, and the optimal software rejuvenation schedules that maximize the steady-state availabilities are derived analytically. Further, we develop statistically non-parametric algorithms to estimate the optimal software rejuvenation schedules, provided that complete sample data of failure times are given. A new statistical device, the discrete total time on test statistic, is then introduced. Finally, we examine the asymptotic properties of the proposed statistical estimation algorithms through a simulation experiment.
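For intuition, the classical (continuous-sample) scaled total time on test statistic, of which the paper introduces a discrete analogue, is computed from the ordered failure times x_(1) ≤ … ≤ x_(n) as TTT_i = Σ_{j≤i} x_(j) + (n−i)·x_(i), normalized by the total TTT_n. A minimal sketch with hypothetical failure times:

```python
def scaled_ttt(times):
    """Scaled total-time-on-test statistic at each order statistic:
    TTT_i = (sum of the i smallest times) + (n - i) * (i-th smallest time),
    normalized by the total time on test TTT_n."""
    x = sorted(times)
    n = len(x)
    ttt = [sum(x[:i + 1]) + (n - i - 1) * x[i] for i in range(n)]
    return [t / ttt[-1] for t in ttt]

phi = scaled_ttt([2.0, 5.0, 1.0, 4.0])   # hypothetical failure-time sample
```

The shape of this curve (convex vs. concave) is what the non-parametric schedule-estimation algorithms read off to locate the availability-maximizing rejuvenation time.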
This paper focuses on flow control in high-speed networks. Each node in a network handles its local traffic flow based only on the information available to it, yet it is preferable that each node's decision-making lead to high performance for the whole network. To this end, we investigate the relationship between the flow control mechanism of each node and overall network performance. We consider the situation in which the capacity of a link in the network changes without individual nodes being aware of it. We then investigate the stability and adaptability of the network performance and discuss an appropriate flow control model on the basis of simulation results.
Minoru IDA Kenji KURISHIMA Noriyuki WATANABE
We describe InP-based double heterojunction bipolar transistors with 150-nm-thick collectors and two types of thin pseudomorphic bases. The emitter and collector layers are designed for high collector current operation. Collector current blocking is suppressed by the compositionally step-graded collector structure even at a JC of over 500 kA/cm2, with practical breakdown characteristics. An HBT with a 20-nm-thick base achieves a high fT of 351 GHz at a high JC of 667 kA/cm2, and a 30-nm-base HBT achieves a high value of 329 GHz for both fT and fmax at a JC of 583 kA/cm2. An equivalent circuit analysis suggests that the extremely small carrier transit delay contributes to the ultrahigh fT.
Tomoharu SHIBUYA Masatoshi ONIKUBO Kohichi SAKANIWA
In this paper, we investigate Tanner's lower bound for the minimum distance of regular LDPC codes based on combinatorial designs. We first determine Tanner's lower bound for LDPC codes which are defined by modifying bipartite graphs obtained from combinatorial designs known as Steiner systems. Then we show that Tanner's lower bound agrees with or exceeds conventional lower bounds including the BCH bound, and gives the true minimum distance for some EG-LDPC codes.
In this paper, the time-frequency separation (TFS) algorithm proposed by Belouchrani and Amin is applied to ground penetrating radar (GPR) data to reduce ground clutter, which hides the waves reflected from a near-surface planar interface. We formulate the problem under several assumptions, so that when a wideband signal is radiated from the transmitter, narrowband signals whose center frequency and baseband waveform depend on the propagation path are received at the receiver. These phenomena can be clearly seen in the time-frequency distribution (TFD) of the received signal. We adopt the TFS, which exploits TFD signatures as a blind separation technique, to separate the ground clutter from the target signals. We show numerical and experimental results that verify the validity of the problem formulation and of the TFS. We carried out GPR measurements of permafrost in Yakutsk, Russia, and found a difference in TFD signatures between the ground clutter and the target signal in the experimental data. With the TFS, we could detect the upper boundary of the permafrost in spite of the unstable ground clutter.
Seongyong KIM Kong-Joo LEE Key-Sun CHOI
We propose a normalization scheme for syntactic structures using a binary phrase structure grammar with composite labels. The normalization adopts binary rules so that the dependency between two sub-trees can be represented in the label of the tree. The label of a tree is composed of two attributes, each extracted from one of the sub-trees, so that it represents the compositional information of the tree. The composite label is generated from part-of-speech tags by an automatic labelling algorithm. Since the proposed normalization scheme is binary and uses only part-of-speech information, it can readily be used to compare the results of different syntactic analyses independently of their syntactic descriptions, and it can be applied to other languages as well. It can also be used for syntactic analysis itself, where it performs better than the previous syntactic description for a Korean corpus. We implement a tool that transforms a syntactic description into a normalized one based on the proposed scheme. It can help construct a unified syntactic corpus and extract syntactic information from various types of syntactic corpora in a uniform way.
In this paper, a high-performance pipelined architecture for the 2-D inverse discrete wavelet transform (IDWT) is proposed. We use a tree-block pipeline-scheduling scheme to increase computation performance and reduce temporary buffers. The scheme divides the input subbands into several wavelet blocks and processes these blocks one by one, so the size of the buffers for storing temporary subbands is greatly reduced. After scheduling the data flow, we fold the computations of all wavelet blocks into the same low-pass and high-pass filters to achieve higher hardware utilization and minimize hardware cost, and we pipeline these two filters efficiently to reach a higher throughput rate. For an N×N-sample 2-D IDWT with a filter length of K, our architecture takes at most (2/3)N² cycles and requires 2N(K-2) registers. In addition, each filter is designed regularly and modularly, so it is easily scalable to different filter lengths and different decomposition levels. Because of its small storage, regularity, and high performance, the architecture is well suited to time-critical image decompression.
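As a quick sanity check of the stated complexity, the cycle and register counts can be evaluated for a concrete image size and filter length (N = 512 and K = 9 below are illustrative values, not taken from the paper):

```python
def idwt_cost(n, k):
    """Cycle and register counts stated for the proposed architecture:
    at most (2/3)*N^2 cycles and 2*N*(K-2) registers."""
    cycles = (2 * n * n) // 3          # integer evaluation of (2/3)*N^2
    registers = 2 * n * (k - 2)
    return cycles, registers

cycles, registers = idwt_cost(512, 9)  # hypothetical: 512x512 image, 9-tap filter
```

For this hypothetical case the bound works out to roughly 175 K cycles and about 7 K registers, far below the N² samples a full-frame buffer would require.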
Jong-Hyun PARK Wan-Hyun CHO Soon-Young PARK
In this paper we present an unsupervised color image segmentation algorithm based on statistical models. We adopt a Gaussian mixture model to represent the distribution of color feature vectors. A deterministic annealing EM algorithm and mean field theory from statistical mechanics are used to compute the posterior probability distribution of each pixel and to estimate the parameters of the Gaussian mixture model. We describe a non-contextual segmentation algorithm that uses the deterministic annealing approach and a contextual segmentation algorithm that uses mean field theory. The experimental results show that deterministic annealing EM and mean field theory provide a globally optimal solution for the maximum likelihood estimators and that these algorithms can efficiently segment real images.
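The deterministic annealing E-step differs from standard EM only in raising the component likelihoods to a power β = 1/T: at high temperature (small β) the pixel posteriors are nearly uniform, and as β → 1 the standard EM posteriors are recovered. A minimal one-dimensional sketch (the mixture parameters and test point are illustrative):

```python
import numpy as np

def annealed_responsibilities(x, pi, mu, sigma, beta):
    """DAEM E-step for a 1-D Gaussian mixture: posteriors computed with the
    component likelihoods raised to beta = 1/T. Small beta (high temperature)
    flattens the posteriors; beta = 1 recovers the standard EM E-step."""
    x = np.asarray(x, dtype=float)[:, None]
    log_post = (np.log(pi) - 0.5 * np.log(2.0 * np.pi * sigma**2)
                - (x - mu)**2 / (2.0 * sigma**2))
    w = beta * log_post
    w -= w.max(axis=1, keepdims=True)      # stabilize before exponentiating
    r = np.exp(w)
    return r / r.sum(axis=1, keepdims=True)

pi = np.array([0.5, 0.5])
mu = np.array([0.0, 5.0])
sigma = np.array([1.0, 1.0])
r_cold = annealed_responsibilities([0.1], pi, mu, sigma, beta=1.0)
r_hot = annealed_responsibilities([0.1], pi, mu, sigma, beta=0.05)
```

Starting the EM iterations at small β and gradually increasing it is what lets the annealed algorithm avoid the poor local maxima that plain EM can fall into.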
A low-energy plasma based on an electron discharge was investigated for the pre-epi cleaning of silicon wafers and for plasma-enhanced homo- and hetero-epitaxial growth of Si and SiGe layers. Virtual substrates (VS) were produced in a short, completely dry process sequence consisting of low-energy plasma cleaning (LEPC) and low-energy plasma-enhanced chemical vapor deposition (LEPECVD) only. The wafer/epilayer interface obtained in this process sequence was suitable for growing high-quality VS with low surface roughness and low dislocation densities. Based on this process and its implementation in a 200/300 mm single-wafer cluster tool, high-volume, economical production of VS appears possible.
Pusadee SERESANGTAKUL Tomio TAKARA
We have developed Thai speech synthesis by rule using cepstral parameters. To synthesize the pitch contours of the Thai tones, we applied an extension of Fujisaki's model. The mid tone is unique to Thai compared with Chinese. For the extension of Fujisaki's model to the Thai tones, we assumed that the mid tone is neutral and adopted its phrase component as the phrase component for all tones. According to our study of the pitch contours of the five Thai tones using this model, the command pattern for the local F0 components requires both positive and negative commands. Listening tests showed that the intelligibility of the Thai tones, measured in terms of error rate, was 0.0%, 0.7%, and 2.7% for analysis/synthesis, Fujisaki's model, and the polynomial model, respectively. It is therefore shown that the extension of Fujisaki's model is effective for Thai.
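In the standard Fujisaki formulation that such a system extends, ln F0(t) is the sum of a baseline ln Fb, phrase components driven by impulse commands, and local (tone) components driven by onset/offset command pairs; negative local command amplitudes are what model falling or low tones. A sketch with illustrative time constants and commands (not the paper's fitted values):

```python
import math

def Gp(t, alpha=3.0):
    """Phrase control: impulse response of the phrase command mechanism."""
    return alpha**2 * t * math.exp(-alpha * t) if t >= 0 else 0.0

def Ga(t, beta=20.0, gamma=0.9):
    """Local (tone) control: step response, saturating at ceiling gamma."""
    return min(1.0 - (1.0 + beta * t) * math.exp(-beta * t), gamma) if t >= 0 else 0.0

def f0(t, fb, phrase_cmds, tone_cmds):
    """ln F0(t) = ln Fb + sum of phrase components + sum of tone components.
    phrase_cmds: (amplitude, onset) pairs; tone_cmds: (amplitude, onset, offset).
    A negative tone amplitude models a falling/low tone command."""
    lnf = math.log(fb)
    lnf += sum(ap * Gp(t - t0) for ap, t0 in phrase_cmds)
    lnf += sum(aa * (Ga(t - t1) - Ga(t - t2)) for aa, t1, t2 in tone_cmds)
    return math.exp(lnf)

# Illustrative commands: one phrase impulse at t=0, one negative tone command.
f_mid = f0(0.5, 120.0, [(0.5, 0.0)], [(-0.3, 0.2, 0.6)])
f_pre = f0(-1.0, 120.0, [(0.5, 0.0)], [(-0.3, 0.2, 0.6)])
```

Before any command takes effect the contour stays at the baseline Fb; afterwards the phrase and tone contributions superpose in the log-F0 domain.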
Eugeny LYUMKIS Rimvydas MICKEVICIUS Oleg PENZIN Boris POLSKY Karim El SAYED Andreas WETTSTEIN Wolfgang FICHTNER
TCAD is gaining acceptance in the heterostructure industry. This article discusses the specific challenges a device simulator must manage to be a useful tool in designing and optimizing modern heterostructure devices. Example simulation results are given for HEMTs and HBTs, illustrating the complex physical processes in heterostructure devices, such as nonlocal effects in carrier transport, lattice self-heating, hot-electron effects, traps, electron tunneling, and quantum transport.
Hiroto KITABAYASHI Suehiro SUGITANI Yoshino K. FUKAI Yasuro YAMANE Takatomo ENOKI
We demonstrated the uniformity and stability, as well as the high breakdown voltage, of 0.1-µm-gate InP HEMTs with a double recess structure. To overcome the uniformity and stability drawbacks of the double recess structure, an InP passivation layer that functions as both an etch-stopper and a surface passivator was successfully applied to the structure. It was confirmed that there was no degradation in the uniformity and stability of device performance for the double recess HEMTs, whose on-state and off-state breakdown voltages were improved by a factor of 1.6.
C. R. BOLOGNESI Martin W. DVORAK Simon P. WATKINS
We study the advantages and limitations of InP/GaAsSb/InP DHBTs for high-speed digital circuit applications. We show that the high-current performance limitation in these devices is electrostatic in nature. Comparison of the location of collector current blocking in various collector designs suggests a smoother, more gradual onset of blocking effects in type-II collectors. A comparison of collector current blocking effects between InP/GaAsSb-based and various designs of InP/GaInAs-based DHBTs provides support for our analysis.
Junichi FUNASAKA Masato BITO Kenji ISHIDA Kitsutaro AMANO
As so many software titles are now distributed via the Internet, the number of accesses to file servers, such as FTP servers, is rapidly increasing. To prevent the concentration of accesses on the original file server, mirror servers are being introduced that contain the same directories and files as the original server. However, inconsistency between the mirror servers and the original server is often observed because of delivery latency, traffic congestion on the network, and the management policies of the mirror servers. This inconsistency degrades the value of the mirror servers. Accordingly, we have developed an intermediate FTP proxy server system that guarantees the freshness of the files while preventing access concentration on the original FTP server. The system adopts per-file selection of the replicated files, whereas most existing methods are based on per-host or per-directory selection. It can therefore offer users a quick, stable, and up-to-date FTP mirroring service even in the face of frequent content updates, which tend to degrade the homogeneity of services. Moreover, it can forward the retrieved files with little overhead. Tests confirmed that our system is comparable to existing systems in terms of actual retrieval time, required traffic, and load endurance. This technology assures clients that they will receive the latest version of the files they request, and it works well in heterogeneous network environments such as the Internet.
We propose an image generation method for an immersive multi-screen environment that contains a motion ride. To allow a player to look around freely in a virtual world, a method for generating an image in an arbitrary direction is required; this technology has already been established. In our environment, the displayed images must also be updated according to the movement of the motion ride in order to keep the player's viewpoint consistent with the virtual world. In this paper, we show that this updating can be performed by a method similar to that used for generating look-around images, and that the same data format is applicable. We then discuss the format in terms of data size and the amount of calculation needed to obtain the required performance in our display environment, and we propose new image formats that improve on widely used formats such as the perspective and fish-eye formats.
Guangyuan ZHAO William SUTTON Dimitris PAVLIDIS Edwin L. PINER Johannes SCHWANK Seth HUBBARD
Schottky CO gas sensors were fabricated using high-quality AlGaN/GaN/Si heterostructures. The CO sensors show good sensitivity in the temperature range of 250 to 300°C (530% at 160 ppm CO in N₂) and a fast response comparable with that of SnO₂ sensors. A two-region linear regime was observed for the dependence of the sensitivity on the CO concentration. GaN sensors on Si substrates offer the possibility of integration with Si-based electronics. The gas sensors show a slow response drift over time, possibly due to changes in material properties in the presence of large thermal stress.