Younggeun HAN Chang-Seok KIM Un-Chul PAEK Youngjoo CHUNG
We discuss performance optimization of strain and temperature sensors based on long-period fiber gratings (LPFGs) through control of the temperature sensitivity of the resonant peak shifts. Distinguishing between the effects of strain and temperature is a major concern for communication and sensing applications. In this work, this was achieved by suppressing or enhancing the temperature sensitivity through adjustment of the GeO2 and B2O3 doping concentrations in the core or cladding. The LPFGs were fabricated with a CO2 laser using the mechanical stress relaxation and microbending methods. The optimized temperature sensitivities were 0.002 nm/°C for the suppressed case and 0.28 nm/°C for the enhanced case. These LPFGs were used for simultaneous measurement of strain and temperature, with rms errors of 23 µstrain for the strain and 1.3°C for the temperature.
The two-dimensional mesh is widely considered a promising parallel architecture for its scalability. In this architecture, processors are naturally placed at the intersections of horizontal and vertical grid lines, and three different types of communication links are possible: (i) the first and most popular model, the mesh-connected computer, in which each processor is connected to its four neighbours by local connections; (ii) the mesh of buses, in which each processor is connected to a pair of (row and column) buses; and (iii) the mesh-connected computer with buses, which is equipped with both buses and local connections. Mesh routing has received considerable attention over the last two decades, and a variety of algorithms have been proposed. This paper provides an overview of lower and upper bounds for such algorithms, with pointers to the literature, and suggests further research directions for mesh routing.
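The local-connection model (i) admits a simple illustration: dimension-order (XY) routing, in which a packet first corrects its row coordinate and then its column coordinate. The sketch below is ours, not an algorithm from the surveyed literature; function and variable names are hypothetical.

```python
# Minimal sketch of dimension-order (XY) routing on a mesh-connected
# computer: move along the row first, then along the column.
def xy_route(src, dst):
    """Return the sequence of grid points a packet visits from src to dst."""
    (x, y), (tx, ty) = src, dst
    path = [(x, y)]
    while x != tx:                  # correct the horizontal coordinate first
        x += 1 if tx > x else -1
        path.append((x, y))
    while y != ty:                  # then the vertical coordinate
        y += 1 if ty > y else -1
        path.append((x, y))
    return path

print(xy_route((0, 0), (2, 3)))
# → [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (2, 3)]
```

The route always has minimal length |tx − sx| + |ty − sy|; the lower/upper bounds surveyed in the paper concern how many such packets can be moved concurrently without excessive link contention.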
Takao ASANO Kenichiro IWAMA Hideyuki TAKADA Yoshiko YAMASHITA
For NP-hard combinatorial optimization problems, approximation algorithms with provably good performance guarantees have been proposed. In many of these algorithms, mathematical programming techniques are used and have proved very useful. In this survey, we present recent mathematical programming techniques as well as classic fundamental ones, showing how they are used in designing high-quality approximation algorithms for NP-hard combinatorial optimization problems.
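A classic, simple instance of an LP-duality-based approximation algorithm is the 2-approximation for minimum vertex cover: take both endpoints of every edge in a maximal matching. The code below is our own illustration of this well-known technique, not an algorithm taken from the survey.

```python
# 2-approximation for minimum vertex cover via a greedy maximal matching.
# Whenever an edge is still uncovered, both its endpoints join the cover;
# the matched edges are disjoint, so |cover| <= 2 * OPT.
def vertex_cover_2approx(edges):
    cover = set()
    for u, v in edges:
        if u not in cover and v not in cover:   # edge not yet covered
            cover.update((u, v))                # take both endpoints
    return cover

edges = [(0, 1), (1, 2), (2, 3), (3, 0), (1, 3)]
cover = vertex_cover_2approx(edges)
assert all(u in cover or v in cover for u, v in edges)
print(sorted(cover))
# → [0, 1, 2, 3]
```

Here the optimum cover is {1, 3} (size 2), and the algorithm returns a cover of size 4, matching the factor-2 guarantee.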
Mutsumi KIMURA Tokuro OZAWA Satoshi INOUE
The pseudo-lattice method has been developed for dynamic 3-D liquid-crystal director simulation in thin-film-transistor liquid-crystal displays. Its distinguishing feature is that the equation of motion of the director is formulated not on the real lattices but on pseudo-lattices organized between them; the director on a pseudo-lattice is calculated from the real lattices by interpolation. The objectives are to simulate the continuous nematic symmetry correctly and to reduce the time and memory needed for the calculation. In this paper, the pseudo-lattice method is explained in detail. Moreover, experiments were performed, and the simulated behavior and location of the bright line, which is caused by distortion of the director profile, were confirmed to agree with the actual ones. In particular, the movement and elimination process of the bright line were simulated for the first time.
Osamu TOHYAMA Shigeo MAEDA Kazuhiro ABE Manabu MURAYAMA
When a micromachine works in narrow spaces inside tubes and equipment, such as in a microfactory, a microdevice with a visual function is indispensable. To monitor the minute shapes arising in microfabrication and microassembly processes that are otherwise impossible to observe, fiber-optic sensors and actuators for environmental recognition devices have been developed. The devices are designed to allow stereoscopic and microscopic observation and to measure the dimensions of microparts. To achieve these goals and to realize minute structures and functions, we developed environmental recognition devices for the microfabrication process, with functions of far- and near-field observation, tactile sensing and tip articulation, and for the microassembly process, with functions of stereoscopic observation and tip articulation. The results show that easy and safe environmental recognition is possible in the narrow spaces of a microfactory.
Norifumi YASUE Hiroshi NARUSE Jun-ichi MASUDA Hironori KINO Toshio NAKAMURA Taketoshi YAMAURA
This paper describes a load-carrying test on a concrete pipe designed to study the effectiveness of distributed strain measurement using an optical fiber sensor. We performed the test and attempted to detect the distributed strain inside the pipe with an optical fiber sensor mounted inside it. We confirmed that the strain in a concrete structure can be detected with an optical fiber sensor after a crack has occurred on the concrete surface. The paper shows that measurement using the optical fiber sensor was effective despite large changes in the strain conditions of the measured object over a short distance.
Sandeep VOHRA Gregg JOHNSON Michael TODD Bruce DANVER Bryan ALTHOUSE
This paper describes the implementation of a Bragg grating-based strain-monitoring system on the Viaduc des Vaux bridge during its construction in 1997 and 1998. The bridge was constructed using a cantilevered, push/pull incremental launching method, and data obtained from two tests revealed interesting features of the box-girder strain response during the push and pull phases, particularly with regard to limit loads and local buckling. Where appropriate, the data were compared with data obtained from conventional resistive strain gages and from simple analytical models.
Shigeru MASUYAMA Shin-ichi NAKAYAMA
This paper analyzes which structural features of graph problems allow efficient parallel algorithms. We survey parallel algorithms for typical problems on three kinds of graphs: outerplanar graphs, trapezoid graphs and in-tournament graphs. Our results on the shortest path, longest path and maximum flow problems on outerplanar graphs, the minimum-weight connected dominating set and coloring problems on trapezoid graphs, and the Hamiltonian path and Hamiltonian cycle problems on in-tournament graphs are adopted as working examples.
Toshimitsu MASUZAWA Michiko INOUE
Distributed computation has attracted considerable attention, and large-scale distributed systems have been designed and developed. A distributed system inherently has the potential for fault tolerance because of its redundancy, and thus a great deal of effort has been devoted to designing fault-tolerant distributed algorithms. This paper introduces two promising paradigms for designing fault-tolerant distributed algorithms, self-stabilization and wait-freedom, and discusses some subjects that are important from the viewpoint of algorithm engineering.
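The canonical example of self-stabilization is Dijkstra's K-state token ring: started from an arbitrary (possibly corrupted) configuration, the system converges to configurations in which exactly one process holds the privilege ("token"). The simulation below is our own sketch of that classic algorithm; the names and parameters are ours, not from the paper.

```python
# Simulation of Dijkstra's K-state self-stabilizing token ring with a
# central daemon that fires one enabled (privileged) process per step.
import random

def privileged(S, i):
    if i == 0:
        return S[0] == S[-1]          # process 0: privileged when states match
    return S[i] != S[i - 1]           # others: privileged when they differ

def step(S, i, K):
    if i == 0:
        S[0] = (S[0] + 1) % K         # process 0 increments modulo K
    else:
        S[i] = S[i - 1]               # others copy the left neighbour's state

random.seed(1)
n, K = 5, 6                           # K >= n guarantees stabilization
S = [random.randrange(K) for _ in range(n)]   # arbitrary (faulty) start
for _ in range(200):                  # at least one process is always enabled
    movers = [i for i in range(n) if privileged(S, i)]
    step(S, random.choice(movers), K)

# From any start, the ring converges: exactly one privilege circulates.
assert sum(privileged(S, i) for i in range(n)) == 1
```

The key self-stabilization properties are both visible here: convergence (the assertion holds regardless of the initial state) and closure (once legitimate, every subsequent configuration remains legitimate).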
Naoto OKA Chiharu MIYAZAKI Shuichi NITTA
In this paper, the evaluation of emission from a PCB using crosstalk between a low-frequency signal trace and a digital signal trace is investigated. These signal traces are routed closely in parallel on several different signal planes in the PCB. It is shown experimentally that a coupled signal trace with a cable section causes a drastic increase in emission from the PCB. Measurements of the current distribution on the cable section show that this current distribution contributes to the increase in emission. The emission increase caused by coupling between signal traces is therefore evaluated through the crosstalk between them. The measured radiation and the calculated crosstalk on the PCB (each expressed as a deviation from the results for a reference PCB) agree with each other within 2 dB or 3.5 dB. This result shows that the emission-reduction effect can be predicted from calculated crosstalk. Moreover, it is shown that evaluating the emission level using crosstalk is useful for deciding the PCB structure that reduces emission from a high-density assembled PCB. From the viewpoint of practical application, separating a low-frequency signal trace from a high-speed digital signal trace by the ground plane of the PCB is effective in reducing emission.
Eric W. M. WONG Andy K. M. CHAN Sammy CHAN King-Tim KO
The Virtual Path (VP) concept in ATM networks simplifies network structure, traffic control and resource management. For VP formulation, a VP can carry traffic of a single type (the separate scheme) or of different types (the unified scheme). For VP adjustment, a certain amount of bandwidth can be dynamically assigned (reserved) to VPs, where the amount (the bandwidth incremental/decremental size) is a predetermined system parameter. In this paper, we study Least-Loaded-Path-based dynamic routing schemes with various residual-bandwidth definitions under different bandwidth allocation (VP formulation and adjustment) schemes. In particular, we evaluate the call blocking probability and the VP set-up processing load for varying (bandwidth) incremental sizes, and investigate numerically how the use of VPs trades off blocking probability against processing load. It is found that the unified scheme can outperform the separate scheme for certain incremental sizes. Moreover, we propose two ways to reduce the processing load without increasing the blocking probability; with these methods, the separate scheme always outperforms the unified scheme.
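The core of least-loaded-path routing is a simple selection rule: among the candidate paths that can carry the call, choose the one with the largest residual bandwidth. The sketch below illustrates only that rule; the names and the flat residual-bandwidth definition are ours, and the paper's schemes additionally involve VP formulation and dynamic bandwidth adjustment.

```python
# Least-loaded-path selection: pick the feasible path with the largest
# residual bandwidth; return None (call blocked) if none can fit the demand.
def route_call(paths, demand):
    """paths: list of (name, residual_bandwidth) pairs."""
    feasible = [(res, name) for name, res in paths if res >= demand]
    if not feasible:
        return None                   # call is blocked
    res, name = max(feasible)         # least loaded = largest residual
    return name

paths = [("VP-A", 3.0), ("VP-B", 7.5), ("VP-C", 5.0)]
assert route_call(paths, 4.0) == "VP-B"
assert route_call(paths, 9.0) is None
```

In the paper's setting, each successful routing also triggers VP bandwidth adjustment in units of the incremental size, which is what generates the set-up processing load traded against blocking probability.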
This is a survey of algorithmic results in the theory of "discrete convex analysis" for integer-valued functions defined on integer lattice points. The theory parallels ordinary convex analysis, covering discrete analogues of fundamental concepts such as conjugacy, the Fenchel min-max duality, and separation theorems. The technical development is based on matroid-theoretic concepts, in particular submodular functions and exchange axioms.
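As a concrete instance of the discrete analogues mentioned above, the integral conjugate and the Fenchel-type min-max relation can be written as follows (stated informally after Murota's formulation; the precise discrete-convexity conditions on f and g are omitted here):

```latex
% Discrete (integral) conjugate of f : \mathbb{Z}^n \to \mathbb{Z} \cup \{+\infty\}
f^{\bullet}(p) = \sup_{x \in \mathbb{Z}^n} \{\, \langle p, x \rangle - f(x) \,\}
\qquad (p \in \mathbb{Z}^n)

% Fenchel-type min-max duality for suitable discrete convex f / concave g
\inf_{x \in \mathbb{Z}^n} \{\, f(x) - g(x) \,\}
  = \sup_{p \in \mathbb{Z}^n} \{\, g^{\circ}(p) - f^{\bullet}(p) \,\},
\quad
g^{\circ}(p) = \inf_{x \in \mathbb{Z}^n} \{\, \langle p, x \rangle - g(x) \,\}
```

The point of the theory is that, for the right discrete function classes, both sides are attained by integer vectors, giving an exact discrete counterpart of the continuous Fenchel duality.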
Given N real weights w1, w2, . . . , wN stored in a one-dimensional array, we consider the problem of finding an optimal interval I ⊆ [1, N] under certain criteria. We review efficient algorithms developed for solving such problems under several optimality criteria. The problem extends naturally to the two-dimensional case: given an N × N two-dimensional array of N² reals, the problem seeks a subregion of the array (e.g., a rectangular subarray R) that optimizes a certain objective function. We also review several algorithms for such problems, and mention applications to region segmentation in image processing and to data mining.
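One natural optimality criterion is maximum total weight, for which the one-dimensional problem becomes the classic maximum-sum interval problem, solvable in O(N) time by Kadane's algorithm. The sketch below is our own illustration of that well-known special case, not code from the surveyed work.

```python
# Kadane's algorithm: find the contiguous interval of maximum total weight.
def max_sum_interval(w):
    """Return (best_sum, i, j) maximizing w[i] + ... + w[j] (0-indexed)."""
    best, best_ij = w[0], (0, 0)
    cur, start = w[0], 0
    for k in range(1, len(w)):
        if cur < 0:                   # restarting beats extending a negative run
            cur, start = w[k], k
        else:
            cur += w[k]
        if cur > best:
            best, best_ij = cur, (start, k)
    return best, best_ij[0], best_ij[1]

print(max_sum_interval([3, -5, 2, 4, -1, 3]))
# → (8, 2, 5)
```

The two-dimensional maximum-sum subarray problem is harder: the straightforward reduction runs Kadane's algorithm over all O(N²) row ranges, and improving on the resulting cubic time is one of the questions the surveyed algorithms address.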
Triangulations have been one of the main research topics in computational geometry and have many applications in computer graphics, finite element methods, mesh generation, etc. This paper surveys properties of triangulations in two- and higher-dimensional spaces. For a planar point set we have a good triangulation, called the Delaunay triangulation, which satisfies several optimality criteria. Based on Delaunay triangulations, many properties of planar triangulations can be shown, and a graph structure can be constructed over all planar triangulations. On the other hand, triangulations in higher dimensions are much more complicated than in the planar case. However, there exists a subclass of triangulations with nice structure, called regular triangulations, which is also touched upon.
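The optimality of the planar Delaunay triangulation rests on the empty-circumcircle property: no input point lies strictly inside the circumcircle of any triangle. This is checked locally with the standard incircle determinant test, sketched here as our own illustration (for a counterclockwise triangle).

```python
# Incircle predicate: is d strictly inside the circumcircle of ccw triangle abc?
# Standard 3x3 determinant test after translating all points by -d.
def in_circle(a, b, c, d):
    rows = [(p[0] - d[0], p[1] - d[1],
             (p[0] - d[0]) ** 2 + (p[1] - d[1]) ** 2) for p in (a, b, c)]
    (ax, ay, aw), (bx, by, bw), (cx, cy, cw) = rows
    det = (ax * (by * cw - bw * cy)
           - ay * (bx * cw - bw * cx)
           + aw * (bx * cy - by * cx))
    return det > 0

# Unit right triangle: circumcircle centered at (0.5, 0.5), radius sqrt(2)/2.
assert in_circle((0, 0), (1, 0), (0, 1), (0.5, 0.5))
assert not in_circle((0, 0), (1, 0), (0, 1), (2, 2))
```

A triangulation is Delaunay exactly when this predicate is false for every triangle and every other input point, which is also the basis of the edge-flip algorithm for computing it.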
Hideki YAMAUCHI Yoshinori TAKEUCHI Masaharu IMAI
This paper proposes an efficient architecture for fractal image coding processors. The proposed architecture achieves high-speed image coding comparable to conventional JPEG processing: it compresses a 512 × 512 pixel image in less than 33.3 msec, enabling full-motion fractal image coding. The circuit size of the proposed design is comparable to those of JPEG processors and much smaller than those of previously proposed fractal processors.
Finding linear discrete-time systems consistent with given covariance parameters is an important problem in signal processing, system realization and system identification. The problem is formulated as one of finding discrete-time positive real functions that interpolate the given covariance parameters. Among various solutions, a recent remarkable one is a parameterization of all discrete-time strictly positive real functions that interpolate the covariance parameters and have a limited McMillan degree. In this paper, we consider input-output characteristics more general than covariance parameters and seek discrete-time positive real functions that interpolate them. The input-output characteristics are given by the coefficients of Taylor series at complex points in the open unit disk. Based on our previous work, we present an algorithm that generates all discrete-time positive real functions interpolating the input-output characteristics with a limited McMillan degree. The algorithm is more general and simpler than the previous one, and is an important practical supplement to our previous work. Moreover, the interpolation of the general input-output characteristics can be effectively applied to frequency-weighted model reduction. Hence, the algorithm contributes to the problem from both the practical and theoretical viewpoints.
Given a plane graph G, we wish to find a drawing of G in the plane such that the vertices of G are represented as grid points and the edges as straight-line segments between their endpoints without any edge intersection. Such drawings are called planar straight-line drawings of G. An additional objective is to minimize the area of the rectangular grid in which G is drawn. In this paper we first review two known methods for finding such drawings, then explain a hidden relation between them, and finally survey related results.
The connectivity augmentation problem asks for the smallest number of new edges to be added to a given graph so that its edge- (or vertex-) connectivity increases to a specified value k. The problem has been studied extensively, and several efficient algorithms have been discovered. We survey recent developments in algorithms for this problem. In particular, we show how the minimum cut algorithm due to Nagamochi and Ibaraki is effectively applied to solve the edge-connectivity augmentation problem.
This paper presents a dosimetric analysis of a portable telephone with a helical antenna in an anatomically realistic human head model, using the finite-difference time-domain (FDTD) method. The head model, developed from magnetic resonance imaging (MRI) data of a Japanese adult head, consists of 530 thousand voxels of 2-mm dimensions, segmented into 15 tissue types. The helical antenna was modeled as a stack of dipoles and loops with adequate relative weights, and the model's validity was confirmed by comparing the calculated near magnetic fields with published measured data. SARs are given both for the spatial peak value in the whole head and for the averages in various major organs.
Tetsushi WATANABE Osami WADA Takuya MIYASHITA Ryuji KOGA
This paper explains a mechanism of common-mode generation on a printed circuit board with a narrow ground pattern. Each transmission line has a characteristic degree of unbalance; at the connection point of two transmission lines with different degrees of unbalance, a common-mode voltage proportional to the difference is generated, which drives a common-mode current. The authors propose a method to evaluate the common-mode current distribution and verify it by measurement. Although the calculated common-mode current is a few dB larger than the measured one, both are proportional to the degree of unbalance. An EMI reduction technique, 'unbalance matching,' is also proposed.