
Keyword Search Result

[Keyword] Ti(30728hit)

29541-29560hit(30728hit)

  • Equation for Brief Evaluation of the Convergence Rate of the Normalized LMS Algorithm

    Kensaku FUJII  Juro OHGA  

     
    LETTER

      Vol:
    E76-A No:12
      Page(s):
    2048-2051

    This paper presents an equation for briefly evaluating the length of the white-noise sequence to be sent as a training signal. The equation is formulated by utilizing the formula describing the convergence property, which has been derived from the IIR-filter expression of the NLMS algorithm. The result reveals that the length is directly proportional to I/[K(2-K)], where K is the step gain and I is the number of adaptive filter taps.
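
    A minimal sketch of the proportionality reported above. The constant of proportionality is not given in the abstract, so the values below are relative to the optimum step gain K = 1; the tap count is an illustrative assumption.

```python
# Hedged sketch: the abstract reports training-signal length proportional to
# I / [K * (2 - K)] for the NLMS algorithm. The proportionality constant is
# unknown, so lengths are shown relative to the optimum step gain K = 1.

def relative_training_length(num_taps: int, step_gain: float) -> float:
    """Training length up to an unknown constant, per the abstract's formula."""
    if not 0.0 < step_gain < 2.0:
        raise ValueError("NLMS step gain must lie in (0, 2) for convergence")
    return num_taps / (step_gain * (2.0 - step_gain))

I = 256  # adaptive filter taps (illustrative value)
baseline = relative_training_length(I, 1.0)  # K = 1 minimizes the length
for K in (0.1, 0.5, 1.0, 1.5, 1.9):
    ratio = relative_training_length(I, K) / baseline
    print(f"K = {K:3.1f}: {ratio:5.2f} x the K = 1 training length")
```

    Note the symmetry about K = 1: very small and near-2 step gains both inflate the required training length, which is exactly what the denominator K(2-K) encodes.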

  • Higher-Order Analysis on Phase Noise Generation in Varactor-Tuned Oscillators--Baseband Noise Upconversion in GaAs MESFET Oscillators--

    Takashi OHIRA  

     
    LETTER-Microwave and Millimeter Wave Technology

      Vol:
    E76-C No:12
      Page(s):
    1851-1854

    Phase noise generation in varactor-tuned oscillators is analyzed by an asymptotic perturbation technique. It is found that 1/f noise and AM noise are converted into phase noise by the first- and higher-order nonlinearities of the varactor. The deduced formula can be utilized in CAD for circuit evaluation and optimization of varactor-tuned oscillators.

  • Analysis of Abrupt Discontinuities in Weakly Guiding Waveguides by a Modified Beam Propagation Method

    Masashi HOTTA  Masahiro GESHIRO  Shinnosuke SAWA  

     
    PAPER

      Vol:
    E76-B No:12
      Page(s):
    1552-1557

    The beam propagation method (BPM) is a powerful and manageable method for analyzing wave propagation along weakly guiding optical waveguides. However, the effects of reflected waves are not considered in the original BPM. In this paper, we propose two simple modifications of the BPM that make it suitable for characterizing abrupt discontinuities in weakly guiding waveguides at which a significant amount of reflection is expected. The validity of the modifications is confirmed by numerical results for abrupt discontinuities in step-index slab waveguides and for butt-joints between different slab waveguides.

  • Tropospheric Propagation Characteristics at Ku-Band for Satellite to Ground and LOS Paths in Surabaya, Indonesia

    Gert BRUSSAARD  Jaap DIJK  Kim LIU  Jan DERKSEN  

     
    LETTER

      Vol:
    E76-B No:12
      Page(s):
    1593-1597

    Some results are presented from a one-year measurement period on an INTELSAT down link at Ku band, at an elevation angle of 14°, with concurrent measurements of beacon attenuation, sky noise, and point rainfall rate. Some results on line-of-sight (LOS) link fading characteristics at the same site are also presented. The projection of the down-link trajectory on the earth has nearly the same direction as the LOS path trajectory. The measurement results are compared with theoretical values obtained according to the CCIR recommended procedures for rain-attenuation prediction in tropical regions, in this case Surabaya, Indonesia. A record rain-attenuation value of 80 dB was observed.

  • Full Wave Analysis of the Australian Omega Signal Observed by the Akebono Satellite

    Isamu NAGANO  Paul A. ROSEN  Satoshi YAGITANI  Minoru HATA  Kazutoshi MIYAMURA  Iwane KIMURA  

     
    PAPER

      Vol:
    E76-B No:12
      Page(s):
    1571-1578

    The Akebono satellite observed the Australian Omega signals when it passed about 1000 km above the Omega station. In this paper, we compare the observed Omega signal intensities with values obtained by a full wave calculation, and we discuss a mechanism of modulation of the signals. The relative spatial variations of the calculated Omega intensities are quite consistent with those observed, but the absolute calculated intensities are several dB larger than the observed ones. This difference may be due to the horizontal inhomogeneity of the D region, which is not modeled in the full wave calculation, or to an incorrect assumption about the radiation characteristics of the Omega antenna. It is found that the modulation of the observed signals is caused by interference between waves with different k vectors.

  • Two-Dimensional Active Imaging of Conducting Objects Buried in a Dielectric Half-Space

    Yiwei HE  Toru UNO  Saburo ADACHI  Takunori MASHIKO  

     
    PAPER

      Vol:
    E76-B No:12
      Page(s):
    1546-1551

    A two-dimensional quasi-exact active imaging method for detecting conducting objects buried in a dielectric half-space is proposed. In this method, an image function, which is a projection of the buried object onto an arbitrary direction, is introduced exactly by taking account of the presence of the planar boundary. The image function is synthesized from the scattered fields, which are measured by moving a transmitting antenna (a current source) and a receiving antenna (an observation point) simultaneously along the ground surface. The scattered field is generated by the physical-optics current assumed on the surface of the buried object. Because the effectiveness of the physical-optics approximation has been confirmed for this problem, the method is quasi-exact. Its validity is confirmed by numerical simulations and an experiment.

  • Rain Depolarization Characteristics Related to Rainfall Types on Ka-Band Satellite-to-Ground Path

    Yasuyuki MAEKAWA  Nion Sock CHANG  Akira MIYAZAKI  

     
    PAPER

      Vol:
    E76-B No:12
      Page(s):
    1564-1570

    Observations of rain depolarization characteristics were conducted using the CS-2 and CS-3 beacon signals (19.45 GHz, circular polarization, elevation angle = 49.5°) during the seven years 1986-1992 at Neyagawa, Osaka. The mean cross-polar phase relative to the co-polar phase of each rainfall event is distributed over a comparatively wide range, from -100° to -150°. This large variation is suggested to be caused by differences in raindrop size distribution (DSD) in addition to differences in rain intensity. The effects of DSD are examined through rain attenuation statistics for specific months, together with direct measurements of raindrop diameters on the ground for several rainfall events. Compared with representative DSD models, the effects of the Joss-drizzle type, with relatively small raindrops, appear primarily in the "Baiu (Tsuyu)" period, while the effects of the Marshall-Palmer type, which represents a standard type, are enhanced in the "Shurin (Akisame)" period. On the other hand, the effects of the Joss-thunderstorm type, with comparatively large raindrops, do not show a very clear seasonal variation. Possible XPD improvements achieved by differential phase shifters are generally found to be lower than 10 dB for rain depolarization, owing to the residual differential attenuation that remains after cancellation of the differential phase shift. Such XPD improvements are, however, very sensitive to the type of DSD: the improvements are at least 6 dB for the Joss-drizzle type, whereas they are less than 6 dB for the Marshall-Palmer and Joss-thunderstorm types. The effects of the XPD improvements are thus related to rainfall type, i.e., the type of DSD, and the improvements depend considerably on the seasons in which each rainfall type frequently appears.

  • A Two-Cascaded Filtering Method for the Enhancement of X-Ray CT Image

    Shanjun ZHANG  Toshio KAWASIMA  Yoshinao AOKI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:12
      Page(s):
    1500-1509

    A two-cascaded image processing approach to enhance subtle differences in X-ray CT images is proposed. In the method, an asymmetrical nonlinear subfilter is first applied to reduce the noise inherent in the image while preserving local edges and directional structural information. A second subfilter is then used to compress the global dynamic range of the image and emphasize details in homogeneous regions by performing a modular transformation on local image densities. The modular transformation is based on a dynamically defined contrast factor and the histogram distributions of the image. The local contrast factor is described, in accordance with Weber's fraction, by a two-layer neighborhood system in which the relative variances of the medians for eight directions are computed. The method is suitable for low-contrast images with wide dynamic ranges. Experiments on X-ray CT images of the head show the validity of the method.

  • Scene Interpretation with Default Parameter Models and Qualitative Constraints

    Michael HILD  Yoshiaki SHIRAI  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E76-D No:12
      Page(s):
    1510-1520

    High variability of object features and poor class separation of objects are the main causes of the difficulties encountered in the interpretation of ground-level natural scenes. To cope with these two problems, we propose a method which first extracts those regions that can be segmented and immediately recognized with sufficient reliability (core regions), and then tries to extend these core regions up to their real object boundaries. The extraction of reliable core regions is generally difficult to achieve. Instead of using fixed sets of features and fixed parameter settings, our method employs multiple local features (including textural features) and multiple parameter settings. Not all available features may yield useful core regions, but the core regions that are extracted contribute to the reliability of the objects they represent. The extraction mechanism computes multiple segmentations of the same object from these multiple features and parameter settings, because it is not possible to extract such regions uniquely. Regions which satisfy the constraints given by knowledge about the objects (shape, location, orientation, spatial relationships) are then extracted, and spatially overlapping regions are combined. Combined regions obtained for several features are integrated to form core regions for the given object class.

  • Data Compression of Ambulatory ECG by Using Multi-Template Matching and Residual Coding

    Takanori UCHIYAMA  Kenzo AKAZAWA  Akira SASAMORI  

     
    PAPER

      Vol:
    E76-D No:12
      Page(s):
    1419-1424

    This paper proposes a new data compression algorithm for ambulatory ECG in which no distortion is introduced into the reconstructed signal, templates are constructed selectively from detected beats, and categorized ECG morphologies (templates) can be displayed when decoding the compressed data. The algorithm consists of subtracting a best-fit template from the detected beat with the aid of multi-template matching, first-differencing the resulting residuals, and modified Huffman coding. The algorithm was evaluated in terms of bit rate by applying it to ECG signals from the American Heart Association (AHA) database. The following features were observed. (1) The decompressed signal coincides completely with the original sampled ECG data. (2) The bit rate is approximately 800 bps at an appropriate threshold of 50-60 units (1 unit ≈ 2.4 µV) for the template matching; this is almost the same as that of direct compression (encoding the first-differenced original signal). (3) The decompressed templates make it easy to classify beats into normal and abnormal types, and this can be done without fully decompressing the ECG signal.
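
    A hedged sketch of the lossless pipeline described above: subtract the best-matching template from a detected beat, then first-difference the residual. Entropy (modified Huffman) coding of the small residuals, omitted here, is where the compression comes from; the decoder reverses each step exactly, so reconstruction is distortion-free. The matching criterion and template values are simplified assumptions, not the paper's exact procedure.

```python
# Hedged sketch of template subtraction + first-difference residual coding.
# The residual stream would then be modified-Huffman coded; decoding reverses
# every step exactly, so the round trip is lossless.

def best_template(beat, templates):
    """Return the template minimizing the sum of absolute differences."""
    return min(templates, key=lambda t: sum(abs(b - v) for b, v in zip(beat, t)))

def encode(beat, templates):
    t = best_template(beat, templates)
    residual = [b - v for b, v in zip(beat, t)]
    # First difference of the residual: small, entropy-coder-friendly values.
    diff = [residual[0]] + [residual[i] - residual[i - 1]
                            for i in range(1, len(residual))]
    return t, diff

def decode(template, diff):
    residual, acc = [], 0
    for d in diff:          # cumulative sum undoes the first difference
        acc += d
        residual.append(acc)
    return [v + r for v, r in zip(template, residual)]

templates = [[0, 5, 20, 5, 0], [0, -3, -10, -3, 0]]  # illustrative beat shapes
beat = [1, 6, 22, 5, 0]
t, diff = encode(beat, templates)
assert decode(t, diff) == beat  # lossless round trip
```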

  • Optical Array Imaging System with Improved Focusing Function

    Osamu IKEDA  

     
    PAPER-Parallel/Multidimensional Signal Processing

      Vol:
    E76-A No:12
      Page(s):
    2108-2113

    In a previous article, an optical array imaging system was presented. In that system, a set of array data is first collected by repeatedly illuminating the object with laser light from each array element, detecting the reflected light as an interferogram, and extracting the reflected wave field by spatial heterodyne detection. An eigenvalue analysis is then applied to the data to derive the wave field that would backpropagate and focus at a single point on the object; here, an iterative algorithm is used which indicates that the object point may have the largest reflectivity. It was shown experimentally that single-point focusing was attained for objects having several such parts with almost the same reflectivities. A preliminary study by computer simulation, however, indicates that the probability of the wave focusing at multiple object points is not small enough, resulting in a degraded image with ghost components. In this paper, the array data within subaperture regions are selectively used to attain single-point focusing and to obtain a good image for any object. First, it is shown analytically that changing the dimension or center position of the aperture is effective in changing the eigenvector so that it attains single-point focusing. Then, a procedure for finding the optimum subapertures and a measure evaluating the degree of single-point focusing of the eigenvector are presented. The method is examined in detail using experimentally obtained array data, and the results show that it yields good images for any object without sacrificing image resolution. If the imaging system is compared to an automatic-focusing camera, it may be said that the additional processing greatly enhances the capability of automatic focusing.
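
    A hedged, minimal sketch of the eigenvector computation behind the focusing step. For a measured array-response matrix A, the dominant eigenvector of A^H A (found here by plain power iteration, a stand-in for the paper's iterative algorithm) gives transmit weights that concentrate energy on the strongest reflector, which is the single-point-focusing idea the paper builds on. The matrix values are synthetic, not measured data.

```python
# Hedged sketch: power iteration for the dominant eigenvector of A^H A,
# where A is a (synthetic) array-response matrix. The dominant eigenvector
# corresponds to illumination that focuses on the strongest reflector.

def matvec(M, v):
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def herm(M):
    """Conjugate (Hermitian) transpose."""
    return [[M[j][i].conjugate() for j in range(len(M))] for i in range(len(M[0]))]

def matmul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(len(Q)))
             for j in range(len(Q[0]))] for i in range(len(P))]

def dominant_eigenvector(M, iterations=200):
    """Power iteration; M is assumed square with a dominant eigenvalue."""
    v = [1.0 + 0j] * len(M)
    for _ in range(iterations):
        w = matvec(M, v)
        norm = sum(abs(x) ** 2 for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# Synthetic 2-element array response: strong coupling to one scattering
# point (column 0), weak coupling to the other (column 1).
A = [[3.0 + 0j, 0.2 + 0j],
     [2.9 + 0j, -0.1 + 0j]]
weights = dominant_eigenvector(matmul(herm(A), A))
print(weights)  # nearly all weight on the strong reflector's component
```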

  • Multiwave: A Wavelet-Based ECG Data Compression Algorithm

    Nitish V. THAKOR  Yi-chun SUN  Hervé RIX  Pere CAMINAL  

     
    PAPER

      Vol:
    E76-D No:12
      Page(s):
    1462-1469

    The MultiWave data compression algorithm is based on the multiresolution wavelet technique for decomposing electrocardiogram (ECG) signals into their coarse and successively more detailed components. At each successive resolution, or scale, the data are convolved with appropriate filters and alternate samples are discarded. This procedure yields a data compression rate that increases on a dyadic scale with successive wavelet resolutions. ECG signals recorded from patients with normal sinus rhythm, supraventricular tachycardia, and ventricular tachycardia are analyzed. The data compression rates and percentage distortion levels at each resolution are obtained. The performance of the MultiWave algorithm is shown to be superior to that of another algorithm (the Turning Point algorithm) that also carries out data reduction on a dyadic scale.
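
    The dyadic growth of the compression rate can be sketched with one Haar analysis step (a stand-in for the paper's wavelet filters, which are not specified in the abstract): filtering and keeping alternate samples halves the data at every resolution, so keeping only the coarse branch after n levels retains 1/2^n of the samples. The sample values are illustrative.

```python
# Hedged sketch: Haar-style analysis step. Each level halves the coarse
# branch, so the compression rate doubles per level -- the dyadic scaling
# the abstract describes.

def haar_step(signal):
    """Split a signal into coarse (pairwise averages) and detail (differences)."""
    coarse = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return coarse, detail

ecg = [float(x) for x in (4, 6, 10, 12, 8, 8, 2, 0)]  # toy samples
levels = []
current = ecg
for _ in range(3):
    current, detail = haar_step(current)
    levels.append((current, detail))
    print(f"coarse length {len(current)}: compression rate {len(ecg) // len(current)}:1")
```

    Discarding the detail branch at each level is what introduces the distortion the paper measures against the compression rate.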

  • A Model for Explaining a Phenomenon in Creative Concept Formation

    Koichi HORI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E76-D No:12
      Page(s):
    1521-1527

    This paper gives a model to explain one phenomenon found in the process of creative concept formation: people often get trapped in a state where the mental world remains nebulous and then sometimes make a sudden jump to a new concept. This phenomenon has been explained qualitatively, mainly by philosophers, but there have been no models explaining it quantitatively. Such a model is necessary in the new research field studying systems that aid human creative activities. So far, work on creation aid has lacked a theoretical background, and systems have been built only by trial and error. The model given in this paper explains some aspects of the phenomena found in creative activities and gives some suggestions for future systems for aiding creative concept formation.

  • Load Balancing Based on Load Coherence between Continuous Images for an Object-Space Parallel Ray-Tracing System

    Hiroaki KOBAYASHI  Hideyuki KUBOTA  Susumu HORIGUCHI  Tadao NAKAMURA  

     
    PAPER-Computer Systems

      Vol:
    E76-D No:12
      Page(s):
    1490-1499

    The ray-tracing algorithm can synthesize very realistic images. However, ray tracing is very time-consuming. To address this problem, a load balancing strategy that uses temporal coherence between successive images of an animation is presented for balancing computational loads among the processing elements of a parallel processing system. Our parallel processing model is based on a space-subdivision method for the ray-tracing algorithm: the subdivided object space is distributed among the processing elements of the parallel system. To clarify the effectiveness of the load balancing strategy, we examine the system performance by computer simulation.

  • A Translation Method from Natural Language Specifications of Communication Protocols into Algebraic Specifications Using Contextual Dependencies

    Yasunori ISHIHARA  Hiroyuki SEKI  Tadao KASAMI  Jun SHIMABUKURO  Kazuhiko OKAWA  

     
    PAPER-Automaton, Language and Theory of Computing

      Vol:
    E76-D No:12
      Page(s):
    1479-1489

    This paper presents a method of translating natural language specifications of communication protocols into algebraic specifications. Such a natural language specification specifies action sequences performed by the protocol machine (program). Usually, a sentence implicitly specifies the state of the protocol machine at which the described actions must be performed. The authors propose a method of analyzing the implicitly specified states of the protocol machine taking the OSI session protocol specification (265 sentences) as an example. The method uses the following properties: (a) syntactic properties of a natural language (English in this paper); (b) syntactic properties introduced by the target algebraic specifications, e.g., type constraints; (c) properties specific to the target domain, e.g., properties of data types. This paper also shows the result of applying this method to the main part of the OSI session protocol specification (29 paragraphs, 98 sentences). For 95 sentences, the translation system uniquely determines the states specified implicitly by these sentences, using only (a) and (b) described above. By using (c) in addition, each implicitly specified state in the remaining three sentences is uniquely determined.

  • A Hybrid-ARQ Protocol with Adaptive Rate Error Control

    Hui ZHAO  Toru SATO  Iwane KIMURA  

     
    PAPER-Information Theory and Coding Theory

      Vol:
    E76-A No:12
      Page(s):
    2095-2101

    This paper presents an adaptive-rate error control scheme for digital communication over time-varying channels. A cyclic code with majority-logic decoding is used in a cascaded way as an inner code to create a simple and powerful hybrid-ARQ error control scheme. The inner code is used only for error correction, while the outer code is used for both error correction and error detection. When an error is detected, retransmission is requested. Unsuccessful packets are not discarded as in conventional schemes, but are combined with their retransmitted copies. Approximations for the throughput efficiency and the undetectable error probability are given. High reliability coupled with a simple high-speed implementation makes the scheme suitable for high-data-rate error control over both stationary and nonstationary channels. Adaptive error control becomes the best solution for time-varying channels when the optimum code is selected according to the actual channel conditions. The main feature of this system is that the basic structure of the encoder and decoder need not be modified as the error-correction capability of the code increases. Results of a comparative analysis show that the proposed scheme outperforms other similar ARQ protocols.
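
    The packet-combining idea above can be illustrated with a toy example (this is not the paper's majority-logic decoder, just the combining principle): instead of discarding a packet that fails its error check, the receiver keeps it and takes a bitwise majority vote over all received copies, so each retransmission raises the chance that the combined packet is correct. The packet and error positions are invented for illustration.

```python
# Hedged sketch: bitwise majority combining of retransmitted packet copies.
# Errors that land in distinct positions across copies are voted out.

def majority_combine(copies):
    """Bitwise majority vote over an odd number of received packet copies."""
    n = len(copies)
    return [1 if sum(bits) * 2 > n else 0 for bits in zip(*copies)]

sent = [1, 0, 1, 1, 0, 0, 1, 0]
# Three noisy receptions, each failing the error check for a different bit.
copies = [
    [1, 0, 1, 0, 0, 0, 1, 0],   # bit 3 flipped
    [1, 1, 1, 1, 0, 0, 1, 0],   # bit 1 flipped
    [1, 0, 1, 1, 0, 0, 0, 0],   # bit 6 flipped
]
assert majority_combine(copies) == sent  # combined packet is error-free
```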

  • Computing the Expected Maximum Number of Vertex-Disjoint s-t Paths in a Probabilistic Basically Series-Parallel Digraph

    Peng CHENG  Shigeru MASUYAMA  

     
    PAPER-Graphs, Networks and Matroids

      Vol:
    E76-A No:12
      Page(s):
    2089-2094

    In this paper, we propose a polynomial time algorithm for computing the expected maximum number of vertex-disjoint s-t paths in a probabilistic basically series-parallel directed graph and a probabilistic series-parallel undirected graph with distinguished source s and sink t (s ≠ t), where each edge has a mutually independent failure probability and each vertex is assumed to be failure-free.

  • A Fuzzy Inference LSI for an Automotive Control

    Yoshihisa HARATA  Norikazu OHTA  Kiyoharu HAYAKAWA  Takashi SHIGEMATSU  Yasushi KITA  

     
    PAPER

      Vol:
    E76-C No:12
      Page(s):
    1780-1787

    Fuzzy control is suitable for automotive control because it achieves controllability as good as that of a human operator. However, since automotive control requires millisecond response and learning control, and an in-vehicle fuzzy system must use few components (a built-in type), a custom fuzzy inference LSI is needed. We therefore identify the requirements of a fuzzy inference LSI suitable for automotive control and fabricate such an LSI using a 1.5 µm CMOS process. The fabricated LSI is designed for use in various automotive control experiments such as engine control, cruise control, brake control, and steering control. The number of input variables is six, the number of output variables is two, the maximum number of production rules is 256, and the inference time is 63 microseconds (with six inputs, two outputs, and 256 rules). Its features are high-speed inference, a built-in type, learning-control ability, and a memory structure separated into a rule memory and a membership-function memory. A fuzzy control system can be implemented by adding only two devices: the fuzzy LSI and an EPROM. The fuzzy LSI was applied to a rough-road durability test aimed at automatic driving equivalent to human driver operation. In the test, fuzzy control and linear control were compared in terms of compensation steering. Linear steering control had a high rate of compensation steering of less than thirty degrees, whereas the accumulated steering compensation of less than twenty degrees under fuzzy control was about one third of that under linear control; the fuzzy steering control made the same steering compensations as a human driver. The fuzzy LSI fabricated for these experiments is too large (10.7 mm × 10.9 mm) to adopt as an automotive part. Therefore, we studied a smaller fuzzy LSI obtained by limiting functions, changing parallel processing into sequential processing, and thinning out the memory data of the input membership functions. For this chip, the number of input variables is four, the number of output variables is two, the maximum number of production rules is 160, and the expected inference time is 140 microseconds (in the worst case). The resulting chip is small enough (4.8 mm × 4.8 mm) for automotive applications. Since the chip contains all the memories needed to execute fuzzy inference, it can be built into a microprocessor as a fuzzy inference co-processor without any other circuits.
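
    A hedged, minimal sketch of the kind of fuzzy inference such an LSI executes in hardware: triangular membership functions, rule firing strengths, and weighted-singleton defuzzification. The membership shapes, rule values, and the steering-error framing are illustrative assumptions, not the chip's actual rule base.

```python
# Hedged sketch of fuzzy inference with triangular membership functions and
# weighted-singleton (centroid-style) defuzzification. Rule values are toy
# numbers, not the LSI's rule memory contents.

def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside (a, c)."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

# Two toy rules on one input (e.g. a steering error) and one output correction.
# Each rule: (input membership params, output singleton value).
rules = [
    ((0.0, 0.0, 5.0), -10.0),   # "error small  -> small left correction"
    ((0.0, 5.0, 10.0), 20.0),   # "error medium -> right correction"
]

def infer(x):
    num = den = 0.0
    for (a, b, c), out in rules:
        w = tri(x, a, b, c)   # firing strength of this rule
        num += w * out        # weighted-singleton defuzzification
        den += w
    return num / den if den else 0.0

print(infer(2.5))  # both rules fire equally; output blends -10 and 20
```

    In hardware, the rule memory and membership-function memory the abstract mentions would hold exactly these two tables: the `rules` list and the sampled shapes behind `tri`.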

  • Technological Trends and Key Technologies in Intelligent Vehicles

    Takao SASAYAMA  

     
    INVITED PAPER

      Vol:
    E76-C No:12
      Page(s):
    1717-1726

    The technical trends of intelligent vehicles are discussed based on the progress of microelectronics, sensing, and information processing technology. The concept of intelligent vehicles emerged when the installation of computers on vehicles became possible in the 1970s. The functions of computerized cars increased gradually with this technological progress, responding to the demands of society. The first issues to be addressed with the capability of electronic systems were environmental and energy-resource problems, and the R & D work toward these goals created many sophisticated computer control systems. Moreover, this work established the basis of intelligent vehicles containing various functions for drivability, safety, and information communications. In parallel, many kinds of information and communication technology became useful for solving automotive issues through infrastructure systems, and the United States, Europe, and Japan have started their own projects to realize such hierarchical management systems for traffic and vehicles. From the viewpoint of the vehicle itself, implementing computer and telecommunications functions is an indispensable direction for establishing clean, comfortable, convenient, efficient, and safe automobiles for the next century.

  • Speech Recognition of Isolated Digits Using Simultaneous Generative Histogram

    Yasuhisa HAYASHI  Akio OGIHARA  Kunio FUKUNAGA  

     
    LETTER

      Vol:
    E76-A No:12
      Page(s):
    2052-2054

    We propose a recognition method for HMMs using a simultaneous generative histogram. The proposed method uses the correlation between two features, which is expressed by a simultaneous generative histogram; the output probabilities of the integrated HMM are then conditioned on the codeword of the other feature. The proposed method is applied to isolated digit word recognition to confirm its validity.
