
Keyword Search Result

[Keyword] ERG (874 hits)

801-820 hits (of 874)

  • On the Twisted Markov Chain of Importance Sampling Simulation

    Kenji NAKAGAWA  

     
    PAPER-Stochastic Process/Learning

    Vol: E79-A No:9  Page(s): 1423-1428

    The importance sampling simulation technique has been exploited to obtain an accurate estimate for a very small probability which is not tractable by the ordinary Monte Carlo simulation. In this paper, we will investigate the simulation for a sample average of an output sequence from a Markov chain. The optimal simulation distribution will be characterized by the Kullback-Leibler divergence of Markov chains and geometric properties of the importance sampling simulation will be presented. As a result, an effective computation method for the optimal simulation distribution will be obtained.
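The twisted-distribution idea above — sample from a tilted distribution under which the rare event is typical, then reweight by the likelihood ratio — can be sketched for an i.i.d. toy case (not the paper's Markov-chain setting; the distributions and parameters below are illustrative):

```python
import random

def importance_sampling_estimate(n, threshold, p, q, trials=20000):
    """Estimate P(sum of n Bernoulli(p) draws >= threshold) by sampling
    from a tilted Bernoulli(q) and reweighting by the likelihood ratio."""
    total = 0.0
    for _ in range(trials):
        s = sum(1 for _ in range(n) if random.random() < q)
        if s >= threshold:
            # likelihood ratio p(x)/q(x) of the whole i.i.d. sequence
            total += (p / q) ** s * ((1 - p) / (1 - q)) ** (n - s)
    return total / trials
```

Choosing q so that the mean nq sits near the threshold makes the rare event common under the simulation distribution, which is the role the optimal twisted Markov chain plays in the paper's setting.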

  • Convergence Analysis of Quantizing Method with Correlated Gaussian Data

    Kiyoshi TAKAHASHI  Noriyoshi KUROYANAGI  Shinsaku MORI  

     
    PAPER

    Vol: E79-A No:8  Page(s): 1157-1165

    In this paper, the normalized least mean square (NLMS) algorithm based on clipping input samples with an arbitrary threshold level is studied. The convergence characteristics of these clipping algorithms with correlated data are presented. In the clipping algorithm, an input sample is used in the update only when it is greater than or equal to the threshold level; otherwise it is set to zero. The analysis shows that the gain constant that ensures convergence, the speed of convergence, and the misadjustment are all functions of the threshold level. Furthermore, an optimum threshold level is derived in terms of convergence speed under the condition of constant misadjustment.
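A minimal sketch of an NLMS update with the kind of input clipping described above (the threshold, filter order, and step size are illustrative, not the paper's values):

```python
import numpy as np

def clipped_nlms(x, d, order=8, mu=0.5, thr=0.1, eps=1e-8):
    """NLMS system identification where regressor entries with magnitude
    below `thr` are set to zero before the weight update."""
    w = np.zeros(order)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]                # regressor, most recent first
        e = d[n] - w @ u                        # a-priori error
        g = np.where(np.abs(u) >= thr, u, 0.0)  # clip small samples to zero
        w += mu * e * g / (g @ g + eps)         # normalized update
    return w
```

Because the clipped regressor g agrees with u on every retained entry, u·g = g·g, so each update still contracts the a-priori error by the factor (1 − mu) on that sample.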

  • Convergence Analysis of Processing Cost Reduction Method of NLMS Algorithm with Correlated Gaussian Data

    Kiyoshi TAKAHASHI  Noriyoshi KUROYANAGI  

     
    PAPER-Digital Signal Processing

    Vol: E79-A No:7  Page(s): 1044-1050

    Reduction of the complexity of the NLMS algorithm has received attention in the area of adaptive filtering. A processing cost reduction method, in which a component of the weight vector is updated only when the absolute value of the corresponding sample is greater than or equal to an arbitrary threshold level, has been proposed. The convergence analysis of the processing cost reduction method with white Gaussian data has been derived. However, a convergence analysis of this method with correlated Gaussian data, which is important for practical applications, has not been studied. In this paper, we derive the convergence characteristics of the processing cost reduction method with correlated Gaussian data. The analytical results show that the range of the gain constant that ensures convergence is independent of the correlation of the input samples, and that the misadjustment is likewise independent of this correlation. Moreover, the convergence rate is shown to be a function of the threshold level and the eigenvalues of the covariance matrix of the input samples, as well as the gain constant.

  • A Recognition Method of Facility Drawings and Street Maps Utilizing the Facility Management Database

    Chikahito NAKAJIMA  Toshihiro YAZAWA  

     
    PAPER-Document Recognition and Analysis

    Vol: E79-D No:5  Page(s): 555-560

    This paper proposes a new approach for automatically inputting handwritten Distribution Facility Drawings (DFD) and their maps into a computer by using the Facility Management Database (FMD). Our recognition method makes use of this external information for drawing/map recognition: it identifies each electric-pole symbol and support-cable symbol on the drawings simply by consulting the FMD. Other symbols, such as transformers and electric wires, can then be placed on the drawings automatically. For this positioning of graphic symbols, we present an automatic method of adjusting a symbol's position on the latest digital maps. When a contradiction remains unsolved due to an inconsistency between the contents of the DFD and the FMD, the system requests manual feedback from the operator. Furthermore, it uses the distribution network of the DFD to recognize the street lines on maps that have not yet been computerized. This approach can drastically reduce the cost of computerizing drawings and maps.

  • A Fast Block-Type Adaptive Filter Algorithm with Short Processing Delay

    Hector PEREZ-MEANA  Mariko NAKANO-MIYATAKE  Laura ORTIZ-BALBUENA  Alejandro MARTINEZ-GONZALEZ  Juan Carlos SANCHEZ-GARCIA  

     
    LETTER-Digital Signal Processing

    Vol: E79-A No:5  Page(s): 721-726

    This letter proposes a fast frequency domain adaptive filter (FADF) algorithm for applications in which large-order adaptive filters are required. The proposed FADF algorithm reduces the block delay of conventional FADF algorithms, allowing a more efficient selection of the fast Fourier transform (FFT) size. The proposed algorithm also provides faster convergence rates than conventional FBAF algorithms by using a near-optimum convergence factor derived via the FFT. Computer simulations using white and colored signals are given to show the desirable features of the proposed scheme.

  • Theoretical Analysis of Synergistic Effects Using Space Diversity Reception and Adaptive Equalization in Digital Radio Systems

    Kojiro ARAKI  Shozo KOMAKI  

     
    PAPER-Radio Communication

    Vol: E79-B No:4  Page(s): 569-577

    The synergistic effects obtained by adopting both space diversity reception and adaptive equalization play a very important role in circuit outage reduction. This paper quantitatively analyzes these synergistic effects when dispersive and flat fading occur simultaneously. Analytical results show that the synergistic effects are of the same magnitude as the adaptive equalizer improvement factor when only dispersive fading causes outage. The synergistic effects gradually disappear when noise is the predominant cause of outage.

  • Fundamental Aspects of ESD Phenomena and Its Measurement Techniques

    Masamitsu HONDA  

     
    INVITED PAPER

    Vol: E79-B No:4  Page(s): 457-461

    This paper clarifies fundamental aspects of both triboelectric processes and electrostatic discharge (ESD) phenomena in electronic systems. An ESD event can occur when a charged metal object (a steel-piped chair, for example) contacts or collides with another metal object at moderate speed. In a metal-metal ESD event, the metal objects act, for a very short time (some 100 ps, for example), as a radiating antenna that emanates impulsive electromagnetic fields of a single polarity into the surrounding space. In ESD under low-voltage (3 kV or less) conditions, the direction of electron movement at the spark gap is always unidirectional and fixed; the spark gap works as a momentary switch and also as a "diode." The dominant fields radiated from the metal objects are impulsive electric fields or impulsive magnetic fields, depending on the metal objects' electrical and geometric conditions. These impulsive electromagnetic fields penetrate electronic systems, causing electromagnetic interference (EMI) such as malfunctions or circuit upset. The difference between EMI actions in high-voltage ESD and low-voltage ESD is experimentally analyzed in terms of energy conversion/consumption. A series of experiments revealed that EMI actions due to metal-metal ESD are proportional neither to the charge voltage nor to the discharge current. In order to capture single-shot impulsive electromagnetic fields very close to the ESD point (the wave source), a short monopole antenna was devised as an ultra-broadband field sensor. For signal transmission between the short monopole antenna and the instrument (receiver), micro/millimeter-wave techniques were applied. A minimum transmission-line bandwidth of DC-18.5 GHz is required for time-domain measurements of low-voltage ESD.

  • Comparisons of Energy-Descent Optimization Algorithms for Maximum Clique Problems

    Nobuo FUNABIKI  Seishi NISHIKAWA  

     
    PAPER

    Vol: E79-A No:4  Page(s): 452-460

    A clique of a graph G(V,E) is a subset of V such that every pair of its vertices is connected by an edge in E. Finding a maximum clique of an arbitrary graph is a well-known NP-complete problem. Recently, several polynomial-time energy-descent optimization algorithms have been proposed for approximating the maximum clique problem; they seek a solution by minimizing an energy function representing the constraints and the goal function. In this paper, we propose the binary neural network as an efficient synchronous energy-descent optimization algorithm. Using two types of random graphs, we compare the performance of four promising energy-descent optimization algorithms. The simulation results show that RaCLIQUE, the modified Boltzmann machine algorithm, is the best asynchronous algorithm for random graphs, while the binary neural network is the best for k-random-cliques graphs.
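A hedged sketch of a binary-neuron energy descent for the clique problem (a generic scheme, not the paper's specific binary neural network or RaCLIQUE; the penalty weights A and B are illustrative):

```python
import numpy as np

def clique_energy_descent(adj, steps=100, A=2.0, B=1.0, seed=0):
    """Minimize E = A*(# non-edge pairs inside S) - B*|S| over binary
    neuron states v by greedy sequential flips; at convergence the
    selected vertices form a clique (not necessarily a maximum one)."""
    n = len(adj)
    v = np.random.default_rng(seed).integers(0, 2, n)
    nonedge = 1 - np.asarray(adj) - np.eye(n, dtype=int)  # 1 iff i != j, no edge
    for _ in range(steps):
        changed = False
        for i in range(n):
            # turning neuron i on lowers E iff B > A * (non-neighbors of i in S)
            new = 1 if B - A * (nonedge[i] @ v) > 0 else 0
            if new != v[i]:
                v[i], changed = new, True
        if not changed:
            break
    return [i for i in range(n) if v[i]]
```

With B/A < 1, a fixed point can only select a vertex whose non-neighbor count inside the selection is zero, so any converged state is a valid clique.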

  • Improvement of PECVD-SiNx for TFT Gate Insulator by Controlling Ion Bombardment Energy

    Yasuhiko KASAMA  Tadahiro OHMI  Koichi FUKUDA  Hirobumi FUKUI  Chisato IWASAKI  Shoichi ONO  

     
    PAPER-Device Issues

    Vol: E79-C No:3  Page(s): 398-406

    It has been revealed that ion bombardment energy and ion flux density play a critical role in the PECVD SiNx deposition process used in TFT-LCD production. The energy and flux density of the ions bombarding the substrate surface can be extracted from the waveform of the RF voltage applied to an electrode. Using this method, we investigated the quality of SiNx films formed in conventional parallel-plate PECVD equipment, with N2 + H2 or N2 + Ar employed as the carrier gas for the source gas (SiH4 + NH3). We define the normalized ion flux density as the ion flux density divided by the number of deposited SiNx molecules; it must be increased to obtain a high-quality SiNx film, while the ion energy is kept low enough not to damage the film surface. This technique has made it possible to reliably form a SiNx film (2500 Å) featuring a dielectric breakdown field intensity of 8.5 MV/cm at 250°C on a glass substrate with 1000 Å Cr gate interconnects having a vertical step structure. One of the important factors for improving the quality of SiNx deposited by PECVD is thus to increase the ion flux density while keeping the ion bombardment energy low enough to protect the growing surface against damage. Using this technique, an inverse-staggered TFT array featuring a field-effect mobility of 0.96 cm2/Vs has been demonstrated, in which the gate insulator SiNx, non-doped a-Si:H, and a-Si:H(n+) layers were formed continuously at the identical substrate temperature of 250°C.

  • Effects of 50 to 200-keV Electrons by BEASTLI Method on Semiconductor Devices

    Fumio MIZUNO  Satoru YAMADA  Tsunao ONO  

     
    PAPER-Device Issues

    Vol: E79-C No:3  Page(s): 392-397

    We studied the effects of 50-200-keV electrons on semiconductor devices using the BEASTLI (backscattered electron assisting LSI inspection) method. When irradiating semiconductor devices with such high-energy electrons, two phenomena must be noted: surface charging and device damage. In our study of surface charging, we found that a net positive charge was formed on the device surface. This positive surface charge does not seriously affect observation, so wafers can be inspected without problems. The positive surface charging probably arises because most incident electrons penetrate the device layer and reach the conducting substrate of the semiconductor device. For device damage, we studied MOS devices, which are sensitive to electron-beam irradiation. By applying a 400°C anneal to electron-beam-irradiated MOS devices, we could restore their initial characteristics. However, in order to recover hot-carrier degradation due to neutral traps, we had to apply a 900°C anneal. Thus, BEASTLI can be used successfully by providing an appropriate anneal to the electron-beam-irradiated MOS devices.

  • A Proposal of Five-Degree-of-Freedom 3D Nonverbal Voice Interface

    Tatsuhiro YONEKURA  Rikako NARISAWA  Yoshiki WATANABE  

     
    PAPER-Human Communications and Ergonomics

    Vol: E79-A No:2  Page(s): 242-247

    This paper proposes a new three-dimensional pointing device that emphasizes user friendliness and freedom from cable clutter. The proposed method obtains five degrees of freedom through the medium of nonverbal human voice: the spatial direction of the sound source, the type of the voice phoneme, and the tone of the voice phoneme. The input voice is analyzed with respect to these factors, and the results are mapped to effects predefined for the human interface. In this paper, the estimated spatial direction provides three degrees of freedom, used for three-dimensional movement of the virtual object, while the type and tone of the voice phoneme provide the remaining two degrees of freedom. Since producing nonverbal voice is an everyday activity, and the intonation of the voice can be easily and intentionally controlled by human vocal ability, the proposed scheme offers a new three-dimensional spatial interaction medium. In this sense, this paper realizes a cost-effective and handy nonverbal interface without any wearable devices, which might cause physical and psychological fatigue. Using a prototype, the authors evaluate the performance of the scheme from both static and dynamic points of view, show some advantages in look and feel, and then discuss possible applications of the proposed scheme.

  • Edge Detection Using Neural Network for Non-uniformly Illuminated Images

    Md. Shoaib BHUIYAN  Hiroshi MATSUO  Akira IWATA  Hideo FUJIMOTO  Makoto SATOH  

     
    PAPER-Bio-Cybernetics and Neurocomputing

    Vol: E79-D No:2  Page(s): 150-160

    Existing edge detection methods provide unsatisfactory results when contrast changes largely within an image due to non-uniform illumination. Koch et al. developed an energy function based upon the Hopfield neural network, whose coefficients were fixed by trial and error and remained constant for the entire image, irrespective of differences in intensity level. This paper presents an improved edge detection method for non-uniformly illuminated images. We propose that the energy function coefficients for an image with inconsistent illumination should not remain fixed, but rather should vary as a second-order function of the intensity differences between pixels, and we actually use a schedule of changing coefficients. The results, compared with those of existing methods, suggest a better strategy for edge detection that depends upon both the dynamic range of the original image pixel values and their contrast.

  • On Multiple-Valued Separable Unordered Codes

    Yasunori NAGATA  Masao MUKAIDONO  

     
    PAPER-Algorithm and Computational Complexity

    Vol: E79-D No:2  Page(s): 99-106

    In this paper, a new encoding/decoding scheme for multiple-valued separable balanced codes is presented. These codes have 2m information digits and m(R - 2) check digits in radices R ≥ 4, and 2m - 1 information digits and m + 1 check digits in radix R = 3, where the code length is n = Rm. For practical code lengths and radices, it is shown that the presented codes are relatively efficient in comparison with multiple-valued Berger codes, which are known as optimal unordered codes. The optimality of multiple-valued Berger codes is also discussed.
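The separable/unordered idea can be illustrated with a plain multiple-valued Berger code, in which the check part encodes the complement of the information digit sum in radix R (the textbook construction the abstract compares against, not the paper's new scheme):

```python
import math
from itertools import product

def berger_encode(info, R):
    """Append check digits holding (max digit sum - actual digit sum),
    written in radix R.  The resulting code is unordered: no codeword
    digit-wise dominates another."""
    m = len(info)
    max_sum = m * (R - 1)
    k = max(1, math.ceil(math.log(max_sum + 1, R)))  # number of check digits
    val = max_sum - sum(info)
    check = []
    for _ in range(k):
        check.append(val % R)
        val //= R
    return list(info) + check[::-1]
```

Why it is unordered: if one information part dominates another digit-wise, its digit sum is strictly larger, so its check value is strictly smaller and therefore cannot dominate digit-wise.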

  • An Efficient Clustering Algorithm for Region Merging

    Takio KURITA  

     
    PAPER

    Vol: E78-D No:12  Page(s): 1546-1551

    This paper proposes an efficient clustering algorithm for region merging. To speed up the search for the best pair of regions to be merged into one region, the dissimilarity values of all possible pairs of regions are stored in a heap; the best pair can then be found at the root node of the binary tree corresponding to the heap. Since only adjacent pairs of regions can be merged in image segmentation, these neighboring-relation constraints are represented by sorted linked lists. We can thus reduce the computation needed to update the dissimilarity values and neighboring relations that are affected by the merging of the best pair. The proposed algorithm is applied to the segmentation of a monochrome image and range images.
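The heap-driven merging loop can be sketched as follows (scalar region means and a lazy stale-entry policy stand in for the paper's dissimilarity measure and linked-list bookkeeping; all names are illustrative):

```python
import heapq

def merge_regions(values, adjacency, stop):
    """Merge the adjacent pair with the smallest mean difference until
    every remaining pair differs by more than `stop`.  Stale heap
    entries are detected on pop and re-pushed with fresh values."""
    parent = list(range(len(values)))
    size = [1] * len(values)
    mean = list(values)

    def find(i):                                 # union-find root lookup
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    heap = [(abs(mean[a] - mean[b]), a, b) for a, b in adjacency]
    heapq.heapify(heap)
    while heap:
        d, a, b = heapq.heappop(heap)
        ra, rb = find(a), find(b)
        if ra == rb:
            continue                             # pair already merged away
        cur = abs(mean[ra] - mean[rb])
        if cur != d:
            heapq.heappush(heap, (cur, ra, rb))  # stale entry: refresh it
            continue
        if d > stop:
            break
        total = size[ra] + size[rb]              # merge rb into ra
        mean[ra] = (mean[ra] * size[ra] + mean[rb] * size[rb]) / total
        size[ra] = total
        parent[rb] = ra
    return sorted({find(i) for i in range(len(values))})
```

Each pop is O(log n), so the best pair is always found at the heap root, which is the speed-up the abstract describes.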

  • Spatial Profile of Blood Velocity Reconstructed from Telemetered Sonogram in Exercising Man

    Jufang HE  Yohsuke KINOUCHI  Hisao YAMAGUCHI  Hiroshi MIYAMOTO  

     
    PAPER

    Vol: E78-A No:12  Page(s): 1669-1676

    A continuous-wave ultrasonic Doppler system using wide-field ultrasound transducers was applied to telemeter blood velocity from the carotid artery of exercising subjects. The velocity spectrogram was obtained by Hanning-windowed fast Fourier transformation of the telemetered data. Distortion caused by a high-pass filter and the transducers in the telemetry system is discussed in the paper. As the maximum Reynolds number in our experiment was 1478, which is smaller than the critical level of 2000, the blood flow should be laminar. Spatial velocity profiles were then reconstructed from the velocity spectrogram. In this paper, we define a converging index Q of the velocity spectrum to measure the bluntness of the spatial velocity distribution across the blood vessel: the greater the Q, the blunter the velocity profile. Simulation results for spatial velocity distributions of theoretical parabolic flow and Gaussian-distribution spectra with varied Q values showed that the cut-off effect of a high-pass filter with cut-off frequency fc = 200 Hz in our system can be ignored when the axial velocity is larger than 0.30 m/s and Q is greater than 2.0. Our experimental results, in contrast to those obtained from phantom systems by us and by Hein and O'Brien, indicate that the distribution of blood velocity is much blunter than previously thought. The Q index exceeded 10 during systole, whereas it is 0.5 in parabolic flow. The peak of the Q index lagged behind that of the axial blood velocity by approximately 0.02 s. This phase delay might be due to the time needed for the red blood cells to form the non-homogeneous distribution.

  • Disparity Selection in Binocular Pursuit

    Atsuko MAKI  Tomas UHLIN  

     
    PAPER

    Vol: E78-D No:12  Page(s): 1591-1597

    This paper presents a technique for disparity selection in the context of binocular pursuit. For vergence control in binocular pursuit, a crucial problem is to find the disparity that corresponds to the target among the multiple disparities generally observed in a scene. To solve this selection problem, we propose an approach based on histogramming the disparities obtained in the scene, using an extended phase-based disparity estimation algorithm. The idea is to slice the scene using the disparity histogram so that only the target remains. The slice is chosen around a peak in the histogram using a prediction of the target disparity and the target location obtained by back projection. Tracking the peak provides robustness against other, possibly dominant, objects in the scene. The approach is investigated through experiments and shown to work appropriately.
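The slice-by-histogram-peak idea can be sketched as below (the bin width and peak rule are illustrative; the phase-based estimator is assumed to have already produced the per-pixel disparity samples):

```python
import numpy as np

def select_target_disparity(disparities, predicted, bin_width=0.5):
    """Histogram all disparity estimates in the scene and return the
    center of the local-maximum bin closest to the predicted disparity."""
    lo, hi = min(disparities), max(disparities)
    edges = np.arange(lo, hi + bin_width, bin_width)
    hist, edges = np.histogram(disparities, bins=edges)
    centers = (edges[:-1] + edges[1:]) / 2
    # non-empty bins that are local maxima of the histogram
    peaks = [i for i, h in enumerate(hist)
             if h > 0
             and (i == 0 or h >= hist[i - 1])
             and (i == len(hist) - 1 or h >= hist[i + 1])]
    return centers[min(peaks, key=lambda i: abs(centers[i] - predicted))]
```

Pixels whose disparity falls inside the selected bin would then form the "slice" containing the target, even when another object contributes a taller histogram peak elsewhere.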

  • Control of Magnetic Properties and Microstructure of Thin Film Recording Media under Ultraclean Sputtering Process

    Takehito SHIMATSU  Migaku TAKAHASHI  

     
    PAPER

    Vol: E78-C No:11  Page(s): 1550-1556

    The ultraclean sputtering process (UC-process) was newly introduced into the fabrication of Co62.5Ni30Cr7.5 and Co85.5Cr10.5Ta4 thin film media to establish a new concept in controlling microstructure. The UC-process enables the realization of a high coercive force Hc of up to 2.7-3 kOe in both CoNiCr and CoCrTa media (15/50 nm magnetic/Cr thicknesses) without a decrease in saturation magnetization. Purification of the atmosphere during sputtering and removal of the adsorbed oxygen impurity on the substrate surface play important roles in obtaining high Hc with the UC-process. This high Hc is mainly due to the realization of a large magnetocrystalline anisotropy field of the grains, Hkgrain, and low intergranular exchange coupling. The UC-process achieves adequate separation of grains by segregated grain boundaries even in media with a Cr thickness as thin as 2.5 nm, and enables grain size reduction without a marked increase in intergranular exchange coupling. In these media, the reduction of grain size is most effective for improving the readback signal to media noise ratio S/Nm. In media with grains sufficiently separated by segregated grain boundaries fabricated by the UC-process, control of grain size reduction and a further increase in the Hc/Hkgrain value through a decrease in intergranular magnetostatic coupling are required to obtain a higher S/Nm value.

  • Deposition of Ba Ferrite Films for Perpendicular Magnetic Recording Media Using Mixed Sputtering Gas of Xe, Ar and O2

    Nobuhiro MATSUSHITA  Kenji NOMA  Shigeki NAKAGAWA  Masahiko NAOE  

     
    PAPER

    Vol: E78-C No:11  Page(s): 1562-1566

    Ba ferrite films were deposited epitaxially on a ZnO underlayer from targets with a composition of BaO-6.5Fe2O3 at a substrate temperature of 600°C using a facing-targets sputtering apparatus. A mixture of Ar and Xe at 0.18 Pa and O2 at 0.02 Pa was used as the sputtering gas, and the dependences of the crystallographic and magnetic characteristics on the partial Xe pressure PXe (0.0-0.18 Pa) were investigated. Films deposited at various PXe were composed of BaM ferrite and spinel crystallites, and a minimum centerline average roughness Ra of 8.3 nm was obtained at a PXe of 0.10 Pa. Since the saturation magnetization 4πMs of 5.1 kG and the perpendicular anisotropy constant Ku1 of 4.23 × 10^5 J/m^3 were larger than those of bulk BaM ferrite, 4.8 kG and 3.30 × 10^5 J/m^3 respectively, these films appear promising for use as perpendicular recording media.

  • Unsupervised Speaker Adaptation Using All-Phoneme Ergodic Hidden Markov Network

    Yasunage MIYAZAWA  Jun-ichi TAKAMI  Shigeki SAGAYAMA  Shoichi MATSUNAGA  

     
    PAPER-Speech Processing and Acoustics

    Vol: E78-D No:8  Page(s): 1044-1050

    This paper proposes an unsupervised speaker adaptation method using an "all-phoneme ergodic Hidden Markov Network" that combines allophonic (context-dependent phone) acoustic models with stochastic language constraints. A Hidden Markov Network (HMnet) for allophone modeling and allophonic bigram probabilities derived from a large text database are combined to yield a single large ergodic HMM that represents arbitrary speech signals in a particular language, so that the model parameters can be re-estimated from text-unknown speech samples with the Baum-Welch algorithm. When combined with the Vector Field Smoothing (VFS) technique, unsupervised speaker adaptation can be performed effectively. Experimentally, this method gave better performance than our previous unsupervised adaptation method, which used conventional phonetic HMMs and phoneme bigram probabilities, especially when the amount of training data was small.

  • Performance Improvement of Variable Stepsize NLMS

    Jirasak TANPREEYACHAYA  Ichi TAKUMI  Masayasu HATA  

     
    PAPER

    Vol: E78-A No:8  Page(s): 905-914

    Improvement of the convergence characteristics of the NLMS algorithm has received attention in the area of adaptive filtering. A new variable stepsize NLMS method is proposed, in which the stepsize is updated optimally by using the variances of the measured error signal and the estimated noise. The optimal control equation for the stepsize has been derived from a convergence characteristic approximation. A new condition to judge convergence is introduced in this paper to ensure the fastest initial convergence speed by providing precise timing for starting the noise-level estimation. Furthermore, some adaptive smoothing devices have been added to the ADF to overcome the saturation of the identification error caused by random deviations. Simulations show that the initial convergence speed and the identification error in the precise identification mode are improved significantly by more precise adjustment of the stepsize, without an increase in computational cost; the results represent the best performance reported to date. The variable stepsize NLMS-ADF also remains effective in severe conditions, such as noisy or rapidly changing environments.
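The stepsize-control idea — shrink the step as the smoothed error power approaches the noise floor — can be sketched as follows (the smoothing constant and the assumption of a known noise variance are simplifications, not the paper's optimal control equation or convergence-judging condition):

```python
import numpy as np

def vs_nlms(x, d, order=8, noise_var=1e-4, alpha=0.99, eps=1e-8):
    """NLMS whose stepsize decays from about 1 toward 0 as the smoothed
    error power falls to the (assumed known) observation-noise level."""
    w = np.zeros(order)
    err_pow = 1.0                          # smoothed error-power estimate
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]
        e = d[n] - w @ u
        err_pow = alpha * err_pow + (1 - alpha) * e * e
        # large step while error power dwarfs the noise, small step near it
        mu = max(0.0, 1.0 - noise_var / (err_pow + eps))
        w += mu * e * u / (u @ u + eps)
    return w
```

A large stepsize early gives fast initial convergence, while the shrinking stepsize near the noise floor keeps the steady-state misadjustment small, which is the trade-off the variable-stepsize scheme is designed to resolve.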
