Futoshi FURUTA Kazuo SAITOH Akira YOSHIDA Hideo SUZUKI
We have designed a superconductor-semiconductor hybrid analog-to-digital (A/D) converter and experimentally evaluated its performance at sampling frequencies up to 18.6 GHz. The A/D converter consists of a superconductor front-end circuit and a semiconductor back-end circuit. The front-end circuit includes a sigma-delta modulator and an interface circuit that transmits the data signal to the semiconductor back-end circuit, which performs decimation filtering. The design of the modulator was modified to reduce the effects of integrator leak and thermal noise on the signal-to-noise ratio (SNR). Using the improved modulator design, we achieved a bit accuracy close to the ideal value. The hybrid architecture enabled us to reduce the integration scale of the front-end circuit to fewer than 500 junctions. This simplicity makes a circuit based on a high-Tc superconductor feasible, as well as one based on a low-Tc superconductor. The experimental results show that the hybrid A/D converter operated correctly and that the SNR was 84.8 dB (a bit accuracy of about 13.8 bits) at a bandwidth of 9.1 MHz. This converter achieves the highest performance of all sigma-delta A/D converters.
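As a numerical aside (ours, not from the paper), the quoted SNR and bit accuracy are related by the standard ideal-quantizer formula ENOB = (SNR − 1.76)/6.02:

```python
def enob_from_snr(snr_db: float) -> float:
    """Effective number of bits for an ideal quantizer: SNR = 6.02*ENOB + 1.76 dB."""
    return (snr_db - 1.76) / 6.02

# The reported 84.8 dB SNR corresponds to roughly 13.8 effective bits,
# matching the bit accuracy stated in the abstract.
print(round(enob_from_snr(84.8), 1))  # → 13.8
```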
In this paper, a proactive data filtering (PDF) algorithm is proposed for data aggregation (or data fusion) in wireless sensor networks. The objective of the algorithm is to further reduce the energy consumed when sensor nodes perform data aggregation. In many applications, the sensor field is overwhelmed by unnecessary and redundant sensory information when the sink node disseminates a query throughout the field. To reduce energy consumption, our scheme employs intelligent decision logic in each sensor node that delays or deactivates the transmission of its response. A performance evaluation shows that data aggregation with PDF significantly improves energy efficiency.
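The abstract does not specify the decision logic; one hypothetical PDF-style rule (our illustration only) is to suppress a node's response when an already-overheard neighbor reading makes it redundant:

```python
def should_transmit(own_reading, overheard_readings, threshold=1.0):
    """Hypothetical suppression rule: transmit only if no overheard neighbor
    reading is within `threshold` of our own (otherwise our report is
    redundant and the transmission energy can be saved)."""
    return all(abs(own_reading - r) > threshold for r in overheard_readings)

# Node 2's reading of 20.4 is redundant given node 1 already reported 20.1.
print(should_transmit(20.4, [20.1]))        # → False
print(should_transmit(25.0, [20.1, 20.4]))  # → True
```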
In this comment we point out that the mapping from carry-propagation adders to carry-save adders in the context of shift-and-add multiplication is inconsistent. Based on this, we show that the implementation in Ref. [1] does not achieve any complexity reduction in practice.
Areeyata SRIPETCH Poompat SAENGUDOMLERT
In a power grid used to distribute electricity, optical fibers can be inserted inside overhead ground wires to form an optical network infrastructure for data communications. Dense wavelength division multiplexing (DWDM)-based optical networks are a promising approach to a scalable backbone network for power grids. This paper proposes a complete optimization procedure for optical network design based on an existing power grid. We design the network as a subgraph of the power grid and divide the network topology into two layers: backbone and access networks. The design procedure includes physical topology design, routing and wavelength assignment (RWA), and optical amplifier placement. We formulate the topology design problem in two steps: selecting the concentrator nodes and their member nodes, and finding the connections among concentrators subject to a two-connectivity constraint on the backbone topology. Selection and connection of concentrators are done using integer linear programming (ILP). The RWA and optical amplifier placement problems are solved together since they are closely related. Since the ILP for these two problems becomes intractable as network size grows, we propose a simulated annealing approach, choosing a neighborhood structure based on path-switching operations over the k shortest paths of each source-destination pair. The optimal number of optical amplifiers is found by local search among these neighbors. We present numerical results for several randomly generated power grid topologies.
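To make the path-switching neighborhood concrete, the following sketch (ours, not the paper's) shows a generic simulated annealing loop in which a neighbor re-routes one source-destination pair onto another of its k shortest paths; the path lists and cost function (e.g., amplifier count after RWA) are hypothetical placeholders:

```python
import math
import random

def anneal(initial_state, k_paths, cost, t0=1.0, alpha=0.95, iters=2000, seed=0):
    """Simulated annealing with a path-switching neighborhood: a neighbor
    re-assigns one source-destination pair to another of its k shortest
    paths; cost() is the problem-specific objective."""
    rng = random.Random(seed)
    state, best = dict(initial_state), dict(initial_state)
    t = t0
    for _ in range(iters):
        sd = rng.choice(list(state))
        neighbor = dict(state)
        neighbor[sd] = rng.randrange(len(k_paths[sd]))  # switch to another path
        delta = cost(neighbor) - cost(state)
        # Accept improvements always; accept worsenings with Boltzmann probability.
        if delta < 0 or rng.random() < math.exp(-delta / max(t, 1e-9)):
            state = neighbor
            if cost(state) < cost(best):
                best = dict(state)
        t *= alpha  # geometric cooling schedule
    return best

# Toy instance: each pair has 3 candidate paths with the given costs.
k_paths = {("a", "b"): [3, 1, 2], ("c", "d"): [2, 5, 1]}
cost = lambda s: sum(k_paths[sd][i] for sd, i in s.items())
best = anneal({sd: 0 for sd in k_paths}, k_paths, cost)
print(cost(best))  # → 2
```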
Hideyuki ICHIHARA Tomoyuki SAIKI Tomoo INOUE
Test compression/decompression schemes that reduce the test application time and memory requirements of an LSI tester have been proposed. In such schemes, the coding algorithm is tailored to the given test data so that it can compress that data highly. However, these methods have a drawback: the tailored coding algorithm is ineffective for test data other than the data it was tailored to. In this paper, we introduce an embedded decompressor that is reconfigurable according to the coding algorithm and the given test data. Its reconfigurability overcomes this drawback of conventional decompressors while keeping a high compression ratio. Moreover, we propose an architecture of reconfigurable decompressors for four variable-length codings. In the proposed architecture, the functions common to the four codings are implemented as fixed (non-reconfigurable) components so as to reduce the configuration data, which is stored on an ATE and sent to a CUT. Experimental results show that (1) the configuration data size becomes reasonably small by reducing the configurable part of the decompressor, (2) the reconfigurable decompressor is effective for SoC testing with respect to test data size, and (3) it can achieve optimal compression of the test data by Huffman coding.
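As background (our sketch, not the paper's implementation), Huffman coding tailors a variable-length code book to the symbol frequencies of the given test data, which is why it achieves optimal compression among such codes:

```python
import heapq
from collections import Counter

def huffman_code(data: bytes) -> dict:
    """Build a Huffman code book (symbol -> bit string) from symbol frequencies."""
    # Heap entries carry a unique id so ties never compare the dict payloads.
    heap = [[freq, i, {sym: ""}] for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], next_id, merged])
        next_id += 1
    return heap[0][2]

# Hypothetical heavily skewed test data: frequent symbols get short codes.
data = b"\x00" * 90 + b"\xff" * 8 + b"\x0f" * 2
book = huffman_code(data)
bits = sum(len(book[s]) for s in data)
print(bits, "bits vs", 8 * len(data), "bits uncompressed")  # → 110 bits vs 800 bits uncompressed
```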
Yoshinobu HIGAMI Kewal K. SALUJA Hiroshi TAKAHASHI Shin-ya KOBAYASHI Yuzo TAKAMATSU
This paper presents methods for detecting transistor short faults using logic-level fault simulation and test generation. The paper considers two types of transistor-level faults, namely strong shorts and weak shorts, which were introduced in our previous research and are defined based on the output values of faulty gates. The proposed fault simulation and test generation are performed using gate-level tools designed for stuck-at faults, and no transistor-level tools are required. In the test generation process, a circuit is modified by inserting inverters, and a stuck-at test generator is used. This modification is not a design-for-testability technique, as the modified circuit is used only during test generation. Further, the generated test patterns are compacted by fault simulation. Also, since the weak short model involves uncertainty in its behavior, we define fault coverage and fault efficiency in three different ways, namely optimistic, pessimistic, and probabilistic, and assess them. Finally, experimental results for ISCAS benchmark circuits demonstrate the effectiveness of the proposed methods.
Younghwan JIN Jihyeon KWON Yuro LEE Dongchan LEE Jaemin AHN
In this paper, we analyze the effects of IQ (in-phase/quadrature-phase) imbalance at both the transmitter and receiver of an OFDM (Orthogonal Frequency Division Multiplexing) system and show that additional diversity gain can be achieved even in the presence of unwanted IQ imbalance. When sub-carriers mixed within an OFDM symbol by the IQ imbalance undergo frequency-selective channels, additional diversity effects appear during demodulation. Simulation results on symbol error rate (SER) performance with ML (Maximum Likelihood) and OSIC (Ordered Successive Interference Cancellation) receivers show that significant performance gain can be achieved from the diversity introduced by the IQ imbalance combined with the frequency-selective channels.
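One common baseband model of IQ imbalance (our sketch; the paper's exact model may differ) writes the impaired signal as y = α·x + β·x*, where the conjugate term is what mirrors sub-carrier k onto sub-carrier −k within an OFDM symbol:

```python
import cmath

def apply_iq_imbalance(x: complex, g: float, phi: float) -> complex:
    """Baseband IQ-imbalance model y = alpha*x + beta*conj(x) with gain
    imbalance g and phase error phi (radians); the conj term creates the
    image-sub-carrier leakage that mixes sub-carriers in OFDM."""
    alpha = (1 + g * cmath.exp(-1j * phi)) / 2
    beta = (1 - g * cmath.exp(1j * phi)) / 2
    return alpha * x + beta * x.conjugate()

# Perfect matching (g = 1, phi = 0) leaves the signal untouched; a mismatch
# of roughly 1 dB gain / 5 degrees phase leaks energy onto the image.
print(apply_iq_imbalance(1 + 1j, 1.0, 0.0))  # → (1+1j)
print(abs(apply_iq_imbalance(1 + 0j, 1.12, 0.087) - (1 + 0j)) > 0)  # → True
```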
Satoshi KOBASHIKAWA Satoshi TAKAHASHI
Users require speech recognition systems that offer rapid response and high accuracy concurrently. Speech recognition accuracy is degraded by additive noise, imposed by ambient noise, and by convolutional noise, created by spatial transfer characteristics, especially in distant-talking situations. Against these types of noise, existing model adaptation techniques achieve robustness by using HMM composition and CMN (cepstral mean normalization). Since they need an additive noise sample as well as a user speech sample to generate the required models, they cannot achieve rapid response, even though it may be possible to capture the additive noise alone in a preceding step. The technique proposed herein uses just the additive noise, in that preceding step, to generate a model that is both adapted and normalized against the two types of noise. When the user's speech sample is captured, only online CMN need be performed to start recognition, so the technique offers rapid response. In addition, to cover the unpredictable S/N values encountered in real applications, the technique creates several S/N-specific HMMs. Simulations using artificial speech data show that the proposed technique increased the character correct rate by 11.62% compared to CMN.
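For reference, the CMN step itself is simple (a minimal batch sketch, not the paper's online variant): subtracting the per-dimension cepstral mean cancels any constant channel term, since convolutional noise is additive in the cepstral domain:

```python
def cmn(cepstra):
    """Cepstral mean normalization: subtract the per-dimension mean over the
    utterance so that a constant channel (convolutional noise) term cancels."""
    dims = len(cepstra[0])
    means = [sum(frame[d] for frame in cepstra) / len(cepstra) for d in range(dims)]
    return [[frame[d] - means[d] for d in range(dims)] for frame in cepstra]

# A constant offset of 0.5 on dimension 0 (the "channel") is removed.
frames = [[1.0 + 0.5, 2.0], [3.0 + 0.5, 4.0]]
normalized = cmn(frames)
print(normalized)  # → [[-1.0, -1.0], [1.0, 1.0]]
```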
The author developed a GaAs wideband IQ modulator IC, which is used in RF signal-source instruments with a direct-conversion architecture. The layout is fully symmetric to obtain temperature-stable operation. However, the actual temperature drift of the EVM (error vector magnitude) is greater in some frequency and temperature ranges than that of the first-generation IC of the same architecture. For applications requiring instrumentation-grade precision, such temperature drift is highly critical. This paper clarifies that linear phase error is the dominant factor causing the temperature drift, and identifies that the temperature drift of the linear phase error is due to equivalent series impedance, especially the parasitic capacitance of the phase shifter. This effect is verified by comparing SSB measurements to a mathematical simulation using an empirical temperature-dependent small-signal FET model.
In this paper, signal processing techniques that can be applied to automatic speech recognition to improve its robustness are reviewed. The choice of signal processing techniques depends strongly on the application scenario. The analysis of scenarios and the choice of suitable signal processing techniques are illustrated through two examples.
Makoto SAKAI Norihide KITAOKA Seiichi NAKAGAWA
Precisely modeling the time dependency of features is one of the important issues in speech recognition. Segmental-unit input HMM with a dimensionality reduction method has been widely used to address this issue. Linear discriminant analysis (LDA) and its heteroscedastic extensions, e.g., heteroscedastic linear discriminant analysis (HLDA) and heteroscedastic discriminant analysis (HDA), are popular approaches to reducing dimensionality. However, it is difficult to find one criterion suitable for every data set when carrying out dimensionality reduction while preserving discriminative information. In this paper, we propose a new framework, which we call power linear discriminant analysis (PLDA). PLDA can describe various criteria, including LDA, HLDA, and HDA, with one control parameter. In addition, we provide an efficient method for selecting the control parameter without training HMMs or testing recognition performance on a development data set. Experimental results show that PLDA is more effective than conventional methods on various data sets.
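As a minimal special case of the LDA criterion that PLDA generalizes (our illustration; PLDA itself is not reproduced here), two-class Fisher LDA on 2-D points reduces to a closed-form direction w = Sw⁻¹(mₐ − m_b):

```python
def fisher_direction(class_a, class_b):
    """Two-class Fisher LDA on 2-D points: returns w = Sw^{-1} (m_a - m_b),
    the projection direction maximizing between-class over within-class
    scatter, using the closed-form 2x2 matrix inverse."""
    def mean(pts):
        return [sum(p[0] for p in pts) / len(pts), sum(p[1] for p in pts) / len(pts)]
    ma, mb = mean(class_a), mean(class_b)
    # Pooled within-class scatter matrix Sw (2x2).
    s = [[0.0, 0.0], [0.0, 0.0]]
    for pts, m in ((class_a, ma), (class_b, mb)):
        for p in pts:
            d = [p[0] - m[0], p[1] - m[1]]
            for i in range(2):
                for j in range(2):
                    s[i][j] += d[i] * d[j]
    det = s[0][0] * s[1][1] - s[0][1] * s[1][0]
    dm = [ma[0] - mb[0], ma[1] - mb[1]]
    return [(s[1][1] * dm[0] - s[0][1] * dm[1]) / det,
            (-s[1][0] * dm[0] + s[0][0] * dm[1]) / det]

# Classes separated along x: the discriminant direction is the x-axis.
a = [(0, 0), (1, 1), (0, 1), (1, 0)]
b = [(4, 0), (5, 1), (4, 1), (5, 0)]
print(fisher_direction(a, b))  # → [-2.0, 0.0]
```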
Keiichi FUNAKI Tatsuhiko KINJO
Complex-valued speech analysis of an analytic speech signal can accurately estimate the spectrum at low frequencies, since the analytic signal provides the spectrum only over positive frequencies. This feature makes it possible to realize more accurate F0 estimation using the complex residual signal extracted by complex-valued speech analysis. We have already proposed F0 estimation using the complex LPC residual, in which the autocorrelation function weighted by the AMDF was adopted as the criterion. That method adopted MMSE-based complex LPC analysis, and it has been reported to estimate F0 more accurately for IRS-filtered speech corrupted by white Gaussian noise, although it does not work well for IRS-filtered speech corrupted by pink noise. In this paper, robust complex speech analysis based on the ELS (extended least squares) method is introduced to overcome this drawback. Experimental results for additive white Gaussian and pink noise demonstrate that the proposed algorithm, based on robust ELS-based complex AR analysis, performs better than the other methods.
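The AMDF-weighted autocorrelation criterion can be sketched on a plain real signal (our simplification; the paper applies it to the complex LPC residual): dividing the autocorrelation by the AMDF (average magnitude difference function) sharpens the pitch peak, since the AMDF dips at the true lag:

```python
import math

def estimate_f0(signal, fs, lag_min, lag_max):
    """Pick the pitch lag maximizing autocorrelation / AMDF, then convert
    the winning lag to a fundamental frequency in Hz."""
    n = len(signal)
    best_lag, best_score = lag_min, float("-inf")
    for lag in range(lag_min, lag_max + 1):
        ac = sum(signal[i] * signal[i + lag] for i in range(n - lag))
        amdf = sum(abs(signal[i] - signal[i + lag]) for i in range(n - lag)) / (n - lag)
        score = ac / (amdf + 1e-9)  # AMDF weighting sharpens the peak
        if score > best_score:
            best_lag, best_score = lag, score
    return fs / best_lag

# A clean 160 Hz tone at 8 kHz sampling has a 50-sample period.
fs = 8000
tone = [math.sin(2 * math.pi * 160 * t / fs) for t in range(800)]
print(estimate_f0(tone, fs, 20, 120))  # → 160.0
```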
This letter proposes a new H∞ smoother (HIS) with a finite impulse response (FIR) structure for discrete-time state-space models, called an H∞ FIR smoother (HIFS). Constraints such as linearity, the quasi-deadbeat property, the FIR structure, and independence from initial state information are imposed in advance. Among smoothers satisfying these requirements, we choose the HIFS that optimizes the H∞ performance criterion. The HIFS is obtained by solving a linear matrix inequality (LMI) problem with a parametrization of a linear equality constraint. Simulations show that the proposed HIFS is more robust against uncertainties and converges faster than the conventional HIS.
Kenichi KOBAYASHI Tomoaki OHTSUKI Toshinobu KANEKO
Multiple-Input Multiple-Output (MIMO) systems can achieve high-data-rate, high-capacity transmission. In MIMO systems, eigen-beam space division multiplexing (E-SDM), which achieves much higher capacity by weighting at the transmitter based on fed-back channel state information (CSI), has been studied. Early studies of E-SDM assumed perfect CSI at the transmitter. In practice, however, the CSI fed back from the receiver becomes outdated due to the time-varying nature of the channels and the feedback delay, so E-SDM with outdated CSI cannot achieve its full potential performance. In this paper, we evaluate the performance of E-SDM with methods for reducing the performance degradation due to feedback delay. We use three methods: 1) a method that predicts the CSI at the future time when it will be used and feeds the predicted CSI back to the transmitter (denoted hereafter as channel prediction); 2), 3) methods that use a receive weight based on the zero-forcing (ZF) or minimum mean square error (MMSE) criterion instead of one based on singular value decomposition (SVD) (denoted hereafter as the ZF- or MMSE-based receive weight). We also propose methods that combine channel prediction with the ZF- or MMSE-based receive weight. Simulation results show that these methods reduce the bit error rate (BER) degradation of E-SDM in the presence of feedback delay, and that the combined methods achieve good BER even when the feedback delay is large.
Mutsuo HIDAKA Shuichi NAGASAWA Kenji HINODE Tetsuro SATOH
We developed an Nb-based fabrication process for single flux quantum (SFQ) circuits in a Japanese government project that began in September 2002 and ended in March 2007. Our conventional process, called the Standard Process (SDP), was improved by overhauling all the process steps and introducing routine process checks for all wafers. Wafer yield with the improved SDP increased dramatically from 50% to over 90%. We also developed a new fabrication process for SFQ circuits, called the Advanced Process (ADP). The specifications of the ADP are nine planarized Nb layers, a minimum Josephson junction (JJ) size of 11 µm, a line width of 0.8 µm, a JJ critical current density of 10 kA/cm², a Mo sheet resistance of 2.4 Ω, and vertically stacked superconductive contact holes. Using the ADP, we fabricated an eight-bit SFQ shift register, a one-million-SQUID array, and a 16-kbit RAM. The shift register operated at up to 120 GHz, and no short or open circuits were detected in the one-million-SQUID array. We confirmed correct memory operation of the 16-kbit RAM, which represents a 5.7-times-greater integration level than is possible with the SDP.
Wilaiporn LEE Suwich KUNARUTTANAPRUK Somchai JITAPUNKUL
This paper proposes a novel technique for designing the optimum pulse shape for ultra-wideband (UWB) systems in the presence of timing jitter. In UWB systems, pulse transmission power and timing jitter tolerance are crucial to communication success. While there is a strong desire to maximize both, one must be traded off against the other. In the literature, much effort has been devoted to optimizing each separately, without considering the drawback to the other; in this paper, both factors are considered jointly. The proposed pulse attains adequate power to survive the noise floor and at the same time provides good resistance to timing jitter. It also meets the power spectral mask prescribed by the Federal Communications Commission (FCC) for indoor UWB systems. Simulation results confirm the advantages of the proposed pulse over previously known UWB pulses. The parameters of the proposed optimization algorithm are also investigated.
The ultrafast switching speed of superconducting digital circuits enables the realization of digital signal processors with performance unattainable by any other technology. Based on rapid single flux quantum (RSFQ) logic, these integrated circuits can deliver computation capacity of up to 30 GOPS on a single processor with a very short latency of 0.1 ns. There are two main applications of such hardware in practical telecommunication systems: filters for superconducting ADCs operating on digital RF data, and recursive filters at baseband. The latter enables functions such as multiuser detection for 3G WCDMA, equalization and channel precoding for 4G OFDM MIMO, and general blind detection. The performance gain is an increase in cell capacity, quality of service, and transmitted data rate. The current status of the development of the RSFQ baseband DSP is discussed. Major components with an operating speed of 30 GHz have been developed. Designs, test results, and future development of the complete systems, including cryopackaging and the CMOS interface, are reviewed.
Norihide KITAOKA Souta HAMAGUCHI Seiichi NAKAGAWA
To achieve high recognition performance for a wide variety of noises and a wide range of signal-to-noise ratios, this paper presents methods for integrating four noise reduction algorithms: spectral subtraction with smoothing in the time direction, temporal-domain SVD-based speech enhancement, GMM-based speech estimation, and KLT-based comb filtering. We propose two types of combination methods: selection of the front-end processor and combination of the results from multiple recognition processes. Recognition results on the CENSREC-1 task show the effectiveness of the proposed methods.
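The first of the four front-ends can be sketched per frequency bin (a minimal sketch of power spectral subtraction; the paper's time-direction smoothing is omitted here):

```python
def spectral_subtract(noisy_power, noise_power, alpha=2.0, beta=0.01):
    """Power spectral subtraction: subtract an over-estimated noise power
    (factor alpha) from each bin, clamped to a spectral floor (beta times
    the noisy power) so the estimate never goes negative."""
    return [max(x - alpha * n, beta * x) for x, n in zip(noisy_power, noise_power)]

# Bins where the noise estimate dominates are clamped to the floor,
# which mitigates negative estimates and "musical noise" artifacts.
print(spectral_subtract([10.0, 1.0, 4.0], [1.0, 1.0, 1.0]))  # → [8.0, 0.01, 2.0]
```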
An effective feature compensation method is developed for reliable speech recognition in real-life in-vehicle environments. The CU-Move corpus, used for evaluation, contains a range of speech and noise signals collected from a number of speakers under actual driving conditions. The PCGMM-based feature compensation considered in this paper utilizes parallel model combination to generate a noise-corrupted speech model by combining the clean speech and noise models. To address unknown, time-varying background noise, an interpolation method over multiple environmental models is employed. To alleviate the computational expense of multiple models, an Environment Transition Model is employed, motivated by the Noise Language Model used in Environmental Sniffing. An environment-dependent scheme of the mixture sharing technique is proposed and shown to be more effective in reducing computational complexity. A smaller environmental model set is determined by the environment transition model for mixture sharing. The proposed scheme is evaluated on the connected single-digits portion of the CU-Move database using the Aurora2 evaluation toolkit. Experimental results indicate that our feature compensation method is effective for improving speech recognition under real-life in-vehicle conditions. A 73.10% reduction in computational requirements was obtained by employing the environment-dependent mixture sharing scheme with only a slight change in recognition performance. This demonstrates that the proposed method maintains the distinctive characteristics among the different environmental models, even when a large number of Gaussian components is selected for mixture sharing.
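The core of parallel model combination can be sketched with its standard log-add approximation (our simplification, ignoring variances and the cepstral-to-log-spectral transform): since signal and noise powers add linearly, a noisy-speech mean in the log domain is approximately log(exp(clean) + exp(noise)):

```python
import math

def combine_clean_and_noise(mu_clean, mu_noise):
    """Log-add approximation behind parallel model combination (PMC):
    combine clean-speech and noise means in the log-spectral domain,
    where powers add linearly in the linear domain."""
    return [math.log(math.exp(c) + math.exp(n)) for c, n in zip(mu_clean, mu_noise)]

# A bin where clean speech dominates is barely changed; a bin where the
# noise dominates is pulled up toward the noise mean.
print(combine_clean_and_noise([5.0, 0.0], [0.0, 5.0]))
```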
Sungwook KIM Myungwhan CHOI Sungchun KIM
New multimedia services over cellular/WLAN overlay networks require different Quality of Service (QoS) levels. Therefore, an efficient network management system is necessary to realize QoS-sensitive multimedia services while enhancing network performance. In this paper, we propose a new online network management framework for overlay networks. Our online approach exhibits dynamic adaptability, flexibility, and responsiveness to the traffic conditions in multimedia networks. Simulation results indicate that the proposed framework strikes an appropriate balance between performance criteria under widely varying traffic loads.