
Keyword Search Result

[Keyword] SPE (2504 hits)

161-180 hits (of 2504)

  • Generative Moment Matching Network-Based Neural Double-Tracking for Synthesized and Natural Singing Voices

    Hiroki TAMARU  Yuki SAITO  Shinnosuke TAKAMICHI  Tomoki KORIYAMA  Hiroshi SARUWATARI  

     
    PAPER-Speech and Hearing

    Publicized: 2019/12/23
    Vol: E103-D No:3
    Page(s): 639-647

    This paper proposes a generative moment matching network (GMMN)-based post-filtering method for providing inter-utterance pitch variation to singing voices and discusses its application to our developed mixing method called neural double-tracking (NDT). When a human singer sings and records the same song twice, there is a difference between the two recordings. The difference, which is called inter-utterance variation, enriches the performer's musical expression and the audience's experience. For example, it makes every concert special because it never recurs in exactly the same manner. Inter-utterance variation enables a mixing method called double-tracking (DT). With DT, the same phrase is recorded twice, then the two recordings are mixed to give richness to singing voices. However, in synthesized singing voices, which are commonly used to create music, there is no inter-utterance variation because the synthesis process is deterministic. There is also no inter-utterance variation when only one voice is recorded. Although there is a signal processing-based method called artificial DT (ADT) to layer singing voices, the signal processing results in unnatural sound artifacts. To solve these problems, we propose a post-filtering method for randomly modulating synthesized or natural singing voices as if the singer sang again. The post-filter built with our method models the inter-utterance pitch variation of human singing voices using a conditional GMMN. Evaluation results indicate that 1) the proposed method provides perceptible and natural inter-utterance variation to synthesized singing voices and that 2) our NDT exhibits higher double-trackedness than ADT when applied to both synthesized and natural singing voices.
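
    As a loose illustration of the double-tracking idea described above (not the authors' GMMN post-filter), the following Python sketch gives a copy of a singing voice a small, slowly varying random pitch deviation and treats the perturbed copy as a "second take"; the contour values, deviation range, and smoothing kernel are illustrative assumptions.

      import numpy as np

      def smooth_pitch_deviation(n_frames, max_cents=30.0, seed=None):
          """Slowly varying random pitch deviation (in cents) per frame."""
          rng = np.random.default_rng(seed)
          kernel = np.hanning(21)
          smooth = np.convolve(rng.standard_normal(n_frames), kernel / kernel.sum(), mode="same")
          return max_cents * smooth / (np.abs(smooth).max() + 1e-12)

      def modulate_f0(f0_hz, deviation_cents):
          """Apply a deviation in cents to an F0 contour; unvoiced (0 Hz) frames are kept."""
          factor = 2.0 ** (deviation_cents / 1200.0)
          return np.where(f0_hz > 0, f0_hz * factor, 0.0)

      f0 = np.full(200, 440.0)                       # hypothetical flat A4 contour, 200 frames
      f0_second_take = modulate_f0(f0, smooth_pitch_deviation(len(f0), seed=0))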

  • Cross-Corpus Speech Emotion Recognition Based on Deep Domain-Adaptive Convolutional Neural Network

    Jiateng LIU  Wenming ZHENG  Yuan ZONG  Cheng LU  Chuangao TANG  

     
    LETTER-Pattern Recognition

    Publicized: 2019/11/07
    Vol: E103-D No:2
    Page(s): 459-463

    In this letter, we propose a novel deep domain-adaptive convolutional neural network (DDACNN) model to handle the challenging cross-corpus speech emotion recognition (SER) problem. The framework of the DDACNN model consists of two components: a feature extraction model based on a deep convolutional neural network (DCNN) and a domain-adaptive (DA) layer added to the DCNN that utilizes the maximum mean discrepancy (MMD) criterion. We use labeled spectrograms from the source speech corpus combined with unlabeled spectrograms from the target speech corpus as the input of two classic DCNNs to extract the emotional features of speech, and train the model with a special mixed loss combining a cross-entropy loss and an MMD loss. Compared with other classic cross-corpus SER methods, the major advantage of the DDACNN model is that it can extract robust time-frequency-related speech features from spectrograms and narrow the discrepancy between the feature distributions of the source and target corpora to obtain better cross-corpus performance. Through several cross-corpus SER experiments, our DDACNN achieved state-of-the-art performance on three public emotional speech corpora and is shown to handle the cross-corpus SER problem effectively.
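
    The DA layer above is trained with the maximum mean discrepancy (MMD) criterion, which measures the distance between the source- and target-corpus feature distributions. A minimal numpy sketch of an RBF-kernel MMD term is shown below; the kernel bandwidth, batch sizes, and feature dimensions are illustrative assumptions, not the paper's settings.

      import numpy as np

      def rbf_kernel(x, y, sigma=1.0):
          d2 = ((x[:, None, :] - y[None, :, :]) ** 2).sum(-1)
          return np.exp(-d2 / (2.0 * sigma ** 2))

      def mmd2(source_feats, target_feats, sigma=1.0):
          """Biased estimate of the squared MMD between two feature batches."""
          k_ss = rbf_kernel(source_feats, source_feats, sigma).mean()
          k_tt = rbf_kernel(target_feats, target_feats, sigma).mean()
          k_st = rbf_kernel(source_feats, target_feats, sigma).mean()
          return k_ss + k_tt - 2.0 * k_st

      src = np.random.default_rng(0).standard_normal((32, 64))      # source-corpus features
      tgt = np.random.default_rng(1).standard_normal((32, 64)) + 0.5  # shifted target features
      print(mmd2(src, tgt))   # mixed loss = cross-entropy (labeled source) + lambda * MMD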

  • Automatic Construction of a Large-Scale Speech Recognition Database Using Multi-Genre Broadcast Data with Inaccurate Subtitle Timestamps

    Jeong-Uk BANG  Mu-Yeol CHOI  Sang-Hun KIM  Oh-Wook KWON  

     
    PAPER-Speech and Hearing

    Publicized: 2019/11/13
    Vol: E103-D No:2
    Page(s): 406-415

    As deep learning-based speech recognition systems attract growing attention, the need for large-scale speech databases for acoustic model training is increasing. Broadcast data can easily be used for database construction, since it contains transcripts for the hearing impaired. However, the subtitle timestamps have not been used to extract speech data because they are often inaccurate due to the inherent characteristics of closed captioning. Thus, we propose to build a large-scale speech database from multi-genre broadcast data with inaccurate subtitle timestamps. The proposed method first extracts the most likely speech intervals by removing subtitle texts with a low subtitle quality index, concatenating adjacent subtitle texts into a merged subtitle text, and adding a margin to the timestamp of the merged subtitle text. Next, a speech recognizer is used to extract a hypothesis text for the speech segment corresponding to the merged subtitle text, and the hypothesis text obtained from the decoder is then recursively aligned with the merged subtitle text. Finally, the speech database is constructed by selecting the sub-parts of the merged subtitle text that match the hypothesis text. Our method successfully refines a large amount of broadcast data with inaccurate subtitle timestamps, taking about half the time of previous methods. Consequently, our method is useful for broadcast data processing, where bulk speech data can be collected every hour.
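
    The final selection step keeps only the sub-parts of the merged subtitle text that agree with the recognizer hypothesis. A minimal word-level sketch of that matching step, using Python's difflib rather than the paper's recursive aligner, is shown below; the sentences and the minimum match length are made up for illustration.

      import difflib

      def matching_subparts(subtitle_words, hypothesis_words, min_len=3):
          """Return runs of subtitle words confirmed by the recognizer hypothesis."""
          sm = difflib.SequenceMatcher(a=subtitle_words, b=hypothesis_words)
          return [subtitle_words[i:i + n]
                  for i, _, n in sm.get_matching_blocks() if n >= min_len]

      subtitle = "we will see sunny skies across most of the country today".split()
      hypothesis = "we will see sunny skies across most the country".split()
      print(matching_subparts(subtitle, hypothesis))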

  • Constant-Q Deep Coefficients for Playback Attack Detection

    Jichen YANG  Longting XU  Bo REN  

     
    LETTER-Speech and Hearing

    Publicized: 2019/11/14
    Vol: E103-D No:2
    Page(s): 464-468

    To extract more discriminative information for playback attack detection than traditional power-spectrum-based features provide, this paper proposes a feature, called constant-Q deep coefficients (CQDC), that uses a deep neural network to describe the nonlinear relationship between the power spectrum and discriminative information. It relies on the constant-Q transform, a deep neural network, and the discrete cosine transform: the constant-Q transform converts the signal from the time domain into the frequency domain, since it is a long-term transform that provides finer frequency detail; the deep neural network extracts information that discriminates playback speech from genuine speech; and the discrete cosine transform decorrelates the feature dimensions. The ASVspoof 2017 corpus version 2.0 is used to evaluate the performance of CQDC. The experimental results show that CQDC outperforms existing features based on the power spectrum obtained from the constant-Q transform, with relative equal error rate reductions ranging from 19.18% to 51.56%. In addition, we found that the discriminative information of CQDC is spread over all frequency bins, unlike commonly used features.
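
    A minimal sketch of the described pipeline (constant-Q transform, then a DNN, then a DCT for decorrelation) is given below; the trained DNN is left as a caller-supplied function, and the transform settings and number of coefficients are illustrative assumptions, not the paper's configuration.

      import numpy as np
      import librosa
      from scipy.fft import dct

      def cqdc(signal, sr, dnn_forward, n_coeffs=20):
          """dnn_forward maps one log-power CQT frame to a discriminative vector."""
          cqt = librosa.cqt(signal, sr=sr, n_bins=84, bins_per_octave=12)
          log_power = np.log(np.abs(cqt) ** 2 + 1e-10).T            # (frames, bins)
          deep = np.stack([dnn_forward(frame) for frame in log_power])
          return dct(deep, type=2, norm="ortho", axis=1)[:, :n_coeffs]

      # feats = cqdc(utterance, 16000, trained_dnn_forward)         # hypothetical usage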

  • A Cell Probe-Based Method for Vehicle Speed Estimation Open Access

    Chi-Hua CHEN  

     
    LETTER

    Vol: E103-A No:1
    Page(s): 265-267

    Information and communication technologies have improved the quality of intelligent transportation systems (ITS). Estimating traffic information from cellular floating vehicle data (CFVD) is more cost-effective, and the data are easier to acquire, than with traditional approaches. This study proposes a cell probe (CP)-based method that analyses cellular network signals (e.g., call arrivals, handoffs, and location updates) and trains regression models for vehicle speed estimation. In experiments, this study compares the traffic information measured by a vehicle detector (VD) with the traffic information estimated by the proposed method. The experimental results show that the accuracy of vehicle speed estimation by the CP-based method is 97.63%. Therefore, the CP-based method can be used to estimate vehicle speed from CFVD for ITS.
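
    A minimal sketch of the regression step is shown below, assuming that per-road-segment handoff and call-arrival rates have already been extracted from CFVD and paired with vehicle detector speeds; the numbers and the linear model are purely illustrative.

      import numpy as np
      from sklearn.linear_model import LinearRegression

      # hypothetical training data: [handoff rate, call-arrival rate] -> VD speed (km/h)
      X = np.array([[12.0, 3.1], [20.0, 2.8], [35.0, 2.2], [50.0, 1.9]])
      y = np.array([35.0, 55.0, 80.0, 100.0])

      model = LinearRegression().fit(X, y)
      print(model.predict([[28.0, 2.5]]))   # estimated speed for new cell-probe statistics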

  • Unbiased Interference Suppression Method Based on Spectrum Compensation Open Access

    Jian WU  Xiaomei TANG  Zengjun LIU  Baiyu LI  Feixue WANG  

     
    PAPER-Fundamental Theories for Communications

    Publicized: 2019/07/16
    Vol: E103-B No:1
    Page(s): 52-59

    The major weakness of global navigation satellite system receivers is their vulnerability to intentional and unintentional interference. Frequency domain interference suppression (FDIS) technology is one of the most useful countermeasures. The pseudo-range measurement is unbiased after FDIS filtering given an ideal analog channel. However, under the influence of the analog modules in the RF front-end, the amplitude response and phase response of the channel-equivalent filter are non-ideal, which biases the pseudo-range measurement after FDIS filtering, and the bias varies with the frequency of the interference. This paper proposes an unbiased interference suppression method based on signal estimation and spectrum compensation. The core idea is to use the parameters calculated from the tracking loop to estimate and reconstruct the desired signal. The estimated signal is filtered by the equivalent filter of the actual channel and then used to compensate for the spectrum loss caused by the FDIS method in the frequency domain. Simulations show that the proposed algorithm reduces the pseudo-range measurement bias significantly, even for channels with asymmetrical group delay and multiple interference sources at any location.
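
    A minimal numpy sketch of the compensation idea is shown below: DFT bins excised by frequency-domain interference suppression are refilled from the spectrum of a locally reconstructed replica of the desired signal. Replica generation from the tracking-loop parameters, and the channel-equivalent filtering of that replica, are outside this sketch; the function and argument names are illustrative.

      import numpy as np

      def fdis_with_compensation(received, replica, interference_bins):
          """Suppress interference bins and refill them from the replica spectrum."""
          R = np.fft.fft(received)
          S = np.fft.fft(replica)
          # for a real-valued signal, interference_bins should include the mirrored
          # negative-frequency bins as well
          R[interference_bins] = S[interference_bins]
          return np.fft.ifft(R).real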

  • Visible Light V2V Communication and Ranging System Prototypes Using Spread Spectrum Techniques Open Access

    Akira John SUZUKI  Masahiro YAMAMOTO  Kiyoshi MIZUI  

     
    PAPER

    Vol: E103-A No:1
    Page(s): 243-251

    There is currently much interest in the development of optical wireless and visible light communication (VLC) systems in the ITS field. Research in VLC, and on boomerang systems in particular, often remains at a theoretical or computer-simulated level. This paper reports the three-stage development of a prototype boomerang communication and ranging system using visible light V2V communication via LEDs and photodiodes with direct-sequence spread spectrum techniques. The system uses simple and widely available components, aiming for a low-cost, frugal-innovation approach. Results show that, although the prototype distance-measurement unit still needs improvement because of its margin of error, simultaneous communication and ranging is possible with the newly designed prototype. The benefits of further research and development of boomerang technology prototypes are confirmed.

  • Distributed Collaborative Spectrum Sensing Using 1-Bit Compressive Sensing in Cognitive Radio Networks

    Shengnan YAN  Mingxin LIU  Jingjing SI  

     
    LETTER-Communication Theory and Signals

    Vol: E103-A No:1
    Page(s): 382-388

    In cognitive radio (CR) networks, spectrum sensing is an essential task for enabling dynamic spectrum sharing. However, the problem becomes quite challenging in wideband spectrum sensing due to high sampling pressure, limited power and computing resources, and serious channel fading. To overcome these challenges, this paper proposes a distributed collaborative spectrum sensing scheme based on 1-bit compressive sensing (CS). Each secondary user (SU) performs local 1-bit CS and obtains support estimate information from the signal reconstruction. To utilize joint sparsity and achieve spatial diversity, the support estimate information across the network is fused via the average consensus technique based on distributed computation and one-hop communications. The fused support estimate is then used as prior information to guide the next local signal reconstruction, which is implemented via our proposed weighted binary iterative hard thresholding (BIHT) algorithm. The local signal reconstruction and the distributed fusion of support information are carried out alternately until reliable spectrum detection is achieved. Simulations verify the effectiveness of the proposed scheme in distributed CR networks.
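
    The local reconstruction builds on binary iterative hard thresholding (BIHT). A minimal numpy sketch of plain (unweighted) BIHT is given below; the support weighting derived from the fused network information, which is the letter's contribution, is omitted, and the problem sizes are illustrative.

      import numpy as np

      def biht(y_sign, Phi, sparsity, n_iters=100, step=1.0):
          """Recover a unit-norm sparse vector from 1-bit measurements y_sign = sign(Phi x)."""
          m, n = Phi.shape
          x = np.zeros(n)
          for _ in range(n_iters):
              x = x + (step / m) * (Phi.T @ (y_sign - np.sign(Phi @ x)))
              x[np.argsort(np.abs(x))[:-sparsity]] = 0.0     # keep the K largest entries
          return x / (np.linalg.norm(x) + 1e-12)             # 1-bit CS recovers up to scale

      rng = np.random.default_rng(0)
      Phi = rng.standard_normal((200, 100))
      x_true = np.zeros(100); x_true[[3, 30, 77]] = [1.0, -0.5, 0.8]
      x_true /= np.linalg.norm(x_true)
      x_hat = biht(np.sign(Phi @ x_true), Phi, sparsity=3)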

  • Convolutional Neural Networks for Pilot-Induced Cyclostationarity Based OFDM Signals Spectrum Sensing in Full-Duplex Cognitive Radio

    Hang LIU  Xu ZHU  Takeo FUJII  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

    Publicized: 2019/07/16
    Vol: E103-B No:1
    Page(s): 91-102

    Spectrum sensing of orthogonal frequency division multiplexing (OFDM) systems in cognitive radio (CR) has always been challenging, especially for user terminals that utilize the full-duplex (FD) mode. We herein propose an advanced FD spectrum-sensing scheme that can be performed successfully even when severe self-interference is encountered from the user terminal. Based on the “classification-converted sensing” framework, the cyclostationary periodogram generated by OFDM pilots is rendered in the form of images. These images are subsequently fed into convolutional neural networks (CNNs) for classification, owing to the CNN's strength in image recognition. More importantly, to realize spectrum sensing against residual self-interference, noise pollution, and channel fading, we use adversarial training with a proposed CR-specific, modified training database. We analyze the performance of different CNN architectures and different resolutions of the input image to balance detection performance with computing capability. We also propose a design plan for the signal structure of the CR transmitting terminal that fits into the proposed spectrum-sensing scheme while benefiting from its own transmission. The simulation results prove that our method has excellent sensing capability for the FD system; furthermore, it achieves higher detection accuracy than the conventional method.
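
    A minimal PyTorch sketch of the "classification-converted sensing" idea follows: a small CNN labels cyclostationary-periodogram images as occupied or vacant. The architecture, image size, and channel counts are illustrative and not the configurations analyzed in the paper.

      import torch
      import torch.nn as nn

      class SensingCNN(nn.Module):
          def __init__(self):
              super().__init__()
              self.features = nn.Sequential(
                  nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                  nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
              self.classifier = nn.Linear(32 * 16 * 16, 2)   # occupied vs. vacant

          def forward(self, x):                              # x: (batch, 1, 64, 64) images
              return self.classifier(self.features(x).flatten(1))

      logits = SensingCNN()(torch.randn(8, 1, 64, 64))       # random stand-in periodograms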

  • Blind Bandwidth Extension with a Non-Linear Function and Its Evaluation on Automatic Speaker Verification

    Ryota KAMINISHI  Haruna MIYAMOTO  Sayaka SHIOTA  Hitoshi KIYA  

     
    PAPER

    Publicized: 2019/10/25
    Vol: E103-D No:1
    Page(s): 42-49

    This study evaluates the effects of some non-learning blind bandwidth extension (BWE) methods on state-of-the-art automatic speaker verification (ASV) systems. Recently, a non-linear bandwidth extension (N-BWE) method was proposed as a blind, non-learning, and lightweight BWE approach, and other non-learning BWEs have also been developed in recent years. For ASV evaluations, most of the data available to train ASV systems is narrowband (NB) telephone speech, whereas wideband (WB) data have been used to train state-of-the-art ASV systems such as i-vector, d-vector, and x-vector. This can cause sampling rate mismatches when all datasets are used. In this paper, we investigate the influence of sampling rate mismatches on x-vector-based ASV systems and how non-learning BWE methods perform against them. The results showed that the N-BWE method improved the equal error rate (EER) of x-vector-based ASV systems when the mismatches were present. We also investigated the relationship between objective measurements and EERs. Consequently, the N-BWE method produced the lowest EERs on both ASV systems and obtained a lower RMS-LSD value and a higher STOI score.
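
    For context, a very simple non-learning, non-linear BWE of the kind discussed can be sketched as follows: upsample the narrowband speech, generate high-band harmonics with a memoryless non-linearity, and add back the high-pass-filtered result. This is an illustrative baseline, not the exact N-BWE method; the sampling rates, filter order, cutoff, and gain are assumptions.

      import numpy as np
      from scipy.signal import resample_poly, butter, sosfilt

      def naive_nonlinear_bwe(nb_speech, gain=0.3):
          wb = resample_poly(nb_speech, up=2, down=1)        # 8 kHz -> 16 kHz
          harmonics = np.abs(wb)                             # memoryless harmonic generator
          sos = butter(6, 4000, btype="highpass", fs=16000, output="sos")
          return wb + gain * sosfilt(sos, harmonics)         # restore the 4-8 kHz band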

  • On the Detection of Malicious Behaviors against Introspection Using Hardware Architectural Events

    Huaizhe ZHOU  Haihe BA  Yongjun WANG  Tie HONG  

     
    LETTER-Artificial Intelligence, Data Mining

    Publicized: 2019/10/09
    Vol: E103-D No:1
    Page(s): 177-180

    The arms race between offense and defense in the cloud impels the innovation of techniques for monitoring attacks and unauthorized activities. The promising technique of virtual machine introspection (VMI) has become prevalent owing to its tamper-resistant capability. However, some elaborate exploits are capable of invalidating VMI-based tools by breaking the assumption of a trusted guest kernel. To achieve more reliable and robust introspection, in this paper we introduce a practical approach for monitoring and detecting attacks that attempt to subvert VMI. Our approach combines supervised machine learning and hardware architectural events to identify malicious behaviors targeted at VMI techniques. To demonstrate feasibility, we implement a prototype named HyperMon on the Xen hypervisor. The results of our evaluation show the effectiveness of HyperMon in detecting malicious behaviors with an average accuracy of 90.51% (AUC).
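
    A minimal sketch of combining supervised learning with hardware architectural events is shown below, assuming per-interval event counts (e.g. instructions retired, cache misses, branch misses) have already been collected; the feature set, the toy numbers, and the random-forest model are illustrative, not HyperMon's.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier

      X = np.array([[1.2e9, 3.0e6, 1.1e6],      # benign interval
                    [1.1e9, 2.8e6, 1.0e6],      # benign interval
                    [2.4e9, 9.5e6, 4.2e6],      # interval with anti-VMI activity
                    [2.6e9, 9.9e6, 4.5e6]])     # interval with anti-VMI activity
      y = np.array([0, 0, 1, 1])

      clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
      print(clf.predict([[2.5e9, 9.7e6, 4.3e6]]))   # classify a new monitoring interval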

  • On the Complementary Role of DNN Multi-Level Enhancement for Noisy Robust Speaker Recognition in an I-Vector Framework

    Xingyu ZHANG  Xia ZOU  Meng SUN  Penglong WU  Yimin WANG  Jun HE  

     
    LETTER-Speech and Hearing

    Vol: E103-A No:1
    Page(s): 356-360

    In order to improve the noise robustness of automatic speaker recognition, many speech/feature enhancement techniques based on deep neural networks (DNNs) have been explored. In this work, a DNN multi-level enhancement (DNN-ME), which consists of the stages of signal enhancement, cepstrum enhancement, and i-vector enhancement, is proposed for text-independent speaker recognition. Given that these enhancement methods are applied at different stages of the speaker recognition pipeline, it is worth exploring their complementary roles, which benefits the understanding of the pros and cons of enhancement at each stage. In order to exploit the capabilities of DNN-ME as fully as possible, two kinds of methods, called cascaded DNN-ME and joint input of DNNs, are studied. Weighted Gaussian mixture models (WGMMs), proposed in our previous work, are also applied to further improve the model's performance. Experiments conducted on the Speakers in the Wild (SITW) database have shown that DNN-ME is significantly superior to systems with only a single enhancement for noise-robust speaker recognition. Compared with the i-vector baseline, the equal error rate (EER) was reduced from 5.75 to 4.01.

  • Blind Detection Algorithm Based on Spectrum Sharing and Coexistence for Machine-to-Machine Communication

    Yun ZHANG  Bingrui LI  Shujuan YU  Meisheng ZHAO  

     
    PAPER-Analog Signal Processing

    Vol: E103-A No:1
    Page(s): 297-302

    In this paper, we propose a new scheme that uses a blind detection algorithm to recover the conventional user signal in a system in which sporadic machine-to-machine (M2M) communication shares the same spectrum with the conventional user. Compressive sensing techniques are used to estimate the M2M device signals. Based on the Hopfield neural network (HNN), the blind detection algorithm is used to recover the conventional user signal. The simulation results show that the conventional user signal can be effectively recovered over an unknown channel. Compared with existing methods, such as using a training sequence to estimate the channel in advance, the blind detection algorithm used in this paper requires no channel identification and can detect the transmitted signal directly.

  • Non-Blind Speech Watermarking Method Based on Spread-Spectrum Using Linear Prediction Residue

    Reiya NAMIKAWA  Masashi UNOKI  

     
    LETTER

    Publicized: 2019/10/23
    Vol: E103-D No:1
    Page(s): 63-66

    We propose a method of non-blind speech watermarking based on direct spread spectrum (DSS) that uses a linear prediction scheme to reduce the sound distortion caused by spread spectrum. Results of evaluation simulations revealed that the proposed method causes much less sound-quality distortion than the DSS method while achieving almost the same bit error ratios (BERs) against various attacks.
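
    For reference, a minimal numpy sketch of plain direct-spread-spectrum embedding and non-blind detection on a host signal is shown below; it works directly on the waveform, whereas the proposed method operates on the linear prediction residue, and the chip rate, gain, and PN seed are illustrative.

      import numpy as np

      def dss_embed(host, bits, chip_rate=256, alpha=0.005, seed=1):
          rng = np.random.default_rng(seed)
          pn = rng.choice([-1.0, 1.0], size=chip_rate)                 # shared PN sequence
          wm = np.concatenate([(1.0 if b else -1.0) * pn for b in bits])
          out = host.copy()
          out[:len(wm)] += alpha * wm
          return out

      def dss_detect(watermarked, host, n_bits, chip_rate=256, seed=1):
          rng = np.random.default_rng(seed)
          pn = rng.choice([-1.0, 1.0], size=chip_rate)
          diff = (watermarked - host)[:n_bits * chip_rate]             # non-blind: host known
          corr = diff.reshape(n_bits, chip_rate) @ pn
          return (corr > 0).astype(int)

      host = np.random.default_rng(0).standard_normal(4096)            # stand-in host signal
      print(dss_detect(dss_embed(host, [1, 0, 1, 1]), host, n_bits=4)) # -> [1 0 1 1]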

  • Spectra Restoration of Bone-Conducted Speech via Attention-Based Contextual Information and Spectro-Temporal Structure Constraint Open Access

    Changyan ZHENG  Tieyong CAO  Jibin YANG  Xiongwei ZHANG  Meng SUN  

     
    LETTER-Digital Signal Processing

    Vol: E102-A No:12
    Page(s): 2001-2007

    Compared with acoustic microphone (AM) speech, bone-conducted microphone (BCM) speech is largely immune to background noise, but suffers from severe loss of information due to the characteristics of the human-body transmission channel. In this letter, a new method for speaker-dependent BCM speech enhancement is proposed, which focuses on restoring the spectra of the distorted speech. In order to better infer the missing components, an attention-based bidirectional Long Short-Term Memory (AB-BLSTM) network is designed to optimize the use of contextual information in modeling the relationship between the spectra of BCM speech and its corresponding clean AM speech. In addition, the Structural SIMilarity (SSIM) metric, a structural error measure originating from image processing, is adopted as the loss function, providing a constraint on the spectro-temporal structure of the recovered spectra. Experiments demonstrate that, compared with approaches based on a conventional DNN and the mean square error (MSE), the proposed method better recovers the missing phonemes and obtains spectra whose spectro-temporal structure is more similar to the target, leading to a large improvement in objective metrics.
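
    A minimal numpy sketch of the SSIM measure used as the training objective is given below, computed globally over a spectrogram patch; the constants follow common SSIM defaults, and the global (non-windowed) formulation is a simplification.

      import numpy as np

      def ssim(x, y, data_range=1.0):
          """Global SSIM between two equally sized arrays (e.g. spectrogram patches)."""
          c1, c2 = (0.01 * data_range) ** 2, (0.03 * data_range) ** 2
          mx, my = x.mean(), y.mean()
          vx, vy = x.var(), y.var()
          cov = ((x - mx) * (y - my)).mean()
          return ((2 * mx * my + c1) * (2 * cov + c2)) / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

      # training loss = 1 - ssim(predicted_spectrogram, target_spectrogram)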

  • Ternary Convolutional Codes with Optimum Distance Spectrum

    Shungo MIYAGI  Motohiko ISAKA  

     
    LETTER-Coding Theory

    Vol: E102-A No:12
    Page(s): 1688-1690

    This letter presents ternary convolutional codes and their punctured codes with optimum distance spectrum.

  • A Low Area Overhead Design Method for High-Performance General-Synchronous Circuits with Speculative Execution

    Shimpei SATO  Eijiro SASSA  Yuta UKON  Atsushi TAKAHASHI  

     
    PAPER

    Vol: E102-A No:12
    Page(s): 1760-1769

    In order to obtain high-performance circuits in advanced technology nodes, the design methodology has to take the existence of large delay variations into account. Clock scheduling and speculative execution incur overheads, but they have the potential to improve performance by averaging out the imbalance of maximum delay among paths and by utilizing valid data that become available earlier than in the worst-case scenario, respectively. In this paper, we propose a high-performance digital circuit design method that uses speculative execution with less overhead by effectively utilizing clock scheduling with delay insertion. The need for speculation, which causes overhead, is effectively reduced by clock scheduling with delay insertion. Experiments show that a generated circuit achieves a 26% performance improvement with 1.3% area overhead compared with a circuit without clock scheduling and speculative execution.

  • A Novel Three-Point Windowed Interpolation DFT Method for Frequency Measurement of Real Sinusoid Signal

    Kai WANG  Yiting GAO  Lin ZHOU  

     
    PAPER-Digital Signal Processing

    Vol: E102-A No:12
    Page(s): 1940-1945

    Windowed interpolation DFT methods have been used to estimate the parameters of single-frequency and multi-frequency signals. Nevertheless, they do not work well for real-valued sinusoids whose positive- and negative-frequency components are closely spaced. In this paper, we describe a novel three-point windowed interpolation DFT method for frequency measurement of real-valued sinusoid signals. An exact representation of the windowed DFT with the maximum sidelobe decay window (MSDW) is constructed. The spectral superposition of the positive- and negative-frequency components is calculated and taken into account to improve the estimation performance. The simulation results match the theoretical values well. In addition, computer simulations demonstrate that the proposed algorithm provides high estimation accuracy and good noise suppression capability.
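
    For comparison, the following numpy sketch shows a classical three-point interpolation estimator (a parabolic fit on Hann-windowed log magnitudes); unlike the proposed MSDW-based method, it does not model the superposed negative-frequency component, which is exactly where it loses accuracy for real-valued sinusoids. The test tone and parameters are illustrative.

      import numpy as np

      def freq_estimate(x, fs):
          n = len(x)
          spec = np.abs(np.fft.rfft(x * np.hanning(n)))
          k = int(np.argmax(spec[1:-1])) + 1                 # interior peak bin
          a, b, c = np.log(spec[k - 1:k + 2] + 1e-300)
          delta = 0.5 * (a - c) / (a - 2 * b + c)            # fractional-bin offset
          return (k + delta) * fs / n

      fs = 1000.0
      t = np.arange(1024) / fs
      print(freq_estimate(np.cos(2 * np.pi * 123.4 * t), fs))   # close to 123.4 Hz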

  • High Performance Application Specific Stream Architecture for Hardware Acceleration of HOG-SVM on FPGA

    Piyumal RANAWAKA  Mongkol EKPANYAPONG  Adriano TAVARES  Mathew DAILEY  Krit ATHIKULWONGSE  Vitor SILVA  

     
    PAPER

    Vol: E102-A No:12
    Page(s): 1792-1803

    Conventional sequential processing in software on a general-purpose CPU has become significantly insufficient for certain heavy computations, owing to the high processing power demanded to deliver adequate throughput and performance. For many reasons there is a high degree of interest in high-performance real-time video processing on embedded systems; however, embedded processing platforms with limited performance cannot meet the processing demands of several such intensive computations in the computer vision domain. Hardware acceleration is therefore an ideal solution, in which process-intensive computations are accelerated by application-specific hardware integrated with a general-purpose CPU. In this research we focus on building a parallelized, high-performance, application-specific architecture for such a hardware accelerator for HOG-SVM computation, implemented on a Zynq 7000 FPGA. The histogram of oriented gradients (HOG) technique combined with a support vector machine (SVM) classifier is versatile and extremely popular in the computer vision domain, but it demands high processing power. Owing to this popularity and versatility, various previous studies have attempted to obtain adequate throughput for HOG-SVM. This work, with a high throughput of 240 FPS at a single scale on VGA frames of size 640x480, outperforms the best single-scale performance of previous research by approximately a factor of 3-4; furthermore, it is approximately a 15x speed-up over the GPU-accelerated software version with the same accuracy. This research explores a novel architecture based on deep pipelining, parallel processing, and BRAM structures for achieving high performance on the HOG-SVM computation. Furthermore, the developed video processing unit (VPU), which acts as a hardware accelerator, is integrated as a co-processing peripheral with a host CPU through a novel custom accelerator structure with on-chip buses in a system-on-chip (SoC) fashion. This allows the heavy, redundant video-stream processing computations to be offloaded to the VPU while the processing power of the CPU is preserved for running lightweight applications. This research mainly focuses on the architectural techniques used to achieve higher performance in the hardware accelerator and on the novel accelerator structure used to integrate the accelerator with the host CPU.
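
    As a software reference point for the computation being accelerated, the following sketch runs OpenCV's stock HOG-SVM pedestrian detector on a VGA frame; it is not the proposed FPGA pipeline, and 'frame.jpg' is a placeholder input file.

      import cv2

      hog = cv2.HOGDescriptor()
      hog.setSVMDetector(cv2.HOGDescriptor_getDefaultPeopleDetector())

      frame = cv2.imread("frame.jpg")                        # e.g. a 640x480 VGA frame
      boxes, weights = hog.detectMultiScale(frame, winStride=(8, 8))
      for (x, y, w, h) in boxes:                             # draw detected pedestrians
          cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)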

  • New Sub-Band Adaptive Volterra Filter for Identification of Loudspeaker

    Satoshi KINOSHITA  Yoshinobu KAJIKAWA  

     
    PAPER-Digital Signal Processing

    Vol: E102-A No:12
    Page(s): 1946-1955

    Adaptive Volterra filters (AVFs) are commonly used to identify nonlinear systems, such as loudspeaker systems, and ordinary adaptive algorithms can be used to update their filter coefficients. However, AVFs have huge computational complexity even when the order of the AVF is constrained to second order. Improving calculation efficiency is therefore an important issue for the real-time implementation of AVFs. In this paper, we propose a novel sub-band AVF with high calculation efficiency for second-order AVFs. The proposed sub-band AVF consists of four parts: input signal transformation for a single sub-band AVF, tap length determination to improve calculation efficiency, switching of the number of sub-bands while maintaining estimation accuracy, and an automatic search for an appropriate number of sub-bands. The proposed sub-band AVF improves calculation efficiency for systems whose dominant nonlinear components are concentrated in particular frequency bands, as in loudspeakers. A simulation result demonstrates that the proposed sub-band AVF achieves higher estimation accuracy than conventional efficient AVFs.
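
    A minimal numpy sketch of a second-order (full-band, fixed-coefficient) Volterra filter output is shown below, which makes the complexity issue concrete: the quadratic kernel alone costs on the order of N^2 multiplications per output sample. The memory length and coefficients are illustrative.

      import numpy as np

      def volterra2_output(x_buf, h1, h2):
          """x_buf: most recent N input samples; h1: linear kernel (N,); h2: quadratic kernel (N, N)."""
          linear = h1 @ x_buf
          quadratic = x_buf @ h2 @ x_buf        # O(N^2) multiplications per sample
          return linear + quadratic

      N = 16
      rng = np.random.default_rng(0)
      x_buf = rng.standard_normal(N)
      print(volterra2_output(x_buf, rng.standard_normal(N), 0.01 * rng.standard_normal((N, N))))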

161-180 hits (of 2504)