
Keyword Search Result

[Keyword] OMP (3945 hits)

Showing results 701-720 of 3945

  • Semi-Supervised Speech Enhancement Combining Nonnegative Matrix Factorization and Robust Principal Component Analysis

    Yonggang HU  Xiongwei ZHANG  Xia ZOU  Meng SUN  Yunfei ZHENG  Gang MIN  

     
    LETTER-Speech and Hearing
    Vol: E100-A No:8  Page(s): 1714-1719

    Nonnegative matrix factorization (NMF) is one of the most popular machine learning tools for speech enhancement. Supervised NMF-based speech enhancement updates iteratively with prior knowledge of the clean speech and noise spectral bases. However, in many real-world scenarios, prior training is not always possible. The traditional semi-supervised NMF (SNMF) overcomes this shortcoming, but its performance degrades. In this letter, without any prior knowledge of the speech and noise, we present an improved semi-supervised NMF-based speech enhancement algorithm that combines NMF with robust principal component analysis (RPCA). In this approach, fixed speech bases are obtained offline from training samples drawn from a public dataset. The noise samples used for training the noise bases, instead of being characterized a priori as usual, are obtained via the RPCA algorithm on the fly. This letter also studies whether the time length of the estimated noise samples affects the performance of the algorithm. Three metrics, PESQ, SDR, and SNR, are used to evaluate the algorithms in experiments on TIMIT with 20 noise types at various signal-to-noise ratio levels. Extensive experimental results demonstrate the superiority of the proposed algorithm over competing speech enhancement algorithms.
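
    The fixed-speech-bases idea can be pictured with a small NumPy sketch: semi-supervised NMF under standard KL-divergence multiplicative updates, where only the noise bases are adapted. This is an illustration, not the authors' exact algorithm; the RPCA step that extracts the noise sample on the fly is abstracted away, and all names (`seminmf_enhance`, `Ws`) are hypothetical.

    ```python
    import numpy as np

    def seminmf_enhance(V, Ws, n_noise=8, n_iter=100, eps=1e-9):
        """V: (F x T) magnitude spectrogram; Ws: (F x Ks) speech bases
        trained offline and kept fixed. Noise bases Wn are learned on the
        fly (the letter trains them on RPCA-extracted noise samples; here
        they start from random values for brevity)."""
        rng = np.random.default_rng(0)
        Ks = Ws.shape[1]
        Wn = rng.random((V.shape[0], n_noise)) + eps
        H = rng.random((Ks + n_noise, V.shape[1])) + eps
        for _ in range(n_iter):
            W = np.hstack([Ws, Wn])
            R = W @ H + eps
            H *= (W.T @ (V / R)) / (W.T @ np.ones_like(V))     # KL update for H
            R = W @ H + eps
            Hn = H[Ks:]
            Wn *= ((V / R) @ Hn.T) / (np.ones_like(V) @ Hn.T)  # noise bases only
        W = np.hstack([Ws, Wn])
        speech = Ws @ H[:Ks]
        return speech / (W @ H + eps) * V   # Wiener-style mask on the mixture
    ```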

  • Mutual Kernel Matrix Completion

    Rachelle RIVERO  Richard LEMENCE  Tsuyoshi KATO  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2017/05/17  Vol: E100-D No:8  Page(s): 1844-1851

    With the huge influx of various data nowadays, extracting knowledge from them has become an interesting but tedious task for data scientists, particularly when the data are heterogeneous or have missing information. Many data completion techniques have been introduced, especially with the advent of kernel methods, which can represent heterogeneous data sets in a single form: as kernel matrices. However, among the many data completion techniques in the literature, mutually completing several incomplete kernel matrices has received little attention. In this paper, we present a new method, the Mutual Kernel Matrix Completion (MKMC) algorithm, which mutually infers the missing entries of multiple kernel matrices by combining data fusion and kernel matrix completion, applied to biological data sets for classification tasks. We first introduce an objective function that is minimized with the EM algorithm, which in turn yields estimates of the missing entries of the kernel matrices involved. The completed kernel matrices are then combined to produce a model matrix that further improves the obtained estimates. An interesting result of our study is that both the E-step and the M-step have closed forms, which makes our algorithm efficient in terms of time and memory. After completion, the kernel matrices are used to train an SVM classifier to test how well the relationships among the entries are preserved. Our empirical results show that the proposed algorithm outperforms traditional completion techniques in preserving the relationships among the data points and in accurately recovering the missing kernel matrix entries. MKMC thus offers a promising solution to the problem of mutually estimating a number of relevant incomplete kernel matrices.
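
    A deliberately crude, EM-flavored stand-in conveys the mutual-completion idea. The paper derives exact closed-form E- and M-steps from a proper objective; this sketch merely alternates imputing missing entries from an averaged model matrix, and every name in it is hypothetical.

    ```python
    import numpy as np

    def mkmc_sketch(kernels, masks, n_iter=50):
        """kernels: list of (n x n) kernel matrices; masks[i] is a boolean
        (n x n) array, False where entries are missing. Alternates an
        E-like imputation from the model matrix with an M-like averaging.
        MKMC itself uses closed-form updates; this zeroth-order loop only
        illustrates mutual completion."""
        Ks = [K.copy() for K in kernels]
        for _ in range(n_iter):
            M = np.mean(Ks, axis=0)          # model matrix from all kernels
            for K, obs in zip(Ks, masks):
                K[~obs] = M[~obs]            # impute this kernel's missing part
        return Ks, np.mean(Ks, axis=0)
    ```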

  • High-Accuracy and Area-Efficient Stochastic FIR Digital Filters Based on Hybrid Computation

    Shunsuke KOSHITA  Naoya ONIZAWA  Masahide ABE  Takahiro HANYU  Masayuki KAWAMATA  

     
    PAPER-VLSI Architecture
    Publicized: 2017/05/22  Vol: E100-D No:8  Page(s): 1592-1602

    This paper presents FIR digital filters based on stochastic/binary hybrid computation with reduced hardware complexity and high computational accuracy. Recently, some attempts have been made to apply stochastic computation to the realization of digital filters. Such realization methods lead to a significant reduction in hardware complexity over conventional filter realizations based on binary computation. However, stochastic digital filters suffer from lower computational accuracy than digital filters based on binary computation because of the random error fluctuations generated in stochastic bit streams, stochastic multipliers, and stochastic adders. This is a more serious problem for FIR filter realizations than for their IIR counterparts because FIR filters usually require a larger number of multiplications and additions than IIR filters. To improve the computational accuracy, this paper presents a stochastic/binary hybrid realization in which multipliers are realized using stochastic computation but adders are realized using binary computation. In addition, a coefficient-scaling technique is proposed to further improve the computational accuracy of stochastic FIR filters. Furthermore, the transposed structure is applied to the FIR filter realization, further reducing hardware complexity. Evaluation results demonstrate that our method achieves up to 40 dB improvement in minimum stopband attenuation compared with the conventional pure stochastic design.
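
    The hybrid split is easy to see in a behavioral sketch: unipolar stochastic multiplication (bitwise AND of Bernoulli bit streams) followed by exact binary accumulation. This is a minimal model, assuming inputs and coefficients are already scaled into [0, 1]; the names are illustrative, and the paper's coefficient scaling and transposed structure are not modeled.

    ```python
    import numpy as np

    def hybrid_fir(x, coeffs, stream_len=4096, seed=0):
        """Each product is computed stochastically (bitwise AND of unipolar
        Bernoulli streams, whose mean is the product of the two
        probabilities); tap products are then summed exactly in binary,
        mirroring the stochastic-multiplier / binary-adder split."""
        rng = np.random.default_rng(seed)
        taps, y = len(coeffs), np.zeros(len(x))
        for t in range(taps - 1, len(x)):
            acc = 0.0
            for k, c in enumerate(coeffs):
                xs = rng.random(stream_len) < x[t - k]  # stochastic stream for x
                cs = rng.random(stream_len) < c         # stochastic stream for c
                acc += np.mean(xs & cs)                 # AND = unipolar multiply
            y[t] = acc                                  # exact binary addition
        return y
    ```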

  • Investigation on Non-Orthogonal Multiple Access with Reduced Complexity Maximum Likelihood Receiver and Dynamic Resource Allocation

    Yousuke SANO  Kazuaki TAKEDA  Satoshi NAGATA  Takehiro NAKAMURA  Xiaohang CHEN  Anxin LI  Xu ZHANG  Jiang HUILING  Kazuhiko FUKAWA  

     
    PAPER-Wireless Communication Technologies
    Publicized: 2017/02/08  Vol: E100-B No:8  Page(s): 1301-1311

    Non-orthogonal multiple access (NOMA) is a promising multiple access scheme for further improving spectrum efficiency compared to orthogonal multiple access (OMA) in 5th Generation (5G) mobile communication systems. Two kinds of receiver structures are considered as inter-user interference cancellers for NOMA: the reduced-complexity maximum likelihood (R-ML) receiver and the codeword-level interference canceller (CWIC). In this paper, we show that the R-ML is superior to the CWIC in terms of scheduling flexibility. In addition, we propose a link-to-system (L2S) mapping scheme for the R-ML to enable system-level evaluation, and show that the proposed scheme accurately predicts the block error rate (BLER) performance of the R-ML. The proposed L2S mapping scheme also demonstrates that the system-level throughput of the R-ML is higher than that of the CWIC thanks to this scheduling flexibility.
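
    For intuition, joint ML detection of a two-user downlink NOMA signal can be written as a brute-force search over the superposed constellation. The R-ML's complexity reduction and the L2S mapping are not modeled here; the power levels `p_near`/`p_far` and the QPSK alphabet are illustrative assumptions.

    ```python
    import numpy as np
    from itertools import product

    QPSK = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)

    def noma_ml_detect(y, h, p_near=0.2, p_far=0.8):
        """Exhaustive joint ML detection of one received sample
        y = h*(sqrt(p_near)*s_near + sqrt(p_far)*s_far) + noise."""
        best, best_pair = np.inf, None
        for s_n, s_f in product(QPSK, repeat=2):
            cand = h * (np.sqrt(p_near) * s_n + np.sqrt(p_far) * s_f)
            d = abs(y - cand) ** 2           # squared Euclidean metric
            if d < best:
                best, best_pair = d, (s_n, s_f)
        return best_pair
    ```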

  • Node-to-Node Disjoint Paths Problem in Möbius Cubes

    David KOCIK  Keiichi KANEKO  

     
    PAPER-Dependable Computing
    Publicized: 2017/04/25  Vol: E100-D No:8  Page(s): 1837-1843

    The Möbius cube is a variant of the hypercube. Its advantage is that it connects the same number of nodes as a hypercube but with almost half the diameter. We propose an algorithm that solves the node-to-node disjoint paths problem in n-Möbius cubes in time polynomial in n. We prove the algorithm correct and show that its time complexity is O(n²) and that the maximum path length is 3n-5.
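
    The Möbius cube's structure is compact enough to sketch. Assuming the standard definition (bit i is complemented when bit i+1 is 0, as in a hypercube; bits i down to 0 are all complemented when bit i+1 is 1; the out-of-range top guide bit is fixed to 0 or 1, giving the 0- and 1-Möbius variants), a neighbor function is:

    ```python
    def mobius_neighbors(x, n, variant=0):
        """Neighbors of node x (an n-bit integer) in the n-Möbius cube."""
        nbrs = []
        for i in range(n):
            guide = variant if i == n - 1 else (x >> (i + 1)) & 1
            if guide == 0:
                nbrs.append(x ^ (1 << i))              # flip bit i only
            else:
                nbrs.append(x ^ ((1 << (i + 1)) - 1))  # flip bits i..0
        return nbrs

    print(mobius_neighbors(0b000, 3))  # -> [1, 2, 4] in the 0-Möbius cube
    ```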

  • Design of a High-Throughput Sliding Block Viterbi Decoder for IEEE 802.11ac WLAN Systems

    Kai-Feng XIA  Bin WU  Tao XIONG  Cheng-Ying CHEN  

     
    PAPER-Digital Signal Processing
    Vol: E100-A No:8  Page(s): 1606-1614

    This paper presents a high-throughput sliding block Viterbi decoder for IEEE 802.11ac systems. A 64-state bidirectional sliding block Viterbi method is proposed to meet the speed requirement of the system. The decoder throughput reaches 640 Mbps and can be increased further through block parallelism. Moreover, a modified add-compare-select (ACS) unit is designed to raise the operating frequency; it achieves a speed-up of nearly 26% over the conventional ACS unit with almost the same area overhead and power dissipation. The decoder is designed in a SMIC 0.13 µm technology; it occupies a 1.96 mm² core area and consumes 105 mW, with an energy efficiency of 0.1641 nJ/bit at a 1.2 V supply.
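
    The ACS recursion at the decoder's core is, schematically, as below: a plain radix-2 formulation for a 64-state shift-register trellis. The paper's modified ACS restructures this datapath for speed, which is not modeled here.

    ```python
    def acs_step(pm, bm):
        """pm[s]: accumulated path metric of state s; bm[p][s]: branch
        metric of the transition p -> s. Returns the updated metrics and
        one survivor bit per state."""
        n = len(pm)
        new_pm, surv = [0.0] * n, [0] * n
        for s in range(n):
            p0 = s >> 1                    # predecessor with oldest bit 0
            p1 = (s >> 1) | (n >> 1)       # predecessor with oldest bit 1
            m0 = pm[p0] + bm[p0][s]        # add
            m1 = pm[p1] + bm[p1][s]        # add
            new_pm[s], surv[s] = min((m0, 0), (m1, 1))  # compare-select
        return new_pm, surv
    ```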

  • Development of Wireless Access and Flexible Networking Technologies for 5G Cellular Systems Open Access

    Seiichi SAMPEI  

     
    INVITED PAPER-Wireless Communication Technologies
    Publicized: 2017/02/08  Vol: E100-B No:8  Page(s): 1174-1180

    This paper discusses key technologies for fifth generation (5G) cellular systems, which are expected to connect Internet of Things (IoT) based vertical sectors. Because 5G services will expand drastically, from information transfer services to mission-critical and massive-connection IoT services for vertical sectors, the requirements for cellular systems become quite different from those of fourth generation (4G) systems. After an explanation of the service and technical trends for 5G, key wireless access technologies are discussed, especially from the viewpoint of what is new and why it is important. In addition to introducing new wireless access technologies, the flexibility of networking is also discussed, because it can support QoS-sensitive services, especially under end-to-end latency constraints. This paper therefore also discusses flexible network configuration using mobile edge computing (MEC) based on software defined networking (SDN) and network slicing.

  • Fine-Grained Analysis of Compromised Websites with Redirection Graphs and JavaScript Traces

    Yuta TAKATA  Mitsuaki AKIYAMA  Takeshi YAGI  Takeshi YADA  Shigeki GOTO  

     
    PAPER-Internet Security
    Publicized: 2017/05/18  Vol: E100-D No:8  Page(s): 1714-1728

    An incident response organization such as a CSIRT contributes to preventing the spread of malware infection by analyzing compromised websites and sending abuse reports with detected URLs to webmasters. However, abuse reports containing only URLs are not sufficient to clean up the websites. In addition, it is difficult to analyze malicious websites across different client environments because these websites change their behavior depending on the client environment. To expedite compromised-website clean-up, it is important to provide fine-grained information such as malicious URL relations, the precise position of compromised web content, and the range of targeted client environments. In this paper, we propose a new method of constructing a redirection graph with context, i.e., which web content redirects to malicious websites. The proposed method analyzes a website in a multi-client environment to identify which client environments are exposed to threats. We evaluated our system using crawling datasets of approximately 2,000 compromised websites. The results show that our system successfully identified malicious URL relations and compromised web content, reducing the number of URLs and the amount of web content to be analyzed by incident responders to 15.0% and 0.8%, respectively. Furthermore, it identified the range of targeted client environments in 30.4% of websites and, by leveraging the target information, a vulnerability exploited by the malicious websites. This fine-grained analysis would contribute to improving the daily work of incident responders.
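
    A toy model of a "redirection graph with context" might look as follows. The class, fields, and example values are invented for illustration and are not the paper's schema.

    ```python
    from dataclasses import dataclass, field

    @dataclass
    class RedirectionGraph:
        """Each edge records which piece of web content (e.g. an injected
        script tag or iframe) triggered the redirection, and under which
        client profile it was observed."""
        edges: list = field(default_factory=list)

        def add(self, src_url, dst_url, trigger, client):
            self.edges.append((src_url, dst_url, trigger, client))

        def exposed_clients(self, malicious_url):
            """Client environments that reached a known-malicious URL."""
            return {c for s, d, t, c in self.edges if d == malicious_url}

        def compromised_content(self, malicious_url):
            """Content on the victim site that led to the malicious URL."""
            return {t for s, d, t, c in self.edges if d == malicious_url}

    g = RedirectionGraph()
    g.add("http://victim.example/", "http://mal.example/ek",
          "injected <script>", "IE8 / Flash 11")
    print(g.exposed_clients("http://mal.example/ek"))
    ```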

  • A Balanced Decision Tree Based Heuristic for Linear Decomposition of Index Generation Functions

    Shinobu NAGAYAMA  Tsutomu SASAO  Jon T. BUTLER  

     
    PAPER-Logic Design
    Publicized: 2017/05/19  Vol: E100-D No:8  Page(s): 1583-1591

    Index generation functions model content-addressable memory, and are useful in virus detectors and routers. Linear decompositions yield simpler circuits that realize index generation functions. This paper proposes a balanced decision tree based heuristic to efficiently design linear decompositions for index generation functions. The proposed heuristic finds a good linear decomposition of an index generation function by using appropriate cost functions and a constraint to construct a balanced tree. Since the proposed heuristic is fast and requires a small amount of memory, it is applicable even to large index generation functions that cannot be solved in a reasonable time by existing heuristics. This paper analyzes the time and space complexities of the proposed heuristic and presents experimental results on large examples to show its efficiency.
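
    The flavor of the approach can be sketched as a greedy search for compound (XOR) variables that split the registered vectors evenly. This global-greedy toy omits the paper's actual tree construction and cost functions; all names are hypothetical, and the registered vectors are assumed distinct.

    ```python
    from itertools import combinations

    def balanced_linear_decomposition(vectors, max_xor=2):
        """Pick compound variables (XORs of at most max_xor inputs) one at
        a time, each chosen to split the vectors as evenly as possible,
        until every registered vector gets a distinct key."""
        n = len(vectors[0])
        candidates = [c for r in range(1, max_xor + 1)
                      for c in combinations(range(n), r)]
        chosen, keys = [], {v: () for v in vectors}
        while len(set(keys.values())) < len(vectors):
            def balance(c):
                ones = sum(1 for v in vectors if sum(v[i] for i in c) % 2)
                return abs(len(vectors) - 2 * ones)  # 0 = perfectly balanced
            best = min(candidates, key=balance)
            candidates.remove(best)
            chosen.append(best)
            keys = {v: keys[v] + (sum(v[i] for i in best) % 2,)
                    for v in vectors}
        return chosen  # e.g. [(0,), (1, 3)] means y1 = x0, y2 = x1 ^ x3

    vecs = [(0, 0, 1, 1), (0, 1, 0, 1), (1, 0, 0, 1), (1, 1, 1, 0)]
    print(balanced_linear_decomposition(vecs))
    ```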

  • A Fast Updatable Implementation of Index Generation Functions Using Multiple IGUs

    Tsutomu SASAO  

     
    PAPER-Logic Design
    Publicized: 2017/05/19  Vol: E100-D No:8  Page(s): 1574-1582

    This paper presents a method to realize index generation functions using multiple Index Generation Units (IGUs). This architecture implements index generation functions more efficiently than a single IGU when the number of registered vectors is very large. The paper proves that independent linear transformations in the IGUs are necessary for an efficient realization, and experimental results confirm this. Finally, it presents a fast method for updating the IGUs.
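
    A toy software model of the multi-IGU idea, assuming a simple reading of an IGU: a linear, XOR-based hash into a memory that stores the vector and its index, with colliding vectors spilling into the next IGU (which uses an independent linear transformation). All names are illustrative.

    ```python
    import random

    class IGU:
        """p address bits, each the XOR of a random subset of n inputs;
        the memory stores (vector, index) so collisions are rejected."""
        def __init__(self, n, p, seed):
            rng = random.Random(seed)
            self.rows = [[i for i in range(n) if rng.random() < 0.5]
                         for _ in range(p)]
            self.mem = {}

        def addr(self, v):
            return tuple(sum(v[i] for i in row) % 2 for row in self.rows)

        def insert(self, v, idx):
            a = self.addr(v)
            if a in self.mem:
                return False              # collision: spill to the next IGU
            self.mem[a] = (v, idx)
            return True

        def lookup(self, v):
            hit = self.mem.get(self.addr(v))
            return hit[1] if hit and hit[0] == v else None

    def build_multi_igu(vectors, n, p, units=3):
        """Place each registered vector in the first IGU where it fits."""
        igus = [IGU(n, p, seed=s) for s in range(units)]
        for idx, v in enumerate(vectors, start=1):
            if not any(igu.insert(v, idx) for igu in igus):
                raise RuntimeError("all IGUs collided; add units or raise p")
        return igus

    def lookup_multi(igus, v):
        """Return the index of v, or 0 if v is not registered."""
        return next((i for i in (igu.lookup(v) for igu in igus) if i), 0)
    ```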

  • A Systematic Methodology for Design and Worst-Case Error Analysis of Approximate Array Multipliers

    Takahiro YAMAMOTO  Ittetsu TANIGUCHI  Hiroyuki TOMIYAMA  Shigeru YAMASHITA  Yuko HARA-AZUMI  

     
    LETTER
    Vol: E100-A No:7  Page(s): 1496-1499

    Approximate computing is considered a promising approach to the design of power- or area-efficient digital circuits. This paper proposes a systematic methodology for the design and worst-case accuracy analysis of approximate array multipliers. Our methodology systematically designs a series of approximate array multipliers with different area, delay, power, and accuracy characteristics so that an LSI designer can select the one that best fits the requirements of the application. Our experiments explore the trade-offs among area, delay, power, and accuracy of the approximate multipliers.
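
    One common approximation style, dropping the partial products that feed the least significant columns, makes worst-case analysis concrete: for small widths the error can simply be enumerated exhaustively. This sketch is not the paper's specific design series.

    ```python
    def approx_multiply(a, b, width=4, cut=2):
        """4x4 array multiplier that drops all partial products whose
        column weight is below `cut` (one common approximation style)."""
        acc = 0
        for i in range(width):
            for j in range(width):
                if i + j >= cut and (b >> j) & 1:
                    acc += ((a >> i) & 1) << (i + j)
        return acc

    # Exhaustive worst-case error analysis is feasible for small widths:
    worst = max(abs(a * b - approx_multiply(a, b))
                for a in range(16) for b in range(16))
    print("worst-case absolute error:", worst)
    ```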

  • CF3: Test Suite for Arithmetic Optimization of C Compilers

    Yusuke HIBINO  Hirofumi IKEO  Nagisa ISHIURA  

     
    LETTER
    Vol: E100-A No:7  Page(s): 1511-1512

    This letter presents CF3, a test suite designed to find bugs in the arithmetic optimizers of C compilers. It consists of 13,720 test programs containing expression patterns that cover all permutations of 3 operators chosen from 14 operators. CF3 detected more than 70 errors in GCC 4.2-4.5 within 2 hours.
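
    The combinatorial core, one test program per permutation of 3 distinct operators, can be sketched as below. The operator set, program shape, and constants are guesses: 14P3 = 2,184, so CF3's 13,720 programs evidently also vary expression shapes and operand types.

    ```python
    from itertools import permutations

    OPS = ['+', '-', '*', '/', '%', '<<', '>>', '&', '|', '^',
           '<', '>', '==', '!=']          # an assumed set of 14 C operators

    def gen_tests():
        for k, (a, b, c) in enumerate(permutations(OPS, 3)):
            if b in ('/', '%') and eval(f"3 {c} 2") == 0:
                continue                   # avoid division by zero at run time
            yield f"t{k}.c", (
                '#include <stdio.h>\n'
                'int main(void) { volatile int x = 7, y = 3, z = 2;\n'
                f'  printf("%d\\n", (x {a} y) {b} (y {c} z));\n'
                '  return 0; }\n')

    name, src = next(gen_tests())
    print(name + "\n" + src)  # diff program output across -O0/-O2 to spot bugs
    ```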

  • A New Bayesian Network Structure Learning Algorithm Mechanism Based on the Decomposability of Scoring Functions

    Guoliang LI  Lining XING  Zhongshan ZHANG  Yingwu CHEN  

     
    PAPER-Graphs and Networks
    Vol: E100-A No:7  Page(s): 1541-1551

    Bayesian networks are a powerful approach for representation and reasoning under uncertainty. Among the many good algorithms for learning Bayesian networks from data, bio-inspired search algorithms are among the most effective. In this paper, we propose a hybrid mutual information-modified binary particle swarm optimization (MI-MBPSO) algorithm. The technique first constructs a network based on MI to improve the quality of the initial population, and then uses the decomposability of the scoring function to modify the BPSO algorithm. Experimental results show that the proposed hybrid algorithm outperforms various state-of-the-art structure learning algorithms.
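
    For reference, a plain textbook binary PSO bit update (sigmoid transfer on the velocity) over an adjacency-vector encoding looks like this; the paper's MI-based initialization and decomposability-based modification are not shown.

    ```python
    import math, random

    def bpso_step(position, velocity, pbest, gbest,
                  w=0.8, c1=2.0, c2=2.0, vmax=4.0):
        """One binary PSO update of a particle's bit vector."""
        for i in range(len(position)):
            velocity[i] = (w * velocity[i]
                           + c1 * random.random() * (pbest[i] - position[i])
                           + c2 * random.random() * (gbest[i] - position[i]))
            velocity[i] = max(-vmax, min(vmax, velocity[i]))
            sig = 1.0 / (1.0 + math.exp(-velocity[i]))   # sigmoid transfer
            position[i] = 1 if random.random() < sig else 0
        return position, velocity
    ```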

  • Distributed Optimization with Incomplete Information for Heterogeneous Cellular Networks

    Haibo DAI  Chunguo LI  Luxi YANG  

     
    LETTER-Numerical Analysis and Optimization
    Vol: E100-A No:7  Page(s): 1578-1582

    In this letter, we propose two robust, distributed game-based algorithms, modifications of the two algorithms proposed in [1], to solve the joint base station selection and resource allocation problem with imperfect information in heterogeneous cellular networks (HCNs). In particular, we repeatedly sample the received payoffs in the exploitation stage of each algorithm to guarantee convergence when the payoffs of some users (UEs) in [1] cannot be acquired accurately for some reason. We then derive a rational sampling number and prove the convergence of the modified algorithms. Finally, simulation results demonstrate that the two modified algorithms converge well and are robust under incomplete information.

  • Well-Balanced Successive Simple-9 for Inverted Lists Compression

    Kun JIANG  Yuexiang YANG  Qinghua ZHENG  

     
    PAPER-Data Engineering, Web Information Systems
    Publicized: 2017/04/17  Vol: E100-D No:7  Page(s): 1416-1424

    The growth in the amount of information available on the Internet, together with thousands of user queries per second, poses huge challenges for the index update and query processing of search engines. Index compression is partially responsible for the current performance achievements of existing search engines. The selection of an index compression algorithm must weigh three factors: compression ratio, compression speed, and decompression speed. In this paper, we study the well-known Simple-9 compression scheme, which performs many branch operations, table lookups, and data transfers when processing each 32-bit machine word. To enhance the compression and decompression performance of the Simple-9 algorithm, we propose a successive storage structure and processing method that compresses two successive Simple-9-encoded sequences of integers in a single data processing procedure, hence the name Successive Simple-9 (SSimple-9). In essence, the algorithm shortens the chain of branch, table-lookup, and data-transfer operations when compressing an integer sequence. More precisely, we first present the data storage format and mask table of the SSimple-9 algorithm. Then, for each mode in the mask table, we design and hard-code the main steps of the compression and decompression processes. Finally, analysis and comparison of experimental results on simulation and TREC datasets show the compression and decompression speedups of the proposed SSimple-9 algorithm.
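
    Baseline Simple-9 packing is easy to sketch: each 32-bit word spends 4 bits on a selector and 28 bits on as many equally sized fields as fit. The mode table below is the standard Simple-9 one; SSimple-9's successive two-sequence layout is not reproduced here.

    ```python
    MODES = [(28, 1), (14, 2), (9, 3), (7, 4), (5, 5),
             (4, 7), (3, 9), (2, 14), (1, 28)]  # Simple-9: count x bit-width

    def simple9_pack(values):
        """Greedily pack non-negative integers into 32-bit words: for each
        word, choose the densest mode whose field width fits the next
        `count` values."""
        out, i = [], 0
        while i < len(values):
            for sel, (count, bits) in enumerate(MODES):
                chunk = values[i:i + count]
                if len(chunk) == count and all(v < (1 << bits) for v in chunk):
                    word = sel                       # 4-bit selector
                    for k, v in enumerate(chunk):
                        word |= v << (4 + k * bits)  # 28 data bits
                    out.append(word)
                    i += count
                    break
            else:
                raise ValueError("value >= 2**28 cannot be packed")
        return out

    print([hex(w) for w in simple9_pack([1, 0, 1, 1, 3, 7, 2])])
    ```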

  • The Structural Vulnerability Analysis of Power Grids Based on Second-Order Centrality

    Zhong-Jian KANG  Yi-Jia ZHANG  Xin-Ling GUO  Zhe-Ming LU  

     
    LETTER-Systems and Control
    Vol: E100-A No:7  Page(s): 1567-1570

    The application of complex network theory to power grid analysis has been a hot topic in recent years, and it mainly manifests itself in four aspects: modeling power system networks; revealing the topology of the grid itself; revealing the inherent vulnerabilities and weaknesses of the power network and proposing targeted improvement measures to guide power grid construction; and analyzing the mechanism of cascading failures and establishing cascading fault models of large blackouts. In the past ten years, many researchers have used complex network theory to investigate the structural vulnerability of power grids from a topological point of view. This letter studies the structural vulnerability of power grids according to the effect of selective node removal. We apply several node centralities, including the recently presented second-order centrality (SOC), to guide node removal attacks, and test their effectiveness on several IEEE power grids. Simulation results show that, compared with other node centralities, the SOC is relatively effective in guiding node removal and can destroy a power grid with negative degree-degree correlation in fewer steps.
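
    The attack simulation is straightforward to reproduce in outline with networkx: repeatedly remove the most central node and track the giant component. This sketch uses a random graph rather than the IEEE test grids; note that networkx's `second_order_centrality` requires a connected graph and treats lower values as more central, so the selection rule must be flipped for SOC.

    ```python
    import networkx as nx

    def attack_curve(G, centrality=nx.betweenness_centrality, lowest=False):
        """Remove the most central node (recomputing centrality each step)
        and record the relative size of the largest connected component.
        For SOC: centrality=nx.second_order_centrality, lowest=True."""
        G, n, curve = G.copy(), G.number_of_nodes(), []
        while G.number_of_nodes() > 1:
            c = centrality(G)
            target = (min if lowest else max)(c, key=c.get)
            G.remove_node(target)
            curve.append(max(len(cc)
                             for cc in nx.connected_components(G)) / n)
        return curve

    print(attack_curve(nx.erdos_renyi_graph(50, 0.1, seed=1))[:5])
    ```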

  • Design of an Application Specific Instruction Set Processor for Real-Time Object Detection Using AdaBoost Algorithm

    Shanlin XIAO  Tsuyoshi ISSHIKI  Dongju LI  Hiroaki KUNIEDA  

     
    PAPER
    Vol: E100-A No:7  Page(s): 1384-1395

    Object detection is at the heart of nearly all computer vision systems. Standard off-the-shelf embedded processors can hardly meet the trade-offs among performance, power consumption, and flexibility required by object detection applications. Therefore, this paper presents an Application Specific Instruction set Processor (ASIP) for object detection using an AdaBoost-based learning algorithm with Haar-like features as weak classifiers. Algorithm optimizations are employed to reduce the memory bandwidth requirements without losing reliability. The proposed ASIP adopts a Single Instruction Multiple Data (SIMD) architecture to fully exploit the data-level parallelism inherent in the target algorithm. By adding pipeline stages, application-specific hardware components, and custom instructions, the AdaBoost algorithm is accelerated by a factor of 13.7x compared to an optimized pure software implementation. Compared with the ARM946 and TMS320C64+, our ASIP shows 32x and 7x better throughput, 10x and 224x better area efficiency, and 6.8x and 18.8x better power efficiency, respectively. Furthermore, compared to hard-wired designs, evaluation results show an advantage of the proposed architecture in terms of chip area efficiency while maintaining reliable performance, achieving real-time object detection at 32 fps on VGA video.
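
    The Haar-feature arithmetic that dominates this workload reduces to a few integral-image lookups per rectangle, which is what makes SIMD acceleration attractive. A minimal sketch of that arithmetic (not the ASIP's instruction set):

    ```python
    import numpy as np

    def integral_image(img):
        """Summed-area table with a zero row/column pad."""
        ii = np.zeros((img.shape[0] + 1, img.shape[1] + 1), dtype=np.int64)
        ii[1:, 1:] = img.cumsum(0).cumsum(1)
        return ii

    def rect_sum(ii, x, y, w, h):
        """Sum of pixels in the w x h rectangle at (x, y), in O(1)."""
        return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

    def haar_two_rect(ii, x, y, w, h):
        """Two-rectangle (left minus right) Haar-like feature; a weak
        classifier thresholds this value, and AdaBoost combines many
        such classifiers."""
        return rect_sum(ii, x, y, w, h) - rect_sum(ii, x + w, y, w, h)

    img = np.arange(64).reshape(8, 8) % 7
    print(haar_two_rect(integral_image(img), 0, 0, 2, 4))
    ```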

  • A Comprehensive Method to Improve Loudness Compensation and High-Frequency Speech Intelligibility for Digital Hearing Aids

    Zhaoyang GUO  Bo WANG  Xin'an WANG  

     
    LETTER-Speech and Hearing
    Vol: E100-A No:7  Page(s): 1552-1556

    A comprehensive method is proposed that applies nonlinear frequency compression (FC) as a complement to multi-band loudness compensation, improving loudness compensation while increasing high-frequency speech intelligibility for digital hearing aids. The proposed nonlinear FC (NLFC) improves on conventional methods by adjusting the compression ratio (CR) according to the speech intelligibility percentage in different frequency ranges. Then, an adaptive wide dynamic range compression (AWDRC) scheme with a time-varying CR is applied to achieve adaptive loudness compensation. Experimental results show that the mean speech identification score is improved in comparison with state-of-the-art methods.
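
    A static single-band WDRC gain curve shows the quantity that AWDRC adapts: gain is flat below the compression threshold and falls by (1 - 1/CR) dB per input dB above it. The parameter values below are arbitrary illustrations; NLFC's frequency remapping and the time-varying CR are omitted.

    ```python
    import numpy as np

    def wdrc_gain_db(level_db, threshold_db=45.0, cr=2.5, max_gain_db=30.0):
        """Static WDRC curve for one band: linear gain below the
        compression threshold, compressive above it."""
        level = np.asarray(level_db, dtype=float)
        over = np.maximum(level - threshold_db, 0.0)
        return max_gain_db - over * (1.0 - 1.0 / cr)

    for spl in (30, 50, 70, 90):
        print(spl, "dB SPL ->", round(float(wdrc_gain_db(spl)), 1), "dB gain")
    ```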

  • A Spatiotemporal Statistical Model for Eyeballs of Human Embryos

    Masashi KISHIMOTO  Atsushi SAITO  Tetsuya TAKAKUWA  Shigehito YAMADA  Hiroshi MATSUZOE  Hidekata HONTANI  Akinobu SHIMIZU  

     
    PAPER-Biological Engineering
    Publicized: 2017/04/17  Vol: E100-D No:7  Page(s): 1505-1515

    During the development of a human embryo, the position of the eyes moves medially and caudally in the viscerocranium. A statistical model of this process can play an important role in embryology by facilitating qualitative analyses of the change. This paper proposes an algorithm to construct a spatiotemporal statistical model of the eyeballs of a human embryo. The proposed algorithm builds a statistical model of the spatial coordinates of the eyeballs independently for each Carnegie stage (CS) using principal component analysis (PCA). In the process, a q-Gaussian distribution with a model selection scheme based on the Akaike information criterion is used to handle a non-Gaussian distribution with a small sample size. Subsequently, the algorithm seamlessly interpolates the statistical models of neighboring CSs; we present 10 interpolation methods. We also propose a CS estimation algorithm that uses our spatiotemporal statistical model. A set of images of eyeballs in human embryos from the Kyoto Collection was used to train the model and assess its performance. The results suggest that information geometry-based interpolation under the assumption of a q-Gaussian distribution is the best modeling method, and the average error in CS estimation was 0.409.
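
    In outline, the per-stage modeling reduces to PCA on stacked eyeball coordinates plus some interpolation rule between neighboring stages. A sketch assuming plain Gaussian PCA and naive linear mean blending (the paper instead selects q-Gaussians by AIC and compares 10 interpolation schemes, including information-geometric ones); names are illustrative.

    ```python
    import numpy as np

    def fit_stage_model(X):
        """X: (samples x features) flattened eyeball coordinates for one
        Carnegie stage. Returns the mean, principal axes, and variances."""
        mean = X.mean(axis=0)
        _, s, vt = np.linalg.svd(X - mean, full_matrices=False)
        return mean, vt, s ** 2 / max(len(X) - 1, 1)

    def interpolate_means(model_a, model_b, t):
        """Naive linear blend of two neighboring stage means, t in [0, 1]."""
        return (1 - t) * model_a[0] + t * model_b[0]
    ```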

  • Reconfiguration of Steiner Trees in an Unweighted Graph

    Haruka MIZUTA  Takehiro ITO  Xiao ZHOU  

     
    PAPER-Algorithms and Data Structures
    Vol: E100-A No:7  Page(s): 1532-1540

    We study a reconfiguration problem for Steiner trees in an unweighted graph: given two Steiner trees, does there exist a sequence of Steiner trees that transforms one into the other by exchanging a single edge at a time? We show that the problem is PSPACE-complete even for split graphs, while it is solvable in linear time for interval graphs and for cographs.
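
    The reconfiguration rule itself is simple to state in code: for tiny instances one can enumerate all Steiner trees with the same number of edges and run BFS over the "exchange one edge" adjacency. This brute force is exponential, consistent with the problem being PSPACE-complete in general; the example graph is invented.

    ```python
    from itertools import combinations
    from collections import deque

    def steiner_trees(edges, terminals, k):
        """All k-edge subtrees of the graph that span every terminal."""
        trees = []
        for sub in combinations(edges, k):
            nodes = {v for e in sub for v in e}
            if not terminals <= nodes or len(nodes) != k + 1:
                continue                  # k edges, k+1 nodes, connected: tree
            start = next(iter(nodes))
            seen, stack = {start}, [start]
            while stack:
                u = stack.pop()
                for a, b in sub:
                    w = b if a == u else a if b == u else None
                    if w is not None and w not in seen:
                        seen.add(w)
                        stack.append(w)
            if seen == nodes:
                trees.append(frozenset(sub))
        return trees

    def reconfigurable(edges, terminals, src, dst):
        """BFS over the reconfiguration graph: two Steiner trees are
        adjacent iff they differ in exactly one edge."""
        trees = set(steiner_trees(edges, terminals, len(src)))
        seen, q = {src}, deque([src])
        while q:
            t = q.popleft()
            if t == dst:
                return True
            for u in trees:
                if len(t ^ u) == 2 and u not in seen:  # exchange one edge
                    seen.add(u)
                    q.append(u)
        return False

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
    T1 = frozenset([(0, 1), (1, 2)])
    T2 = frozenset([(3, 0), (2, 3)])
    print(reconfigurable(edges, {0, 2}, T1, T2))  # -> True
    ```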
