Keyword Search Result

Showing results 5321-5340 of 42807

  • ECG-Based Heartbeat Classification Using Two-Level Convolutional Neural Network and RR Interval Difference

    Yande XIANG  Jiahui LUO  Taotao ZHU  Sheng WANG  Xiaoyan XIANG  Jianyi MENG  

     
    PAPER-Biological Engineering

      Publicized:
    2018/01/12
      Vol:
    E101-D No:4
      Page(s):
    1189-1198

    Arrhythmia classification based on the electrocardiogram (ECG) is crucial in automatic cardiovascular disease diagnosis. The classification methods used in current practice largely depend on hand-crafted features. However, extracting hand-crafted features may introduce significant computational complexity, especially in the transform domains. In this study, an accurate method for patient-specific ECG beat classification is proposed, which adopts morphological features and timing information. For the morphological features of a heartbeat, an attention-based two-level 1-D CNN is incorporated in the proposed method to extract features of different granularity automatically by focusing on various parts of a heartbeat. For the timing information, the difference between the previous and post RR intervals is computed as a dynamic feature. Both the extracted morphological features and the interval difference are fed to a multi-layer perceptron (MLP) for classifying ECG signals. In addition, to reduce the memory storage of ECG data and to denoise to some extent, an adaptive heartbeat normalization technique is adopted, which includes amplitude unification, resolution modification, and signal difference. On the MIT-BIH arrhythmia database, the proposed classification method achieved a sensitivity Sen=93.4% and positive predictivity Ppr=94.9% in ventricular ectopic beat (VEB) detection, a sensitivity Sen=86.3% and positive predictivity Ppr=80.0% in supraventricular ectopic beat (SVEB) detection, and an overall accuracy OA=97.8% under 6-bit ECG signal resolution. Compared with state-of-the-art automatic ECG classification methods, these results show that the proposed method achieves comparable heartbeat classification accuracy even though the ECG signals are represented at a lower resolution.
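
    As a concrete illustration of the normalization and timing features described above, the following Python sketch implements the three named normalization steps (amplitude unification, resolution modification, signal difference) and the RR-interval difference. The function names and the uniform quantizer are illustrative assumptions, not the authors' code.

        import numpy as np

        def normalize_beat(beat, n_bits=6):
            # Amplitude unification: scale the beat into [0, 1].
            beat = (beat - beat.min()) / (beat.max() - beat.min() + 1e-12)
            # Resolution modification: quantize to 2**n_bits uniform levels.
            levels = 2 ** n_bits
            beat = np.round(beat * (levels - 1)) / (levels - 1)
            # Signal difference: first-order difference suppresses baseline drift.
            return np.diff(beat)

        def rr_difference(r_peaks, i):
            # Dynamic timing feature: post RR interval minus previous RR interval.
            prev_rr = r_peaks[i] - r_peaks[i - 1]
            post_rr = r_peaks[i + 1] - r_peaks[i]
            return post_rr - prev_rr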

  • A Study on Quick Device Discovery for Fully Distributed D2D Networks

    Huan-Bang LI  Ryu MIURA  Fumihide KOJIMA  

     
    PAPER

      Publicized:
    2017/09/19
      Vol:
    E101-B No:3
      Page(s):
    628-636

    Device-to-device (D2D) networks are expected to play a number of roles, such as increasing frequency spectrum efficiency and improving throughput at hot spots. In this paper, our interest is in the potential of D2D to reduce delivery latency. To enable fast D2D network formation, quick device discovery is essential. To speed up device discovery, we propose a method that defines and uses a common channel and group channels so as to avoid the channel-scan uncertainty faced by the conventional method. Rules for using the common channel and group channels are designed. We evaluate and compare the discovery performance of the proposed method with the conventional method by using the superframe structure defined in IEEE 802.15.8 and a general discovery procedure. IEEE 802.15.8 is a standard under development for fully distributed D2D communications. A NetLogo simulator is used to perform step-by-step MAC simulations. The simulation results verify the effectiveness of the proposed method.

  • A General Low-Cost Fast Hybrid Reconfiguration Architecture for FPGA-Based Self-Adaptive System

    Rui YAO  Ping ZHU  Junjie DU  Meiqun WANG  Zhaihe ZHOU  

     
    PAPER-Computer System

      Publicized:
    2017/12/18
      Vol:
    E101-D No:3
      Page(s):
    616-626

    Evolvable hardware (EHW) based on field-programmable gate arrays (FPGAs) opens up new possibilities for building efficient adaptive systems. State-of-the-art EHW systems based on virtual reconfiguration and dynamic partial reconfiguration (DPR) both have their limitations: the former has a huge area overhead and circuit delay, while the latter has a slow configuration speed and low flexibility. Therefore, a general low-cost fast hybrid reconfiguration architecture is proposed in this paper, which merges the high flexibility of virtual reconfiguration with the low resource cost of DPR. Moreover, bitstream relocation technology is introduced to save bitstream storage space, and discrepancy configuration technology is adopted to reduce reconfiguration time. An embedded RAM core is adopted to store bitstreams, which accelerates reconfiguration further. The proposed architecture is evaluated by the online evolution of a digital image filter implemented on the Xilinx Virtex-6 FPGA development board ML605. The experimental results show that our system has lower resource overhead, a higher operating frequency, faster reconfiguration, and smaller bitstream storage than previous works.

  • A Heuristic for Constructing Smaller Automata Based on Suffix Sorting and Its Application in Network Security

    Inbok LEE  Victor C. VALGENTI  Min S. KIM  Sung-il OH  

     
    LETTER

      Publicized:
    2017/12/19
      Vol:
    E101-D No:3
      Page(s):
    613-615

    In this paper we show a simple heuristic for constructing smaller automata for a set of regular expressions, based on suffix sorting: finding common prefixes and suffixes in the regular expressions and merging them. This is an important problem in network security. We applied our approach to random and real-world regular expressions. Experimental results showed that our approach yields up to a 12-fold improvement in throughput.
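
    The state-sharing idea behind the heuristic can be sketched for the simplest case of plain string patterns: patterns with a common prefix reuse the automaton states of that prefix. The Python fragment below is a minimal sketch under that simplification; merging common suffixes (found via suffix sorting) and handling full regular expressions, as the letter does, require more machinery.

        def build_prefix_trie(patterns):
            # Patterns sharing a prefix share the trie states for that prefix,
            # which is exactly the saving the merging heuristic exploits.
            root = {}
            for p in patterns:
                node = root
                for ch in p:
                    node = node.setdefault(ch, {})
                node['$'] = True  # accepting-state marker
            return root

        # "timeout" and "timer" share the four states spelling "time".
        trie = build_prefix_trie(["timeout", "timer"])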

  • Pipelined Squarer for Unsigned Integers of Up to 12 Bits

    Seongjin CHOI  Hyeong-Cheol OH  

     
    LETTER-Computer System

      Publicized:
    2017/12/06
      Vol:
    E101-D No:3
      Page(s):
    795-798

    This paper proposes and analyzes a pipelining scheme for a hardware squarer that can square unsigned integers of up to 12 bits. Each stage is designed and adjusted such that the stage delays are well balanced and the critical path delay of the design does not exceed a reference value set on the basis of the analysis. The resulting design has a critical path delay of approximately 3.5 times a full-adder delay. In an implementation using an Intel Stratix V FPGA, the design operates at an approximately 23% higher frequency than the comparable pipelined squarer provided in the Intel library.
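
    One standard way to decompose a squarer into smaller, pipelinable partial products is to split the operand into halves. The behavioral Python model below illustrates such a decomposition for 12-bit inputs; it sketches the general idea only, not the paper's actual stage partitioning or delay balancing.

        def square12(x):
            # Split the 12-bit operand into 6-bit halves a and b, so that
            # (a*2^6 + b)^2 = a^2*2^12 + 2ab*2^6 + b^2; each partial product
            # is small enough to map to a separate pipeline stage.
            assert 0 <= x < (1 << 12)
            a, b = x >> 6, x & 0x3F
            return (a * a << 12) + (a * b << 7) + (b * b)

        # Exhaustive behavioral check over all 12-bit inputs.
        assert all(square12(x) == x * x for x in range(1 << 12))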

  • Symbol Error Probability Performance of Rectangular QAM with MRC Reception over Generalized α-µ Fading Channels

    Furqan Haider QURESHI  Qasim Umar KHAN  Shahzad Amin SHEIKH  Muhammad ZEESHAN  

     
    PAPER-Communication Theory and Signals

      Vol:
    E101-A No:3
      Page(s):
    577-584

    In this paper, a new and accurate analytical model of the symbol error probability of rectangular quadrature amplitude modulation (QAM) in α-µ fading channels is presented for a single-user single-input multi-output environment; it can easily be extended to generalized fading channels. The maximal-ratio combining technique is used at the receiving end, and unified moment generating functions are used to derive the results. The fading mediums considered are independent and non-identical. The mathematical model presented is applicable to slow and frequency-non-selective fading channels only. The final expression is given in terms of the Meijer G-function; it contains single integrals with finite limits so that the mathematical expressions can be evaluated with numerical techniques. The model can also be used to evaluate the symbol error probability of rectangular QAM with spatial diversity over various fading mediums not addressed in this article. To validate the derived analytical expressions, theoretical and simulation results are compared at the end.
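
    For reference, the α-µ model assumed above has the following standard envelope density (Yacoub's form); the paper's own notation and normalization may differ:

        f_R(r) = \frac{\alpha \mu^{\mu} r^{\alpha\mu-1}}{\hat{r}^{\alpha\mu}\,\Gamma(\mu)}
                 \exp\!\left(-\mu \frac{r^{\alpha}}{\hat{r}^{\alpha}}\right),
        \qquad \hat{r} = \left(\mathrm{E}[R^{\alpha}]\right)^{1/\alpha}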

  • Self-Paced Learning with Statistics Uncertainty Prior

    Lihua GUO  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2017/12/13
      Vol:
    E101-D No:3
      Page(s):
    812-816

    Self-paced learning (SPL) gradually trains on data from easy to hard, including more data in the training process in a self-paced manner. The advantage of SPL is its ability to avoid bad local minima, which improves generalization performance. However, an SPL system needs an expert to judge the complexity of the data at the beginning of training. Generally, such an expert does not exist at the beginning and is instead learned by gradually training on the samples. Based on this consideration, we add uncertainty in the complexity judgment to the SPL system and propose self-paced learning with an uncertainty prior (SPUP). To solve our system's optimization function efficiently, iterative optimization and a statistical simulated annealing method are introduced. The final experimental results indicate that our SPUP is more robust to outliers and achieves higher accuracy and lower error than SPL.
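
    The classical SPL loop that SPUP builds on can be sketched in a few lines: samples whose current loss falls below a pace parameter are admitted, and the pace is relaxed each round. The Python fragment below shows only this baseline; SPUP's uncertainty prior and the simulated annealing step are not reproduced, and losses_fn/fit_fn are placeholders for a concrete model.

        import numpy as np

        def self_paced_weights(losses, lam):
            # Hard weighting of classical SPL: v_i = 1 iff loss_i < lam.
            return (np.asarray(losses) < lam).astype(float)

        def spl_train(losses_fn, fit_fn, lam=0.1, growth=1.3, rounds=10):
            # Easy-to-hard loop: train on the currently "easy" subset,
            # then relax the pace so harder samples enter later rounds.
            for _ in range(rounds):
                v = self_paced_weights(losses_fn(), lam)
                fit_fn(v)
                lam *= growth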

  • Joint Bandwidth Scheduling and Routing Method for Large File Transfer with Time Constraint and Its Implementation

    Kazuhiko KINOSHITA  Masahiko AIHARA  Shiori KONO  Nariyoshi YAMAI  Takashi WATANABE  

     
    PAPER-Network

      Publicized:
    2017/09/04
      Vol:
    E101-B No:3
      Page(s):
    763-771

    In recent years, the number of requests to transfer large files over large high-speed computer networks has been increasing rapidly. Typically, these requests are handled in a best-effort manner, which results in unpredictable completion times. In this paper, we consider a model where a transfer request either must be completed by a user-specified deadline or must be rejected if its deadline cannot be satisfied. We propose a bandwidth scheduling method and a routing method for reducing the call-blocking probability in a bandwidth-guaranteed network. Finally, we show their excellent performance by simulation experiments.
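
    The accept-or-reject model can be pictured with a toy admission test: a request is admitted only if the spare bandwidth accumulated before its deadline can carry the whole file. The Python fragment below illustrates the model only; it is not the proposed scheduling or routing method.

        def admissible(size, deadline, free_bw):
            # free_bw[t] is the reservable bandwidth in time slot t; accept
            # the transfer only if the slots before the deadline suffice.
            return sum(free_bw[:deadline]) >= size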

  • FOREWORD Open Access

    Hideki TODE  

     
    FOREWORD

      Vol:
    E101-B No:3
      Page(s):
    603-603

  • Weyl Spreading Sequence Optimizing CDMA

    Hirofumi TSUDA  Ken UMENO  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/09/11
      Vol:
    E101-B No:3
      Page(s):
    897-908

    This paper presents an optimal spreading sequence in the Weyl sequence class, which is similar to the set of Oppermann sequences, for asynchronous CDMA systems. Sequences in the Weyl sequence class have the desired property that the order of cross-correlation is low; they are therefore expected to minimize inter-symbol interference. We evaluate the upper bound on the cross-correlation and odd cross-correlation of spreading sequences in the Weyl sequence class and construct an optimization problem: minimize the upper bound on the absolute values of the cross-correlation and odd cross-correlation. Since this optimization problem is convex, we can derive the optimal spreading sequences as the global solution of the problem. We show their signal-to-interference-plus-noise ratio (SINR) in a special case. From this result, we propose how the initial elements should be assigned, that is, how spreading sequences are assigned to each user. For an asynchronous CDMA system, we also numerically compare our spreading sequences with the Gold codes, the Oppermann sequences, the optimal Chebyshev spreading sequences, and the SP sequences in terms of bit error rate. Our spreading sequence, which yields the global solution, has the highest performance among the spreading sequences tested.
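
    A sequence in the Weyl sequence class can be sketched directly: unit-magnitude chips whose phases advance linearly by an increment ρ, offset by an initial element Δ, the per-user assignment discussed above. The Python fragment below is a minimal sketch of this construction; the paper's exact parameterization may differ.

        import numpy as np

        def weyl_spreading_sequence(rho, delta, length):
            # Chips lie on the unit circle; the phase of chip n is the
            # Weyl sequence value n*rho + delta reduced modulo 1.
            n = np.arange(length)
            return np.exp(2j * np.pi * (n * rho + delta))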

  • Full-Automatic Optic Disc Boundary Extraction Based on Active Contour Model with Multiple Energies

    Yuan GAO  Chengdong WU  Xiaosheng YU  Wei ZHOU  Jiahui WU  

     
    LETTER-Vision

      Vol:
    E101-A No:3
      Page(s):
    658-661

    Efficient optic disc (OD) segmentation plays a significant role in retinal image analysis and retinal disease screening. In this paper, we present a fully automatic segmentation approach called double boundary extraction for OD segmentation. The proposed approach consists of two stages: first, we use an unsupervised learning technique and a statistical method based on OD boundary information to obtain the initial contour adaptively; second, the final optic disc boundary is extracted using the proposed LSO model. The performance of the proposed method is tested on the public DIARETDB1 database, and the experimental results demonstrate the effectiveness and advantage of the proposed method.

  • Zone-Based Energy Aware Data Collection Protocol for WSNs

    Alberto GALLEGOS  Taku NOGUCHI  Tomoko IZUMI  Yoshio NAKATANI  

     
    PAPER-Network

      Publicized:
    2017/08/28
      Vol:
    E101-B No:3
      Page(s):
    750-762

    In this paper we propose the Zone-based Energy Aware data coLlection (ZEAL) protocol. ZEAL is designed for agricultural applications of wireless sensor networks. In this type of application, all data is often routed to a single point (called a “sink” in sensor networks). The overuse of the same routes quickly depletes the energy of the nodes closer to the sink. To mitigate this problem, ZEAL automatically creates zones (groups of nodes) that are independent from each other based on the trajectory of one or more mobile sinks. In this approach, the sinks collect data queued in sub-sinks in each zone. Unlike existing protocols, ZEAL accomplishes its routing tasks without using GPS modules for location awareness or synchronization mechanisms. Additionally, ZEAL provides an energy-saving mechanism at the network layer that puts zones to sleep when no mobile sink is nearby. To evaluate ZEAL, it is compared with the Maximum Amount Shortest Path (MASP) protocol. Our simulations using the ns-3 network simulator show that ZEAL is able to collect a larger number of packets with significantly less energy in the same amount of time.

  • Blind Source Separation and Equalization Based on Support Vector Regression for MIMO Systems

    Chao SUN  Ling YANG  Juan DU  Fenggang SUN  Li CHEN  Haipeng XI  Shenglei DU  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2017/08/28
      Vol:
    E101-B No:3
      Page(s):
    698-708

    In this paper, we first propose two batch blind source separation and equalization algorithms based on support vector regression (SVR) for linear time-invariant multiple-input multiple-output (MIMO) systems. The proposed algorithms combine the conventional cost function of SVR with the error functions of classical on-line algorithms for blind equalization: the error functions of both the constant modulus algorithm (CMA) and the radius directed algorithm (RDA) are contained in the penalty term of the SVR. To recover all sources simultaneously, the cross-correlations of the equalizer outputs are included in the cost functions. Simulation experiments show that the proposed algorithms can recover all sources successfully and compensate for channel distortion simultaneously. With the use of the iterative re-weighted least squares (IRWLS) solution of SVR, the proposed algorithms exhibit low computational complexity. Compared with traditional algorithms, the new algorithms require fewer samples to achieve convergence and leave lower residual interference. For multilevel signals, algorithms based solely on the constant modulus property usually show a relatively high residual error, so we further propose two dual-mode blind source separation and equalization schemes. Of the two, the dual-mode scheme based on SVR requires fewer samples to achieve convergence and further reduces the residual interference.
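
    For context, the constant-modulus error that the penalty term builds on is the classical CMA cost, shown below in standard form, where y_k denotes the equalizer output and s_k the source symbols; how it enters the SVR penalty in the proposed algorithms is not reproduced here:

        J_{\mathrm{CMA}} = \mathrm{E}\!\left[\left(|y_k|^2 - R_2\right)^2\right],
        \qquad R_2 = \frac{\mathrm{E}[|s_k|^4]}{\mathrm{E}[|s_k|^2]}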

  • Polynomial-Space Exact Algorithms for the Bipartite Traveling Salesman Problem

    Mohd SHAHRIZAN OTHMAN  Aleksandar SHURBEVSKI  Hiroshi NAGAMOCHI  

     
    LETTER

      Publicized:
    2017/12/19
      Vol:
    E101-D No:3
      Page(s):
    611-612

    Given an edge-weighted bipartite digraph G=(A,B;E), the Bipartite Traveling Salesman Problem (BTSP) asks to find the minimum cost of a Hamiltonian cycle of G, or determine that none exists. When |A|=|B|=n, the BTSP can be solved using polynomial space in O*(4^(2n) n^(log n)) time by the divide-and-conquer algorithm of Gurevich and Shelah (SIAM Journal on Computing, 16(3), pp.486-502, 1987). We adapt their algorithm to the bipartite case and show an improved time bound of O*(4^(2n)), saving the n^(log n) factor.

  • Efficient Query Dissemination Scheme for Wireless Heterogeneous Sensor Networks

    Sungjun KIM  Daehee KIM  Sunshin AN  

     
    LETTER-Mobile Information Network and Personal Communications

      Vol:
    E101-A No:3
      Page(s):
    649-653

    In this paper, we define a wireless sensor network with multiple types of sensors as a wireless heterogeneous sensor network (WHSN) and propose an efficient query dissemination scheme (EDT) for the WHSN. EDT, based on total dominant pruning, forwards queries only to the nodes holding the data requested by the user, thereby reducing unnecessary packet transmissions. We show that EDT is well suited to the WHSN environment through a variety of simulations.

  • Regulated Transport Network Design Using Geographical Resolution

    Shohei KAMAMURA  Aki FUKUDA  Rie HAYASHI  Yoshihiko UEMATSU  

     
    PAPER-Network

      Publicized:
    2017/08/28
      Vol:
    E101-B No:3
      Page(s):
    805-815

    This paper proposes a regulated transport network design algorithm for IP over dense wavelength division multiplexing (DWDM) networks. When designing an IP over DWDM network, the network operator should consider not only cost-effectiveness and physical constraints such as wavelength colors and chromatic dispersion but also operational policies such as resilience, quality, stability, and operability. To take these policies into account, we propose separating the network design algorithm based on geographical resolution: the policy-regulated intra-area is designed based on this resolution, the cost-optimal inter-area is then designed separately, and the two are finally merged. This approach does not necessarily yield a strictly optimal solution, but it covers network design work otherwise done by humans, which takes a vast amount of time and requires a high skill level. For efficient geographical resolution, we also present a fast graph mining algorithm that can solve the NP-hard subgraph isomorphism problem within practical time. We demonstrate that the resulting network designs satisfy the above policies by visualizing the topology, and we also show that the penalty of applying the approach is trivial.

  • Deep Neural Network Based Monaural Speech Enhancement with Low-Rank Analysis and Speech Present Probability

    Wenhua SHI  Xiongwei ZHANG  Xia ZOU  Meng SUN  Wei HAN  Li LI  Gang MIN  

     
    LETTER-Noise and Vibration

      Vol:
    E101-A No:3
      Page(s):
    585-589

    A monaural speech enhancement method combining a deep neural network (DNN) with low-rank analysis and speech present probability is proposed in this letter. Low-rank and sparse analysis is first applied to the noisy speech spectrogram to obtain an approximate low-rank representation of the noise. Then a joint feature training strategy for DNN-based speech enhancement is presented, which helps the DNN better predict the target speech. To reduce the residual noise in highly overlapping regions and the high-frequency domain, speech present probability (SPP) weighted post-processing is employed to further improve the quality of the speech enhanced by the trained DNN model. Compared with supervised non-negative matrix factorization (NMF) and the conventional DNN method, the proposed method achieves improved speech enhancement performance under stationary and non-stationary conditions.
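
    The low-rank step can be approximated in a few lines: treat the relatively stationary noise as the low-rank part of the magnitude spectrogram. The Python sketch below uses a plain truncated SVD as a stand-in; the letter's actual low-rank and sparse decomposition is more involved.

        import numpy as np

        def low_rank_noise(spectrogram, rank=2):
            # Keep the top singular components as the (stationary) noise
            # estimate; the residual is the speech-dominated part.
            U, s, Vt = np.linalg.svd(spectrogram, full_matrices=False)
            noise = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            return noise, spectrogram - noise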

  • Comparative Study between Two Approaches Using Edit Operations and Code Differences to Detect Past Refactorings

    Takayuki OMORI  Katsuhisa MARUYAMA  

     
    PAPER-Software Engineering

      Publicized:
    2017/11/27
      Vol:
    E101-D No:3
      Page(s):
    644-658

    Understanding which refactoring transformations were performed is in demand in modern software development. Traditionally, many researchers have tackled the understanding of code changes using history data derived from version control systems. Those studies point out problems of this traditional approach, such as the entanglement of multiple changes. To alleviate these problems, operation histories recorded in IDE code editors are now available as a new source of software evolution data. By replaying such histories, we can investigate past code changes at a fine-grained level. However, prior studies did not provide enough evidence of their effectiveness for detecting refactoring transformations. This paper describes an experiment in which participants detect refactoring transformations performed by other participants after investigating the code changes with an operation-replay tool and diff tools. The results show that both approaches have factors that lead to misunderstanding and overlooking refactoring transformations. Two negative factors, divided operations and generated compound operations, were observed in the operation-based approach, whereas all the negative factors in the difference-based approach resulted from three problems: tangling, shadowing, and out-of-order code changes. This paper also shows seven concrete examples of participants' mistakes in both approaches. These findings give us hints for improving existing tools for understanding code changes and detecting refactoring transformations.

  • Effects of Automated Transcripts on Non-Native Speakers' Listening Comprehension

    Xun CAO  Naomi YAMASHITA  Toru ISHIDA  

     
    PAPER-Human-computer Interaction

      Publicized:
    2017/11/24
      Vol:
    E101-D No:3
      Page(s):
    730-739

    Previous research has shown that transcripts generated by automatic speech recognition (ASR) technologies can improve the listening comprehension of non-native speakers (NNSs). However, we still lack a detailed understanding of how ASR transcripts affect the listening comprehension of NNSs. To explore this issue, we conducted two studies. The first study examined how the current presentation of ASR transcripts impacted NNSs' listening comprehension. Twenty NNSs engaged in two listening tasks, each in a different condition: C1) audio only and C2) audio plus ASR transcripts. The participants pressed a button whenever they encountered a comprehension problem and explained each problem in subsequent interviews. From our data analysis, we found that NNSs adopted different strategies when using the ASR transcripts; some followed the transcripts throughout the listening, while others checked them only when necessary. NNSs also appeared to face difficulties following imperfect and slightly delayed transcripts while listening to speech; many reported difficulties concentrating on listening/reading or shifting between the two. The second study explored how different display methods for ASR transcripts affected NNSs' listening experiences. We focused on two display methods: 1) an accuracy-oriented display, which shows transcripts only after the completion of speech input analysis, and 2) a speed-oriented display, which shows the interim analysis results of the speech input. We conducted a laboratory experiment with 22 NNSs who engaged in two listening tasks with ASR transcripts presented via the two display methods. We found that the more the NNSs paid attention to listening to the audio, the more they tended to prefer the speed-oriented transcripts, and vice versa. Mismatched transcripts were found to have negative effects on NNSs' listening comprehension. Our findings have implications for improving the presentation of ASR transcripts to more effectively support NNSs.

  • Corpus Expansion for Neural CWS on Microblog-Oriented Data with λ-Active Learning Approach

    Jing ZHANG  Degen HUANG  Kaiyu HUANG  Zhuang LIU  Fuji REN  

     
    PAPER-Natural Language Processing

      Publicized:
    2017/12/08
      Vol:
    E101-D No:3
      Page(s):
    778-785

    Microblog data contains rich information about real-world events and has great commercial value, so microblog-oriented natural language processing (NLP) tasks have attracted considerable attention from researchers. However, the performance of microblog-oriented Chinese word segmentation (CWS) based on deep neural networks (DNNs) is still not satisfactory. One critical reason is that the existing microblog-oriented training corpus is inadequate for training effective weight matrices for DNNs. In this paper, we propose a novel active learning method to extend the scale of the training corpus for DNNs. However, because of the large number of partially overlapping sentences in microblogs, it is difficult to select samples with high annotation value from raw microblogs during the active learning procedure. To select samples with higher annotation value, a parameter λ is introduced to control the number of repeatedly selected samples. Meanwhile, various strategies are adopted to measure the overall annotation value of a sample during the active learning procedure. Experiments on the benchmark datasets of NLPCC 2015 show that our λ-active learning method outperforms the baseline system and the state-of-the-art method. The results also demonstrate that the performance of DNNs trained on the extended corpus is significantly improved.
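
    The selection step can be sketched as follows: rank candidate sentences by an annotation-value score and let λ cap how many nearly identical samples may be chosen. The Python fragment below is a toy sketch; the prefix key is a crude stand-in for the paper's overlap measure, and value_fn is a placeholder for its value-scoring strategies.

        def select_samples(candidates, value_fn, budget, lam):
            chosen, seen = [], {}
            for s in sorted(candidates, key=value_fn, reverse=True):
                key = s[:20]  # toy proxy for partially overlapping sentences
                if seen.get(key, 0) < lam:  # lam caps repeated selections
                    chosen.append(s)
                    seen[key] = seen.get(key, 0) + 1
                if len(chosen) == budget:
                    break
            return chosen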
