
Keyword Search Result

[Keyword] ATI (18690 hits)

Showing results 2801-2820 of 18690

  • Design of New Spatial Modulation Scheme Based on Quaternary Quasi-Orthogonal Sequences

    Hojun KIM  Yulong SHANG  Taejin JUNG  

     
    PAPER-Wireless Communication Technologies
    Publicized: 2017/06/02
    Vol: E100-B No:12
    Page(s): 2129-2138

    In this paper, we propose a new spatial modulation (SM) scheme based on quaternary quasi-orthogonal sequences (Q-QOSs), referred to as Q-QOS-SM. First, the conventional SM and generalized-SM (GSM) schemes are reinterpreted as a new transmission scheme based on a spatial modulation matrix (SMM), whose column indices, unlike in the conventional schemes, are used for the mapping of spatial-information bits. Next, by adopting an SMM composed of Q-QOSs, we design the proposed Q-QOS-SM, which guarantees twice as many spatial bits at the transmitter as conventional SM under the same constraint on the number of transmit antennas. Computer-simulation results show that Q-QOS-SM achieves greatly improved throughput compared with the conventional SM and GSM schemes, especially for a large number of receive antennas. This is mainly because the new scheme offers a much higher minimum Euclidean distance than the other schemes.
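
    The SMM-based mapping described above can be illustrated with a minimal sketch in which spatial bits select a column of the SMM and the data symbol scales that column across the transmit antennas. The 2x4 matrix and the bit-to-column mapping below are hypothetical placeholders, not the paper's Q-QOS construction.

```python
import numpy as np

# Hypothetical 2x4 spatial modulation matrix (SMM): 2 transmit antennas, 4 columns.
# The paper builds its SMM from quaternary quasi-orthogonal sequences, not shown here.
SMM = np.array([[1,  1,  1,  1],
                [1, -1, 1j, -1j]])

def smm_transmit(spatial_bits, data_symbol):
    """Map spatial bits to an SMM column index and scale the column by the data symbol."""
    col = int("".join(map(str, spatial_bits)), 2)   # e.g. [1, 0] -> column 2
    return data_symbol * SMM[:, col]                # length-2 transmit vector

x = smm_transmit([1, 0], (1 + 1j) / np.sqrt(2))
print(x)  # signals emitted from the two transmit antennas
```

    With two transmit antennas, conventional SM carries one spatial bit, while a four-column SMM carries two, which illustrates the doubling of spatial bits referred to in the abstract.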

  • Blur Map Generation Based on Local Natural Image Statistics for Partial Blur Segmentation

    Natsuki TAKAYAMA  Hiroki TAKAHASHI  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2017/09/05
    Vol: E100-D No:12
    Page(s): 2984-2992

    Partial blur segmentation is one of the most interesting topics in computer vision, and it has practical value. The generation of blur maps is a crucial part of partial blur segmentation, because partial blur segmentation involves producing a blur map and then applying a segmentation algorithm to it. In this study, we address two important issues to improve the discrimination of blur maps: (1) estimating a local blur feature that is robust to variations in the intensity amplitude, and (2) designing a scheme for generating blur maps. We propose the ANGHS (Amplitude-Normalized Gradient Histogram Span) as a local blur feature. ANGHS represents the heavy-tailedness of a gradient distribution calculated from an image gradient normalized by the intensity amplitude. ANGHS is robust to variations in the intensity amplitude and can handle local regions more appropriately than previously proposed local blur features. Blur maps are affected not only by local blur features but also by the contents and sizes of local regions and by the assignment of blur-feature values to pixels. Thus, multiple-sized grids and EAI (Edge-Aware Interpolation) are employed in these tasks to improve the discrimination of blur maps. The discrimination of the generated blur maps is evaluated visually and statistically using numerous partial blur images. Comparisons with the results obtained by state-of-the-art methods demonstrate the high discrimination of the blur maps generated using the proposed method.
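
    The ANGHS idea can be sketched roughly as follows, under the assumption that the intensity amplitude is the max-min range of the local region and that the histogram "span" is approximated by a percentile spread of the normalized gradient magnitudes; the paper's exact heavy-tailedness measure may differ.

```python
import numpy as np

def anghs(region, lo=5, hi=95):
    """Rough sketch of an amplitude-normalized gradient histogram span.

    Assumptions (not the paper's exact formulation): the intensity amplitude is
    the max-min range of the region, and the span of the gradient histogram is
    taken as a percentile spread of the normalized gradient magnitudes, used
    here as a proxy for heavy-tailedness.
    """
    region = region.astype(np.float64)
    amplitude = region.max() - region.min() + 1e-8     # intensity amplitude
    gy, gx = np.gradient(region)
    grad_mag = np.hypot(gx, gy) / amplitude            # amplitude-normalized gradient
    p_lo, p_hi = np.percentile(grad_mag, [lo, hi])
    return p_hi - p_lo                                 # larger span -> sharper region

# Sharper regions have heavier-tailed gradient distributions and hence larger spans.
```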

  • Digital Frequency Discriminator (DFD) Improvement by LO Leakage and I/Q Imbalance Compensation

    Won CHOI  Kyung Heon KOO  

     
    PAPER-Navigation, Guidance and Control Systems
    Publicized: 2017/05/26
    Vol: E100-B No:12
    Page(s): 2164-2171

    This study presents the design of a phase correlator for a digital frequency discriminator (DFD) that operates in the 2.0-6.0GHz frequency range. By deriving analytic equations relating LO-RF isolation to phase performance, we analyzed how the isolation of the correlator mixer determines the accuracy of frequency discrimination and found that LO-RF isolation has a significant effect on the frequency discrimination error. We propose a novel technique (phase sector compensation) to improve the accuracy of frequency discrimination. The phase sector compensation technique reduces the phase error by canceling the DC offset of the I and Q signals only in the frequency bands where the mixer's LO-RF isolation is below a specified limit. In the 2.0-6.0GHz range, the phase error of the designed phase correlator was decreased from 4.57° to 4.23° (RMS), and the frequency accuracy was improved from 1.02MHz to 0.95MHz (RMS). In the 4.8-6.0GHz range, the RMS phase error was improved from 5.59° to 4.12°, the frequency accuracy was improved from 1.24MHz to 0.92MHz, and the performance of the DFD correlator was improved by 26.3% in the frequency sector where LO-RF isolation was poor. Overall, the DFD correlator performance was improved by LO leakage compensation.
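
    The compensation step can be sketched as follows: the DC offset of the I/Q correlator outputs (caused by LO leakage) is removed only for bands whose LO-RF isolation is below a limit, before the phase is computed. The isolation threshold and the surrounding correlator logic are placeholders, not the paper's design values.

```python
import numpy as np

def compensated_phase(i_samples, q_samples, isolation_db, isolation_limit_db=20.0):
    """Sketch: cancel the I/Q DC offset (LO leakage) only when isolation is poor.

    isolation_limit_db is a hypothetical threshold, not the paper's value.
    """
    i = np.asarray(i_samples, dtype=np.float64)
    q = np.asarray(q_samples, dtype=np.float64)
    if isolation_db < isolation_limit_db:        # poor LO-RF isolation in this band
        i = i - i.mean()                         # remove DC offset caused by LO leakage
        q = q - q.mean()
    return np.arctan2(q, i)                      # phase used for frequency discrimination
```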

  • Modality Selection Attacks and Modality Restriction in Likelihood-Ratio Based Biometric Score Fusion

    Takao MURAKAMI  Yosuke KAGA  Kenta TAKAHASHI  

     
    PAPER-Biometrics
    Vol: E100-A No:12
    Page(s): 3023-3037

    The likelihood-ratio based score level fusion (LR fusion) scheme is known as one of the most promising multibiometric fusion schemes. This scheme verifies a user by computing a log-likelihood ratio (LLR) for each modality and comparing the total LLR to a threshold. In practice, genuine LLRs can tend to be less than 0 for some modalities (e.g., the user is a “goat”, who is inherently difficult to recognize, for some modalities, or the user suffers from temporary physical conditions such as injuries and illness). The LR fusion scheme can handle such cases by allowing the user to select a subset of modalities at the authentication phase and setting the LLRs corresponding to missing query samples to 0. A recent study, however, proposed a modality selection attack, in which an impostor inputs only query samples whose LLRs are greater than 0 (i.e., takes an optimal strategy), and proved that this attack degrades the overall accuracy even if the genuine user also takes this optimal strategy. In this paper, we investigate the impact of the modality selection attack in more detail. Specifically, we investigate whether the overall accuracy is improved by eliminating “goat” templates, whose LLRs tend to be less than 0 for genuine users, from the database (i.e., restricting modality selection). As an overall performance measure, we use the KL (Kullback-Leibler) divergence between the genuine score distribution and the impostor score distribution. We first prove that modality restriction hardly increases the KL divergence when a user can select a subset of modalities (i.e., selective LR fusion). We then prove that modality restriction increases the KL divergence when a user needs to input all biometric samples (i.e., non-selective LR fusion). We conduct experiments using three real datasets (NIST BSSR1 Set1, Biosecure DS2, and CASIA-Iris-Thousand), and discuss future directions for multibiometric fusion systems.
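
    The fusion rule described above is simple enough to sketch directly: sum the per-modality LLRs, let missing (unselected) modalities contribute 0, and compare the total to a threshold. The modality names, scores, and threshold below are illustrative only.

```python
def lr_fusion_accept(llrs, threshold):
    """Likelihood-ratio based score fusion with missing modalities.

    `llrs` maps modality name -> log-likelihood ratio, or None if the user did
    not present that modality; a missing modality contributes 0, as described.
    """
    total = sum(llr for llr in llrs.values() if llr is not None)
    return total >= threshold

# A genuine user skipping a "goat" modality (face) can still pass:
print(lr_fusion_accept({"fingerprint": 3.1, "face": None, "iris": 2.4}, threshold=4.0))
# An impostor mounting a modality selection attack submits only positive-LLR samples:
print(lr_fusion_accept({"fingerprint": 1.2, "face": None, "iris": None}, threshold=4.0))
```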

  • On Asymptotically Good Ramp Secret Sharing Schemes

    Olav GEIL  Stefano MARTIN  Umberto MARTÍNEZ-PEÑAS  Ryutaroh MATSUMOTO  Diego RUANO  

     
    PAPER-Cryptography and Information Security
    Vol: E100-A No:12
    Page(s): 2699-2708

    Asymptotically good sequences of linear ramp secret sharing schemes have been intensively studied by Cramer et al. in terms of sequences of pairs of nested algebraic geometric codes [4]-[8], [10]. In those works the focus is on full privacy and full reconstruction. In this paper we analyze additional parameters describing the asymptotic behavior of partial information leakage and possibly also partial reconstruction, giving a more complete picture of the access structure of sequences of linear ramp secret sharing schemes. Our study involves a detailed treatment of the (relative) generalized Hamming weights of the considered codes.
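
    For reference, the relative generalized Hamming weights (RGHWs) mentioned above are commonly defined as follows; this is the standard definition from the coding-theory literature, not a restatement of the paper's notation.

```latex
% m-th relative generalized Hamming weight of nested codes C_2 \subsetneq C_1 \subseteq \mathbb{F}_q^n:
M_m(C_1, C_2) = \min \bigl\{ \lvert \operatorname{supp}(D) \rvert :
                 D \text{ is a subspace of } C_1,\ \dim D = m,\ D \cap C_2 = \{0\} \bigr\},
% where supp(D) is the set of coordinate positions in which some vector of D is nonzero.
```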

  • Tracing Werewolf Game by Using Extended BDI Model

    Naoyuki NIDE  Shiro TAKATA  

     
    PAPER-Information Network
    Publicized: 2017/09/15
    Vol: E100-D No:12
    Page(s): 2888-2896

    The Werewolf game is a kind of role-playing game in which players have to guess other players' roles from their speech acts (what they say). In this game, players have to estimate other players' beliefs and intentions, and try to modify others' intentions. The BDI model is well suited to this game because it explicitly represents mental states, i.e., beliefs, desires, and intentions. On the other hand, in this game players' beliefs are not completely known. Consequently, in many cases it is difficult for players to choose a unique strategy; in other words, players frequently have to maintain probabilistic intentions. However, the conventional BDI model does not have the notion of probabilistic mental states. In this paper, we propose an extension of BDI logic that can handle probabilistic mental states and use it to model some situations in the Werewolf game. We also show examples of deductions concerning those situations. We expect that this study will serve as a basis for developing a Werewolf game agent based on BDI logic in the future.

  • Gauss-Seidel HALS Algorithm for Nonnegative Matrix Factorization with Sparseness and Smoothness Constraints

    Takumi KIMURA  Norikazu TAKAHASHI  

     
    PAPER-Digital Signal Processing
    Vol: E100-A No:12
    Page(s): 2925-2935

    Nonnegative Matrix Factorization (NMF) with sparseness and smoothness constraints has attracted increasing attention. When these properties are considered, NMF is usually formulated as an optimization problem in which a linear combination of an approximation error term and some regularization terms must be minimized under the constraint that the factor matrices are nonnegative. In this paper, we focus our attention on the error measure based on the Euclidean distance and propose a new iterative method for solving those optimization problems. The proposed method is based on the Hierarchical Alternating Least Squares (HALS) algorithm developed by Cichocki et al. We first present an example to show that the original HALS algorithm can increase the objective value. We then propose a new algorithm called the Gauss-Seidel HALS algorithm that decreases the objective value monotonically. We also prove that it has the global convergence property in the sense of Zangwill. We finally verify the effectiveness of the proposed algorithm through numerical experiments using synthetic and real data.
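
    For orientation, the baseline HALS column-wise update for Euclidean NMF (without regularizers) that the paper builds on can be sketched as below; the Gauss-Seidel modification that guarantees a monotonic decrease of the objective, and the sparseness and smoothness terms, are not reproduced here.

```python
import numpy as np

def hals_nmf(V, rank, n_iter=200, eps=1e-9):
    """Baseline HALS for V ~= W H with nonnegativity constraints (no regularizers).

    Each column of W (and row of H) is updated in turn by a closed-form
    nonnegative least-squares step; this is the scheme the Gauss-Seidel HALS
    algorithm modifies to guarantee monotonic decrease of the objective.
    """
    m, n = V.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(n_iter):
        VHt, HHt = V @ H.T, H @ H.T
        for j in range(rank):
            W[:, j] = np.maximum(eps, W[:, j] + (VHt[:, j] - W @ HHt[:, j]) / HHt[j, j])
        WtV, WtW = W.T @ V, W.T @ W
        for j in range(rank):
            H[j, :] = np.maximum(eps, H[j, :] + (WtV[j, :] - WtW[j, :] @ H) / WtW[j, j])
    return W, H
```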

  • Exponentially Weighted Distance-Based Detection for Radiometric Identification

    Yong Qiang JIA  Lu GAN  Hong Shu LIAO  

     
    LETTER-Measurement Technology
    Vol: E100-A No:12
    Page(s): 3086-3089

    Radio signals exhibit minute differences that result from idiosyncratic hardware properties of different radio emitters. A robust detector based on exponentially weighted distances is proposed to detect the exact reference instants of burst communication signals. Based on the exact detection of the reference instant, at which the radio emitter finishes the power-up ramp and enters the first symbol of its preamble, the features of the radio fingerprint can be extracted from the transient signal section and the steady-state signal section for radiometric identification. Experiments on real data sets demonstrate that the proposed method not only achieves higher accuracy than correlation-based detection, but also offers better robustness against noise. The comparison of different detectors for radiometric identification indicates that the proposed detector can improve the classification accuracy of radiometric identification.

  • A Novel Robust Adaptive Beamforming Algorithm Based on Total Least Squares and Compressed Sensing

    Di YAO  Xin ZHANG  Qiang YANG  Weibo DENG  

     
    LETTER-Digital Signal Processing
    Vol: E100-A No:12
    Page(s): 3049-3053

    An improved beamformer, which uses joint estimation of the reconstructed interference-plus-noise (IPN) covariance matrix and the array steering vector (ASV), is proposed. It mitigates the performance degradation that occurs when the desired signal is present in the sample covariance matrix and the steering vector has large pointing errors. In the proposed method, the covariance matrix is reconstructed as a weighted sum of the outer products of the interferences' ASVs, weighted by their individual powers, to reject the desired-signal component; the coefficients can be accurately estimated by the compressed sensing (CS) and total least squares (TLS) techniques. Moreover, according to the theorem of sequential vector space projection, the actual ASV is estimated from the intersection of two subspaces by applying the alternating projection algorithm. Simulation results demonstrate the performance of the proposed beamformer, which is clearly better than that of existing robust adaptive beamformers.
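
    A minimal sketch of the reconstruction-plus-MVDR idea underlying the letter follows: the IPN covariance matrix is rebuilt from estimated interference ASVs and powers, and the weight vector is formed from it. The CS/TLS coefficient estimation and the alternating-projection ASV refinement are omitted, and the half-wavelength uniform linear array model is an assumption made for illustration.

```python
import numpy as np

def ula_steering(theta_deg, n_elements, spacing=0.5):
    """Steering vector of a uniform linear array (half-wavelength spacing assumed)."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(theta_deg))
    return np.exp(1j * k * np.arange(n_elements))

def reconstructed_mvdr(interference_angles, interference_powers, noise_power,
                       desired_angle, n_elements=10):
    """Rebuild the interference-plus-noise covariance and form the MVDR weights."""
    R = noise_power * np.eye(n_elements, dtype=complex)
    for theta, p in zip(interference_angles, interference_powers):
        a = ula_steering(theta, n_elements)
        R += p * np.outer(a, a.conj())           # power-weighted outer products of interference ASVs
    a0 = ula_steering(desired_angle, n_elements)
    w = np.linalg.solve(R, a0)
    return w / (a0.conj() @ w)                   # MVDR: w = R^{-1} a0 / (a0^H R^{-1} a0)

w = reconstructed_mvdr([-30, 45], [10.0, 10.0], noise_power=1.0, desired_angle=5)
```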

  • Speech-Act Classification Using a Convolutional Neural Network Based on POS Tag and Dependency-Relation Bigram Embedding

    Donghyun YOO  Youngjoong KO  Jungyun SEO  

     
    LETTER-Natural Language Processing
    Publicized: 2017/08/23
    Vol: E100-D No:12
    Page(s): 3081-3084

    In this paper, we propose a deep learning-based model for classifying speech acts using a convolutional neural network (CNN). The model uses bigram features, including part-of-speech (POS) tag bigrams and dependency-relation bigrams, which represent syntactic structural information in utterances. Previous classification approaches using CNNs have commonly exploited word embeddings of morpheme unigrams. In contrast, the proposed model first extracts two different bigram features that well reflect the syntactic structure of utterances and then represents them as vectors using a word embedding technique. As a result, the proposed model using bigram embeddings achieves an accuracy of 89.05%, a relative improvement of 2.8% over competitive models from previous studies.
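
    A minimal PyTorch sketch of the general architecture described (bigram features embedded and fed to a CNN classifier) is given below; the layer sizes, vocabulary handling, and the way POS-tag and dependency-relation bigrams are combined are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class BigramCNNClassifier(nn.Module):
    """Sketch: embed bigram features (e.g., POS-tag or dependency-relation bigrams),
    apply a 1-D convolution over the sequence, max-pool, and classify the speech act."""
    def __init__(self, n_bigrams=5000, emb_dim=64, n_filters=100, kernel=3, n_acts=10):
        super().__init__()
        self.embed = nn.Embedding(n_bigrams, emb_dim)
        self.conv = nn.Conv1d(emb_dim, n_filters, kernel_size=kernel)
        self.fc = nn.Linear(n_filters, n_acts)

    def forward(self, bigram_ids):                  # (batch, seq_len) of bigram indices
        x = self.embed(bigram_ids).transpose(1, 2)  # (batch, emb_dim, seq_len)
        x = torch.relu(self.conv(x))                # (batch, n_filters, seq_len - kernel + 1)
        x = x.max(dim=2).values                     # global max pooling over time
        return self.fc(x)                           # speech-act logits

logits = BigramCNNClassifier()(torch.randint(0, 5000, (2, 20)))
```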

  • An Efficient GPU Implementation of CKY Parsing Using the Bitwise Parallel Bulk Computation Technique

    Toru FUJITA  Koji NAKANO  Yasuaki ITO  Daisuke TAKAFUJI  

     
    PAPER-GPU computing
    Publicized: 2017/08/04
    Vol: E100-D No:12
    Page(s): 2857-2865

    The main contribution of this paper is an efficient GPU implementation of bulk computation of CKY parsing for a context-free grammar, which determines whether the grammar derives each of many input strings. Bulk computation executes the same algorithm for many inputs, either sequentially or simultaneously; CKY parsing determines whether a context-free grammar derives a given string. We show that the bulk computation of CKY parsing can be implemented efficiently on the GPU using the Bitwise Parallel Bulk Computation (BPBC) technique. We also present a rule minimization technique and a dynamic scheduling method for further acceleration of CKY parsing on the GPU. Experimental results using an NVIDIA TITAN X GPU show that our implementation of bitwise-parallel CKY parsing for strings of length 32 takes 395µs per string with 131072 production rules for 512 non-terminal symbols.
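
    For reference, a plain CPU sketch of the CKY recognition step that the paper accelerates is shown below; the bitwise-parallel bulk computation (packing many input strings into machine words) and the GPU kernel organization are not reproduced, and the grammar format is a simplified CNF dictionary chosen for illustration.

```python
def cky_accepts(rules, start, string):
    """CKY recognition for a grammar in Chomsky normal form.

    `rules` is a simplified CNF description: unary rules map a terminal to the
    set of non-terminals deriving it, binary rules map a pair (B, C) to the set
    of non-terminals A with A -> B C.
    """
    unary, binary = rules
    n = len(string)
    # table[i][j]: non-terminals deriving string[i:i+j+1]
    table = [[set() for _ in range(n)] for _ in range(n)]
    for i, ch in enumerate(string):
        table[i][0] = set(unary.get(ch, ()))
    for length in range(2, n + 1):                  # span length
        for i in range(n - length + 1):             # span start
            for split in range(1, length):          # split point
                for B in table[i][split - 1]:
                    for C in table[i + split][length - split - 1]:
                        table[i][length - 1] |= binary.get((B, C), set())
    return start in table[0][n - 1]

# Balanced-parentheses grammar: S -> L X | L R, X -> S R, L -> '(', R -> ')'
rules = ({'(': {'L'}, ')': {'R'}},
         {('L', 'X'): {'S'}, ('S', 'R'): {'X'}, ('L', 'R'): {'S'}})
print(cky_accepts(rules, 'S', '(())'))  # True
```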

  • Cost Aware Offloading Selection and Resource Allocation for Cloud Based Multi-Robot Systems

    Yuan SUN  Xing-she ZHOU  Gang YANG  

     
    LETTER-Software System
    Publicized: 2017/08/28
    Vol: E100-D No:12
    Page(s): 3022-3026

    In this letter, we investigate the computation offloading problem in cloud-based multi-robot systems, in which user weights, communication interference, and cloud resource limitations are jointly considered. To minimize the system cost, two offloading selection and resource allocation algorithms are proposed. Numerical results show that both proposed algorithms can greatly reduce the overall system cost, and the greedy-selection-based algorithm even achieves near-optimal performance.

  • Resample-Based Hybrid Multi-Hypothesis Scheme for Distributed Compressive Video Sensing

    Can CHEN  Dengyin ZHANG  Jian LIU  

     
    LETTER-Image Processing and Video Processing
    Publicized: 2017/09/08
    Vol: E100-D No:12
    Page(s): 3073-3076

    The multi-hypothesis prediction technique, which exploits inter-frame correlation efficiently, is widely used in block-based distributed compressive video sensing. To solve the problem of inaccurate multi-hypothesis prediction at low sampling rates and to enhance the reconstruction quality of non-key frames, we present a resample-based hybrid multi-hypothesis scheme for block-based distributed compressive video sensing. The innovations in this paper include: (1) multi-hypothesis reconstruction based on measurement reorganization (MR-MH), which integrates side information into the original measurements; and (2) hybrid multi-hypothesis (H-MH) reconstruction, which adaptively mixes multiple multi-hypothesis reconstructions by resampling each reconstruction. Experimental results show that the proposed scheme outperforms the state-of-the-art technique at the same low sampling rate.

  • A Survey on Recommendation Methods Beyond Accuracy [Open Access]

    Jungkyu HAN  Hayato YAMANA  

     
    SURVEY PAPER-Data Engineering, Web Information Systems
    Publicized: 2017/08/23
    Vol: E100-D No:12
    Page(s): 2931-2944

    When recommending to another individual an item that one loves, accuracy is important; however, in most cases, focusing only on accuracy generates less satisfactory recommendations. Studies have repeatedly pointed out that aspects that go beyond accuracy, such as the diversity and novelty of the recommended items, are as important as accuracy in making a satisfactory recommendation. Despite their importance, there is no global consensus about definitions and evaluations regarding beyond-accuracy aspects, as such aspects closely relate to the subjective sensibility of user satisfaction. In addition, devising algorithms for this purpose is difficult, because the aspects to be pursued concurrently are in a trade-off relation (e.g., accuracy vs. novelty). In this situation, for researchers initiating a study in this domain, it is important to obtain a systematically integrated view of the domain. This paper reports the results of a survey of about 70 studies published over the last 15 years, each of which addresses recommendations that consider beyond-accuracy aspects. From this survey, we identify diversity, novelty, and coverage as important aspects in achieving serendipity and popularity unbiasedness, factors that are important to user satisfaction and business profits, respectively. The five major groups of algorithms that tackle the beyond-accuracy aspects are multi-objective, modified collaborative filtering (CF), clustering, graph, and hybrid; we classify and describe algorithms according to this typology. The off-line evaluation metrics and user studies carried out in the surveyed studies are also described. Based on the survey results, we assert that there is considerable room for research in this domain. In particular, personalization and generalization are important issues that should be addressed in future research (e.g., automatic per-user trade-off among the aspects, and properly establishing beyond-accuracy aspects for various types of applications or algorithms).
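
    As a concrete illustration of two of the beyond-accuracy aspects discussed in the survey, the sketch below computes intra-list diversity and catalog coverage using common textbook definitions; these are representative definitions from the literature, not the survey's canonical ones.

```python
from itertools import combinations

def intra_list_diversity(rec_list, dissimilarity):
    """Average pairwise dissimilarity of the items in one recommendation list."""
    pairs = list(combinations(rec_list, 2))
    return sum(dissimilarity(a, b) for a, b in pairs) / len(pairs)

def catalog_coverage(all_rec_lists, catalog):
    """Fraction of the item catalog that appears in at least one recommendation list."""
    recommended = set().union(*map(set, all_rec_lists))
    return len(recommended & set(catalog)) / len(catalog)

# Toy example with genre-based dissimilarity (1 if genres differ, 0 otherwise).
genres = {"m1": "action", "m2": "action", "m3": "drama", "m4": "comedy"}
diss = lambda a, b: 0.0 if genres[a] == genres[b] else 1.0
print(intra_list_diversity(["m1", "m2", "m3"], diss))          # 0.67: somewhat diverse
print(catalog_coverage([["m1", "m3"], ["m1", "m2"]], genres))  # 0.75 of the catalog covered
```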

  • A Study on the Market Impact of the Rule for Investment Diversification at the Time of a Market Crash Using a Multi-Agent Simulation

    Atsushi NOZAKI  Takanobu MIZUTA  Isao YAGI  

     
    PAPER-Information Network
    Publicized: 2017/09/15
    Vol: E100-D No:12
    Page(s): 2878-2887

    As financial products have grown in complexity and in the level of risk compounding in recent years, investors have come to find it difficult to assess investment risk. Furthermore, companies managing mutual funds are increasingly expected to perform risk control and thus prevent investors from assuming unforeseen risk. A related revision to Japan's investment fund legal system established what is known as “the rule for investment diversification” in December 2014, without a clear discussion of its expected effects on market price formation. In this paper, we therefore used an artificial market to investigate the rule's effects on price formation in financial markets where investors follow it at the time of a market crash caused by the collapse of an asset's fundamental price. As a result, we found that, in a market in which investors follow the rule for investment diversification, when the fundamental price of one asset collapses and its market price also collapses, the market prices of some other assets may also fall, whereas those of others may rise.

  • Triple Prediction from Texts by Using Distributed Representations of Words

    Takuma EBISU  Ryutaro ICHISE  

     
    PAPER-Natural Language Processing
    Publicized: 2017/09/12
    Vol: E100-D No:12
    Page(s): 3001-3009

    Knowledge graphs have been shown to be useful for many tasks in artificial intelligence. Triples of knowledge graphs are traditionally structured by human editors or extracted from semi-structured information; however, editing is expensive, and semi-structured information is not common. On the other hand, most such information is stored as text. Hence, it is necessary to develop a method that can extract knowledge from texts and then construct or populate a knowledge graph; this has been attempted in various ways. Currently, there are two approaches to constructing a knowledge graph: one is open information extraction (Open IE), and the other is knowledge graph embedding; however, neither is without problems. Stanford Open IE, the current best such system, requires labeled sentences as training data, and knowledge graph embedding systems require numerous triples. Recently, distributed representations of words have become a hot topic in natural language processing, since this approach does not require labeled data for training. It requires only plain text, and Mikolov et al. showed that it performs well on the word analogy task, answering questions such as “a is to b as c is to __?”. This can be considered a knowledge extraction task from text that finds the missing entity of a triple. However, the accuracy is not sufficiently high when the approach is applied in a straightforward manner to relations in knowledge graphs, since it uses only one triple as a positive example. In this paper, we analyze why distributed representations perform well on such tasks, and we propose a new method for extracting knowledge from texts that requires much less annotated data. Experiments show that the proposed method achieves considerable improvement compared with the baseline; in particular, the improvement in HITS@10 was more than doubled for some relations.
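
    The word analogy task mentioned above is typically solved by vector arithmetic over the distributed representations; a minimal sketch with a toy embedding table follows. It illustrates the baseline behavior the paper analyzes, not the proposed method.

```python
import numpy as np

def analogy(embeddings, a, b, c):
    """Answer 'a is to b as c is to __?' by nearest cosine neighbour of v(b) - v(a) + v(c)."""
    target = embeddings[b] - embeddings[a] + embeddings[c]
    target = target / np.linalg.norm(target)
    best, best_sim = None, -np.inf
    for word, vec in embeddings.items():
        if word in (a, b, c):
            continue                               # exclude the query words themselves
        sim = vec @ target / np.linalg.norm(vec)
        if sim > best_sim:
            best, best_sim = word, sim
    return best

# Toy 2-D embeddings where the country -> capital offset is roughly constant.
emb = {"japan": np.array([1.0, 0.0]), "tokyo": np.array([1.0, 1.0]),
       "france": np.array([2.0, 0.0]), "paris": np.array([2.0, 1.0])}
print(analogy(emb, "japan", "tokyo", "france"))    # -> "paris"
```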

  • Deep Learning-Based Fault Localization with Contextual Information

    Zhuo ZHANG  Yan LEI  Qingping TAN  Xiaoguang MAO  Ping ZENG  Xi CHANG  

     
    LETTER-Software Engineering
    Publicized: 2017/09/08
    Vol: E100-D No:12
    Page(s): 3027-3031

    Fault localization is essential for resolving software faults. To improve fault localization, this paper proposes a deep learning-based fault localization approach that uses contextual information. Specifically, our approach uses a deep neural network to construct a suspiciousness evaluation model that evaluates how suspicious it is that a given statement is faulty, and then leverages dynamic backward slicing to extract contextual information. The empirical results show that our approach significantly outperforms the state-of-the-art technique Dstar.

  • A Static Packet Scheduling Approach for Fast Collective Communication by Using PSO

    Takashi YOKOTA  Kanemitsu OOTSU  Takeshi OHKAWA  

     
    PAPER-Interconnection networks
    Publicized: 2017/07/14
    Vol: E100-D No:12
    Page(s): 2781-2795

    The interconnection network is one of the indispensable components of a parallel computer, since it is responsible for the communication capabilities of the system. It affects the system-level performance as well as the physical and logical structure of the system. Although many studies have been reported to advance interconnection network technology, many issues remain to be discussed. One of the most important issues is congestion management. In an interconnection network, many packets are transferred simultaneously, and the packets interfere with each other in the network. Congestion arises as a result of this interference. It spreads quickly, seriously degrades communication performance, and persists for a long time. Thus, we should appropriately control the network to suppress congestion and maintain maximum performance. Many studies address this problem and present effective methods; however, the maximal performance in an ideal situation has not been sufficiently clarified. Solving for the ideal performance is, in general, an NP-hard problem. This paper introduces particle swarm optimization (PSO) methodology to overcome the problem. We first formalize an optimization problem suitable for the PSO method and present a simple PSO application as a naive model. Then, we discuss reducing the size of the search space and introduce three practical variations of the PSO computation model: the repetitive model, the expansion model, and the coding model. We furthermore introduce some non-PSO methods for comparison. Our evaluation results reveal the high potential of the PSO method. The repetitive and expansion models achieve significant acceleration of collective communication, at most 1.72 times faster than in the bursty communication condition.
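
    For readers unfamiliar with PSO, a generic particle swarm optimization loop is sketched below with a placeholder objective; the paper's formalization of packet scheduling as the search space and its repetitive, expansion, and coding models are not reproduced here.

```python
import numpy as np

def pso_minimize(objective, dim, n_particles=30, n_iters=200,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Generic PSO: particles track their personal bests and follow the global best."""
    rng = np.random.default_rng(0)
    lo, hi = bounds
    pos = rng.uniform(lo, hi, (n_particles, dim))
    vel = np.zeros((n_particles, dim))
    pbest, pbest_val = pos.copy(), np.apply_along_axis(objective, 1, pos)
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(n_iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.apply_along_axis(objective, 1, pos)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest, pbest_val.min()

# Placeholder objective; the paper instead evaluates collective-communication cost.
best, value = pso_minimize(lambda x: np.sum(x ** 2), dim=4)
```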

  • New Perfect Gaussian Integer Sequences from Cyclic Difference Sets

    Tao LIU  Chengqian XU  Yubo LI  Kai LIU  

     
    LETTER-Information Theory
    Vol: E100-A No:12
    Page(s): 3067-3070

    In this letter, three constructions of perfect Gaussian integer sequences are presented based on cyclic difference sets, and sufficient conditions for constructing perfect Gaussian integer sequences are given. Compared with the constructions given by Chen et al. [12], the proposed constructions relax the restrictions on the parameters of the cyclic difference sets and yield new perfect Gaussian integer sequences.
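
    What "perfect" means here can be checked with a short sketch: a sequence is perfect when its periodic autocorrelation is zero at every nonzero shift. The length-4 example below is a well-known perfect sequence with Gaussian-integer entries used only for illustration; it is not one of the letter's new constructions.

```python
import numpy as np

def periodic_autocorrelation(seq):
    """Periodic autocorrelation R(tau) = sum_n s[n] * conj(s[(n + tau) mod N])."""
    s = np.asarray(seq, dtype=complex)
    return np.array([np.sum(s * np.conj(np.roll(s, -tau))) for tau in range(len(s))])

def is_perfect(seq, tol=1e-9):
    """A sequence is perfect iff all out-of-phase autocorrelation values are zero."""
    r = periodic_autocorrelation(seq)
    return bool(np.all(np.abs(r[1:]) < tol))

# Length-4 example with Gaussian-integer entries (not from the letter's constructions).
print(is_perfect([1, 1, 1, -1]))       # True: out-of-phase autocorrelations are all zero
print(is_perfect([1, 1j, 1, 1]))       # False
```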

  • A New Rapid and Accurate Synchronization Scheme Based on PMF-FFT for High Dynamic GPS Receiver

    Huiling HOU  Kang WU  Yijun CHEN  Xuwen LIANG  

     
    LETTER-Spread Spectrum Technologies and Applications
    Vol: E100-A No:12
    Page(s): 3075-3080

    In this letter, a new rapid and accurate synchronization scheme based on the PMF-FFT is proposed for high-dynamic GPS receivers, with a fine Doppler frequency estimation step inserted between the acquisition and tracking modules. Fine Doppler estimation is first achieved through a simple interpolation of the PMF-FFT outputs based on the LSE criterion. A high-dynamic tracking loop based on the UKF is then designed to verify the synchronization speed and accuracy. Numerical results show that the fine frequency estimation closely approaches the CRB, and that the high-dynamic receiver can obtain fine synchronization rapidly with a very narrow bandwidth. Its simplicity and low complexity make the proposed scheme practical, even for weak GPS signals.
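
    A simplified sketch of the PMF-FFT idea with a fine frequency refinement by peak interpolation is given below; the partial-sum length, the parabolic interpolation (standing in for the letter's LSE-based interpolation), and the toy signal model are generic assumptions, not the letter's design.

```python
import numpy as np

def pmf_fft_doppler(rx, local_code, n_partial, fs):
    """Partial matched filter + FFT Doppler estimate with a simple peak interpolation.

    The code-wiped signal is split into n_partial partial coherent sums; an FFT
    across the partial sums locates the Doppler bin, and a parabolic interpolation
    over the peak and its neighbours refines the estimate (a generic refinement,
    standing in for the letter's LSE-based interpolation).
    """
    wiped = rx * np.conj(local_code)                       # code removal (assumes code-aligned input)
    partial = wiped.reshape(n_partial, -1).sum(axis=1)     # partial matched-filter outputs
    spectrum = np.abs(np.fft.fft(partial))
    k = int(np.argmax(spectrum))
    # Parabolic interpolation around the peak bin gives a fractional-bin correction.
    a, b, c = spectrum[(k - 1) % n_partial], spectrum[k], spectrum[(k + 1) % n_partial]
    delta = 0.5 * (a - c) / (a - 2 * b + c)
    k_signed = k - n_partial if k > n_partial // 2 else k  # map upper bins to negative Doppler
    return (k_signed + delta) * fs / len(rx)               # Hz per bin = fs / total samples

# Toy check: a 250 Hz Doppler on a code-wiped carrier sampled at 1.023 MHz is recovered.
fs, n = 1.023e6, 1023 * 16
t = np.arange(n) / fs
code = np.ones(n)                                          # placeholder spreading code
rx = np.exp(2j * np.pi * 250.0 * t)
print(pmf_fft_doppler(rx, code, n_partial=16, fs=fs))      # approximately 250 Hz
```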

Showing results 2801-2820 of 18690