The search functionality is under construction.

Keyword Search Result

[Keyword] Al (20,498 hits)

Results 2901-2920 of 20,498

  • Sequentially Iterative Equalizer Based on Kalman Filtering and Smoothing for MIMO Systems under Frequency Selective Fading Channels

    Sangjoon PARK  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/09/19
      Vol:
    E101-B No:3
      Page(s):
    909-914

    This paper proposes a sequentially iterative equalizer based on Kalman filtering and smoothing (SIEKFS) for multiple-input multiple-output (MIMO) systems under frequency selective fading channels. In the proposed SIEKFS, an iteration consists of sequentially executed subiterations, and each subiteration performs equalization and detection procedures for the symbols transmitted from a specific transmit antenna. During each subiteration, all available observations for the transmission block are utilized in the equalization procedures. Furthermore, the soft estimates of the desired symbols to be detected do not participate in their own equalization procedures, i.e., the proposed SIEKFS performs input-by-input equalization for a priori information nulling. Therefore, compared with the original iterative equalizer based on Kalman filtering and smoothing, which performs symbol-by-symbol equalization, the proposed SIEKFS can also perform iterative equalization based on the Kalman framework and the turbo principle, but with a significant reduction in computational complexity. Simulation results verify that the proposed SIEKFS achieves suboptimum error performance as the size of the antenna configuration and the number of iterations increase.
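
    The Kalman-filtering core that such an equalizer builds on can be conveyed with a short sketch. The Python/NumPy fragment below applies one Kalman/MMSE update per received vector of a MIMO observation model y_k = H x_k + n_k; the channel matrix, noise level, BPSK symbols, and antenna sizes are assumptions for this sketch only, and it is not the authors' SIEKFS (no smoothing over the block, subiterations, or turbo feedback).

      import numpy as np

      # One Kalman/MMSE update per received vector y_k = H x_k + n_k (illustrative).
      rng = np.random.default_rng(0)
      nt, nr, T = 2, 4, 200                     # transmit/receive antennas, block length
      H = rng.normal(size=(nr, nt)) / np.sqrt(nr)   # assumed known channel matrix
      R = 0.05 * np.eye(nr)                     # receiver-noise covariance

      x_true = np.sign(rng.normal(size=(T, nt)))             # BPSK symbols
      y = x_true @ H.T + rng.multivariate_normal(np.zeros(nr), R, size=T)

      errors = 0
      for k in range(T):
          x_prior, P_prior = np.zeros(nt), np.eye(nt)        # unit-variance symbol prior
          S = H @ P_prior @ H.T + R
          K = P_prior @ H.T @ np.linalg.inv(S)               # Kalman gain
          x_post = x_prior + K @ (y[k] - H @ x_prior)        # soft symbol estimate
          errors += np.sum(np.sign(x_post) != x_true[k])

      print("symbol error rate:", errors / (T * nt))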

  • FCReducer: Locating Symmetric Cryptographic Functions on the Memory

    Ryoya FURUKAWA  Ryoichi ISAWA  Masakatu MORII  Daisuke INOUE  Koji NAKAO  

     
    PAPER-Information Network

      Publicized:
    2017/12/14
      Vol:
    E101-D No:3
      Page(s):
    685-697

    Malicious software (malware) poses various significant challenges. One is the need to retrieve plain-text messages transmitted between malware and its herders through an encrypted network channel. Those messages (e.g., commands for the malware) can be a useful hint for revealing its malicious activities. However, retrieving them is challenging even if the malware is executed on an analysis computer. To assist analysts in retrieving the plain-text from memory, this paper presents FCReducer (Function Candidate Reducer), which provides a small candidate set of cryptographic functions called by malware. Given this set, an analyst checks the candidates to locate the cryptographic functions. If the decryption function is found, the analyst then obtains its output as the plain-text. Although existing systems such as CipherXRay have been proposed to locate cryptographic functions, they rely heavily on fine-grained dynamic taint analysis (DTA). This makes them weak against under-tainting, i.e., failure to track data propagation. To overcome under-tainting, FCReducer conducts coarse-grained DTA and generates a data dependency graph of functions whose root function accesses an encrypted message. This does not require fine-grained DTA. FCReducer then applies a community detection method such as InfoMap to the graph to detect a community of functions that plays a role in decryption or encryption. The functions in this community are provided as candidates. In experiments using 12 samples, including four malware specimens, we confirmed that FCReducer reduced, for example, the 4830 functions called by the Zeus malware to 0.87% of that number as candidates. We also propose a heuristic that reduces the candidates even further.
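
    The community-detection step can be sketched in a few lines. The fragment below builds a hypothetical data-dependency graph of functions (the function names are made up) and applies NetworkX's greedy modularity communities as an easily available stand-in for InfoMap; the real FCReducer derives the graph from coarse-grained DTA traces.

      import networkx as nx
      from networkx.algorithms.community import greedy_modularity_communities

      # Hypothetical data-dependency graph of functions recorded by coarse-grained DTA;
      # edge (a, b) means data produced by a flows into b.  Names are invented.
      edges = [
          ("recv_msg", "decrypt_block"), ("decrypt_block", "key_schedule"),
          ("decrypt_block", "xor_round"), ("key_schedule", "xor_round"),
          ("recv_msg", "parse_cmd"), ("parse_cmd", "dispatch"),
          ("dispatch", "log_event"),
      ]
      G = nx.Graph(edges)

      # FCReducer applies a community detection method such as InfoMap; here we use
      # NetworkX's greedy modularity communities as a simple stand-in.
      for i, community in enumerate(greedy_modularity_communities(G)):
          print(f"community {i}: {sorted(community)}")
      # The community containing the crypto-related functions would be reported
      # as the candidate set handed to the analyst.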

  • The Simplified REV Method Combined with Hadamard Group Division for Phased Array Calibration

    Tao XIE  Jiang ZHU  Jinjun LUO  

     
    PAPER-Antennas and Propagation

      Publicized:
    2017/08/28
      Vol:
    E101-B No:3
      Page(s):
    847-855

    The rotating element electric field vector (REV) method is a classical measurement technique for phased array calibration. Compared with other calibration methods, it requires only power measurements, so the REV method is more reliable for operational phased array calibration systems. However, since the phase of each element must be rotated from 0 to 2π, the conventional REV method requires a large number of measurements. Moreover, because only a single element's phase is rotated at a time, the power of the composite electric field vector does not vary significantly, so the measurement can easily be degraded by receiver noise. A simplified REV method combined with Hadamard group division is proposed in this paper. In the proposed method, only power measurements are required. All the array elements are divided into groups according to a group matrix derived from the normalized Hadamard matrix. The phases of all the elements in the same group are rotated at the same time, and the composite electric field vector of each group is obtained by the simplified REV method. The relative electric fields of all elements can then be obtained by solving a matrix equation. Compared with the conventional REV method, the proposed method not only reduces the number of measurements but also improves the measurement accuracy over a particular range of signal-to-noise ratio (SNR) at the receiver, especially at low and moderate SNRs.
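
    The recovery-by-matrix-equation idea can be illustrated as follows. In this Python sketch a Hadamard-derived 0/1 group matrix combines the (unknown) relative element fields into group composite fields, which are then inverted to recover the per-element fields. The group-matrix construction is an assumed simplification; in the actual method each group composite field would be estimated from power-only measurements by the simplified REV procedure, which is not reproduced here.

      import numpy as np
      from scipy.linalg import hadamard

      N = 8                                      # number of array elements (power of 2)
      rng = np.random.default_rng(1)
      e = rng.normal(size=N) + 1j * rng.normal(size=N)   # unknown relative element fields

      Hd = hadamard(N)                           # Hadamard matrix with +1/-1 entries
      G = (Hd + 1) // 2                          # group matrix: 1 = element belongs to group

      # In the real method each row's composite field is estimated from power-only
      # measurements by the simplified REV procedure; here we form it directly.
      group_fields = G @ e

      e_recovered = np.linalg.solve(G.astype(float), group_fields)
      print(np.allclose(e_recovered, e))         # True: element fields recovered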

  • Performance Comparison of Subjective Quality Assessment Methods for 4k Video

    Kimiko KAWASHIMA  Kazuhisa YAMAGISHI  Takanori HAYASHI  

     
    PAPER-Multimedia Systems for Communications

      Publicized:
    2017/08/29
      Vol:
    E101-B No:3
      Page(s):
    933-945

    Many subjective quality assessment methods have been standardized, and experimenters can select among them in accordance with the aim of the planned subjective assessment experiment. It is often argued that the results of subjective quality assessment are affected by range effects caused by the quality distribution of the assessment videos. However, there are no studies on the double stimulus continuous quality-scale (DSCQS) and absolute category rating with hidden reference (ACR-HR) methods that investigate range effects in the high-quality range. Therefore, we conducted experiments using high-quality assessment videos (high-quality experiment) and low-to-high-quality assessment videos (low-to-high-quality experiment) and compared the DSCQS and ACR-HR methods in terms of accuracy, stability, and discrimination ability. First, regarding accuracy, we found that the mean opinion scores of the DSCQS and ACR-HR methods were only marginally affected by range effects, and almost all common processed video sequences showed no significant difference between the high-quality and low-to-high-quality experiments. Second, the DSCQS and ACR-HR methods were equally stable in the low-to-high-quality experiment, whereas the DSCQS method was more stable than the ACR-HR method in the high-quality experiment. Finally, the DSCQS method had higher discrimination ability than the ACR-HR method in the low-to-high-quality experiment, whereas both methods had almost the same discrimination ability in the high-quality experiment. We thus conclude that the DSCQS method is better at minimizing range effects than the ACR-HR method in the high-quality range.
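
    For readers unfamiliar with how the two methods are scored, the following sketch shows the usual aggregation: ACR-HR ratings are summarized by a differential mean opinion score against the hidden reference, and DSCQS by the mean difference of the continuous reference and processed-sequence scores. The subject scores below are invented.

      import numpy as np

      # Toy scores for one processed video sequence (PVS) and its reference, rated by
      # five hypothetical subjects; all numbers are invented for illustration only.
      acr_pvs   = np.array([4, 3, 4, 5, 3])       # ACR-HR: 5-point category ratings
      acr_ref   = np.array([5, 5, 4, 5, 4])       # ratings of the hidden reference
      dscqs_pvs = np.array([78, 71, 80, 85, 69])  # DSCQS: 0-100 continuous scores
      dscqs_ref = np.array([90, 88, 86, 92, 84])

      # ACR-HR is usually summarized by a differential MOS against the hidden
      # reference (offset by 5 so that "same as reference" maps to 5).
      dmos_acr = np.mean(acr_pvs - acr_ref + 5)

      # DSCQS is summarized by the mean difference between reference and PVS scores.
      dscqs_diff = np.mean(dscqs_ref - dscqs_pvs)

      print(f"ACR-HR DMOS: {dmos_acr:.2f}, DSCQS difference score: {dscqs_diff:.1f}")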

  • Resource Management Architecture of Metro Aggregation Network for IoT Traffic Open Access

    Akira MISAWA  Masaru KATAYAMA  

     
    INVITED PAPER

      Publicized:
    2017/09/19
      Vol:
    E101-B No:3
      Page(s):
    620-627

    IoT (Internet of Things) services are emerging, and the bandwidth requirements for rich media communication services are increasing exponentially. We propose a virtual edge architecture comprising computation resource management layers and path bandwidth management layers for easy addition and reallocation of new service node functions. These functions are performed by Virtualized Network Functions (VNFs), each of which accommodates the terminals covered by a corresponding access node so as to realize fast VNF migration. To scale the network for IoT traffic, VNF migration is limited to the VNFs that contain active terminals, which leads to a 20% reduction in the computation required for VNF migration. Fast dynamic bandwidth allocation for dynamic bandwidth paths is realized by the proposed Hierarchical Time Slot Allocation of the Optical Layer 2 Switch Network, which reduces the calculation time to less than 1/100.

  • Deep Attention Residual Hashing

    Yang LI  Zhuang MIAO  Ming HE  Yafei ZHANG  Hang LI  

     
    LETTER-Image

      Vol:
    E101-A No:3
      Page(s):
    654-657

    How to represent images as highly compact binary codes is a critical issue in many computer vision tasks. Existing deep hashing methods typically focus on designing loss functions that use pairwise or triplet labels. However, these methods ignore the attention mechanism of the human visual system. In this letter, we propose a novel Deep Attention Residual Hashing (DARH) method, which directly learns hash codes based on a simple pointwise classification loss function. Compared to previous methods, our method does not need to generate all possible pairwise or triplet labels from the training dataset. Specifically, we develop a new type of attention layer which can learn human eye fixation and significantly improves the representation ability of the hash codes. In addition, we embed the attention layer into a residual network to simultaneously learn discriminative image features and hash codes in an end-to-end manner. Extensive experiments on standard benchmarks demonstrate that our method preserves instance-level similarity and outperforms state-of-the-art deep hashing methods in image retrieval.
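
    A rough idea of an attention layer embedded in a residual block, followed by a hash layer trained through a pointwise classification head, is sketched below in PyTorch. The layer sizes, the 1x1-convolution attention mask, and the tanh relaxation are illustrative guesses, not the DARH architecture itself.

      import torch
      import torch.nn as nn

      class AttentionResidualBlock(nn.Module):
          """Residual block with a learned spatial attention mask (illustrative)."""
          def __init__(self, channels):
              super().__init__()
              self.body = nn.Sequential(
                  nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(channels, channels, 3, padding=1))
              self.attn = nn.Conv2d(channels, 1, 1)           # 1x1 conv -> spatial mask

          def forward(self, x):
              f = self.body(x)
              mask = torch.sigmoid(self.attn(f))              # attention weights in (0, 1)
              return x + f * mask                             # residual connection

      class ToyHashNet(nn.Module):
          """Stem + attention-residual block + hash layer + pointwise classifier."""
          def __init__(self, channels=16, bits=48, classes=10):
              super().__init__()
              self.stem = nn.Conv2d(3, channels, 3, padding=1)
              self.block = AttentionResidualBlock(channels)
              self.pool = nn.AdaptiveAvgPool2d(1)
              self.hash = nn.Linear(channels, bits)
              self.cls = nn.Linear(bits, classes)

          def forward(self, x):
              h = self.pool(self.block(self.stem(x))).flatten(1)
              codes = torch.tanh(self.hash(h))                # relaxed binary codes
              return codes, self.cls(codes)

      codes, logits = ToyHashNet()(torch.randn(2, 3, 32, 32))
      print(codes.shape, logits.shape)                        # [2, 48] and [2, 10]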

  • Improving DOA Estimation and Preventing Target Split Using Automotive Radar Sensor Arrays

    Heemang SONG  Seunghoon CHO  Kyung-Jin YOU  Hyun-Chool SHIN  

     
    LETTER-Digital Signal Processing

      Vol:
    E101-A No:3
      Page(s):
    590-594

    In this paper, we propose a compensation method for automotive radar sensors that improves direction-of-arrival (DOA) estimation and prevents target split tracking. Amplitude and phase mismatch and mutual coupling between radar sensor array elements cause inaccuracy in DOA estimation. By quantifying the amplitude and phase distortion levels for each angle, we compensate for the sensor distortion. Applying the proposed method to the Bartlett, Capon, and multiple signal classification (MUSIC) algorithms, we experimentally demonstrate the performance improvement using both experimental data from a chamber and real data obtained on actual roads.
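
    The compensation idea, an element-wise amplitude/phase correction estimated from a known calibration source and applied before DOA estimation, can be sketched as follows for a uniform linear array with a Bartlett spectrum. The distortion model and all numbers are invented for illustration and are not the authors' calibration data.

      import numpy as np

      M, d = 4, 0.5                                   # elements, spacing in wavelengths
      angles = np.deg2rad(np.arange(-60, 61, 1))

      def steering(theta):
          return np.exp(-2j * np.pi * d * np.arange(M) * np.sin(theta))

      rng = np.random.default_rng(2)
      gain_err  = 1 + 0.2 * rng.normal(size=M)        # amplitude mismatch (assumed)
      phase_err = 0.3 * rng.normal(size=M)            # phase mismatch in rad (assumed)
      C_true = np.diag(gain_err * np.exp(1j * phase_err))

      # Calibration: measure the response to a known broadside source and quantify
      # the per-element distortion relative to the ideal steering vector.
      measured = C_true @ steering(0.0)
      correction = steering(0.0) / measured           # element-wise compensation factors

      # Apply compensation to snapshots from an unknown 20-degree source, then Bartlett.
      theta_src = np.deg2rad(20)
      snapshots = (C_true @ steering(theta_src))[:, None] + 0.05 * (
          rng.normal(size=(M, 100)) + 1j * rng.normal(size=(M, 100)))
      x = correction[:, None] * snapshots
      R = x @ x.conj().T / x.shape[1]
      spectrum = [np.real(steering(t).conj() @ R @ steering(t)) for t in angles]
      print("estimated DOA:", np.rad2deg(angles[int(np.argmax(spectrum))]), "deg")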

  • GPU-Accelerated Stochastic Simulation of Biochemical Networks

    Pilsung KANG  

     
    LETTER-Fundamentals of Information Systems

      Publicized:
    2017/12/20
      Vol:
    E101-D No:3
      Page(s):
    786-790

    We present a GPU (graphics processing unit)-accelerated implementation of a stochastic algorithm for simulating biochemical reaction networks on the latest NVIDIA architecture. To effectively utilize the massive parallelism offered by the NVIDIA Pascal hardware, we apply a set of performance tuning methods and guidelines, such as exploiting the architecture's memory hierarchy, in our implementation. Based on our experimental results and a comparative analysis against CPU-based implementations, we report our initial experiences with the performance of modern GPUs in the context of scientific computing.
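
    The class of algorithm being accelerated is a Gillespie-type stochastic simulation algorithm; a minimal CPU sketch of the direct method for a toy two-reaction network is shown below (on a GPU, many such independent trajectories would typically be mapped to parallel threads). The reaction network and rate constants are invented.

      import numpy as np

      # Gillespie direct method for a toy network: A -> B (rate c1*A), B -> A (rate c2*B).
      rng = np.random.default_rng(3)
      c1, c2 = 1.0, 0.5
      state = np.array([100, 0])               # counts of species A and B
      stoich = np.array([[-1, +1],             # effect of reaction 1
                         [+1, -1]])            # effect of reaction 2

      t, t_end = 0.0, 10.0
      while t < t_end:
          a = np.array([c1 * state[0], c2 * state[1]])   # reaction propensities
          a0 = a.sum()
          if a0 == 0:
              break
          t += rng.exponential(1.0 / a0)                 # time to the next reaction
          r = rng.choice(2, p=a / a0)                    # which reaction fires
          state += stoich[r]

      print("final state (A, B):", state)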

  • Extraction of Library Update History Using Source Code Reuse Detection

    Kanyakorn JEWMAIDANG  Takashi ISHIO  Akinori IHARA  Kenichi MATSUMOTO  Pattara LEELAPRUTE  

     
    LETTER-Software Engineering

      Publicized:
    2017/12/20
      Vol:
    E101-D No:3
      Page(s):
    799-802

    This paper proposes a method to extract and visualize the library update history of a project. The method identifies reused library versions by comparing the source code in a product with existing versions of the library, so that developers can understand when their own copy of a library was copied, modified, and updated.

  • Polynomial Time Learnability of Graph Pattern Languages Defined by Cographs

    Takayoshi SHOUDAI  Yuta YOSHIMURA  Yusuke SUZUKI  Tomoyuki UCHIDA  Tetsuhiro MIYAHARA  

     
    PAPER

      Publicized:
    2017/12/19
      Vol:
    E101-D No:3
      Page(s):
    582-592

    A cograph (complement-reducible graph) is a graph that can be generated from single-vertex graphs by disjoint union and complement operations. Cographs arise in many areas of computer science and have been studied extensively. With the goal of developing an effective data mining method for graph-structured data, in this paper we introduce a graph pattern expression, called a cograph pattern, which is a special type of cograph having structured variables. First, we show that the problem of deciding whether or not a given cograph pattern g matches a given cograph G is NP-complete. In light of this result, we consider the polynomial-time learnability of cograph pattern languages defined by cograph patterns whose variables are labeled with mutually distinct labels, called linear cograph patterns. Second, we present a polynomial-time matching algorithm for linear cograph patterns. Next, we give a polynomial-time algorithm for obtaining a minimally generalized linear cograph pattern that explains given positive data. Finally, we show that the class of linear cograph pattern languages is polynomial-time inductively inferable from positive data.
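
    The complement-reducible structure that defines cographs can be checked recursively, which may help readers unfamiliar with the class. The sketch below is a plain cograph recognition routine using NetworkX, not the authors' pattern matching or learning algorithm.

      import networkx as nx

      def is_cograph(G):
          """A graph is a cograph iff every induced subgraph with at least two
          vertices is disconnected either in the graph or in its complement."""
          if G.number_of_nodes() <= 1:
              return True
          comps = list(nx.connected_components(G))
          if len(comps) > 1:                               # disjoint-union step
              return all(is_cograph(G.subgraph(c)) for c in comps)
          co_comps = list(nx.connected_components(nx.complement(G)))
          if len(co_comps) > 1:                            # complement (join) step
              return all(is_cograph(G.subgraph(c)) for c in co_comps)
          return False                                     # connected in both: not a cograph

      # P4 (a path on four vertices) is the classic non-cograph; K4 is a cograph.
      print(is_cograph(nx.path_graph(4)))                  # False
      print(is_cograph(nx.complete_graph(4)))              # True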

  • Multiple Matrix Rank Minimization Approach to Audio Declipping

    Ryohei SASAKI  Katsumi KONISHI  Tomohiro TAKAHASHI  Toshihiro FURUKAWA  

     
    LETTER-Speech and Hearing

      Publicized:
    2017/12/06
      Vol:
    E101-D No:3
      Page(s):
    821-825

    This letter deals with the audio declipping problem and proposes a multiple matrix rank minimization approach. We assume that short-time audio signals satisfy the autoregressive (AR) model and formulate the declipping problem as a multiple matrix rank minimization problem. To solve this problem, an iterative algorithm is derived from the iterative partial matrix shrinkage (IPMS) algorithm. Numerical examples show its efficiency.
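
    The flavor of rank-minimization declipping can be conveyed with a simplified single-frame sketch: an AR-like signal yields a (near) low-rank Hankel matrix, so singular-value truncation is alternated with re-imposing the reliable samples and the clipping constraint. This Cadzow-style iteration is an illustrative stand-in, not the IPMS-based algorithm of the letter; the test signal and parameters are invented.

      import numpy as np

      rng = np.random.default_rng(4)
      n = 128
      t = np.arange(n)
      clean = np.sin(0.07 * t) + 0.5 * np.sin(0.19 * t)     # smooth AR-like test signal
      clip = 0.8
      observed = np.clip(clean, -clip, clip)
      reliable = np.abs(observed) < clip                    # unclipped samples are trusted

      def hankel(x, rows):
          return np.array([x[i:i + len(x) - rows + 1] for i in range(rows)])

      x = observed.copy()
      rows, rank = 8, 4
      for _ in range(200):
          H = hankel(x, rows)
          U, s, Vt = np.linalg.svd(H, full_matrices=False)
          s[rank:] = 0.0                                    # hard singular-value truncation
          H_low = (U * s) @ Vt
          # average the anti-diagonals back into a signal estimate
          est, cnt = np.zeros(n), np.zeros(n)
          for i in range(rows):
              est[i:i + H_low.shape[1]] += H_low[i]
              cnt[i:i + H_low.shape[1]] += 1
          x = est / cnt
          x[reliable] = observed[reliable]                  # keep the reliable samples
          # clipped samples must lie beyond the clipping level
          x[~reliable] = np.sign(observed[~reliable]) * np.maximum(np.abs(x[~reliable]), clip)

      print("max error on clipped samples:", np.max(np.abs(x - clean)[~reliable]))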

  • Adaptive Extrinsic Information Scaling for Concatenated Zigzag Codes Based on Max-Log-APP

    Hao ZHENG  Xingan XU  Changwei LV  Yuanfang SHANG  Guodong WANG  Chunlin JI  

     
    LETTER-Coding Theory

      Vol:
    E101-A No:3
      Page(s):
    627-631

    Concatenated zigzag (CZ) codes are a class of parallel-concatenated codes with strong performance and low complexity. These codes allow flexible implementations and have good application prospects. We propose a modified turbo-type decoder and an adaptive extrinsic information scaling method based on the Max-Log-APP (MLA) algorithm, which provides a performance improvement while keeping the decoding complexity relatively low. Simulation results show that the proposed method effectively helps the sub-optimal MLA algorithm approach the optimal performance. Comparisons with low-density parity-check (LDPC) codes are also presented in this paper.
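
    The basic operation of extrinsic information scaling is simple to show: the extrinsic LLRs exchanged between constituent decoders are multiplied by a factor below one to compensate for the over-confidence of the max-log approximation. The fixed factor and toy LLR values below are illustrative; the adaptive scheme of the paper would update the factor from decoder statistics at each iteration.

      import numpy as np

      def exchange_extrinsic(llr_out, llr_apriori, llr_channel, s=0.75):
          """Remove the a priori and channel parts, then scale the extrinsic LLRs."""
          extrinsic = llr_out - llr_apriori - llr_channel
          return s * extrinsic            # scaled extrinsic becomes the next a priori input

      # Toy numbers (invented): decoder output LLRs, previous a priori, channel LLRs.
      llr_out     = np.array([ 4.1, -3.0,  2.2, -5.5])
      llr_apriori = np.array([ 0.5, -0.2,  0.1, -1.0])
      llr_channel = np.array([ 2.0, -1.5,  1.0, -2.5])
      print(exchange_extrinsic(llr_out, llr_apriori, llr_channel))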

  • A Sub-1-µs Start-Up Time, Fully-Integrated 32-MHz Relaxation Oscillator for Low-Power Intermittent Systems

    Hiroki ASANO  Tetsuya HIROSE  Taro MIYOSHI  Keishi TSUBAKI  Toshihiro OZAKI  Nobutaka KUROKI  Masahiro NUMA  

     
    PAPER-Electronic Circuits

      Vol:
    E101-C No:3
      Page(s):
    161-169

    This paper presents a fully integrated 32-MHz relaxation oscillator (ROSC) capable of sub-1-µs start-up operation for low-power intermittent VLSI systems. The proposed ROSC employs a current-mode architecture that differs from the conventional voltage-mode architecture, enabling a compact circuit and fast switching speed. By matching the transistor sizes in the bias circuit and the voltage-to-current converter, the effect of process variation can be minimized. A prototype chip in a 0.18-µm CMOS process demonstrated that the ROSC generates a stable clock frequency of 32.6 MHz within a 1-µs start-up time. The measured line regulation and temperature coefficient were ±0.69% and ±0.38%, respectively.

  • Classification of Utterances Based on Multiple BLEU Scores for Translation-Game-Type CALL Systems

    Reiko KUWA  Tsuneo KATO  Seiichi YAMAMOTO  

     
    PAPER-Speech and Hearing

      Publicized:
    2017/12/04
      Vol:
    E101-D No:3
      Page(s):
    750-757

    This paper proposes a method for classifying second-language-learner utterances for interactive computer-assisted language learning systems. The classification method uses three types of bilingual evaluation understudy (BLEU) scores as features for a classifier. The three BLEU scores are calculated from three subsets of a learner corpus divided according to the quality of the utterances. To overcome the data-sparseness problem, the method calculates the BLEU scores over mixed sequences of words and part-of-speech (POS) tags, obtained by replacing words with POS tags in n-grams according to a POS-replacement rule. Experiments on classifying English utterances by Japanese learners demonstrated that the proposed method achieved a classification accuracy of 78.2%, which was 12.3 points higher than a baseline using a single BLEU score.
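
    The feature-extraction step can be sketched with NLTK and scikit-learn: one BLEU score is computed against each corpus subset, and the three scores are fed to a classifier. The tiny subsets, utterances, and labels below are invented, and the sketch uses plain word n-grams rather than the mixed word/POS sequences of the proposed method.

      from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction
      from sklearn.linear_model import LogisticRegression

      # Hypothetical corpus subsets divided by utterance quality (invented sentences).
      subsets = {
          "good": [["i", "would", "like", "a", "cup", "of", "coffee"]],
          "fair": [["i", "want", "coffee", "please"]],
          "poor": [["coffee", "want"]],
      }
      smooth = SmoothingFunction().method1

      def bleu_features(utterance):
          # One BLEU score per subset, used as the feature vector of the utterance.
          return [sentence_bleu(refs, utterance, smoothing_function=smooth)
                  for refs in subsets.values()]

      train_utts = [["i", "would", "like", "coffee"], ["want", "coffee"], ["coffee"]]
      train_labels = [1, 0, 0]                       # toy labels: 1 = acceptable

      clf = LogisticRegression().fit([bleu_features(u) for u in train_utts], train_labels)
      print(clf.predict([bleu_features(["i", "would", "like", "a", "coffee"])]))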

  • On the Properties and Applications of Inconsistent Neighborhood in Neighborhood Rough Set Models

    Shujiao LIAO  Qingxin ZHU  Rui LIANG  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/12/20
      Vol:
    E101-D No:3
      Page(s):
    709-718

    Rough set theory is an important branch of data mining and granular computing, within which the neighborhood rough set model was introduced to handle numerical and hybrid data. In this paper, we propose a new concept called the inconsistent neighborhood, which extracts the inconsistent objects from a traditional neighborhood. First, we derive a series of interesting properties of inconsistent neighborhoods; in particular, some of these properties yield new ways to compute the standard quantities of neighborhood rough sets. Then, a fast forward attribute reduction algorithm is proposed by applying the obtained properties. Experiments on twelve UCI datasets show that the proposed algorithm obtains the same attribute reduction results as existing algorithms in the neighborhood rough set domain while running much faster. This validates that employing inconsistent neighborhoods is advantageous in applications of neighborhood rough sets. The study provides a new insight into neighborhood rough set theory.
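
    The central definition can be illustrated directly: the neighborhood of an object collects all objects within a distance threshold on the condition attributes, and the inconsistent neighborhood keeps only those neighbors whose decision label differs. The small data set and threshold below are invented.

      import numpy as np

      X = np.array([[0.10, 0.20],        # condition attributes (invented)
                    [0.12, 0.22],
                    [0.15, 0.18],
                    [0.80, 0.75],
                    [0.82, 0.70]])
      y = np.array([0, 0, 1, 1, 1])      # decision attribute
      delta = 0.1                        # neighborhood radius

      def neighborhood(i):
          dist = np.linalg.norm(X - X[i], axis=1)
          return np.where(dist <= delta)[0]

      def inconsistent_neighborhood(i):
          nb = neighborhood(i)
          return nb[y[nb] != y[i]]       # neighbors with a different decision label

      for i in range(len(X)):
          print(i, "neighborhood:", neighborhood(i),
                "inconsistent:", inconsistent_neighborhood(i))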

  • Analysis of a Sufficient Condition on the Optimality of a Decoded Codeword of Soft-Decision Decodings for Binary Linear Codes on a 4-Level Quantization over an AWGN Channel

    Takuya KUSAKA  

     
    PAPER-Coding Theory

      Vol:
    E101-A No:3
      Page(s):
    570-576

    This paper studies, for a quantized case, a sufficient condition on the optimality of a decoded codeword in soft-decision decoding of binary linear codes. A typical uniform 4-level quantizer for soft-decision decoding is employed for the analysis. Simulation results on the (64,42,8) Reed-Muller code indicate that the condition is effective at signal-to-noise ratios of 3 dB or higher for any iterative-style optimum decoding.

  • A Bandwidth Allocation Scheme to Improve Fairness and Link Utilization in Data Center Networks

    Yusuke ITO  Hiroyuki KOGA  Katsuyoshi IIDA  

     
    PAPER

      Publicized:
    2017/09/19
      Vol:
    E101-B No:3
      Page(s):
    679-687

    Cloud computing, which enables users to enjoy various Internet services provided by data centers (DCs) at any time and from anywhere, has attracted much attention. In cloud computing, however, service quality degrades as the user's distance from the DC increases, which leads to unfairness among users. In this study, we propose a bandwidth allocation scheme based on collectable information to improve fairness and link utilization in DC networks. We confirm the effectiveness of this approach through simulation evaluations.

  • UCB-SC: A Fast Variant of KL-UCB-SC for Budgeted Multi-Armed Bandit Problem

    Ryo WATANABE  Junpei KOMIYAMA  Atsuyoshi NAKAMURA  Mineichi KUDO  

     
    LETTER-Mathematical Systems Science

      Vol:
    E101-A No:3
      Page(s):
    662-667

    We propose a policy called UCB-SC for budgeted multi-armed bandits. The policy is a variant of the recently proposed KL-UCB-SC. Unlike KL-UCB-SC, which is computationally prohibitive, UCB-SC runs very fast while keeping the asymptotic optimality of KL-UCB-SC when the reward and cost distributions are Bernoulli with means around 0.5, which we verify both theoretically and empirically.
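
    The setting can be illustrated with a generic UCB-style policy for budgeted bandits with Bernoulli rewards and costs: arms are pulled until the budget is exhausted, and each round the arm with the highest optimistic reward-to-cost index is chosen. The index used below is a standard ratio-plus-exploration form for illustration only; the exact UCB-SC index and its analysis are given in the paper, and all means and the budget are invented.

      import numpy as np

      rng = np.random.default_rng(5)
      true_r = np.array([0.45, 0.55, 0.50])     # Bernoulli reward means (invented)
      true_c = np.array([0.50, 0.55, 0.45])     # Bernoulli cost means (invented)
      K, budget = len(true_r), 2000.0

      pulls = np.ones(K)                        # one initial pull per arm
      sum_r = rng.binomial(1, true_r).astype(float)
      sum_c = rng.binomial(1, true_c).astype(float)
      spent, gained, t = sum_c.sum(), sum_r.sum(), K

      while spent < budget:
          t += 1
          bonus = np.sqrt(2 * np.log(t) / pulls)
          # Optimistic reward estimate divided by pessimistic cost estimate.
          index = (sum_r / pulls + bonus) / np.maximum(sum_c / pulls - bonus, 1e-3)
          a = int(np.argmax(index))
          r, c = rng.binomial(1, true_r[a]), rng.binomial(1, true_c[a])
          pulls[a] += 1; sum_r[a] += r; sum_c[a] += c
          spent += c; gained += r

      print("total reward within budget:", gained)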

  • A Color Restoration Method for Irreversible Thermal Paint Based on Atmospheric Scattering Model

    Zhan WANG  Ping-an DU  Jian LIU  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2017/12/08
      Vol:
    E101-D No:3
      Page(s):
    826-829

    Irreversible thermal paints, or temperature-sensitive paints, are a special kind of temperature sensor that indicates the temperature grade through its color change and is widely used for off-line temperature measurement during aero-engine tests. Unfortunately, the hot gas flow within the engine during measurement degrades the paint color, causing a serious reduction in saturation and loss of contrast. This phenomenon makes it more difficult to interpret the thermal paint test results. Existing contrast enhancement algorithms can significantly increase image contrast but cannot effectively preserve the hue of the paint images, which often causes color shift. In this paper, we propose a color restoration method for thermal paint images. The method uses the atmospheric scattering model to restore the lost contrast and saturation information, so that the hue is preserved and the temperature can be interpreted precisely from the image.
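
    The restoration model itself is compact: with the atmospheric scattering model I = J*t + A*(1 - t), the degraded image I is inverted to the restored image J once the airlight A and the transmission map t are known. The sketch below assumes A and t have been estimated elsewhere (e.g. by a dark-channel-style prior) and uses synthetic data.

      import numpy as np

      def restore(I, A, t, t_min=0.1):
          """Invert I = J*t + A*(1-t) for J, with a floor on t to avoid blow-up."""
          t = np.maximum(t, t_min)
          return (I - A) / t[..., None] + A

      # Toy example: a synthetic "degraded" image with known A and t (invented values).
      rng = np.random.default_rng(6)
      J_true = rng.uniform(0, 1, size=(4, 4, 3))         # clean paint colors
      A = np.array([0.9, 0.9, 0.9])                      # airlight-like degradation color
      t = np.full((4, 4), 0.6)                           # transmission map
      I = J_true * t[..., None] + A * (1 - t[..., None])

      J_rec = restore(I, A, t)
      print("max reconstruction error:", np.abs(J_rec - J_true).max())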

  • An Efficient Parallel Coding Scheme in Erasure-Coded Storage Systems

    Wenrui DONG  Guangming LIU  

     
    PAPER-Computer System

      Publicized:
    2017/12/12
      Vol:
    E101-D No:3
      Page(s):
    627-643

    Erasure codes are considered one of the most promising techniques for data reliability enhancement and storage efficiency in modern distributed storage systems. However, erasure codes often suffer from a time-consuming coding process that makes them nearly impractical. A promising way to address this problem is to parallelize erasure-code-based applications on modern multi-/many-core processors so as to take full advantage of the hardware resources on those platforms. However, complicated data allocation and limited I/O throughput pose great challenges to such parallelization. To address these challenges, we propose a general multi-threaded parallel coding approach. The approach consists of a general multi-threaded parallel coding model, named MTPerasure, and two detailed parallel coding algorithms, named sdaParallel and ddaParallel, which adapt to different I/O circumstances. MTPerasure is a general parallel coding model focusing on high-level data allocation; it is applicable to all erasure codes and can be implemented without any modification of the low-level coding algorithms. sdaParallel divides the data into several parts and allocates the parts to different threads statically in order to eliminate synchronization latency among multiple threads, which improves the parallel coding performance under the dummy I/O mode. ddaParallel employs two threads to execute I/O reading and writing independently on the basis of small pieces, which increases the I/O throughput; furthermore, the data pieces are assigned to the coding threads dynamically. A special thread scheduling algorithm is also proposed to reduce thread migration latency. To evaluate our proposal, we parallelized the popular open-source library jerasure based on our approach. A detailed performance comparison with the original sequential coding program shows that the proposed parallel approach outperforms the original sequential program with speedups from 1.4x up to 7x and achieves better utilization of the computation and I/O resources.
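
    The static data allocation idea behind sdaParallel can be sketched as follows: the stripes are split into fixed ranges, each range is assigned to one worker thread, and each worker encodes its stripes independently. A simple XOR parity stands in for a real erasure code such as those in jerasure, and the block counts and sizes below are arbitrary.

      import numpy as np
      from concurrent.futures import ThreadPoolExecutor

      K, stripe_len, n_stripes, n_threads = 4, 1 << 16, 64, 4
      rng = np.random.default_rng(7)
      data = rng.integers(0, 256, size=(n_stripes, K, stripe_len), dtype=np.uint8)

      def encode_range(lo, hi):
          # XOR the K data blocks of every stripe in [lo, hi) into one parity block.
          return np.bitwise_xor.reduce(data[lo:hi], axis=1)

      # Static allocation: each thread gets a fixed, contiguous range of stripes.
      bounds = np.linspace(0, n_stripes, n_threads + 1, dtype=int)
      with ThreadPoolExecutor(max_workers=n_threads) as pool:
          futures = [pool.submit(encode_range, bounds[i], bounds[i + 1])
                     for i in range(n_threads)]
          parity = np.concatenate([f.result() for f in futures], axis=0)

      print(parity.shape)        # (n_stripes, stripe_len) parity blocks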

Results 2901-2920 of 20,498