
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.72

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4

Advance publication (published online immediately after acceptance)

Volume E99-D No.4  (Publication Date: 2016/04/01)

    Special Section on Information and Communication System Security
  • FOREWORD Open Access

    Toshihiro YAMAUCHI  

     
    FOREWORD

      Page(s):
    785-786
  • Cyber Physical Security for Industrial Control Systems and IoT Open Access

    Kazukuni KOBARA  

     
    INVITED PAPER

      Publicized:
    2016/01/13
      Page(s):
    787-795

    Cyber-attacks and cybersecurity used to be issues only for those who use the Internet and computers. These issues, however, are expanding to anyone who does not even use them directly, as society comes to depend gradually and heavily on networks and computers. Such systems are no longer closed within cyberspace; they interact with our real world through sensors and actuators, and are known as CPS (Cyber-Physical Systems), IoT/E (Internet of Things/Everything), Industry 4.0, the Industrial Internet, M2M, and so on. No matter what they are called, exploitation of any of these systems may seriously affect our real life, and appropriate countermeasures must be taken to mitigate the risks. In this paper, cybersecurity in ICS (Industrial Control Systems) is reviewed as a leading example of cyber-physical security for critical infrastructures. Then, as a future aspect of it, IoT security for consumers is explained.

  • A New Scheme of Blockcipher Hash

    Rashed MAZUMDER  Atsuko MIYAJI  

     
    PAPER-Cryptography and cryptographic protocols

      Publicized:
    2016/01/13
      Page(s):
    796-804

    A cryptographic hash function is an important tool in modern cryptography. It comprises a compression function, which can be built from scratch or from a blockcipher. There are several familiar schemes of blockcipher compression function, such as Weimar, Hirose, Tandem, Abreast, Nandi, and ISA-09. Interestingly, the security proofs of all the mentioned schemes are based on the ideal cipher model (ICM), which assumes an idealized environment. It is therefore desirable to use a proof model that is closer to the real world, such as the weak cipher model (WCM). Hence, we propose an (n, 2n) blockcipher compression function that is secure under the ideal cipher model, the weak cipher model, and the extended weak cipher model (ext.WCM). Additionally, whereas the majority of existing schemes need multiple key schedules, the proposed scheme and Hirose-DM satisfy the single-key-scheduling property. The efficiency rate of our scheme is r=1/2. Moreover, the scheme makes two blockcipher calls, which run in parallel.
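
The structural idea behind a double-block-length compression function with a single key schedule (the property the abstract attributes to the proposal and to Hirose-DM) can be sketched as follows. This is only an illustrative Hirose-style skeleton, not the paper's construction, and `toy_blockcipher` is a stand-in keyed function, not a real or secure cipher:

```python
import hashlib

N = 16  # toy block length in bytes

def toy_blockcipher(key, block):
    # Stand-in keyed function for illustration only -- NOT a real
    # blockcipher and not secure.
    return bytes(a ^ b for a, b in
                 zip(block, hashlib.sha256(key + block).digest()[:N]))

C = b"\x01" * N  # nonzero constant required by the construction

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def compress(g, h, m):
    """One (n, 2n) compression step; single key schedule: key = h || m."""
    key = h + m
    g2 = xor(toy_blockcipher(key, g), g)                  # Davies-Meyer style
    h2 = xor(toy_blockcipher(key, xor(g, C)), xor(g, C))  # second call, same key
    return g2, h2

g0, h0 = b"\x00" * N, b"\x00" * N
m = b"message block 01"  # one 16-byte message block
g1, h1 = compress(g0, h0, m)
assert len(g1) == len(h1) == N
assert compress(g0, h0, m) == (g1, h1)  # deterministic
```

Both blockcipher calls share the key h || m, which is what allows a single key schedule per compression step and lets the two calls run in parallel.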

  • FPGA Implementation of Various Elliptic Curve Pairings over Odd Characteristic Field with Non Supersingular Curves

    Yasuyuki NOGAMI  Hiroto KAGOTANI  Kengo IOKIBE  Hiroyuki MIYATAKE  Takashi NARITA  

     
    PAPER-Cryptography and cryptographic protocols

      Publicized:
    2016/01/13
      Page(s):
    805-815

    Pairing-based cryptography has enabled many innovative cryptographic applications, such as attribute-based cryptography and semi-homomorphic encryption. A pairing is a bilinear map constructed on a torsion group structure defined on a special class of elliptic curves, namely pairing-friendly curves. Pairing-friendly curves are roughly classified into supersingular and non-supersingular curves. In recent years, non-supersingular pairing-friendly curves have attracted attention for security reasons. Although non-supersingular pairing-friendly curves can bridge various security levels with various parameter settings, most software and hardware implementations tightly restrict the parameters to achieve calculation efficiency and avoid implementation difficulties. This paper shows an FPGA implementation that supports various parameter settings of pairings on non-supersingular pairing-friendly curves, in which Montgomery reduction, the cyclic vector multiplication algorithm, projective coordinates, and the Tate pairing have been combinatorially applied. Then, some experimental results with resource usages are shown.
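
One of the techniques the implementation combines, Montgomery reduction, replaces the division in modular multiplication by shifts, masks, and multiplications, which is why it suits FPGA datapaths. A minimal sketch with deliberately tiny illustrative parameters (the real design uses large prime fields):

```python
def montgomery_setup(n, r_bits):
    """Precompute R = 2^r_bits and n' with n * n' == -1 (mod R)."""
    r = 1 << r_bits
    n_inv = pow(-n, -1, r)
    return r, n_inv

def mont_reduce(t, n, r_bits, n_inv):
    """Return t * R^{-1} mod n, for 0 <= t < n * R, without division by n."""
    r_mask = (1 << r_bits) - 1
    m = ((t & r_mask) * n_inv) & r_mask  # makes t + m*n divisible by R
    u = (t + m * n) >> r_bits
    return u - n if u >= n else u

n, r_bits = 97, 8                 # toy modulus, R = 2^8
r, n_inv = montgomery_setup(n, r_bits)
a, b = 42, 57
aR, bR = (a * r) % n, (b * r) % n  # convert operands to Montgomery form
prodR = mont_reduce(aR * bR, n, r_bits, n_inv)   # product, still in Montgomery form
assert mont_reduce(prodR, n, r_bits, n_inv) == (a * b) % n  # convert back
```

Keeping operands in Montgomery form across a chain of multiplications, as a pairing computation does, amortizes the two conversions.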

  • D2-POR: Direct Repair and Dynamic Operations in Network Coding-Based Proof of Retrievability

    Kazumasa OMOTE  Phuong-Thao TRAN  

     
    PAPER-Cryptography and cryptographic protocols

      Publicized:
    2016/01/13
      Page(s):
    816-829

    Proof of Retrievability (POR) is a protocol by which a client can distribute his/her data to cloud servers and can check whether the data stored in the servers is available and intact. More recently, network coding has been applied to POR to improve network throughput. Although many network coding-based PORs have been proposed, most of them do not achieve the following practical features: direct repair and dynamic operations. In this paper, we propose the D2-POR scheme (Direct repair and Dynamic operations in network coding-based POR) to address these shortcomings. When a server is corrupted, D2-POR supports direct repair: the data stored in the corrupted server can be repaired using data provided directly by healthy servers, so the client is free from the burden of data repair. Furthermore, D2-POR allows the client to efficiently perform dynamic operations, i.e., modification, insertion, and deletion.

  • A Security Enhancement Technique for Wireless Communications Using Secret Sharing and Physical Layer Secrecy Transmission

    Shoichiro YAMASAKI  Tomoko K. MATSUSHIMA  

     
    PAPER-Network security

      Publicized:
    2016/01/13
      Page(s):
    830-838

    Secret sharing is a method of information protection: the information is divided into n shares and can be reconstructed from any k shares, but no knowledge of the information is revealed by k-1 or fewer shares. Physical layer security is a method of achieving favorable reception conditions at the destination terminal in wireless communications. In this study, we propose a security enhancement technique for wireless packet communications that uses secret sharing and physical layer security to exchange a secret encryption key. The encryption key for packet information is set as the secret information in secret sharing, and the secret information is divided into n shares, each of which is placed in a packet header. The base station transmits the packets to the destination terminal using physical layer security based on precoded multi-antenna transmission. With this transmission scheme, the destination terminal can receive k or more shares without error and perfectly recover the secret information, whereas an eavesdropper terminal can receive no more than k-1 shares without error and recovers no secret information. We construct the secret sharing scheme from systematic Reed-Solomon codes, which establishes an advantageous condition for the destination terminal to recover the secret information. Evaluation results from numerical analysis and computer simulation show the validity of the proposed technique.
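
The k-out-of-n recovery property the abstract relies on can be illustrated with a Shamir-style polynomial scheme over a prime field (the paper itself uses systematic Reed-Solomon codes, of which this is a close relative; the field modulus below is an arbitrary illustrative choice):

```python
import random

P = 2**31 - 1  # prime field modulus (illustrative choice)

def make_shares(secret, k, n):
    """Split `secret` into n shares; any k of them recover it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
            for x in range(1, n + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 recovers the secret from k shares."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, P - 2, P)) % P
    return secret

shares = make_shares(123456789, k=3, n=5)
assert recover(shares[:3]) == 123456789   # any 3 shares suffice
assert recover(shares[1:4]) == 123456789
```

With fewer than k shares the interpolation is underdetermined, which is exactly why an eavesdropper who receives at most k-1 error-free shares learns nothing about the key.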

  • A Novel Protocol-Feature Attack against Tor's Hidden Service

    Rui WANG  Qiaoyan WEN  Hua ZHANG  Xuelei LI  

     
    PAPER-Network security

      Publicized:
    2016/01/13
      Page(s):
    839-849

    Tor, the most popular and well-researched low-latency anonymous communication network, provides sender privacy to Internet users. It also provides recipient privacy by making TCP services available through “hidden services”, which allow users not only to access information anonymously but also to publish information anonymously. However, based on our analysis of the hidden service protocol, we found that a special combination of cells (the basic transmission unit over Tor) transmitted during the circuit creation procedure can be used to degrade anonymity. In this paper, we investigate a novel protocol-feature-based attack against Tor's hidden services. The main idea is that an attacker can monitor traffic and manipulate cells at the client-side entry router, while a cooperating adversary at the hidden-server side reveals the communication relationship. Compared with other existing attacks, our attack reveals the client of a hidden service and does not rely on traffic analysis or watermarking techniques. We manipulate Tor cells at the entry router to generate the protocol feature; once our controlled entry onion routers detect such a feature, we can confirm the IP address of the client. We implemented this attack against a hidden service and conducted extensive theoretical analysis and experiments over the Tor network. The experimental results validate that our attack achieves a high detection rate with a low false positive rate.

  • Defending DDoS Attacks in Software-Defined Networking Based on Legitimate Source and Destination IP Address Database

    Xiulei WANG  Ming CHEN  Changyou XING  Tingting ZHANG  

     
    PAPER-Network security

      Publicized:
    2016/01/13
      Page(s):
    850-859

    Availability is an important issue in software-defined networking (SDN). In this paper, experiments on an SDN testbed showed that the resource utilization of the data plane and control plane changes drastically when DDoS attacks occur, mainly because DDoS attacks send a large number of fake flows to the network in a short time. Based on this observation and analysis, a DDoS defense mechanism based on a legitimate source and destination IP address database is proposed in this paper. First, each flow is abstracted as a source-destination IP address pair, and a legitimate source-destination IP address pair database (LSDIAD) is established from historical normal traffic traces. Then the proportion of new source-destination IP address pairs in the traffic per unit time is accumulated by the non-parametric cumulative sum (CUSUM) algorithm to detect DDoS attacks quickly and accurately. On an alarm from the non-parametric CUSUM, the attack flows are filtered and redirected to a middlebox network for deep analysis via the southbound API of the SDN. An online updating policy is adopted to keep the LSDIAD timely and accurate. This mechanism is mainly implemented in the controller, and the simulation results show that it achieves good performance in protecting SDN from DDoS attacks.
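
The detection step can be sketched as follows: per time unit, measure the proportion of source-destination IP pairs not found in the legitimate database (LSDIAD), and raise an alarm when the non-parametric CUSUM statistic exceeds a threshold. The drift and threshold values below are illustrative assumptions, not the paper's parameters:

```python
def cusum_detect(new_pair_ratios, mean_normal=0.05, threshold=0.5):
    """Non-parametric CUSUM: S_t = max(0, S_{t-1} + x_t - mean_normal).

    Returns the indices of time units at which the alarm fires.
    """
    s, alarms = 0.0, []
    for t, x in enumerate(new_pair_ratios):
        s = max(0.0, s + x - mean_normal)  # accumulate positive deviations only
        if s > threshold:
            alarms.append(t)
    return alarms

normal = [0.04, 0.05, 0.03, 0.06]   # near the normal rate of unseen pairs
attack = [0.4, 0.5, 0.45]           # a flood of previously unseen IP pairs
assert cusum_detect(normal) == []
assert cusum_detect(normal + attack) == [5, 6]
```

Because the statistic accumulates deviations over time, a sustained burst of unseen pairs trips the alarm after a couple of time units while isolated fluctuations do not.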

  • MineSpider: Extracting Hidden URLs Behind Evasive Drive-by Download Attacks

    Yuta TAKATA  Mitsuaki AKIYAMA  Takeshi YAGI  Takeo HARIU  Shigeki GOTO  

     
    PAPER-Web security

      Publicized:
    2016/01/13
      Page(s):
    860-872

    Drive-by download attacks force users to automatically download and install malware by redirecting them to malicious URLs that exploit vulnerabilities in the user's web browser. In addition, several evasion techniques, such as code obfuscation and environment-dependent redirection, are used in combination with drive-by download attacks to prevent detection. In environment-dependent redirection, attackers profile the user's environment, such as the name and version of the browser and browser plugins, and launch a drive-by download attack on only certain targets by changing the destination URL. Malicious-content detection and collection techniques such as honeyclients that do not match the specific environment of the attack target cannot detect the attack because they are not redirected. Therefore, it is necessary to improve analysis coverage while countering these adversarial evasion techniques. We propose a method for exhaustively analyzing JavaScript code relevant to redirections and extracting the destination URLs in the code. Our method facilitates the detection of attacks by extracting a large number of URLs while controlling the analysis overhead by excluding code not relevant to redirections. We implemented our method in a browser emulator called MINESPIDER that automatically extracts potential URLs from websites. We validated it using communication data with malicious websites captured over a three-year period. The experimental results demonstrate that MINESPIDER extracted, in a few seconds, 30,000 new URLs from malicious websites that conventional methods missed.

  • Automating URL Blacklist Generation with Similarity Search Approach

    Bo SUN  Mitsuaki AKIYAMA  Takeshi YAGI  Mitsuhiro HATADA  Tatsuya MORI  

     
    PAPER-Web security

      Publicized:
    2016/01/13
      Page(s):
    873-882

    Modern web users may encounter a browser security threat called drive-by-download attacks when surfing the Internet. Drive-by-download attacks use exploit code to take control of a user's web browser. Many web users do not take such underlying threats into account when clicking URLs. A URL blacklist is one practical approach to thwarting browser-targeted attacks. However, a URL blacklist cannot cope with previously unseen malicious URLs; therefore, to make a URL blacklist effective, it is crucial to keep the URLs updated. Given these observations, we propose a framework called automatic blacklist generator (AutoBLG) that automates the collection of new malicious URLs starting from a given existing URL blacklist. The primary mechanism of AutoBLG is to expand the search space of web pages while reducing the number of URLs to be analyzed by applying several pre-filters, such as similarity search, to accelerate the process of generating blacklists. AutoBLG consists of three primary components: URL expansion, URL filtration, and URL verification. Through extensive analysis using a high-performance web client honeypot, we demonstrate that AutoBLG can successfully discover new and previously unknown drive-by-download URLs from the vast web space.
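
As an illustration of the similarity-search pre-filter idea, the sketch below scores candidate URLs by Jaccard similarity of their token sets against known-bad URLs and keeps only the closest candidates for expensive honeypot verification. The tokenization and threshold are assumptions, not AutoBLG's actual filters:

```python
import re

def tokens(url):
    """Split a URL into a set of lowercase tokens on common delimiters."""
    return set(re.split(r"[/.?=&:-]+", url.lower())) - {""}

def jaccard(a, b):
    return len(a & b) / len(a | b)

def prefilter(candidates, blacklist, threshold=0.3):
    """Keep candidates whose best similarity to a blacklisted URL is high."""
    bad = [tokens(u) for u in blacklist]
    return [u for u in candidates
            if max(jaccard(tokens(u), t) for t in bad) >= threshold]

blacklist = ["http://evil.example/exploit/kit.php"]
candidates = ["http://evil.example/exploit/kit2.php",
              "http://benign.example/news/today"]
kept = prefilter(candidates, blacklist)
assert kept == ["http://evil.example/exploit/kit2.php"]
```

The pre-filter trades a cheap set operation for each candidate against the much higher cost of driving a client honeypot to the URL.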

  • A Healthcare Information System for Secure Delivery and Remote Management of Medical Records

    Hyoung-Kee CHOI  Ki-Eun SHIN  Hyoungshick KIM  

     
    PAPER-Privacy protection in information systems

      Publicized:
    2016/01/13
      Page(s):
    883-890

    With the rapid convergence of the healthcare business and information technology, more healthcare institutions and medical practices are sharing information. Since these records often contain patients' sensitive personal information, Healthcare Information Systems (HISs) should be properly designed to manage these records in a secure manner. We propose a novel security design for an HIS that complies with the security and privacy rules. The proposed system defines protocols to ensure secure delivery of medical records over insecure public networks and reliable management of medical records on a remote server without incurring excessive costs to implement security services. We demonstrate the practicality of the proposed system through a security analysis and performance evaluation.

  • Examining Privacy Leakage from Online Used Markets in Korea

    Hyunsu MUN  Youngseok LEE  

     
    LETTER-Privacy protection in information systems

      Publicized:
    2016/01/13
      Page(s):
    891-894

    Online used markets such as eBay, Yahoo Auction, and Craigslist have become popular web services. Compared to shopping-mall-style websites like eBay or Yahoo Auction, web community-style used markets often expose the private information of sellers. In Korea, the most popular online used market is a website called “Joonggonara”, with more than 13 million users, and it uses an informal posting format that does not protect users' personally identifiable information. In this work, we examine privacy leakage from online used markets in Korea and show that 45.9% and 74.0% of sampled posts expose cellular phone numbers and email addresses, respectively. In addition, we demonstrate that this private information can be maliciously exploited to identify a subscriber of a social network service.
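
A minimal sketch of the measurement idea: scan post texts for cellular phone numbers and email addresses with regular expressions and report exposure rates. The patterns below (010-style Korean mobile numbers, simple email syntax) are illustrative assumptions, not the paper's extraction rules:

```python
import re

# Hypothetical patterns: Korean mobile numbers (01x-xxxx-xxxx) and emails.
PHONE = re.compile(r"\b01[016789]-?\d{3,4}-?\d{4}\b")
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b")

def exposure_rates(posts):
    """Return (phone exposure rate, email exposure rate) over the posts."""
    phones = sum(1 for p in posts if PHONE.search(p))
    emails = sum(1 for p in posts if EMAIL.search(p))
    return phones / len(posts), emails / len(posts)

posts = ["Selling bike, call 010-1234-5678",
         "Contact me at seller@example.com",
         "Meet at the station at 3pm"]
phone_rate, email_rate = exposure_rates(posts)
assert (phone_rate, email_rate) == (1/3, 1/3)
```

Run over a crawl of an informal-format market, such counters yield exactly the kind of exposure percentages the abstract reports.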

  • Special Section on Data Engineering and Information Management
  • FOREWORD Open Access

    Toshiyuki AMAGASA  

     
    FOREWORD

      Page(s):
    895-895
  • BLM-Rank: A Bayesian Linear Method for Learning to Rank and Its GPU Implementation

    Huifeng GUO  Dianhui CHU  Yunming YE  Xutao LI  Xixian FAN  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    896-905

    Ranking, as an important task in information systems, has many applications, such as document/webpage retrieval, collaborative filtering, and advertising. The last decade has witnessed a growing interest in the study of learning to rank as a means to leverage training information in a system. In this paper, we propose a new learning-to-rank method, BLM-Rank, which uses a linear function to score samples and models the pairwise preference of samples based on their scores under a Bayesian framework. A stochastic gradient approach is adopted to maximize the posterior probability in BLM-Rank. For industrial practice, we have also implemented the proposed algorithm on a Graphics Processing Unit (GPU). Experimental results on LETOR demonstrate that the proposed BLM-Rank method outperforms state-of-the-art methods, including RankSVM-Struct, RankBoost, AdaRank-NDCG, AdaRank-MAP, and ListNet. Moreover, the results show that the GPU implementation of BLM-Rank is ten to eleven times faster than its CPU counterpart in the training phase, and one to four times faster in the testing phase.
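
The general shape of the approach (not the paper's exact model) can be sketched as pairwise learning to rank with a linear scoring function: the probability that sample i outranks sample j is modeled as sigmoid(w·xi - w·xj), and w is fit by stochastic gradient ascent on the log-posterior with a Gaussian prior (the L2 term below). Features and hyperparameters are illustrative:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(pairs, dim, lr=0.1, reg=0.01, epochs=200):
    """pairs: list of (xi, xj) where xi should outrank xj."""
    w = [0.0] * dim
    for _ in range(epochs):
        for xi, xj in pairs:
            diff = [a - b for a, b in zip(xi, xj)]
            s = sum(wk * dk for wk, dk in zip(w, diff))
            g = 1.0 - sigmoid(s)  # gradient of the pairwise log-likelihood
            # Ascent step; `reg * wk` is the Gaussian-prior (L2) term.
            w = [wk + lr * (g * dk - reg * wk) for wk, dk in zip(w, diff)]
    return w

# Toy data: the first feature correlates with relevance.
pairs = [([3.0, 0.1], [1.0, 0.2]), ([2.5, 0.0], [0.5, 0.3])]
w = train(pairs, dim=2)
score = lambda x: sum(wk * xk for wk, xk in zip(w, x))
assert score([3.0, 0.1]) > score([1.0, 0.2])
```

Each update touches only one pair, which is also what makes the method easy to parallelize across pairs on a GPU.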

  • Named Entity Oriented Difference Analysis of News Articles and Its Application

    Keisuke KIRITOSHI  Qiang MA  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    906-917

    To support the efficient gathering of diverse information about a news event, we focus on descriptions of named entities (persons, organizations, locations) in news articles. We extend the stakeholder mining proposed by Ogawa et al. and extract descriptions of named entities in articles. We propose three measures (difference in opinion, difference in details, and difference in factor coverage) to rank news articles by analyzing differences in descriptions of named entities. On the basis of these three measures, we developed a news app for mobile devices that helps users acquire diverse reports to improve their understanding of the news. For the article a user is currently reading, the proposed news app ranks and provides related articles from different perspectives using the three ranking measures. One of the notable features of our system is that it considers the access history when providing related news articles; in other words, we propose a context-aware re-ranking method for enhancing the diversity of news reports presented to users. We evaluate the three measures and the re-ranking method with a crowdsourcing experiment and a user study, respectively.

  • The Efficient Algorithms for Constructing Enhanced Quadtrees Using MapReduce

    Hongyeon KIM  Sungmin KANG  Seokjoo LEE  Jun-Ki MIN  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    918-926

    MapReduce is considered the de facto framework for storing and processing massive data due to its attractive features: simplicity, flexibility, fault tolerance, and scalability. However, since the MapReduce framework does not provide an efficient access method to the data (i.e., an index), the whole dataset must be retrieved even when a user wants to access only a small portion of it. Thus, in this paper, we devise efficient algorithms for constructing quadtrees with MapReduce. Our proposed algorithms reduce the index construction time by utilizing a sampling technique to partition the data set. To improve query performance, we extend the quadtree construction algorithm so that adjacent nodes of a quadtree are merged when the number of points located in them is less than a predefined threshold. Furthermore, we present an effective algorithm for incremental updates. Our experimental results show the efficiency of our proposed algorithms in diverse environments.
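
For readers unfamiliar with the underlying index, here is a minimal single-machine sketch of the point-region quadtree the paper builds at scale with MapReduce: a node splits into four children when it holds more than `capacity` points. The sampling-based partitioning and the merging of sparse adjacent nodes are omitted:

```python
def build(points, x0, y0, x1, y1, capacity=2):
    """Return a nested-dict quadtree over the region [x0,x1) x [y0,y1)."""
    if len(points) <= capacity:
        return {"points": points}  # leaf node
    mx, my = (x0 + x1) / 2, (y0 + y1) / 2
    quads = {"nw": [], "ne": [], "sw": [], "se": []}
    for p in points:
        key = ("n" if p[1] >= my else "s") + ("e" if p[0] >= mx else "w")
        quads[key].append(p)
    return {
        "nw": build(quads["nw"], x0, my, mx, y1, capacity),
        "ne": build(quads["ne"], mx, my, x1, y1, capacity),
        "sw": build(quads["sw"], x0, y0, mx, my, capacity),
        "se": build(quads["se"], mx, y0, x1, my, capacity),
    }

pts = [(1, 1), (2, 7), (6, 2), (7, 7), (8, 8)]
tree = build(pts, 0, 0, 8, 8)
# With capacity 2, the five points force one split; the north-east
# quadrant holds (7, 7) and (8, 8).
assert tree["ne"]["points"] == [(7, 7), (8, 8)]
```

In a MapReduce setting, a sample of the data fixes the split boundaries up front so that mappers can route each point to its quadrant independently.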

  • Modeling Joint Representation with Tri-Modal Deep Belief Networks for Query and Question Matching

    Nan JIANG  Wenge RONG  Baolin PENG  Yifan NIE  Zhang XIONG  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    927-935

    One of the main research tasks in community question answering (cQA) is finding the most relevant questions for a given new query, thereby providing useful knowledge for users. The straightforward approach is to capitalize on textual features, or a bag-of-words (BoW) representation, to conduct the matching between queries and questions. However, these approaches suffer from a lexical-gap issue: if lexical matching fails, they cannot model the semantic meaning. Latent semantic models such as latent semantic analysis (LSA) attempt to map queries to their corresponding semantically similar questions through a lower-dimensional representation, but LSA is a shallow, linear model that cannot capture the highly non-linear correlations in cQA. Moreover, both BoW and semantics-oriented solutions use a single dictionary to represent the query, question, and answer in the same feature space, whereas the correlations we observe in the data imply that they lie in entirely different feature spaces. In light of these observations, this paper proposes a tri-modal deep belief network (tri-DBN) to extract a unified representation for the query, question, and answer, under the hypothesis that they lie in three different feature spaces. We compare the unified representation extracted by our model with other representations on a Yahoo! Answers query dataset. The experimental results reveal that the proposed model captures semantic meaning both within and between queries, questions, and answers, and also suggest that the joint representation extracted by the proposed method can improve the performance of searching cQA archives.

  • A Sensor-Based Data Visualization System for Training Blood Pressure Measurement by Auscultatory Method

    Chooi-Ling GOH  Shigetoshi NAKATAKE  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    936-943

    Blood pressure measurement by the auscultatory method is a compulsory skill required of all healthcare practitioners. During the measurement, they must simultaneously concentrate on recognizing the Korotkoff sounds, reading the sphygmomanometer scale, and constantly deflating the cuff pressure. This complex operation is difficult for new learners, who need a lot of supervised practice to guide their measurements. However, a supervisor is not always available, and consequently learners often face a lack of sufficient training. To help them master the skill of measuring blood pressure by the auscultatory method more efficiently and effectively, we propose using a sensor device to capture the signals of the Korotkoff sounds and cuff pressure during the measurement and to display the signal changes on a visualization tool over a wireless connection. At the end of the measurement, learners can verify their deflation speed and recognition of Korotkoff sounds in the graphical view and instantly compare their measurements with the machine's. With this device, new learners do not need to wait for a supervisor but can practice with their colleagues more frequently. As a result, they can acquire the skill in a shorter time and be more confident in their measurements.

  • An Algorithm for All-Pairs Regular Path Problem on External Memory Graphs

    Nobutaka SUZUKI  Kosetsu IKEDA  Yeondae KWON  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    944-958

    In this paper, we consider solving the all-pairs regular path problem on large graphs efficiently. Let G be a graph and r a regular path query, and consider finding the answers to r on G. If G is small enough to fit in main memory, it suffices to load the entire graph into main memory and traverse it to find paths matching r. However, if G is too large to fit in main memory, another approach is needed. In this paper, we propose a novel approach based on an external-memory algorithm. Our algorithm finds the answers matching r by scanning the node list of G sequentially. A small experiment suggests that our algorithm can solve the problem efficiently.
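
To make the problem concrete, here is a minimal in-memory sketch of regular path query evaluation (the paper's contribution is doing this out-of-core by scanning the node list): a pair (u, v) is an answer if some path from u to v spells a word in the query's regular language. The query "a b*" is hard-coded as a tiny automaton for illustration:

```python
# Graph: labeled edges (source, target, label).
edges = {("x", "y", "a"), ("y", "z", "b"), ("z", "z", "b")}

# Automaton for the regular path query "a b*":
# state 0 --a--> 1, state 1 --b--> 1; accepting state {1}.
delta = {(0, "a"): 1, (1, "b"): 1}
accepting = {1}

def all_pairs(edges, delta, accepting, start_state=0):
    """BFS over the product of graph nodes and automaton states."""
    nodes = {u for u, _, _ in edges} | {v for _, v, _ in edges}
    answers = set()
    for u in nodes:
        frontier = seen = {(u, start_state)}
        seen = set(seen)
        while frontier:
            nxt = set()
            for node, state in frontier:
                if state in accepting:
                    answers.add((u, node))
                for a, b, lab in edges:
                    if a == node and (state, lab) in delta:
                        p = (b, delta[(state, lab)])
                        if p not in seen:
                            seen.add(p)
                            nxt.add(p)
            frontier = nxt
    return answers

# x --a--> y --b--> z satisfies "a b*", as does the shorter path x --a--> y.
assert all_pairs(edges, delta, accepting) == {("x", "y"), ("x", "z")}
```

The product-state space is |V| times the automaton size per source node, which is exactly what blows past main memory on large graphs and motivates the external-memory formulation.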

  • Incorporation of Target Specific Knowledge for Sentiment Analysis on Microblogging

    Yongyos KAEWPITAKKUN  Kiyoaki SHIRAI  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    959-968

    Sentiment analysis of microblogs has become an important classification task because a large amount of user-generated content is published on the Internet. On Twitter, it is common for a user to express several sentiments in one tweet. Therefore, it is important to classify the polarity not of the whole tweet but of a specific target about which people express their opinions. Moreover, the performance of machine learning approaches greatly depends on the domain of the training data, and it is very time-consuming to manually annotate a large set of tweets for a specific domain. In this paper, we propose a method for sentiment classification at the target level that incorporates on-target sentiment features and user-aware features into a classifier trained automatically from data created for the specific target. An add-on lexicon, an extended target list, and a competitor list are also constructed as knowledge sources for the sentiment analysis. None of the processes in the proposed framework requires manual annotation. Our experimental results show that our method is effective and improves sentiment classification performance compared to the baselines.

  • Automatic Erroneous Data Detection over Type-Annotated Linked Data

    Md-Mizanur RAHOMAN  Ryutaro ICHISE  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    969-978

    These days, the Web contains a huge volume of (semi-)structured data, called Linked Data (LD). However, LD suffers from poor data quality, which creates a need to identify erroneous data. Because manual checking for erroneous data is impractical, automatic erroneous-data detection is necessary. According to the data publishing guidelines for LD, data should use an (already defined) ontology, which yields type-annotated LD. Usually, the type annotation helps in understanding the data; in our observation, however, it can also be used to identify erroneous data. Therefore, to automatically identify possibly erroneous data in type-annotated LD, we propose a framework that uses a novel nearest-neighbor-based error detection technique. We conducted experiments on DBpedia, a type-annotated LD dataset, and found that our framework achieves better error-detection performance than a state-of-the-art framework.

  • Efficient Algorithm for Math Formula Semantic Search

    Shunsuke OHASHI  Giovanni Yoko KRISTIANTO  Goran TOPIC  Akiko AIZAWA  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    979-988

    Mathematical formulae play an important role in many scientific domains. Despite the importance of mathematical formula search, conventional keyword-based retrieval methods are not sufficient for searching mathematical formulae, which are structured as trees. The increasing number and structural complexity of mathematical formulae in scientific articles lead to the need for large-scale structure-aware formula search techniques. In this paper, we formulate three types of measures that represent distinctive features of semantic similarity between math formulae, and we develop efficient hash-based algorithms for their approximate calculation. Our experiments using the NTCIR-11 Math-2 Task dataset, a large-scale test collection for math information retrieval with about 60 million formulae, show that the proposed method improves search precision while keeping scalability and runtime efficiency high.

  • History-Pattern Encoding for Large-Scale Dynamic Multidimensional Datasets and Its Evaluations

    Masafumi MAKINO  Tatsuo TSUJI  Ken HIGUCHI  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    989-999

    In this paper, we present a new encoding/decoding method for dynamic multidimensional datasets and its implementation scheme. Our method encodes an n-dimensional tuple into a pair of scalar values, even when n is large, and encodes and decodes tuples using only shift and logical (AND/OR) register instructions. One of the most serious problems in multidimensional-array-based tuple encoding is that the size of an encoded result may exceed the machine word size for large-scale tuple sets; this problem is efficiently resolved in our scheme. We confirmed the advantages of our scheme through analytical and experimental evaluations. The experimental evaluations compared our prototype system with two other systems: (1) a system based on a similar encoding scheme called history-offset encoding, and (2) the PostgreSQL RDBMS. In most cases, both the storage and retrieval costs of our system significantly outperformed those of the other systems.
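
The paper's history-pattern encoding is not reproduced here; as a point of comparison, the sketch below shows Morton (bit-interleaving) encoding, a classic way to pack a multidimensional tuple into a single scalar using only shifts and AND/OR operations, which conveys the flavor of shift-and-mask tuple codecs:

```python
def morton_encode(coords, bits):
    """Interleave the low `bits` bits of each coordinate into one integer."""
    code, n = 0, len(coords)
    for b in range(bits):
        for d, c in enumerate(coords):
            code |= ((c >> b) & 1) << (b * n + d)
    return code

def morton_decode(code, n, bits):
    """Invert morton_encode: de-interleave back into n coordinates."""
    coords = [0] * n
    for b in range(bits):
        for d in range(n):
            coords[d] |= ((code >> (b * n + d)) & 1) << b
    return coords

t = (5, 3, 7)
code = morton_encode(t, bits=3)
assert tuple(morton_decode(code, n=3, bits=3)) == t
```

Note that a single Morton code needs n·bits bits, which is precisely the word-size overflow problem the abstract says the proposed pair-of-scalars scheme resolves.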

  • A Scheme for Fast k-Concealment Anonymization

    Ryosuke KOYANAGI  Ryo FURUKAWA  Tsubasa TAKAHASHI  Takuya MORI  Toshiyuki AMAGASA  Hiroyuki KITAGAWA  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    1000-1009

    In this paper we propose an improved algorithm for k-concealment, which has been proposed as an alternative to the well-known k-anonymity model. k-concealment achieves privacy goals similar to k-anonymity: it generalizes records in a table in such a way that each record is indistinguishable from at least k-1 other records, while achieving higher utility than k-anonymity. However, its computation is quite expensive, particularly when dealing with large datasets containing massive numbers of records, due to its high computational complexity. To cope with this problem, we propose neighbor lists, which store, for each record, its similar records. Neighbor lists are constructed in advance, and can be constructed efficiently by mapping each record to a point in a high-dimensional space and using appropriate multidimensional indexes. Our proposed scheme reduces the execution time from O(kn²) to O(k²n + kn log n), and it can be practically applied to databases with millions of records. An experimental evaluation using a real dataset reveals that the proposed scheme achieves the same level of utility as k-concealment while maintaining efficiency.
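
A minimal sketch of the neighbor-list idea: for each record, the most similar records are precomputed so the anonymization step can look up generalization candidates without rescanning the whole table. A real implementation would use a multidimensional index instead of the O(n²) pairwise pass shown here, and the records and distance are illustrative:

```python
def neighbor_lists(records, k):
    """Precompute, for each record index, its k nearest other records."""
    lists = {}
    for i, r in enumerate(records):
        dists = sorted(
            (sum((a - b) ** 2 for a, b in zip(r, s)), j)  # squared Euclidean
            for j, s in enumerate(records) if j != i)
        lists[i] = [j for _, j in dists[:k]]
    return lists

# Toy records: (age, income). The two young/low-income records pair up,
# as do the two older/high-income ones.
records = [(30, 50000), (31, 52000), (45, 90000), (46, 91000)]
nl = neighbor_lists(records, k=1)
assert nl[0] == [1] and nl[2] == [3]
```

Once the lists exist, grouping each record with members of its own neighbor list is what replaces the quadratic candidate search in the anonymization loop.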

  • Topic Representation of Researchers' Interests in a Large-Scale Academic Database and Its Application to Author Disambiguation

    Marie KATSURAI  Ikki OHMUKAI  Hideaki TAKEDA  

     
    PAPER

      Publicized:
    2016/01/14
      Page(s):
    1010-1018

    It is crucial to promote interdisciplinary research and recommend collaborators from different research fields via academic database analysis. This paper addresses the problem of characterizing researchers' interests with a set of diverse research topics found in a large-scale academic database. Specifically, we first use latent Dirichlet allocation to extract topics, as distributions over words, from a training dataset. Then, we convert the textual features of each researcher's publications into topic vectors and calculate the centroid of these vectors to summarize the researcher's interest as a single vector. In experiments conducted on CiNii Articles, the largest academic database in Japan, we show that the extracted topics reflect the diversity of the research fields in the database. The experimental results also indicate the applicability of the proposed topic representation to the author disambiguation problem.
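
The interest-representation step can be sketched as follows: each publication is mapped to a topic distribution (given directly below; the paper obtains these with LDA), and the centroid of those vectors summarizes the researcher as a single topic vector that can then be compared, e.g. by cosine similarity, for tasks like author disambiguation. The distributions are hypothetical:

```python
import math

def centroid(vectors):
    """Component-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

# Hypothetical topic distributions of three papers by one researcher.
papers = [[0.7, 0.2, 0.1], [0.6, 0.3, 0.1], [0.8, 0.1, 0.1]]
interest = centroid(papers)
assert abs(sum(interest) - 1.0) < 1e-9  # the centroid is still a distribution
# The summary vector is closer to an on-topic paper than to an off-topic one.
assert cosine(interest, [0.7, 0.2, 0.1]) > cosine(interest, [0.1, 0.2, 0.7])
```

For disambiguation, two name-sharing author records whose interest centroids are far apart in cosine similarity are unlikely to be the same person.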

  • How to Combine Translation Probabilities and Question Expansion for Question Classification in cQA Services

    Kyoungman BAE  Youngjoong KO  

     
    LETTER

      Publicized:
    2016/01/14
      Page(s):
    1019-1022

    This paper proposes a new question expansion method for question classification in cQA services. Each input consists of a question alone, whereas each training instance is a question-answer pair; input questions therefore often lack enough information for accurate classification. Since answers are strongly associated with their questions, we create a pseudo answer to expand each input question. Translation probabilities between questions and answers and a pseudo relevance feedback technique are used to generate the pseudo answer. As a result, we obtain significantly improved performance when the two approaches are effectively combined.
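    One simple reading of the pseudo-answer step is: for each question word, add its most probable answer-side translations. The translation table below is invented for illustration; in the paper such probabilities are learned from question-answer pairs and combined with pseudo relevance feedback.

```python
def pseudo_answer(question_words, trans_prob, top_n=2):
    """Expand a question with likely answer-side words using
    translation probabilities P(answer word | question word)."""
    answer = []
    for w in question_words:
        cands = sorted(trans_prob.get(w, {}).items(), key=lambda kv: -kv[1])
        answer.extend(a for a, _ in cands[:top_n])
    return answer

# Hypothetical probabilities; real values are estimated from Q-A pairs.
tp = {"install": {"download": 0.5, "setup": 0.3, "click": 0.2}}
print(pseudo_answer(["install"], tp))
```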

  • Special Section on Cyberworlds
  • FOREWORD Open Access

    Masayuki NAKAJIMA  

     
    FOREWORD

      Page(s):
    1023-1023
  • Body-Part Motion Synthesis System and Its Evaluation for Discovery Learning of Dance

    Asako SOGA  Bin UMINO  Yuho YAZAKI  Motoko HIRAYAMA  

     
    PAPER

      Publicized:
    2016/01/28
      Page(s):
    1024-1031

    This paper reports an assessment of the feasibility and practicality of a creation support system for contemporary dance e-learning. We developed a Body-part Motion Synthesis System (BMSS) that allows users to create choreographies by synthesizing body-part motions, with the aim of increasing the effect of learning contemporary dance choreography. Short created choreographies can be displayed as animations using 3DCG characters. The system targets students who are studying contemporary dance and is designed to promote discovery learning of contemporary dance. We conducted a series of evaluation experiments in which participants created contemporary dance choreographies, to verify the learning effectiveness of our system as a support system for discovery learning. In experiments with 26 students who created contemporary dances, we verified that BMSS is a helpful creation training tool for discovering new choreographic methods, new dance movements, and new awareness of their bodies.

  • A Kinect-Based System for Balance Rehabilitation of Stroke Patients

    Chung-Liang LAI  Chien-Ming TSENG  D. ERDENETSOGT  Tzu-Kuan LIAO  Ya-Ling HUANG  Yung-Fu CHEN  

     
    PAPER

      Publicized:
    2016/01/28
      Page(s):
    1032-1037

    A low-cost prototype Kinect-based rehabilitation system was developed for recovering the balance capability of stroke patients. A total of 16 stroke patients were recruited to participate in the study. After excluding 3 patients who failed to finish all of the rehabilitation sessions, the data of the remaining 13 patients were analyzed. The results exhibited a significant effect in recovering the patients' balance function after 3 weeks of balance training. Additionally, a questionnaire survey revealed that the designed system was perceived as effective and easy to operate.

  • Dense Light Transport for Relighting Computation Using Orthogonal Illumination Based on Walsh-Hadamard Matrix

    Isao MIYAGAWA  Yukinobu TANIGUCHI  

     
    PAPER

      Publicized:
    2016/01/28
      Page(s):
    1038-1051

    We propose a practical method that acquires dense light transports from unknown 3D objects by employing orthogonal illumination based on a Walsh-Hadamard matrix for relighting computation. We assume the presence of color crosstalk, which represents color mixing between projector pixels and camera pixels, and then describe the light transport matrix by using sets of the orthogonal illumination and the corresponding camera response. Our method handles not only direct reflection light but also global light radiated from the entire environment. Tests of the proposed method using real images show that orthogonal illumination is an effective way of acquiring accurate light transports from various 3D objects. We demonstrate a relighting test based on acquired light transports and confirm that our method outputs excellent relighting images that compare favorably with the actual images observed by the system.
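    The orthogonal illumination patterns come from a Walsh-Hadamard matrix, which can be built by the Sylvester construction; the sketch below verifies the row orthogonality the method relies on. (In a real projector-camera setup the -1 entries would have to be realized separately, e.g. with complementary patterns, an assumption not spelled out in the abstract.)

```python
def hadamard(n):
    """Sylvester construction of an n x n Walsh-Hadamard matrix;
    n must be a power of two. Rows are mutually orthogonal."""
    H = [[1]]
    while len(H) < n:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    return H

H = hadamard(4)
# Orthogonality check: H * H^T = n * I, the property the illumination uses.
for i in range(4):
    for j in range(4):
        dot = sum(H[i][k] * H[j][k] for k in range(4))
        assert dot == (4 if i == j else 0)
print(H[1])
```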

  • SSL Client Authentication with TPM

    Shohei KAKEI  Masami MOHRI  Yoshiaki SHIRAISHI  Masakatu MORII  

     
    PAPER

      Publicized:
    2016/01/28
      Page(s):
    1052-1061

    TPM-embedded devices can be used as authentication tokens by issuing certificates for signing keys generated by the TPM. The TPM generates an Attestation Identity Key (AIK) and a Binding Key (BK), both of which are RSA keys. The AIK is used to identify the TPM. The BK is used to encrypt data so that only a specific TPM can decrypt it. A TPM can be used for device authentication by linking an SSL client certificate to it. This paper proposes a method of AIK certificate issuance with OpenID and a method of SSL client certificate issuance to a specific TPM using the AIK and BK. In addition, the paper shows how to implement a device authentication system using the SSL client certificate linked to the TPM.

  • Application of Feature Engineering for Phishing Detection

    Wei ZHANG  Huan REN  Qingshan JIANG  

     
    PAPER

      Publicized:
    2016/01/28
      Page(s):
    1062-1070

    Phishing attacks target financial returns by luring Internet users into exposing their sensitive information. Phishing originated in e-mail fraud, but it has recently also spread through social networks and the short message service (SMS), making it even more widespread. Phishing attacks have drawn great attention due to their high volume and the heavy losses they cause, and many methods have been developed to fight them. However, most existing approaches suffer from low detection accuracy or high false positive (FP) rates, and phishing attacks continue to threaten Internet users. In this paper, we are concerned with feature engineering for improving classification performance in phishing web page detection. We propose a novel anti-phishing framework that employs feature engineering, including feature selection and feature extraction. First, we perform feature selection based on a genetic algorithm (GA) to divide features into critical and non-critical features. Then, the non-critical features are projected onto a new feature by feature extraction based on a two-stage projection pursuit (PP) algorithm. Finally, we take the critical features and the new feature as input data to construct the detection model. Unlike previous work, our anti-phishing framework does not simply eliminate the non-critical features but utilizes their projection in the classification process. Experimental results show that the proposed framework is effective in detecting phishing web pages.

  • Feature-Chain Based Malware Detection Using Multiple Sequence Alignment of API Call

    Hyun-Joo KIM  Jong-Hyun KIM  Jung-Tai KIM  Ik-Kyun KIM  Tai-Myung CHUNG  

     
    PAPER

      Publicized:
    2016/01/28
      Page(s):
    1071-1080

    Recent cyber-attacks utilize various malware as a means of attack for the attacker's malicious purposes: they aim to steal confidential information or seize control over major facilities after infiltrating the network of a target organization. Attackers generally create new malware, or many variants of existing malware, by using automatic malware creation tools that enable easy remote control over a target system and disturb trace-back of the attacks. This paper proposes a method for generating malware behavior patterns, as well as detection techniques, to detect known and even unknown malware efficiently. The behavior patterns of malware are generated by Multiple Sequence Alignment (MSA) of the API call sequences of malware, and we define these behavior patterns as a “feature-chain” of the malware. The initial generation of the feature-chain consists of extracting API call sequences with an API hooking library, classifying malware samples by similar behavior, and making representative sequences from the MSA results. Detection of malware is performed by measuring the similarity between the API call sequence of a target process (a suspicious executable) and the feature-chains of malware. By comparing with other existing methods, we proved the effectiveness of our proposed method, which is based on the Longest Common Subsequence (LCS) algorithm. We also showed that our method outperforms other antivirus systems, achieving 2.55 times their detection rate and 1.33 times their accuracy for malware detection.
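    The LCS-based similarity measurement can be sketched as follows. The API names and the normalization by chain length are illustrative, not the paper's exact scoring.

```python
def lcs_length(a, b):
    """Length of the longest common subsequence (classic dynamic program)."""
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i, x in enumerate(a, 1):
        for j, y in enumerate(b, 1):
            dp[i][j] = dp[i-1][j-1] + 1 if x == y else max(dp[i-1][j], dp[i][j-1])
    return dp[-1][-1]

def similarity(target_calls, feature_chain):
    """Hypothetical similarity: LCS length normalized by the chain length."""
    return lcs_length(target_calls, feature_chain) / len(feature_chain)

chain = ["CreateFile", "WriteFile", "RegSetValue", "CreateRemoteThread"]
suspect = ["CreateFile", "ReadFile", "WriteFile", "RegSetValue", "CreateRemoteThread"]
print(similarity(suspect, chain))  # the whole chain appears in order
```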

  • Hybrid Recovery-Based Intrusion Tolerant System for Practical Cyber-Defense

    Bumsoon JANG  Seokjoo DOO  Soojin LEE  Hyunsoo YOON  

     
    PAPER

      Publicized:
    2016/01/29
      Page(s):
    1081-1091

    Because virtual machines are recovered periodically regardless of whether malicious intrusions exist, proactive recovery-based Intrusion Tolerant Systems (ITSs) are being considered for mission-critical applications. However, the virtual replicas can easily be exposed to attacks during their working period, and proactive recovery-based ITSs are ineffective in eliminating the vulnerability of exposure time, which is closely related to service availability. To address these problems, we propose a novel hybrid recovery-based ITS that utilizes availability-driven recovery and dynamic cluster resizing. The availability-driven recovery method operates the recovery process both proactively and reactively so that the system gains shorter exposure times and higher success rates. The dynamic cluster resizing method reduces the system overhead that arises from dynamic workload fluctuations. Evaluation of the proposed ITS with various synthetic and real workloads using CloudSim showed that it guarantees higher availability and reliability, even under malicious intrusions such as DDoS attacks.

  • Regular Section
  • FXA: Executing Instructions in Front-End for Energy Efficiency

    Ryota SHIOYA  Ryo TAKAMI  Masahiro GOSHIMA  Hideki ANDO  

     
    PAPER-Computer System

      Publicized:
    2016/01/06
      Page(s):
    1092-1107

    Out-of-order superscalar processors have high performance but consume a large amount of energy for dynamic instruction scheduling. We propose a front-end execution architecture (FXA) for improving the energy efficiency of out-of-order superscalar processors. FXA has two execution units: an out-of-order execution unit (OXU) and an in-order execution unit (IXU). The OXU is the execution core of a common out-of-order superscalar processor. In contrast, the IXU consists only of functional units and a bypass network. The IXU is placed at the processor front end and executes instructions in order, functioning as a filter for the OXU: fetched instructions are first fed to the IXU and executed in order if they are ready to execute. Instructions executed in the IXU are removed from the instruction pipeline and are not executed in the OXU. The IXU includes no dynamic scheduling logic, so its energy consumption is low. Evaluation results show that FXA can execute more than 50% of instructions in the IXU, making it possible to shrink the energy-consuming OXU without incurring performance degradation. As a result, FXA achieves both high performance and low energy consumption. We evaluated FXA and compared it with conventional out-of-order/in-order superscalar processors modeled after the ARM big.LITTLE architecture. The results show that FXA achieves a geometric-mean performance improvement of 7.4% on the SPEC CPU INT 2006 benchmark suite relative to a conventional superscalar processor (big), while reducing energy consumption in the entire processor by 17%. The performance/energy ratio (the inverse of the energy-delay product) of FXA is 25% higher than that of a conventional superscalar processor (big) and 27% higher than that of a conventional in-order superscalar processor (LITTLE).

  • An Automatically Peak-Shift Control Design for Charging and Discharging of the Battery in an Ultrabook

    Chun-Hung CHENG  Ying-Wen BAI  

     
    PAPER-Computer System

      Publicized:
    2016/01/08
      Page(s):
    1108-1116

    Because electricity rates are higher during peak hours, this paper proposes a design for an ultrabook that automatically shifts the charging period to an off-peak period. In addition, the design sets an upper charge limit for the battery, which protects the battery and prevents it from remaining in a state of both high temperature and high voltage. The design uses a low-power embedded controller (EC) and a fuzzy logic controller (FLC) as the main control techniques, together with real-time clock (RTC) ICs. The sensing values of the EC and preset parameters are used to control the conversion of the AC/DC module. The user interface allows the user to set not only the peak/off-peak period but also the upper use limit of the battery.

  • Dependency-Based Extraction of Conditional Statements for Understanding Business Rules

    Tomomi HATANO  Takashi ISHIO  Joji OKADA  Yuji SAKATA  Katsuro INOUE  

     
    PAPER-Software Engineering

      Publicized:
    2016/01/08
      Page(s):
    1117-1126

    To maintain a business system, developers must understand the business rules implemented in the system. One important type is computational business rules, which define how an output value of a feature is computed from valid inputs. Unfortunately, understanding business rules is a tedious and error-prone activity. We propose a program-dependence analysis technique tailored to understanding computational business rules. Given a variable representing an output, the proposed technique extracts the conditional statements that may affect the computation of that output. To evaluate the usefulness of the technique, we conducted an experiment with eight developers in one company. The results confirm that the proposed technique enables developers to accurately identify conditional statements corresponding to computational business rules. Furthermore, we compare the number of conditional statements extracted by the proposed technique with that extracted by program slicing, and conclude that the proposed technique is, in general, more effective than program slicing.
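    The core idea, walking dependences backwards from an output variable and collecting the conditionals encountered, can be sketched as follows. This is a simplification: the paper's analysis is tailored beyond plain slicing, and the graph and names here are invented.

```python
def extract_conditions(deps, conds, output_var):
    """Collect conditional statements that may affect `output_var`.
    `deps` maps a statement to the statements it depends on (data or
    control dependence); `conds` is the set of conditional statements."""
    seen, stack, found = set(), [output_var], []
    while stack:
        node = stack.pop()
        if node in seen:
            continue
        seen.add(node)
        if node in conds:
            found.append(node)
        stack.extend(deps.get(node, []))
    return sorted(found)

# Toy dependence graph: the output `fee` depends on a conditional.
deps = {"fee": ["if_member", "base"], "if_member": ["member_flag"], "base": []}
print(extract_conditions(deps, {"if_member"}, "fee"))
```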

  • Elastic and Adaptive Resource Orchestration Architecture on 3-Tier Network Virtualization Model

    Masayoshi SHIMAMURA  Hiroaki YAMANAKA  Akira NAGATA  Katsuyoshi IIDA  Eiji KAWAI  Masato TSURU  

     
    PAPER-Information Network

      Publicized:
    2016/01/18
      Page(s):
    1127-1138

    Network virtualization environments (NVEs) are emerging to meet the increasingly diverse demands of Internet users; a virtual network (VN) can be constructed to accommodate each specific application service. In the future Internet, diverse service providers (SPs) will provide application services on their own VNs running across diverse infrastructure providers (InPs) that supply the physical resources in an NVE. To realize both efficient resource utilization and good QoS for each individual service in such environments, SPs should perform adaptive control of network and computational resources under dynamic and competitive resource sharing, instead of explicitly reserving sufficient physical resources for their VNs. Meanwhile, two novel concepts, software-defined networking (SDN) and network function virtualization (NFV), have emerged to facilitate efficient use of network and computational resources, flexible provisioning, network programmability, and unified management, which enable us to implement adaptive resource control. In this paper, we therefore propose an architectural design of network orchestration that enables SPs to maintain the QoS of their applications through efficient resource control on their VNs, by introducing a virtual network provider (VNP) between InPs and SPs as a 3-tier model and by integrating SDN and NFV functionalities into the NVE framework. We define new north-bound interfaces (NBIs) for resource requests, resource upgrades, resource programming, and alert notifications, while using the standard OpenFlow interfaces for resource control of users' traffic flows. The feasibility of the proposed architecture is demonstrated through network experiments using a prototype implementation and a sample application service on the nation-wide testbed networks JGN-X and RISE.

  • The Relevance Dependent Infinite Relational Model for Discovering Co-Cluster Structure from Relationships with Structured Noise

    Iku OHAMA  Hiromi IIDA  Takuya KIDA  Hiroki ARIMURA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2016/01/13
      Page(s):
    1139-1152

    Latent variable models for relational data enable us to extract the co-cluster structures underlying observed relational data. The Infinite Relational Model (IRM) is a well-known relational model for discovering co-cluster structures with an unknown number of clusters. The IRM assumes that the link probability between two objects (e.g., a customer and an item) depends only on their cluster assignments. However, relational models based on this assumption often extract many non-informative and unexpected clusters, because the underlying co-cluster structures in real-world relationships are often destroyed by structured noise that blurs the cluster structure stochastically depending on each pair of related objects. To overcome this problem, we propose an extended IRM that simultaneously estimates a denoised co-cluster structure and a structured noise component; that is, the proposed model jointly estimates the cluster assignment and noise level of each object. We also present the posterior probabilities needed to run collapsed Gibbs sampling for inference. Experiments on real-world datasets show that our model extracts a clear co-cluster structure. Moreover, we confirm that the estimated noise levels enable us to extract representative objects for each cluster.

  • Combining Human Action Sensing of Wheelchair Users and Machine Learning for Autonomous Accessibility Data Collection

    Yusuke IWASAWA  Ikuko EGUCHI YAIRI  Yutaka MATSUO  

     
    PAPER-Rehabilitation Engineering and Assistive Technology

      Publicized:
    2016/01/22
      Page(s):
    1153-1161

    The recent increase in the use of intelligent devices such as smartphones has enhanced the relationship between daily human behavior sensing and useful applications in ubiquitous computing. This paper proposes a novel method, inspired by personal sensing technologies, for collecting and visualizing road accessibility at lower cost than traditional data collection methods. To evaluate the methodology, we recorded the outdoor activities of nine wheelchair users for approximately one hour each using an accelerometer on an iPod touch and a camcorder, labeled the supervised data from the video by hand, and estimated the wheelchair actions as a measure of street-level accessibility in Tokyo. The system detected curb climbing, moving on tactile indicators, moving on slopes, and stopping, with F-scores of 0.63, 0.65, 0.50, and 0.91, respectively. In addition, we conducted experiments with an artificially limited amount of training data to investigate the number of samples required to estimate the target actions.

  • Automatic Recognition of Mycobacterium Tuberculosis Based on Active Shape Model

    Chao XU  Dongxiang ZHOU  Tao GUAN  Yongping ZHAI  Yunhui LIU  

     
    PAPER-Pattern Recognition

      Publicized:
    2016/01/08
      Page(s):
    1162-1171

    This paper presents the automatic recognition of Mycobacterium tuberculosis in Ziehl-Neelsen stained images captured by conventional light microscopy, which can be used in computer-aided diagnosis of tuberculosis. We propose a novel recognition method based on the active shape model. First, candidate bacillus objects are segmented by marker-based watershed transform. Next, a point distribution model of the object shape is proposed to label the landmarks on each object automatically. The active shape model is then built after aligning the training set with a weight matrix. The deformation regularities of the object shape are discovered and successfully applied in recognition without using geometric or other commonly used features. During this process, a width consistency constraint is combined with the shape parameter to improve recognition accuracy. Experimental results demonstrate that the proposed method yields high accuracy on images with different background colors: the recognition accuracy at the object and image levels is 92.37% and 97.91%, respectively.

  • Character-Position-Free On-Line Handwritten Japanese Text Recognition by Two Segmentation Methods

    Jianjuan LIANG  Bilan ZHU  Taro KUMAGAI  Masaki NAKAGAWA  

     
    PAPER-Pattern Recognition

      Publicized:
    2016/01/06
      Page(s):
    1172-1181

    This paper presents a recognition method for character-position-free on-line handwritten Japanese text patterns, which allows a user to overlay characters freely without confirming previously written characters. To develop the method, we first collected text patterns written without wrist or elbow support and without visual feedback, and then prepared large sets of character-position-free handwritten Japanese text patterns artificially from normally handwritten text patterns. The proposed method sets each off-stroke between real strokes as undecided and evaluates the segmentation probability with an SVM model. The optimal segmentation-recognition path can then be found effectively by a Viterbi search in the candidate lattice, combining the scores of character recognition, geometric features, and linguistic context with the segmentation scores from SVM classification. We tested this method on variously overlaid sample patterns, as well as on the collected handwritten patterns, and verified that its recognition rates match those of the latest recognizer for normally handwritten horizontal Japanese text, with no serious speed restriction in practical applications.

  • Using Reversed Sequences and Grapheme Generation Rules to Extend the Feasibility of a Phoneme Transition Network-Based Grapheme-to-Phoneme Conversion

    Seng KHEANG  Kouichi KATSURADA  Yurie IRIBE  Tsuneo NITTA  

     
    PAPER-Speech and Hearing

      Publicized:
    2016/01/06
      Page(s):
    1182-1192

    The automatic transcription of out-of-vocabulary words into their corresponding phoneme strings has been widely adopted for speech synthesis and spoken-term detection systems. To meet the challenges of grapheme-to-phoneme (G2P) conversion by combining various methods, this paper proposes a phoneme transition network (PTN)-based architecture for G2P conversion. The proposed method first builds a confusion network from multiple phoneme-sequence hypotheses generated by several G2P methods. It then determines the best final output phoneme from each block of phonemes in the generated network. Moreover, to extend the feasibility and improve the performance of the proposed PTN-based model, we introduce a novel use of right-to-left (reversed) grapheme-phoneme sequences along with grapheme-generation rules. Both techniques help not only to minimize the number of methods or source models required by the proposed architecture but also to increase the number of phoneme-sequence hypotheses without increasing the number of methods. The techniques therefore minimize the risk of combining accurate and inaccurate methods, which can readily degrade phoneme prediction. Evaluation results using various pronunciation dictionaries show that the proposed model, when trained using reversed grapheme-phoneme sequences, often outperformed the conventional left-to-right ordering. In addition, the evaluation demonstrates that the proposed PTN-based method for G2P conversion is more accurate than all tested baseline approaches.
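    A minimal stand-in for the PTN's block-wise decision is a majority vote over aligned hypotheses. The paper's actual scoring combines multiple G2P methods and both reading directions, and the alignment step that builds the confusion network is omitted here.

```python
from collections import Counter

def ptn_decode(hypotheses):
    """Pick the best phoneme per aligned block by majority vote over
    multiple G2P hypotheses (assumes hypotheses are already aligned)."""
    return [Counter(block).most_common(1)[0][0] for block in zip(*hypotheses)]

# Three hypothetical phoneme-sequence hypotheses for the same word.
hyps = [["f", "ow", "n"], ["f", "aa", "n"], ["f", "ow", "m"]]
print(ptn_decode(hyps))
```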

  • Fast Mode Decision Technique for HEVC Intra Prediction Based on Reliability Metric for Motion Vectors

    Chihiro TSUTAKE  Yutaka NAKANO  Toshiyuki YOSHIDA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2016/01/21
      Page(s):
    1193-1201

    This paper proposes a fast mode decision technique for intra prediction in High Efficiency Video Coding (HEVC) based on a reliability metric for motion vectors (RMMV). Since such a decision problem can be regarded as a kind of pattern classification, an efficient classifier is required to reduce computational complexity. This paper employs the RMMV as a classifier because the RMMV can efficiently categorize image blocks into flat (uniform), active, and edge blocks, and can also estimate the direction of an edge block. A local search over angular modes is introduced to further speed up the decision process. An experiment shows the advantage of our technique over other techniques.

  • Distributed Compressed Video Sensing with Joint Optimization of Dictionary Learning and l1-Analysis Based Reconstruction

    Fang TIAN  Jie GUO  Bin SONG  Haixiao LIU  Hao QIN  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2016/01/21
      Page(s):
    1202-1211

    Distributed compressed video sensing (DCVS), which combines the advantages of compressed sensing and distributed video coding, has been developed as a novel and powerful system to obtain an encoder with low complexity. Nevertheless, it is still unclear how to achieve effective video recovery by utilizing realistic signal characteristics as much as possible. To this end, we present a novel spatiotemporal dictionary learning (DL) based reconstruction method for DCVS, where both the DL model and the l1-analysis based recovery with correlation constraints are included in the minimization problem to achieve the joint optimization of sparse representation and signal reconstruction. In addition, a numerical algorithm based on the alternating direction method of multipliers (ADMM) is outlined for solving the underlying optimization problem. Simulation results demonstrate that the proposed method outperforms other methods, with 0.03-4.14 dB increases in PSNR and a 0.13-15.31 dB gain for non-key frames.

  • Efficient Local Feature Encoding for Human Action Recognition with Approximate Sparse Coding

    Yu WANG  Jien KATO  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2016/01/06
      Page(s):
    1212-1220

    Local spatio-temporal features are popular in the human action recognition task. In practice, they are usually coupled with a feature encoding approach, which helps to obtain video-level vector representations that can be used in learning and recognition. In this paper, we present an efficient local feature encoding approach called Approximate Sparse Coding (ASC). In an off-line learning phase, ASC computes sparse codes for a large collection of prototype local feature descriptors using Sparse Coding (SC); in the encoding phase, it looks up the precomputed sparse code of the nearest prototype for each to-be-encoded local feature using Approximate Nearest Neighbour (ANN) search. ASC thus shares the low dimensionality of SC and the high speed of ANN, both desired properties of a local feature encoding approach. ASC has been extensively evaluated on the KTH and HMDB51 datasets. We confirmed that it can efficiently encode a large quantity of local video features into discriminative low-dimensional representations.
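    The encoding phase can be sketched as a nearest-prototype lookup into a precomputed code table. Brute-force search stands in for ANN here, and the prototypes and sparse codes are made-up placeholders for the offline SC output.

```python
def nearest(descriptor, prototypes):
    """Index of the nearest prototype (brute force for clarity; the
    paper uses approximate nearest-neighbour search here)."""
    return min(range(len(prototypes)),
               key=lambda i: sum((a - b) ** 2 for a, b in zip(descriptor, prototypes[i])))

def encode(descriptor, prototypes, sparse_codes):
    """ASC encoding: reuse the precomputed sparse code of the nearest
    prototype instead of solving a sparse-coding problem online."""
    return sparse_codes[nearest(descriptor, prototypes)]

prototypes = [[0.0, 0.0], [1.0, 1.0]]              # hypothetical descriptors
sparse_codes = [[0.9, 0.0, 0.1], [0.0, 0.8, 0.2]]  # hypothetical offline SC output
print(encode([0.9, 1.1], prototypes, sparse_codes))
```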

  • Privacy Protection for Social Video via Background Estimation and CRF-Based Videographer's Intention Modeling

    Yuta NAKASHIMA  Noboru BABAGUCHI  Jianping FAN  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2016/01/13
      Page(s):
    1221-1233

    The recent popularization of social network services (SNSs), such as YouTube, Dailymotion, and Facebook, enables people to easily publish their personal videos taken with mobile cameras. However, at the same time, such popularity has raised a new problem: video privacy. In such social videos, the privacy of people, i.e., their appearances, must be protected, but naively obscuring all people might spoil the video content. To address this problem, we focus on videographers' capture intentions. In a social video, some persons are usually essential for the video content. They are intentionally captured by the videographers, called intentionally captured persons (ICPs), and the others are accidentally framed-in (non-ICPs). Videos containing the appearances of the non-ICPs might violate their privacy. In this paper, we developed a system called BEPS, which adopts a novel conditional random field (CRF)-based method for ICP detection, as well as a novel approach to obscure non-ICPs and preserve ICPs using background estimation. BEPS reduces the burden of manually obscuring the appearances of the non-ICPs before uploading the video to SNSs. Compared with conventional systems, the following are the main advantages of BEPS: (i) it maintains the video content, and (ii) it is immune to the failure of person detection; false positives in person detection do not violate privacy. Our experimental results successfully validated these two advantages.

  • Continuous Music-Emotion Recognition Based on Electroencephalogram

    Nattapong THAMMASAN  Koichi MORIYAMA  Ken-ichi FUKUI  Masayuki NUMAO  

     
    PAPER-Music Information Processing

      Publicized:
    2016/01/22
      Page(s):
    1234-1241

    Research on emotion recognition using the electroencephalogram (EEG) of subjects listening to music has become more active in the past decade. However, previous works did not consider emotional oscillations within a single musical piece. In this research, we propose a continuous music-emotion recognition approach based on brainwave signals. Taking into account the subject-dependent and time-varying characteristics of emotion, our experiment included self-reporting and continuous emotion annotation in the arousal-valence space. Fractal dimension (FD) and power spectral density (PSD) approaches were adopted to extract informative features from raw EEG signals, and emotion classification algorithms were then applied to discriminate binary classes of emotion. According to our experimental results, FD slightly outperformed PSD in both arousal and valence classification, and FD was found to have a higher correlation with emotion reports than PSD. In addition, continuous emotion recognition during music listening based on EEG was found to be an effective method for tracking oscillations in emotional reporting and provides an opportunity to better understand human emotional processes.

  • HaWL: Hidden Cold Block-Aware Wear Leveling Using Bit-Set Threshold for NAND Flash Memory

    Seon Hwan KIM  Ju Hee CHOI  Jong Wook KWAK  

     
    LETTER-Computer System

      Publicized:
    2016/01/13
      Page(s):
    1242-1245

    In this letter, we propose a novel wear leveling technique called Hidden cold block-aware Wear Leveling (HaWL), which uses a bit-set threshold. HaWL prolongs the lifetime of flash memory devices by using a bit array table for wear leveling. The bit array table records the history of block erasures over a period and distinguishes cold blocks from the rest. In addition, HaWL can reduce the size of the bit array table by using a one-to-many mode, in which one bit covers many blocks. Moreover, to prevent degradation of wear leveling in the one-to-many mode, HaWL uses a bit-set threshold (BST) to increase the accuracy of the cold block information. The performance results show that HaWL prolongs the lifetime of flash memory by up to 48% compared with previous wear leveling techniques in our experiments.
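    One plausible reading of the bookkeeping is sketched below: each counter covers a group of blocks (the one-to-many mode), and a group whose erase count over the period stays below the bit-set threshold (BST) is treated as cold. The class, its parameters, and the count-based simplification of the bit array are guesses from the abstract, not the paper's exact design.

```python
class ColdBlockFilter:
    """Sketch of one-to-many cold-block tracking with a BST.
    One slot covers `group` consecutive blocks; a group erased fewer
    than `bst` times in the current period is considered cold."""
    def __init__(self, num_blocks, group, bst):
        self.group, self.bst = group, bst
        self.counts = [0] * ((num_blocks + group - 1) // group)

    def record_erase(self, block):
        self.counts[block // self.group] += 1

    def is_cold(self, block):
        return self.counts[block // self.group] < self.bst

f = ColdBlockFilter(num_blocks=8, group=4, bst=2)
f.record_erase(0)
f.record_erase(1)  # group 0 erased twice in this period
print(f.is_cold(0), f.is_cold(5))
```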

  • A Meet-in-the-Middle Attack on Reduced-Round Kalyna-b/2b

    Riham ALTAWY  Ahmed ABDELKHALEK  Amr M. YOUSSEF  

     
    LETTER-Information Network

      Publicized:
    2016/01/22
      Page(s):
    1246-1250

    In this letter, we present a meet-in-the-middle attack on the 7-round reduced block cipher Kalyna-b/2b, which was approved as the new encryption standard of Ukraine (DSTU 7624:2014) in 2015. According to its designers, the cipher resists several cryptanalytic methods after the fifth and sixth rounds of the versions with block lengths of 128 and 256 bits, respectively. Our attack is based on the differential enumeration approach, where we carefully deploy a four-round distinguisher in the first four rounds to bypass the effect of the carry bits resulting from the prewhitening modular key addition. We also exploit the linear relation between consecutive odd- and even-indexed round keys, which enables us to attack seven rounds and recover all the round keys incrementally. The attack on Kalyna with a 128-bit block has a data complexity of 2^89 chosen plaintexts, a time complexity of 2^230.2, and a memory complexity of 2^202.64. The data, time, and memory complexities of our attack on Kalyna with a 256-bit block are 2^233, 2^502.2, and 2^170, respectively.
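    The differential enumeration attack itself is far more involved, but the generic meet-in-the-middle principle behind it, trading a large precomputed table (memory) for time, can be shown on a toy double encryption with two independent 8-bit keys. Everything below (the toy round function, the names) is our own illustration, not Kalyna.

    ```python
    def toy_enc(k, x):
        # an invertible 8-bit toy round function: NOT Kalyna
        return ((x + k) & 0xFF) ^ 0xA5

    def toy_dec(k, y):
        return ((y ^ 0xA5) - k) & 0xFF

    def mitm_double(p1, c1, p2, c2):
        """Recover (k1, k2) from c = toy_enc(k2, toy_enc(k1, p)) in ~2*2^8
        steps instead of the 2^16 of brute force."""
        # forward table over the first key: midpoint value -> candidate k1
        fwd = {}
        for k1 in range(256):
            fwd.setdefault(toy_enc(k1, p1), []).append(k1)
        # meet in the middle from the ciphertext side over the second key,
        # then filter surviving pairs with a second plaintext/ciphertext pair
        for k2 in range(256):
            for k1 in fwd.get(toy_dec(k2, c1), []):
                if toy_enc(k2, toy_enc(k1, p2)) == c2:
                    return k1, k2
        return None
    ```

    The 2^202.64 memory complexity quoted above plays the role of the `fwd` table here: storing precomputed midpoint states is what pushes the time complexity below exhaustive search.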

  • Efficient Subversion of Symmetric Encryption with Random Initialization Vector

    Joonsang BAEK  Ilsun YOU  

     
    LETTER-Information Network

      Publicized:
    2016/01/14
      Page(s):
    1251-1254

    This paper presents an efficient subverted symmetric encryption scheme that outputs a random initialization vector (IV). Compared with the existing scheme of the same kind in the literature, our attack provides a saboteur (big brother) with much faster recovery of the key used in a victim's symmetric encryption scheme. Our result implies that care must be taken when a symmetric encryption scheme with a random IV, such as randomized CBC, is deployed.
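    The paper's exact construction is not given in the abstract; the sketch below only illustrates the general subversion idea in this line of work: the "random" IV is secretly a pseudorandom permutation, keyed by a big-brother key, applied to a counter and a chunk of the victim's key. Without the big-brother key the IV looks uniform; with it, the key chunk is recovered by inverting the permutation. The four-round HMAC-based Feistel PRP and all names here are our own assumptions.

    ```python
    import hashlib
    import hmac

    def _f(kbb, rnd, half):
        # PRF round function keyed with the big-brother key kbb
        return hmac.new(kbb + rnd, half, hashlib.sha256).digest()[:8]

    def _xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def subverted_iv(kbb, counter, key_chunk):
        # IV = PRP_kbb(counter || 8 bytes of the victim's key): it looks
        # uniformly random to anyone without kbb
        l, r = counter.to_bytes(8, "big"), key_chunk
        for rnd in (b"0", b"1", b"2", b"3"):
            l, r = r, _xor(l, _f(kbb, rnd, r))
        return l + r

    def recover_key_chunk(kbb, iv):
        # big brother runs the Feistel network backwards
        l, r = iv[:8], iv[8:]
        for rnd in (b"3", b"2", b"1", b"0"):
            l, r = _xor(r, _f(kbb, rnd, l)), l
        return r  # the leaked 8 bytes of the victim's key
    ```

    In a kleptographic setting like this, each observed ciphertext leaks another chunk of the key through its IV, which is why "faster recovery" in the abstract means fewer observed encryptions for big brother.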

  • Properties of Generalized Feedback Shift Registers for Secure Scan Design

    Hideo FUJIWARA  Katsuya FUJIWARA  

     
    LETTER-Dependable Computing

      Publicized:
    2016/01/21
      Page(s):
    1255-1258

    In our previous work [12], [13], we introduced generalized feed-forward shift registers (GF2SR, for short) and applied them to secure and testable scan design. In this paper, we introduce another class of generalized shift registers, called generalized feedback shift registers (GFSR, for short), and consider the properties of GFSR that are useful for secure scan design. We present how to control/observe a GFSR so that scan-in and scan-out operations can be overlapped in the same way as in conventional scan testing. The testability and security of scan designs using GFSR are considered, and the cardinality of each class is clarified. We also present how to design strongly secure GFSR, as well as the GF2SR considered in [13].
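    The GFSR class defined in the paper is more general than any single example, but the secure-scan intuition can be shown with a minimal toy: a shift register where the last cell may feed back into earlier cells. With no feedback taps it behaves as a plain scan chain; with a tap enabled, the scanned-out sequence no longer equals the scanned-in data, so an attacker without the register's configuration cannot read scan data directly. The simulation below is our own illustration, not the paper's construction.

    ```python
    def step(state, scan_in, taps):
        # one clock: shift left-to-right, with the last cell's output
        # XORed into cell i's input wherever taps[i] == 1
        out = state[-1]
        new = [(scan_in if i == 0 else state[i - 1]) ^ (taps[i] & out)
               for i in range(len(state))]
        return new, out

    def scan_in_out(bits, taps):
        # scan the bits in, then keep clocking to scan them back out
        state = [0] * len(bits)
        for b in bits:
            state, _ = step(state, b, taps)
        outs = []
        for _ in bits:
            state, o = step(state, 0, taps)
            outs.append(o)
        return outs
    ```

    Making scan-in and scan-out overlap correctly despite such feedback, while keeping the structure secret, is exactly the control/observe problem the paper addresses.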

  • Nonnegative Component Representation with Hierarchical Dictionary Learning Strategy for Action Recognition

    Jianhong WANG  Pinzheng ZHANG  Linmin LUO  

     
    LETTER-Pattern Recognition

      Publicized:
    2016/01/13
      Page(s):
    1259-1263

    Nonnegative component representation (NCR) is a mid-level representation based on nonnegative matrix factorization (NMF). Recently, it has attracted much attention and achieved encouraging results in action recognition. In this paper, we propose a novel hierarchical dictionary learning strategy (HDLS) for NMF to improve the performance of NCR. Considering the variability of action classes, HDLS clusters similar classes into groups and forms a two-layer hierarchical class model. The groups in the first layer are disjoint, while the classes within each group in the second layer are correlated. HDLS accounts for the differences between the two layers by using a different dictionary learning method for each: discriminant class-specific NMF for the first layer and discriminant joint-dictionary NMF for the second layer. The proposed approach is extensively tested on three public datasets, and the experimental results demonstrate the effectiveness and superiority of NCR with HDLS for large-scale action recognition.
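    The discriminant NMF variants used in the two layers are not specified in the abstract, but both build on the basic NMF step that underlies NCR: factoring a nonnegative data matrix V into a dictionary W and nonnegative components H. A minimal sketch with Lee-Seung multiplicative updates (our choice of solver, not necessarily the paper's):

    ```python
    import numpy as np

    def nmf(V, r, n_iter=500, seed=0, eps=1e-9):
        """Basic NMF: V ~= W @ H with all factors nonnegative.
        Multiplicative updates keep nonnegativity automatically because
        every factor is only ever multiplied by nonnegative ratios."""
        rng = np.random.default_rng(seed)
        m, n = V.shape
        W = rng.random((m, r)) + eps
        H = rng.random((r, n)) + eps
        for _ in range(n_iter):
            H *= (W.T @ V) / (W.T @ W @ H + eps)
            W *= (V @ H.T) / (W @ H @ H.T + eps)
        return W, H
    ```

    In the hierarchical strategy, a dictionary W would be learned per group in the first layer and jointly across the correlated classes of each group in the second layer.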

  • Feature-Based On-Line Object Tracking Combining Both Keypoints and Quasi-Keypoints Matching

    Quan MIAO  Chun ZHANG  Long MENG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2016/01/21
      Page(s):
    1264-1267

    This paper proposes a novel object tracking method based on online boosting. The online boosting technique is combined with local features to treat tracking as a keypoint matching problem. First, we improve matching reliability by exploiting the statistical repeatability of local features. In addition, we propose 2D scale- and rotation-invariant quasi-keypoint matching to further improve matching efficiency. Benefiting from the statistical repeatability of SURF features and the complementary quasi-keypoint matching technique, we can easily find reliable matching pairs and thus perform accurate and stable tracking. Experimental results show that the proposed method achieves better performance than previously reported trackers.
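    The quasi-keypoint matching step is the paper's contribution and is not reproduced here; the sketch below only shows the generic reliability filter that keypoint-matching trackers of this kind build on: brute-force nearest-neighbour descriptor matching with Lowe's ratio test, in plain NumPy.

    ```python
    import numpy as np

    def ratio_match(desc_a, desc_b, ratio=0.8):
        # for each descriptor in desc_a, accept its nearest neighbour in
        # desc_b only when it is clearly closer than the second-nearest:
        # Lowe's ratio test, a standard filter for reliable matches
        matches = []
        for i, d in enumerate(desc_a):
            dist = np.linalg.norm(desc_b - d, axis=1)
            nearest, second = np.argsort(dist)[:2]
            if dist[nearest] < ratio * dist[second]:
                matches.append((i, int(nearest)))
        return matches
    ```

    Ambiguous keypoints (two near-equidistant candidates) are dropped rather than matched, which is one way "statistical repeatability" translates into fewer false pairs for the tracker.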

  • Object Tracking with Embedded Deformable Parts in Dynamic Conditional Random Fields

    Suofei ZHANG  Zhixin SUN  Xu CHENG  Lin ZHOU  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2016/01/19
      Page(s):
    1268-1271

    This work presents an object tracking framework based on the integration of Deformable Part based Models (DPMs) and Dynamic Conditional Random Fields (DCRF). In this framework, we propose a novel DCRF-based way to track an object and its details at multiple resolutions simultaneously. Meanwhile, we tackle drastic variations in target appearance, such as pose, view, scale, and illumination changes, with DPMs. To embed DPMs into the DCRF, we design specific temporal potential functions between vertices by explicitly formulating deformation and partial occlusion, respectively. Furthermore, temporal transition functions between mixture models bring higher robustness to perspective and pose changes. To evaluate the efficacy of the proposed method, quantitative tests on six challenging video sequences are conducted and the results analyzed. Experimental results indicate that the method effectively addresses serious problems in object tracking and performs favorably against state-of-the-art trackers.

  • Spatial and Anatomical Regularization Based on Multiple Kernel Learning for Neuroimaging Classification

    YingJiang WU  BenYong LIU  

     
    LETTER-Biological Engineering

      Publicized:
    2016/01/13
      Page(s):
    1272-1274

    Recently, a high-dimensional classification framework was proposed to introduce spatial and anatomical priors into the classical single-kernel support vector machine optimization scheme for brain image analysis, wherein the sequential minimal optimization (SMO) training algorithm is adopted. However, to satisfy the optimization conditions required in the single-kernel case, it is unreasonably assumed that the spatial regularization parameter equals the anatomical one. In this letter, this approach is improved by combining the SMO algorithm with multiple kernel learning to avoid that assumption and optimally estimate the two parameters. The improvement is demonstrated by comparative experimental results on the classification of Alzheimer's patients and elderly controls.
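    The SMO-based MKL optimization itself is beyond an abstract-level sketch, but the core fact the letter relies on is simple: a nonnegative-weight combination of valid kernels is itself a valid kernel, so the spatial and anatomical regularizers can carry independent weights instead of one shared parameter. The RBF base kernels below are hypothetical stand-ins for the spatial and anatomical kernels, and all names are our own.

    ```python
    import numpy as np

    def rbf_kernel(X, gamma):
        # Gram matrix of a Gaussian (RBF) kernel over the rows of X
        sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * sq)

    def combined_kernel(kernels, betas):
        # MKL combination: a conic (nonnegative) sum of positive
        # semidefinite Gram matrices is again positive semidefinite,
        # so the combined K can be fed to any standard SVM solver
        return sum(b * K for b, K in zip(betas, kernels))
    ```

    Learning the weights (here fixed) jointly with the SVM dual variables is what the SMO + MKL combination in the letter provides.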