
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor

    0.72

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4


Volume E101-D No.11 (Publication Date: 2018/11/01)

    Special Section on Information and Communication System Security
  • FOREWORD Open Access

    Atsushi KANAI  

     
    FOREWORD

      Page(s):
    2559-2560
  • An Overview of Cyber Security for Connected Vehicles Open Access

    Junko TAKAHASHI  

     
    INVITED PAPER

      Publicized:
    2018/08/22
      Page(s):
    2561-2575

    The demand for and the scope of connected services have grown rapidly in many industries such as electronic appliances, robotics, and industrial automation. In the automotive field, including connected vehicles, different types of connected services have become available; they provide convenience and comfort to users while yielding new business opportunities. With the advent of connected vehicles, the threat of cyber attacks has become a serious issue, and protection methods against these attacks are urgently needed to provide safe and secure connected services. Since 2017, attack methods have become more sophisticated through different attack surfaces such as navigation systems and telematics modules, and security requirements to circumvent such attacks have begun to be established. Individual threats have been addressed previously; however, there are few reports that provide an overview of cyber security related to connected vehicles. This paper gives our perspective on cyber security for connected vehicles based on a survey of recent studies related to vehicle security. To introduce these studies, the environment surrounding connected vehicles is classified into three categories: inside the vehicle, communications between the back-end systems and vehicles, and the back-end systems. In each category, this paper introduces recent trends in cyber attacks and the protection requirements that should be developed for connected services. We show that overall security covering the three categories must be considered, because vehicle security is jeopardized if even one of the categories is left uncovered. We believe that this paper will further contribute to the development of all service systems related to connected vehicles, including autonomous vehicles, and to the investigation of cyber security against these attacks.

  • Towards Finding Code Snippets on a Question and Answer Website Causing Mobile App Vulnerabilities

    Hiroki NAKANO  Fumihiro KANEI  Yuta TAKATA  Mitsuaki AKIYAMA  Katsunari YOSHIOKA  

     
    PAPER-Mobile Application and Web Security

      Publicized:
    2018/08/22
      Page(s):
    2576-2583

    Android app developers sometimes copy code snippets posted on a question-and-answer (Q&A) website and use them in their apps. However, if a code snippet has vulnerabilities, Android apps containing the vulnerable snippet could have the same vulnerabilities. Despite this, the effect of such vulnerable snippets on Android apps has not been investigated in depth. In this paper, we investigate the correspondence between vulnerable code snippets and vulnerable apps. We collect code snippets from a Q&A website, extract possibly vulnerable snippets, and calculate the similarity between those snippets and the bytecode of vulnerable apps. Our experimental results show that 15.8% of all evaluated apps with SSL implementation vulnerabilities (improper hostname verification), 31.7% with SSL certificate verification vulnerabilities, and 3.8% with WebView remote code execution vulnerabilities contain possibly vulnerable code snippets from Stack Overflow. In the worst case, a single problematic snippet caused 4,844 apps to contain a vulnerability, accounting for 31.2% of all collected apps with that vulnerability.
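
    To make the matching step concrete, the following is a minimal Python sketch of snippet-to-code similarity under stated assumptions: a crude token-set Jaccard metric and a fixed threshold stand in for the paper's actual similarity calculation between snippets and app bytecode, and all names are illustrative.

      import re

      def tokens(source: str) -> set:
          # crude identifier/operator tokenizer
          return set(re.findall(r"[A-Za-z_]\w*|[{}()\[\];.=!<>+-]", source))

      def jaccard(a: set, b: set) -> float:
          return len(a & b) / len(a | b) if a | b else 0.0

      def possibly_contains(snippet: str, decompiled_method: str,
                            threshold: float = 0.8) -> bool:
          # flag the app when the snippet closely matches a decompiled method
          return jaccard(tokens(snippet), tokens(decompiled_method)) >= threshold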

  • Understanding the Inconsistency between Behaviors and Descriptions of Mobile Apps

    Takuya WATANABE  Mitsuaki AKIYAMA  Tetsuya SAKAI  Hironori WASHIZAKI  Tatsuya MORI  

     
    PAPER-Mobile Application and Web Security

      Publicized:
    2018/08/22
      Page(s):
    2584-2599

    Permission warnings and privacy policy enforcement are widely used to inform mobile app users of privacy threats. These mechanisms disclose information about the use of privacy-sensitive resources such as user location or the contact list. However, it has been reported that very few users pay attention to these mechanisms during installation. Instead, a user may focus on a more user-friendly source of information: the text description, which is written by a developer who has an incentive to attract user attention. When a user searches for an app in a marketplace, his/her query keywords are generally matched against the text descriptions of mobile apps. Then, users review the search results, often by reading the text descriptions; i.e., text descriptions are associated with user expectations. Given these observations, this paper aims to address the following research question: What are the primary reasons that text descriptions of mobile apps fail to refer to the use of privacy-sensitive resources? To answer the research question, we performed an empirical large-scale study on a huge volume of apps with our ACODE (Analyzing COde and DEscription) framework, which combines static code analysis and text analysis. We developed lightweight techniques so that we can handle hundreds of thousands of distinct text descriptions. We note that our text analysis technique does not require manually labeled descriptions; hence, it enables us to conduct a large-scale measurement study without expensive labeling tasks. Our analysis of 210,000 apps, including free and paid ones, with multilingual text descriptions collected from official and third-party Android marketplaces revealed four primary factors associated with the inconsistencies between text descriptions and the use of privacy-sensitive resources: (1) existence of app building services/frameworks that tend to add API permissions/code unnecessarily, (2) existence of prolific developers who publish many applications that unnecessarily install permissions and code, (3) existence of secondary functions that tend to go unmentioned, and (4) existence of third-party libraries that access privacy-sensitive resources. We believe that these findings will be useful for improving users' awareness of privacy on mobile software distribution platforms.

  • Identifying Evasive Code in Malicious Websites by Analyzing Redirection Differences

    Yuta TAKATA  Mitsuaki AKIYAMA  Takeshi YAGI  Takeo HARIU  Kazuhiko OHKUBO  Shigeki GOTO  

     
    PAPER-Mobile Application and Web Security

      Publicized:
    2018/08/22
      Page(s):
    2600-2611

    Security researchers and vendors detect malicious websites based on several website features extracted by honeyclient analysis. However, web-based attacks continue to grow more sophisticated along with the development of countermeasure techniques. Attackers detect the honeyclient and evade analysis using sophisticated JavaScript code. The evasive code indirectly identifies vulnerable clients by abusing the differences among JavaScript implementations. Attackers deliver malware only to targeted clients on the basis of the evasion results while avoiding honeyclient analysis. Therefore, we face the problem that honeyclients cannot analyze such malicious websites. Nevertheless, we can observe the nature of the evasion: the results of accessing malicious websites with targeted clients differ from those obtained with honeyclients. In this paper, we propose a method of extracting evasive code by leveraging these differences to investigate current evasion techniques. Our method analyzes HTTP transactions of the same website obtained using two types of clients: a real browser as a targeted client and a browser emulator as a honeyclient. As a result of evaluating our method with 8,467 JavaScript samples executed in 20,272 malicious websites, we discovered previously unknown evasion techniques that abuse the differences among JavaScript implementations. These findings will contribute to improving the analysis capabilities of conventional honeyclients.
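
    The core differential idea can be sketched in a few lines of Python; the transaction collection itself (browser automation and proxy logging) is assumed to exist, and the plain set difference below is a simplification of the paper's analysis.

      def candidate_evasive_transactions(browser_log, emulator_log):
          """Each log is a set of (method, url) pairs observed for one website."""
          only_browser = browser_log - emulator_log
          only_emulator = emulator_log - browser_log
          # Divergence in either direction hints that JavaScript branched on
          # implementation differences between the two client types.
          return only_browser, only_emulator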

  • A Secure In-Depth File System Concealed by GPS-Based Mounting Authentication for Mobile Devices

    Yong JIN  Masahiko TOMOISHI  Satoshi MATSUURA  Yoshiaki KITAGUCHI  

     
    PAPER-Mobile Application and Web Security

      Publicized:
    2018/08/22
      Page(s):
    2612-2621

    Data breaches and data destruction attacks have become critical security threats to the ICT (Information and Communication Technology) infrastructure. Both Internet service providers and users suffer from cyber threats, especially those to confidential data and private information. Human social activities require people to move while carrying confidential data, and data breaches often happen during transportation. Internet connectivity and cryptographic technology have made the use of confidential data much more secure. However, even with the high deployment rate of the Internet infrastructure, concerns about a lack of Internet connectivity make people carry data on their mobile devices. In this paper, we describe the main patterns of data breach that occur on mobile devices and propose a secure in-depth file system concealed by GPS-based mounting authentication to mitigate data breaches on mobile devices. In the proposed in-depth file system, data can be stored based on the level of credential with a corresponding authentication policy, and the mounting operation succeeds only at designated locations. We implemented a prototype system using VeraCrypt and Perl and confirmed through evaluations at two locations that the in-depth file system works exactly as expected. The contribution of this paper includes the clarification that GPS-based mounting authentication for a file system can reduce the risk of data breach for mobile devices, and a realization of the prototype system.
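
    As a rough illustration of GPS-based mounting authentication, this Python sketch gates a mount operation on the distance between the current GPS fix and a designated location; the radius, names, and policy are hypothetical, not the paper's VeraCrypt/Perl implementation.

      from math import radians, sin, cos, asin, sqrt

      def haversine_m(lat1, lon1, lat2, lon2):
          # great-circle distance in meters
          R = 6371000.0
          dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
          a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
          return 2 * R * asin(sqrt(a))

      def may_mount(fix, designated, radius_m=100.0):
          # allow mounting only near the designated location
          return haversine_m(fix[0], fix[1], designated[0], designated[1]) <= radius_m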

  • Automatically Generating Malware Analysis Reports Using Sandbox Logs

    Bo SUN  Akinori FUJINO  Tatsuya MORI  Tao BAN  Takeshi TAKAHASHI  Daisuke INOUE  

     
    PAPER-Network Security

      Publicized:
    2018/08/22
      Page(s):
    2622-2632

    Analyzing a malware sample requires much more time and cost than creating it. To understand the behavior of a given malware sample, security analysts often make use of API call logs collected by dynamic malware analysis tools such as sandboxes. As the log generated for a malware sample can become tremendously large, inspecting it is time-consuming. Meanwhile, antivirus vendors usually publish malware analysis reports (vendor reports) on their websites. These reports are the results of careful analysis done by security experts. The problem is that, even though such analyzed examples exist for malware samples, associating the vendor reports with the sandbox logs is difficult. This prevents security analysts from retrieving useful information described in vendor reports. To address this issue, we developed a system called AMAR-Generator that aims to automate the generation of malware analysis reports based on sandbox logs by making use of existing vendor reports. Aiming at a convenient assistant tool for security analysts, our system employs techniques including template matching, API behavior mapping, and a malicious behavior database to produce concise human-readable reports that describe the malicious behaviors of malware programs. Through a performance evaluation, we first demonstrate that AMAR-Generator can generate human-readable reports that a security analyst can use as the first step of malware analysis. We also demonstrate that AMAR-Generator can identify the malicious behaviors conducted by malware from the sandbox logs; the detection rates are up to 96.74%, 100%, and 74.87% on the sandbox logs collected in 2013, 2014, and 2015, respectively. We also show that it can detect malicious behaviors from unknown types of sandbox logs.
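
    The API-behavior-mapping step can be pictured with a toy Python sketch in which a small behavior database translates observed API calls into report sentences; the rules shown are invented examples, not entries from the actual malicious behavior database.

      BEHAVIOR_DB = {
          "RegSetValueExW": "modifies a registry value (possible persistence)",
          "CreateRemoteThread": "injects code into another process",
          "InternetOpenUrlW": "contacts a remote host over HTTP",
      }

      def summarize(api_calls):
          # map each known API call in the sandbox log to a report sentence
          report = [f"The sample {BEHAVIOR_DB[api]}." for api in api_calls if api in BEHAVIOR_DB]
          return "\n".join(report) or "No known malicious behavior matched."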

  • Design and Implementation of SDN-Based Proactive Firewall System in Collaboration with Domain Name Resolution

    Hiroya IKARASHI  Yong JIN  Nariyoshi YAMAI  Naoya KITAGAWA  Kiyohiko OKAYAMA  

     
    PAPER-Network Security

      Publicized:
    2018/08/22
      Page(s):
    2633-2643

    Security facilities such as firewalls and IDS/IPS (Intrusion Detection System/Intrusion Prevention System) have become fundamental solutions against cyber threats. With the rapid change of cyber attack tactics, detailed investigations such as DPI (Deep Packet Inspection) and SPI (Stateful Packet Inspection) of incoming traffic become necessary, but they also decrease network throughput. In this paper, we propose an SDN (Software Defined Network)-based proactive firewall system that collaborates with domain name resolution to solve this problem. The system consists of two firewall units (lightweight and normal), and a proper unit is assigned to check the client of incoming traffic through the collaboration of the SDN controller and an internal authoritative DNS server. The internal authoritative DNS server obtains the client IP address using the EDNS (Extension Mechanisms for DNS) Client Subnet option from the external DNS full resolver during the name resolution stage and notifies the SDN controller of the client IP address. By checking the client IP address against a whitelist and a blacklist, the SDN controller assigns a proper firewall unit to investigate the incoming traffic from that client. Consequently, incoming traffic from a trusted client is directed to the lightweight firewall unit, while traffic from other clients goes to the normal firewall unit. As a result, incoming traffic can be distributed properly to the firewall units and congestion can be mitigated. We implemented a prototype system and evaluated its performance in a local experimental network. Based on the results, we confirmed that the prototype system exhibited the expected features and acceptable performance when there was no flooding attack. We also confirmed that the prototype system showed better performance than a conventional firewall system under an ICMP flooding attack.
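
    A minimal Python sketch of the controller-side decision, assuming the internal authoritative DNS server has already notified the controller of the client IP taken from the EDNS Client Subnet option; install_route() is a hypothetical stand-in for flow-rule installation.

      def assign_firewall(client_ip, whitelist, blacklist, install_route):
          if client_ip in whitelist:
              install_route(client_ip, unit="lightweight")  # trusted: lightweight checks
          else:
              # blacklisted and unknown clients both get full DPI/SPI inspection
              install_route(client_ip, unit="normal")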

  • Ad-hoc Analytical Framework of Bitcoin Investigations for Law Enforcement

    Hiroki KUZUNO  Giannis TZIAKOURIS  

     
    PAPER-Forensics and Risk Analysis

      Publicized:
    2018/08/22
      Page(s):
    2644-2657

    Bitcoin is the leading cryptocurrency in the world, with a total market cap of nearly USD 33 billion [1] and 370,000 transactions recorded daily [2]. Pseudo-anonymous, decentralized peer-to-peer electronic cash systems such as Bitcoin have caused a paradigm shift in the way people conduct financial transactions and purchase goods. Although cryptocurrencies enable users to securely and anonymously exchange money, they can also facilitate illegal criminal activities. Therefore, it is imperative that law enforcement agencies develop appropriate analytical processes that allow them to identify and investigate criminal activities on the Blockchain (a distributed ledger). In this paper, INTERPOL, through the INTERPOL Global Complex for Innovation, proposes a Bitcoin analytical framework and a software system that assist law enforcement agencies in the real-time analysis of the Blockchain while providing digital crime analysts with tracing and visualization capabilities. By doing so, it becomes feasible to render transactions decipherable and comprehensible for law enforcement investigators and prosecutors. The proposed solution is evaluated against three criminal case studies linked to Darknet markets, ransomware, and DDoS extortion.

  • Modeling Attack Activity for Integrated Analysis of Threat Information

    Daiki ITO  Kenta NOMURA  Masaki KAMIZONO  Yoshiaki SHIRAISHI  Yasuhiro TAKANO  Masami MOHRI  Masakatu MORII  

     
    PAPER-Forensics and Risk Analysis

      Publicized:
    2018/08/22
      Page(s):
    2658-2664

    Cyber attacks targeting specific victims use multiple intrusion routes and various attack methods. To combat such diversified cyber attacks, Threat Intelligence is attracting attention. Attack activities, vulnerability information, and other threat information are gathered, analyzed, and organized into threat intelligence, which enables organizations to understand their risks. Integrated analysis of the threat information is needed to compose such threat intelligence. Threat information can be found in incident reports published by security vendors. However, it is difficult to analyze and compare their reports because they are described in various formats defined by each vendor. Therefore, in this paper, we apply a modeling framework for analyzing and deriving the relevance of the reports from the viewpoints of similarity and relations between the models. This paper presents the procedure for modeling incident information described in the reports. Moreover, as case studies, we apply the modeling method to some actual incident reports and compare their models.

  • Model Inversion Attacks for Online Prediction Systems: Without Knowledge of Non-Sensitive Attributes

    Seira HIDANO  Takao MURAKAMI  Shuichi KATSUMATA  Shinsaku KIYOMOTO  Goichiro HANAOKA  

     
    PAPER-Forensics and Risk Analysis

      Publicized:
    2018/08/22
      Page(s):
    2665-2676

    The number of IT services that use machine learning (ML) algorithms is growing continuously and rapidly, and many of them are used in practice to make some type of prediction from personal data. Not surprisingly, due to this sudden boom in ML, the way personal data are handled in ML systems is starting to raise serious privacy concerns that were previously unconsidered. Recently, Fredrikson et al. [USENIX 2014][CCS 2015] proposed a novel attack against ML systems called the model inversion attack, which aims to infer sensitive attribute values of a target user. In their work, for the model inversion attack to be successful, the adversary is required to obtain two types of information concerning the target user prior to the attack: the output value (i.e., prediction) of the ML system and all of the non-sensitive values used to learn the output. Therefore, although the attack does raise new privacy concerns, since the adversary is required to know all of the non-sensitive values in advance, it is not completely clear how much risk is incurred by the attack. In particular, even though users may regard these values as non-sensitive, it may be difficult for the adversary to obtain all of the non-sensitive attribute values prior to the attack, which would render the attack invalid. The goal of this paper is to quantify the risk of model inversion attacks in the case when non-sensitive attributes of a target user are not available to the adversary. To this end, we first propose a general model inversion (GMI) framework, which models the amount of auxiliary information available to the adversary. Our framework captures the model inversion attack of Fredrikson et al. as a special case, while also capturing model inversion attacks that infer sensitive attributes without knowledge of non-sensitive attributes. For the latter attack, we provide a general methodology for inferring sensitive attributes of a target user without knowledge of non-sensitive attributes. At a high level, we use the data poisoning paradigm in a conceptually novel way: we inject malicious data into the ML system to modify the internal ML model into a target ML model, a special type of ML model that allows one to perform model inversion attacks without knowledge of non-sensitive attributes. Finally, following our general methodology, we cast ML systems that internally use linear regression models into our GMI framework and propose a concrete algorithm for model inversion attacks that does not require knowledge of the non-sensitive attributes. We show the effectiveness of our model inversion attack through an experimental evaluation using two real data sets.
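
    For context, the baseline attack being generalized can be written in a few lines: with a linear regression model, a known prediction, and all non-sensitive attributes known, the sensitive attribute follows in closed form. This Python sketch shows only that baseline; the paper's contribution is removing the known-attributes requirement via data poisoning. Variable names are illustrative.

      def invert_sensitive(y, weights, bias, known, sensitive_idx):
          """Solve y = bias + sum_i weights[i] * x[i] for the one unknown x[i].

          known maps every attribute index except sensitive_idx to its value.
          """
          residual = y - bias - sum(weights[i] * v for i, v in known.items())
          return residual / weights[sensitive_idx]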

  • Tag-KEM/DEM Framework for Public-Key Encryption with Non-Interactive Opening

    Yusuke SAKAI  Takahiro MATSUDA  Goichiro HANAOKA  

     
    PAPER-Cryptographic Techniques

      Publicized:
    2018/08/22
      Page(s):
    2677-2687

    In a large-scale information-sharing platform, such as a cloud storage, it is often required not only to securely protect sensitive information but also to recover it in a reliable manner. Public-key encryption with non-interactive opening (PKENO) is considered a suitable cryptographic tool for this requirement. This primitive is an extension of public-key encryption that enables a receiver to provide a non-interactive proof confirming that a given ciphertext decrypts to some public plaintext. In this paper, we present a Tag-KEM/DEM framework for PKENO. In particular, we define a new cryptographic primitive called a Tag-KEM with non-interactive opening (Tag-KEMNO) and prove the KEM/DEM composition theorem for this primitive, which ensures that a key encapsulation mechanism (KEM) and a data encapsulation mechanism (DEM) can, under certain conditions, be combined to form a secure PKENO scheme. This theorem provides a secure way of combining a Tag-KEMNO scheme with a DEM scheme to construct a secure PKENO scheme. Using this framework, we explain the essence of existing constructions of PKENO. Furthermore, we present four constructions of Tag-KEMNO, which yield four PKENO constructions. These PKENO constructions coincide with the existing ones, thereby explaining their essence. In addition, our Tag-KEMNO framework enables us to expand the plaintext space of a PKENO scheme. Some of the previous PKENO schemes can only encrypt plaintexts of restricted length, and there had been no known way to expand this restricted plaintext space to the space of arbitrary-length plaintexts. Using our framework, we can obtain a PKENO scheme with an unbounded-length plaintext space by modifying and adapting such a PKENO scheme with a bounded-length plaintext space.

  • Zero-Knowledge Identification Scheme Using LDPC Codes

    Haruka ITO  Masanori HIROTOMO  Youji FUKUTA  Masami MOHRI  Yoshiaki SHIRAISHI  

     
    PAPER-Cryptographic Techniques

      Publicized:
    2018/08/22
      Page(s):
    2688-2697

    Recently, IoT-compatible products have become popular, and various kinds of devices now comply with IoT standards. In these devices, cryptosystems and authentication are often not handled properly, and security measures for IoT devices are insufficient. Authentication for IoT devices must be power-saving and support one-to-many communication. In this paper, we propose a zero-knowledge identification scheme using LDPC codes. In the proposed scheme, a zero-knowledge identification scheme that relies on the binary syndrome decoding problem is improved, and the computational cost of identification is reduced by using the sparse parity-check matrix of the LDPC codes. In addition, the security level, computational cost, and safety of the proposed scheme are discussed in detail.

  • Design Exploration of SHA-3 ASIP for IoT on a 32-bit RISC-V Processor

    Jinli RAO  Tianyong AO  Shu XU  Kui DAI  Xuecheng ZOU  

     
    PAPER-Cryptographic Techniques

      Publicized:
    2018/08/22
      Page(s):
    2698-2705

    Data integrity is a key metric of security for the Internet of Things (IoT); it refers to the accuracy and reliability of data during transmission, storage, and retrieval. Cryptographic hash functions are common means of data integrity verification. The newly announced SHA-3 is the next-generation hash function standard to replace the existing SHA-1 and SHA-2 standards for better security. However, its underlying Keccak algorithm is computation-intensive, which limits its deployment on IoT systems that are normally equipped with 32-bit resource-constrained embedded processors. This paper proposes two efficient SHA-3 ASIPs based on an open 32-bit RISC-V embedded processor named Z-scale. The first, an operation-oriented ASIP (OASIP), focuses on accelerating time-consuming operations with instruction set extensions to improve resource efficiency. The second, a datapath-oriented ASIP (DASIP), targets exploiting advanced data-level and instruction-level parallelism with extended auxiliary registers and a customized datapath to achieve high performance. Implementation results show that both proposed ASIPs can effectively accelerate the SHA-3 algorithm, with 14.6% and 26.9% code size reductions, 30% and 87% resource efficiency improvements, 71% and 262% better maximum throughput, and 40% and 288% better power efficiency than the reference design. This work makes SHA-3 integration practical for both low-cost and high-performance IoT systems.

  • A Scalable and Seamless Connection Migration Scheme for Moving Target Defense in Legacy Networks

    Taekeun PARK  Koohong KANG  Daesung MOON  

     
    LETTER-Network Security

      Publicized:
    2018/08/22
      Page(s):
    2706-2709

    In this paper, we propose a scalable and seamless connection migration scheme for moving target defense in legacy networks. The main idea is that a host is allowed to receive incoming packets with a destination address that is either its current IP address or its previous IP address for a period of time because the host does not physically move into another network. Experimental results show that our scheme outperforms the existing connection migration mechanism regardless of the number of active connections in the host.
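
    A minimal Python sketch of the receive-side rule, under the assumption of a fixed grace period after each IP mutation; names and timings are illustrative.

      import time

      class MigratingHost:
          GRACE_SECONDS = 30.0  # how long the previous address stays valid

          def __init__(self, ip):
              self.current_ip, self.previous_ip, self.switched_at = ip, None, 0.0

          def mutate(self, new_ip):
              self.previous_ip, self.current_ip = self.current_ip, new_ip
              self.switched_at = time.monotonic()

          def accepts(self, dst_ip):
              if dst_ip == self.current_ip:
                  return True
              in_grace = time.monotonic() - self.switched_at < self.GRACE_SECONDS
              return in_grace and dst_ip == self.previous_ip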

  • Regular Section
  • Efficient Reusable Collections

    Davud MOHAMMADPUR  Ali MAHJUR  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2018/08/20
      Page(s):
    2710-2719

    The efficiency and flexibility of collections have a significant impact on the overall performance of applications. Current approaches to implementing collections have two main drawbacks: (i) they limit the efficiency of collections, and (ii) they lack adequate support for collection composition. Thus, when the efficiency and flexibility of collections are important, programmers need to implement collections themselves, which leads to a loss of reusability. This article presents neoCollection, a novel approach to encapsulating collections. neoCollection has several distinguishing features: (i) it can be applied to data elements efficiently and flexibly; (ii) collections can be composed efficiently and flexibly, a feature that does not exist in current approaches. To demonstrate its effectiveness, neoCollection is implemented as an extension to Java and C++.

  • How are IF-Conditional Statements Fixed Through Peer Code Review?

    Yuki UEDA  Akinori IHARA  Takashi ISHIO  Toshiki HIRAO  Kenichi MATSUMOTO  

     
    PAPER-Software Engineering

      Publicized:
    2018/08/08
      Page(s):
    2720-2729

    Peer code review is key to ensuring the absence of software defects. To reduce review costs, software developers adopt code convention checking tools that automatically identify maintainability issues in source code. However, these tools do not always address the maintainability issues of a particular project. The goal of this study is to understand how code review fixes issues in conditional statements, which are among the most frequently changed elements in software development. We conduct an empirical study to understand if-statement changes through code review. Using review requests in the Qt and OpenStack projects, we analyze changes to if-conditional statements that are (1) requested to be reviewed and (2) revised through code review. We find that the most frequently changed symbols are "( )", ".", and "!". We also find project-specific fixing patterns for improving code readability by association rule mining. For example, the "!" operator is frequently replaced with a function call. These rules are useful for building a coding convention checker tailored to the projects.

  • Fostering Real-Time Software Analysis by Leveraging Heterogeneous and Autonomous Software Repositories

    Chaman WIJESIRIWARDANA  Prasad WIMALARATNE  

     
    PAPER-Software Engineering

      Publicized:
    2018/08/06
      Page(s):
    2730-2743

    Mining software repositories allows software practitioners to improve the quality of software systems and to support maintenance based on historical data. Such data is scattered across autonomous and heterogeneous information sources, such as version control, bug tracking, and build automation systems. Despite having many tools to track and measure the data originating from such repositories, software practitioners often suffer from a scarcity of the techniques necessary to dynamically leverage software repositories to fulfill their complex information needs. For example, answering a question such as "What is the number of commits between two successful builds?" requires tiresome manual inspection of multiple repositories. As a solution, this paper presents a conceptual framework and a proof-of-concept visual query interface to satisfy distinct software-quality-related information needs of software practitioners. The data originating from repositories is integrated and analyzed to perform systematic investigations, which helps to uncover hidden relationships between software quality and trends of software evolution. This approach has several significant benefits, such as the ability to perform real-time analyses, to combine data from various software repositories, and to generate queries dynamically. The framework was evaluated with 31 subjects using a series of questions categorized into three software evolution scenarios. The evaluation results clearly show that our framework surpasses state-of-the-art tools in terms of correctness, time, and usability.

  • Studying the Cost and Effectiveness of OSS Quality Assessment Models: An Experience Report of Fujitsu QNET

    Yasutaka KAMEI  Takahiro MATSUMOTO  Kazuhiro YAMASHITA  Naoyasu UBAYASHI  Takashi IWASAKI  Shuichi TAKAYAMA  

     
    PAPER-Software Engineering

      Publicized:
    2018/08/08
      Page(s):
    2744-2753

    Nowadays, open source software (OSS) systems are adopted by proprietary software projects. To reduce the risk of using problematic OSS systems (e.g., ones that cause system crashes), it is important for proprietary software projects to assess OSS systems in advance. Therefore, OSS quality assessment models are studied to obtain information regarding the quality of OSS systems. Although OSS quality assessment models have been partially validated using a small number of case studies, to the best of our knowledge, few studies empirically report how industrial projects actually use OSS quality assessment models in their own development processes. In this study, we empirically evaluate the cost and effectiveness of OSS quality assessment models at Fujitsu Kyushu Network Technologies Limited (Fujitsu QNET). To conduct the empirical study, we collect datasets from (a) 120 OSS projects that Fujitsu QNET's projects actually used and (b) 10 problematic OSS projects that caused major problems in those projects. We find that (1) it takes an average of 51 minutes and a median of 49 minutes to gather all assessment metrics per OSS project, and (2) there is a possibility that we can filter out problematic OSS systems by using a threshold derived from a pool of assessment metrics. Fujitsu QNET's developers agree that our results lead to improvements in Fujitsu QNET's OSS assessment process. We believe that our work contributes significantly to the empirical knowledge about applying OSS assessment techniques to industrial projects.

  • Lightweight Security Hardware Architecture Using DWT and AES Algorithms

    Ignacio ALGREDO-BADILLO  Francisco R. CASTILLO-SORIA  Kelsey A. RAMÍREZ-GUTIÉRREZ  Luis MORALES-ROSALES  Alejandro MEDINA-SANTIAGO  Claudia FEREGRINO-URIBE  

     
    PAPER-Information Network

      Publicized:
    2018/08/02
      Page(s):
    2754-2761

    With the great increase in digital communications, where technological society depends on applications, devices, and networks, security problems motivate research into providing algorithms and systems resistant to attacks, and the latter require services such as confidentiality, authentication, and integrity. This paper proposes the hardware implementation of a steganographic/cryptographic algorithm based on the DWT (Discrete Wavelet Transform) and the AES (Advanced Encryption Standard) cipher in CBC mode. The proposed scheme takes advantage of a double-security ciphertext, which makes it difficult to identify and decipher. The hardware architecture reports high efficiency (182.2 bps/slice and 85.2 bps/LUT) and low hardware resource consumption (867 slices and 1853 LUTs), and several parallel implementations can improve the throughput (0.162 Mbps) for processing large amounts of data.

  • Critical Nodes Identification of Power Grids Based on Network Efficiency

    WenJie KANG  PeiDong ZHU  JieXin ZHANG  JunYang ZHANG  

     
    PAPER-Information Network

      Publicized:
    2018/07/27
      Page(s):
    2762-2772

    Critical nodes identification is of great significance in protecting power grids. Network efficiency can be used as an evaluation index to identify critical nodes; it quantifies how efficiently a network exchanges information and transmits energy. Since a power grid is a heterogeneous network and can be decomposed into small functionally independent grids, the concept of the Giant Component does not apply to power grids. In this paper, we first model the power grid as a directed graph and define the Giant Efficiency sub-Graph (GEsG). The GEsG is the functionally independent unit of the network in which electric energy can be transmitted from a generation node (i.e., a power plant) to some demand nodes (i.e., transmission stations and distribution stations) via the shortest path. Secondly, we propose an algorithm that evaluates the importance of nodes by calculating their critical degree; the results can be used to identify critical nodes in heterogeneous networks. Thirdly, we define node efficiency loss to verify the accuracy of the critical nodes identification (CNI) algorithm and compare the results obtained when the GEsG and the Giant Component are separately used as assessment criteria for computing node efficiency loss. Experiments prove the accuracy and efficiency of our CNI algorithm and show that the GEsG can better reflect the heterogeneous characteristics and power transmission of power grids than the Giant Component. Our investigation leads to a counterintuitive finding: the most important critical nodes may not be generation nodes but some demand nodes.
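
    As a hedged illustration of the efficiency-loss idea, the following Python sketch (using networkx) scores a node by how much the average inverse shortest-path length from generation nodes to demand nodes drops when the node is removed; it mirrors the spirit of the critical-degree computation but is not the paper's exact GEsG construction.

      import networkx as nx

      def efficiency(G, sources, targets):
          # average inverse hop distance over generation-to-demand pairs
          total, pairs = 0.0, 0
          for s in sources:
              lengths = nx.single_source_shortest_path_length(G, s)
              for t in targets:
                  if t != s:
                      pairs += 1
                      if t in lengths and lengths[t] > 0:
                          total += 1.0 / lengths[t]
          return total / pairs if pairs else 0.0

      def efficiency_loss(G, node, sources, targets):
          base = efficiency(G, sources, targets)
          H = G.copy()
          H.remove_node(node)
          srcs = [s for s in sources if s != node]
          tgts = [t for t in targets if t != node]
          return base - efficiency(H, srcs, tgts)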

  • Accelerating a Lloyd-Type k-Means Clustering Algorithm with Summable Lower Bounds in a Lower-Dimensional Space

    Kazuo AOYAMA  Kazumi SAITO  Tetsuo IKEDA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/08/02
      Page(s):
    2773-2783

    This paper presents an efficient acceleration algorithm for Lloyd-type k-means clustering, which is suitable for large-scale and high-dimensional data sets with potentially numerous classes. The algorithm employs a novel projection-based filter (PRJ) to avoid unnecessary distance calculations, resulting in high-speed performance while producing the same results as a standard Lloyd's algorithm. The PRJ exploits a summable lower bound on a squared distance defined in a lower-dimensional space to which data points are projected. The summable lower bound can be tightened dynamically by the incremental addition of components in the lower-dimensional space within each iteration, whereas the lower bounds used in existing acceleration algorithms work only once as a fixed filter. Experimental results on large-scale and high-dimensional real image data sets demonstrate that the proposed algorithm works at high speed and with low memory consumption when large k values are given, compared with state-of-the-art algorithms.
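
    A minimal numpy sketch of the pruning idea, assuming the data have already been rotated (e.g., by PCA) so that leading components carry most of the variance: partial sums of squared component differences are valid lower bounds on the full squared distance, and the loop abandons a centroid as soon as its bound exceeds the best distance found so far.

      import numpy as np

      def assign_with_bounds(x, centroids, block=8):
          """Return the index of the nearest centroid, pruning with partial sums."""
          best, best_j = np.inf, -1
          for j, c in enumerate(centroids):
              d = x - c
              bound = 0.0
              for start in range(0, d.size, block):
                  bound += float(np.sum(d[start:start + block] ** 2))
                  if bound >= best:      # lower bound already too large: prune
                      break
              if bound < best:           # loop finished, so bound is the exact distance
                  best, best_j = bound, j
          return best_j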

  • Speeding up Extreme Multi-Label Classifier by Approximate Nearest Neighbor Search

    Yukihiro TAGAMI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/08/06
      Page(s):
    2784-2794

    Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called "AnnexML". At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. The prediction is performed efficiently by an approximate nearest neighbor search method that explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that AnnexML can significantly improve prediction accuracy, especially on data sets that have a larger label space. In addition, AnnexML improves the trade-off between prediction time and accuracy. At the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, a state-of-the-art embedding-based method.
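
    The prediction-time search can be pictured with a simplified Python sketch of greedy best-first exploration over a learned k-nearest neighbor graph; the beam width, entry-point selection, and distance function are simplifications relative to AnnexML.

      import heapq
      import numpy as np

      def greedy_knn_search(query, vectors, graph, entry, k=10, budget=200):
          """graph maps a node id to a list of neighbor node ids."""
          dist = lambda i: float(np.linalg.norm(vectors[i] - query))
          visited, frontier, best = {entry}, [(dist(entry), entry)], []
          while frontier and budget > 0:
              d, node = heapq.heappop(frontier)
              heapq.heappush(best, (-d, node))      # max-heap keeps the k nearest
              if len(best) > k:
                  heapq.heappop(best)
              for nb in graph[node]:
                  if nb not in visited:
                      visited.add(nb)
                      heapq.heappush(frontier, (dist(nb), nb))
                      budget -= 1
          return [n for _, n in sorted(best, reverse=True)]  # nearest first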

  • Food Intake Detection and Classification Using a Necklace-Type Piezoelectric Wearable Sensor System

    Ghulam HUSSAIN  Kamran JAVED  Jundong CHO  Juneho YI  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/08/09
      Page(s):
    2795-2807

    Automatic monitoring of food intake in free-living conditions is still an open problem. This paper presents a novel necklace-type wearable system embedded with a piezoelectric sensor that monitors ingestive behavior by detecting skin motion from the lower trachea. Detected events are incorporated for food classification. Unlike the previous state-of-the-art piezoelectric-sensor-based system that employs spectrogram features, we try to fully exploit time-domain signals for optimal features. Through numerous evaluations of the frame length, we found the best performance with a frame length of 70 samples (3.5 seconds). This demonstrates that the chewing sequence carries important information for food classification. Experimental results show the validity of the proposed algorithm for food intake detection and food classification in real-life scenarios. Our system yields an accuracy of 89.2% for food intake detection and 80.3% for food classification over 17 food categories. Additionally, our system is paired with a smartphone app, which helps users live healthily by providing real-time feedback about their ingested food episodes and types.

  • High-Performance Super-Resolution via Patch-Based Deep Neural Network for Real-Time Implementation

    Reo AOKI  Kousuke IMAMURA  Akihiro HIRANO  Yoshio MATSUDA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/08/20
      Page(s):
    2808-2817

    Recently, the super-resolution convolutional neural network (SRCNN) has become widely known as a state-of-the-art method for achieving single-image super-resolution. However, performance problems such as jaggy and ringing artifacts exist in SRCNN. Moreover, to realize a real-time upconverting system for high-resolution video streams such as 4K/8K 60 fps, problems such as processing delay and implementation cost remain. In the present paper, we propose high-performance super-resolution via a patch-based deep neural network (SR-PDNN) rather than a convolutional neural network (CNN). Despite its very simple end-to-end learning system, the SR-PDNN achieves higher performance than the conventional CNN-based approach. In addition, this system is suitable for ultra-low-delay video processing by hardware implementation using an application-specific integrated circuit (ASIC) or a field-programmable gate array (FPGA).

  • Strip-Switched Deployment Method to Optimize Single Failure Recovery for Erasure Coded Storage Systems

    Yingxun FU  Shilin WEN  Li MA  Jianyong DUAN  

     
    LETTER-Computer System

      Publicized:
    2018/07/25
      Page(s):
    2818-2822

    With the rapid growth in data scale and complexity, single-disk failure recovery has become very important for erasure-coded storage systems. In this paper, we propose a new strip-switched deployment method (SSDM), which exploits the fact that the strips of each stripe of an erasure code can be switched, and uses a simulated annealing algorithm to search for a proper strip deployment at the stack level that balances read accesses, in order to improve recovery performance. Analysis and experimental results show that SSDM can effectively improve single-failure recovery performance.
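
    A toy Python sketch of the search component, assuming the recovery-load model is available as a cost function: simulated annealing swaps two strips within a randomly chosen stripe and keeps changes that flatten the per-disk read load. The cooling schedule and cost() are illustrative placeholders.

      import math
      import random

      def anneal(layout, cost, T=1.0, cooling=0.995, steps=5000):
          """layout: one strip-to-disk permutation (a list of disk ids) per stripe."""
          cur, cur_c = layout, cost(layout)
          best, best_c = cur, cur_c
          for _ in range(steps):
              cand = [row[:] for row in cur]
              row = random.randrange(len(cand))
              i, j = random.sample(range(len(cand[row])), 2)
              cand[row][i], cand[row][j] = cand[row][j], cand[row][i]  # switch two strips
              c = cost(cand)
              if c < cur_c or random.random() < math.exp((cur_c - c) / T):
                  cur, cur_c = cand, c
                  if c < best_c:
                      best, best_c = cand, c
              T *= cooling
          return best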

  • On the Optimal Configuration of Grouping-Based Framed Slotted ALOHA

    Young-Beom KIM  

     
    LETTER-Information Network

      Publicized:
    2018/08/08
      Page(s):
    2823-2826

    In this letter, we consider several optimization problems associated with the configuration of grouping-based framed slotted ALOHA protocols. Closed-form formulas for determining the optimal values of system parameters such as the process termination time and confidence levels for partitioned groups are presented. Further, we address the maximum group size required for meaningful grouping gain and the effectiveness of the grouping technique in light of signaling overhead.
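
    For background, the classical throughput argument behind such optimizations can be worked in a few lines of Python; this is standard framed slotted ALOHA analysis, not the letter's closed-form formulas. With n contending tags and frame length L, a slot succeeds when exactly one tag picks it, and throughput peaks at L = n.

      def throughput(n: int, L: int) -> float:
          """Expected fraction of slots carrying exactly one tag reply."""
          return (n / L) * (1 - 1 / L) ** (n - 1)

      # With 64 contending tags, an integer search confirms the optimum L = n:
      best_L = max(range(1, 257), key=lambda L: throughput(64, L))  # 64, roughly 1/e slots used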

  • NEST: Towards Extreme Scale Computing Systems

    Yunfeng LU  Huaxi GU  Xiaoshan YU  Kun WANG  

     
    LETTER-Information Network

      Publicized:
    2018/08/20
      Page(s):
    2827-2830

    High-performance computing (HPC) has penetrated various research fields, yet the increase in computing power is limited by conventional electrical interconnections. The proposed architecture, NEST, exploits wavelength routing in arrayed waveguide grating routers (AWGRs) to achieve a scalable, low-latency, and high-throughput network. For intra-pod and inter-pod communication, the symmetrical topology of NEST reduces the network diameter, which improves latency performance. Moreover, the proposed architecture enables exponential growth of the network size. Simulation results demonstrate that NEST achieves a 36% latency improvement and a 30% throughput improvement over the dragonfly topology on average.

  • Energy-Efficient Connectivity Re-Establishment in UASNs with Dumb Nodes

    Qiuli CHEN  Ming HE  Fei DAI  Chaozheng ZHU  

     
    LETTER-Dependable Computing

      Publicized:
    2018/08/20
      Page(s):
    2831-2835

    Changes in temperature, salinity, and ocean currents in the underwater environment adversely affect the communication range of sensors and cause them to fail temporarily. These temporarily misbehaving sensors are called dumb nodes. In this paper, an energy-efficient connectivity re-establishment (EECR) scheme is proposed. It can reconstruct the topology of underwater acoustic sensor networks (UASNs) in the presence of dumb nodes. Due to the dynamics of the underwater environment, the appearance and recovery of dumb nodes also change dynamically, resulting in intermittent interruption of the network topology. Therefore, a multi-band transmission mode for dumb nodes is designed first. It ensures that the data currently stored in dumb nodes can be sent out in time. Subsequently, a connectivity re-establishment scheme for sub-nodes is designed. The topology reconstruction is implemented adaptively by changing the current transmission path. This scheme does not need to arrange sleep nodes in advance, so it can greatly reduce message expenses and energy consumption. Simulation results show that the proposed method achieves better network performance under the same conditions than the classical algorithms LETC and A1. Moreover, our method achieves a higher network throughput rate when the nodes' dumb behavior has a shorter duration.

  • Efficient Methods of Inactive Regions Padding for Segmented Sphere Projection (SSP) of 360 Video

    Yong-Uk YOON  Yong-Jo AHN  Donggyu SIM  Jae-Gon KIM  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2018/08/20
      Page(s):
    2836-2839

    In this letter, methods for padding the inactive regions of Segmented Sphere Projection (SSP) of 360 video are proposed. A 360 video is projected onto a 2D plane to be coded with one of various projection formats. Some projection formats, such as SSP, have inactive regions in the converted 2D plane. The inactive regions may cause visual artifacts as well as a decrease in coding efficiency due to discontinuous boundaries between active and inactive regions. In this letter, to improve coding efficiency and reduce visual artifacts, the inactive regions are padded using two types of adjacent pixels on either rectangular-face or circle-face boundaries. By padding the inactive regions with highly correlated adjacent pixels, the discontinuities between active and inactive regions are reduced. The experimental results show that, in terms of end-to-end Weighted to Spherically uniform PSNR (WS-PSNR), the proposed methods achieve a 0.3% BD-rate reduction over the existing padding method for SSP. In addition, the visual artifacts along the borders between discontinuous faces are noticeably reduced.
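
    One of the two padding ideas can be sketched with numpy: each inactive pixel is filled with the value of its nearest active (circle-face) pixel so that the boundary loses its hard discontinuity. The brute-force nearest-neighbor search below favors clarity over speed, and the mask convention is an assumption.

      import numpy as np

      def pad_inactive(face: np.ndarray, active: np.ndarray) -> np.ndarray:
          """face: HxW samples; active: HxW bool mask of valid (circle) pixels."""
          out = face.copy()
          ys, xs = np.nonzero(active)
          for y, x in zip(*np.nonzero(~active)):
              d2 = (ys - y) ** 2 + (xs - x) ** 2   # squared distance to every active pixel
              k = int(np.argmin(d2))
              out[y, x] = face[ys[k], xs[k]]
          return out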

  • Efficient Texture Creation Based on Random Patches in Database and Guided Filter

    Seok Bong YOO  Mikyong HAN  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2018/08/01
      Page(s):
    2840-2843

    As display resolution increases, an effective image upscaling technique is required for recent displays such as ultra-high-definition displays. Even though various image super-resolution algorithms have been developed for image upscaling, they still do not provide excellent performance on ultra-high-definition displays. This is because their texture creation capability is insufficient. Hence, this paper proposes an efficient texture creation algorithm for enhancing texture super-resolution performance. For texture creation, we build a database of random patches in off-line processing, and we then synthesize fine textures by employing a guided filter in on-line real-time processing based on the database. Experimental results show that the proposed texture creation algorithm provides sharper and finer textures compared with existing state-of-the-art algorithms.

  • Deep Convolutional Neural Networks for Manga Show-Through Cancellation

    Taku NAKAHARA  Kazunori URUMA  Tomohiro TAKAHASHI  Toshihiro FURUKAWA  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2018/08/02
      Page(s):
    2844-2848

    Recently, the demand for the digitization of manga has increased. For an old manga whose original artwork has been lost, digitization must be performed from the printed comics. However, scanning the comics causes the show-through phenomenon, since pages are printed on both sides. This letter proposes a manga show-through cancellation method based on a deep convolutional neural network (CNN). Numerical results show the effectiveness of the proposed method.

  • Adaptive Object Tracking with Complementary Models

    Peng GAO  Yipeng MA  Chao LI  Ke SONG  Yan ZHANG  Fei WANG  Liyi XIAO  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2018/08/06
      Page(s):
    2849-2854

    Most state-of-the-art discriminative tracking approaches are based on either template appearance models or statistical appearance models. Although template appearance models have shown excellent performance, they perform poorly when the target appearance changes rapidly. In contrast, statistical appearance models are insensitive to fast target state changes, but they yield inferior tracking results in challenging scenarios such as illumination variations and background clutter. In this paper, we propose an adaptive object tracking approach with complementary models based on template and statistical appearance models. Both of these models are unified via our novel combination strategy. In addition, we introduce an efficient update scheme to improve the performance of our approach. Experimental results demonstrate that our approach achieves superior performance at speeds that far exceed the frame-rate requirement on recent tracking benchmarks.

  • Accurate Scale Adaptive and Real-Time Visual Tracking with Correlation Filters

    Jiatian PI  Shaohua ZENG  Qing ZUO  Yan WEI  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2018/07/27
      Page(s):
    2855-2858

    Visual tracking has been studied for several decades but continues to draw significant attention because of its critical role in many applications. This letter handles the problem of the fixed template size in the Kernelized Correlation Filter (KCF) tracker with no significant decrease in speed. Extensive experiments are performed on the new OTB dataset.

  • High-Speed Spelling in Virtual Reality with Sequential Hybrid BCIs

    Zhaolin YAO  Xinyao MA  Yijun WANG  Xu ZHANG  Ming LIU  Weihua PEI  Hongda CHEN  

     
    LETTER-Biological Engineering

      Publicized:
    2018/07/25
      Page(s):
    2859-2862

    A new hybrid brain-computer interface (BCI), based on sequential control by eye tracking and steady-state visual evoked potentials (SSVEPs), is proposed for high-speed spelling in virtual reality (VR) with a 40-target virtual keyboard. During target selection, the gaze point is first detected by an eye-tracking accessory. A 4-target block is then selected for further target selection by a 4-class SSVEP BCI. The system can type at a speed of 1.25 characters/sec in a cue-guided target selection task. Online experiments with three subjects achieved an average information transfer rate (ITR) of 360.7 bits/min.
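
    For context, reported bits/min figures for spellers usually come from the standard ITR formula, sketched below in Python; the 360.7 bits/min above reflects the letter's measured accuracy and selection time, not this example's numbers.

      from math import log2

      def itr_bits_per_min(n: int, p: float, seconds_per_selection: float) -> float:
          """Standard ITR for n targets at accuracy p (0 < p < 1)."""
          bits = log2(n) + p * log2(p) + (1 - p) * log2((1 - p) / (n - 1))
          return bits * 60.0 / seconds_per_selection

      # e.g. a 40-target speller at 95% accuracy and 0.8 s per selection
      example = itr_bits_per_min(40, 0.95, 0.8)  # about 358 bits/min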