
Keyword Search Result

[Keyword] ACH(1072hit)

321-340hit(1072hit)

  • A Network-Type Brain Machine Interface to Support Activities of Daily Living Open Access

    Takayuki SUYAMA  

     
    INVITED PAPER

      Vol:
    E99-B No:9
      Page(s):
    1930-1937

    To help elderly and physically disabled people become self-reliant in daily life, such as at home or in a health clinic, we have developed a network-type brain machine interface (BMI) system called “network BMI” to control real-world actuators like wheelchairs, based on human intention measured by a portable brain measurement system. In this paper, we introduce the technologies for realizing the network BMI system to support activities of daily living.

  • The Novel Performance Evaluation Method of the Fingerprinting-Based Indoor Positioning

    Shutchon PREMCHAISAWATT  Nararat RUANGCHAIJATUPON  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2016/05/17
      Vol:
    E99-D No:8
      Page(s):
    2131-2139

    In this work, a novel evaluation parameter for fingerprinting-based positioning, called the punishment cost, is proposed. This parameter is calculated from a designer-specified punishment matrix together with the confusion matrix. Unlike the conventional accuracy metric, the punishment cost can describe how far a positioning result lies from the designated grid. The experiment is done with real measured data from weekdays and weekends, and the results are considered in terms of both accuracy and punishment cost. Three well-known machine learning algorithms, i.e. Decision Tree, k-Nearest Neighbors, and Artificial Neural Network, are evaluated for fingerprinting positioning. In the experimental environment, Decision Tree performs well on weekend data, whereas its performance degrades on weekday data. k-Nearest Neighbors yields favorable punishment costs even though its accuracy is lower than that of Artificial Neural Network, which has moderate accuracy but lower punishment costs. Therefore, criteria other than accuracy should be considered when selecting an algorithm for indoor positioning. In addition, the punishment cost facilitates the conversion of spot positioning to floor positioning without data modification.
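
The punishment-cost idea above can be sketched in a few lines (an illustrative reconstruction, not the authors' code; the matrix values below are invented):

```python
import numpy as np

def punishment_cost(confusion, punishment):
    """Average penalty per sample: each misclassification count in the
    confusion matrix is weighted by a designer-chosen punishment value.

    confusion[i][j]  = samples of true grid i predicted as grid j
    punishment[i][j] = penalty for predicting grid j when the truth is grid i
                       (0 on the diagonal, larger for grids farther apart)
    """
    confusion = np.asarray(confusion, dtype=float)
    punishment = np.asarray(punishment, dtype=float)
    return float((confusion * punishment).sum() / confusion.sum())

# Toy 3-grid example: penalties grow with grid distance.
confusion = [[8, 2, 0],
             [1, 9, 0],
             [0, 3, 7]]
punishment = [[0, 1, 2],
              [1, 0, 1],
              [2, 1, 0]]

cost = punishment_cost(confusion, punishment)                      # 0.2
accuracy = np.trace(np.asarray(confusion)) / np.asarray(confusion).sum()  # 0.8
```

Two classifiers with identical accuracy can thus receive very different punishment costs if one tends to miss by distant grids rather than adjacent ones.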

  • Improvement of Data Utilization Efficiency for Cache Memory by Compressing Frequent Bit Sequences

    Ryotaro KOBAYASHI  Ikumi KANEKO  Hajime SHIMADA  

     
    PAPER

      Vol:
    E99-C No:8
      Page(s):
    936-946

    In the most recent processor designs, memory access latency is shortened by adopting a memory hierarchy. In this configuration, the memory consists of a main memory, which comprises dynamic random-access memory (DRAM), and a cache memory, which consists of static random-access memory (SRAM). A cache memory, which is now used in increasingly large volumes, accounts for a vast proportion of the energy consumption of the overall processor. There are two ways to reduce the energy consumption of the cache memory: by decreasing the number of accesses, and by minimizing the energy consumed per access. In this study, we reduce the size of the L1 cache by compressing frequent bit sequences, thus cutting the energy consumed per access. A “frequent bit sequence” is a specific bit pattern that often appears in high-order bits of data retained in the cache memory. Our proposed mechanism, which is based on measurements using a software simulator, cuts energy consumption by 41.0% on average as compared with conventional mechanisms.
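
The compression idea can be illustrated with a toy encoder (the 2-bit tag format and the choice of frequent patterns here are invented for illustration; the paper's actual cache encoding differs):

```python
# Frequent high-halfword patterns -> short tags. All-zero and all-one high
# bits are common because small positive and negative integers are
# sign-extended to the full word width.
FREQUENT = {0x0000: 0b01, 0xFFFF: 0b10}

def compress_word(word32):
    """Return (tag, payload). Tag 0b00 means the word is stored uncompressed."""
    hi = (word32 >> 16) & 0xFFFF
    if hi in FREQUENT:
        return FREQUENT[hi], word32 & 0xFFFF  # keep only the low 16 bits
    return 0b00, word32

def decompress_word(tag, payload):
    """Rebuild the original 32-bit word from its tag and payload."""
    for hi, t in FREQUENT.items():
        if t == tag:
            return (hi << 16) | payload
    return payload  # uncompressed: payload is the full word
```

A word whose high-order bits match a frequent pattern occupies 16 bits plus a 2-bit tag instead of 32 bits, which is the kind of saving that lets the L1 data array shrink.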

  • A Graphical Game Theoretic Approach to Optimization of Energy Efficiency in Multihop Wireless Sensor Networks

    Hui JING  Hitoshi AIDA  

    This paper was deleted on February 16, 2024 because it was found to be an illegal submission (see details in the pdf file).
     
    PAPER-Network

      Vol:
    E99-B No:8
      Page(s):
    1789-1798

    Recently, multihop wireless sensor networks (WSNs) have been widely developed and applied to energy-efficient data collection from environments by establishing reliable transmission radio links and employing data aggregation algorithms, which can eliminate redundant transmissions and provide fused information. In this paper, energy efficiency, which reflects not only energy consumption but also the amount of data received by the base station, is presented as the performance metric for evaluating network utility in energy-efficient data collection. In order to optimize energy efficiency and improve network utilization, we first establish a graphical game theoretic model for energy efficiency in multihop WSNs, considering message length, practical energy consumption, and packet success probabilities. We then propose a graphical protocol for performance optimization based on the Nash equilibrium of the graphical game. The approach also includes a distributed protocol for generating optimum tree networks in practical WSNs. The experimental results show that energy-efficient multihop communication can be achieved by the optimum tree networks of the approach. A quantitative evaluation and comparisons with related work are presented for the metric with respect to network energy consumption and the amount of data received by the base station. The performance of our proposal is improved in all experiments. As an example, our proposal can achieve up to about 52% higher energy efficiency than the collection tree protocol (CTP). The corresponding tree structure is provided for the experiment.

  • PAC-k: A Parallel Aho-Corasick String Matching Approach on Graphic Processing Units Using Non-Overlapped Threads

    ThienLuan HO  Seung-Rohk OH  HyunJin KIM  

     
    PAPER-Network Management/Operation

      Vol:
    E99-B No:7
      Page(s):
    1523-1531

    A parallel Aho-Corasick (AC) approach, named PAC-k, is proposed for string matching in deep packet inspection (DPI). The proposed approach adopts graphic processing units (GPUs) to perform the string matching in parallel for high throughput. In parallel string matching, the boundary detection problem happens when a pattern is matched across chunks. The PAC-k approach solves the boundary detection problem because the number of characters to be scanned by a thread can reach the longest pattern length. An input string is divided into multiple sub-chunks with k characters. By adopting the new starting position in each sub-chunk for the failure transition, the required number of threads is reduced by a factor of k. Therefore, the overhead of terminating and reassigning threads is also decreased. In order to avoid the unnecessary overlapped scanning with multiple threads, a checking procedure is proposed that decides whether a new starting position is in the sub-chunk. In the experiments with target patterns from Snort and realistic input strings from DEFCON, throughputs are enhanced greatly compared to those of previous AC-based string matching approaches.
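
The boundary-detection idea can be simulated sequentially (a simplified sketch: naive substring search stands in for the Aho-Corasick automaton, and the loop over sub-chunks stands in for GPU threads):

```python
def find_matches(text, patterns, k):
    """PAC-k-style chunked scanning: each 'thread' owns a k-character
    sub-chunk but may scan up to k + max_len - 1 characters, so a match
    that crosses a chunk boundary is still found by the chunk it starts in."""
    max_len = max(len(p) for p in patterns)
    matches = set()
    for start in range(0, len(text), k):          # one thread per sub-chunk
        end = min(start + k + max_len - 1, len(text))
        window = text[start:end]
        for p in patterns:
            i = window.find(p)
            while i != -1:
                if i < k:  # match must begin inside this thread's own sub-chunk
                    matches.add((start + i, p))
                i = window.find(p, i + 1)
    return sorted(matches)
```

Because each thread only reports matches that begin in its own sub-chunk, no match is reported twice, which mirrors the paper's check that avoids overlapped scanning.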

  • Guide Automatic Vectorization by means of Machine Learning: A Case Study of Tensor Contraction Kernels

    Antoine TROUVÉ  Arnaldo J. CRUZ  Kazuaki J. MURAKAMI  Masaki ARAI  Tadashi NAKAHIRA  Eiji YAMANAKA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2016/03/22
      Vol:
    E99-D No:6
      Page(s):
    1585-1594

    Modern optimizing compilers tend to be conservative and often fail to vectorize programs that would have benefited from it. In this paper, we propose a way to predict the relevant command-line options of the compiler so that it chooses the most profitable vectorization strategy. Machine learning has proven to be a relevant approach for this matter: fed with features that describe the software to the compiler, a machine learning device is trained to predict an appropriate optimization strategy. The related work relies on the control and data flow graphs as software features. In this article, we consider tensor contraction programs, useful in various scientific simulations, especially chemistry. Depending on how they access the memory, different tensor contraction kernels may yield very different performance figures. However, they exhibit identical control and data flow graphs, making them completely out of reach of the related work. In this paper, we propose an original set of software features that capture the important properties of the tensor contraction kernels. Considering the Intel Merom processor architecture with the Intel Compiler, we model the problem as a classification problem and we solve it using a support vector machine. Our technique predicts the best suited vectorization options of the compiler with a cross-validation accuracy of 93.4%, leading to up to a 3-times speedup compared to the default behavior of the Intel Compiler. This article ends with an original qualitative discussion on the performance of software metrics by means of visualization. All our measurements are made available for the sake of reproducibility.
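
The prediction setup can be illustrated with a toy stand-in (a nearest-centroid classifier instead of the paper's support vector machine; the feature names and compiler-option labels below are invented for illustration):

```python
import numpy as np

def train_centroids(X, y):
    """Per-class mean feature vectors: a minimal stand-in for the SVM
    trained on software features of tensor contraction kernels."""
    X = np.asarray(X, dtype=float)
    return {label: X[[i for i, t in enumerate(y) if t == label]].mean(axis=0)
            for label in set(y)}

def predict(centroids, x):
    """Predict the compiler option whose class centroid is nearest."""
    x = np.asarray(x, dtype=float)
    return min(centroids, key=lambda label: np.linalg.norm(x - centroids[label]))

# Hypothetical kernel features: (reuse distance, stride regularity, loop depth)
X = [[1, 0.9, 2], [2, 0.8, 2], [8, 0.2, 3], [9, 0.1, 3]]
y = ["-vec", "-vec", "-no-vec", "-no-vec"]   # hypothetical option labels
model = train_centroids(X, y)
```

The key point of the paper is the feature set, not the learner: kernels with identical control and data flow graphs still get distinguishable memory-access features.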

  • Choreography Realization by Re-Constructible Decomposition of Acyclic Relations

    Toshiyuki MIYAMOTO  

     
    PAPER-Formal Methods

      Publicized:
    2016/05/02
      Vol:
    E99-D No:6
      Page(s):
    1420-1427

    For a service-oriented architecture-based system, the problem of synthesizing a concrete model (i.e., a behavioral model) for each peer configuring the system from an abstract specification — which is referred to as choreography — is known as the choreography realization problem. In this paper, we consider the condition for the behavioral model when choreography is given by an acyclic relation. A new notion called re-constructible decomposition of acyclic relations is introduced, and a necessary and sufficient condition for a decomposed relation to be re-constructible is shown. The condition provides lower and upper bounds of the acyclic relation for the behavioral model. Thus, the degree of freedom for behavioral models increases; developing algorithms for synthesizing an intelligible model for users becomes possible. It is also expected that the condition is applied to the case where choreography is given by a set of acyclic relations.

  • D-Paxos: Building Hierarchical Replicated State Machine for Cloud Environments

    Fagui LIU  Yingyi YANG  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2016/03/22
      Vol:
    E99-D No:6
      Page(s):
    1485-1501

    We present a hierarchical replicated state machine (H-RSM) and its corresponding consensus protocol D-Paxos for replication across multiple data centers in the cloud. Our H-RSM is based on the idea of parallel processing and aims to improve resource utilization. We detail D-Paxos and theoretically prove that D-Paxos implements an H-RSM. With batching and logical pipelining, D-Paxos efficiently utilizes the idle time caused by high-latency message transmission in a wide-area network and available bandwidth in a local-area network. Experiments show that D-Paxos provides higher throughput and better scalability than other Paxos variants for replication across multiple data centers. To predict the optimal batch sizes when D-Paxos reaches its maximum throughput, an analytical model is developed theoretically and validated experimentally.
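
The intuition behind batching over a high-latency WAN can be sketched with a toy throughput model (an illustrative model of our own, not the analytical model developed in the paper):

```python
def throughput(batch, wan_rtt, per_request_cost):
    """Requests committed per second when one consensus round carries
    `batch` requests: the WAN round trip is paid once per batch, so larger
    batches amortize it, while per-request processing still scales linearly."""
    return batch / (wan_rtt + batch * per_request_cost)
```

With a 100 ms WAN round trip and 1 ms of processing per request, batching 100 requests per round yields roughly 50x the throughput of one request per round, which is the idle time D-Paxos's batching and logical pipelining are designed to exploit.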

  • The Multi-Level SICC Algorithm Based Virtual Machine Dynamic Consolidation and FFD Algorithm

    Changming ZHAO  Jian LIU  Sani UMAR ABDULLAHI  

     
    PAPER-Network

      Vol:
    E99-B No:5
      Page(s):
    1110-1120

    The Virtual Machine Consolidation (VMC) algorithm is the core strategy of virtualization resource management software. In general, VMC efficiency dictates cloud datacenter efficiency to a great extent. However, current Virtual Machine (VM) consolidation strategies, including the Iterative Correlation Match Algorithm (ICMA), are not suitable for dynamic VM consolidation at the physical-server level in actual datacenter environments. In this paper, we propose two VM consolidation and placement strategies, called standard Segmentation Iteration Correlation Combination (standard SICC) and Multi-level Segmentation Iteration Correlation Combination (multi-level SICC). Standard SICC is suitable for single-size VM consolidation environments and is the cornerstone of multi-level SICC, which is suitable for multi-size VM consolidation environments. Numerical simulation results indicate that the number of remaining Consolidated VMs (CVMs) generated by standard SICC is 20% lower than the corresponding figure for ICMA in a single-level VM environment under the given initial conditions. The number of remaining CVMs for multi-level SICC is 14% lower than that of ICMA in a multi-level VM environment. Furthermore, multi-level SICC also uses 5% fewer physical servers than ICMA under the given initial conditions.

  • Application Performance Profiling in Android Dalvik Virtual Machines

    Hung-Cheng CHANG  Kuei-Chung CHANG  Ying-Dar LIN  Yuan-Cheng LAI  

     
    PAPER-Software System

      Publicized:
    2016/01/25
      Vol:
    E99-D No:5
      Page(s):
    1296-1303

    Most Android applications are written in Java and run on a Dalvik virtual machine. For smartphone vendors and users who wish to know the performance of an application on a particular smartphone but cannot obtain the source code, we propose a new technique, Dalvik Profiler for Applications (DPA), to profile an Android application on a Dalvik virtual machine without the support of source code. Within the Dalvik virtual machine, we determine the entry and exit locations of a method, log its execution time, and analyze the log to determine the performance of the application. Our experimental results show an error ratio of less than 5% relative to the baseline tool Traceview, which instruments source code. The results also reveal some interesting behaviors of applications and smartphones: some smartphones with higher hardware specifications perform 1.5 times worse than phones with lower specifications. DPA is now publicly available as an open source tool.

  • Learning Subspace Classification Using Subset Approximated Kernel Principal Component Analysis

    Yoshikazu WASHIZAWA  

     
    PAPER-Pattern Recognition

      Publicized:
    2016/01/25
      Vol:
    E99-D No:5
      Page(s):
    1353-1363

    We propose a kernel-based quadratic classification method based on kernel principal component analysis (KPCA). Subspace methods have been widely used for multiclass classification problems, and they have been extended by the kernel trick. However, there are large computational complexities for the subspace methods that use the kernel trick because the problems are defined in the space spanned by all of the training samples. To reduce the computational complexity of the subspace methods for multiclass classification problems, we extend Oja's averaged learning subspace method and apply a subset approximation of KPCA. We also propose an efficient method for selecting the basis vectors for this. Due to these extensions, for many problems, our classification method exhibits a higher classification accuracy with fewer basis vectors than does the support vector machine (SVM) or conventional subspace methods.
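
The underlying subspace-method idea can be sketched as follows (a minimal CLAFIC-style classifier using plain linear subspaces; the paper's method additionally applies the kernel trick, Oja's averaged learning, and subset approximation):

```python
import numpy as np

def class_basis(X, dim):
    """Orthonormal basis of the top-`dim` principal directions of one
    class's data matrix (classical subspace methods build the subspace
    from the raw samples, without mean-centering)."""
    _, _, vt = np.linalg.svd(np.asarray(X, dtype=float), full_matrices=False)
    return vt[:dim]

def classify(bases, x):
    """Assign x to the class whose subspace captures the most of its energy,
    i.e. the largest norm of the projection onto that class's basis."""
    x = np.asarray(x, dtype=float)
    return max(bases, key=lambda c: np.linalg.norm(bases[c] @ x))

# Two toy classes living near orthogonal directions.
bases = {
    "A": class_basis([[1, 0, 0], [2, 0.1, 0]], dim=1),
    "B": class_basis([[0, 1, 0], [0, 2, 0.1]], dim=1),
}
```

In the kernelized version, the same projection is computed in feature space, which is why restricting the basis to a well-chosen subset of training samples cuts the computational cost.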

  • A Survey of Caching Networks in Content Oriented Networks Open Access

    Miki YAMAMOTO  

     
    INVITED PAPER

      Vol:
    E99-B No:5
      Page(s):
    961-973

    Content oriented networks are expected to be one of the most promising approaches for resolving the design-concept gap between content oriented network services and the location oriented architecture of the current network infrastructure. Several content oriented network architectures have been proposed, but research on content oriented networks has only just started, and many technical issues remain to be resolved. Because of the content oriented feature, content data transmitted in a network can be reused to serve content requests from other users. Pervasive caching is one of the most important benefits brought by the content oriented network architecture, forming interconnected caching networks. Caching networks are among the most active research areas, and many research results have been published. This paper surveys recent research activities on caching networks in content oriented networks, focusing on important factors that affect caching network performance, i.e. content request routing, caching decisions, and cache replacement policy. The paper also discusses future directions for caching network research.

  • Content Retrieval Method in Cooperation with CDN and Breadcrumbs-Based In-Network Guidance Method

    Yutaro INABA  Yosuke TANIGAWA  Hideki TODE  

     
    PAPER

      Vol:
    E99-B No:5
      Page(s):
    992-1001

    These days, in addition to host-to-host communication, Information-Centric Network (ICN) has emerged to reflect current content-centric network usage, based on the fact that many users are now interested not in where contents are located but in the contents themselves. However, the current IP network must still remain, at least from a deployment perspective, as one of the near-future network architectures. This is because ICN faces various scalability and feasibility challenges, and host-to-host communication remains widespread in services such as remote login and VoIP. Therefore, the authors aim to realize ICN features on top of the conventional IP network to achieve a feasible and efficient architecture. We consider that, in such an architecture, only user edges keep some content caches within their computational and bandwidth limitations, and contents should also be replicated on dispersed replica servers to assure content distribution even when user caches are not found. To achieve this, in this paper, we propose to operate Content Delivery Network (CDN) and Breadcrumbs (BC) frameworks coordinately on the IP network. Both CDN and BC are important content-centric techniques. In a CDN, replica servers called surrogates are placed dispersedly all over the Internet. Although this provides users with contents from nearer surrogate servers, the surrogate servers bear a high workload in distributing contents to many users. In the proposed method, in cooperation with the BC method, which was proposed to implement ICN on the IP network, the surrogate server workload is drastically reduced without largely increasing the hop count for content delivery. Although our approach requires some additional functions, such as adding the BC architecture to routers and calculating and reporting the information required for coordinating the BC method with the CDN, the cost of these functions is not significant. Finally, we evaluate the proposed method, together with a carefully modeled CDN, through simulation.

  • Location-Aware Forwarding and Caching in CCN-Based Mobile Ad Hoc Networks

    Rana Asif REHMAN  Byung-Seo KIM  

     
    LETTER-Information Network

      Publicized:
    2016/02/17
      Vol:
    E99-D No:5
      Page(s):
    1388-1391

    Content centric network (CCN) is conceived as a good candidate for a futuristic Internet paradigm due to its simple and robust communication mechanism. Directly applying the CCN paradigm to wireless multihop mobile ad hoc networks raises various kinds of issues, such as packet flooding, data redundancy, packet collisions, and retransmissions, due to the broadcast nature of the wireless channel. To cope with these problems, in this study, we propose a novel location-aware forwarding and caching scheme for CCN-based mobile ad hoc networks. Extensive simulations are performed using the ndnSIM simulator. Experimental results show that the proposed scheme outperforms other schemes in terms of content retrieval time and the number of Interest retransmissions triggered in the network.

  • Using Reversed Sequences and Grapheme Generation Rules to Extend the Feasibility of a Phoneme Transition Network-Based Grapheme-to-Phoneme Conversion

    Seng KHEANG  Kouichi KATSURADA  Yurie IRIBE  Tsuneo NITTA  

     
    PAPER-Speech and Hearing

      Publicized:
    2016/01/06
      Vol:
    E99-D No:4
      Page(s):
    1182-1192

    The automatic transcription of out-of-vocabulary words into their corresponding phoneme strings has been widely adopted for speech synthesis and spoken-term detection systems. By combining various methods in order to meet the challenges of grapheme-to-phoneme (G2P) conversion, this paper proposes a phoneme transition network (PTN)-based architecture for G2P conversion. The proposed method first builds a confusion network using multiple phoneme-sequence hypotheses generated from several G2P methods. It then determines the best final-output phoneme from each block of phonemes in the generated network. Moreover, in order to extend the feasibility and improve the performance of the proposed PTN-based model, we introduce a novel use of right-to-left (reversed) grapheme-phoneme sequences along with grapheme-generation rules. Both techniques are helpful not only for minimizing the number of required methods or source models in the proposed architecture but also for increasing the number of phoneme-sequence hypotheses, without increasing the number of methods. Therefore, the techniques serve to minimize the risk from combining accurate and inaccurate methods that can readily decrease the performance of phoneme prediction. Evaluation results using various pronunciation dictionaries show that the proposed model, when trained using the reversed grapheme-phoneme sequences, often outperformed conventional left-to-right grapheme-phoneme sequences. In addition, the evaluation demonstrates that the proposed PTN-based method for G2P conversion is more accurate than all baseline approaches that were tested.
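
The block-wise selection over combined hypotheses can be sketched with simple per-position voting (a much-simplified stand-in: the paper's phoneme transition network also aligns hypotheses of unequal length and weights them; the example pronunciations are illustrative):

```python
from collections import Counter

def vote_per_block(hypotheses):
    """Combine equal-length phoneme-sequence hypotheses by majority vote at
    each position, picking one winning phoneme per block of the network."""
    assert len({len(h) for h in hypotheses}) == 1, "sketch assumes aligned hypotheses"
    return [Counter(block).most_common(1)[0][0] for block in zip(*hypotheses)]

# Three hypothetical G2P outputs for the word "phone" (target /f ow n/),
# e.g. from a left-to-right model, a reversed model, and a rule-based model.
hyps = [["f", "ow", "n"],
        ["f", "aa", "n"],
        ["p", "ow", "n"]]
```

Adding reversed-sequence models and grapheme-generation rules increases the number of hypotheses feeding such a vote without adding new base methods, which is exactly the leverage the paper exploits.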

  • Incorporation of Target Specific Knowledge for Sentiment Analysis on Microblogging

    Yongyos KAEWPITAKKUN  Kiyoaki SHIRAI  

     
    PAPER

      Publicized:
    2016/01/14
      Vol:
    E99-D No:4
      Page(s):
    959-968

    Sentiment analysis of microblogging has become an important classification task because a large amount of user-generated content is published on the Internet. In Twitter, it is common that a user expresses several sentiments in one tweet. Therefore, it is important to classify the polarity not of the whole tweet but of a specific target about which people express their opinions. Moreover, the performance of the machine learning approach greatly depends on the domain of the training data and it is very time-consuming to manually annotate a large set of tweets for a specific domain. In this paper, we propose a method for sentiment classification at the target level by incorporating the on-target sentiment features and user-aware features into the classifier trained automatically from the data created for the specific target. An add-on lexicon, extended target list, and competitor list are also constructed as knowledge sources for the sentiment analysis. None of the processes in the proposed framework require manual annotation. The results of our experiment show that our method is effective and improves on the performance of sentiment classification compared to the baselines.

  • Combining Human Action Sensing of Wheelchair Users and Machine Learning for Autonomous Accessibility Data Collection

    Yusuke IWASAWA  Ikuko EGUCHI YAIRI  Yutaka MATSUO  

     
    PAPER-Rehabilitation Engineering and Assistive Technology

      Publicized:
    2016/01/22
      Vol:
    E99-D No:4
      Page(s):
    1153-1161

    The recent increase in the use of intelligent devices such as smartphones has enhanced the relationship between daily human behavior sensing and useful applications in ubiquitous computing. This paper proposes a novel method inspired by personal sensing technologies for collecting and visualizing road accessibility at lower cost than traditional data collection methods. To evaluate the methodology, we recorded outdoor activities of nine wheelchair users for approximately one hour each by using an accelerometer on an iPod touch and a camcorder, gathered the supervised data from the video by hand, and estimated the wheelchair actions as a measure of street level accessibility in Tokyo. The system detected curb climbing, moving on tactile indicators, moving on slopes, and stopping, with F-scores of 0.63, 0.65, 0.50, and 0.91, respectively. In addition, we conducted experiments with an artificially limited number of training data to investigate the number of samples required to estimate the target.
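
The F-scores reported above are the standard harmonic mean of precision and recall, which can be computed directly from detection counts (the counts in the example are invented):

```python
def f_score(tp, fp, fn):
    """F1 score from true positives, false positives, and false negatives:
    the harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

# E.g. a detector that finds 9 of 10 real events and raises 1 false alarm.
example = f_score(tp=9, fp=1, fn=1)  # precision = recall = 0.9, so F1 = 0.9
```

The harmonic mean penalizes imbalance, so an action class that is detected often but with many false alarms (or vice versa) scores low even if one of the two rates is high.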

  • Automating URL Blacklist Generation with Similarity Search Approach

    Bo SUN  Mitsuaki AKIYAMA  Takeshi YAGI  Mitsuhiro HATADA  Tatsuya MORI  

     
    PAPER-Web security

      Publicized:
    2016/01/13
      Vol:
    E99-D No:4
      Page(s):
    873-882

    Modern web users may encounter a browser security threat called drive-by-download attacks when surfing on the Internet. Drive-by-download attacks make use of exploit code to take control of the user's web browser. Many web users do not take such underlying threats into account while clicking URLs. A URL blacklist is one of the practical approaches to thwarting browser-targeted attacks. However, a URL blacklist cannot cope with previously unseen malicious URLs. Therefore, to make a URL blacklist effective, it is crucial to keep the URLs updated. Given these observations, we propose a framework called automatic blacklist generator (AutoBLG) that automates the collection of new malicious URLs by starting from a given existing URL blacklist. The primary mechanism of AutoBLG is expanding the search space of web pages while reducing the number of URLs to be analyzed by applying several pre-filters, such as similarity search, to accelerate the process of generating blacklists. AutoBLG consists of three primary components: URL expansion, URL filtration, and URL verification. Through extensive analysis using a high-performance web client honeypot, we demonstrate that AutoBLG can successfully discover new and previously unknown drive-by-download URLs from the vast web space.
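
A similarity-search pre-filter of the kind described can be sketched with token-level Jaccard similarity (an illustrative stand-in of our own; AutoBLG's actual similarity search and filter pipeline differ, and the URLs below are made up):

```python
import re

def url_tokens(url):
    """Split a URL into lowercase tokens on common separators."""
    return set(re.split(r"[/.?=&:_-]+", url.lower())) - {""}

def similar_to_blacklist(candidate, blacklist, threshold=0.5):
    """Cheap pre-filter: pass a candidate URL on to expensive honeypot
    verification only if it is token-wise similar to a known-bad URL."""
    cand = url_tokens(candidate)
    return any(len(cand & url_tokens(b)) / len(cand | url_tokens(b)) >= threshold
               for b in blacklist)
```

Filtering like this cuts the number of URLs the honeypot must actually visit, which is the bottleneck the paper's pre-filters are designed to relieve.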

  • Demonstration of Bit-Level CWDM-Based Power Budget Extender Providing a High-Power Gain of 54dB in Symmetric-Rate 10G-EPON System

    Kwangok KIM  Hwanseok CHUNG  Younseon JANG  

     
    PAPER-Fiber-Optic Transmission for Communications

      Vol:
    E99-B No:4
      Page(s):
    867-874

    We propose a cost-effective bit-level coarse wavelength division multiplexing (CWDM)-based power budget extender (PBEx) that can provide a high link budget of 54dB in a symmetric-rate 10-Gbit/s Ethernet passive optical network (10/10G-EPON). The proposed CWDM-based PBEx comprises a 2R-based 10/10G-EPON central office terminal (COT) and a 3R-based 10/10G-EPON remote terminal (RT). It can apply several conventional CWDM technologies at the feeder fiber to reduce the amount of optical fiber required and increase the link capacity. It mainly conducts wavelength conversion and signal retiming, as well as an upstream burst-mode to continuous-mode conversion. We experimentally demonstrate the feasibility of the CWDM-based PBEx through packet-level testing using a pre-commercialized 10/10G-EPON system. We confirm that the proposed solution can support a 128-way split at a distance of over 40km per CWDM channel with an enlarged loss budget of 54dB. We also confirm loss-free service during a transmission of 10^10 packets in both the downstream and upstream directions.
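
As a rough sanity check of the 54dB figure, one can add up the dominant passive losses (the splitter formula is the standard ideal 1:N splitting loss; the fiber attenuation value is an assumed typical figure, not taken from the paper):

```python
import math

# Back-of-the-envelope loss budget for a 1:128 split over 40 km.
splitter_loss = 10 * math.log10(128)  # ideal 1:128 power split: ~21.1 dB
fiber_loss = 40 * 0.3                 # 40 km at an assumed ~0.3 dB/km: ~12 dB
total = splitter_loss + fiber_loss    # ~33 dB of dominant passive loss
margin = 54 - total                   # headroom for connectors, aging, penalties
```

Even with these conservative assumptions the 54dB budget leaves roughly 20dB of margin, which is why such a budget can cover both the deep split and the long feeder span.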

  • Improvement of Renamed Trace Cache through the Reduction of Dependent Path Length for High Energy Efficiency

    Ryota SHIOYA  Hideki ANDO  

     
    PAPER-Computer System

      Publicized:
    2015/12/04
      Vol:
    E99-D No:3
      Page(s):
    630-640

    Out-of-order superscalar processors rename register numbers to remove false dependencies between instructions. The renaming logic for register renaming is a high-cost module in a superscalar processor, and it consumes considerable energy. A renamed trace cache (RTC) was proposed for reducing the energy consumption of the renaming logic. An RTC caches and reuses renamed operands, and thus register renaming can be omitted on RTC hits. However, conventional RTCs suffer from several performance, energy consumption, and hardware overhead problems. We propose a semi-global renamed trace cache (SGRTC) that caches only renamed operands that are a short distance from producers outside traces, and thereby solves the problems of conventional RTCs. Evaluation results show that SGRTC achieves 64% lower energy consumption for renaming with a 0.2% performance overhead as compared to a conventional processor.
