
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.72

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4


Volume E94-D No.9 (Publication Date: 2011/09/01)

    Regular Section
  • A Prediction-Based Green Scheduler for Datacenters in Clouds

    Truong Vinh Truong DUY  Yukinori SATO  Yasushi INOGUCHI  

     
    PAPER-Fundamentals of Information Systems

      Page(s):
    1731-1741

    With energy shortages and global climate change at the forefront of our concerns these days, the energy consumption of datacenters has become a key issue. Obviously, a substantial reduction in energy consumption can be made by powering down servers when they are not in use. This paper aims at designing, implementing and evaluating a Green Scheduler for reducing the energy consumption of datacenters in Cloud computing platforms. It is composed of four algorithms: prediction, ON/OFF, task scheduling, and evaluation. The prediction algorithm employs a neural predictor to predict future load demand based on historical demand. According to the prediction, the ON/OFF algorithm dynamically adjusts server allocations to minimize the number of servers running, thus minimizing the energy use at the points of consumption to benefit all other levels. The task scheduling algorithm is responsible for directing request traffic away from powered-down servers and toward active servers. The performance is monitored by the evaluation algorithm to balance the system's adaptability against stability. For evaluation, we perform simulations with two load traces. The results show that the prediction mode, with a combination of dynamic training and dynamic provisioning of 20% additional servers, can reduce energy consumption by 49.8% with a drop rate of 0.02% on one load trace, and by 55.4% with a drop rate of 0.16% on the other. Our method is also shown to have a distinct advantage over its counterparts.
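As a rough illustration of the ON/OFF provisioning idea in the abstract above, a scheduler might size the active server pool from the predicted load plus the 20% headroom the paper reports as its best configuration. The sizing rule and names below are a hypothetical sketch, not the paper's actual algorithm (which is driven by a neural predictor):

```python
import math

def servers_needed(predicted_load, capacity_per_server, headroom=0.20):
    """Hypothetical sizing rule: enough servers for the predicted load,
    plus a safety margin of additional servers (20% here)."""
    base = predicted_load / capacity_per_server
    return max(1, math.ceil(base * (1 + headroom)))

def on_off_decision(active, predicted_load, capacity_per_server):
    """Return how many servers to power on (+) or off (-)."""
    return servers_needed(predicted_load, capacity_per_server) - active

# Example: 950 requests/s predicted, 100 requests/s per server
# -> ceil(9.5 * 1.2) = 12 servers; with 15 active, power 3 down.
```

The headroom term is what trades energy savings against the drop rate: more spare servers absorb prediction errors at the cost of extra power.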

  • Software-Based Parallel Cryptographic Solution with Massive-Parallel Memory-Embedded SIMD Matrix Architecture for Data-Storage Systems

    Takeshi KUMAKI  Tetsushi KOIDE  Hans Jurgen MATTAUSCH  Masaharu TAGAMI  Masakatsu ISHIZAKI  

     
    PAPER-Fundamentals of Information Systems

      Page(s):
    1742-1754

    This paper presents a software-based parallel cryptographic solution with a massive-parallel memory-embedded SIMD matrix (MTX) for data-storage systems. MTX can have up to 2,048 2-bit processing elements, which are connected by a flexible switching network, and supports 2-bit 2,048-way bit-serial and word-parallel operations with a single command. Furthermore, a next-generation SIMD matrix called MX-2 has been developed by expanding the processing-element capability of MTX from 2-bit to 4-bit processing. These SIMD matrix architectures are verified to be a better alternative for processing repeated arithmetic and logical operations in multimedia applications with low power consumption. Moreover, we have proposed combining Content Addressable Memory (CAM) technology with the massive-parallel memory-embedded SIMD matrix architecture to enable fast pipelined table-lookup coding. Since both arithmetic-logical operations and table-lookup coding execute extremely fast on these architectures, efficient execution of encryption and decryption algorithms can be realized. Evaluation results of the CAM-less and CAM-enhanced massive-parallel SIMD matrix processors for the example of the Advanced Encryption Standard (AES), a widely used cryptographic algorithm, show that a throughput of up to 2.19 Gbps becomes possible. This means that several standard data-storage transfer specifications, such as SD, CF (Compact Flash), USB (Universal Serial Bus) and SATA (Serial Advanced Technology Attachment), can be covered. Consequently, the massive-parallel SIMD matrix architecture is very suitable for protecting private information on several data-storage media. A further advantage of the software-based solution is the flexibility to update the implemented cryptographic algorithm to a safer future algorithm. The massive-parallel memory-embedded SIMD matrix architecture (MTX and MX-2) is therefore a promising solution for integrated realization of real-time cryptographic algorithms with low power dissipation and small Si-area consumption.
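Table-lookup coding is a natural fit for AES because its SubBytes step is a fixed 256-entry table. As a plain-software sketch (unrelated to the SIMD/CAM hardware above, shown only to illustrate why the operation reduces to lookups), the S-box can be generated once from its GF(2^8) definition and then applied by indexing:

```python
def gmul(a, b):
    """Multiply in GF(2^8) with the AES polynomial x^8 + x^4 + x^3 + x + 1."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a
        hi = a & 0x80
        a = (a << 1) & 0xFF
        if hi:
            a ^= 0x1B
        b >>= 1
    return p

def affine(x):
    """AES affine transformation over GF(2) with constant 0x63."""
    c, r = 0x63, 0
    for i in range(8):
        bit = ((x >> i) ^ (x >> ((i + 4) % 8)) ^ (x >> ((i + 5) % 8))
               ^ (x >> ((i + 6) % 8)) ^ (x >> ((i + 7) % 8)) ^ (c >> i)) & 1
        r |= bit << i
    return r

# Multiplicative inverse by brute force (0 maps to 0), then the affine map.
inv = [0] * 256
for x in range(1, 256):
    for y in range(1, 256):
        if gmul(x, y) == 1:
            inv[x] = y
            break
SBOX = [affine(inv[x]) for x in range(256)]

def sub_bytes(block):
    """Once the table exists, SubBytes is a pure lookup per byte."""
    return bytes(SBOX[b] for b in block)
```

Because every byte is transformed independently by the same table, the step parallelizes trivially, which is what the CAM-based pipelined lookup exploits.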

  • A Flexible and Accurate Reasoning Method for Danger-Aware Services Based on Context Similarity from Feature Point of View

    Junbo WANG  Zixue CHENG  Yongping CHEN  Lei JING  

     
    PAPER-Information Network

      Page(s):
    1755-1767

    Context awareness is viewed as one of the most important goals in the pervasive computing paradigm. As one kind of context awareness, danger awareness describes and detects dangerous situations around a user, and provides services such as warnings to protect the user from dangers. One important problem arising in danger-aware systems is that the description/definition of dangerous situations becomes more and more complex, since many factors have to be considered in such descriptions, which places a heavy burden on developers/users and thereby reduces the reliability of the system. It is necessary to develop a flexible reasoning method, which can ease the description/definition of dangerous situations by reasoning about dangers using limited specified/predefined contexts/rules, and increase system reliability by detecting unspecified dangerous situations. Some reasoning mechanisms based on context similarity have been proposed to address the above problems. However, the current mechanisms are not so accurate in some cases, since the similarity is computed from only basic knowledge, e.g., natural properties such as material and size, and category information; i.e., they may cause false-positive and false-negative problems. To solve the above problems, in this paper we propose a new flexible and accurate method from the feature point of view. First, a new ontology explicitly integrating basic knowledge and danger features is designed for computing similarity in danger-aware systems. Then a new method is proposed to compute object similarity from the points of view of both basic knowledge and danger features when calculating context similarity. The method is implemented in an indoor ubiquitous test bed and evaluated through experiments. The experimental results, based on comparison between the system's decisions and the judgments of human observers, show that the accuracy of the system can be effectively increased compared with the existing methods, and that the burden of defining dangerous situations can be decreased by evaluating the trade-off between the system's accuracy and that burden.
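A minimal sketch of the kind of combination the abstract describes: object similarity mixes a basic-knowledge term with a danger-feature term. The equal weighting and the Jaccard measure here are illustrative assumptions, not the paper's exact formulas:

```python
def jaccard(a, b):
    """Set similarity: |intersection| / |union| (1.0 for two empty sets)."""
    return len(a & b) / len(a | b) if a | b else 1.0

def object_similarity(basic_a, basic_b, feat_a, feat_b, w=0.5):
    """Combine basic-knowledge similarity (material, size, category, ...)
    with danger-feature similarity (sharp, hot, heavy, ...)."""
    return w * jaccard(basic_a, basic_b) + (1 - w) * jaccard(feat_a, feat_b)

# A knife and scissors share the 'sharp'/'cuts' danger features even though
# their basic properties differ, so they score higher than basic knowledge
# alone would suggest -- letting a rule written for one cover the other.
knife    = ({"metal", "small"}, {"sharp", "cuts"})
scissors = ({"metal", "small", "hinged"}, {"sharp", "cuts"})
```

This is the mechanism that lets a limited set of predefined rules generalize to unspecified dangerous situations.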

  • On the Security of BioEncoding Based Cancelable Biometrics

    Osama OUDA  Norimichi TSUMURA  Toshiya NAKAGUCHI  

     
    PAPER-Information Network

      Page(s):
    1768-1777

    Proving the security of cancelable biometrics and other template protection techniques is a key prerequisite for the widespread deployment of biometric technologies. BioEncoding is a cancelable biometrics scheme that has been proposed recently to protect biometric templates represented as binary strings, such as iris codes. Unlike other template protection schemes, BioEncoding does not require user-specific keys or tokens. Moreover, it satisfies the requirements of untraceable biometrics without sacrificing matching accuracy. However, the security of BioEncoding against smart attacks, such as correlation and optimization-based attacks, has to be proved before recommending it for practical deployment. In this paper, the security of BioEncoding, in terms of both non-invertibility and privacy protection, is analyzed. First, the resistance of protected templates generated using BioEncoding against brute-force search attacks is revisited rigorously. Then, vulnerabilities of BioEncoding with respect to correlation attacks and optimization-based attacks are identified and explained. Furthermore, an important modification to the BioEncoding algorithm is proposed to enhance its security against correlation attacks. The effect of integrating this modification into BioEncoding is validated and its impact on the matching accuracy is investigated empirically using the CASIA-IrisV3-Interval dataset. Experimental results confirm the efficacy of the proposed modification and show that it has no negative impact on the matching accuracy.

  • An Empirical Evaluation of an Unpacking Method Implemented with Dynamic Binary Instrumentation

    Hyung Chan KIM  Tatsunori ORII  Katsunari YOSHIOKA  Daisuke INOUE  Jungsuk SONG  Masashi ETO  Junji SHIKATA  Tsutomu MATSUMOTO  Koji NAKAO  

     
    PAPER-Information Network

      Page(s):
    1778-1791

    Many malicious programs we encounter these days are armed with their own custom encoding methods (i.e., they are packed) to deter static binary analysis. Thus, the initial step in dealing with unknown (possibly malicious) binary samples obtained from malware collecting systems ordinarily involves unpacking. In this paper, we focus on empirical experimental evaluations of a generic unpacking method built on a dynamic binary instrumentation (DBI) framework to determine the applicability of the DBI-based approach. First, we present yet another method of generic binary unpacking, extending a conventional unpacking heuristic. Our architecture manages shadow states to measure code exposure according to a simple byte state model. Among the available platforms, we built an unpacking implementation on the PIN DBI framework. Second, we describe evaluation experiments, conducted on wild malware collections, and discuss the workability as well as the limitations of our tool. Without prior knowledge of the 6,029 samples in the collections, we identified that around 64% of them were analyzable with our DBI-based generic unpacking tool configured to operate in fully automatic batch processing. After purging samples that were corrupted or unworkable on native systems, the figure rose to 72%.
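A minimal sketch of the byte-state idea described above (a simplified two-state model with hypothetical names, not the paper's full shadow-state design): instrumentation callbacks mark bytes written at run time, and execution of a previously written byte signals code exposure, i.e., unpacked code being reached:

```python
class ByteStateModel:
    """Track written-then-executed bytes, the classic generic-unpacking heuristic."""
    def __init__(self):
        self.written = set()   # addresses written since process start
        self.exposed = set()   # written bytes that were later executed

    def on_write(self, addr, size=1):
        """DBI write callback: mark each byte of the store as dirty."""
        self.written.update(range(addr, addr + size))

    def on_execute(self, addr):
        """DBI instruction callback: executing a dirty byte = code exposure."""
        if addr in self.written:
            self.exposed.add(addr)   # dynamically generated code reached
            return True              # candidate original-entry-point event
        return False

m = ByteStateModel()
m.on_write(0x401000, size=4)   # unpacker stub writes decoded bytes...
m.on_execute(0x401000)         # ...then jumps into them -> exposure detected
```

In a real tool the callbacks would be PIN analysis routines, and the exposure ratio over the image would drive the dump decision.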

  • Practical Orientation Field Estimation for Embedded Fingerprint Recognition Systems

    Yukun LIU  Dongju LI  Tsuyoshi ISSHIKI  Hiroaki KUNIEDA  

     
    PAPER-Pattern Recognition

      Page(s):
    1792-1799

    As a global feature of fingerprint patterns, the Orientation Field (OF) plays an important role in fingerprint recognition systems. This paper proposes a fast binary-pattern-based orientation estimation with nearest-neighbor search, which can greatly reduce the computational complexity. We also propose classified post-processing with an adaptive averaging strategy to increase the accuracy of the estimated OF. Experimental results confirm that the proposed method can satisfy the strict requirements of embedded applications better than conventional approaches.

  • Global Selection vs Local Ordering of Color SIFT Independent Components for Object/Scene Classification

    Dan-ni AI  Xian-hua HAN  Guifang DUAN  Xiang RUAN  Yen-wei CHEN  

     
    PAPER-Pattern Recognition

      Page(s):
    1800-1808

    This paper addresses the problem of ordering the color SIFT descriptors in independent component analysis for image classification. Component ordering is of great importance for image classification, since it is the foundation of feature selection. To select distinctive and compact independent components (ICs) of the color SIFT descriptors, we propose two ordering approaches based on local variation, named the localization-based IC ordering and the sparseness-based IC ordering. We evaluate the performance of the proposed methods, the conventional IC selection method (global-variation-based component selection) and the original color SIFT descriptors on object and scene databases, and obtain the following two main results. First, the proposed methods are able to obtain acceptable classification results in comparison with the original color SIFT descriptors. Second, the highest classification rate can be obtained by using the global selection method on the scene database, while the local ordering methods give the best performance on the object database.
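As one concrete way to score components for a sparseness-based ordering, Hoyer's sparseness measure is a common choice; the paper's exact criterion may differ, so treat this as an illustrative sketch only:

```python
import math

def hoyer_sparseness(v):
    """Hoyer sparseness from the L1/L2 norm ratio: 1.0 for a
    single-spike vector, 0.0 for a uniform one."""
    n = len(v)
    l1 = sum(abs(x) for x in v)
    l2 = math.sqrt(sum(x * x for x in v))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)

def order_components(components):
    """Rank independent components, sparsest responses first."""
    return sorted(range(len(components)),
                  key=lambda i: hoyer_sparseness(components[i]),
                  reverse=True)
```

Ordering by such a score lets a classifier keep only the first k components, which is the feature-selection role the abstract emphasizes.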

  • Image Categorization Using Scene-Context Scale Based on Random Forests

    Yousun KANG  Hiroshi NAGAHASHI  Akihiro SUGIMOTO  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    1809-1816

    Scene-context plays an important role in scene analysis and object recognition. Among various sources of scene-context, we focus on the scene-context scale, which means the effective scale of local context for classifying an image pixel in a scene. This paper presents random-forests-based image categorization using the scene-context scale. The proposed method uses random forests, which are ensembles of randomized decision trees. Since random forests are extremely fast in both training and testing, it is possible to perform classification, clustering and regression in real time. We train multi-scale texton forests, which efficiently provide both a hierarchical clustering into semantic textons and local classification at various scale levels. The scene-context scale can be estimated by the entropy of the leaf nodes in the multi-scale texton forests. For image categorization, we combine the classified category distributions at each scale with the estimated scene-context scale. We evaluate our method on the MSRC21 segmentation dataset and find that the use of the scene-context scale improves image categorization performance. Our results outperform the state of the art in image categorization accuracy.
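The scene-context scale estimate above rests on leaf-node entropy: a leaf whose class histogram is peaked (low entropy) indicates that local context at that scale already discriminates well. A minimal sketch of the entropy computation (Shannon entropy over a leaf's class counts; how the scales are then compared is the paper's contribution and is not reproduced here):

```python
import math

def leaf_entropy(class_counts):
    """Shannon entropy (bits) of a decision-tree leaf's class histogram:
    0 when one class dominates completely, log2(K) when uniform over K."""
    total = sum(class_counts)
    return -sum((c / total) * math.log2(c / total)
                for c in class_counts if c > 0)

# A pure leaf is maximally confident; a 50/50 leaf carries 1 bit of
# uncertainty, suggesting a different context scale may classify better.
```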

  • User-Calibration-Free Gaze Estimation Method Using a Binocular 3D Eye Model

    Takashi NAGAMATSU  Ryuichi SUGANO  Yukina IWAMOTO  Junzo KAMAHARA  Naoki TANAKA  

     
    PAPER-Multimedia Pattern Processing

      Page(s):
    1817-1829

    This paper presents a user-calibration-free method for estimating the point of gaze (POG). This method provides a fast and stable solution for realizing user-calibration-free gaze estimation more accurately than the conventional method that uses the optical axis of the eye as an approximation of the visual axis of the eye. The optical axis of the eye can be estimated by using two cameras and two light sources. This estimation is carried out by using a spherical model of the cornea. The point of intersection of the optical axis of the eye with the object that the user gazes at is termed the POA. On the basis of the assumption that the visual axes of both eyes intersect on the object, the POG is approximately estimated using the binocular 3D eye model as the midpoint of the line joining the POAs of both eyes. Based on this method, we have developed a prototype system that comprises a 19-inch display with two pairs of stereo cameras. We evaluated the system experimentally with 20 subjects who were at a distance of 600 mm from the display. The root-mean-square error (RMSE) of the POG measurement in the display screen coordinate system is 1.58°.
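Under the assumption stated above, that the two visual axes intersect on the gazed object, the final estimation step reduces to a midpoint computation. A sketch (coordinates and names are illustrative; each POA would come from intersecting an estimated optical axis with the display plane):

```python
def estimate_pog(poa_left, poa_right):
    """Approximate the point of gaze (POG) as the midpoint of the line
    joining the two points where each eye's optical axis meets the
    gazed object (the POAs)."""
    return tuple((a + b) / 2 for a, b in zip(poa_left, poa_right))

# Two POAs on the display plane (mm), one per eye; the binocular
# midpoint approximates the POG without any per-user calibration.
pog = estimate_pog((100.0, 200.0, 0.0), (120.0, 210.0, 0.0))
```

The averaging is what cancels much of the per-eye offset between optical and visual axes that the conventional single-eye approximation suffers from.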

  • Assessing the Impact of Node Churn to Random Walk-Based Overlay Construction

    Kyungbaek KIM  

     
    LETTER-Information Network

      Page(s):
    1830-1833

    Distributed systems aim to construct a random overlay graph for robustness, efficient information dissemination and load balancing. A random walk-based overlay construction is a promising alternative for generating an ideal random scale-free overlay in distributed systems. However, a simple random walk-based overlay construction can be affected by node churn. In particular, the number of edges increases and the degree distribution is skewed. This distortion can be exploited by malicious nodes. In this paper, we propose a modified random walk-based overlay construction supported by a logistic/trial-based decision function to compensate for the impact of node churn. Through event-driven simulations, we show that the decision function helps an overlay maintain the proper degree distribution, low diameter and low clustering coefficient with shorter random walks.
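The letter's logistic decision function is not specified in the abstract, but its general shape is a degree-dependent acceptance probability. A hypothetical sketch: a node accepts a new edge offered by a random walk with probability that falls off logistically as its degree exceeds a target, which damps the churn-driven degree skew described above:

```python
import math
import random

def accept_probability(degree, target_degree, steepness=1.0):
    """Logistic acceptance: ~1 well below the target degree, 0.5 at the
    target, ~0 well above it."""
    return 1.0 / (1.0 + math.exp(steepness * (degree - target_degree)))

def accept_edge(degree, target_degree, rng=random.random):
    """Probabilistically accept or reject an edge offered by a walk."""
    return rng() < accept_probability(degree, target_degree)
```

Nodes below the target degree almost always accept, so rejoining nodes fill in quickly, while over-connected nodes reject, keeping the distribution from skewing.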

  • A Cost-Effective and Robust Edge-Based Blur Metric Based on Careful Computation of Edge Slope

    Hanhoon PARK  Hideki MITSUMINE  Mahito FUJII  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    1834-1838

    This letter presents a novel edge-based blur metric that averages the ratios between the slopes and heights of edges. The metric computes the edge slopes more carefully, i.e., by averaging the edge gradients. The effectiveness of the proposed metric is confirmed by experiments with motion or Gaussian blurred real images and comparison with existing edge-based blur metrics.
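A minimal 1-D sketch of the slope/height idea above (real images need 2-D edge detection; this illustrates only the per-edge ratio, with the slope taken as the mean of the non-zero gradients along the edge profile, as the letter describes):

```python
def edge_width(profile):
    """Blur score of one edge profile: height / average non-zero gradient.
    ~1 pixel for a sharp step, larger as the edge spreads out."""
    grads = [abs(profile[i + 1] - profile[i]) for i in range(len(profile) - 1)]
    nonzero = [g for g in grads if g > 0]
    height = max(profile) - min(profile)
    return height / (sum(nonzero) / len(nonzero))

def blur_metric(edge_profiles):
    """Average the per-edge ratios over all detected edges."""
    widths = [edge_width(p) for p in edge_profiles]
    return sum(widths) / len(widths)

# A sharp step [0, 0, 255, 255] scores 1.0 (one-pixel transition);
# a ramp [0, 85, 170, 255] scores 3.0 (three-pixel transition).
```

Averaging the gradients, rather than taking the extreme ones, is what makes the slope estimate robust to noise along the edge.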

  • The Relationship between Aging and Photic Driving EEG Response

    Tadanori FUKAMI  Takamasa SHIMADA  Fumito ISHIKAWA  Bunnoshin ISHIKAWA  Yoichi SAITO  

     
    LETTER-Biological Engineering

      Page(s):
    1839-1842

    The present study examined the evaluation of aging using the photic driving response, a measure used in routine EEG examinations. We examined 60 normal participants without EEG abnormalities, classified into three age groups (20–29, 30–59 and over 60 years; 20 participants per group). EEG was measured at rest and during photic stimulation (PS). We calculated Z-scores as a measure of enhancement and suppression due to visual stimulation at rest and during PS and tested for between-group and intraindividual differences. We examined responses in the alpha frequency and harmonic frequency ranges separately, because alpha suppression can affect harmonic frequency responses that overlap the alpha frequency band. We found a negative correlation between Z-scores for harmonics and age by fitting the data to a linear function (CC: -0.740). In contrast, Z-scores and alpha frequency were positively correlated (CC: 0.590).
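The Z-scores above compare the response during stimulation against the resting baseline. A minimal sketch of the standardization (the exact frequency bands and normalization used in the study may differ):

```python
import statistics

def z_score(during_ps, baseline_rest):
    """Standardize a band-power value measured during photic stimulation
    against the resting-state distribution: positive = enhancement,
    negative = suppression."""
    mu = statistics.mean(baseline_rest)
    sd = statistics.stdev(baseline_rest)
    return (during_ps - mu) / sd

# e.g. a harmonic-band power of 3.0 against rest values [1.0, 2.0, 3.0]
# (mean 2.0, sd 1.0) gives z = 1.0: enhancement during stimulation.
```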