
Keyword Search Result

[Keyword] (42807 hits)

Results 7901-7920 of 42807 hits

  • Robust Face Alignment with Random Forest: Analysis of Initialization, Landmarks Regression, and Shape Regularization Methods

    Chun Fui LIEW  Takehisa YAIRI  

    PAPER-Image Recognition, Computer Vision
    Publicized: 2015/10/27  Vol: E99-D No:2  Page(s): 496-504

    The random forest regressor has recently been proposed as a local landmark estimator for the face alignment problem. It has been shown that the random forest regressor can achieve accurate, fast, and robust performance when coupled with a global face-shape regularizer. In this paper, we extend this approach and propose a new Local Forest Classification and Regression (LFCR) framework in order to handle face images with large yaw angles. Specifically, the LFCR has an additional classification step prior to the regression step. Our experimental results show that this additional classification step is useful in rejecting outliers prior to the regression step, thus improving the face alignment results. We also analyze each system component through detailed experiments. In addition to the selection of feature descriptors and several important tuning parameters of the random forest regressor, we examine different initialization and shape regularization processes. We compare our best outcomes to the state-of-the-art system and show that our method outperforms other parametric shape-fitting approaches.
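
    As a rough illustration of the classify-then-regress idea above, the sketch below (a hypothetical toy, not the authors' implementation) rejects unreliable local patches with a random forest classifier and regresses landmark offsets only on the surviving patches; it assumes scikit-learn is available, and the features, labels, and offsets are synthetic placeholders.

      # Minimal classify-then-regress sketch (illustrative only; synthetic data).
      import numpy as np
      from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor

      rng = np.random.default_rng(0)
      X = rng.normal(size=(1000, 32))          # local patch features (placeholder)
      is_inlier = rng.random(1000) > 0.3       # True = patch actually shows the landmark
      offset = rng.normal(size=(1000, 2))      # (dx, dy) offset to the true landmark

      clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, is_inlier)
      reg = RandomForestRegressor(n_estimators=100, random_state=0).fit(
          X[is_inlier], offset[is_inlier])     # regress only on inlier patches

      X_test = rng.normal(size=(10, 32))
      keep = clf.predict(X_test).astype(bool)  # classification step rejects outliers
      pred = reg.predict(X_test[keep])         # regression step runs on the survivors
      print(pred.shape)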

  • Feasibility of Interference Alignment for MIMO Two-Way Interference Channel

    Kiyeon KIM  Janghoon YANG  Dong Ku KIM  

    LETTER-Digital Signal Processing
    Vol: E99-A No:2  Page(s): 651-655

    The feasibility condition of interference alignment (IA) for multiple-input multiple-output two-way interference channel is studied in this paper. A necessary condition and a sufficient condition on the IA feasibility are established and the sum degrees of freedom (DoF) for a broad class of network topologies is characterized. The numerical results demonstrate that two-way operation with appropriate IA is able to achieve larger sum DoF than the conventional one-way operation.

  • Public-Key Encryption with Lazy Parties

    Kenji YASUNAGA  

    PAPER-Cryptography and Information Security
    Vol: E99-A No:2  Page(s): 590-600

    In a public-key encryption scheme, if a sender is not concerned about the security of a message and is unwilling to generate costly randomness, the security of the encrypted message can be compromised. In this work, we characterize such lazy parties, who are regarded as honest parties, but are unwilling to perform a costly task when they are not concerned about the security. Specifically, we consider a rather simple setting in which the costly task is to generate randomness used in algorithms, and parties can choose either perfect randomness or a fixed string. We model lazy parties as rational players who behave rationally to maximize their utilities, and define a security game between the parties and an adversary. Since a standard secure encryption scheme does not work in this setting, we provide constructions of secure encryption schemes in various settings.

  • Efficient Implementation and Empirical Evaluation of Compression by Substring Enumeration

    Sho KANAI  Hidetoshi YOKOO  Kosumo YAMAZAKI  Hideaki KANEYASU  

    PAPER-Information Theory
    Vol: E99-A No:2  Page(s): 601-611

    This paper gives an array-based practical encoder for the lossless data compression algorithm known as Compression by Substring Enumeration (CSE). The encoder makes use of the relation between CSE and the Burrows-Wheeler transform. We also modify the decoding algorithm to accommodate the proposed encoder. Thanks to the proposed encoder and decoder, we can apply CSE to data longer than tens of megabytes. We show compression results from experiments on such long data. The results empirically validate theoretical predictions on CSE.
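
    The relation to the Burrows-Wheeler transform that the encoder exploits can be illustrated with the textbook rotation-sorting construction of the BWT; this is a generic sketch of the transform itself, not the paper's array-based CSE encoder.

      # Textbook Burrows-Wheeler transform via sorted rotations
      # (illustrates the BWT itself, not the paper's array-based encoder).
      def bwt(text: str, sentinel: str = "\0") -> str:
          s = text + sentinel                                 # unique terminator
          rotations = sorted(s[i:] + s[:i] for i in range(len(s)))
          return "".join(rot[-1] for rot in rotations)        # last column

      print(repr(bwt("banana")))   # 'annb\x00aa' -- equal characters cluster together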

  • Insecurity of a Certificateless Aggregate Signature Scheme

    Han SHEN  Jianhua CHEN  Hao HU  Jian SHEN  

    LETTER-Cryptography and Information Security
    Vol: E99-A No:2  Page(s): 660-662

    Recently, H. Liu et al. [H. Liu, M. Liang, and H. Sun, A secure and efficient certificateless aggregate signature scheme, IEICE Transactions on Fundamentals of Electronics, Communications and Computer Sciences, vol.E97-A, no.4, pp.991-915, 2014] proposed a new certificateless aggregate signature (CLAS) scheme and demonstrated that it was provably secure in the random oracle model. However, in this letter, we show that their scheme cannot provide unforgeability, i.e., an adversary having neither the user's secret value nor his/her partial private key can forge a legal signature of any message.

  • vCanal: Paravirtual Socket Library towards Fast Networking in Virtualized Environment

    Dongwoo LEE  Changwoo MIN  Young IK EOM  

    PAPER-Software System
    Publicized: 2015/11/11  Vol: E99-D No:2  Page(s): 360-369

    Virtualization is no longer an emerging research area since the virtual processor and memory operate as efficiently as the physical ones. However, I/O performance is still restricted by the virtualization overhead caused by the costly and complex I/O virtualization mechanism, in particular by massive exits occurring on the guest-host switch and redundant processing of the I/O stacks at both guest and host. A para-virtual device driver may reduce the number of exits to the hypervisor, whereas the network stacks in the guest OS are still duplicated. Previous work proposed a socket-outsourcing technique that bypasses the redundant guest network stack by delivering the network request directly to the host. However, even by bypassing the redundant network paths in the guest OS, the obtained performance was still below 60% of the native device, since notifications of completion still depended on the hypervisor. In this paper, we propose vCanal, a novel network virtualization framework, to improve the performance of network access in the virtual machine toward that of the native machine. Implementation of vCanal reached 96% of the native TCP throughput, increasing the UDP latency by only 4% compared to the native latency.

  • Offline Selective Data Deduplication for Primary Storage Systems

    Sejin PARK  Chanik PARK  

    PAPER-Data Engineering, Web Information Systems
    Publicized: 2015/10/26  Vol: E99-D No:2  Page(s): 370-382

    Data deduplication is a technology that eliminates redundant data to save storage space. Most previous studies on data deduplication target backup storage, where the deduplication ratio and throughput are important. However, data deduplication on primary storage has recently been receiving attention; in this case, I/O latency should be considered equally with the deduplication ratio. Unfortunately, data deduplication causes high sequential-read-latency problems. When a file is created, the file system allocates physically contiguous blocks to support low sequential-read latency. However, the data deduplication process rearranges the block mapping information to eliminate duplicate blocks. Because of this rearrangement, the physical sequentiality of blocks in a file is broken. This makes a sequential-read request slower because it operates like a random-read operation. In this paper, we propose a selective data deduplication scheme for primary storage systems. A selective scheme can achieve a high deduplication ratio and a low I/O latency by applying different data-chunking methods to the files, according to their file access characteristics. In the proposed system, file accesses are characterized by the recent access time and the access frequency of each file. No chunking is applied to update-intensive files, since deduplicating them brings little benefit. For sequential-read-intensive files, we apply big chunking to preserve their sequentiality on the media. For random-read-intensive files, small chunking is used to increase the deduplication ratio. Experimental evaluation showed that the proposed method achieves a maximum of 86% of the ideal deduplication ratio and 97% of the sequential-read performance of a native file system.
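
    The selective-chunking idea can be sketched as a simple per-file policy function; the thresholds and chunk sizes below are illustrative assumptions, not values from the paper.

      # Illustrative selective-chunking policy for primary-storage deduplication.
      # Thresholds and chunk sizes are placeholders, not the paper's parameters.
      import time

      def choose_chunking(last_access: float, write_freq: float, seq_read_ratio: float):
          """Return a chunking decision for one file, based on its access profile."""
          recently_written = (time.time() - last_access) < 3600 and write_freq > 10
          if recently_written:
              return None            # update-intensive: skip deduplication entirely
          if seq_read_ratio > 0.8:
              return 1024 * 1024     # sequential-read-intensive: big chunks keep layout
          return 4 * 1024            # random-read-intensive: small chunks raise dedup ratio

      print(choose_chunking(time.time() - 10, write_freq=50, seq_read_ratio=0.1))      # None
      print(choose_chunking(time.time() - 86400, write_freq=0.1, seq_read_ratio=0.95)) # 1048576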

  • Threshold-Based Distributed Continuous Top-k Query Processing for Minimizing Communication Overhead

    Kamalas UDOMLAMLERT  Takahiro HARA  Shojiro NISHIO  

    PAPER-Data Engineering, Web Information Systems
    Publicized: 2015/11/11  Vol: E99-D No:2  Page(s): 383-396

    In this paper, we propose a communication-efficient top-k continuous query processing method on distributed local nodes where data are horizontally partitioned. A designated coordinator server takes the role of issuing queries from users to local nodes and delivering the results to users. The final results are requested via top-k subscriptions, which let local nodes know which data and updates need to be returned to users. Our proposed method makes use of previously posed queries that are still active to identify a small set of needed top-k subscriptions. In addition, with the pre-indexed skylines of the nodes, the number of local nodes to be subscribed can be significantly reduced. As a result, only a small number of subscriptions are sent to a small number of local nodes, resulting in lower communication overhead. Furthermore, to handle dynamic data updates, we also propose a method that prevents nodes from reporting needless updates, as well as maintenance procedures to preserve consistency. The results of experiments that measure the volume of transferred data show that our proposed method significantly outperforms previously proposed methods.

  • Image Arbitrary-Ratio Down- and Up-Sampling Scheme Exploiting DCT Low Frequency Components and Sparsity in High Frequency Components

    Meng ZHANG  Tinghuan CHEN  Xuchao SHI  Peng CAO  

    PAPER-Image Processing and Video Processing
    Publicized: 2015/10/30  Vol: E99-D No:2  Page(s): 475-487

    The development of image acquisition and display technologies has provided the basis for the popularization of high-resolution images. On the other hand, the available bandwidth is not always sufficient to stream such high-resolution images. Down- and up-sampling, which first reduces the data volume of an image and later restores it to high resolution, is a solution for the transmission of high-resolution images. In this paper, motivated by the observation that the high-frequency DCT components are sparse in the spatial domain, we propose a scheme that combines the Discrete Cosine Transform (DCT) and Compressed Sensing (CS) to achieve arbitrary-ratio down-sampling. Our proposed scheme makes use of two properties: first, the energy of an image concentrates in the low-frequency DCT components; second, the high-frequency DCT components are sparse in the spatial domain. The scheme is able to preserve most of the information and avoids blindly estimating the high-frequency components. Experimental results show that the proposed down- and up-sampling scheme outperforms some state-of-the-art schemes in terms of peak signal-to-noise ratio (PSNR), structural similarity index measurement (SSIM), and processing time.
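
    The low-frequency half of the idea can be sketched in a few lines: keep only the top-left (low-frequency) block of the 2-D DCT and invert it at the smaller size. The CS-based recovery of the spatially sparse high-frequency components is omitted; the sketch assumes NumPy and SciPy are available.

      # Down-sampling by retaining low-frequency 2-D DCT coefficients
      # (low-frequency path only; the CS step for the sparse high-frequency
      #  components is not shown). Requires NumPy and SciPy.
      import numpy as np
      from scipy.fft import dctn, idctn

      def dct_downsample(img: np.ndarray, out_h: int, out_w: int) -> np.ndarray:
          h, w = img.shape
          coef = dctn(img, norm="ortho")            # full-size 2-D DCT
          low = coef[:out_h, :out_w]                # keep the low-frequency block
          scale = np.sqrt((out_h * out_w) / (h * w))  # preserve mean intensity
          return idctn(low * scale, norm="ortho")   # smaller spatial image

      img = np.random.default_rng(0).random((64, 64))
      small = dct_downsample(img, 40, 40)           # arbitrary (non-integer) ratio
      print(small.shape)                            # (40, 40)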

  • A Method for Extraction of Future Reference Sentences Based on Semantic Role Labeling

    Yoko NAKAJIMA  Michal PTASZYNSKI  Hirotoshi HONMA  Fumito MASUI  

    PAPER-Natural Language Processing
    Publicized: 2015/11/18  Vol: E99-D No:2  Page(s): 514-524

    In everyday life, people use past events and their own knowledge to predict the probable unfolding of events. To obtain the knowledge necessary for such predictions, newspapers and the Internet provide a general source of information. Newspapers contain various expressions describing not only past events but also current and future events, as well as opinions. In our research we focused on automatically obtaining sentences that make reference to the future. Such sentences can contain expressions that not only explicitly refer to future events, but could also refer to past or current events. For example, if people read a news article that states “In the near future, there will be an upward trend in the price of gasoline,” they may be likely to buy gasoline now. However, if the article says “The cost of gasoline has just risen 10 yen per liter,” people will not rush to buy gasoline, because they accept this as reality and may expect the cost to decrease in the future. In this study we first investigate future reference sentences in newspapers and Web news. Next, we propose a method for the automatic extraction of such sentences by using semantic role labels, without relying on typical cues such as temporal expressions. In a series of experiments, we extract semantic role patterns from future reference sentences and examine the validity of the extracted patterns in the classification of future reference sentences.
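
    A minimal sketch of the pattern-matching step might look as follows; the semantic-role frames are assumed to come from an external SRL tool, and the patterns shown are invented for illustration, not the ones learned in the paper.

      # Sketch of classifying sentences by matching semantic-role patterns.
      # Frames are assumed to come from an external SRL tool; the patterns
      # below are hypothetical examples, not those extracted in the paper.
      FUTURE_PATTERNS = [
          {("ARGM-TMP", "future"), ("V", "rise")},     # hypothetical pattern
          {("V", "will_be"), ("ARG1", "trend")},       # hypothetical pattern
      ]

      def roles(frame):
          """Represent an SRL frame as a set of (role, token) pairs."""
          return {(role, tok) for role, tok in frame}

      def is_future_reference(frame) -> bool:
          f = roles(frame)
          return any(pattern <= f for pattern in FUTURE_PATTERNS)  # subset match

      frame = [("ARGM-TMP", "future"), ("ARG1", "price"), ("V", "rise")]
      print(is_future_reference(frame))   # True: matches the first pattern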

  • An Optimization Strategy for CFDMiner: An Algorithm of Discovering Constant Conditional Functional Dependencies

    Jinling ZHOU  Xingchun DIAO  Jianjun CAO  Zhisong PAN  

    LETTER-Artificial Intelligence, Data Mining
    Publicized: 2015/11/06  Vol: E99-D No:2  Page(s): 537-540

    Compared to the traditional functional dependency (FD), the extended conditional functional dependency (CFD) has shown greater potential for detecting and repairing inconsistent data. CFDMiner is a widely used algorithm for mining constant-CFDs, but its search space is large and there is still room for efficiency improvement. In this paper, an efficient pruning strategy is proposed to optimize the algorithm by reducing the search space. Both theoretical analysis and experiments show that the optimized algorithm produces results consistent with those of the original CFDMiner.

  • Performance Analysis of All-Optical Wavelength-Shift-Free Format Conversion from QPSK to Two BPSK Tributaries Using FWM and Interference

    Rina ANDO  Hiroki KISHIKAWA  Nobuo GOTO  Shin-ichiro YANAGIYA  Lawrence R. CHEN  

    PAPER
    Vol: E99-C No:2  Page(s): 219-226

    Conversion between multi-level modulation formats is one of the key processing functions for flexible networking aimed at high spectral efficiency (SE) in optical fiber transmission. The authors previously proposed an all-optical format conversion system from binary phase-shift keying (BPSK) to quadrature PSK (QPSK) and reported an experimental demonstration. In this paper, we consider the reverse conversion, that is, from QPSK to BPSK. The proposed system consists of a highly nonlinear fiber used to generate a complex conjugate signal, and a 3-dB directional coupler used to produce converted signals by interfering the incident signal with the complex conjugate signal. The incident QPSK stream is converted into two BPSK tributaries without any loss of transmitted data. We evaluate system performance, including bit-error rate and optical signal-to-noise ratio penalty, by numerical simulation.

  • 0.5-V Sub-ns Open-BL SRAM Array with Mid-Point-Sensing Multi-Power-Supply 5T Cell

    Khaja Ahmad SHAIK  Kiyoo ITOH  Amara AMARA  

    INVITED PAPER
    Vol: E99-A No:2  Page(s): 523-530

    To achieve low-voltage, low-power SRAMs, two proposals are demonstrated. One is a multi-power-supply five-transistor cell (5T cell), assisted by a boosted word-line voltage and mid-point sensing enabled by precharging the bit-lines to VDD/2. The cell makes it possible to reduce VDD to 0.5V or less for a given speed, or to enhance speed for a given VDD. The other is partial activation of a compact multi-divided open-bit-line array for low power. Layout and post-layout simulation with a 28-nm fully-depleted planar-logic SOI MOSFET reveal that a 0.5-V 5T-cell 4-kb array in a 128-kb SRAM core using the proposals achieves a 2-3x faster cycle time and 11x lower power than the counterpart 6T-cell array, suggesting the possibility of a 730-ps cycle time at 0.5V.

  • Contour-Based Binary Image Orientation Detection by Orientation Context and Roulette Distance

    Jian ZHOU  Takafumi MATSUMARU  

    PAPER-Image
    Vol: E99-A No:2  Page(s): 621-633

    This paper proposes a novel technique to detect the orientation of an image from its contour, which may be noised to varying degrees. Most image orientation detection methods target landscape images or images of a single object, whose contours are assumed to be immune to noise. This paper focuses on contours that are noised after image segmentation. A polar orientation descriptor, Orientation Context, is used as a feature to describe the coarse distribution of the contour points. This descriptor is verified by theory and experiment to be invariant to translation, isotropic scaling, and rotation. The relative orientation is determined by the minimum distance, the Roulette Distance, between the descriptor of a template image and that of a test image. The proposed method is capable of detecting directions on the interval from 0 to 359 degrees, which is wider than the previous contour-based method (Distance Phase [1], from 0 to 179 degrees). Moreover, the experimental results show that not only does the normal binary image (Noise-0, Accuracy-1: 84.8%) (defined later) achieve more accurate orientation, but the binary image with slight contour noise (Noise-1, Accuracy-1: 73.5%) also obtains more precise orientation compared to Distance Phase (Noise-0, Accuracy-1: 56.3%; Noise-1, Accuracy-1: 27.5%). Although the proposed method (O(op2)) takes more time to detect the orientation than Distance Phase (O(st)), it can run in real time, including preprocessing, at a frame rate of 30.
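
    One plausible reading of the descriptor and distance, offered purely as an illustrative assumption, is an angular histogram of contour points around the centroid matched over all circular shifts; the sketch below implements that reading with NumPy and is not the paper's exact formulation.

      # Sketch of a polar contour descriptor and a circular-shift matching distance
      # (one plausible reading of "Orientation Context" / "Roulette Distance").
      import numpy as np

      def orientation_context(contour, bins=36):
          c = contour - contour.mean(axis=0)            # centering: translation invariance
          ang = np.arctan2(c[:, 1], c[:, 0])            # angles are scale-invariant
          hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi))
          return hist / hist.sum()                      # normalize for point count

      def roulette_distance(h_ref, h_test):
          # Minimum L1 distance over circular shifts of the reference histogram;
          # the best shift gives the relative rotation in degrees.
          dists = [np.abs(np.roll(h_ref, k) - h_test).sum() for k in range(len(h_ref))]
          k = int(np.argmin(dists))
          return dists[k], k * 360 // len(h_ref)

      rng = np.random.default_rng(0)
      template = rng.random((200, 2))                   # stand-in "contour" points
      theta = np.deg2rad(90)
      R = np.array([[np.cos(theta), -np.sin(theta)],
                    [np.sin(theta),  np.cos(theta)]])
      test = template @ R.T                             # same shape rotated by 90 degrees
      d, angle = roulette_distance(orientation_context(template),
                                   orientation_context(test))
      print(d, angle)                                   # distance ~0, recovered angle 90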

  • A Two-Way Relay Scheme for Multi-User MIMO Systems with Partial CSIT

    Sai JIN  Deyou ZHANG  Li PING  

    LETTER-Communication Theory and Signals
    Vol: E99-A No:2  Page(s): 678-681

    The acquisition of accurate channel state information at the transmitter (CSIT) is a difficult task in multiple-input multiple-output (MIMO) systems. Partial CSIT is a more realistic assumption, especially for high-mobility mobile users (MUs) whose channels vary very rapidly. In this letter, we propose a MIMO two-way relaying (MTWR) scheme in which the communication between the base station (BS) and a high-mobility MU is assisted by other low-mobility MUs serving as relays. This produces a beamforming effect that can significantly improve the performance of the high-mobility MU, especially for a large number of MUs and unreliable CSIT.

  • Learning a Similarity Constrained Discriminative Kernel Dictionary from Concatenated Low-Rank Features for Action Recognition

    Shijian HUANG  Junyong YE  Tongqing WANG  Li JIANG  Changyuan XING  Yang LI  

    LETTER-Pattern Recognition
    Publicized: 2015/11/16  Vol: E99-D No:2  Page(s): 541-544

    Traditional low-rank features lose the temporal information of an action sequence. To retain the temporal information, we split an action video into multiple subsequences and concatenate the low-rank features of the subsequences in time order. We then recognize actions by learning a novel dictionary model from the concatenated low-rank features. However, traditional dictionary learning models usually neglect the similarity among the coding coefficients and perform poorly on non-linearly separable data. To overcome these shortcomings, we present a novel similarity-constrained discriminative kernel dictionary learning method for action recognition. The effectiveness of the proposed method is verified on three benchmarks, and the experimental results show the promise of our method for action recognition.
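
    A toy stand-in for the feature construction step, under the assumption that a rank-r SVD of each subsequence's frame matrix serves as its low-rank feature, might look like this; it is illustrative only and not the authors' feature extractor.

      # Sketch: split a clip into subsequences, take a rank-r representation of each
      # (here the top-r right singular vectors), and concatenate in time order.
      import numpy as np

      def lowrank_feature(frames: np.ndarray, r: int = 3) -> np.ndarray:
          """frames: (n_frames, dim). Return the top-r right singular vectors, flattened."""
          _, _, vt = np.linalg.svd(frames, full_matrices=False)
          return vt[:r].ravel()

      def concatenated_feature(video: np.ndarray, n_subseq: int = 4, r: int = 3):
          parts = np.array_split(video, n_subseq, axis=0)     # keep temporal order
          return np.concatenate([lowrank_feature(p, r) for p in parts])

      video = np.random.default_rng(0).random((120, 50))      # 120 frames, 50-D descriptors
      feat = concatenated_feature(video)
      print(feat.shape)                                        # (600,) = 4 subsequences x 3 x 50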

  • An Adaptive Relay Transmission Scheme for Reliable Data Forwarding in Wireless Body Area Networks

    Xuan Sam NGUYEN  Daehee KIM  Sunshin AN  

    PAPER-Information Network
    Publicized: 2015/11/06  Vol: E99-D No:2  Page(s): 415-423

    The new generation of telemedicine systems enables healthcare service providers to monitor patients not only in the hospital but also when they are at home. In order to efficiently exploit these systems, human information collected from end devices must be sent to the medical center through reliable data transmission. In this paper, we propose an adaptive relay transmission scheme to improve the reliability of data transmission for wireless body area networks. In our proposal, nodes that have successfully decoded a packet from the source node become relay candidates, and the candidate with the highest channel gain is selected to forward the failed packet instead of the source node. The scheme addresses both the data collision problem and the inefficiency of relay selection in relay transmission. Our experimental results show that the proposed scheme provides better performance than previous works in terms of packet delivery ratio and end-to-end delay.
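
    The relay-selection rule itself is easy to sketch: among the nodes that decoded the source packet correctly, pick the one with the highest channel gain. The values below are synthetic, and this is not the paper's protocol implementation.

      # Sketch of the relay-selection rule on synthetic node states.
      import random

      random.seed(1)
      nodes = [{"id": i,
                "decoded_ok": random.random() > 0.4,   # did this node decode the packet?
                "gain": random.random()}               # channel gain toward the destination
               for i in range(8)]

      candidates = [n for n in nodes if n["decoded_ok"]]
      best = max(candidates, key=lambda n: n["gain"]) if candidates else None
      if best is None:
          print("no relay available: source must retransmit")
      else:
          print(f"node {best['id']} relays the failed packet (gain={best['gain']:.2f})")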

  • An Optimization Mechanism for Mid-Bond Testing of TSV-Based 3D SoCs

    Kele SHEN  Zhigang YU  Zhou JIANG  

    PAPER-Semiconductor Materials and Devices
    Vol: E99-C No:2  Page(s): 308-315

    Ever-growing requirements for system-on-chip (SoC) designs make three-dimensional (3D) technology a promising alternative for extending Moore's Law. Despite the many advantages 3D technology provides, it faces testing issues because of the complexity of 3D design. Therefore, optimizing tests and reducing test cost are crucial challenges. In this paper, we propose a novel optimization mechanism for 3D SoCs that minimizes test time for mid-bond testing. To make the proposed mechanism more practical, we discuss test cost in mid-bond testing with consideration of manufacturing influence factors. Experimental results on the ITC'02 SoC benchmark circuits show that our proposed mechanism reduces mid-bond test time by around 73% on average compared with a baseline solution; furthermore, the mechanism also proves its capacity for test cost reduction.

  • A Workload Assignment Policy for Reducing Power Consumption in Software-Defined Data Center Infrastructure

    Takaaki DEGUCHI  Yoshiaki TANIGUCHI  Go HASEGAWA  Yutaka NAKAMURA  Norimichi UKITA  Kazuhiro MATSUDA  Morito MATSUOKA  

    PAPER-Energy in Electronics Communications
    Vol: E99-B No:2  Page(s): 347-355

    In this paper, we propose a workload assignment policy for reducing the power consumption of air conditioners in data centers. In the proposed policy, to reduce air conditioner power consumption by raising the temperature set points of the air conditioners, the temperatures of all server back-planes are equalized by moving workload from the servers with the highest temperatures to the servers with the lowest temperatures. To evaluate the proposed policy, we use a computational fluid dynamics simulator to obtain airflow and air temperature in data centers, and an air conditioner model based on experimental results from an actual data center. Through evaluation, we show that the air conditioners' power consumption is reduced by 10.4% in a conventional data center. In addition, in a tandem data center proposed by our research group, the air conditioners' power consumption is reduced by 53%, and the total power consumption of the whole data center is shown to be reduced by 23% by reusing the exhaust heat from the servers.
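
    A greedy sketch of the equalization idea is shown below; the temperatures, workloads, and the assumed per-unit temperature effect are made up for illustration, whereas the paper derives them from a CFD simulator and an air-conditioner model.

      # Greedy sketch: repeatedly move a unit of workload from the server with the
      # hottest back-plane to the one with the coolest, so temperatures equalize.
      # All numbers below are invented for illustration.
      temps = {"srv1": 38.0, "srv2": 31.5, "srv3": 35.0, "srv4": 29.0}   # back-plane temps (C)
      load  = {"srv1": 10,   "srv2": 6,    "srv3": 8,    "srv4": 4}      # workload units
      DEG_PER_UNIT = 0.5        # assumed temperature change per moved workload unit

      for _ in range(20):
          hot = max(temps, key=temps.get)
          cool = min(temps, key=temps.get)
          if temps[hot] - temps[cool] < 2 * DEG_PER_UNIT or load[hot] == 0:
              break             # nothing left to gain from another move
          load[hot] -= 1; load[cool] += 1
          temps[hot] -= DEG_PER_UNIT; temps[cool] += DEG_PER_UNIT

      print(load, temps)        # temperatures pulled toward a common value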

  • Maximizing the Total Weight of Just-In-Time Jobs under Multi-Slot Conditions Is NP-Hard

    Eishi CHIBA  Shinji IMAHORI  

    LETTER-Fundamentals of Information Systems
    Publicized: 2015/10/26  Vol: E99-D No:2  Page(s): 525-528

    A job is called just-in-time if it is completed exactly on its due date. Under multi-slot conditions, each job has one due date per time slot and has to be completed just-in-time on one of its due dates. Moreover, each job has a certain weight per time slot. We would like to find a just-in-time schedule that maximizes the total weight under multi-slot conditions. In this paper, we prove that this problem is NP-hard.
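
    To make the problem statement concrete, the toy brute-force search below enumerates, for each job, either rejection or one of its per-slot due dates, keeps only non-overlapping schedules, and reports the maximum total weight; the instance is invented, and exhaustive search of this kind is only viable at toy scale, consistent with the NP-hardness result.

      # Toy brute-force illustration of the multi-slot just-in-time problem:
      # each scheduled job must finish exactly on one of its due dates, jobs may
      # not overlap on the machine, and we maximize the total weight of the
      # scheduled (just-in-time) jobs. The instance below is invented.
      from itertools import product

      # job -> list of (due_date, weight, processing_time) options, one per slot
      jobs = {"A": [(4, 5, 2), (9, 3, 2)],
              "B": [(4, 4, 3), (8, 6, 3)],
              "C": [(6, 2, 1), (9, 4, 1)]}

      def feasible(choice):
          # choice: per job, an option index or None (job left unscheduled)
          intervals = []
          for name, idx in choice.items():
              if idx is None:
                  continue
              due, _, p = jobs[name][idx]
              intervals.append((due - p, due))        # job occupies (start, finish]
          intervals.sort()
          return all(a_end <= b_start
                     for (_, a_end), (b_start, _) in zip(intervals, intervals[1:]))

      best = 0
      for picks in product(*[[None] + list(range(len(v))) for v in jobs.values()]):
          choice = dict(zip(jobs, picks))
          if feasible(choice):
              w = sum(jobs[n][i][1] for n, i in choice.items() if i is not None)
              best = max(best, w)
      print(best)   # 15: A finishes at 4, B at 8, C at 9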
