
Keyword Search Result

[Keyword] (42,807 hits)

Showing results 10661-10680 of 42,807

  • Bitstream Protection in Dynamic Partial Reconfiguration Systems Using Authenticated Encryption

    Yohei HORI  Toshihiro KATASHITA  Hirofumi SAKANE  Kenji TODA  Akashi SATOH  

     
    PAPER-Computer System
    Vol: E96-D No:11  Page(s): 2333-2343

    Protecting the confidentiality and integrity of a configuration bitstream is essential for the dynamic partial reconfiguration (DPR) of field-programmable gate arrays (FPGAs). This is because erroneous or falsified bitstreams can cause fatal damage to FPGAs. In this paper, we present a high-speed and area-efficient bitstream protection scheme for DPR systems using the Advanced Encryption Standard with Galois/Counter Mode (AES-GCM), which is an authenticated encryption algorithm. Unlike many previous studies, our bitstream protection scheme also provides a mechanism for error recovery and tamper resistance against configuration block deletion, insertion, and disorder. The implementation and evaluation results show that our DPR scheme achieves a higher performance, in terms of speed and area, than previous methods.
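
    As a rough illustration of the authenticated-encryption primitive the scheme builds on (not the authors' hardware design), the following sketch encrypts and verifies one configuration block with AES-GCM using the Python `cryptography` package; the key handling, per-block nonce policy, and block-index header are assumptions made for the example.

    ```python
    # Minimal AES-GCM sketch for protecting a configuration block.
    # Illustrative only: key handling, nonce management, and the block-index
    # header are hypothetical, not the scheme from the paper.
    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=128)
    aesgcm = AESGCM(key)

    def protect_block(index, block):
        """Encrypt one bitstream block; the block index is authenticated as AAD
        so that deletion, insertion, or reordering of blocks is detected."""
        nonce = os.urandom(12)                 # fresh 96-bit nonce per block
        aad = index.to_bytes(4, "big")         # bind the block position
        return nonce, aesgcm.encrypt(nonce, block, aad)

    def verify_block(index, nonce, ct):
        """Decrypt and authenticate; raises InvalidTag on any tampering."""
        return aesgcm.decrypt(nonce, ct, index.to_bytes(4, "big"))

    nonce, ct = protect_block(0, b"partial-bitstream-block-0")
    assert verify_block(0, nonce, ct) == b"partial-bitstream-block-0"
    ```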

  • Training Multiple Support Vector Machines for Personalized Web Content Filters

    Dung Duc NGUYEN  Maike ERDMANN  Tomoya TAKEYOSHI  Gen HATTORI  Kazunori MATSUMOTO  Chihiro ONO  

     
    PAPER-Artificial Intelligence, Data Mining
    Vol: E96-D No:11  Page(s): 2376-2384

    The abundance of information published on the Internet makes filtering of hazardous Web pages a difficult yet important task. Supervised learning methods such as Support Vector Machines (SVMs) can be used to identify hazardous Web content. However, scalability is a big challenge, especially if we have to train multiple classifiers, since different policies exist on what kind of information is hazardous. We therefore propose two different strategies to train multiple SVMs for personalized Web content filters. The first strategy identifies common data clusters and then performs optimization on these clusters in order to obtain good initial solutions for the individual problems. This initialization shortens the path to the optimal solutions and reduces the training time on the individual training sets. The second strategy trains all SVMs simultaneously. We introduce an SMO-based kernel-biased heuristic that balances the reduction rate of the individual objective functions against the computational cost of the kernel matrix. The heuristic relies primarily on the optimality conditions of all optimization problems and secondarily on the pre-calculated part of the whole kernel matrix. This strategy increases the amount of information shared among the learning tasks, thus reducing the number of kernel calculations and the training time. In our experiments on inconsistently labeled training examples, both strategies predicted hazardous Web pages accurately (> 91%) with training times of only 26% and 18%, respectively, of that required by normal sequential training.
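
    The sketch below illustrates only the general idea of sharing kernel computation across several filtering policies, not the paper's SMO-based heuristic: the Gram matrix is computed once and reused by every per-policy SVM through scikit-learn's precomputed-kernel interface, on synthetic data.

    ```python
    # Sketch: train several per-policy SVMs on one shared (precomputed) kernel
    # matrix, so the expensive kernel evaluations are done only once.
    # Synthetic data; this is not the SMO-based heuristic from the paper.
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 50))                  # page feature vectors
    policies = [rng.integers(0, 2, size=200) for _ in range(3)]  # 3 label sets

    K = rbf_kernel(X, gamma=0.1)                    # shared Gram matrix

    filters = []
    for y in policies:
        clf = SVC(kernel="precomputed", C=1.0)
        clf.fit(K, y)                               # reuse K for every policy
        filters.append(clf)

    # Prediction for new pages also reuses a single kernel evaluation pass.
    X_new = rng.normal(size=(5, 50))
    K_new = rbf_kernel(X_new, X, gamma=0.1)
    predictions = [clf.predict(K_new) for clf in filters]
    ```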

  • Negative Correlation Learning in the Estimation of Distribution Algorithms for Combinatorial Optimization

    Warin WATTANAPORNPROM  Prabhas CHONGSTITVATANA  

     
    PAPER-Artificial Intelligence, Data Mining
    Vol: E96-D No:11  Page(s): 2397-2408

    This article introduces the Coincidence Algorithm (COIN) to solve several multimodal puzzles. COIN is an algorithm in the category of Estimation of Distribution Algorithms (EDAs) that makes use of probabilistic models to generate solutions. The model of COIN is a joint probability table of adjacent events (coincidence) derived from the population of candidate solutions. A unique characteristic of COIN is the ability to learn from a negative sample. Various experiments show that learning from a negative example helps to prevent premature convergence, promotes diversity and preserves good building blocks.
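
    A loose sketch of the kind of update a coincidence table admits, with a reward for adjacencies seen in a good solution and a penalty for those seen in a bad one; the learning rate and normalization are simplified stand-ins, not COIN's exact rule.

    ```python
    # Sketch of a coincidence-table update that rewards adjacencies seen in a
    # good solution and penalizes those seen in a bad one (simplified; the
    # learning rate and normalization are illustrative, not COIN's exact rule).
    import numpy as np

    n = 5                                   # problem size (e.g. a tour over 5 items)
    H = np.full((n, n), 1.0 / (n - 1))      # joint table of adjacent events
    np.fill_diagonal(H, 0.0)

    def update(H, good, bad, k=0.1):
        """good/bad are permutations; reinforce edges of 'good', punish 'bad'."""
        for tour, sign in ((good, +k), (bad, -k)):
            for a, b in zip(tour, np.roll(tour, -1)):
                H[a, b] = max(H[a, b] + sign, 0.0)
        # renormalize each row so it stays a probability distribution
        H /= H.sum(axis=1, keepdims=True)
        return H

    H = update(H, good=np.array([0, 2, 4, 1, 3]), bad=np.array([0, 1, 2, 3, 4]))
    print(np.round(H, 3))
    ```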

  • Improving Naturalness of HMM-Based TTS Trained with Limited Data by Temporal Decomposition

    Trung-Nghia PHUNG  Thanh-Son PHAN  Thang Tat VU  Mai Chi LUONG  Masato AKAGI  

     
    PAPER-Speech and Hearing
    Vol: E96-D No:11  Page(s): 2417-2426

    The most important advantage of HMM-based TTS is its high intelligibility. However, speech synthesized by HMM-based TTS sounds muffled and far from natural, especially under limited data conditions, mainly because of its over-smoothness. The motivation for this paper is therefore to improve the naturalness of HMM-based TTS trained under limited data conditions while preserving its intelligibility. To this end, we propose a hybrid TTS, named HTD, that combines HMM-based TTS with the modified restricted Temporal Decomposition (MRTD). Here, TD is an interpolation model that decomposes a spectral or prosodic sequence of speech into sparse event targets and dynamic event functions, and MRTD is a simplified version of TD. Because its event functions are determined in a way that is close to the concept of co-articulation in speech, MRTD can synthesize smooth speech, and the smoothness of the synthesized speech can be adjusted by manipulating the MRTD event targets. Previous studies have also found that the event functions of MRTD represent the linguistic information of speech, which is important for perceiving intelligibility, while the sparse event targets convey non-linguistic information, which is important for perceiving naturalness. Therefore, the prosodic trajectories and the MRTD event functions of the spectral trajectory generated by HMM-based TTS are kept unchanged to preserve the high and stable intelligibility of HMM-based TTS, whereas the MRTD event targets of the spectral trajectory are rendered with an original speech database to enhance the naturalness of the synthesized speech. Experimental results with small Vietnamese datasets revealed that the proposed HTD was equivalent to HMM-based TTS in terms of intelligibility but superior to it in terms of naturalness. Further discussion shows that HTD also has a small footprint. The proposed HTD is therefore highly effective under limited data conditions.

  • Towards Logging Optimization for Dynamic Object Process Graph Construction

    Takashi ISHIO  Hiroki WAKISAKA  Yuki MANABE  Katsuro INOUE  

     
    LETTER-Software System
    Vol: E96-D No:11  Page(s): 2470-2472

    Logging the execution of a program is a popular activity for practical program understanding. However, understanding the behavior of a program from a complete execution trace is difficult because a system may generate a substantial number of runtime events. To focus on a small subset of runtime events, the dynamic object process graph (DOPG) has been proposed. Although a DOPG can potentially facilitate program understanding, the logging process has not been adapted for DOPGs. If a developer is interested in the behavior of a particular object, only the runtime events related to that object are necessary to construct a DOPG; the vast majority of runtime events in a complete execution trace are irrelevant to the object of interest. This paper analyzes actual DOPGs and reports that a logging tool can be optimized to record only the runtime events related to a particular object specified by a developer.
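
    A minimal sketch of the optimization the letter argues for, under the assumption of a simple event record with an object-ID field (the event format and field names are hypothetical): keep only the runtime events that mention the object of interest.

    ```python
    # Sketch: keep only the runtime events that mention a particular object ID,
    # instead of the full trace.  Event format and field names are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Event:
        kind: str        # e.g. "call", "return", "field_access"
        method: str
        object_id: int

    def filter_for_object(trace, target_id):
        """Keep only events relevant to the object of interest."""
        return [e for e in trace if e.object_id == target_id]

    trace = [
        Event("call", "Parser.parse", 7),
        Event("call", "Logger.log", 12),
        Event("field_access", "Parser.buffer", 7),
    ]
    print(filter_for_object(trace, target_id=7))   # two events survive
    ```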

  • Learning from Ideal Edge for Image Restoration

    Jin-Ping HE  Kun GAO  Guo-Qiang NI  Guang-Da SU  Jian-Sheng CHEN  

     
    LETTER-Image Processing and Video Processing
    Vol: E96-D No:11  Page(s): 2487-2491

    Exploiting the fact that ideal edges exist in real images, and adopting the image-analogy style of learning that requires no reference parameters, this paper proposes a blind image restoration algorithm based on a self-adaptive learning method. We show that a specific local image patch exhibiting the degradation characteristic can be used to restore the whole image. In the training process, a clear counterpart of the local image patch is constructed based on the ideal edge assumption, so that explicit identification of the Point Spread Function is no longer needed. Experiments demonstrate the effectiveness of the proposed method on remote sensing images.

  • Predominant Melody Extraction from Polyphonic Music Signals Based on Harmonic Structure

    Jea-Yul YOON  Chai-Jong SONG  Hochong PARK  

     
    LETTER-Music Information Processing
    Vol: E96-D No:11  Page(s): 2504-2507

    A new method for predominant melody extraction from polyphonic music signals based on harmonic structure is proposed. The proposed method first extracts a set of fundamental frequency candidates by analyzing the distances between spectral peaks. The predominant fundamental frequency is then selected by pitch tracking according to the harmonic strength of the extracted candidates. Finally, the method applies pitch smoothing on a large temporal scale to eliminate pitch-doubling errors, and performs voicing frame detection. The proposed method shows the best overall performance for the ADC 2004 DB in the MIREX 2011 audio melody extraction task.
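
    A toy sketch of the first two steps, under simplified assumptions: fundamental-frequency candidates are taken from distances between spectral peaks and ranked by a crude harmonic-strength score; the paper's tracking, smoothing, and voicing detection are not reproduced.

    ```python
    # Sketch: f0 candidates from distances between spectral peaks, ranked by a
    # simple harmonic-strength score (sum of magnitudes near candidate harmonics).
    import numpy as np

    def f0_candidates_from_peaks(peak_freqs):
        """Differences between spectral peaks suggest fundamental candidates."""
        diffs = np.diff(np.sort(peak_freqs))
        return np.unique(np.round(diffs, 1))

    def harmonic_strength(f0, freqs, mags, n_harm=5, tol=10.0):
        score = 0.0
        for h in range(1, n_harm + 1):
            idx = np.argmin(np.abs(freqs - h * f0))
            if abs(freqs[idx] - h * f0) < tol:
                score += mags[idx]
        return score

    freqs = np.array([220.0, 440.0, 660.0, 880.0, 1320.0])   # toy peak spectrum
    mags  = np.array([1.0,   0.8,   0.6,   0.4,   0.2])
    cands = f0_candidates_from_peaks(freqs)
    best = max(cands, key=lambda f: harmonic_strength(f, freqs, mags))
    print(cands, best)      # 220 Hz should win as the predominant f0
    ```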

  • On Discrete Logarithm Based Additively Homomorphic Encryption

    Jae Hong SEO  Keita EMURA  

     
    LETTER-Cryptography and Information Security
    Vol: E96-A No:11  Page(s): 2286-2289

    In this paper, we examine additively homomorphic encryption in the discrete logarithm setting. Recently, Wang et al. proposed an additively homomorphic encryption scheme by modifying the ElGamal encryption scheme [Information Sciences 181 (2011) 3308-3322]. We show that their scheme allows only a limited number of additions among encrypted messages, which differs from what they claimed.
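
    To illustrate the flavor of the limitation, the sketch below implements "exponential" (lifted) ElGamal over a toy group: ciphertexts add homomorphically, but decryption must solve a small discrete logarithm, so only a limited message range is practical. This is an illustrative construction, not the scheme of Wang et al. analyzed in the letter.

    ```python
    # Sketch of exponential ElGamal over a small prime-order group: ciphertexts
    # add homomorphically, but decryption must recover m from g^m by brute
    # force, so only a limited message range is practical.  Toy parameters.
    import random

    p = 1019            # small prime, for illustration only
    g = 2
    x = random.randrange(1, p - 1)          # secret key
    h = pow(g, x, p)                        # public key

    def enc(m):
        r = random.randrange(1, p - 1)
        return (pow(g, r, p), (pow(g, m, p) * pow(h, r, p)) % p)

    def add(c1, c2):                        # homomorphic addition
        return ((c1[0] * c2[0]) % p, (c1[1] * c2[1]) % p)

    def dec(c, max_m=200):
        gm = (c[1] * pow(c[0], p - 1 - x, p)) % p   # recover g^m
        for m in range(max_m + 1):                  # brute-force discrete log
            if pow(g, m, p) == gm:
                return m
        raise ValueError("message outside the decodable range")

    c = add(enc(17), enc(25))
    print(dec(c))        # 42
    ```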

  • On Minimum Feedback Vertex Sets in Bipartite Graphs and Degree-Constraint Graphs

    Asahi TAKAOKA  Satoshi TAYU  Shuichi UENO  

     
    PAPER-Fundamentals of Information Systems
    Vol: E96-D No:11  Page(s): 2327-2332

    We consider the minimum feedback vertex set problem for some bipartite graphs and degree-constrained graphs. We show that the problem is linear time solvable for bipartite permutation graphs and NP-hard for grid intersection graphs. We also show that the problem is solvable in O(n^2 log^6 n) time for n-vertex graphs with maximum degree at most three.
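
    For very small graphs, a brute-force reference implementation of minimum feedback vertex set (exponential time, not the specialized algorithms of the paper) can be useful for checking results:

    ```python
    # Brute-force reference for the minimum feedback vertex set (exponential
    # time, small graphs only) -- a checker, not the paper's algorithms.
    from itertools import combinations

    def is_acyclic(n, edges, removed):
        """Undirected acyclicity check on the graph minus 'removed' (union-find)."""
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in edges:
            if u in removed or v in removed:
                continue
            ru, rv = find(u), find(v)
            if ru == rv:
                return False            # this edge closes a cycle
            parent[ru] = rv
        return True

    def min_fvs(n, edges):
        for k in range(n + 1):
            for subset in combinations(range(n), k):
                if is_acyclic(n, edges, set(subset)):
                    return set(subset)

    # 4-cycle plus a chord: removing one endpoint of the chord suffices.
    print(min_fvs(4, [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]))
    ```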

  • Personalized Emotion Recognition Considering Situational Information and Time Variance of Emotion

    Yong-Soo SEOL  Han-Woo KIM  

     
    PAPER-Human-computer Interaction
    Vol: E96-D No:11  Page(s): 2409-2416

    To understand human emotion, it is necessary to be aware of the surrounding situation and of individual personalities. In most previous studies, however, these important aspects were not considered, and emotion recognition was treated simply as a classification problem. In this paper, we attempt new approaches that utilize a person's situational information and personality for understanding emotion. We propose a method of extracting situational information and building a personalized emotion model that reflects the personality of each character in a text. To extract and utilize situational information, we propose a situation model using lexical and syntactic information. In addition, to reflect the personality of an individual, we propose a personalized emotion model using KBANN (Knowledge-Based Artificial Neural Network). Our proposed system also has the advantage of using a traditional keyword-spotting algorithm. We further reflect the fact that the strength of an emotion decreases over time. Experimental results show that the proposed system can recognize a person's emotion more accurately and intelligently than previous methods.

  • Mathematical Analysis of Call Admission Control in Mobile Hotspots

    Jae Young CHOI  Bong Dae CHOI  

     
    PAPER-Fundamental Theories for Communications
    Vol: E96-B No:11  Page(s): 2816-2827

    A mobile hotspot is a moving vehicle that hosts an Access Point (AP), such as a train, bus, or subway car, in which users connect through the AP to an external cellular network to access Internet services. To meet Quality of Service (QoS) requirements, typically on throughput and/or delay, Call Admission Control (CAC) is needed to restrict the number of users accepted by the AP. In this paper, we analyze a modified guard channel scheme as CAC for a mobile hotspot, as follows. While the mobile hotspot is in the stop-state, we adopt a guard channel scheme in which an optimal number of resource units is reserved for vertical handoff users moving from the cellular network to the WLAN. While the mobile hotspot is in the move-state, there are no handoff calls, and so no resources are reserved for handoff calls, in order to maximize the utilization of the WLAN capacity. We model the call arrival and departure processes by a Markov Modulated Poisson Process (MMPP) and then model our CAC by a 2-dimensional continuous-time Markov chain (CTMC) for single traffic and a 3-dimensional CTMC for two types of traffic. We solve for the steady-state probabilities by the Quasi-Birth and Death (QBD) method and obtain various performance measures such as the new call blocking probabilities, the handoff call dropping probabilities, and the channel utilizations. We compare our CAC with the conventional guard channel scheme, in which the number of guard resources is fixed at all times regardless of the state of the mobile hotspot. Finally, we find the optimal threshold on the amount of resources to be reserved for handoff calls subject to a strict constraint on the handoff call dropping probability.
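
    As a much-simplified illustration of how blocking and dropping probabilities fall out of a steady-state analysis, the sketch below solves a plain one-dimensional guard-channel chain with Poisson arrivals; the paper's MMPP arrivals and QBD solution are not reproduced, and all rates are made up.

    ```python
    # Sketch of a plain guard-channel model: C channels, the last G reserved
    # for handoff calls, Poisson arrivals and exponential holding times.
    # This is a 1-D birth-death chain, far simpler than the paper's MMPP/QBD
    # model, but it shows how blocking/dropping probabilities are computed.
    import numpy as np

    C, G = 10, 2                 # total channels, guard channels
    lam_new, lam_ho, mu = 4.0, 1.0, 1.0

    Q = np.zeros((C + 1, C + 1))
    for n in range(C):
        # new calls are admitted only while occupancy is below C - G
        Q[n, n + 1] = lam_ho + (lam_new if n < C - G else 0.0)
    for n in range(1, C + 1):
        Q[n, n - 1] = n * mu
    np.fill_diagonal(Q, -Q.sum(axis=1))

    # Solve pi Q = 0 with sum(pi) = 1.
    A = np.vstack([Q.T, np.ones(C + 1)])
    b = np.zeros(C + 2); b[-1] = 1.0
    pi, *_ = np.linalg.lstsq(A, b, rcond=None)

    p_block_new = pi[C - G:].sum()     # states where new calls are rejected
    p_drop_ho = pi[C]                  # handoff dropped only when all busy
    print(round(p_block_new, 4), round(p_drop_ho, 4))
    ```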

  • Improving Recovery Rate for Packet Loss in Large-Scale Telecom Smart TV Systems

    Xiuyan JIANG  Dejian YE  Yiming CHEN  Xuejun TIAN  

     
    PAPER-Information Network
    Vol: E96-D No:11  Page(s): 2365-2375

    Smart TVs are expected to play a leading role in the future networked intelligent screen market, and many operators are planning to deploy them at large scale within a few years. It is therefore necessary for smart TVs to provide high-quality services to users. Packet loss is one critical factor that degrades QoS in smart TVs: even a very small amount of packet loss (1-2%) can decrease the QoS and seriously affect the user experience. This paper applies stochastic differential equations to analyze the queue in the buffer of access points in smart TV multicast systems and to explain the cause of packet loss, and then proposes an end-to-end error recovery scheme (referred to as OPRSFEC) whose core algorithm is based on Reed-Solomon codes, with four optimizations of the finite-field arithmetic: 1) using a Cauchy matrix instead of a Vandermonde matrix for encoding and decoding; 2) generating the inverse matrix by table look-up; 3) replacing matrix multiplication with table look-ups; 4) a novel partitioning of the matrix multiplication. The scheme is implemented at the application layer, which hides the heterogeneity of terminals and servers, recovers 100% of lost packets at loss rates of 1-2% in multicast systems, and has very little effect on the real-time user experience. Simulations demonstrate that the proposed scheme performs well, runs successfully on Sigma and Mstar Moca TV terminals, and increases the QoS of smart TVs. Recently, the OPRSFEC middleware has become part of the IPTV2.0 standard of Shanghai Telecom and has been running properly on the Mstar boards of Haier Moca TVs.
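
    The sketch below illustrates two of the four listed optimizations in isolation, GF(2^8) multiplication by log/antilog table look-up and a Cauchy encoding matrix, using the common 0x11d field polynomial; it is a generic illustration, not the OPRSFEC implementation.

    ```python
    # GF(2^8) multiplication via log/antilog tables and a Cauchy encoding
    # matrix (every square submatrix of a Cauchy matrix is invertible, which
    # is what Reed-Solomon erasure coding needs).  Field polynomial 0x11d.
    GF_EXP = [0] * 512
    GF_LOG = [0] * 256
    x = 1
    for i in range(255):
        GF_EXP[i] = x
        GF_LOG[x] = i
        x <<= 1
        if x & 0x100:
            x ^= 0x11d
    for i in range(255, 512):
        GF_EXP[i] = GF_EXP[i - 255]

    def gf_mul(a, b):
        if a == 0 or b == 0:
            return 0
        return GF_EXP[GF_LOG[a] + GF_LOG[b]]     # multiplication by table look-up

    def gf_inv(a):
        return GF_EXP[255 - GF_LOG[a]]

    def cauchy_matrix(k, m):
        """m x k encoding matrix with entries 1/(x_i + y_j), x_i != y_j."""
        xs, ys = range(k, k + m), range(k)
        return [[gf_inv(xi ^ yj) for yj in ys] for xi in xs]

    M = cauchy_matrix(k=4, m=2)          # 2 parity symbols over 4 data symbols
    data = [10, 20, 30, 40]
    parity = []
    for row in M:
        acc = 0
        for coeff, d in zip(row, data):
            acc ^= gf_mul(coeff, d)      # XOR is addition in GF(2^8)
        parity.append(acc)
    print(M, parity)
    ```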

  • Utilizing Multiple Data Sources for Localization in Wireless Sensor Networks Based on Support Vector Machines

    Prakit JAROENKITTICHAI  Ekachai LEELARASMEE  

     
    PAPER-Mobile Information Network and Personal Communications
    Vol: E96-A No:11  Page(s): 2081-2088

    Localization in wireless sensor networks is the problem of estimating the geographical locations of wireless sensor nodes. We propose a framework for utilizing multiple data sources in a localization scheme based on support vector machines. The framework can be used with both the classification and the regression formulation of support vector machines. The proposed method uses only connectivity information. Multiple hop-count data sources can be generated by adjusting the transmission power of the sensor nodes to change their communication ranges, and the optimal choice of communication ranges can be determined by evaluating mutual information. We consider two methods for integrating the multiple data sources: the unif method and the align method. The improved localization accuracy of the proposed framework is verified by a simulation study.
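
    A hedged sketch of the overall idea on synthetic data: hop-count vectors obtained at several transmission powers are concatenated into one feature vector per node and the coordinates are regressed with support vector regression; the unif and align integration methods themselves are not reproduced.

    ```python
    # Sketch: concatenate hop-count features from several transmission powers
    # and regress node coordinates with SVR.  Synthetic data; not the paper's
    # unif/align integration methods.
    import numpy as np
    from sklearn.multioutput import MultiOutputRegressor
    from sklearn.svm import SVR

    rng = np.random.default_rng(1)
    n_nodes, n_anchors = 300, 8

    positions = rng.uniform(0, 100, size=(n_nodes, 2))
    anchors = rng.uniform(0, 100, size=(n_anchors, 2))

    features = []
    for r in (15.0, 25.0, 40.0):                 # communication range per power level
        d = np.linalg.norm(positions[:, None] - anchors[None, :], axis=2)
        features.append(np.ceil(d / r))          # crude hop-count proxy
    X = np.hstack(features)                      # (n_nodes, n_anchors * 3)

    model = MultiOutputRegressor(SVR(C=10.0))
    model.fit(X[:250], positions[:250])
    pred = model.predict(X[250:])
    print(np.mean(np.linalg.norm(pred - positions[250:], axis=1)))   # mean error
    ```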

  • Cheating Detectable Secret Sharing Schemes for Random Bit Strings

    Wakaha OGATA  Toshinori ARAKI  

     
    PAPER-Cryptography and Information Security
    Vol: E96-A No:11  Page(s): 2230-2234

    In secret sharing, Tompa and Woll considered the problem of cheaters who try to make another participant reconstruct an invalid secret. Later, several models of such cheating were formalized, and lower bounds on the share size were derived for a given minimum successful cheating probability. Under the assumption that cheaters do not know the distributed secret, no efficient scheme that can distribute bit strings has been known. In this paper, we propose an efficient scheme for distributing bit strings with an arbitrary access structure. When a random bit string is distributed with a threshold access structure, the bit length of the shares in the proposed scheme is only a few bits longer than the lower bound.

  • Output Feedback Control of a Chain of Integrators under Uncertain AC and DC Sensor Noise

    Ho-Lim CHOI  

     
    LETTER-Systems and Control
    Vol: E96-A No:11  Page(s): 2275-2278

    We consider an output feedback control problem for a chain of integrators under sensor noise, where the noise enters the output feedback channel additively. A similar problem was addressed most recently in [9], but that result was developed only for AC sensor noise. We generalize the result of [9] by allowing the sensor noise to include both AC and DC components. With our new output feedback controller, we show that the ultimate bounds of all states can be made arbitrarily small. We demonstrate the generality of our result over [9] with an example.

  • Single Parameter Logarithmic Image Processing for Edge Detection

    Fuji REN  Bo LI  Qimei CHEN  

     
    PAPER-Image Processing and Video Processing
    Vol: E96-D No:11  Page(s): 2437-2449

    Considering the non-linear properties of the human visual system, many non-linear operators and models have been developed, in particular the logarithmic image processing (LIP) model proposed by Jourlin and Pinoli, which has been shown to be physically consistent with several laws of the human visual system and has been applied successfully in image processing. Several modifications of this logarithmic mathematical framework have recently been presented, such as parameterized logarithmic image processing (PLIP), pseudo-logarithmic image processing, and homomorphic logarithmic image processing. In this paper, a new single-parameter logarithmic model for image processing, together with an adaptive parameter-based Sobel edge detection algorithm, is presented. Based on an analysis of the distributive law, the subtractive law, and the isomorphic property of the PLIP model, the five parameters in PLIP are replaced by a single parameter to ensure the completeness of the model and its physical consistency with the nature of an image, and an adaptive parameter-based Sobel edge detection algorithm is then proposed. An image noise estimation method is used to evaluate the noise level of the image, and the adaptive parameter of the single-parameter LIP model is calculated from the noise level and the grayscale value of the corresponding image area; the single-parameter LIP-based Sobel operation then overcomes the noise sensitivity of classical LIP-based Sobel edge detection, especially in dark areas of an image, while retaining edge sensitivity. Compared with the classical LIP and PLIP models, the proposed single-parameter LIP achieves satisfactory results in noise suppression and edge accuracy.
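
    As a simplified illustration only, the sketch below applies a Sobel operator through the classical LIP isomorphism and treats the upper bound M as a single tunable parameter; the paper's adaptive single-parameter model and noise estimation are not reproduced.

    ```python
    # Sketch of a LIP-style Sobel gradient: map the image through the LIP
    # isomorphism phi(f) = -M ln(1 - f/M), apply the classical Sobel operator
    # in that space, and map the magnitude back.  Treating M as the single
    # tunable parameter is a simplification for illustration.
    import numpy as np
    from scipy.ndimage import sobel

    def lip_sobel(image, M=256.0):
        f = np.clip(image.astype(float), 0, M - 1)
        phi = -M * np.log(1.0 - f / M)              # LIP isomorphism
        gx, gy = sobel(phi, axis=1), sobel(phi, axis=0)
        grad = np.hypot(gx, gy)
        return M * (1.0 - np.exp(-grad / M))        # map magnitude back to [0, M)

    img = np.zeros((32, 32)); img[:, 16:] = 40.0    # faint edge in a dark image
    print(lip_sobel(img).max())                     # edge response despite low contrast
    ```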

  • Auto-Tuning of Thread Assignment for Matrix-Vector Multiplication on GPUs

    Jinwei WANG  Xirong MA  Yuanping ZHU  Jizhou SUN  

     
    PAPER-Fundamentals of Information Systems
    Vol: E96-D No:11  Page(s): 2319-2326

    Modern GPUs have evolved into more general processors capable of executing scientific and engineering computations. Their large number of computing cores provides a highly parallel computing environment that is well suited to data-parallel arithmetic computations, particularly linear algebra operations. Matrix-vector multiplication is one of the most important dense linear algebra operations; it appears in a diverse set of applications in many fields and must therefore be fully optimized to achieve high performance. In this paper, we propose a novel auto-tuning method for matrix-vector multiplication on GPUs, in which the number of threads assigned to compute one element of the result vector is auto-tuned according to the size of the matrix. On an Nvidia GTX 650 GPU with the Kepler architecture, we developed an auto-tuner that automatically selects the optimal number of assigned threads for the calculation. Based on the auto-tuner's result, we developed a versatile generic matrix-vector multiplication kernel with the CUDA programming model. A series of experiments on matrices of different shapes and sizes compared the performance of our kernel with that of the kernels from CUBLAS 5.0, MAGMA 1.3, and a warp method. The results show that the performance of our matrix-vector multiplication kernel approaches the optimum as the matrix size increases and depends very little on the shape of the matrix, a significant improvement over the other three kernels, which exhibit unstable performance for different matrix shapes.
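
    The sketch below emulates only the auto-tuning selection loop on the CPU with NumPy, where "threads per row" is stood in for by the number of partial-sum chunks; it is not the CUDA auto-tuner of the paper.

    ```python
    # Sketch of the auto-tuning idea only: for a given matrix shape, benchmark a
    # few candidate "threads per output element" values (emulated here by the
    # number of partial-sum chunks per row) and keep the fastest.
    import time
    import numpy as np

    def matvec_chunked(A, x, threads_per_row):
        cols = np.array_split(np.arange(A.shape[1]), threads_per_row)
        partial = [A[:, c] @ x[c] for c in cols]    # each "thread" reduces a slice
        return np.sum(partial, axis=0)

    def autotune(A, x, candidates=(1, 2, 4, 8, 16, 32)):
        best, best_t = None, float("inf")
        for tpr in candidates:
            t0 = time.perf_counter()
            matvec_chunked(A, x, tpr)
            dt = time.perf_counter() - t0
            if dt < best_t:
                best, best_t = tpr, dt
        return best

    A = np.random.rand(512, 8192); x = np.random.rand(8192)
    print("selected threads per row:", autotune(A, x))
    ```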

  • Enhanced Film Grain Noise Removal and Synthesis for High Fidelity Video Coding

    Inseong HWANG  Jinwoo JEONG  Sungjei KIM  Jangwon CHOI  Yoonsik CHOE  

     
    PAPER-Image
    Vol: E96-A No:11  Page(s): 2253-2264

    In this paper, we propose a novel technique for film grain noise removal and synthesis that can be adopted in high-fidelity video coding. Film grain noise enhances the natural appearance of high-fidelity video and should therefore be preserved. However, film grain noise is a burden to typical video compression systems because it has relatively large energy levels in the high-frequency region. In order to improve the coding performance while preserving the film grain noise, we propose film grain noise removal as a pre-processing step and film grain noise synthesis as a post-processing step. In the pre-processing step, the film grain noise is removed by using temporal and inter-color correlations. In particular, color image denoising using inter-color prediction performs well in the noise-concentrated B plane, because film grain noise is correlated across color channels in the RGB domain. In the post-processing step, we present a noise model that generates noise close to the actual noise in terms of observed statistical properties, namely the inter-color correlation and the power of the film grain noise. The results show that the coding gain for the denoised video is higher than in previous works, while the visual quality of the final reconstructed video is well preserved.
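
    A small sketch of the synthesis side under stated assumptions: grain with a given per-channel power and RGB inter-color correlation is generated by shaping white noise with a Cholesky factor of a hypothetical measured covariance; this is not the paper's noise model.

    ```python
    # Sketch of grain synthesis matching two observed statistics: per-channel
    # noise power and RGB inter-color correlation, imposed with a Cholesky
    # factor of the target covariance.
    import numpy as np

    def synthesize_grain(h, w, cov):
        """cov: 3x3 target covariance of (R, G, B) grain values."""
        white = np.random.default_rng(0).normal(size=(h * w, 3))
        L = np.linalg.cholesky(cov)
        grain = white @ L.T                      # now has covariance ~ cov
        return grain.reshape(h, w, 3)

    # Hypothetical statistics "measured" from the removed grain: strongest in
    # the B plane, positively correlated across channels.
    cov = np.array([[1.0, 0.6, 0.7],
                    [0.6, 1.2, 0.8],
                    [0.7, 0.8, 2.0]])
    grain = synthesize_grain(64, 64, cov)
    print(np.round(np.cov(grain.reshape(-1, 3).T), 2))   # close to the target cov
    ```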

  • Congestion Control, Routing and Scheduling in Communication Networks: A Tutorial Open Access

    Jean WALRAND  Abhay K. PAREKH  

     
    INVITED PAPER
    Vol: E96-B No:11  Page(s): 2714-2723

    In communication networks, congestion control, routing, and multiple access schemes for scheduling transmissions are typically regulated by distributed algorithms. Engineers designed these algorithms using clever heuristics that they refined in the light of simulation results and experiments. Over the last two decades, a deeper understanding of these algorithms emerged through the work of researchers. This understanding has a real potential for improving the design of protocols for data centers, cloud computing, and even wireless networks. Since protocols tend to be standardized by engineers, it is important that they become familiar with the insights that emerged in research. We hope that this paper might appeal to practitioners and make the research results intuitive and useful. The methods that the paper describes may be useful for many other resource allocation problems such as in call centers, manufacturing lines, hospitals and the service industry.

  • On the Complexity of Inference and Completion of Boolean Networks from Given Singleton Attractors

    Hao JIANG  Takeyuki TAMURA  Wai-Ki CHING  Tatsuya AKUTSU  

     
    PAPER-General Fundamentals and Boundaries
    Vol: E96-A No:11  Page(s): 2265-2274

    In this paper, we consider the problem of inferring a Boolean network (BN) from a given set of singleton attractors, where it is required that the resulting BN has the same set of singleton attractors as the given one. We show that the problem can be solved in linear time if the number of singleton attractors is at most two and each Boolean function is restricted to be a conjunction or disjunction of literals. We also show that the problem can be solved in polynomial time if more general Boolean functions are allowed. In addition to the inference problem, we study two network completion problems for a given set of singleton attractors: adding the minimum number of edges to a given network, and determining Boolean functions for all nodes when only the network structure of a BN is given. In particular, we show that the latter problem cannot be solved in polynomial time unless P=NP, by means of a polynomial-time Turing reduction from the complement of the another solution problem for Boolean satisfiability.
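
    For concreteness, a state x is a singleton attractor of a BN exactly when f(x) = x; the brute-force sketch below enumerates them for a tiny example network (the choice of functions is illustrative).

    ```python
    # Sketch: singleton attractors of a Boolean network are its fixed points.
    # Brute-force enumeration, small n only; the functions are illustrative.
    from itertools import product

    functions = [
        lambda x: x[1] and x[2],      # node 0
        lambda x: x[0] or x[2],       # node 1
        lambda x: x[0],               # node 2
    ]

    def singleton_attractors(functions):
        n = len(functions)
        return [x for x in product((0, 1), repeat=n)
                if all(int(f(x)) == x[i] for i, f in enumerate(functions))]

    print(singleton_attractors(functions))    # fixed points of the network
    ```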
