IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4

Advance publication (published online immediately after acceptance)

Volume E92-D No.10  (Publication Date:2009/10/01)

    Special Section on New Technologies and their Applications of the Internet
  • FOREWORD Open Access

    Katsuyuki YAMAZAKI  

     
    FOREWORD

      Page(s):
    1825-1825
  • An Active Multicasting Mechanism for Mobile Hosts in Wireless Networking Environments

    Ping WANG  Fuqiang LIU  

     
    PAPER-Wireless Network

      Page(s):
    1826-1835

    To support mobile multicasting in wireless networks, we present a new active multicasting mechanism that exploits the state characteristic of the multicast agent. In this mechanism, a multicast agent only tracks the position of roaming hosts when it is not forwarding multicast packets. Upon reception of multicast packets, the multicast agent adjusts its service range to achieve an appropriate balance between routing efficiency and the overhead of multicast tree reconstruction. As a result, many unnecessary tree reconstructions are eliminated during periods when no multicast packets are transferred, and the multicast delivery path remains near optimal because of the limited service range of the multicast agent in the active state. To demonstrate the effectiveness of the proposed scheme, we develop an analytical model to evaluate its signaling overhead. Our performance analysis shows that the proposed scheme significantly reduces system overhead while keeping multicast routing near optimal. Another important contribution is the novel analytical approach to evaluating the performance of mobile multicast routing protocols.

  • Autonomous Pull-Push Community Construction Technology for High-Assurance

    Khalid MAHMOOD  Xiaodong LU  Yuji HORIKOSHI  Kinji MORI  

     
    PAPER-Wireless Network

      Page(s):
    1836-1846

    Location Based Services (LBS) are expected to become one of the major drivers of ubiquitous services, owing to the recent inception of GPS-enabled mobile devices, the development of the Web 2.0 paradigm, and the emergence of 3G broadband networks. With this vision in mind, a Community Context-attribute-oriented Collaborative Information Environment (CCCIE) based Autonomous Decentralized Community System (ADCS) has been proposed to enable the provision of services to specific users in a specific place at a specific time, considering various context attributes. This paper presents an autonomous community construction technology that shares a service discovered by one member among the others in a flexible way, to improve timeliness and reduce network cost. To meet the crucial goals of real-time, context-aware community construction (provision of services and service information to users with common interests) and of defining a flexible service area in the highly dynamic operating environment of ADCS, the proposed progressive ripple based service discovery technique introduces the novel idea of a slow, steadily advancing search followed by a swift boundary-confining mechanism, while service area construction shares the discovered service among members in the defined area to further improve timeliness and reduce network cost. Analysis and empirical results verify the effectiveness of the proposed technique.

  • Handover Management for VoWLAN Based on Estimation of AP Queue Length and Frame Retries

    Muhammad NISWAR  Shigeru KASHIHARA  Kazuya TSUKAMOTO  Youki KADOBAYASHI  Suguru YAMAGUCHI  

     
    PAPER-Wireless Network

      Page(s):
    1847-1856

    Switching a communication path from one Access Point (AP) to another in inter-domain WLANs is a critical challenge for delay-sensitive applications such as Voice over IP (VoIP), because communication quality is likely to deteriorate during handover (HO). To maintain VoIP quality during HO, we need to solve many problems. In particular, in bi-directional communication such as VoIP, an AP becomes a bottleneck as the number of VoIP calls increases. As a result, packets queued in the AP buffer may experience a large queuing delay or packet losses due to the increase in queue length or buffer overflow, thereby degrading VoIP quality on the Mobile Node (MN) side. To avoid this degradation, MNs need to execute HO appropriately and autonomously in response to changes in the wireless network condition, i.e., the deterioration of wireless link quality and the congestion state at the AP. In this paper, we propose an HO decision strategy that considers frame retries, AP queue length, and the transmission rate at an MN to maintain VoIP quality during HO. Through simulation experiments, we then show that our proposed method can maintain VoIP quality during HO by properly detecting the wireless network condition.

  • HIMIPv6: An Efficient IP Mobility Management Protocol for Broadband Wireless Networks

    Hyunku JEONG  Seungryoul MAENG  Youngsu CHAE  

     
    PAPER-Wireless Network

      Page(s):
    1857-1866

    With the increasing deployment of mobile devices and the advent of broadband wireless access systems such as WiBro, mWiMAX, and HSDPA, an efficient IP mobility management protocol has become one of the most important technical issues for the successful deployment of broadband wireless data networking services. The IETF has proposed Mobile IPv6 as the basic mobility management protocol for IPv6 networks. To enhance the performance of the basic MIPv6, researchers have been actively working on the HMIPv6 and FMIPv6 protocols. In this paper, we propose a new mobility management protocol, HIMIPv6 (Highly Integrated MIPv6), which tightly integrates the hierarchical mobility management mechanism of HMIPv6 and the proactive handover support of FMIPv6 to enhance handover performance, especially for cellular networking environments with highly frequent handover activity. We have performed an extensive simulation study using ns-2, and the results show that the proposed HIMIPv6 outperforms FMIPv6 and HMIPv6: there is no packet loss or consequent service interruption caused by IP handover in HIMIPv6.

  • Proactive AP Selection Method Considering the Radio Interference Environment

    Yuzo TAENAKA  Shigeru KASHIHARA  Kazuya TSUKAMOTO  Suguru YAMAGUCHI  Yuji OIE  

     
    PAPER-Wireless Network

      Page(s):
    1867-1876

    In the near future, wireless local area networks (WLANs) will overlap to provide continuous coverage over a wide area. In such ubiquitous WLANs, a mobile node (MN) moving freely between multiple access points (APs) requires not only permanent access to the Internet but also continuous communication quality during handover. In order to satisfy these requirements, an MN needs to (1) select an AP with better performance and (2) execute a handover seamlessly. To satisfy requirement (2), we proposed a seamless handover method in a previous study. As for requirement (1), the Received Signal Strength Indicator (RSSI) is usually employed to measure wireless link quality in a WLAN system. However, in a real environment, especially where APs are densely situated, it is difficult to always select an AP with better performance based only on the RSSI, because the RSSI alone cannot detect the degradation of communication quality due to radio interference. Moreover, it is important that AP selection be completed by the MN alone: in ubiquitous WLANs, the APs can be assumed to be managed by various organizations or operators, so they cannot be modified for AP selection. To overcome these difficulties, in the present paper we propose and implement a proactive AP selection method that assesses the wireless link condition based on the number of frame retransmissions in addition to the RSSI. In the evaluation, we show that the proposed AP selection method can appropriately select an AP with good wireless link quality, i.e., high RSSI and low radio interference.

  • Expediting Experiments across Testbeds with AnyBed: A Testbed-Independent Topology Configuration System and Its Tool Set

    Mio SUZUKI  Hiroaki HAZEYAMA  Daisuke MIYAMOTO  Shinsuke MIWA  Youki KADOBAYASHI  

     
    PAPER-Network Architecture and Testbed

      Page(s):
    1877-1887

    Building an experimental network within a testbed has been a tiresome process for experimenters, due to the complexity of physical resource assignment and the configuration overhead. Moreover, the process cannot be reused across testbeds, because the syntax of a configuration file varies depending on the specific hardware and software. Re-configuring an experimental topology for each testbed wastes time; at worst, an experimenter may be unable to carry out his or her experiments within the limited lease time of a testbed. In this paper, we propose AnyBed, an experimental network-building system. The conceptual idea of AnyBed is that if experimental network topologies can be made portable across any kind of testbed, then building an experimental network on a testbed can be expedited while the experiments are managed with each testbed's own support tools. To achieve this, AnyBed divides an experimental network configuration into logical and physical network topologies. By mapping these two topologies, AnyBed can build the intended logical network topology on any PC cluster. We have evaluated the AnyBed implementation using two distinct clusters. The evaluation results show that a BGP topology with 150 nodes can be constructed on a large-scale testbed in less than 113 seconds.

  • A Pub/Sub Message Distribution Architecture for Disruption Tolerant Networks

    Sergio CARRILHO  Hiroshi ESAKI  

     
    PAPER-Network Architecture and Testbed

      Page(s):
    1888-1896

    Access to information is taken for granted in urban areas covered by a robust communication infrastructure. Nevertheless, most areas in the world are not covered by such infrastructures. We propose a DTN publish/subscribe system called Hikari, which uses node mobility to distribute messages without relying on a robust infrastructure. The field of Disruption/Delay Tolerant Networks (DTN) focuses on providing connectivity between locations separated by networks with disruptions and delays. The Hikari system does not use node identifiers for message forwarding, thus eliminating the routing complexity associated with many DTN forwarding schemes. Hikari uses information about nodes' paths, advertised by special nodes in the system or predicted by the system itself, to optimize the message dissemination process. We used the Paris subway system, because of its complexity, to validate Hikari and to analyze its performance. We have shown that Hikari achieves a superior delivery rate while keeping the number of redundant messages in the system low, which is ideal when devices with limited resources are used for message dissemination.

  • A Decentralized VPN Service over Generalized Mobile Ad-Hoc Networks

    Sho FUJITA  Keiichi SHIMA  Yojiro UO  Hiroshi ESAKI  

     
    PAPER-Network Architecture and Testbed

      Page(s):
    1897-1904

    We present a decentralized VPN service that can be built over generalized mobile ad-hoc networks (Generalized MANETs), whose topologies can be represented as a time-varying directed multigraph. We address wireless ad-hoc networks and overlay ad-hoc networks as instances of Generalized MANETs. We first propose an architecture to operate on various kinds of networks through a single set of operations. We then design and implement a decentralized VPN service on the proposed architecture. Through the development and operation of a prototype system, we found that the proposed architecture makes the VPN service applicable to each instance of Generalized MANETs, and that the VPN service enables unmodified applications to operate on these networks.

  • Cross-Layer Protocol Combining Tree Routing and TDMA Slotting in Wireless Sensor Networks

    Ronggang BAI  Yusheng JI  Zhiting LIN  Qinghua WANG  Xiaofang ZHOU  Yugui QU  Baohua ZHAO  

     
    PAPER-Network Architecture and Testbed

      Page(s):
    1905-1914

    Unlike other networks, wireless sensor networks have rather predictable traffic loads and directions, and the relationships between nodes are cooperative rather than competitive. These features allow the protocol stack to be designed in a cross-layer interactive way instead of as a hierarchical structure. The proposed cross-layer protocol, CLWSN, optimizes channel allocation in the MAC layer using information from the routing tables, reducing the conflicting set and improving the throughput. Simulations revealed that it outperforms SMAC and MINA in terms of delay and energy consumption.

  • Efficient Packet Classification with a Hybrid Algorithm

    Pi-Chung WANG  

     
    PAPER-QoS and Quality Management

      Page(s):
    1915-1922

    Packet classification categorizes incoming packets into multiple forwarding classes based on pre-defined filters. This categorization makes information accessible for quality of service or security handling in the network. In this paper, we propose a scheme which combines the Aggregate Bit Vector algorithm and the Pruned Tuple Space Search algorithm to improve the performance of packet classification in terms of speed and storage. We also present the procedures of incremental update. Our scheme is evaluated with filter databases of varying sizes and characteristics. The experimental results demonstrate that our scheme is feasible and scalable.

  • FreeNA: A Multi-Platform Framework for Inserting Upper-Layer Network Services

    Ryota KAWASHIMA  Yusheng JI  Katsumi MARUYAMA  

     
    PAPER-QoS and Quality Management

      Page(s):
    1923-1933

    Networking technologies have been evolving rapidly, and network applications are now expected to support flexible composition of upper-layer network services, such as security, QoS, or personal firewalls. We propose a multi-platform framework called FreeNA* that extends existing applications by incorporating such services based on user definitions. This extension does not require users to modify their systems at all; therefore, FreeNA is valuable for experimental system usage. We implemented FreeNA on both the Linux and Microsoft Windows operating systems, and evaluated its functionality and performance. In this paper, we describe the design and implementation of FreeNA, including details of how network services are inserted into existing applications and how services are created in a multi-platform environment. We also give an example implementation of a service with SSL, a functionality comparison with relevant systems, and our performance evaluation results. The results show that FreeNA offers finer configurability, composability, and usability than other similar systems. We also show that the throughput degradation due to transparent service insertion is at most 2% compared with inserting such services directly into applications.

  • Routing Scheme for Bandwidth Guaranteed Traffic in AMC-Enabled Wireless Mesh Networks

    Jun NISHIOKA  Satoru YAMANO  

     
    PAPER-QoS and Quality Management

      Page(s):
    1934-1944

    The backbone of a mobile network, i.e., the mobile backhaul network, is an important part of the mobile network system. With the decreasing cell size of mobile network systems, next-generation mobile backhaul networks are expected to form a mesh topology. Most mobile backhaul networks are built with microwave radios, and Adaptive Modulation and Coding (AMC) is used on the wireless links to increase the data rate. However, the data rate of each wireless link changes over time, which leads to unexpected packet loss or traffic degradation. This paper proposes a routing scheme, together with methods for estimating the transmission parameters (modes) of wireless links, to route bandwidth-guaranteed flows over mobile backhaul networks. The proposed routing scheme can reduce the degradation of flows caused by unexpected changes in the data rate of wireless links. We evaluate our routing scheme when the mode distribution of links follows normal, uniform, and Poisson distributions. This paper shows that mode estimation using the mode history of a link to estimate link quality can route bandwidth-guaranteed flows efficiently by choosing more stable links for the path.

  • Traffic Adaptive Contention Differentiation Scheme for LR-WPANs

    Wook KIM  Heungwoo NAM  Sunshin AN  

     
    LETTER-QoS and Quality Management

      Page(s):
    1945-1948

    IEEE 802.15.4 is a new standard uniquely designed for low-rate wireless personal area networks (LR-WPANs). It targets ultra-low complexity, cost, and power for low-data-rate wireless connectivity. However, one of the main problems of this new standard is its insufficient and inefficient media access control (MAC) for priority data. This paper introduces an extended contention access period (XCAP) concept for priority packets, as well as a traffic-adaptive contention differentiation scheme utilizing the XCAP (TACDX). TACDX selects the appropriate transmission policy according to the traffic conditions and the type of packet. TACDX not only achieves enhanced transmission of priority packets but also provides high energy efficiency for the overall network. The proposed TACDX is verified with simulations that measure its performance.

  • Impact of Censoring on Estimation of Flow Duration Distribution and Its Mitigation Using Kaplan-Meier-Based Method

    Yuki SAKAI  Masato UCHIDA  Masato TSURU  Yuji OIE  

     
    LETTER-QoS and Quality Management

      Page(s):
    1949-1952

    A basic and inevitable problem in estimating flow duration distributions arises from "censoring" (i.e., cutting off) the observed flow durations because of a finite measurement period. We extended the Kaplan-Meier method, which is used in the field of survival analysis, and applied it to recover the information on the flow duration distribution that is lost due to censoring. We show that the flow duration distribution estimated by the Kaplan-Meier-based method from a short period of actual traffic data with censoring approximates well the flow duration distribution calculated from a sufficiently long period of actual traffic data.
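
    The Kaplan-Meier idea the abstract describes can be sketched as follows. This is an illustrative product-limit estimator, not the authors' implementation: flows still alive when the measurement period ends are treated as censored observations, and the survival function of flow duration is estimated from the observed (duration, censored) pairs.

```python
# Illustrative Kaplan-Meier product-limit estimator for flow durations.
# A censored sample is a flow that outlived the measurement period, so its
# true duration is only known to exceed the observed value.
def kaplan_meier(samples):
    """samples: list of (duration, censored) pairs.
    Returns [(t, S(t))]: the estimated survival probability after each
    duration t at which at least one flow actually ended."""
    samples = sorted(samples)
    at_risk = len(samples)          # flows still under observation
    surv, s = [], 1.0
    i = 0
    while i < len(samples):
        t = samples[i][0]
        deaths = censored = 0
        # gather all observations with the same duration t
        while i < len(samples) and samples[i][0] == t:
            if samples[i][1]:
                censored += 1
            else:
                deaths += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / at_risk   # product-limit update
            surv.append((t, s))
        at_risk -= deaths + censored      # censored flows leave the risk set
    return surv
```

    Censored samples shrink the risk set without forcing the survival curve down, which is exactly how the lost tail information is kept from biasing the estimate.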

  • Fast and Memory-Efficient Regular Expression Matching Using Transition Sharing

    Shuzhuang ZHANG  Hao LUO  Binxing FANG  Xiaochun YUN  

     
    PAPER-DRM and Security

      Page(s):
    1953-1960

    Scanning packet payloads at high speed has become a crucial task in modern network management due to its wide variety of applications in network security and application-specific services. Traditionally, deterministic finite automata (DFAs) are used to perform this operation in linear time. However, the memory requirements of DFAs are prohibitively high for the patterns used in practical packet scanning, especially when many patterns are compiled into a single DFA. Existing solutions to this memory blow-up trade memory requirements against the number of memory accesses per input character. In this paper, we propose a novel method that drastically reduces the memory requirements of DFAs while maintaining a high matching speed and providing worst-case guarantees. We remove duplicate transitions between states by dividing the DFA states into a number of groups and letting each group share a merged transition table. We also propose an efficient algorithm for transition sharing between states. Its high efficiency in time and space makes our approach suitable for frequently updated DFAs. We performed several experiments on real-world rule sets. Overall, for all rule sets and approaches evaluated, our approach offers the best memory versus run-time trade-off.
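
    The transition-sharing idea can be illustrated with a minimal sketch. This is not the paper's algorithm (in particular, the state grouping is assumed to be given): each group of DFA states stores one merged row, chosen here by per-symbol majority vote, plus per-state exception entries, and a lookup consults the exceptions before falling back to the shared row.

```python
from collections import Counter

def share_transitions(table, groups):
    """table: {state: tuple of next-states indexed by input symbol}.
    groups: a partition of the states (assumed given).
    Returns (shared_rows, exceptions) such that lookup() below reproduces
    the original transition table while storing duplicates only once."""
    shared_rows, exceptions = {}, {}
    for gid, states in enumerate(groups):
        nsym = len(table[states[0]])
        # per-symbol majority vote gives the merged (shared) row
        row = tuple(Counter(table[s][c] for s in states).most_common(1)[0][0]
                    for c in range(nsym))
        shared_rows[gid] = row
        for s in states:
            # only the entries that differ from the shared row are kept
            exc = {c: table[s][c] for c in range(nsym) if table[s][c] != row[c]}
            exceptions[s] = (gid, exc)
    return shared_rows, exceptions

def lookup(shared_rows, exceptions, state, symbol):
    gid, exc = exceptions[state]
    return exc.get(symbol, shared_rows[gid][symbol])
```

    With well-chosen groups the exception lists stay short, so the memory drops toward one row per group while each lookup still costs a bounded number of accesses.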

  • Service Independent Access Control Architecture for User Generated Content (UGC) and Its Implementation

    Akira YAMADA  Ayumu KUBOTA  Yutaka MIYAKE  Kazuo HASHIMOTO  

     
    PAPER-DRM and Security

      Page(s):
    1961-1970

    Using Web-based content management systems such as blogs, an end user can easily publish User Generated Content (UGC). Although publishing UGC is easy, controlling access to it is a difficult problem for end users. Currently, most blog sites offer no access control mechanism, and even where one is available, it cannot control users who do not have an account at the site, not to mention accesses to content hosted by other UGC sites. In this paper, we propose a new access control architecture for UGC, in which third-party entities can offer an access control mechanism to users independently of the UGC hosting sites. With this architecture, a user can control accesses to his content even when it is spread over many different UGC sites, regardless of whether those sites have an access control mechanism. The key idea for separating the access control mechanism from UGC sites is to apply cryptographic access control, and we implemented the idea in such a way that it requires no modification to UGC sites or Web browsers. Our prototype implementation shows that the proposed access control architecture can be easily deployed in the current Web-based communication environment and that it works well with popular blog sites.

  • Reducing Payload Inspection Cost Using Rule Classification for Fast Attack Signature Matching

    Sunghyun KIM  Heejo LEE  

     
    PAPER-DRM and Security

      Page(s):
    1971-1978

    Network intrusion detection systems rely on a signature-based detection engine. When under attack or during heavy traffic, the detection engine needs to decide quickly whether a packet or a sequence of packets is normal or malicious. However, if packets carry a heavy payload or the system holds a great many attack patterns, the high cost of payload inspection severely diminishes detection performance. It is therefore better to avoid unnecessary payload scans by checking the protocol fields in the packet header before executing the heavy operations of payload inspection, and, when payload inspection is necessary, to compare a minimum number of attack patterns. In this paper, we propose new methods to classify attack signatures and build pre-computed multi-pattern groups. Based on IDS rule analysis, we group the signatures of attack rules by a multi-dimensional classification method adapted to a simplified address flow. The proposed methods reduce unnecessary payload scans and produce light pattern groups to be checked. While the performance improvement depends on the networking environment, experimental results with the DARPA data set and university traffic show that the proposed methods outperform the most recent Snort by up to 33%.
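
    The header-first filtering principle can be sketched in a few lines. The rule format and field names here are hypothetical (not Snort's actual data structures): signatures are pre-grouped by header fields so that the expensive payload scan runs only against the small group matching the packet's header.

```python
def build_index(rules):
    """Group signature patterns by header fields (hypothetical rule format:
    dicts with 'proto', 'dport', 'pattern'). Done once, ahead of time."""
    index = {}
    for r in rules:
        index.setdefault((r['proto'], r['dport']), []).append(r['pattern'])
    return index

def inspect(index, pkt):
    """Cheap header lookup first; the costly payload scan only touches the
    patterns whose rules could apply to this packet."""
    patterns = index.get((pkt['proto'], pkt['dport']), [])
    return [p for p in patterns if p in pkt['payload']]
```

    A packet whose header matches no group skips payload inspection entirely, which is where the bulk of the savings comes from under heavy traffic.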

  • An Enhanced Security Protocol for Fast Mobile IPv6

    Ilsun YOU  Kouichi SAKURAI  Yoshiaki HORI  

     
    LETTER-DRM and Security

      Page(s):
    1979-1982

    Recently, Kempf and Koodli proposed a security protocol for Fast Mobile IPv6 (FMIPv6). By means of the SEcure Neighbor Discovery (SEND) protocol, it achieves secure distribution of a handover key, and it has consequently become the security standard for FMIPv6. However, it is still vulnerable to redirection attacks. In addition, due to the SEND protocol, it suffers from denial-of-service attacks and high computational cost. In this paper, we present a security protocol that enhances Kempf and Koodli's with the help of the AAA infrastructure.

  • Content Sharing in User Binding DRM

    Byung-Rae LEE  

     
    LETTER-DRM and Security

      Page(s):
    1983-1985

    Content sharing mechanisms in current DRM systems are based on a domain in which multiple devices hold the same domain key. This can lead to a security weakness, as the failure of one device means revocation of the domain itself. Furthermore, if a device leaves the domain, all the other devices must update their domain key, which also causes an efficiency problem. This paper proposes a new content sharing scheme based on user-binding DRM that does not use a domain key. The proposed scheme improves on previous domain technology in terms of security and efficiency, as it removes the domain key and allows content sharing only among multiple devices owned by the same user.

  • Partially Eager Update Propagation and Freshness-Based Read Relaxation for Replicated Internet Services

    Ho-Joong KIM  Seungryoul MAENG  

     
    PAPER-Parallel and Distributed Architecture

      Page(s):
    1986-1998

    We propose an Edge-write architecture that performs eager update propagation for update requests at the corresponding secondary server, while lazily propagating updates from the other secondary servers. Our architecture resolves the consistency problems caused by read/update decoupling in conventional lazy-update-propagation systems. It also improves overall scalability by alleviating the performance bottleneck at the primary server, at the cost of an increased but bounded response time. Such relaxed consistency management enables a read request to choose whether to read the replicated data immediately or to refresh it first. We use the age of a local data copy as the freshness factor, so that a secondary server can make freshness-control decisions independently. As a result, our freshness-controlled Edge-write architecture benefits from adjusting the tradeoff between response time and the correctness of data.

  • Data Recovery of Distributed Hash Table with Distributed-to-Distributed Data Copy

    Yusuke DOI  Shirou WAKAYAMA  Satoshi OZAKI  

     
    PAPER-Parallel and Distributed Architecture

      Page(s):
    1999-2006

    To realize huge-scale information services, many Distributed Hash Table (DHT) based systems have been proposed. For example, there are proposals to manage item-level product traceability information with DHTs. In such an application, each of a huge number of item-level IDs needs to be available on the DHT. To ensure data availability, the soft-state approach has been employed in previous works. However, this does not scale well with the number of entries on a DHT. As we expect 10^10 products in the traceability case, the soft-state approach is unacceptable. In this paper, we propose Distributed-to-Distributed Data Copy (D3C). With D3C, users can reconstruct the data when they detect data loss, or even migrate to another DHT system. We show why it scales well with the number of entries on a DHT. We have confirmed our approach with a prototype. The evaluation shows that our approach fits well on a DHT with a low failure rate and a huge number of data entries.

  • Regular Section
  • Static Dependency Pair Method Based on Strong Computability for Higher-Order Rewrite Systems

    Keiichirou KUSAKARI  Yasuo ISOGAI  Masahiko SAKAI  Frederic BLANQUI  

     
    PAPER-Computation and Computational Models

      Page(s):
    2007-2015

    Higher-order rewrite systems (HRSs) and simply-typed term rewriting systems (STRSs) are computational models of functional programs. We recently proposed an extremely powerful method, the static dependency pair method, which is based on the notion of strong computability, in order to prove termination in STRSs. In this paper, we extend the method to HRSs. Since HRSs include λ-abstraction but STRSs do not, we restructure the static dependency pair method to allow λ-abstraction, and show that the static dependency pair method also works well on HRSs without new restrictions.

  • A Self-Adaptive Routing Protocol in Wireless LANs Based on Attractor Selection

    Gen NISHIKAWA  Tomoko IZUMI  Fukuhito OOSHITA  Hirotsugu KAKUGAWA  Toshimitsu MASUZAWA  

     
    PAPER-Computation and Computational Models

      Page(s):
    2016-2024

    Wireless LANs, which consist of access points and wireless stations, have spread widely in recent years. Routing in wireless LANs faces the problem that each wireless station must select an access point and a wired path to its destination station. An adaptive routing protocol is desirable for wireless LANs, since communication throughput is dynamically affected by the selections of other wireless stations and by external environmental changes. In this paper, we propose a routing protocol for wireless LANs based on attractor selection. Attractor selection is a biologically inspired approach with high adaptability to dynamic environmental changes. By applying attractor selection, each wireless station can adaptively select an access point and a wired path with high throughput in the face of environmental changes. In addition, we design the protocol with a new technique: the combination of multiple attractor selections. The technique is useful because it enables us to divide a problem into several simpler ones. To the best of our knowledge, our protocol is the first one designed around a combination of multiple attractor selections. We show the effectiveness and adaptability of our protocol by simulations.

  • Fast Computation of Rank and Select Functions for Succinct Representation

    Joong Chae NA  Ji Eun KIM  Kunsoo PARK  Dong Kyue KIM  

     
    PAPER-Algorithm Theory

      Page(s):
    2025-2033

    Succinct representation is a space-efficient method to represent n discrete objects using space close to the information-theoretic lower bound. In order to directly access the i-th object of succinctly represented data structures in constant time, two fundamental functions, rank and select, are commonly used. In this paper we propose two implementations supporting rank and select in constant time for non-compressed bit strings. One uses O(n log log n / log n) bits of extra space and the other uses n + O(n log log n / log n) bits of extra space in the worst case. The former is rather a theoretical algorithm, while the latter is a practical algorithm that works faster and uses less space in practice.
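
    The rank/select interface the abstract refers to can be sketched with a simple block-precomputation scheme. This toy version (prefix counts per block, binary-search select) is not the paper's constant-time structure, but it shows how precomputed counts answer rank queries with only a small local scan, and how select reduces to a search over rank.

```python
def build_rank(bits, block=8):
    """prefix[i] = number of 1s in bits[:i*block], precomputed once."""
    prefix = [0]
    for i in range(0, len(bits), block):
        prefix.append(prefix[-1] + sum(bits[i:i + block]))
    return prefix

def rank1(bits, prefix, i, block=8):
    """Number of 1s in bits[:i]: one table lookup plus a scan of < block bits."""
    b = i // block
    return prefix[b] + sum(bits[b * block:i])

def select1(bits, prefix, k, block=8):
    """Position of the k-th (1-indexed) 1 bit, via binary search on rank1."""
    lo, hi = 0, len(bits)
    while lo < hi:
        mid = (lo + hi) // 2
        if rank1(bits, prefix, mid + 1, block) >= k:
            hi = mid
        else:
            lo = mid + 1
    return lo
```

    Shrinking the in-block scan (e.g. with a second level of sub-block counts and popcount tables) is what pushes such schemes toward the constant-time, o(n)-extra-bits bounds discussed above.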

  • Pipelining a Multi-Mode SHA-384/512 Core with High Area Performance Rate

    Anh-Tuan HOANG  Katsuhiro YAMAZAKI  Shigeru OYANAGI  

     
    PAPER-VLSI Systems

      Page(s):
    2034-2042

    The Secure Hash Algorithm 512 (SHA-512), which is used to verify the integrity of a message, involves computational iterations on data. The huge computation delay generated in these iterations limits the overall throughput of the system and makes it difficult to pipeline the computation. We describe a way to pipeline the computation using fine-grained pipelining with balanced critical paths. In this method, one critical path is broken into two stages by using data forwarding, and the other critical path is broken into three stages by using computation postponement. The resulting critical paths all have two adder layers with some data movement, and thus are balanced. In addition, the method allows register reduction. The similarity between SHA-384 and SHA-512 is also exploited in a multi-mode design, which can generate a message digest for either version with the same throughput at only a small increase in hardware size. Experimental results show that our implementation achieved not only the best area-performance rate (throughput divided by area) but also a higher throughput than almost all related work.

  • A Technique for Defining Metamodel Translations

    Iván GARCÍA-MAGARIÑO  Rubén FUENTES-FERNÁNDEZ  

     
    PAPER-Fundamentals of Software and Theory of Programs

      Page(s):
    2043-2052

    Model-Driven Engineering and Domain-Specific Modeling Languages are encouraging an increased use of metamodels for the definition of languages and tools. Although the Meta Object Facility language is the standard for metamodeling, there are alternative metamodeling languages aimed at satisfying specific requirements. In this context, sharing information across different domains and tools requires the ability to translate models not only between modeling languages defined with the same metamodeling language, but also between different metamodeling languages. This paper addresses the latter need by describing a general technique to define transformations that perform this translation. Two case studies illustrate the application of the technique.

  • A Phase-Adaptive Garbage Collector Using Dynamic Heap Partitioning and Opportunistic Collection

    Yangwoo ROH  Jaesub KIM  Kyu Ho PARK  

     
    PAPER-Fundamentals of Software and Theory of Programs

      Page(s):
    2053-2063

    Applications usually have their own phases in heap memory usage. A traditional garbage collector fails to match these phases because it applies the same heuristic about object behavior throughout the entire execution. This paper introduces a phase-adaptive garbage collector that reorganizes the heap layout and adjusts the invocation time of garbage collection according to the phases. The proposed collector identifies phases by detecting the application methods strongly related to phase boundaries. The experimental results show that the proposed phase-adaptive collector successfully recognizes application phases and improves garbage collection time by as much as 41%.

  • Load Balancing Scheme on the Basis of Huffman Coding for P2P Information Retrieval

    Hisashi KURASAWA  Atsuhiro TAKASU  Jun ADACHI  

     
    PAPER-Contents Technology and Web Information Systems

      Page(s):
    2064-2072

    Although a distributed index on a distributed hash table (DHT) enables efficient document query processing in peer-to-peer information retrieval (P2P IR), the index is costly to construct and tends to be managed unfairly because of the skewed term frequency distribution. We devised a new distributed index for P2P IR, named Huffman-DHT, which uses an algorithm similar to Huffman coding and modifies the DHT structure based on the term distribution. In a Huffman-DHT, a frequent term is assigned a short ID and allocated a large portion of the node ID space in the DHT. Through this ID management, the Huffman-DHT balances index registration accesses among peers and reduces load concentration. Huffman-DHT is the first approach to apply concepts from coding theory and the term frequency distribution to load balancing. We evaluated this approach in experiments using a document collection and assessed its load balancing capabilities in P2P IR. The experimental results indicate that it is most effective when the P2P system consists of about 30,000 nodes and contains many documents. Moreover, we show that a Huffman-DHT can be constructed easily by estimating the probability distribution of term occurrence from a small number of sample documents.
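    The core idea (frequent term → short ID → large share of the ID space) can be sketched with standard Huffman coding; the DHT integration itself is omitted and the term frequencies below are illustrative, not from the paper:

```python
import heapq

def huffman_codes(freqs):
    """Huffman codes over terms; a code of length L claims 2**-L of the ID space."""
    heap = [(f, i, [t]) for i, (t, f) in enumerate(sorted(freqs.items()))]
    heapq.heapify(heap)
    codes = {t: "" for t in freqs}
    tiebreak = len(heap)
    while len(heap) > 1:
        f1, _, ts1 = heapq.heappop(heap)   # two least-frequent subtrees
        f2, _, ts2 = heapq.heappop(heap)
        for t in ts1:
            codes[t] = "0" + codes[t]      # extend codes downward
        for t in ts2:
            codes[t] = "1" + codes[t]
        heapq.heappush(heap, (f1 + f2, tiebreak, ts1 + ts2))
        tiebreak += 1
    return codes

# illustrative Zipf-like term frequencies
codes = huffman_codes({"the": 500, "network": 120, "huffman": 30, "zipf": 10})
shares = {t: 2.0 ** -len(c) for t, c in codes.items()}
```

    The most frequent term receives the shortest code and hence the largest fraction of the ID space, which is what balances registration accesses across peers.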

  • On the Design of the Peer-Assisted UGC VoD System

    Yi WAN  Takuya ASAKA  Tatsuro TAKAHASHI  

     
    PAPER-Networks

      Page(s):
    2073-2081

    User Generated Content (UGC) VoD services such as YouTube are becoming more and more popular, and their maintenance costs are growing as well. Many P2P solutions have been proposed to reduce server load in such systems, but almost all of them focus on the single-video approach, which only has limited effect on the systems serving short videos such as UGC. The purpose of this paper is to investigate the potential of an alternative approach, the multi-video approach, and we use a very simple method called collaborative caching to show that methods using the multi-video approach are generally more suitable for current UGC VoD systems. We also study the influence of the major design factors through simulations and provide guidelines for efficiently building systems with this method.

  • End-to-End Loss Differentiation Algorithm Based on Estimation of Queue Usage in Multi-Hop Wireless Networks

    Mi-Young PARK  Sang-Hwa CHUNG  Prasanthi SREEKUMARI  

     
    PAPER-Networks

      Page(s):
    2082-2093

    When TCP operates in multi-hop wireless networks, it suffers severe performance degradation because it reacts to wireless packet losses by unnecessarily decreasing its sending rate. Although previous loss differentiation algorithms (LDAs) can identify some packet losses caused by wireless transmission errors as wireless losses, their accuracy is not as high as expected, and they sacrifice the accuracy of congestion loss discrimination by misclassifying congestion losses as wireless losses. In this paper, we propose a new end-to-end loss differentiation scheme with high accuracy in both wireless and congestion loss discrimination. Our scheme estimates the queue usage rate using information available to TCP. If the estimated queue usage is above 50% when a packet is lost, our scheme diagnoses the loss as a congestion loss; otherwise, it diagnoses the loss as a wireless loss. Because the estimated queue usage is highly correlated with congestion, our scheme can more accurately distinguish packet losses related to congestion from those unrelated to it. Through extensive simulations, we compare and evaluate our scheme against previous LDAs in terms of correlation, accuracy, and stability. The results show that our scheme achieves the highest accuracy and that its accuracy is more stable than that of the other LDAs.
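    The decision rule itself is simple. A sketch, assuming queue usage is estimated from end-to-end RTT samples (the paper's actual estimator uses TCP-internal information and will differ):

```python
def estimate_queue_usage(rtt, rtt_min, rtt_max):
    """Rough queue-usage estimate in [0, 1] from RTT extremes."""
    if rtt_max <= rtt_min:
        return 0.0
    return min(max((rtt - rtt_min) / (rtt_max - rtt_min), 0.0), 1.0)

def classify_loss(rtt, rtt_min, rtt_max):
    """Diagnose a loss as congestion when estimated queue usage exceeds 50%."""
    usage = estimate_queue_usage(rtt, rtt_min, rtt_max)
    return "congestion" if usage > 0.5 else "wireless"
```

    A loss observed while the RTT is near its maximum (queue nearly full) is treated as congestion; a loss at low RTT is attributed to the wireless channel.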

  • Acceleration of Genetic Programming by Hierarchical Structure Learning: A Case Study on Image Recognition Program Synthesis

    Ukrit WATCHAREERUETAI  Tetsuya MATSUMOTO  Noboru OHNISHI  Hiroaki KUDO  Yoshinori TAKEUCHI  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Page(s):
    2094-2102

    We propose a learning strategy to accelerate the learning speed of genetic programming (GP), named hierarchical structure GP (HSGP). HSGP exploits multiple learning nodes (LNs) connected in a hierarchical structure, e.g., a binary tree. Each LN runs a conventional evolutionary process to evolve its own population and sends the evolved population to the connected higher-level LN. A lower-level LN evolves its population with a smaller subset of the training data. The higher-level LN then integrates the evolved populations from the connected lower-level LNs and evolves the integrated population further using a larger subset of the training data. In HSGP, the evolutionary processes are executed sequentially from the bottom-level LNs to the top-level LN, which evolves with the entire training data. In the experiments, we apply conventional GPs and HSGPs to evolving image recognition programs for given training images. The results show that hierarchical structure learning can significantly improve the learning speed of GP: to achieve the same performance, the HSGPs need only 30-40% of the computation cost required by conventional GPs.
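    A toy sketch of the bottom-up population flow: two low-level learning nodes each evolve on half of the training data, and the top-level node merges and refines their populations on the full data. A simple elitist hill-climb on real numbers stands in for GP tree evolution here; the fitness function, data split, and operators are all illustrative:

```python
import random

def fitness(x, data):
    # toy objective: closeness of candidate x to the subset mean
    return -abs(x - sum(data) / len(data))

def evolve(pop, data, rng, generations=60):
    """Evolve a population on a data subset: keep elites, add mutated copies."""
    for _ in range(generations):
        elites = sorted(pop, key=lambda x: fitness(x, data), reverse=True)[: len(pop) // 2]
        pop = elites + [x + rng.gauss(0.0, 0.3) for x in elites]
    return pop

rng = random.Random(0)
data = [rng.gauss(3.0, 1.0) for _ in range(200)]

# bottom-level LNs: evolve on disjoint halves of the training data
children = [evolve([rng.uniform(-10.0, 10.0) for _ in range(10)], subset, rng)
            for subset in (data[:100], data[100:])]

# top-level LN: integrate both populations, evolve on the full training data
merged = children[0] + children[1]
top = evolve(merged, data, rng)
best = max(top, key=lambda x: fitness(x, data))
```

    The point of the hierarchy is that cheap evaluations on small subsets pre-filter candidates before the expensive full-data evaluations at the top level.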

  • Optimizing Region of Support for Boundary-Based Corner Detection: A Statistic Approach

    Wen-Bing HORNG  Chun-Wen CHEN  

     
    PAPER-Pattern Recognition

      Page(s):
    2103-2111

    Boundary-based corner detection has been widely applied in spline curve fitting, automated optical inspection, image segmentation, object recognition, etc. To obtain good results, users usually need to adjust the length of the region of support to resist the zigzags caused by quantization and random noise on digital boundaries. To determine the length of the region of support automatically, Teh-Chin and Guru-Dinesh presented adaptive approaches based on local properties of boundary points; however, these local-property-based approaches are sensitive to noise. In this paper, we propose a new approach that finds the optimal length of the region of support for corner detection based on a statistical discriminant criterion. Because our approach is based on a global view of all boundary points, rather than the local properties of a few points, the experiments show that the determined length of the region of support increases as the noise intensity increases. In addition, the corners detected with the optimal length of the region of support are consistent with human experts' judgment, even for noisy boundaries.

  • Multiple Object Category Detection and Localization Using Generative and Discriminative Models

    Dipankar DAS  Yoshinori KOBAYASHI  Yoshinori KUNO  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    2112-2121

    This paper proposes an integrated approach to the simultaneous detection and localization of multiple object categories using both generative and discriminative models. Our approach first generates a set of hypotheses for each object category using a generative model (pLSA) with a bag of visual words representing each object. Based on the variation of objects within a category, the pLSA model automatically fits an optimal number of topics. A discriminative stage then verifies each hypothesis using a multi-class SVM classifier with merged features combining the spatial shape and appearance of an object. In the post-processing stage, environmental context information, together with the probabilistic output of the SVM classifier, is used to improve the overall performance of the system. Our integrated approach with merged features and context information allows reliable detection and localization of various object categories in the same image. The performance of the proposed framework is evaluated on several standard datasets (MIT-CSAIL, UIUC, TUD, etc.) and on the authors' own dataset. In experiments we achieved results superior to several state-of-the-art methods on a number of standard datasets. An extensive experimental evaluation on up to ten diverse object categories over thousands of images demonstrates that our system detects and localizes multiple objects within an image in the presence of cluttered backgrounds, substantial occlusion, and significant scale changes.

  • Dependency Parsing with Lattice Structures for Resource-Poor Languages

    Sutee SUDPRASERT  Asanee KAWTRAKUL  Christian BOITET  Vincent BERMENT  

     
    PAPER-Natural Language Processing

      Page(s):
    2122-2136

    In this paper, we present a new dependency parsing method for languages that have only a very small annotated corpus and for which segmentation and morphological analysis methods producing a unique (automatically disambiguated) result are very unreliable. Our method works on a morphosyntactic lattice factorizing all possible segmentation and part-of-speech tagging results. The quality of the input to syntactic analysis is hence much better than that of an unreliable unique sequence of lemmatized and tagged words. We propose an adaptation of Eisner's algorithm for finding the k-best dependency trees in a morphosyntactic lattice structure encoding multiple results of morphosyntactic analysis. Moreover, we show how to use a Dependency Insertion Grammar to adjust the scores and filter out invalid trees, how to use a language model to rescore the parse trees, and how to extend our parsing model to k-best output. The highest parsing accuracy reported in this paper is 74.32%, a 6.31% improvement over the model taking its input from the unreliable morphosyntactic analysis tools.

  • A Hybrid Segmentation Framework for Computer-Assisted Dental Procedures

    Mohammad HOSNTALAB  Reza AGHAEIZADEH ZOROOFI  Ali ABBASPOUR TEHRANI-FARD  Gholamreza SHIRANI  Mohammad REZA ASHARIF  

     
    PAPER-Biological Engineering

      Page(s):
    2137-2151

    Teeth segmentation in computed tomography (CT) images is a major and challenging task for various computer-assisted procedures. In this paper, we introduce a hybrid method for quantifying teeth in CT volumetric datasets, inspired by our previous experience and anatomical knowledge of teeth and jaws. We propose a novel segmentation technique using adaptive thresholding, morphological operations, panoramic re-sampling, and a variational level set algorithm. The proposed method consists of several steps. First, we determine the operation region in the CT slices. Second, bony tissue is separated from other tissues with an adaptive thresholding technique based on 3D pulse-coupled neural networks (PCNN). Third, tooth tissue is separated from other bony tissue by employing panorex lines and anatomical knowledge of teeth in the jaws; in this case, the panorex lines are estimated using Otsu thresholding and mathematical morphology operators. The method then computes the orthogonal lines corresponding to the panorex lines and panoramically re-samples the dataset. The upper and lower jaws are separated, and the teeth are initially segmented, by employing the integral projections of the panoramic dataset. These procedures yield an initial mask for each tooth. Finally, we apply a variational level set to the initial masks to refine the initial tooth boundaries into final contours, and the marching cubes (MC) surface rendering algorithm is applied for volumetric visualization. The proposed algorithm was evaluated on 30 cases, and the segmented images were compared with manually outlined contours. In an ROC analysis comparing our method with thresholding, watershed, and our previous methods, the proposed method performed best. Our algorithm also has the advantage of high speed compared with our previous methods.
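    Otsu's method, used above to estimate the panorex lines, picks the gray level that maximizes the between-class variance of the histogram. A minimal sketch (the toy histogram is illustrative, not dental data):

```python
def otsu_threshold(histogram):
    """Return the gray level that maximizes between-class variance."""
    total = sum(histogram)
    total_sum = sum(g * n for g, n in enumerate(histogram))
    best_t, best_var = 0, -1.0
    w0 = 0      # pixel count of class 0 (levels <= t)
    sum0 = 0    # weighted gray-level sum of class 0
    for t, n in enumerate(histogram):
        w0 += n
        if w0 == 0 or w0 == total:
            continue
        sum0 += t * n
        mu0 = sum0 / w0
        mu1 = (total_sum - sum0) / (total - w0)
        var_between = w0 * (total - w0) * (mu0 - mu1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t

# bimodal toy histogram: dark peak at level 50, bright peak at level 200
hist = [0] * 256
hist[50], hist[200] = 100, 100
```

    On a bimodal histogram like this one, the threshold lands between the two peaks, separating (here) bone from soft tissue without a hand-tuned cutoff.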

  • Utilization Bound of Non-preemptive Fixed Priority Schedulers

    Moonju PARK  Jinseok CHAE  

     
    LETTER-Dependable Computing

      Page(s):
    2152-2155

    It is known that the schedulability of a non-preemptive task set with fixed priority can be determined in pseudo-polynomial time. However, since Rate Monotonic scheduling is not optimal for non-preemptive scheduling, the applicability of existing polynomial time tests that provide sufficient schedulability conditions, such as Liu and Layland's bound, is limited. This letter proposes a new sufficient condition for non-preemptive fixed priority scheduling that can be used for any fixed priority assignment scheme. It is also shown that the proposed schedulability test has a tighter utilization bound than existing test methods.
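    For context, the classic Liu-Layland polynomial-time test for preemptive Rate Monotonic scheduling, whose limited applicability to the non-preemptive case motivates the letter, can be sketched as follows (the letter's proposed bound for non-preemptive fixed priority differs and is not reproduced here):

```python
def liu_layland_schedulable(utilizations):
    """Sufficient (not necessary) RM test: U <= n * (2**(1/n) - 1)."""
    n = len(utilizations)
    return sum(utilizations) <= n * (2.0 ** (1.0 / n) - 1.0)
```

    For two tasks the bound is about 0.828, so a task set with total utilization 0.8 passes while one with 0.95 must fall back to an exact (pseudo-polynomial) analysis.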

  • Vibration Analysis of Human Middle Ear with Differential Floating Mass Transducer Using Electrical Model

    Ki-Woong SEONG  Eui-Sung JUNG  Hyung-Gyu LIM  Jang-Woo LEE  Min-Woo KIM  Sang-Hyo WOO  Jung-Hyun LEE  Il-Yong PARK  Jin-Ho CHO  

     
    LETTER-Rehabilitation Engineering and Assistive Technology

      Page(s):
    2156-2158

    In this paper, the vibration characteristics of the stapes driven by a differential floating mass transducer (DFMT) implanted in the human middle ear are analyzed using an electrical model. The electrical model is simulated in PSpice, and the simulated results are compared with experimental results obtained with a fabricated DFMT and human temporal bones.

  • Direct Importance Estimation with Gaussian Mixture Models

    Makoto YAMADA  Masashi SUGIYAMA  

     
    LETTER-Pattern Recognition

      Page(s):
    2159-2162

    The ratio of two probability densities is called the importance, and its estimation has recently attracted a great deal of attention since the importance can be used for various data processing purposes. In this paper, we propose a new importance estimation method using Gaussian mixture models (GMMs). Our method is an extension of the Kullback-Leibler importance estimation procedure (KLIEP), an importance estimation method using linear or kernel models. An advantage of GMMs is that the covariance matrices can also be learned through an expectation-maximization procedure, so the proposed method, which we call the Gaussian mixture KLIEP (GM-KLIEP), is expected to work well when the true importance function exhibits high correlation. Experiments show the validity of the proposed approach.
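    A 1-D sketch of the KLIEP idea that GM-KLIEP extends: model the importance w(x) as a non-negative combination of fixed Gaussian kernels and fit the weights so that the log-likelihood of the numerator samples is maximized while w averages to one over the denominator samples. The kernel centers, width, and update rule here are illustrative; GM-KLIEP additionally learns the covariances via EM, which is omitted:

```python
import math, random

def kliep_1d(x_nu, x_de, centers, sigma=1.0, lr=0.002, iters=300):
    """Fit kernel weights maximizing sum(log w(x_nu)) with E_de[w] = 1."""
    def kern(x, c):
        return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

    def w(x, a):
        return sum(ai * kern(x, c) for ai, c in zip(a, centers))

    alpha = [1.0] * len(centers)
    for _ in range(iters):
        wx = [w(x, alpha) for x in x_nu]
        # gradient ascent on the log-likelihood of the numerator samples
        grad = [sum(kern(x, c) / v for x, v in zip(x_nu, wx)) for c in centers]
        alpha = [max(a + lr * g, 0.0) for a, g in zip(alpha, grad)]
        # project back onto the constraint E_de[w] = 1
        z = sum(w(x, alpha) for x in x_de) / len(x_de)
        alpha = [a / z for a in alpha]
    return lambda x: w(x, alpha)

rng = random.Random(1)
x_nu = [rng.gauss(2.0, 1.0) for _ in range(100)]   # numerator samples
x_de = [rng.gauss(0.0, 1.0) for _ in range(100)]   # denominator samples
w_hat = kliep_1d(x_nu, x_de, centers=[-2.0, -1.0, 0.0, 1.0, 2.0, 3.0, 4.0])
```

    The fitted w_hat is large where the numerator density dominates and small where the denominator dominates, which is exactly the density ratio the abstract calls the importance.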

  • Video Frame Interpolation by Image Morphing Including Fully Automatic Correspondence Setting

    Miki HASEYAMA  Makoto TAKIZAWA  Takashi YAMAMOTO  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    2163-2166

    In this paper, a new video frame interpolation method based on image morphing is proposed for frame rate up-conversion. In this method, image features are extracted in each frame by the Scale-Invariant Feature Transform, and their correspondence across two contiguous frames is then computed separately in the foreground and background regions. Using these correspondences, the proposed method accurately generates interpolation frames and thus achieves frame rate up-conversion.

  • Plausibility-Based Approach to Image Thresholding

    Suk Tae SEO  Hye Cheun JEONG  In Keun LEE  Chang Sik SON  Soon Hak KWON  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    2167-2170

    An approach to image thresholding based on the plausibility of object and background regions, adopting a co-occurrence matrix and category utility, is presented. The effectiveness of the proposed method is shown through experiments on several images and comparisons with conventional methods.

  • Contourlet Based Adaptive Watermarking for Color Images

    Haohao SONG  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    2171-2174

    This paper proposes a contourlet-based adaptive watermarking scheme for color images (CAWCI). A color image in RGB space is first converted to its YCbCr equivalent; the luminance (Y) image and the two chrominance (Cb and Cr) images are then transformed into the contourlet domain; finally, the watermark is embedded into the contourlet coefficients of the largest detail subbands of the three images. On the one hand, the embedded watermark is imperceptible because a contrast sensitivity function and a watermark visual mask are adopted in CAWCI. On the other hand, the embedded watermark is very robust due to the spreading property of the Laplacian pyramid (LP) in the contourlet transform. A corresponding watermark detection algorithm is proposed that decides whether the watermark is present by exploiting the unique transform structure of the LP. Experimental results show the validity of CAWCI in terms of both watermark invisibility and watermark robustness.