Keyword Search Results

[Keyword] REM (1013 hits)

301-320 of 1013 hits

  • Incremental Single-Source Multi-Target A* Algorithm for LBS Based on Road Network Distance

    Htoo HTOO, Yutaka OHSAWA, Noboru SONEHARA, Masao SAKAUCHI

    PAPER-Spatial DB
    Vol: E96-D No:5  Page(s): 1043-1052

    Searching for the shortest paths from a query point to several target points on a road network is an essential operation for several types of queries in location-based services. This search can be performed using Dijkstra's algorithm. Although the A* algorithm is faster than Dijkstra's algorithm for finding the shortest path from a query point to a single target point, it is not as fast when paths to several target points must be found: the search areas on the road network overlap between searches, and the total number of operations at each node grows as the number of target points increases. In the present paper, we propose the single-source multi-target A* (SSMTA*) algorithm, a multi-target version of the A* algorithm. The SSMTA* algorithm guarantees at most one operation per road-network node, and its searched area on the road network is smaller than that of Dijkstra's algorithm. Deng et al. proposed the LBC approach with the same objective; however, their method manages the search area with several heaps whose contents must always be kept mutually consistent, which requires considerable processing time. Since the proposed method uses only one heap, no such content synchronization is necessary. The present paper demonstrates through empirical evaluations that the proposed method outperforms other similar methods.
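
    A minimal sketch (not the authors' implementation) of a single-heap, single-source multi-target search: one priority queue serves all targets, and each node is settled at most once. For clarity the A* heuristic is set to zero (Dijkstra-style expansion); the toy graph and node names are hypothetical.

```python
import heapq

def sssp_multi_target(graph, source, targets):
    """Single-heap search from `source` until all `targets` are settled.
    graph: {node: [(neighbor, edge_length), ...]}"""
    remaining = set(targets)
    dist = {source: 0.0}
    heap = [(0.0, source)]          # one priority queue for all targets
    while heap and remaining:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                # stale entry: each node settled once
        remaining.discard(u)
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return {t: dist.get(t, float("inf")) for t in targets}

# toy road network (hypothetical)
g = {"a": [("b", 2), ("c", 5)], "b": [("c", 1)], "c": []}
print(sssp_multi_target(g, "a", ["b", "c"]))  # {'b': 2.0, 'c': 3.0}
```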

  • AspectQuery: A Method for Identification of Crosscutting Concerns in the Requirement Phase

    Chengwan HE, Chengmao TU

    PAPER-Software Engineering
    Vol: E96-D No:4  Page(s): 897-905

    Identification of early aspects is a critical problem in aspect-oriented requirements engineering, but crosscutting concerns are represented in various ways, which makes them difficult to identify. To address this problem, this paper proposes the AspectQuery method, which is based on a goal model. We analyze four kinds of goal decomposition models, summarize the main factors in identifying crosscutting concerns, and derive identification rules based on a goal model. A goal is a crosscutting concern when it satisfies one of the following conditions: i) the goal contributes to realizing a soft-goal; ii) the goal's parent goal is a candidate crosscutting concern; iii) the goal has at least two parent goals. AspectQuery includes four steps: building the goal model, transforming the goal model, identifying crosscutting concerns by the identification rules, and composing the crosscutting concerns with the goals they affect. We illustrate the AspectQuery method through a case study (a ticket booking management system). The results show the effectiveness of AspectQuery in identifying crosscutting concerns in the requirements phase.
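
    A minimal sketch of the three identification rules as a check over a goal graph; the data structures and names here are illustrative assumptions, not the paper's notation.

```python
def is_crosscutting(goal, parents, contributes_to_softgoal, candidates):
    """Apply the three AspectQuery rules to one goal.
    parents: {goal: set of parent goals}
    contributes_to_softgoal: goals contributing to some soft-goal
    candidates: goals already marked candidate crosscutting concerns"""
    if goal in contributes_to_softgoal:          # rule i
        return True
    if parents.get(goal, set()) & candidates:    # rule ii
        return True
    if len(parents.get(goal, set())) >= 2:       # rule iii
        return True
    return False

# hypothetical goal-model fragment from a ticket booking system
parents = {"log access": {"book ticket", "cancel ticket"}}
print(is_crosscutting("log access", parents, set(), set()))  # True (rule iii)
```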

  • Ultimate Boundedness of Nonlinear Singularly Perturbed System with Measurement Noise

    Kyung-In KANG, Kyun-Sang PARK, Jong-Tae LIM

    LETTER-Systems and Control
    Vol: E96-A No:4  Page(s): 826-829

    In this letter, we consider the ultimate boundedness of a singularly perturbed system with measurement noise. The composite controller is commonly used to regulate singularly perturbed systems; in the presence of measurement noise, however, it does not guarantee ultimate boundedness. We therefore propose a modified composite controller and show the ultimate boundedness of the singularly perturbed system with measurement noise.
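
    For context, composite control is usually posed on the standard two-time-scale model below (a generic form, not necessarily the letter's exact system), with the control split into slow and fast components; measurement noise enters through the feedback terms, which is the failure mode the modified controller addresses.

```latex
\begin{aligned}
\dot{x} &= f(x, z, u), \\
\varepsilon\,\dot{z} &= g(x, z, u), \qquad 0 < \varepsilon \ll 1, \\
u &= u_s(x) + u_f(x, z) \quad \text{(composite: slow + fast control)}
\end{aligned}
```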

  • Energy-Aware MAC Protocol to Extend Network Lifetime in Asynchronous MAC-Based WSNs

    Min-Gon KIM, Hongkyu JEONG, Hong-Shik PARK

    PAPER-Network
    Vol: E96-B No:4  Page(s): 967-975

    In Wireless Sensor Networks (WSNs), sensor nodes consume their limited battery energy to send and receive data packets. If some sensor nodes transmit data packets more frequently because of imbalance in the network topology or traffic flows, they experience higher energy consumption; if they cannot be recharged, they will eventually be turned off for lack of battery energy, degrading network sustainability. To resolve this problem, this paper proposes an Energy-aware MAC Protocol (EMP), which adaptively sets the size of the channel polling cycle, consisting of a sleep state (no communication with the target node) and a listening state (awake to receive data packets), according to the network traffic condition. Moreover, in accordance with the remaining energy of the sensor node, the minimum size of the channel polling cycle is increased for better energy saving. For performance evaluation and comparison, we develop a Markov chain-based analytical model and an event-driven simulator. Simulation results show that a sensor node running EMP effectively reduces its energy consumption under imbalanced network conditions and traffic flows, while latency increases somewhat when the remaining energy is low. Consequently, network sustainability can be enhanced by taking into account both the network traffic condition and the remaining energy of sensor nodes.
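
    A minimal sketch of the adaptive idea as described in the abstract: the polling cycle shrinks when traffic is present and its floor is raised as battery energy drops. The update rule, constants, and names are hypothetical, not the paper's specification.

```python
def next_polling_cycle(cycle, arrivals, energy_ratio,
                       base_min=0.1, max_cycle=4.0):
    """Return the next channel-polling cycle length in seconds.
    arrivals: packets received during the last cycle
    energy_ratio: remaining battery energy in [0, 1]"""
    # raise the minimum cycle as energy depletes (more sleeping)
    min_cycle = base_min * (2.0 - energy_ratio)
    if arrivals > 0:
        cycle /= 2.0        # traffic present: poll more often
    else:
        cycle *= 1.5        # idle: back off to save energy
    return max(min_cycle, min(cycle, max_cycle))

cycle = 1.0
for arrivals, energy in [(3, 0.9), (0, 0.9), (0, 0.2)]:
    cycle = next_polling_cycle(cycle, arrivals, energy)
    print(round(cycle, 3))   # 0.5, 0.75, 1.125
```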

  • DiSCo: Distributed Scalable Compilation Tool for Heavy Compilation Workload

    Kyongjin JO, Seon Wook KIM, Jong-Kook KIM

    PAPER-Fundamentals of Information Systems
    Vol: E96-D No:3  Page(s): 589-600

    The size and complexity of software in computer systems and even in consumer electronics are increasing drastically and continuously, and compilation time is increasing with them; building some mobile-phone platform software, for example, takes several hours. To reduce compilation time, this paper proposes a Distributed Scalable Compilation Tool, called DiSCo, in which all compilation passes, including preprocessing, compilation, and even linking, are performed in parallel on remote machines. To the best of our knowledge, DiSCo is the first distributed compiler to support complete distributed processing in all compilation passes. We use an extensive dependency analysis when parsing compilation commands to exploit higher command-level parallelism, and we apply a file-caching method and a network-drive protocol to reduce remote-compilation overhead and simplify the implementation. Lastly, we minimize load imbalance and remote-machine management overhead with a heuristic static scheduling method that predicts compilation times and accounts for the overheads incurred by the compilation process. Our evaluation with four large mobile applications and eight GNU applications shows that DiSCo scales well and performs close to profile-based scheduling.
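
    A minimal sketch of one common heuristic for this kind of static scheduling: longest-predicted-job-first onto the least-loaded machine. The cost model, the fixed dispatch overhead, and the file names are hypothetical, not DiSCo's actual scheduler.

```python
import heapq

def schedule(jobs, n_machines, overhead=0.05):
    """Assign jobs (name, predicted_seconds) to machines, longest first.
    overhead: fixed per-job remote-dispatch cost (hypothetical)."""
    machines = [(0.0, i, []) for i in range(n_machines)]  # (load, id, jobs)
    heapq.heapify(machines)
    for name, secs in sorted(jobs, key=lambda j: -j[1]):
        load, i, assigned = heapq.heappop(machines)       # least-loaded
        assigned.append(name)
        heapq.heappush(machines, (load + secs + overhead, i, assigned))
    return sorted(machines, key=lambda m: m[1])

jobs = [("parser.c", 12.0), ("codegen.c", 9.0), ("util.c", 2.0),
        ("main.c", 1.0)]
for load, i, assigned in schedule(jobs, 2):
    print(f"machine {i}: {assigned} (load {load:.2f}s)")
```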

  • Transform Domain Shadow Removal for Foreground Silhouette

    Toshiaki SHIOTA, Kazuki NAKAGAMI, Takao NISHITANI

    PAPER-Digital Signal Processing
    Vol: E96-A No:3  Page(s): 667-674

    A novel shadow removal approach is proposed that uses block-wise transform-domain shadow detection. The approach is based on the fact that the spatial-frequency distribution of a normal background area and that of the same area under a shadow cast by a foreground object are the same. The proposed approach is especially useful for silhouette extraction with Gaussian Mixture background Model (GMM) foreground segmentation in the transform domain, because the frequency distribution has already been computed during foreground segmentation. Stable shadow removal is realized thanks to the transform-domain implementation.
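
    A minimal sketch of the underlying test: a shadowed background block keeps the background's spatial-frequency shape while losing brightness, so comparing DC-removed, normalized block spectra separates shadow from true foreground. The DCT choice, threshold, and names are illustrative assumptions, not the paper's exact detector.

```python
import numpy as np
from scipy.fft import dctn

def is_shadow(block, bg_block, sim_thresh=0.9):
    """block, bg_block: 8x8 grayscale arrays (current frame, background)."""
    f = dctn(block.astype(float), norm="ortho").ravel()[1:]   # drop DC term
    g = dctn(bg_block.astype(float), norm="ortho").ravel()[1:]
    darker = block.mean() < bg_block.mean()                   # shadows dim
    denom = np.linalg.norm(f) * np.linalg.norm(g) + 1e-9
    same_shape = (f @ g) / denom > sim_thresh                 # same spectrum
    return darker and same_shape

rng = np.random.default_rng(0)
bg = rng.uniform(100, 160, (8, 8))
print(is_shadow(bg * 0.6, bg))                     # True: dimmed background
print(is_shadow(rng.uniform(0, 255, (8, 8)), bg))  # False: real foreground
```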

  • Hardware Software Co-design of H.264 Baseline Encoder on Coarse-Grained Dynamically Reconfigurable Computing System-on-Chip

    Hung K. NGUYEN, Peng CAO, Xue-Xiang WANG, Jun YANG, Longxing SHI, Min ZHU, Leibo LIU, Shaojun WEI

    PAPER-Computer System
    Vol: E96-D No:3  Page(s): 601-615

    REMUS-II (REconfigurable MUltimedia System 2) is a coarse-grained dynamically reconfigurable computing system for multimedia and communication baseband processing. This paper proposes a real-time H.264 baseline-profile encoder on REMUS-II. First, we propose an overall flow for mapping algorithms onto the REMUS-II platform and illustrate it by implementing the H.264 encoder. Second, parallel and pipelining techniques are used to fully exploit the abundant computing resources of REMUS-II, increasing total computing throughput and coping with the high computational complexity of the H.264 encoder; data-reuse schemes are also employed to increase the data-reuse ratio and thereby reduce the required data bandwidth. Third, we propose a scheduling scheme that manages the run-time reconfiguration of the system, synchronizes data communication between tasks, and resolves conflicts between hardware resources. Experimental results show that the REMUS-MB system (the REMUS-II version for mobile applications) can run a real-time H.264/AVC baseline-profile encoder that encodes CIF video sequences at 30 fps with two reference frames and a maximum search range of [-16,15]; the implementation is therefore applicable to handheld devices targeted at mobile multimedia applications. The REMUS-MB platform is designed and synthesized in a TSMC 65 nm low-power technology; its die size is 13.97 mm², and it consumes, on average, about 100 mW at 166 MHz. To our knowledge, this is the first implementation of the H.264 encoding algorithm on a coarse-grained dynamically reconfigurable computing system reported in the literature.

  • Understanding the Impact of BPRAM on Incremental Checkpoint

    Xu LI, Kai LU, Xiaoping WANG, Bin DAI, Xu ZHOU

    PAPER-Dependable Computing
    Vol: E96-D No:3  Page(s): 663-672

    Existing large-scale systems suffer from various hardware/software failures, motivating research on fault-tolerance techniques. Checkpoint-restart is a widely applied fault-tolerance approach, especially in scientific computing systems; however, checkpointing overhead strongly influences overall system performance. Recently, emerging byte-addressable persistent memory technologies, such as phase change memory (PCM), have made it possible to implement checkpointing at arbitrary data granularity, but the impact of data granularity on checkpointing cost has not been fully addressed. In this paper, we investigate how data granularity influences the performance of a checkpoint system. Further, we design and implement a high-performance checkpoint system named AG-ckpt, a hybrid-granularity incremental checkpointing scheme built on (1) low-cost modified-memory detection and (2) fine-grained memory duplication. Moreover, we formalize the performance-granularity relationship of checkpointing systems in a mathematical model and obtain its optimum solutions. We verify the performance gains of our design through experiments on several typical benchmarks. Compared to conventional incremental checkpointing, our results show that AG-ckpt can reduce the checkpoint data volume by up to 50% and speed up checkpointing by 1.2x-1.3x.
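
    A minimal sketch of the hybrid-granularity idea: a cheap page-level dirty check first, then fine-grained block diffing inside dirty pages so only changed bytes are persisted. The page and block sizes and the shadow-copy scheme are illustrative assumptions, not AG-ckpt's implementation.

```python
PAGE = 4096
BLOCK = 64   # fine granularity inside a dirty page (hypothetical)

def incremental_checkpoint(memory, shadow):
    """memory, shadow: bytearrays of equal length; shadow holds the last
    checkpoint. Returns [(offset, bytes)] to persist and updates shadow."""
    delta = []
    for p in range(0, len(memory), PAGE):
        if memory[p:p + PAGE] == shadow[p:p + PAGE]:
            continue                        # coarse check: page is clean
        for b in range(p, p + PAGE, BLOCK):
            cur = memory[b:b + BLOCK]
            if cur != shadow[b:b + BLOCK]:  # fine check: block changed
                delta.append((b, bytes(cur)))
                shadow[b:b + BLOCK] = cur
    return delta

mem = bytearray(2 * PAGE)
shadow = bytearray(mem)
mem[5] = 1                                  # dirty a single byte
print([(off, len(data)) for off, data in incremental_checkpoint(mem, shadow)])
# [(0, 64)] -- one 64-byte block persisted, not a whole 4 KB page
```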

  • Novel Superconducting Quantum Interference Device Bootstrap Circuit and Its Application in Biomagnetism Open Access

    Xiangyan KONG, Yi ZHANG, Xiaoming XIE, Mianheng JIANG

    INVITED PAPER
    Vol: E96-C No:3  Page(s): 320-325

    The voltage-biased SQUID Bootstrap Circuit (SBC) was recently demonstrated for direct readout of SQUID signals. The SBC combines current and voltage feedback in one circuit to suppress preamplifier noise, offering not only good noise performance but also wide tolerance of SQUID parameters. Using an SBC gradiometer, biomagnetic signals were successfully measured. In this paper, we review the concept of the SBC and its applications.

  • Analyzing Characteristics of TCP Quality Metrics with Respect to Type of Connection through Measured Traffic Data

    Yasuhiro IKEDA, Ryoichi KAWAHARA, Noriaki KAMIYAMA, Tatsuaki KIMURA, Tatsuya MORI

    PAPER-Internet
    Vol: E96-B No:2  Page(s): 533-542

    We analyze measured traffic data to investigate the characteristics of TCP quality metrics, such as packet retransmission rate, round-trip time (RTT), and throughput, for connections classified by type (client-server (C/S) or peer-to-peer (P2P)) or by the location of the connection host (domestic or overseas). Our findings are as follows. (i) The TCP quality metrics of the measured traffic data are not necessarily consistent with a theoretical formula proposed in a previous study; however, the average RTT and the retransmission rate are negatively correlated with the throughput, as that formula suggests. Furthermore, the maximum idle time, defined as the maximum packet interarrival time within a connection, is negatively correlated with throughput. (ii) Each TCP quality metric of C/S connections is higher than that of P2P connections, where "higher quality" means either higher throughput or metric values that lead to higher throughput, such as a lower average RTT or a lower retransmission rate. Specifically, the median throughput of C/S connections is 2.5 times that of P2P connections in the incoming direction of domestic traffic. (iii) The characteristics of the TCP quality metrics depend on the location of the connection host. Some overseas servers might use a different TCP congestion-control scheme; even when these servers are excluded, the average RTT affects throughput differently for domestic and overseas traffic. One reason is thought to be the difference in maximum idle time; another is that the congestion levels of these types of traffic differ even when their average RTTs are the same.
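
    For orientation, steady-state TCP throughput formulas of the kind referenced here typically take the Mathis form below, which already exhibits the negative correlation with RTT and retransmission rate noted in finding (i); the exact formula the authors tested may differ.

```latex
\text{throughput} \approx \frac{C \cdot \mathrm{MSS}}{\mathrm{RTT}\,\sqrt{p}}
```

    where MSS is the maximum segment size, p the packet loss (retransmission) rate, and C a constant of order one.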

  • Device-Parameter Estimation through IDDQ Signatures

    Michihiro SHINTANI, Takashi SATO

    PAPER-Dependable Computing
    Vol: E96-D No:2  Page(s): 303-313

    We propose a novel technique for estimating device parameters that is suitable for post-fabrication performance compensation and adaptive delay testing, both effective means of improving the yield and reliability of LSIs. The proposed technique is based on Bayes' theorem: device parameters of a chip, such as transistor threshold voltage, are estimated from current signatures obtained in a regular IDDQ testing framework. Neither additional circuitry nor additional measurements are required for parameter estimation. Numerical experiments demonstrate that the proposed technique achieves 10-mV accuracy in threshold-voltage estimation.
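
    A minimal sketch of the Bayesian step on a discretized parameter grid: a prior over threshold voltage is updated with the likelihood of the observed IDDQ values. The exponential leakage model and all constants are hypothetical stand-ins for the paper's device models.

```python
import numpy as np

# hypothetical leakage model: IDDQ falls exponentially with Vth
def iddq_model(vth, a=1e-6, b=20.0):
    return a * np.exp(-b * vth)                  # amperes

vth_grid = np.linspace(0.30, 0.50, 201)          # candidate Vth values (V)
prior = np.ones_like(vth_grid) / len(vth_grid)   # uniform prior

observed = np.array([2.1e-9, 2.4e-9, 1.9e-9])    # IDDQ signature (A)
sigma = 3e-10                                    # measurement noise (A)

posterior = prior.copy()
for y in observed:                 # Bayes: posterior ~ prior * likelihood
    lik = np.exp(-0.5 * ((y - iddq_model(vth_grid)) / sigma) ** 2)
    posterior *= lik
posterior /= posterior.sum()

print(f"estimated Vth = {vth_grid[posterior.argmax()]:.3f} V")
```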

  • Deep Inspection of Unreachable BitTorrent Swarms

    Masahiro YOSHIDA, Akihiro NAKAO

    PAPER
    Vol: E96-D No:2  Page(s): 249-258

    BitTorrent is one of the most popular P2P file-sharing applications worldwide. Each BitTorrent network is called a swarm, and millions of peers may join multiple swarms. However, each swarm contains many unreachable peers (NATed (network address translated), firewalled, or inactive at the time of measurement), so existing techniques can measure only a subset of the peers in a swarm. In this paper, we propose an improved measurement method for BitTorrent swarms that include many unreachable peers. In essence, NATed and firewalled peers are found by letting them connect to our crawlers, whose addresses we actively advertise. Evaluation results show that the proposed method increases the number of unique contacted peers by 112% compared with the conventional method and increases the total volume of downloaded pieces by 66%. We investigate the sampling bias of the proposed and conventional methods and find that different measurement methods yield significantly different results.

  • PCA-Based Retinal Vessel Tortuosity Quantification

    Rashmi TURIOR, Danu ONKAEW, Bunyarit UYYANONVARA

    PAPER-Pattern Recognition
    Vol: E96-D No:2  Page(s): 329-339

    Automatic vessel-tortuosity measures are crucial for many applications related to retinal diseases such as retinopathy of prematurity (ROP), hypertension, stroke, diabetes, and cardiovascular disease. Automatic evaluation and quantification of retinal vascular tortuosity would help in the early detection of such retinopathies and other systemic diseases. In this paper, we propose a novel tortuosity index based on principal component analysis. The index is compared with three existing indices on simulated curves and real retinal images to demonstrate that it is a valid indicator of tortuosity. The proposed index satisfies all the tortuosity properties, such as invariance to translation, rotation, and scaling, as well as the modulation properties, and it differentiates structures that visually differ in tortuosity and shape. The proposed index can automatically classify an image as tortuous or non-tortuous. For an optimal set of training parameters, the prediction accuracy on 45 retinal images is as high as 82.94% at the segment level and 86.6% at the image level. The test results are verified against the judgment of two expert ophthalmologists. The proposed index is marked by its inherent simplicity and computational attractiveness, and produces the expected estimate irrespective of the segmentation approach. Examples and experimental results demonstrate the fitness and effectiveness of the proposed technique for both simulated curves and retinal images.
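
    A minimal sketch of one way a PCA-based tortuosity index can be formed: the ratio of minor- to major-axis variance of the centerline points, which is invariant to translation, rotation, and scaling; a straight segment scores near 0 and a winding one higher. This construction is our illustrative assumption, not necessarily the paper's exact index.

```python
import numpy as np

def pca_tortuosity(points):
    """points: (N, 2) array of vessel centerline coordinates."""
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)
    eigvals = np.sort(np.linalg.eigvalsh(cov))   # ascending
    return abs(eigvals[0]) / eigvals[1]          # minor/major variance ratio
    # abs() guards against tiny negative eigenvalues from numerics

t = np.linspace(0, 4 * np.pi, 200)
straight = np.c_[t, 0.01 * t]                    # nearly straight vessel
tortuous = np.c_[t, np.sin(t)]                   # winding vessel
print(f"straight: {pca_tortuosity(straight):.4f}")  # ~0
print(f"tortuous: {pca_tortuosity(tortuous):.4f}")  # noticeably larger
```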

  • Frequency Response and Applications of Optical Electric-Field Sensor at Frequencies from 20 kHz to 180 GHz

    Hiroyoshi TOGO, David MORENO-DOMINGUEZ, Naoya KUKUTSU

    PAPER
    Vol: E96-C No:2  Page(s): 227-234

    This article describes the frequency response and applications of an optical electric-field sensor consisting of a 1 mm × 1 mm × 1 mm CdTe crystal mounted on the tip of an optical fiber, which theoretically has the potential to cover the frequency band from below one megahertz to the terahertz range. We use a capacitor, a GTEM cell, and standard-gain horn antennas to apply a free-space electric field to the optical sensor at frequencies from 20 kHz to 1 GHz, from 1 GHz to 18 GHz, and from 10 GHz to 180 GHz, respectively. An electric-field measurement demonstrates a flat frequency response within a 6-dB range from 20 kHz to 50 GHz, except for a resonance due to the piezoelectric effect at around 1 MHz; above 50 GHz, the sensitivity increases owing to the resonance of the radio-frequency wave propagating in the crystal. These experimental results demonstrate that the optical electric-field sensor is a superior tool for wide-band measurements that are impossible with conventional sensors such as dipole, loop, and horn antennas. In transient electrostatic-discharge measurements, electric-field mapping, and near-field antenna measurements, the optical electric-field sensor provides useful information for the deterioration diagnosis and lifetime prognosis of electric circuits and devices. These applications make the optical electric-field sensor a promising way to advance electric-field measurement for antenna characterization, EMC, and EMI.

  • A Low-Cost, Distributed and Conflict-Aware Measurement Method for Overlay Network Services Utilizing Local Information Exchange

    Tien Hoang DINH, Go HASEGAWA, Masayuki MURATA

    PAPER
    Vol: E96-B No:2  Page(s): 459-469

    Measuring network resource information, including available bandwidth, propagation delay, and packet loss ratio, is an important task for the efficient operation of overlay network services. Although measurement accuracy can be enhanced by frequent measurements, measuring at high frequency can cause measurement conflicts, which increase the network load and degrade measurement accuracy. In this paper, we propose a low-cost, distributed, and conflict-aware measurement method that reduces measurement conflicts while maintaining high measurement accuracy. The main idea is that each overlay node exchanges route information and measurement results with its neighboring overlay nodes while decreasing its measurement frequency; that is, our method trades the overhead of conducting measurements for the overhead of information exchange. Simulation results show that the relative error of our method's measurement results is half that of an existing method when the total measurement overheads of the two methods are equal. We also confirm that exchanging measurement results contributes more to measurement accuracy than performing additional measurements.
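
    A minimal sketch of the exchange-instead-of-measure idea: a node probes a path only when no sufficiently fresh result is cached, and caches are merged with neighbors so one node's measurement spares the other a conflicting probe. The TTL, cache structure, and names are hypothetical, not the paper's protocol.

```python
import time

class OverlayNode:
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self.cache = {}                          # path -> (value, timestamp)

    def measure(self, path, probe):
        entry = self.cache.get(path)
        if entry and time.time() - entry[1] < self.ttl:
            return entry[0]                      # fresh enough: skip probing
        value = probe(path)                      # conduct actual measurement
        self.cache[path] = (value, time.time())
        return value

    def exchange(self, neighbor):
        """Merge caches, keeping the newer result for each path."""
        for path, entry in neighbor.cache.items():
            if path not in self.cache or entry[1] > self.cache[path][1]:
                self.cache[path] = entry

a, b = OverlayNode(), OverlayNode()
a.measure(("a", "c"), lambda p: 42.0)            # a probes the path once
b.exchange(a)                                    # b learns the result
print(b.measure(("a", "c"), lambda p: 99.0))     # 42.0 -- no duplicate probe
```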

  • Method of Image Green's Function in Grating Theory: Reflection Extinction Theorem

    Junichi NAKAYAMA, Yasuhiko TAMURA

    BRIEF PAPER-Scattering and Diffraction
    Vol: E96-C No:1  Page(s): 51-54

    In the theory of diffraction gratings, the conventional integral method is regarded as a powerful tool for numerical analysis, but it fails at a critical angle of incidence because the periodic Green's function (the integral kernel) diverges. This problem was resolved by the image integral equation in a previous paper. By newly introducing the reflection extinction theorem, this paper derives the image extinction theorem and the image integral equation. It is then concluded that the image integral equation embodies two physical processes: the image surface radiates a reflected plane wave, while the periodic surface radiates the diffracted wave.

  • Highest Probability Data Association for Multi-Target Particle Filtering with Nonlinear Measurements

    Da Sol KIM, Taek Lyul SONG, Darko MUŠICKI

    PAPER-Sensing
    Vol: E96-B No:1  Page(s): 281-290

    In this paper, we propose a new data-association method termed highest probability data association (HPDA) and apply it to real-time recursive nonlinear tracking in heavy clutter. The proposed method combines the probabilistic nearest neighbor (PNN) approach with a modified probabilistic strongest neighbor (PSN) approach that uses only the rank of the measurement amplitudes; this makes it robust, since the exact shape of the amplitude probability density function is not needed. The HPDA is combined with particle filtering for nonlinear target tracking in clutter: the measurement with the highest measurement-to-track association probability is selected for the track update. The HPDA provides track-quality information that can be used for false-track termination and true-track confirmation, and it extends easily to multi-target tracking with nonlinear particle filtering. Simulation studies demonstrate the HPDA's functionality in a hostile environment with high clutter density and low target-detection probability.
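
    A minimal sketch of the selection step only: given gated measurements, score each by a toy Gaussian innovation likelihood weighted by amplitude rank (rank only, per the modified PSN idea), and pick the single highest-scoring one for the track update. The weighting scheme is an illustrative assumption, not the paper's exact association probabilities.

```python
import numpy as np

def hpda_select(predicted, measurements, amplitudes, sigma=1.0):
    """Return index of the measurement with the highest association score.
    predicted: predicted measurement (2,); measurements: (M, 2) positions."""
    d2 = ((measurements - predicted) ** 2).sum(axis=1)
    likelihood = np.exp(-0.5 * d2 / sigma**2)        # innovation likelihood
    ranks = np.argsort(np.argsort(amplitudes)) + 1   # 1 = weakest amplitude
    rank_weight = ranks / ranks.sum()                # stronger -> larger
    score = likelihood * rank_weight
    return int(np.argmax(score))

pred = np.array([0.0, 0.0])
meas = np.array([[0.2, 0.1], [1.5, -0.3], [0.25, 0.0]])
amps = np.array([3.0, 9.0, 5.0])                     # measurement amplitudes
print(hpda_select(pred, meas, amps))                 # 2: close and strong
```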

  • A Distributed TDMA Scheduling Algorithm with Distance-Measurement-Based Power Control for Sensor Networks

    Koji SATO, Shiro SAKATA

    PAPER-Network and Communication
    Vol: E95-D No:12  Page(s): 2879-2887

    This paper proposes a distributed TDMA slot-scheduling algorithm with power control in which the slot-allocation priority is determined by distance-measurement information. In the proposed scheme, Lamport's bakery algorithm for mutual exclusion is applied to prioritized slot allocation based on the measured distance between nodes, combined with a packet-based transmission power control scheme. The aim is a medium access control method that can practically construct a local network by limiting its scope. The proposed scheme can serve as a distance-measurement-oriented replacement for the DRAND algorithm in the Z-MAC scheme and contributes to efficient TDMA slot allocation.
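
    A minimal sketch of the prioritization rule only: each node's bakery-style ticket is derived from its measured distance, slots are claimed in ticket order, and node ID breaks ties as in Lamport's algorithm. This is a schematic of the priority ordering, not a full distributed implementation.

```python
def assign_slots(nodes, n_slots):
    """nodes: {node_id: measured_distance_m}. Nearer nodes get earlier
    slots; ties broken by node id, as in Lamport's bakery algorithm."""
    tickets = sorted((dist, nid) for nid, dist in nodes.items())
    return {nid: i % n_slots for i, (dist, nid) in enumerate(tickets)}

nodes = {"A": 12.0, "B": 3.5, "C": 7.2, "D": 3.5}
print(assign_slots(nodes, n_slots=4))
# {'B': 0, 'D': 1, 'C': 2, 'A': 3} -- nearest first, id breaks the tie
```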

  • Analytical Modeling of Network Throughput Prediction on the Internet

    Chunghan LEE, Hirotake ABE, Toshio HIROTSU, Kyoji UMEMURA

    PAPER-Network and Communication
    Vol: E95-D No:12  Page(s): 2870-2878

    Predicting network throughput is important for network-aware applications. Network throughput depends on a number of factors, and many prediction methods have been proposed; however, many of them suffer from the facts that the distribution of traffic fluctuation is unclear and that the scale and bandwidth of networks are rapidly increasing. Furthermore, virtual machines are used as platforms in many network research and service fields, and they can affect network measurement. A prediction method that uses pairs of differently sized connections has been proposed; this method, which we call the connection pair, uses a small probe transfer over TCP to predict the throughput of a large data transfer. We focus on measurement, analysis, and modeling for precise prediction. We first clarify that the actual throughput for the connection pair changes non-linearly and monotonically, with noise. Second, we built a previously proposed predictor with the same training data sets as our method and found it unsuitable for capturing these characteristics. We therefore propose a throughput prediction method based on the connection pair that uses ν-support vector regression with a polynomial kernel to handle prediction models represented as non-linear, continuous monotonic functions. Its predictions are more accurate than those of the previous predictor, and under unstable network conditions its drop in accuracy is also smaller.
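
    A minimal sketch of the regression component with scikit-learn's NuSVR and a polynomial kernel, fitting large-transfer throughput from probe throughput; the synthetic monotone data and the hyperparameters are illustrative, not the paper's training setup.

```python
import numpy as np
from sklearn.svm import NuSVR

rng = np.random.default_rng(1)
probe = rng.uniform(1, 50, 300)            # probe throughput (Mbit/s)
# synthetic non-linear, monotone relation plus noise (hypothetical)
actual = 8 * np.sqrt(probe) + rng.normal(0, 1.0, probe.size)

model = NuSVR(nu=0.5, C=10.0, kernel="poly", degree=3, coef0=1.0)
model.fit(probe.reshape(-1, 1), actual)    # 1 feature: probe throughput

test = np.array([[5.0], [20.0], [45.0]])
print(model.predict(test).round(2))        # predicted large-transfer rates
```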

  • GREAT-CEO: larGe scale distRibuted dEcision mAking Techniques for Wireless Chief Executive Officer Problems Open Access

    Xiaobo ZHOU, Xin HE, Khoirul ANWAR, Tad MATSUMOTO

    INVITED PAPER
    Vol: E95-B No:12  Page(s): 3654-3662

    In this paper, we reformulate an issue in wireless mesh networks (WMNs) from the viewpoint of the Chief Executive Officer (CEO) problem and provide a practical solution to a simple case of the problem. The CEO problem is well known as a theoretical basis for sensor networks. The problem investigated in this paper is as follows: an originator broadcasts its binary information sequence to several forwarding nodes (relays) over Binary Symmetric Channels (BSCs), so the information sequence suffers independent random bit errors; the forwarding nodes simply interleave and encode the received bit sequence and forward it to the final destination (FD) over Additive White Gaussian Noise (AWGN) channels, without attempting to correct errors that may have occurred on the originator-relay links. This strategy significantly reduces relay complexity. We propose a joint iterative decoding technique at the FD that exploits knowledge of the correlation induced by the errors occurring on the link between the originator and the forwarding nodes (referred to as the intra-link). The bit-error-rate (BER) results show that the originator's information can be reconstructed at the FD even with a very simple coding scheme. We compare the BER performance of the joint and separate decoding strategies, and the simulation results show that the proposed system achieves excellent performance. Furthermore, extrinsic information transfer (EXIT) chart analysis is performed to investigate the convergence properties of the proposed technique, in part to optimize the code rate at the originator.

301-320 of 1013 hits