
Keyword Search Result

[Keyword] TE (21534 hits)

Results 9461-9480 of 21534

  • Characterization of Minimum Route ETX in Multi-Hop Wireless Networks

    Kazuyuki MIYAKITA  Keisuke NAKANO  Yusuke MORIOKA  Masakazu SENGOKU  Shoji SHINODA
    PAPER  Vol: E92-B No:3  Page(s): 745-754

    In multi-hop wireless networks, communication quality depends on the selection of a path between the source and destination nodes from several candidate paths. Exploring how path selection affects communication quality is important for characterizing the best path. To do this, in [1], we used the expected transmission count (ETX) as a metric of communication quality and theoretically characterized minimum route ETX, which is the ETX of the best path, in a static one-dimensional random multi-hop network. In this paper, we characterize minimum route ETX in static two-dimensional multi-hop networks. We give the exact formula of minimum route ETX in a two-dimensional network, assuming that nodes are located on a lattice and that the ETX function satisfies three conditions that simplify the analysis. Without two of the three conditions, this formula can still be used as an upper bound on minimum route ETX. We show that this upper bound is close to minimum route ETX by comparing it with simulation results. Before deriving the formula, we also give the formula for a one-dimensional network in which nodes are located at constant intervals. We also show that minimum route ETX in the lattice network is close to that in a two-dimensional random network when the node density is large, based on a comparison between numerical and simulation results.
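    For reference, the quantities characterized above are usually defined as follows (this is the standard ETX formulation from the literature; the three simplifying conditions imposed on the ETX function in the paper are not reproduced here):

      \mathrm{ETX}(l) = \frac{1}{d_f \, d_r}, \qquad
      \mathrm{ETX}(P) = \sum_{l \in P} \mathrm{ETX}(l), \qquad
      \mathrm{ETX}_{\min}(s, t) = \min_{P \in \mathcal{P}(s, t)} \mathrm{ETX}(P)

    where d_f and d_r are the forward and reverse delivery ratios of link l, and P ranges over the candidate paths between source s and destination t.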

  • Controlling the Display of Capsule Endoscopy Video for Diagnostic Assistance

    Hai VU  Tomio ECHIGO  Ryusuke SAGAWA  Keiko YAGI  Masatsugu SHIBA  Kazuhide HIGUCHI  Tetsuo ARAKAWA  Yasushi YAGI
    PAPER-Biological Engineering  Vol: E92-D No:3  Page(s): 512-528

    Interpretations by physicians of capsule endoscopy image sequences captured over periods of 7-8 hours usually require 45 to 120 minutes of extreme concentration. This paper describes a novel method for reducing diagnostic time by automatically controlling the display frame rate. Unlike existing techniques, this method displays the original images with no skipping of frames. The sequence can be played at a high frame rate in stable regions to save time; in regions with rough changes, the speed is decreased so that suspicious findings can be ascertained more conveniently. To realize such a system, cue information about the disparity of consecutive frames, including color similarity and motion displacements, is extracted. A decision tree utilizes these features to classify the states of image acquisition. For each classified state, the delay time between frames is calculated by parametric functions. A scheme that selects the optimal parameter set, determined from assessments by physicians, is deployed. Experiments involved clinical evaluations to investigate the effectiveness of this method compared with standard viewing on an existing system. Results from a logged-action-based analysis show that, compared with the existing system, the proposed method reduced diagnostic time to around 32.5 ± 7 minutes per full sequence while the number of abnormalities found was similar. In addition, physicians needed less effort because of the system's efficient operability. The results of the evaluations should convince physicians that they can safely use this method and obtain reduced diagnostic times.
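    The core idea above — playing quickly through stable regions and slowing down where consecutive frames differ strongly — can be sketched as follows. This is a minimal illustration only: the similarity measure, the linear delay mapping, and all constants are assumptions, not the decision-tree classifier or the physician-tuned parametric functions of the paper.

      import numpy as np

      def color_similarity(frame_a, frame_b):
          # Mean absolute RGB difference mapped to [0, 1]; 1 means identical frames.
          diff = np.abs(frame_a.astype(float) - frame_b.astype(float)).mean()
          return 1.0 - diff / 255.0

      def frame_delay(similarity, t_min=0.02, t_max=0.5):
          # Illustrative parametric delay: high similarity -> short delay (fast playback).
          return t_max - (t_max - t_min) * similarity

      # Example with synthetic 8x8 RGB frames.
      rng = np.random.default_rng(0)
      frames = [rng.integers(0, 256, (8, 8, 3)) for _ in range(5)]
      delays = [frame_delay(color_similarity(a, b)) for a, b in zip(frames, frames[1:])]
      print([round(d, 3) for d in delays])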

  • Zero-Forcing Beamforming Multiuser-MIMO Systems with Finite Rate Feedback for Multiple Stream Transmission per User

    Masaaki FUJII
    LETTER-Wireless Communication Technologies  Vol: E92-B No:3  Page(s): 1035-1038

    We describe a channel-vector quantization scheme that is suitable for multiple stream transmission per user in zero-forcing beamforming (ZFBF) multiuser multiple-input multiple-output (MU-MIMO) systems with finite-rate feedback. Multiple subsets of a channel matrix are quantized to vectors from random vector codebooks for finite-rate feedback. The quantization vectors whose mutual angle is closest to orthogonal are then selected, and their indexes are fed back to the transmitter. Simulation results demonstrate that the proposed scheme achieves a better average throughput than serving a single stream per user when the number of active users is smaller than the number of transmit antennas, and that it provides an average throughput close to that of serving a single stream per user when the number of users is equal to the number of transmit antennas.
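    The selection step described above can be illustrated numerically as follows: each channel sub-vector is quantized against a random codebook, and among the candidate codewords the pair closest to orthogonal is chosen for feedback. The codebook size, the correlation-based quantizer, and the two-candidate search are assumptions made for this sketch, not the scheme's exact parameters.

      import numpy as np

      rng = np.random.default_rng(1)

      def random_codebook(size, dim):
          # Random vector codebook of unit-norm complex codewords.
          c = rng.standard_normal((size, dim)) + 1j * rng.standard_normal((size, dim))
          return c / np.linalg.norm(c, axis=1, keepdims=True)

      def candidates(v, codebook, k=2):
          # Indexes of the k codewords with the largest correlation to v.
          corr = np.abs(codebook @ v.conj()) / np.linalg.norm(v)
          return np.argsort(corr)[-k:]

      # Two row sub-vectors of a 2x4 user channel matrix, quantized separately.
      H = rng.standard_normal((2, 4)) + 1j * rng.standard_normal((2, 4))
      codebook = random_codebook(16, 4)
      cand0, cand1 = candidates(H[0], codebook), candidates(H[1], codebook)

      # Among the candidate pairs, keep the pair whose codewords are most orthogonal.
      best = min(((i, j) for i in cand0 for j in cand1 if i != j),
                 key=lambda p: np.abs(codebook[p[0]] @ codebook[p[1]].conj()))
      print("feedback indexes:", best)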

  • Design of a Non-linear Quantizer for Transform Domain DVC

    Murat B. BADEM  Rajitha WEERAKKODY  Anil FERNANDO  Ahmet M. KONDOZ
    PAPER-Digital Signal Processing  Vol: E92-A No:3  Page(s): 847-852

    Distributed Video Coding (DVC) is an emerging video coding paradigm that is characterized by a flexible architecture for designing very low cost video encoders. This feature could be utilized very effectively in a number of potential many-to-one video coding applications. However, the compression efficiency of the latest DVC implementations still falls behind the state of the art in conventional video coding, namely H.264/AVC. In this paper, a novel non-linear quantization algorithm is proposed for DVC in order to improve the rate-distortion (RD) performance. The proposed solution is expected to exploit the dominant contribution to picture quality made by the relatively small coefficients, given the high concentration of coefficients near zero that is evident when the residual input video signal for the Wyner-Ziv frames is considered in the transform domain. The performance of the proposed solution incorporating the non-linear quantizer is compared with the performance of an existing transform domain DVC solution that uses a linear quantizer. The simulation results show a consistently improved RD performance at all bitrates when different test video sequences with varying motion levels are considered.
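    The motivation — spending more quantization levels on the many small transform coefficients near zero — can be illustrated with a simple companding-style quantizer. The mu-law mapping below is a generic example chosen for this sketch and is not the quantizer designed in the paper.

      import numpy as np

      def nonlinear_quantize(x, levels=16, mu=255.0, x_max=1.0):
          # Compress with a mu-law style mapping, quantize uniformly, then expand.
          # Small-magnitude inputs get finer effective step sizes than large ones.
          y = np.sign(x) * np.log1p(mu * np.abs(x) / x_max) / np.log1p(mu)
          step = 2.0 / levels
          q = np.clip(np.round(y / step) * step, -1.0, 1.0)
          return np.sign(q) * (x_max / mu) * np.expm1(np.abs(q) * np.log1p(mu))

      coeffs = np.array([-0.8, -0.1, -0.02, 0.0, 0.01, 0.05, 0.4])
      print(nonlinear_quantize(coeffs))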

  • Link Correlation Based Transmit Sector Antenna Selection for Alamouti Coded OFDM

    Chang-Jun AHN
    PAPER  Vol: E92-A No:3  Page(s): 816-823

    In MIMO systems, the deployment of a multiple antenna technique can enhance system performance. However, since the cost of RF transmitters is much higher than that of antennas, there is growing interest in techniques that use a larger number of antennas than RF transmitters. These methods rely on selecting the optimal transmit antennas and connecting them to the respective RF transmitters. In this case, feedback information (FBI) is required to select the optimal transmit antenna elements. Since FBI is control overhead, the feedback rate is limited. This motivates the study of limited feedback techniques in which only partial or quantized information from the receiver is conveyed back to the transmitter. However, in MIMO/OFDM systems, it is difficult to develop an effective FBI quantization method for choosing the space-time, space-frequency, or space-time-frequency processing, owing to the numerous subchannels. Moreover, MIMO/OFDM systems require an antenna separation of 5-10 wavelengths to keep the correlation coefficient below 0.7 and thus achieve a diversity gain. In this case, the base station requires a large space to set up multiple antennas. To reduce these problems, in this paper we propose link correlation based transmit sector antenna selection for Alamouti coded OFDM without FBI.

  • Kalman Filter-Based Error Concealment for Video Transmission

    Shigeki TAKAHASHI  Takahiro OGAWA  Hirokazu TANAKA  Miki HASEYAMA
    PAPER  Vol: E92-A No:3  Page(s): 779-787

    A novel error concealment method using a Kalman filter is presented in this paper. In order to utilize the Kalman filter successfully, state transition and observation models suitable for video error concealment are newly defined as follows. The state transition model represents the video decoding process by motion-compensated prediction. Furthermore, a new observation model that represents an image blurring process is defined, which makes calculation of the Kalman gain possible. The proposed method solves the problem of traditional methods by using the Kalman filter, and accurate reconstruction of corrupted video frames is achieved. Consequently, an effective error concealment method using the Kalman filter is realized. Experimental results show that the proposed method performs better than traditional methods.
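    For readers unfamiliar with the filter itself, the generic predict/update recursion used by any linear Kalman filter is sketched below. The motion-compensated state-transition model and the blurring observation model defined in the paper are not reproduced here, so F, H, Q, and R are placeholders.

      import numpy as np

      def kalman_step(x, P, z, F, H, Q, R):
          # Predict: propagate the state estimate and its covariance.
          x_pred = F @ x
          P_pred = F @ P @ F.T + Q
          # Update: correct the prediction with the observation z.
          S = H @ P_pred @ H.T + R                 # innovation covariance
          K = P_pred @ H.T @ np.linalg.inv(S)      # Kalman gain
          x_new = x_pred + K @ (z - H @ x_pred)
          P_new = (np.eye(len(x)) - K @ H) @ P_pred
          return x_new, P_new

      # Toy example: a scalar "pixel intensity" tracked through noisy observations.
      x, P = np.array([0.0]), np.eye(1)
      F = H = Q = R = np.eye(1)
      for z in [0.9, 1.1, 1.0]:
          x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
      print(x)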

  • Heart Instantaneous Frequency Based Estimation of HRV from Blood Pressure Waveforms

    Fausto LUCENA  Allan Kardec BARROS  Yoshinori TAKEUCHI  Noboru OHNISHI
    PAPER-Biological Engineering  Vol: E92-D No:3  Page(s): 529-537

    Heart rate variability (HRV) is a measure based on the time positions of the electrocardiogram (ECG) R-waves. There is an ongoing discussion about whether or not the HRV pattern can be obtained from blood pressure (BP). In this paper, we propose a method for estimating HRV from a BP signal based on the heart instantaneous frequency (HIF) algorithm and carry out experiments to compare BP, as an alternative to ECG, for calculating HRV. Based on the hypothesis that ECG and BP have the same harmonic behavior, we model an alternative HRV signal using a nonlinear algorithm, the HIF, which tracks the instantaneous frequency through a rough fundamental frequency obtained from the power spectral density (PSD). A novelty of this work is the use of the fundamental frequency, instead of wave peaks, as the parameter for estimating and quantifying beat-to-beat heart rate variability from BP waveforms. To verify how the HRV signals estimated from BP using HIF correlate with the gold-standard measure, i.e., HRV derived from ECG, we use a traditional algorithm based on QRS detection followed by thresholding to localize the R-wave time peaks. The results show the following: 1) the spectral error caused by misestimation of the R-peak times is demonstrated by an increase in the high-frequency bands followed by a loss of the time-domain pattern; 2) HIF was shown to be robust against noise and nuisances; 3) using statistical methods and nonlinear analysis, no difference between HIF derived from BP and HRV derived from ECG was observed.
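    A minimal sketch of the first step mentioned above — obtaining a rough fundamental frequency from the power spectral density of a blood-pressure segment — is given below. The periodogram estimate and the 0.5-3 Hz search band are illustrative assumptions; the actual HIF tracking stage is not reproduced.

      import numpy as np

      def rough_fundamental(signal, fs, f_lo=0.5, f_hi=3.0):
          # Pick the PSD peak inside a plausible heart-rate band (in Hz).
          x = signal - np.mean(signal)
          psd = np.abs(np.fft.rfft(x)) ** 2          # unnormalized periodogram
          freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
          band = (freqs >= f_lo) & (freqs <= f_hi)
          return freqs[band][np.argmax(psd[band])]

      # Synthetic 1.2 Hz (72 bpm) pressure-like waveform sampled at 100 Hz.
      fs = 100.0
      t = np.arange(0, 30, 1 / fs)
      bp = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 2.4 * t)
      print(rough_fundamental(bp, fs))               # close to 1.2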

  • Self-Protected Spanning Tree Based Recovery Scheme to Protect against Single Failure

    Depeng JIN  Wentao CHEN  Li SU  Yong LI  Lieguang ZENG
    PAPER-Network Management/Operation  Vol: E92-B No:3  Page(s): 909-921

    We present a recovery scheme based on a Self-protected Spanning Tree (SST), which recovers from failures all by itself. In the recovery scheme, the links are assigned birthdays that denote the order in which they are to be considered for adding to the SST. The recovery mechanism, named the Birthday-based Link Replacing Mechanism (BLRM), is able to transform an SST into a new spanning tree by replacing some tree links with non-tree links of the same birthday, which ensures network connectivity after any single link or node failure. First, we theoretically prove that the SST-based recovery scheme can be applied to arbitrary two-edge-connected or two-connected networks. Then, the recovery time of BLRM is analyzed and evaluated using Ethernet, and the simulation results demonstrate the effectiveness of BLRM in achieving fast recovery. We also point out that BLRM provides a novel load balancing mechanism by rapidly changing the topology of the SST.

  • Training Set Selection for Building Compact and Efficient Language Models

    Keiji YASUDA  Hirofumi YAMAMOTO  Eiichiro SUMITA
    PAPER-Natural Language Processing  Vol: E92-D No:3  Page(s): 506-511

    For statistical language model training, corpora matched to the target domain are required. However, training corpora sometimes include both sentences matched to the target domain and unmatched ones. In such a case, training set selection is effective both for reducing model size and for improving model performance. In this paper, a training set selection method for statistical language model training is described. The method provides two advantages for training a language model: its capacity to improve language model performance, and its capacity to reduce the computational load of the language model. The method has four steps. 1) Sentence clustering is applied to all available corpora. 2) Language models are trained on each cluster. 3) Perplexity on the development set is calculated using the language models. 4) For the final language model training, we use the clusters whose language models yield low perplexities. The experimental results indicate that the language model trained on the data selected by our method gives lower perplexity on an open test set than a language model trained on all available corpora.
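    The four steps can be sketched end to end with a toy unigram language model. The fixed clusters, add-one smoothing, and the perplexity threshold below are illustrative simplifications chosen for this sketch, not the clustering or models used in the paper.

      import math
      from collections import Counter

      def train_unigram(sentences):
          # Add-one-smoothed unigram model from tokenized sentences.
          counts = Counter(w for s in sentences for w in s)
          return counts, sum(counts.values()), len(counts)

      def perplexity(model, sentences):
          counts, total, vocab = model
          log_sum, n_words = 0.0, 0
          for s in sentences:
              for w in s:
                  p = (counts.get(w, 0) + 1) / (total + vocab + 1)
                  log_sum -= math.log(p)
                  n_words += 1
          return math.exp(log_sum / max(n_words, 1))

      # Step 1 (stand-in): two pre-made "clusters", e.g. travel-domain vs. news-domain sentences.
      clusters = [
          [["where", "is", "the", "station"], ["how", "much", "is", "a", "ticket"]],
          [["the", "market", "fell", "today"], ["shares", "rose", "in", "tokyo"]],
      ]
      dev_set = [["where", "is", "the", "ticket", "office"]]

      # Steps 2-3: train one model per cluster and score the development set.
      ppls = [perplexity(train_unigram(c), dev_set) for c in clusters]

      # Step 4: keep only low-perplexity clusters and train the final model on them.
      selected = [c for c, p in zip(clusters, ppls) if p <= min(ppls) * 1.5]
      final_model = train_unigram([s for c in selected for s in c])
      print(ppls, len(selected))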

  • Visual Software Development Environment Based on Graph Grammars

    Takaaki GOTO  Kenji RUISE  Takeo YAKU  Kensei TSUCHIDA
    PAPER-Software Engineering  Vol: E92-D No:3  Page(s): 401-412

    In software design and development, program diagrams are often used because they provide good visualization. Many kinds of program diagrams have been proposed and used. To process such diagrams automatically and efficiently, the program diagram structure needs to be formalized. We aim to construct a diagram processing system with an efficient parser for our program diagram Hichart. In this paper, we give a precedence graph grammar for Hichart that can be parsed in linear time. We also describe a parsing method and a processing system incorporating the Hichart graphical editor that is based on the precedence graph grammar.

  • Adaptive Subframe Partitioning and Efficient Packet Scheduling in OFDMA Cellular System with Fixed Decode-and-Forward Relays

    Liping WANG  Yusheng JI  Fuqiang LIU
    PAPER  Vol: E92-B No:3  Page(s): 755-765

    The integration of multihop relays with orthogonal frequency-division multiple access (OFDMA) cellular infrastructures can meet the growing demands for better coverage and higher throughput. Resource allocation in an OFDMA two-hop relay system is more complex than in a conventional single-hop OFDMA system. With time division between transmissions from the base station (BS) and those from relay stations (RSs), fixed partitioning of the BS subframe and RS subframes cannot adapt to varying traffic demands. Moreover, single-hop scheduling algorithms cannot be used directly in the two-hop system. Therefore, we propose a semi-distributed algorithm called ASP to adjust the length of every subframe adaptively, and suggest two ways to extend single-hop scheduling algorithms to multihop scenarios: link-based and end-to-end approaches. Simulation results indicate that the ASP algorithm increases system utilization and fairness. The max carrier-to-interference ratio (Max C/I) and proportional fairness (PF) scheduling algorithms extended using the end-to-end approach obtain higher throughput than those using the link-based approach, but at the expense of more overhead for information exchange between the BS and RSs. The resource allocation scheme using ASP and end-to-end PF scheduling achieves a tradeoff between system throughput maximization and fairness.

  • User-Perceived Reliability of M-for-N (M:N) Shared Protection Systems

    Hirokazu OZAKI  Atsushi KARA  Zixue CHENG
    PAPER-Dependable Computing  Vol: E92-D No:3  Page(s): 443-450

    In this paper we investigate the reliability of general shared protection systems, i.e., M-for-N (M:N), which can typically be applied to various telecommunication network devices. We focus on the reliability perceived by an end user of one of the N units. We assume that any failed unit is instantly replaced by one of the M units (if available). We describe the effectiveness of such a protection system in a quantitative manner. The mathematical analysis gives a closed-form solution for the availability, and a recursive computing algorithm for the MTTFF (Mean Time to First Failure) and the MTTF (Mean Time to Failure) perceived by an arbitrary end user. We also show that, under a certain condition, the probability distribution of the TTFF (Time to First Failure) can be approximated by a simple exponential distribution. The analysis provides useful information for the analysis and design not only of telecommunication network devices but also of other general shared protection systems that are subject to service level agreements (SLAs) involving user-perceived reliability measures.
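    As a numerical companion, the availability of a simple M:N model can be computed from a birth-death chain. Exponential failure and repair times, N always-active units, a single repair facility, and "all users served while at most M units are failed" are assumptions of this sketch, not the exact user-perceived model analyzed in the paper.

      import numpy as np

      def availability(n_units, m_spares, lam, mu):
          # State k = number of failed units; detailed balance of a birth-death chain.
          K = n_units + m_spares
          pi = np.zeros(K + 1)
          pi[0] = 1.0
          for k in range(K):
              in_service = n_units if k <= m_spares else n_units - (k - m_spares)
              pi[k + 1] = pi[k] * (in_service * lam) / mu
          pi /= pi.sum()
          return pi[: m_spares + 1].sum()   # every user has a working unit while k <= M

      print(availability(n_units=10, m_spares=2, lam=0.001, mu=0.1))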

  • Hybrid Model for Cascading Outage in a Power System: A Numerical Study

    Yoshihiko SUSUKI  Yu TAKATSUJI  Takashi HIKIHARA
    PAPER-Nonlinear Problems  Vol: E92-A No:3  Page(s): 871-879

    Analysis of cascading outages in power systems is important for understanding why large blackouts emerge and how to prevent them. Cascading outages are complex dynamics of power systems, and one cause of them is the interaction between the swing dynamics of synchronous machines and the protection operation of relays and circuit breakers. This paper uses hybrid dynamical systems as a mathematical model for cascading outages caused by this interaction. Hybrid dynamical systems can combine families of flows describing swing dynamics with switching rules based on protection operation. This paper refers to data on a cascading outage in the September 2003 blackout in Italy and presents a hybrid dynamical system whose reproduced propagation of outages is consistent with the data. This result suggests that hybrid dynamical systems can provide an effective model for the analysis of cascading outages in power systems.
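    For context, the continuous (flow) part of such a hybrid model is typically the classical swing equation of each synchronous machine, with relay and circuit-breaker operation entering as discrete switching of the network parameters. The textbook form below is given only as background; it is not the specific model identified from the 2003 Italy data.

      M_i \ddot{\delta}_i + D_i \dot{\delta}_i
        = P_{m i} - \sum_{j} E_i E_j \bigl( G_{ij} \cos(\delta_i - \delta_j)
        + B_{ij} \sin(\delta_i - \delta_j) \bigr)

    Here δ_i is the rotor angle of machine i, M_i and D_i are its inertia and damping constants, P_{mi} is the mechanical input, E_i is the internal voltage, and G_ij, B_ij are network conductances and susceptances that change when protection devices switch lines out.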

  • Object-Based Auto Exposure and Focus Algorithms Based on the Human Visual System

    Kwanghyun LEE  Suyoung PARK  Sanghoon LEE
    LETTER  Vol: E92-A No:3  Page(s): 832-835

    In the acquisition of visual information, the nonuniform sampling process performed by photoreceptors on the retina occurs at the earliest stage of visual processing. The human eye obtains high visual resolution for objects of interest through the nonuniform distribution of photoreceptors. Therefore, this paper proposes auto exposure and auto focus algorithms for real-time video camera systems based on this visual characteristic of the human eye. For given moving objects, a visual weight is modeled to quantify visual importance, and the associated auto exposure and focus parameters are derived by applying the weight to traditional numerical expressions, i.e., the DoM (Difference of Median) and Tenengrad methods for auto focus.
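    A minimal sketch of a visually weighted Tenengrad focus measure is given below; the Gaussian "visual importance" map centered on the object of interest is an illustrative stand-in for the visual-weight model of the paper, and the DoM-based exposure part is omitted.

      import numpy as np

      def tenengrad(image):
          # Per-pixel squared Sobel gradient magnitude (classic Tenengrad response).
          img = image.astype(float)
          gx = (img[:-2, 2:] + 2 * img[1:-1, 2:] + img[2:, 2:]
                - img[:-2, :-2] - 2 * img[1:-1, :-2] - img[2:, :-2])
          gy = (img[2:, :-2] + 2 * img[2:, 1:-1] + img[2:, 2:]
                - img[:-2, :-2] - 2 * img[:-2, 1:-1] - img[:-2, 2:])
          return gx ** 2 + gy ** 2

      def weighted_focus(image, center, sigma=20.0):
          # Weight the Tenengrad response by a Gaussian visual-importance map.
          g = tenengrad(image)
          ys, xs = np.mgrid[0:g.shape[0], 0:g.shape[1]]
          w = np.exp(-((ys - center[0]) ** 2 + (xs - center[1]) ** 2) / (2 * sigma ** 2))
          return float((w * g).sum() / w.sum())

      frame = np.random.default_rng(2).integers(0, 256, (120, 160))
      print(weighted_focus(frame, center=(60, 80)))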

  • A More Efficient COPE Architecture for Network Coding in Multihop Wireless Networks

    Kaikai CHI  Xiaohong JIANG  Susumu HORIGUCHI
    PAPER  Vol: E92-B No:3  Page(s): 766-775

    Recently, a promising packet forwarding architecture, COPE, was proposed to substantially improve the throughput of multihop wireless networks, in which each network node can intelligently encode multiple packets together and forward them in a single transmission. However, COPE is still in its infancy and has the following limitations: (1) COPE adopts FIFO packet scheduling and thus does not provide different priorities for different types of packets. (2) COPE simply classifies all packets destined to the same nexthop into small-size or large-size virtual queues and examines only the head packet of each virtual queue to find coding solutions. Such a queueing structure loses some potential coding opportunities, because among the packets destined to the same nexthop at most two (the head packets of the small-size and large-size queues) are examined in the coding process, regardless of the number of flows. (3) The coding algorithm adopted in COPE is fast but cannot always find good solutions. In order to address the above limitations, in this paper we first present a new queueing structure for COPE, which provides more potential coding opportunities, and then propose a new packet scheduling algorithm for this queueing structure that assigns different priorities to different types of packets. Finally, we propose an efficient coding algorithm to find appropriate packets for coding. Simulation results demonstrate that this new COPE architecture can further improve node transmission efficiency considerably.
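    The basic operation behind COPE-style coding — XOR-ing several packets into one transmission that each neighbor can decode with the packets it already holds — can be sketched as follows; the queueing structure and the coding-opportunity search discussed above are omitted.

      from functools import reduce

      def xor_encode(packets):
          # XOR several equal-length packets into one coded packet.
          return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), packets)

      def xor_decode(coded, known_packets):
          # Recover the single missing packet given all the other XOR-ed packets.
          return xor_encode([coded] + list(known_packets))

      p1, p2, p3 = b"AAAA", b"BBBB", b"CCCC"
      coded = xor_encode([p1, p2, p3])      # the relay sends one coded transmission
      print(xor_decode(coded, [p2, p3]))    # a neighbor holding p2 and p3 recovers p1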

  • Fast Local Algorithms for Large Scale Nonnegative Matrix and Tensor Factorizations

    Andrzej CICHOCKI  Anh-Huy PHAN
    INVITED PAPER  Vol: E92-A No:3  Page(s): 708-721

    Nonnegative matrix factorization (NMF) and its extensions, such as Nonnegative Tensor Factorization (NTF), have become prominent techniques for blind source separation (BSS), analysis of image databases, data mining, and other information retrieval and clustering applications. In this paper we propose a family of efficient algorithms for NMF/NTF, as well as for sparse nonnegative coding and representation, that have many potential applications in computational neuroscience, multi-sensory processing, compressed sensing, and multidimensional data analysis. We have developed a class of optimized local algorithms referred to as Hierarchical Alternating Least Squares (HALS) algorithms. For this purpose, we perform sequential constrained minimization on a set of squared Euclidean distances. We then extend this approach to robust cost functions using the alpha and beta divergences and derive flexible update rules. Our algorithms are locally stable and work well for NMF-based blind source separation (BSS), not only in the over-determined case but also in the under-determined (over-complete) case (i.e., for a system that has fewer sensors than sources), provided the data are sufficiently sparse. The NMF learning rules are extended and generalized to N-th order nonnegative tensor factorization (NTF). Moreover, these algorithms can be tuned to different noise statistics by adjusting a single parameter. Extensive experimental results confirm the accuracy and computational performance of the developed algorithms, especially with the use of the multi-layer hierarchical NMF approach [3].
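    A compact sketch of one HALS sweep for plain NMF under the squared Euclidean cost is shown below; the alpha/beta-divergence variants, sparsity constraints, tensor extension, and multi-layer approach discussed in the paper are not included.

      import numpy as np

      def hals_nmf(X, rank, n_iter=200, eps=1e-9, seed=0):
          # Hierarchical Alternating Least Squares for X ~ W @ H with W, H >= 0.
          rng = np.random.default_rng(seed)
          m, n = X.shape
          W, H = rng.random((m, rank)), rng.random((rank, n))
          for _ in range(n_iter):
              XHt, HHt = X @ H.T, H @ H.T
              for j in range(rank):   # update columns of W one at a time
                  W[:, j] = np.maximum(eps, W[:, j] + (XHt[:, j] - W @ HHt[:, j]) / HHt[j, j])
              WtX, WtW = W.T @ X, W.T @ W
              for j in range(rank):   # update rows of H one at a time
                  H[j, :] = np.maximum(eps, H[j, :] + (WtX[j, :] - WtW[j, :] @ H) / WtW[j, j])
          return W, H

      X = np.random.default_rng(1).random((30, 20))
      W, H = hals_nmf(X, rank=4)
      print(np.linalg.norm(X - W @ H) / np.linalg.norm(X))   # relative fit error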

  • An Equivalent Division Method for Reducing Test Cases in State Transition Testing of MANET Protocols

    Hideharu KOJIMA  Juichi TAKAHASHI  Tomoyuki OHTA  Yoshiaki KAKUDA
    PAPER  Vol: E92-B No:3  Page(s): 794-806

    A typical feature of MANETs is that the network topology is dynamically changed by node movement. When we execute state transition testing for such protocols, we first draw the Finite State Machine (FSM) with respect to each number of neighbor nodes. Next, we create the state transition matrix from the FSMs. Then, we generate test cases from the state transition matrix. However, the state transition matrix becomes very large because the number of states and the number of transitions increase explosively as the number of neighbor nodes increases. As a result, the number of test cases increases as well. In this paper, we propose a new method for reducing the number of test cases by using an equivalent division method. In this method, we decide a representative input for each state, which is selected from the equivalent inputs to the state. By using our proposed method, we can generate a state transition matrix whose size is hardly affected by an increase in the number of neighbor nodes. As a consequence, the number of test cases can be reduced.

  • Multiuser Distortion Management Scheme for H.264 Video Transmission in OFDM Systems

    Hojin HA  Young Yong KIM
    PAPER-Network  Vol: E92-B No:3  Page(s): 850-857

    In this paper, we propose a subcarrier resource allocation algorithm for managing video quality degradation in multiuser orthogonal frequency division multiplexing (OFDM) systems. The proposed algorithm exploits the unequal importance of different picture types in video coding and the diversity of subcarriers in multiuser systems. A model-based performance metric is first derived, considering the error concealment and error propagation properties of the H.264 video coding structure. Based on the information on the video quality enhancement carried by a packet to be transmitted, we propose a distortion management algorithm that balances the subcarrier and power usage of each user and minimizes the overall video quality degradation. In the simulation results, the proposed algorithm demonstrates more gradual video quality degradation for different numbers of users compared with other resource allocation schemes.

  • Privacy Protection by Masking Moving Objects for Security Cameras

    Kenichi YABUTA  Hitoshi KITAZAWA  Toshihisa TANAKA
    PAPER-Image  Vol: E92-A No:3  Page(s): 919-927

    Because of the increasing number of security cameras, it is crucial to establish a system that protects the privacy of objects in the recorded images. To this end, we propose a framework of image processing and data hiding for security monitoring and privacy protection. First, we state the requirements of the proposed monitoring systems and suggest a possible implementation that satisfies those requirements. The underlying concept of our proposed framework is as follows: (1) in the recorded images, the objects whose privacy should be protected are deteriorated by appropriate image processing; (2) the original objects are encrypted and watermarked into the output image, which is encoded using an image compression standard; (3) real-time processing is performed such that no future frame is required to generate an output bitstream. It should be noted that in this framework, anyone can observe the decoded image, which includes the deteriorated objects that are unrecognizable or invisible. On the other hand, for crime investigation, this system allows a limited number of users to observe the original objects by using a special viewer that decrypts and decodes the watermarked objects with a decoding password. Moreover, the special viewer allows us to select the objects to be decoded and displayed. We provide an implementation example, experimental results, and performance evaluations to support our proposed framework.

  • Short-Exponent RSA

    Hung-Min SUN  Cheng-Ta YANG  Mu-En WU
    PAPER-Cryptography and Information Security  Vol: E92-A No:3  Page(s): 912-918

    In some applications, a short private exponent d is chosen to speed up the decryption or signing process of the RSA public key cryptosystem. However, in typical RSA, if the private exponent d is selected first, the public exponent e should be of the same order of magnitude as φ(N). Sun et al. devised three RSA variants using unbalanced prime factors p and q to lower the computational cost. Unfortunately, Durfee and Nguyen broke the illustrated instances of the first and third variants by solving for small roots of trivariate modular polynomial equations. They also indicated that instances with unbalanced primes p and q are less secure than instances with balanced p and q. This investigation focuses on designing a new RSA variant with balanced p and q and short exponents d and e, to improve the security of an RSA variant against Durfee and Nguyen's attack as well as the other existing attacks. Furthermore, the proposed variant (Scheme A) is extended to another RSA variant (Scheme B) in which p and q are balanced and a trade-off between the lengths of d and e is enabled. In addition, we provide a security analysis and a feasibility analysis of the proposed schemes.
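    As background, the two exponents are linked by the usual RSA key equation, which is why choosing a short d first normally forces e to be of the order of φ(N):

      e d \equiv 1 \pmod{\varphi(N)}, \qquad \varphi(N) = (p - 1)(q - 1)

    The small-d attacks that motivate such designs include Wiener's continued-fraction attack, which recovers d when roughly d < N^{1/4}, and the Boneh-Durfee lattice attack, which extends this to about d < N^{0.292}.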
