
Keyword Search Result

[Keyword] ATI(18690hit)

2401-2420hit(18690hit)

  • GNSS Correction Using Altitude Map and Its Integration with Pedestrian Dead Reckoning

    Yuyang HUANG  Li-Ta HSU  Yanlei GU  Shunsuke KAMIJO  

     
    PAPER-Intelligent Transport System

      Vol:
    E101-A No:8
      Page(s):
    1245-1256

    Accurate pedestrian navigation remains a challenge in urban environments. GNSS receivers behave poorly because of the reflection and blockage of GNSS signals by buildings and other obstacles. Integrating GNSS positioning with Pedestrian Dead Reckoning (PDR) can provide a smoother navigation trajectory. However, the integrated system cannot deliver satisfactory performance if the GNSS positioning has large errors, which often happens in urban scenarios. This paper focuses on improving the accuracy of pedestrian navigation in urban environments using a proposed altitude-map-aided GNSS positioning method. First, we use a consistency-check algorithm, similar to receiver autonomous integrity monitoring (RAIM) fault detection, to distinguish healthy measurements from multipath-contaminated ones. Afterwards, the erroneous signals are corrected with the help of an altitude map; we call the proposed method altitude-map-aided GNSS. After correcting the erroneous satellite signals, the mean positioning error is reduced from 17 meters to 12 meters. Good performance of an integrated system usually requires an accurately calculated GNSS accuracy value, but the conventional GNSS accuracy calculation is not reliable in urban canyons. In this paper, the altitude map is therefore also used to calculate the GNSS localization accuracy in order to indicate the reliability of the estimated position solution. The altitude-map-aided GNSS and its accuracy measure are used in integration with the PDR system to provide more accurate and continuous positioning results. With the help of the proposed GNSS accuracy, the integrated system achieves 6.5 meters horizontal positioning accuracy in urban environments.
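    The consistency-check step can be sketched in a few lines; this is a hypothetical one-dimensional stand-in for the RAIM-like fault detection described above (the function name, data, and threshold are invented for illustration), not the paper's algorithm:

    ```python
    from statistics import median

    def consistency_check(estimates, threshold):
        """Flag measurements whose leave-one-out residual is large,
        in the spirit of RAIM fault detection (toy 1-D version)."""
        healthy, flagged = [], []
        for i, x in enumerate(estimates):
            others = [e for j, e in enumerate(estimates) if j != i]
            residual = abs(x - median(others))  # leave-one-out solution
            (flagged if residual > threshold else healthy).append(i)
        return healthy, flagged

    # Four consistent position estimates and one multipath-like outlier
    healthy, flagged = consistency_check([10.1, 9.9, 10.0, 10.2, 25.0], 1.0)
    ```

    The leave-one-out median keeps the reference solution robust, so the outlier is flagged without contaminating the residuals of the healthy measurements.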

  • Full-Duplex Cooperative Cognitive Radio Networks with Simultaneous Transmit and Receive Antennas in MIMO Channels

    Sangwoo PARK  Iickho SONG  Seungwon LEE  Seokho YOON  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2018/01/31
      Vol:
    E101-B No:8
      Page(s):
    1903-1915

    We propose a cooperative cognitive radio network (CCRN) with secondary users (SUs) employing two simultaneous transmit and receive (STAR) antennas. In the proposed framework of full-duplex (FD) multiple-input-multiple-output (MIMO) CCRN, the region of achievable rate is expanded via FD communication among SUs enabled by the STAR antennas adopted for the SUs. The link capacity of the proposed framework is analyzed theoretically. It is shown through numerical analysis that the proposed FD MIMO-CCRN framework can provide a considerable performance gain over the conventional frameworks of CCRN and MIMO-CCRN.

  • Adaptive Beamforming Based on Compressed Sensing with Gain/Phase Uncertainties

    Bin HU  Xiaochuan WU  Xin ZHANG  Qiang YANG  Di YAO  Weibo DENG  

     
    LETTER-Digital Signal Processing

      Vol:
    E101-A No:8
      Page(s):
    1257-1262

    A new adaptive digital beamforming method based on compressed sensing (CS) is presented for sparse receiving arrays with gain/phase uncertainties. Because the arriving signals are sparse, CS theory can be adopted to sample and recover the received signals with less data. However, due to the gain/phase uncertainties, the sparse representation of the signal is not optimal. To eliminate the influence of the gain/phase uncertainties on the sparse representation, most existing studies focus on calibrating the gain/phase uncertainties first. To overcome the effect of the gain/phase uncertainties, this paper proposes a new dictionary optimization method based on the total least squares (TLS) algorithm. We transform the array signal receiving model with gain/phase uncertainties into an errors-in-variables (EIV) model, treating the effect of the gain/phase uncertainties as an additive error matrix. The proposed method reconstructs the data by estimating the sparse coefficients with a CS signal reconstruction algorithm and updating the error matrix of gain/phase uncertainties with the TLS method. Simulation results show that the sparse regularized total least squares algorithm recovers the received signals better under gain/phase uncertainties. Adaptive digital beamforming algorithms are then adopted to form the antenna beam using the recovered data.
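    The TLS idea behind the dictionary update can be illustrated with a toy errors-in-variables fit; this sketch assumes a simple 2-D line model with noise allowed in both variables, and is not the paper's sparse dictionary optimization:

    ```python
    import math

    def tls_slope(xs, ys):
        """Total-least-squares slope: the principal direction of the
        centred scatter matrix, which treats errors in xs and ys
        symmetrically (the EIV viewpoint)."""
        mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
        sxx = sum((x - mx) ** 2 for x in xs)
        syy = sum((y - my) ** 2 for y in ys)
        sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        # largest eigenvalue of [[sxx, sxy], [sxy, syy]] in closed form
        lam = 0.5 * (sxx + syy + math.sqrt((sxx - syy) ** 2 + 4 * sxy * sxy))
        return (lam - sxx) / sxy

    slope = tls_slope([0, 1, 2, 3], [0, 2, 4, 6])  # exact data, slope 2
    ```

    Unlike ordinary least squares, which attributes all error to ys, the principal-direction fit minimizes perpendicular distances, matching the additive-error-matrix view of the EIV model.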

  • Path Loss Model Considering Blockage Effects of Traffic Signs Up to 40GHz in Urban Microcell Environments

    Motoharu SASAKI  Minoru INOMATA  Wataru YAMADA  Naoki KITA  Takeshi ONIZAWA  Masashi NAKATSUGAWA  Koshiro KITAO  Tetsuro IMAI  

     
    PAPER-Antennas and Propagation

      Publicized:
    2018/02/21
      Vol:
    E101-B No:8
      Page(s):
    1891-1902

    This paper presents the characteristics of path loss produced by traffic sign blockage. Multiple frequency bands, including high frequency bands up to 40 GHz, are analyzed on the basis of measurement results in urban microcell environments. It is shown that the measured path loss increases compared to free space path loss even on a straight line-of-sight road, and that the excess attenuation is caused by the blockage effects of traffic signs. It is also shown that the measurement area affected by the blockage becomes smaller as the frequency increases: the blocking object occupies the same area at all frequencies, but it takes up a larger portion of the Fresnel zone as the frequency increases, so when blockage occurs, the excess loss in high frequency bands becomes larger than in low frequency bands. In addition, the validity of two blockage path loss models is verified on the basis of the measurement results. The first is the 3GPP blockage model and the second is the proposed blockage model, which is an expanded version of the basic diffraction model in ITU-R P.526. It is shown that these blockage models can predict the path loss increase caused by traffic sign blockage and that their root mean square errors are improved compared to those of the 3GPP two-slope model and a free space path loss model. The 3GPP blockage model is found to be more accurate at 26.4 and 37.1 GHz, while the proposed model is more accurate at 0.8, 2.2, and 4.7 GHz. These results clarify the blockage path loss due to traffic signs over a wide frequency range and verify that the 3GPP blockage model and the proposed blockage model can accurately predict it.
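    The frequency dependence described above follows directly from the first Fresnel zone radius, r_n = sqrt(n * lambda * d1 * d2 / (d1 + d2)); a short sketch (the 100 m link geometry is illustrative, not the paper's measurement setup):

    ```python
    import math

    C = 299_792_458.0  # speed of light, m/s

    def fresnel_radius(freq_hz, d1_m, d2_m, n=1):
        """Radius of the n-th Fresnel zone at distances d1, d2 from
        the two link ends: r_n = sqrt(n * lambda * d1 * d2 / (d1 + d2))."""
        lam = C / freq_hz
        return math.sqrt(n * lam * d1_m * d2_m / (d1_m + d2_m))

    # A fixed-size traffic sign blocks a larger fraction of the zone
    # as the zone shrinks with frequency (midpoint of a 100 m link):
    for f_ghz in (0.8, 2.2, 4.7, 26.4, 37.1):
        print(f"{f_ghz:5.1f} GHz: r1 = {fresnel_radius(f_ghz * 1e9, 50, 50):.3f} m")
    ```

    At the midpoint of a 100 m link the first-zone radius shrinks from about 3 m at 0.8 GHz to well under 0.5 m at 37.1 GHz, which is why the same sign causes far more excess loss at the higher bands.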

  • ZINK: An Efficient Information Centric Networking Utilizing Layered Network Architecture

    Takao KONDO  Shuto YOSHIHARA  Kunitake KANEKO  Fumio TERAOKA  

     
    PAPER-Network

      Publicized:
    2018/02/16
      Vol:
    E101-B No:8
      Page(s):
    1853-1865

    This paper argues that a layered approach is more suitable for Information Centric Networking (ICN) than a narrow-waist approach and proposes an ICN mechanism called ZINK. In ZINK, a location-independent content name is resolved to a list of node IDs of content servers in the application layer, and a node ID is mapped to a node locator in the network layer, which results in scalable locator-based routing. The ID/locator split approach in the network layer can efficiently support client/server mobility. Efficient content transfer is achieved by using sophisticated functions in the transport layer, such as multipath transfer for bandwidth aggregation or fault tolerance. Existing well-tuned congestion control in the transport layer achieves fairness not only among ICN flows but also between ICN flows and other flows. A proof-of-concept prototype of ZINK was implemented on an IPv6 stack. Evaluation results show that the time for content finding is practical, that efficient content transfer is possible using multipath transfer, and that the mobility support mechanism is scalable, as demonstrated in a nationwide experiment environment in Japan.

  • Revealing of the Underlying Mechanism of Different Node Centralities Based on Oscillation Dynamics on Networks

    Chisa TAKANO  Masaki AIDA  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2018/02/01
      Vol:
    E101-B No:8
      Page(s):
    1820-1832

    In recent years, with the rapid development of the Internet and cloud computing, an enormous amount of information is exchanged on various social networking services. To handle and maintain such a mountain of information properly with the limited resources in the network, it is very important to understand the dynamics of information or activity propagation on social networks. One of the indices used in social network analysis, which investigates network structure, is "node centrality". A common characteristic of conventional node centralities is that they depend on the topological structure of the network; the value of a node centrality does not change unless the topology changes. Network dynamics, however, are generated by interaction between users whose strength is in general asymmetric. A network structure reflecting such asymmetric interaction is modeled by a directed graph and described by an asymmetric matrix in a matrix-based network model. In this paper, we show an oscillation model that describes dynamics on networks generated from a certain kind of asymmetric interaction between nodes by using a symmetric matrix. Moreover, we propose a new extended index of two well-known node centralities based on the oscillation model. In addition, we show that the proposed index can describe various aspects of node centrality that consider not only the topological structure of the network but also the asymmetry of links, the distribution of the source nodes of activity, and the temporal evolution of activity propagation, by properly assigning the weight of each link. The proposed model can be regarded as a fundamental framework for different node centralities.
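    For contrast, the conventional, topology-only notion of node centrality that the proposal extends can be sketched as plain power iteration on the adjacency matrix (a toy illustration of the baseline; the paper's oscillation-based index is not shown here):

    ```python
    def eigenvector_centrality(adj, iters=100):
        """Power iteration on A + I (the identity shift keeps the
        iteration from oscillating on bipartite graphs); this is the
        conventional, topology-only centrality that depends only on
        the adjacency structure."""
        n = len(adj)
        x = [1.0] * n
        for _ in range(iters):
            y = [x[i] + sum(adj[i][j] * x[j] for j in range(n))
                 for i in range(n)]
            m = max(y)
            x = [v / m for v in y]
        return x

    # Star graph: node 0 is the hub, nodes 1-3 are leaves
    adj = [[0, 1, 1, 1], [1, 0, 0, 0], [1, 0, 0, 0], [1, 0, 0, 0]]
    centrality = eigenvector_centrality(adj)
    ```

    The hub dominates and the leaves share one identical score, illustrating the point made above: the values are fixed by the topology alone and cannot reflect asymmetric interaction strengths or activity sources.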

  • Reconstruction of Feedback Polynomial of Synchronous Scrambler Based on Triple Correlation Characteristics of M-Sequences

    Shunan HAN  Min ZHANG  Xinhao LI  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2018/01/16
      Vol:
    E101-B No:7
      Page(s):
    1723-1732

    For the reconstruction of the feedback polynomial of a synchronous scrambler placed after a convolutional encoder, the existing algorithms require prior knowledge of a dual word of the convolutional code. To address the case where a dual word is unknown, a new algorithm for reconstructing the feedback polynomial based on the triple correlation characteristics of m-sequences is proposed. First, the scrambled convolutional code sequence is divided into bit blocks; the product of the scrambled bit blocks with a dual word is proven to be an m-sequence with the same period as the synchronous scrambler. Second, based on the triple correlation characteristic of the generated m-sequence, a dual word is estimated, and the generator polynomial of the generated m-sequence is computed from two locations of the triple correlation peaks. Finally, the feedback polynomial is reconstructed using the generator polynomial of the generated m-sequence. As the received sequence may contain bit errors, a method for detecting triple correlation peaks based on the constant false-alarm criterion is elaborated. Experimental results show that the proposed algorithm is effective. Unlike the existing algorithms, there is no need to know a dual word a priori, and the reconstruction result is more accurate. Moreover, the proposed algorithm is robust to bit errors.
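    The m-sequence structure that the triple correlation exploits can be illustrated with a small LFSR sketch; the polynomial and parameters are illustrative, and this is not the reconstruction algorithm itself:

    ```python
    def lfsr_msequence(taps, degree, length):
        """One run of a Fibonacci LFSR.  `taps` lists the exponents of
        the feedback polynomial (x^4 + x + 1 -> taps=(4, 1)); a
        primitive polynomial yields an m-sequence of period
        2**degree - 1."""
        state = 1
        out = []
        for _ in range(length):
            out.append(state & 1)
            fb = 0
            for t in taps:
                fb ^= (state >> (t - 1)) & 1
            state = (state >> 1) | (fb << (degree - 1))
        return out

    seq = lfsr_msequence((4, 1), 4, 30)
    period = seq[:15]

    # Shift-and-add property behind the triple correlation peaks:
    # an m-sequence XORed with a cyclic shift of itself is again a
    # cyclic shift of the same sequence.
    shifted = period[1:] + period[:1]
    summed = [a ^ b for a, b in zip(period, shifted)]
    ```

    The shift-and-add property is exactly what produces the sharp triple correlation peaks whose locations reveal the generator polynomial.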

  • An Investigative Study on How Developers Filter and Prioritize Code Smells

    Natthawute SAE-LIM  Shinpei HAYASHI  Motoshi SAEKI  

     
    PAPER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1733-1742

    Code smells are indicators of design flaws or problems in the source code. Various tools and techniques have been proposed for detecting code smells. These tools generally detect a large number of code smells, so approaches have also been developed for prioritizing and filtering code smells. However, lack of empirical data detailing how developers filter and prioritize code smells hinders improvements to these approaches. In this study, we investigated ten professional developers to determine the factors they use for filtering and prioritizing code smells in an open source project under the condition that they complete a list of five tasks. In total, we obtained 69 responses for code smell filtration and 50 responses for code smell prioritization from the ten professional developers. We found that Task relevance and Smell severity were most commonly considered during code smell filtration, while Module importance and Task relevance were employed most often for code smell prioritization. These results may facilitate further research into code smell detection, prioritization, and filtration to better focus on the actual needs of developers.

  • Active Contours Driven by Local Rayleigh Distribution Fitting Energy for Ultrasound Image Segmentation

    Hui BI  Yibo JIANG  Hui LI  Xuan SHA  Yi WANG  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2018/02/08
      Vol:
    E101-D No:7
      Page(s):
    1933-1937

    Ultrasound image segmentation is a crucial task in many clinical applications. However, ultrasound images are difficult to segment due to image inhomogeneity caused by the ultrasound imaging technique. In this paper, to deal with image inhomogeneity while considering ultrasound image properties, a Local Rayleigh Distribution Fitting (LRDF) energy term is newly introduced into the traditional level set method. The curve evolution equation is derived for energy minimization, and a self-driven uterus contour is achieved on the ultrasound images. Experimental segmentation results on synthetic images and in-vivo ultrasound images show that the proposed approach is effective and accurate, with a Dice Score Coefficient (DSC) of 0.95 ± 0.02.

  • A Method of Verifying Time-Response Requirements

    Yuma MATSUMOTO  Takayuki OMORI  Hiroya ITOGA  Atsushi OHNISHI  

     
    PAPER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1725-1732

    To verify the correctness of functional requirements, we have been developing a verification method for functional requirements specifications using the Requirements Frame model. In this paper, we propose a verification method for non-functional requirements specifications, specifically time-response requirements written in natural language. We established the verification method by extending the Requirements Frame model and have also developed a prototype system based on the method in Java. The extended Requirements Frame model and the verification method are illustrated with examples.

  • Implementing Adaptive Decisions in Stochastic Simulations via AOP

    Pilsung KANG  

     
    LETTER-Software Engineering

      Publicized:
    2018/04/05
      Vol:
    E101-D No:7
      Page(s):
    1950-1953

    We present a modular way of implementing adaptive decisions in scientific simulations. The proposed method employs modern software engineering mechanisms to allow for better software management in scientific computing, where software adaptation has often been implemented manually by the programmer or with in-house tools, which complicates software management over time. By applying the aspect-oriented programming (AOP) paradigm, we treat software adaptation as a separate concern and, using popular AOP constructs, implement adaptive decisions separately from the original code base, thereby improving software management. We demonstrate the effectiveness of our approach with applications to stochastic simulation software.
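    In Python, the same separation of concerns can be mimicked with a decorator acting as AOP-style "before" advice; all names here are illustrative, and the paper's actual AOP tooling may differ:

    ```python
    import functools

    def with_advice(advice):
        """Wrap a simulation step with advice that implements the
        adaptive decision outside the base simulation code."""
        def wrap(sim_step):
            @functools.wraps(sim_step)
            def wrapper(state):
                return sim_step(advice(state))
            return wrapper
        return wrap

    def halve_dt_if_unstable(state):
        # adaptive decision (the cross-cutting concern): shrink the
        # step size when the state grows large
        if abs(state["x"]) > 10.0:
            state["dt"] /= 2
        return state

    @with_advice(halve_dt_if_unstable)
    def euler_step(state):
        # base simulation code: explicit Euler for dx/dt = -x
        state["x"] += state["dt"] * (-state["x"])
        return state

    s = euler_step({"x": 20.0, "dt": 1.0})  # advice halves dt first
    ```

    The base `euler_step` knows nothing about the adaptation policy; swapping or removing the advice changes behavior without touching the simulation code, which is the maintainability benefit the paper attributes to AOP.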

  • Efficient Transceiver Design for Large-Scale SWIPT System with Time-Switching and Power-Splitting Receivers

    Pham-Viet TUAN  Insoo KOO  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies

      Publicized:
    2018/01/12
      Vol:
    E101-B No:7
      Page(s):
    1744-1751

    The combination of large-scale antenna arrays and simultaneous wireless information and power transfer (SWIPT), which can provide an enormous increase in throughput and energy efficiency, is a promising key technology in next-generation wireless systems (5G). This paper investigates efficient transceiver design that minimizes transmit power, subject to the users' required data rates and harvested energy, in a large-scale SWIPT system where the base station uses a very large number of antennas to transmit both data and energy to multiple users equipped with time-switching (TS) or power-splitting (PS) receiver structures. We first apply the well-known semidefinite relaxation (SDR) and Gaussian randomization techniques to solve the minimum-transmit-power problems. However, for these large-scale SWIPT problems, the scheme based on the conventional SDR method is unsuitable due to its excessive computation cost, and a consensus alternating direction method of multipliers (ADMM) cannot be applied directly when TS or PS ratios are involved in the optimization problem. Therefore, in the second solution, we first optimize the TS or PS ratio variables to obtain simplified problems. We then propose fast algorithms for solving these problems, in which an outer loop of sequential parametric convex approximation (SPCA) is combined with an inner loop of ADMM. Numerical simulations show the fast convergence and superiority of the proposed solutions.

  • Representation Learning for Users' Web Browsing Sequences

    Yukihiro TAGAMI  Hayato KOBAYASHI  Shingo ONO  Akira TAJIMA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1870-1879

    Modeling user activities on the Web is a key problem for various Web services, such as news article recommendation and ad click prediction. In our work-in-progress paper [1], we introduced an approach that summarizes each sequence of user Web page visits using Paragraph Vector [3], considering users and URLs as paragraphs and words, respectively. The learned user representations are shared among the user-related prediction tasks. In this paper, on the basis of an analysis of our Web page visit data, we propose Backward PV-DM, a modified version of Paragraph Vector. We show experimental results on two ad-related data sets based on logs from Web services of Yahoo! JAPAN. Our proposed method achieved better results than those of existing vector models.

  • On the Feasibility of an Adaptive Movable Access Point System in a Static Indoor WLAN Environment

    Tomoki MURAKAMI  Shingo OKA  Yasushi TAKATORI  Masato MIZOGUCHI  Fumiaki MAEHARA  

     
    PAPER-Antennas and Propagation

      Publicized:
    2018/01/10
      Vol:
    E101-B No:7
      Page(s):
    1693-1700

    This paper investigates an adaptive movable access point (AMAP) system and explores its feasibility in a static indoor classroom environment with a wireless local area network (WLAN) system. In the AMAP system, the positions of multiple access points (APs) are adaptively moved in accordance with clustered user groups, which ensures effective coverage of non-uniform user distributions over the target area and enhances the signal-to-interference-plus-noise power ratio (SINR) performance. To derive the appropriate AP positions, the AMAP system uses the k-means method. To accurately estimate each user's position within the target area for user clustering, we use the general methods of received signal strength indicator (RSSI) or time of arrival (ToA) measurements provided by the WLAN systems. To clarify the basic effectiveness of the AMAP system, we first evaluate the SINR performance of the AMAP system and a conventional system of fixed APs placed at equal intervals using computer simulations. Moreover, we demonstrate the quantitative improvement in SINR performance by analyzing ToA and RSSI data measured in an indoor classroom environment, thereby clarifying the feasibility of the AMAP system.
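    The clustering step can be sketched with a plain k-means on user coordinates (a minimal illustration; the positions are invented and the paper's RSSI/ToA-based position estimation is not shown):

    ```python
    import math
    import random

    def kmeans_ap_positions(users, k, iters=50, seed=0):
        """Cluster user (x, y) positions; the centroids serve as
        candidate AP positions, as in the AMAP clustering step."""
        rng = random.Random(seed)
        centroids = rng.sample(users, k)
        for _ in range(iters):
            clusters = [[] for _ in range(k)]
            for u in users:
                i = min(range(k), key=lambda c: math.dist(u, centroids[c]))
                clusters[i].append(u)
            for i, members in enumerate(clusters):
                if members:
                    centroids[i] = (sum(p[0] for p in members) / len(members),
                                    sum(p[1] for p in members) / len(members))
        return centroids

    # Two well-separated user groups -> one AP per group
    users = [(0, 0), (0, 1), (1, 0), (10, 10), (10, 11), (11, 10)]
    aps = sorted(kmeans_ap_positions(users, 2))
    ```

    With the users split into two spatial groups, the converged centroids land at the group means, so each movable AP ends up centered on one user cluster.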

  • Usability Evaluation Method of Applications for Mobile Computers Using Operation Histories

    Junko SHIROGANE  Misaki MATSUZAWA  Hajime IWATA  Yoshiaki FUKAZAWA  

     
    PAPER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1790-1800

    Various applications have been realized on mobile computers such as smartphones and tablet computers. Because mobile computers have smaller screens than conventional computers, strategies for developing user interfaces differ from those for conventional computer applications; for example, the contents of a window are reduced or divided across multiple windows on mobile computers. Realizing usable applications in this situation makes usability evaluations important. Although various usability evaluation methods for mobile computers have been proposed, few evaluate applications and identify problems automatically. Herein we propose a systematic usability evaluation method in which users' operation histories are recorded and analyzed to identify steps with usability problems. Our method analyzes usability problems automatically, allowing usability evaluations to be implemented easily and economically in software development. As a case study, operation histories were recorded and analyzed while 20 subjects operated an application on a tablet computer. Our method automatically identified many usability problems, confirming its effectiveness.

  • Robust Human-Computer Interaction for Unstable Camera Systems

    Hao ZHU  Qing YOU  Wenjie CHEN  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2018/03/26
      Vol:
    E101-D No:7
      Page(s):
    1915-1923

    Many vision systems have been embedded in devices around us, such as mobile phones, vehicles, and UAVs, and many of them still need interactive operations from human users. However, specifying accurate object information can be a challenging task due to video jitter caused by camera shake and target motion. In this paper, we first collect practical hand-drawn bounding boxes on real-life videos captured by hand-held cameras and UAV-based cameras, and take a deep look into human-computer interactive operations on unstable images. The collected data show that human input suffers heavy deviations that are harmful to interaction accuracy. To achieve robust interactions on unstable platforms, we propose a target-focused video stabilization method that utilizes a proposal-based object detector and a tracking-based motion estimation component. This method starts with a single manual click and outputs a stabilized video stream in which the specified target stays almost stationary. Our method removes not only camera jitter but also target motion simultaneously, therefore offering a comfortable environment for users to perform further interactive operations. The experiments demonstrate that the proposed method effectively eliminates image vibrations and significantly increases human input accuracy.

  • Towards an Improvement of Bug Report Summarization Using Two-Layer Semantic Information

    Cheng-Zen YANG  Cheng-Min AO  Yu-Han CHUNG  

     
    PAPER

      Publicized:
    2018/04/20
      Vol:
    E101-D No:7
      Page(s):
    1743-1750

    Bug report summarization has been explored in past research to help developers comprehend the important information needed in the bug resolution process. As text mining technology advances, many summarization approaches have been proposed to provide substantial summaries of bug reports. In this paper, we propose an enhanced summarization approach called TSM, which first extends a semantic model used in AUSUM with the anthropogenic and procedural information in bug reports and then integrates the extended semantic model with the shallow textual information used in BRC. We have conducted experiments with a dataset of realistic software projects. Compared with the baseline approaches BRC and AUSUM, TSM achieves relative improvements of 34.3% and 7.4% in the F1 measure, respectively. The experimental results show that TSM can effectively improve summarization performance.

  • Growth Mechanism of Polar-Plane-Free Faceted InGaN Quantum Wells Open Access

    Yoshinobu MATSUDA  Mitsuru FUNATO  Yoichi KAWAKAMI  

     
    INVITED PAPER

      Vol:
    E101-C No:7
      Page(s):
    532-536

    The growth mechanisms of three-dimensionally (3D) faceted InGaN quantum wells (QWs) on (1̄1̄22̄) GaN substrates are discussed. The structure is composed of (1̄1̄22̄), {1̄101̄}, and {1̄100} planes, and the cross-sectional shape is similar to that of 3D QWs on (0001). However, the 3D QWs on (1̄1̄22̄) and (0001) show quite different inter-facet variation of In compositions. To clarify this observation, the local thicknesses of constituent InN and GaN on the 3D GaN are fitted with a formula derived from the diffusion equation. It is suggested that the difference in the In incorporation efficiency of each crystallographic plane strongly affects the surface In adatom migration.

  • A Relaxed Bit-Write-Reducing and Error-Correcting Code for Non-Volatile Memories

    Tatsuro KOJO  Masashi TAWADA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    LETTER

      Vol:
    E101-A No:7
      Page(s):
    1045-1052

    Non-volatile memories are a promising alternative in memory design, but data stored in them may still be corrupted by crosstalk and radiation. The stored data can be restored by using error-correcting codes, but these require extra bits to correct bit errors. One of the largest problems with non-volatile memories is that they consume ten to a hundred times more energy than normal memories when writing bits, so it is essential to reduce the number of written bits. Recently, the REC code (bit-write-reducing and error-correcting code), which reduces written bits and has error-correction capability, was proposed for non-volatile memories. The REC code is generated from a linear systematic error-correcting code, but it must include the all-ones codeword 11…1, and the codeword bit length must be longer to satisfy this condition. In this letter, we propose a method to generate a relaxed REC code from a relaxed error-correcting code that does not necessarily include the all-ones codeword, so its codeword bit length can be shorter. We prove that the maximum number of flipped bits of the relaxed REC code is still theoretically bounded. Experimental results show that the relaxed REC code efficiently reduces the number of written bits.
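    For intuition about write-reducing codes in general, the classic flip-on-write trick is easy to sketch; note that this is not the REC code or the proposed relaxed REC code, only the general idea of trading one extra cell for fewer written bits:

    ```python
    def flip_on_write(old_bits, new_bits):
        """Store the new word inverted when that flips fewer cells,
        plus one flag bit recording the inversion (assumes the old
        flag cell holds 0).  Returns (stored word, cells flipped)."""
        flips_plain = sum(a != b for a, b in zip(old_bits, new_bits))
        inverted = [1 - b for b in new_bits]
        flips_inv = sum(a != b for a, b in zip(old_bits, inverted))
        if flips_inv + 1 < flips_plain:  # +1 for flipping the flag cell
            return inverted + [1], flips_inv + 1
        return new_bits + [0], flips_plain

    # Worst case for plain writing: all eight cells would flip,
    # but the inverted word flips only the flag cell.
    stored, cost = flip_on_write([0] * 8, [1] * 8)
    ```

    The REC family goes further by combining this kind of write reduction with error-correction capability in a single linear code, which is what the letter's relaxation makes cheaper in codeword length.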

  • Two High Accuracy Frequency Estimation Algorithms Based on New Autocorrelation-Like Function for Noncircular/Sinusoid Signal

    Kai WANG  Jiaying DING  Yili XIA  Xu LIU  Jinguang HAO  Wenjiang PEI  

     
    PAPER-Digital Signal Processing

      Vol:
    E101-A No:7
      Page(s):
    1065-1073

    Computing the autocorrelation coefficient can effectively reduce the influence of additive white noise and thus improve estimation precision. In this paper, an autocorrelation-like function, different from the ordinary one, is defined and proven to have better linear predictive performance. Two algorithms are developed for the signal model to obtain frequency estimates, and their theoretical properties in additive white Gaussian noise are analyzed. The simulation results match the theoretical values well in the sense of mean square error. Compared with existing estimators, the proposed algorithms are closer to the Cramér-Rao lower bound (CRLB). In addition, computer simulations demonstrate that the proposed algorithms provide high accuracy and good noise robustness.
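    The benefit of working with autocorrelation lags can be sketched with a standard lag-ratio frequency estimator; this is a toy noiseless illustration using the ordinary autocorrelation, not the paper's autocorrelation-like function or its algorithms:

    ```python
    import math

    def autocorr(x, k):
        n = len(x)
        return sum(x[i] * x[i + k] for i in range(n - k)) / (n - k)

    def estimate_omega(x):
        """For s[n] = A*cos(w*n + phi), r(k) ~ (A^2/2)*cos(w*k), so
        rho = r(2)/r(1) = cos(2w)/cos(w).  Using cos(2w) = 2cos^2(w) - 1,
        c = cos(w) solves 2c^2 - rho*c - 1 = 0 (positive root; valid
        for 0 < w < pi/2).  Lag 0 is avoided because it is biased by
        the noise power in the noisy case."""
        rho = autocorr(x, 2) / autocorr(x, 1)
        c = (rho + math.sqrt(rho * rho + 8)) / 4
        return math.acos(c)

    w = 0.9
    x = [math.cos(w * n + 0.3) for n in range(4000)]
    omega_hat = estimate_omega(x)
    ```

    Because only nonzero lags enter the ratio, the additive-noise power (which concentrates at lag 0) drops out of the estimate, which is the same motivation the paper develops further with its autocorrelation-like function.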
