
Keyword Search Result

[Keyword] PA(8249hit)

1081-1100hit(8249hit)

  • Identification and Application of Invariant Critical Paths under NBTI Degradation

    Song BIAN  Shumpei MORITA  Michihiro SHINTANI  Hiromitsu AWANO  Masayuki HIROMOTO  Takashi SATO  

     
    PAPER

      Vol:
    E100-A No:12
      Page(s):
    2797-2806

    As technology further scales semiconductor devices, aging-induced device degradation has become one of the major threats to device reliability. In addition, aging mechanisms such as negative bias temperature instability (NBTI) are known to be sensitive to the workload (i.e., signal probability), which is hard to predict at the design phase. In this work, we analyze the workload dependence of NBTI degradation using a processor and propose a novel technique to estimate the worst-case paths. In our approach, we exploit the fact that the deterministic nature of the circuit structure limits the amount of NBTI degradation on different paths, and propose a two-stage path extraction algorithm to identify the invariant critical paths (ICPs) in the processor. Utilizing these paths, we also propose an optimization technique for the replacement of internal node control logic that mitigates NBTI degradation in the design. Through numerical experiments on two processor designs, we achieved a nearly 300x reduction in the sheer number of paths on both designs. Utilizing the extracted ICPs, we achieved a 96x-197x speedup without loss in mitigation gain.

  • A New Energy Efficient Clustering Algorithm Based on Routing Spanning Tree for Wireless Sensor Network

    Yating GAO  Guixia KANG  Jianming CHENG  Ningbo ZHANG  

     
    PAPER-Network

      Publicized:
    2017/05/26
      Vol:
    E100-B No:12
      Page(s):
    2110-2120

    Wireless sensor networks usually deploy sensor nodes with limited energy resources in unattended environments, so people have difficulty replacing or recharging the depleted devices. In order to balance the energy dissipation and prolong the network lifetime, this paper proposes a routing spanning tree-based clustering algorithm (RSTCA) that uses a routing spanning tree to analyze clustering. The proposed scheme consists of three phases: a setup phase, a cluster head (CH) selection phase, and a steady phase. In the setup phase, several clusters are formed by adopting the K-means algorithm to balance the network load on the basis of geographic location, which solves the randomness problem of traditional distributed clustering algorithms. Meanwhile, a conditional inter-cluster data traffic routing strategy is created to simplify the network into subsystems. For the CH selection phase, a novel CH selection method, in which the CH is selected with a probability based on the residual energy of each node and its estimated next-round energy consumption as a function of distance, is formulated to optimize the energy dissipation among the nodes in the same cluster. In the steady phase, an effective modification that counters the boundary node problem by adjusting the data traffic routing is designed. Additionally, simulations illustrate the construction procedure of the routing spanning tree (RST) and the effect of the three phases. Finally, RSTCA is compared with current distributed clustering protocols such as LEACH and LEACH-DT. The results show that RSTCA outperforms the other protocols in terms of network lifetime, energy dissipation, and coverage ratio.
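
    A minimal sketch of the setup and CH-selection ideas described above follows, assuming a simple first-order radio energy model; the node counts, energy constants, and the exact selection probability are illustrative assumptions rather than the paper's formulation.

    ```python
    import numpy as np
    from sklearn.cluster import KMeans

    # Illustrative sketch of the setup and CH-selection phases: K-means groups
    # nodes by location, then each cluster picks a head with probability
    # favouring high residual energy and low estimated next-round cost.
    # The radio-energy constants below are assumptions, not the paper's values.
    rng = np.random.default_rng(0)
    positions = rng.uniform(0, 100, size=(100, 2))   # node coordinates (m)
    energy = np.full(100, 0.5)                       # residual energy (J)
    sink = np.array([50.0, 150.0])

    k = 5                                            # setup phase: K-means clustering
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(positions)

    E_ELEC, EPS_AMP, PACKET_BITS = 50e-9, 100e-12, 4000
    for c in range(k):                               # CH selection phase
        idx = np.where(labels == c)[0]
        d_sink = np.linalg.norm(positions[idx] - sink, axis=1)
        next_cost = PACKET_BITS * (E_ELEC + EPS_AMP * d_sink**2)
        score = energy[idx] / next_cost              # higher score = better head
        prob = score / score.sum()
        ch = rng.choice(idx, p=prob)
        print(f"cluster {c}: head = node {ch}")
    ```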

  • On Zero Error Capacity of Nearest Neighbor Error Channels with Multilevel Alphabet

    Takafumi NAKANO  Tadashi WADAYAMA  

     
    PAPER-Channel Coding

      Vol:
    E100-A No:12
      Page(s):
    2647-2653

    This paper studies the zero error capacity of the Nearest Neighbor Error (NNE) channels with a multilevel alphabet. In the NNE channels, a transmitted symbol is a d-tuple of elements in {0,1,2,...,l-1}. It is assumed that only a single element error to a nearest neighbor element can occur in a transmitted symbol. The NNE channels can be considered a special type of limited magnitude error channel, and they are closely related to error models for flash memories. In this paper, we derive a lower bound on the zero error capacity of the NNE channels based on a result on perfect Lee codes. An upper bound on the zero error capacity of the NNE channels is also derived from a feasible solution of a linear programming problem defined on the basis of the confusion graphs of the NNE channels. As a result, a concise formula for the zero error capacity is obtained from the lower and upper bounds.
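
    As a hypothetical illustration of the channel model, the sketch below builds the confusion graph of the NNE channel for tiny parameters and brute-forces its independence number; log2 of that number is only a one-shot lower bound on the zero error capacity, whereas the paper's bounds are derived analytically from perfect Lee codes and linear programming.

    ```python
    import itertools
    import networkx as nx

    # Confusion graph of the NNE channel for small l and d: inputs are d-tuples
    # over {0,...,l-1}; two inputs are adjacent if some channel output (at most
    # one element moved to a nearest neighbor) is common to both.
    l, d = 4, 2
    symbols = list(itertools.product(range(l), repeat=d))

    def outputs(x):
        outs = {x}
        for i, v in enumerate(x):
            for w in (v - 1, v + 1):
                if 0 <= w < l:
                    outs.add(x[:i] + (w,) + x[i + 1:])
        return outs

    G = nx.Graph()
    G.add_nodes_from(symbols)
    for x, y in itertools.combinations(symbols, 2):
        if outputs(x) & outputs(y):
            G.add_edge(x, y)

    # Independence number alpha(G) via maximum clique of the complement graph
    # (fine for 16 vertices); a largest zero-error code for one channel use.
    alpha = len(max(nx.find_cliques(nx.complement(G)), key=len))
    print("alpha(G) =", alpha)
    ```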

  • A Static Packet Scheduling Approach for Fast Collective Communication by Using PSO

    Takashi YOKOTA  Kanemitsu OOTSU  Takeshi OHKAWA  

     
    PAPER-Interconnection networks

      Publicized:
    2017/07/14
      Vol:
    E100-D No:12
      Page(s):
    2781-2795

    An interconnection network is one of the indispensable components of a parallel computer, since it is responsible for the communication capabilities of the system. It affects the system-level performance as well as the physical and logical structure of the system. Although many studies have been reported on enhancing interconnection network technology, many issues remain to be discussed. One of the most important issues is congestion management. In an interconnection network, many packets are transferred simultaneously, and the packets interfere with each other in the network. Congestion arises as a result of these interferences. It spreads quickly, seriously degrades communication performance, and persists for a long time. Thus, we should appropriately control the network to suppress congestion and maintain maximum performance. Many studies address this problem and present effective methods; however, the maximal performance in an ideal situation has not been sufficiently clarified. Finding the ideal performance is, in general, an NP-hard problem. This paper introduces the particle swarm optimization (PSO) methodology to overcome this problem. We first formalize an optimization problem suitable for the PSO method and present a simple PSO application as a naive model. Then, we discuss reducing the size of the search space and introduce three practical variations of the PSO computation model: the repetitive model, the expansion model, and the coding model. We furthermore introduce some non-PSO methods for comparison. Our evaluation results reveal the high potential of the PSO method. The repetitive and expansion models accelerate collective communication performance by up to 1.72 times compared with the bursty communication condition.
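
    Below is a generic PSO skeleton, included only to make the optimization machinery concrete. In the paper, a particle position would encode a packet-injection schedule and the fitness would come from a network simulation; the sphere objective, swarm size, and coefficients here are placeholder assumptions.

    ```python
    import numpy as np

    # Standard global-best PSO loop on a placeholder objective.
    rng = np.random.default_rng(1)
    dim, n_particles, iters = 16, 30, 200
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia and acceleration weights

    def fitness(x):
        return np.sum(x * x)                  # stand-in for a schedule simulator

    pos = rng.uniform(-5, 5, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.apply_along_axis(fitness, 1, pos)
    gbest = pbest[np.argmin(pbest_val)]

    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        val = np.apply_along_axis(fitness, 1, pos)
        improved = val < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], val[improved]
        gbest = pbest[np.argmin(pbest_val)]

    print("best objective found:", fitness(gbest))
    ```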

  • Provably Secure Gateway Threshold Password-Based Authenticated Key Exchange Secure against Undetectable On-Line Dictionary Attack

    Yukou KOBAYASHI  Naoto YANAI  Kazuki YONEYAMA  Takashi NISHIDE  Goichiro HANAOKA  Kwangjo KIM  Eiji OKAMOTO  

     
    PAPER-Cryptography and Information Security

      Vol:
    E100-A No:12
      Page(s):
    2991-3006

    By using Password-based Authenticated Key Exchange (PAKE), a server can authenticate a user who holds only a password shared with the server in advance and simultaneously establish a session key with the user. However, in real applications, we may have a situation where a user needs to share a session key with server A, while the authentication must be done by a different server B that shares the password with the user. Further, to achieve higher security on the server side, it may be required to make PAKE tolerant of a server breach by using multiple authentication servers. To deal with such a situation, Abdalla et al. proposed a variant of PAKE called Gateway Threshold PAKE (GTPAKE), where a gateway corresponds to the aforementioned server A, being an on-line service provider and also a potential adversary that may try to guess the passwords. However, the schemes of Abdalla et al. turned out to be vulnerable to the Undetectable On-line Dictionary Attack (UDonDA). In this paper, we propose the first GTPAKE provably secure against UDonDA, and in the security analysis, we prove that our GTPAKE is secure even if an adversary breaks into some of the multiple authentication servers.

  • Blur Map Generation Based on Local Natural Image Statistics for Partial Blur Segmentation

    Natsuki TAKAYAMA  Hiroki TAKAHASHI  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2017/09/05
      Vol:
    E100-D No:12
      Page(s):
    2984-2992

    Partial blur segmentation is one of the most interesting topics in computer vision, and it has practical value. The generation of blur maps is a crucial part of partial blur segmentation, which involves producing a blur map and applying a segmentation algorithm to it. In this study, we address two important issues in order to improve the discrimination of blur maps: (1) estimating a local blur feature that is robust to variations in the intensity amplitude and (2) designing a scheme for generating blur maps. We propose the ANGHS (Amplitude-Normalized Gradient Histogram Span) as a local blur feature. ANGHS represents the heavy-tailedness of a gradient distribution and is calculated from an image gradient normalized by the intensity amplitude. ANGHS is robust to variations in the intensity amplitude and can handle local regions more appropriately than previously proposed local blur features. Blur maps are affected not only by local blur features but also by the contents and sizes of local regions and by the assignment of blur feature values to pixels. Thus, multiple-sized grids and EAI (Edge-Aware Interpolation) are employed in these tasks to improve the discrimination of blur maps. The discrimination of the generated blur maps is evaluated visually and statistically using numerous partial blur images. Comparisons with the results obtained by state-of-the-art methods demonstrate the high discrimination of the blur maps generated by the proposed method.
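
    The sketch below is a rough stand-in for an amplitude-normalized local blur feature in the spirit of ANGHS: a percentile span of the amplitude-normalized gradient magnitude approximates the heavy-tailedness measure. The exact histogram-span statistic, patch handling, and thresholds of the paper are not reproduced; all constants here are assumptions.

    ```python
    import numpy as np
    from scipy.signal import convolve2d

    def local_blur_feature(patch, lo=5, hi=95):
        """Percentile span of the amplitude-normalized gradient magnitude."""
        amp = patch.max() - patch.min() + 1e-8            # intensity amplitude
        gy, gx = np.gradient(patch.astype(float) / amp)   # amplitude-normalized gradient
        mag = np.hypot(gx, gy).ravel()
        return np.percentile(mag, hi) - np.percentile(mag, lo)  # wider span = sharper

    rng = np.random.default_rng(2)
    sharp = rng.integers(0, 256, (32, 32)).astype(float)  # texture-like sharp patch
    blurry = convolve2d(sharp, np.ones((5, 5)) / 25.0, mode="same", boundary="symm")
    print("sharp patch :", local_blur_feature(sharp))
    print("blurry patch:", local_blur_feature(blurry))
    ```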

  • Design of New Spatial Modulation Scheme Based on Quaternary Quasi-Orthogonal Sequences

    Hojun KIM  Yulong SHANG  Taejin JUNG  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2017/06/02
      Vol:
    E100-B No:12
      Page(s):
    2129-2138

    In this paper, we propose a new spatial modulation (SM) scheme based on quaternary quasi-orthogonal sequences (Q-QOSs), referred to as Q-QOS-SM. First, the conventional SM and generalized SM (GSM) schemes are reinterpreted as a new transmission scheme based on a spatial modulation matrix (SMM) whose column indices, unlike in the conventional schemes, are used for mapping the spatial-information bits. Next, by adopting an SMM comprising Q-QOSs, the proposed Q-QOS-SM is designed; it guarantees twice as many spatial bits at the transmitter as conventional SM under the same constraint on the number of transmit antennas. Computer-simulation results show that Q-QOS-SM achieves a greatly improved throughput compared with the conventional SM and GSM schemes, especially for a large number of receive antennas. This gain arises mainly because the new scheme offers a much larger minimum Euclidean distance than the other schemes.

  • Wireless Packet Communications Protected by Secret Sharing and Vector Coding

    Shoichiro YAMASAKI  Tomoko K. MATSUSHIMA  Shinichiro MIYAZAKI  Kotoku OMURA  Hirokazu TANAKA  

     
    PAPER-Communication Theory and Systems

      Vol:
    E100-A No:12
      Page(s):
    2680-2690

    Secret sharing is a method of protecting information: the information is divided into n shares so that it can be reconstructed from any k shares, while no knowledge of it is revealed by any k-1 shares. Physical layer security is a method of giving an authorized destination terminal a favorable receiving condition in wireless communications based on multi-antenna transmission. In this study, we propose wireless packet communications protected by secret sharing based on Reed-Solomon coding and by physical layer security based on vector coding, implemented for both a single-antenna system and a multi-antenna system. Evaluation results show the validity of the proposed scheme.
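
    As background, here is a minimal (k, n) threshold sharing sketch in the Shamir/Reed-Solomon style the abstract refers to, over a prime field chosen for convenience; it is not the paper's construction, and the vector-coding physical-layer part is not modeled.

    ```python
    import random

    P = 2**31 - 1                             # prime modulus (illustrative choice)

    def share(secret, k, n):
        """Split secret into n shares; any k of them reconstruct it."""
        coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
        return [(x, sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P)
                for x in range(1, n + 1)]

    def reconstruct(shares):
        """Lagrange interpolation at x = 0 over GF(P)."""
        secret = 0
        for j, (xj, yj) in enumerate(shares):
            num, den = 1, 1
            for m, (xm, _) in enumerate(shares):
                if m != j:
                    num = num * (-xm) % P
                    den = den * (xj - xm) % P
            secret = (secret + yj * num * pow(den, -1, P)) % P
        return secret

    shares = share(secret=123456789, k=3, n=5)
    print(reconstruct(shares[:3]))            # any 3 of the 5 shares suffice
    print(reconstruct(shares[1:4]))
    ```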

  • Distributed Pareto Local Search for Multi-Objective DCOPs

    Maxime CLEMENT  Tenda OKIMOTO  Katsumi INOUE  

     
    PAPER-Information Network

      Publicized:
    2017/09/15
      Vol:
    E100-D No:12
      Page(s):
    2897-2905

    Many real-world optimization problems involving sets of agents can be modeled as Distributed Constraint Optimization Problems (DCOPs). A DCOP is defined as a set of variables taking values from finite domains and a set of constraints that yield costs based on the variables' values. Agents are in charge of the variables and must communicate to find a solution minimizing the sum of costs over all constraints. Many applications of DCOPs include multiple criteria. For example, mobile sensor networks must optimize both the quality of the measurements and the quality of communication between the agents. This introduces trade-offs between solutions, which are compared using the concept of Pareto dominance. Multi-Objective Distributed Constraint Optimization Problems (MO-DCOPs) are used to model such problems, where the goal is to find the set of Pareto optimal solutions. Since this set can be exponential in the number of variables, it is important to consider fast approximation algorithms for MO-DCOPs. The bounded multi-objective max-sum (B-MOMS) algorithm is the first and only existing approximation algorithm for MO-DCOPs and is suited to solving less-constrained problems. In this paper, we propose a novel approximate MO-DCOP algorithm called Distributed Pareto Local Search (DPLS) that uses a local search approach to find an approximation of the set of Pareto optimal solutions. DPLS provides a distributed version of an existing centralized algorithm by complying with the communication limitations and the privacy concerns of multi-agent systems. Experiments on a multi-objective extension of the graph-coloring problem show that DPLS finds significantly better solutions than B-MOMS for problems with medium to high constraint density while requiring a similar runtime.
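
    The toy sketch below shows the dominance-based archive that a Pareto local search maintains, on a hypothetical bi-objective pseudo-Boolean problem and in a centralized, single-process form; the distributed, privacy-preserving coordination that defines DPLS is not modeled here.

    ```python
    import random

    random.seed(0)
    N = 12                                    # number of binary variables
    w1 = [random.uniform(0, 1) for _ in range(N)]
    w2 = [random.uniform(0, 1) for _ in range(N)]

    def costs(x):                             # two conflicting objectives
        return (sum(w for w, b in zip(w1, x) if b),
                sum(w for w, b in zip(w2, x) if not b))

    def dominates(a, b):                      # Pareto dominance on cost tuples
        return all(u <= v for u, v in zip(a, b)) and a != b

    def update_archive(archive, x):
        """Insert x if non-dominated; drop archive members that x dominates."""
        if x in archive:
            return False
        c = costs(x)
        if any(dominates(costs(a), c) or costs(a) == c for a in archive):
            return False
        archive[:] = [a for a in archive if not dominates(c, costs(a))]
        archive.append(x)
        return True

    archive = [tuple(random.randint(0, 1) for _ in range(N))]
    frontier = list(archive)
    while frontier:                           # Pareto local search main loop
        x = frontier.pop()
        for i in range(N):                    # explore the 1-flip neighbourhood
            y = x[:i] + (1 - x[i],) + x[i + 1:]
            if update_archive(archive, y):
                frontier.append(y)

    print(len(archive), "non-dominated solutions found")
    ```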

  • Gauss-Seidel HALS Algorithm for Nonnegative Matrix Factorization with Sparseness and Smoothness Constraints

    Takumi KIMURA  Norikazu TAKAHASHI  

     
    PAPER-Digital Signal Processing

      Vol:
    E100-A No:12
      Page(s):
    2925-2935

    Nonnegative Matrix Factorization (NMF) with sparseness and smoothness constraints has attracted increasing attention. When these properties are considered, NMF is usually formulated as an optimization problem in which a linear combination of an approximation error term and some regularization terms must be minimized under the constraint that the factor matrices are nonnegative. In this paper, we focus our attention on the error measure based on the Euclidean distance and propose a new iterative method for solving those optimization problems. The proposed method is based on the Hierarchical Alternating Least Squares (HALS) algorithm developed by Cichocki et al. We first present an example to show that the original HALS algorithm can increase the objective value. We then propose a new algorithm called the Gauss-Seidel HALS algorithm that decreases the objective value monotonically. We also prove that it has the global convergence property in the sense of Zangwill. We finally verify the effectiveness of the proposed algorithm through numerical experiments using synthetic and real data.
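
    For reference, the following sketches the plain HALS updates of Cichocki et al. for minimizing ||V - WH||_F^2 under nonnegativity, on synthetic data; the sparseness/smoothness regularizers and the Gauss-Seidel modification that guarantees a monotone decrease, which are the paper's contributions, are deliberately omitted.

    ```python
    import numpy as np

    # Plain HALS: update one column of W (and one row of H) at a time,
    # then project onto the nonnegative orthant.
    rng = np.random.default_rng(3)
    m, n, r = 60, 40, 5
    V = rng.random((m, r)) @ rng.random((r, n))      # synthetic nonnegative data
    W, H = rng.random((m, r)), rng.random((r, n))
    eps = 1e-12

    for _ in range(200):
        HHt, VHt = H @ H.T, V @ H.T
        for k in range(r):
            W[:, k] = np.maximum(eps, W[:, k] + (VHt[:, k] - W @ HHt[:, k]) / HHt[k, k])
        WtW, WtV = W.T @ W, W.T @ V
        for k in range(r):
            H[k, :] = np.maximum(eps, H[k, :] + (WtV[k, :] - WtW[k, :] @ H) / WtW[k, k])

    print("relative error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    ```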

  • An Efficient GPU Implementation of CKY Parsing Using the Bitwise Parallel Bulk Computation Technique

    Toru FUJITA  Koji NAKANO  Yasuaki ITO  Daisuke TAKAFUJI  

     
    PAPER-GPU computing

      Publicized:
    2017/08/04
      Vol:
    E100-D No:12
      Page(s):
    2857-2865

    The main contribution of this paper is to present an efficient GPU implementation of bulk computation of CKY parsing for a context-free grammar, which determines whether the grammar derives each of many input strings. Bulk computation executes the same algorithm for many inputs sequentially or simultaneously. CKY parsing determines whether a context-free grammar derives a given string. We show that bulk computation of CKY parsing can be implemented efficiently on the GPU using the Bitwise Parallel Bulk Computation (BPBC) technique. We also present a rule minimization technique and a dynamic scheduling method for further accelerating CKY parsing on the GPU. Experimental results using an NVIDIA TITAN X GPU show that our implementation of bitwise-parallel CKY parsing for strings of length 32 takes 395µs per string for a grammar with 131072 production rules over 512 non-terminal symbols.
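
    For orientation, here is a textbook CKY recognizer in which each table cell is a bitmask over non-terminals, using a tiny hypothetical grammar in Chomsky normal form. It is only the sequential baseline; the BPBC technique additionally packs many input strings into machine words and parallelizes the table computation on the GPU.

    ```python
    # Toy CNF grammar (hypothetical): S -> A B | B A,  A -> 'a',  B -> 'b'
    nonterms = {"S": 0, "A": 1, "B": 2}
    unary = {"a": 1 << nonterms["A"], "b": 1 << nonterms["B"]}
    binary = [("S", "A", "B"), ("S", "B", "A")]

    def cky(string):
        n = len(string)
        table = [[0] * (n + 1) for _ in range(n + 1)]   # table[i][j]: span [i, j)
        for i, ch in enumerate(string):
            table[i][i + 1] = unary.get(ch, 0)
        for span in range(2, n + 1):
            for i in range(n - span + 1):
                j = i + span
                cell = 0
                for k in range(i + 1, j):
                    left, right = table[i][k], table[k][j]
                    for lhs, b, c in binary:
                        if left & (1 << nonterms[b]) and right & (1 << nonterms[c]):
                            cell |= 1 << nonterms[lhs]
                table[i][j] = cell
        return bool(table[0][n] & (1 << nonterms["S"]))

    print(cky("ab"), cky("ba"), cky("aa"))              # True True False
    ```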

  • A Region-Based Through-Silicon via Repair Method for Clustered Faults

    Tianming NI  Huaguo LIANG  Mu NIE  Xiumin XU  Aibin YAN  Zhengfeng HUANG  

     
    PAPER-Integrated Electronics

      Vol:
    E100-C No:12
      Page(s):
    1108-1117

    Three-dimensional integrated circuits (3D ICs) that employ through-silicon vias (TSVs) to integrate multiple dies vertically have opened up the potential of highly improved circuit designs. However, various types of TSV defects may occur during the assembly process, especially clustered TSV faults caused by the winding level of the thinned wafer and the surface roughness and cleanness of the silicon dies, which greatly reduce TSV yield. To tackle this fault clustering problem, router-based and ring-based TSV redundancy architectures were previously proposed. However, these schemes either require too much area overhead or have limited reparability for tolerating clustered TSV faults. Furthermore, the repair lengths of these schemes are too long to be ignored, leading to additional delay overhead that may cause timing violations. In this paper, we propose a region-based TSV redundancy design that achieves relatively high reparability as well as low additional delay overhead. Simulation results show that, for a given number of TSVs (8×8) and a TSV failure rate of 1%, our design achieves 11.27% and 20.79% reductions in delay overhead compared with the router-based design and the ring-based scheme, respectively. In addition, the reparability of the proposed scheme is 30.84% better than that of the ring-based design, while it is close to that of the router-based scheme. More importantly, the overall TSV yield of our design reaches 99.88%, which is slightly higher than that of both the router-based method (99.53%) and the ring-based design (99.00%).

  • Spatially “Mt. Fuji” Coupled LDPC Codes

    Yuta NAKAHARA  Shota SAITO  Toshiyasu MATSUSHIMA  

     
    PAPER-Coding Theory and Techniques

      Vol:
    E100-A No:12
      Page(s):
    2594-2606

    A new type of spatially coupled low-density parity-check (SCLDPC) code is proposed. This code has two benefits: (1) it requires fewer iterations than the usual SCLDPC code to correct the erasures caused by the binary erasure channel in the waterfall region, and (2) it has a lower error floor than the usual SCLDPC code. The proposed code is constructed as a coupled chain of underlying LDPC codes whose code lengths increase exponentially as their position approaches the middle of the chain. We call our code the spatially “Mt. Fuji” coupled LDPC (SFCLDPC) code because the graph of the code lengths of the underlying LDPC codes over position looks like Mt. Fuji. With this structure, when the proposed SFCLDPC code and the original SCLDPC code are constructed with the same code rate and the same code length, L (the number of underlying LDPC codes) of the SFCLDPC code becomes smaller and M (the code length of each underlying LDPC code) becomes larger than those of the SCLDPC code. These properties of L and M enable the above reductions in the number of iterations and in the bit error rate in the error floor region, which are confirmed by density evolution and computer simulations.
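
    As background for the analysis mentioned above, the sketch below runs standard density evolution for an uncoupled (dv, dc)-regular LDPC ensemble on the binary erasure channel; the spatially coupled (and the proposed Mt. Fuji) constructions use a position-dependent version of this recursion that is not shown here.

    ```python
    # Density evolution for a (dv, dc)-regular LDPC ensemble on the BEC:
    # x_{t+1} = eps * (1 - (1 - x_t)^(dc-1))^(dv-1).
    # For (3, 6) the threshold is roughly eps = 0.429.
    def density_evolution(eps, dv=3, dc=6, iters=1000, tol=1e-12):
        x = eps
        for t in range(iters):
            x_new = eps * (1.0 - (1.0 - x) ** (dc - 1)) ** (dv - 1)
            if abs(x_new - x) < tol:
                return x_new, t
            x = x_new
        return x, iters

    for eps in (0.40, 0.43, 0.45):
        residual, t = density_evolution(eps)
        print(f"eps={eps:.2f}: residual erasure {residual:.3e} after {t} iterations")
    ```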

  • An Efficient Weighted Bit-Flipping Algorithm for Decoding LDPC Codes Based on Log-Likelihood Ratio of Bit Error Probability

    Tso-Cho CHEN  Erl-Huei LU  Chia-Jung LI  Kuo-Tsang HUANG  

     
    PAPER-Fundamental Theories for Communications

      Publicized:
    2017/05/29
      Vol:
    E100-B No:12
      Page(s):
    2095-2103

    In this paper, a weighted multiple bit flipping (WMBF) algorithm for decoding low-density parity-check (LDPC) codes is proposed first. Then an improved WMBF algorithm, which we call the efficient weighted bit-flipping (EWBF) algorithm, is developed. The EWBF algorithm can dynamically choose either multiple bit flipping or single bit flipping in each iteration according to the log-likelihood ratios of the error probabilities of the received bits. Thus, it can efficiently increase the convergence speed of decoding and prevent the decoding process from falling into loop traps. Compared with the parallel weighted bit-flipping (PWBF) algorithm, the EWBF algorithm achieves significantly lower computational complexity without performance degradation when Euclidean geometry (EG) LDPC codes are decoded. Furthermore, the flipping criterion does not require any parameter adjustment.
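
    The sketch below shows a basic single-bit weighted bit-flipping decoder on a toy (7,4) Hamming code, only to illustrate the family of algorithms being improved; the parity-check matrix, noise values, and reliability weighting are illustrative assumptions, and the LLR-based switching between single and multiple flips that defines EWBF is not implemented.

    ```python
    import numpy as np

    H = np.array([[1, 1, 0, 1, 1, 0, 0],      # toy (7,4) Hamming parity checks
                  [1, 0, 1, 1, 0, 1, 0],
                  [0, 1, 1, 1, 0, 0, 1]])

    def wbf_decode(y, H, max_iter=20):
        hard = (y < 0).astype(int)            # BPSK mapping: +1 -> 0, -1 -> 1
        # each check is weighted by the least reliable bit it involves
        rel = np.array([np.min(np.abs(y)[H[m] == 1]) for m in range(H.shape[0])])
        for _ in range(max_iter):
            syndrome = H @ hard % 2
            if not syndrome.any():
                return hard                   # all checks satisfied
            E = ((2 * syndrome - 1) * rel) @ H
            hard[np.argmax(E)] ^= 1           # flip the most suspicious bit
        return hard

    tx = 1 - 2 * np.zeros(7)                  # all-zero codeword, BPSK
    rx = tx + np.array([0.1, -1.8, 0.3, 0.2, 0.4, 0.1, 0.3])   # one sign flip
    print(wbf_decode(rx, H))                  # recovers the all-zero codeword
    ```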

  • Protocol-Aware Packet Scheduling Algorithm for Multi-Protocol Processing in Multi-Core MPL Architecture

    Runzi ZHANG  Jinlin WANG  Yiqiang SHENG  Xiao CHEN  Xiaozhou YE  

     
    PAPER-Architecture

      Publicized:
    2017/07/14
      Vol:
    E100-D No:12
      Page(s):
    2837-2846

    Cache affinity has been proved to have a great impact on the performance of packet processing applications on multi-core platforms. Flow-based packet scheduling can make the best use of data cache affinity with flow-associated data and context structures. However, little work on packet scheduling algorithms has been conducted with respect to instruction cache (I-Cache) affinity in the modified pipelining (MPL) architecture for multi-core systems. In this paper, we propose a protocol-aware packet scheduling (PAPS) algorithm aimed at maximizing I-Cache affinity at the protocol-dependent stages in the MPL architecture for the multi-protocol processing (MPP) scenario. The characteristics of applications in MPL are analyzed, and a mapping model is introduced to illustrate the procedure of MPP. In addition, a stage processing time model for MPL is presented based on an analysis of the multi-core cache hierarchy. PAPS is a flow-based packet scheduling algorithm that schedules flows in consideration of both the application-level protocol of each flow and load balancing. Experiments demonstrate that PAPS outperforms the Round Robin algorithm and the HRW-based (HRW) algorithm for MPP applications. In particular, PAPS can eliminate all I-Cache misses at the protocol-dependent stages and reduce the average CPU cycle consumption per packet by more than 10% in comparison with HRW.
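
    A much-simplified sketch of the underlying idea follows: flows of the same application-level protocol are kept on the same subset of cores (preserving I-Cache affinity), and the least-loaded core in that subset is chosen (load balancing). The core mapping, protocol names, and load metric are invented for illustration; the MPL stage model and the actual PAPS heuristics are not reproduced.

    ```python
    from collections import defaultdict

    CORES_PER_PROTOCOL = {"http": [0, 1], "dns": [2], "rtp": [3]}  # assumed layout
    core_load = defaultdict(int)              # packets dispatched per core
    flow_to_core = {}                         # sticky flow-to-core assignment

    def schedule(flow_id, protocol):
        if flow_id in flow_to_core:           # keep per-flow data cache affinity
            core = flow_to_core[flow_id]
        else:                                 # protocol-aware, least-loaded core
            candidates = CORES_PER_PROTOCOL[protocol]
            core = min(candidates, key=lambda c: core_load[c])
            flow_to_core[flow_id] = core
        core_load[core] += 1
        return core

    packets = [("f1", "http"), ("f2", "dns"), ("f3", "http"), ("f1", "http")]
    for fid, proto in packets:
        print(fid, proto, "-> core", schedule(fid, proto))
    ```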

  • Query Rewriting or Ontology Modification? Toward a Faster Approximate Reasoning on LOD Endpoints

    Naoki YAMADA  Yuji YAMAGATA  Naoki FUKUTA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2017/09/15
      Vol:
    E100-D No:12
      Page(s):
    2923-2930

    On an inference-enabled Linked Open Data (LOD) endpoint, query execution usually takes longer than on an LOD endpoint without an inference engine because of the reasoning process. Although two separate kinds of approaches, query modification and ontology modification, have been investigated in different contexts, how they should be chosen or combined for various settings is still under discussion. In this paper, to reduce query execution time on an inference-enabled LOD endpoint, we compare these two promising methods, query rewriting and ontology modification, and also try to combine them in a cluster of such systems. We employ an evolutionary approach that rewrites queries and modifies ontologies based on previously processed queries and their results. We show how these two approaches work well for implementing an inference-enabled LOD endpoint with a cluster of SPARQL endpoints.

  • A Segmentation Method of Single- and Multiple-Touching Characters in Offline Handwritten Japanese Text Recognition

    Kha Cong NGUYEN  Cuong Tuan NGUYEN  Masaki NAKAGAWA  

     
    PAPER-Pattern Recognition

      Publicized:
    2017/08/23
      Vol:
    E100-D No:12
      Page(s):
    2962-2972

    This paper presents a method to segment single- and multiple-touching characters in offline handwritten Japanese text recognition with practical speed. Distortions due to handwriting and the mix of complex Chinese characters with simple phonetic and alphanumeric characters leave optical handwritten text recognition (OHTR) for Japanese still far from perfect. Segmentation of characters that touch their neighbors at multiple points is a serious unsolved problem. Therefore, we propose a method to segment them that consists of two steps: coarse segmentation and fine segmentation. The coarse segmentation employs vertical projection and stroke-width estimation, while the fine segmentation takes a graph-based approach to thinned text images that employs a new bridge-finding process and Voronoi diagrams with two improvements. Unlike previous methods, it locates character centers and seeks segmentation candidates between them. It explicitly draws vertical lines at the estimated character centers in order to prevent vertically unconnected components from being left behind in the bridge finding. Multiple separation candidates are produced by removing touching points combinatorially. An SVM is applied to discard improbable segmentation boundaries. Then, the remaining ambiguities are resolved by text recognition employing linguistic context and geometric context to recognize the segmented characters. The results of our experiments show that the proposed method can segment not only single-touching characters but also multiple-touching characters, and that each component of the proposed method contributes to the improvement of the segmentation and recognition rates.
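
    The fragment below illustrates only the vertical-projection part of the coarse segmentation, on a synthetic binarized line image: runs of columns containing no ink are collected as candidate cut regions. The stroke-width estimation, the graph- and Voronoi-based fine segmentation, and the SVM verification described above are not reproduced, and the minimum gap width is an arbitrary assumption.

    ```python
    import numpy as np

    def coarse_cuts(binary_line, min_gap=2):
        """binary_line: 2-D array, 1 = ink. Returns (start, end) column ranges of gaps."""
        profile = binary_line.sum(axis=0)     # vertical projection profile
        gaps, start = [], None
        for x, v in enumerate(profile):
            if v == 0 and start is None:
                start = x
            elif v > 0 and start is not None:
                if x - start >= min_gap:
                    gaps.append((start, x))
                start = None
        if start is not None:
            gaps.append((start, len(profile)))
        return gaps

    line = np.zeros((10, 30), dtype=int)
    line[2:8, 1:8] = 1                        # first "character"
    line[2:8, 12:20] = 1                      # second "character"
    print(coarse_cuts(line))                  # candidate cut regions between characters
    ```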

  • Simulation of Reconstructed Holographic Images Considering Optical Phase Distribution in Small Liquid Crystal Pixels

    Yoshitomo ISOMAE  Yosei SHIBATA  Takahiro ISHINABE  Hideo FUJIKAKE  

     
    BRIEF PAPER

      Vol:
    E100-C No:11
      Page(s):
    1043-1046

    We proposed a simulation method for reconstructed holographic images that considers the phase distribution within the small pixels of a liquid crystal spatial light modulator (LC-SLM), and clarified that zero-order diffraction appears in the reconstructed images when the phase distribution in a single pixel is non-uniform. These results are useful for designing fine LC-SLMs for realizing wide-viewing-angle holographic displays.

  • Foldable Liquid Crystal Devices Using Ultra-Thin Polyimide Substrates and Bonding Polymer Spacers

    Yuusuke OBONAI  Yosei SHIBATA  Takahiro ISHINABE  Hideo FUJIKAKE  

     
    BRIEF PAPER

      Vol:
    E100-C No:11
      Page(s):
    1039-1042

    We developed flexible liquid crystal devices using ultra-thin polyimide substrates and bonding polymer spacers, and discussed the effects of the polymer spacer structure on the cell thickness uniformity of flexible LCDs. We clarified that a lattice-shaped polymer spacer is effective in stabilizing the cell thickness by suppressing the flow of the liquid crystal during the bending process.

  • Precise Indoor Localization Method Using Dual-Facing Cameras on a Smart Device via Visible Light Communication

    Yohei NAKAZAWA  Hideo MAKINO  Kentaro NISHIMORI  Daisuke WAKATSUKI  Makoto KOBAYASHI  Hideki KOMAGATA  

     
    PAPER-Vision

      Vol:
    E100-A No:11
      Page(s):
    2295-2303

    In this paper, we propose a precise indoor localization method using visible light communication (VLC) with dual-facing cameras on a smart device (mobile phone, smartphone, or tablet device). This approach can assist the visually impaired with navigation or provide mobile-robot control. The proposed method differs from conventional techniques in that dual-facing cameras are used to expand the localization area. The smart device is used as the receiver, and light-emitting diodes on the ceiling are used as localization landmarks. These are identified by VLC using the rolling shutter effect of complementary metal-oxide-semiconductor (CMOS) image sensors. The front-facing camera captures the direct incident light of the landmarks, while the rear-facing camera captures mirror images of the landmarks reflected from the floor surface. We formulated the relationship between the poses (position and attitude) of the two cameras and the arrangement of the landmarks using tilt detection by the smart device's accelerometer. The equations can be solved analytically in constant processing time, unlike conventional numerical methods such as least squares. We conducted a simulation and confirmed that the localization coverage was 75.6% using the dual-facing cameras, which is 3.8 times larger than that using only the front-facing camera. In an experiment using two landmarks and a tablet device, the horizontal localization error was less than 98 mm at 90% of the measurement points. Moreover, the error estimation index can be used for appropriate route selection for pedestrians.
