Yuyang HUANG Li-Ta HSU Yanlei GU Shunsuke KAMIJO
Accurate pedestrian navigation remains a challenge in urban environments. GNSS receivers behave poorly because of the reflection and blockage of GNSS signals by buildings and other obstacles. Integrating GNSS positioning with Pedestrian Dead Reckoning (PDR) can provide a smoother navigation trajectory. However, the integrated system cannot deliver satisfactory performance if the GNSS positioning has a large error, which often happens in urban scenarios. This paper focuses on improving the accuracy of pedestrian navigation in urban environments using a proposed altitude map aided GNSS positioning method. First, we use a consistency check algorithm, similar to receiver autonomous integrity monitoring (RAIM) fault detection, to distinguish healthy measurements from multipath-contaminated ones. Afterwards, the erroneous signals are corrected with the help of an altitude map. We call the proposed method altitude map aided GNSS. After correcting the erroneous satellite signals, the mean positioning error is reduced from 17 meters to 12 meters. Good performance of the integrated system usually requires an accurately calculated GNSS accuracy value; however, the conventional GNSS accuracy calculation is not reliable in urban canyons. In this paper, the altitude map is also utilized to calculate the GNSS localization accuracy in order to indicate the reliability of the estimated position solution. The altitude map aided GNSS positioning and the proposed accuracy estimate are used in the integration with the PDR system in order to provide more accurate and continuous positioning results. With the help of the proposed GNSS accuracy, the integrated system achieves 6.5-meter horizontal positioning accuracy in urban environments.
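For illustration, a residual-based consistency check in the spirit of RAIM fault detection and exclusion is sketched below, assuming a linearized geometry matrix G and a pseudorange residual vector rho. The chi-square-style threshold and the drop-one-satellite exclusion strategy are illustrative assumptions; the proposed method instead corrects the flagged measurements using the altitude map.

```python
import numpy as np

def sse(G, rho):
    """Sum of squared pseudorange residuals of a least-squares fix."""
    x, *_ = np.linalg.lstsq(G, rho, rcond=None)
    r = rho - G @ x
    return float(r @ r)

def consistency_check(G, rho, threshold):
    """RAIM-style fault detection/exclusion (illustrative only):
    if the full measurement set is inconsistent, report the subset
    obtained by dropping the satellite whose exclusion lowers the
    residual sum the most."""
    if sse(G, rho) <= threshold:
        return list(range(len(rho)))              # all measurements healthy
    worst = min(range(len(rho)),
                key=lambda i: sse(np.delete(G, i, axis=0),
                                  np.delete(rho, i)))
    return [i for i in range(len(rho)) if i != worst]
```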
Real-time weather radar imaging technology is required for generating short-term weather forecasts. Moreover, such technology plays an important role in critical-weather warning systems that are based on vast Doppler weather radar data. In this study, we propose a weather radar imaging method that uses multi-layer contour detection and segmentation based on MAP-MRF estimation. The proposed method consists of three major steps. The first step generates reflectivity and velocity data from the Doppler radar in the form of raw data images in sweep units in the polar coordinate system; contour lines are then detected on multiple layers using an adaptive median filter and a modified Canny detector based on curvature consistency. The second step interpolates the contours in the Cartesian coordinate system using 3D scattered data interpolation and then segments them based on MAP-MRF estimation and the Metropolis algorithm for each layer. The final step integrates the segmented contour layers and generates PPI images in sweep units. Experimental results show that the proposed method produces a visually improved PPI image in 45% of the time required by conventional methods.
Tomoki MURAKAMI Shingo OKA Yasushi TAKATORI Masato MIZOGUCHI Fumiaki MAEHARA
This paper investigates an adaptive movable access point (AMAP) system and explores its feasibility in a static indoor classroom environment using a wireless local area network (WLAN) system. In the AMAP system, the positions of multiple access points (APs) are adaptively moved in accordance with clustered user groups, which ensures effective coverage of non-uniform user distributions over the target area and thereby enhances the signal-to-interference-and-noise power ratio (SINR) performance. To derive the appropriate AP positions, we utilize the k-means method in the AMAP system. To accurately estimate the position of each user within the target area for user clustering, we use the general methods of received signal strength indicator (RSSI) or time of arrival (ToA) measurements obtained by the WLAN systems. To clarify the basic effectiveness of the AMAP system, we first evaluate the SINR performance of the AMAP system and a conventional system with fixed, equally spaced APs using computer simulations. Moreover, we demonstrate the quantitative improvement in SINR performance by analyzing ToA and RSSI data measured in an indoor classroom environment in order to clarify the feasibility of the AMAP system.
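As a rough illustration of the clustering step, the sketch below places the APs at the k-means centroids of the estimated user positions. The plain k-means loop and its parameters are illustrative assumptions; the RSSI- or ToA-based positioning that produces user_xy is assumed to run beforehand.

```python
import numpy as np

def kmeans_ap_positions(user_xy, n_aps, n_iter=100, seed=0):
    """Place n_aps access points at the k-means centroids of the
    estimated user positions user_xy (an (N, 2) array)."""
    rng = np.random.default_rng(seed)
    centers = user_xy[rng.choice(len(user_xy), n_aps, replace=False)]
    for _ in range(n_iter):
        # Assign each user to the nearest AP candidate position.
        dists = np.linalg.norm(user_xy[:, None] - centers[None], axis=2)
        labels = np.argmin(dists, axis=1)
        # Move each AP candidate to the centroid of its user cluster.
        centers = np.array([user_xy[labels == k].mean(axis=0)
                            if np.any(labels == k) else centers[k]
                            for k in range(n_aps)])
    return centers
```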
Functional encryption is a new paradigm of public-key encryption that allows a user to compute f(x) on encrypted data CT(x) with a private key SKf, thereby finely controlling the revealed information. Multi-input functional encryption is an important extension of (single-input) functional encryption that allows the computation f(x1,...,xn) on multiple ciphertexts CT(x1),...,CT(xn) with a private key SKf. Although multi-input functional encryption has many interesting applications, such as running SQL queries on encrypted databases and computation on encrypted streams, current candidates are not yet practical since many of them are built on indistinguishability obfuscation. To address this unsatisfactory situation, we show that practical two-input functional encryption schemes for inner products can be built from bilinear maps. In this paper, we first propose a two-input functional encryption scheme for inner products in composite-order bilinear groups and prove its selective IND-security under simple assumptions. Next, we propose a two-client functional encryption scheme for inner products in which each ciphertext can be associated with a time period, and prove its selective IND-security. Furthermore, we show that our two-input functional encryption schemes in composite-order bilinear groups can be converted into schemes in prime-order asymmetric bilinear groups by exploiting the asymmetry of such groups.
Masahiro YAMAGUCHI Trong Phuc TRUONG Shohei MORI Vincent NOZICK Hideo SAITO Shoji YACHIDA Hideaki SATO
In this paper, we propose a method to generate a three-dimensional (3D) thermal map and RGB + thermal (RGB-T) images of a scene from thermal-infrared and RGB images. The scene images are acquired by moving an RGB camera and a thermal-infrared camera mounted on a stereo rig. Before capturing the scene with these cameras, we estimate their respective intrinsic parameters and their relative pose. Then, we reconstruct the 3D structure of the scene using Direct Sparse Odometry (DSO) on the RGB images. In order to superimpose thermal information onto each point generated by DSO, we propose a method for estimating the scale of the point cloud, consistent with the extrinsic parameters between the two cameras, by matching depth images recovered from the RGB camera and the thermal-infrared camera based on mutual information. We also generate RGB-T images using the 3D structure of the scene and Delaunay triangulation. We do not rely on depth cameras, and therefore our technique is not limited to scenes within the measurement range of depth cameras. To demonstrate the technique, we generate 3D thermal maps and RGB-T images for both indoor and outdoor scenes.
Lei CHEN Wei LU Ergude BAO Liqiang WANG Weiwei XING Yuanyuan CAI
MapReduce is an effective framework for processing large datasets in parallel over a cluster. Data locality and data skew on the reduce side are two essential issues in MapReduce. Improving data locality can decrease network traffic by moving reduce tasks to the nodes where the reducer input data is located, while data skew leads to load imbalance among reducer nodes. Partitioning is an important feature of MapReduce because it determines the reducer nodes to which map output results will be sent. Therefore, an effective partitioner can improve MapReduce performance by increasing data locality and decreasing data skew on the reduce side. Previous studies considering both issues can be divided into two categories: those that preferentially improve data locality, such as LEEN, and those that preferentially improve load balance, such as CLP. However, all of these studies ignore the fact that, for different types of jobs, prioritizing data locality or reduce-side data skew may have different effects on the execution time. In this paper, we propose a naive Bayes classifier based partitioner, namely BAPM, which achieves better performance because it automatically chooses the proper algorithm (LEEN or CLP) by leveraging the naive Bayes classifier, i.e., considering job type and bandwidth as classification attributes. Our experiments are performed on a Hadoop cluster, and the results show that BAPM boosts the computing performance of MapReduce. The selection accuracy reaches 95.15%, and compared with other popular algorithms under specific bandwidths, the improvement achieved by BAPM is up to 31.31%.
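A minimal sketch of the selection idea, assuming scikit-learn is available: a naive Bayes classifier trained on (job type, bandwidth) attributes predicts whether LEEN or CLP should partition a given job. The training samples below are purely hypothetical placeholders, not measured data.

```python
from sklearn.naive_bayes import GaussianNB

# Hypothetical training data: each row is (job_type_id, bandwidth_mbps),
# and each label names the partitioner that ran faster for that job.
X_train = [[0, 100], [0, 1000], [1, 100], [1, 1000]]
y_train = ["LEEN", "CLP", "CLP", "LEEN"]      # placeholder labels only

clf = GaussianNB().fit(X_train, y_train)

def choose_partitioner(job_type_id, bandwidth_mbps):
    """Return the partitioning algorithm predicted to perform better."""
    return clf.predict([[job_type_id, bandwidth_mbps]])[0]
```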
This paper proposes to pre-compute approximate normal distribution functions and store them in textures so that real-time applications can process complex specular surfaces simply by sampling the textures. The proposed method is compatible with GPU pipeline-based algorithms, and rendering is completed in real time. The experimental results show that the features of complex specular surfaces, such as the glinty appearance of leather and metallic flakes, are successfully reproduced.
Given a sequence of k convex polygons in the plane, a start point s, and a target point t, we seek a shortest path that starts at s, visits in order each of the polygons, and ends at t. We revisit this touring polygons problem, which was introduced by Dror et al. (STOC 2003), by describing a simple method to compute the so-called last step shortest path maps, one per polygon. We obtain an O(kn)-time solution to the problem for a sequence of pairwise disjoint convex polygons and an O(k2n)-time solution for possibly intersecting convex polygons, where n is the total number of vertices of all polygons. A major simplification is made on the operation of locating query points in the last step shortest path maps. Our results improve upon the previous time bounds roughly by a factor of log n.
Recently, the join processing of large-scale datasets in MapReduce environments has become an important issue. However, existing MapReduce-based join algorithms suffer from excessive overhead for constructing and updating the data index. Moreover, their similarity computation cost is high because they partition data without considering the data distribution. In this paper, we propose two grid-based join algorithms for MapReduce. First, we propose a similarity join algorithm that evenly distributes join candidates using a dynamic grid index, which partitions data considering data density and the similarity threshold. We use a bottom-up approach that merges initial grid cells into partitions and assigns them to MapReduce jobs. Second, we propose a k-NN join query processing algorithm for MapReduce. To reduce the data transmission cost, we determine an optimal grid cell size by considering the data distribution of randomly selected samples. Then, we perform the k-NN join by assigning only the related join data to each reducer. Performance analysis shows that our similarity join and k-NN join algorithms outperform existing algorithms by up to 10 times in terms of query processing time.
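To make the grid idea concrete, the following sketch shows textbook grid-based candidate generation for a similarity join: with a cell width equal to the threshold eps, every matching pair lies in the same or an adjacent cell. The density-aware dynamic grid and the bottom-up cell merging proposed in the paper are not reproduced here.

```python
import numpy as np
from collections import defaultdict

def grid_join_candidates(points, eps):
    """Yield matching pairs (i, j) of a similarity join: with a cell
    width of eps, any pair within distance eps shares a cell or lies
    in adjacent cells. points is an (N, 2) array."""
    points = np.asarray(points, dtype=float)
    grid = defaultdict(list)
    for idx, p in enumerate(points):
        grid[tuple((p // eps).astype(int))].append(idx)
    for (cx, cy), members in grid.items():
        neighbors = [m for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     for m in grid.get((cx + dx, cy + dy), [])]
        for i in members:
            for j in neighbors:
                if i < j and np.linalg.norm(points[i] - points[j]) <= eps:
                    yield i, j
```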
Warunya WUNNASRI Jaruwat PAILAI Yusuke HAYASHI Tsukasa HIRASHIMA
This paper describes an investigation into the validity of an automatic assessment method for learner-built concept maps by comparing it with two well-known manual methods. We have previously proposed the Kit-Build (KB) concept map framework, in which a learner builds a concept map using only a provided set of components, known as the “kit”. In this framework, instant and automatic assessment of a learner-built concept map has been realized. We call this assessment method the “Kit-Build method” (KB method). The framework and assessment method have already been used practically in classrooms in various schools. To investigate the validity of this method, we conducted an experiment as a case study to compare its assessment results with those of two manual assessment methods. In this experiment, 22 university students participated as subjects and four as raters. It was found that the scores of the KB method had a very strong correlation with the scores of the two manual methods. The results of this experiment are one piece of evidence that the automatic assessment of the Kit-Build concept map can attain almost the same level of validity as well-known manual assessment methods.
Hiroomi HIKAWA Masayuki TAMAKI Hidetaka ITO
An FPGA-based hardware hand sign recognition system was proposed in our previous work. The system consisted of a preprocessing stage and a self-organizing map (SOM)-Hebb classifier. The training of the SOM-Hebb classifier was carried out by an off-chip computer using training vectors produced by the system. The recognition performance was reportedly improved by adding perturbation to the training data; the perturbation was added manually during the image capture process. This paper proposes a new off-chip training method with automatic performance improvement. To improve the system's recognition performance, the off-chip training system adds artificially generated perturbation to the training feature vectors. An advantage of the proposed method over adding scale perturbation to the images is its low computational cost, because the number of feature vector elements is much smaller than the number of pixels in an image. The feasibility of the proposed off-chip training was tested in simulations and experiments using American Sign Language (ASL). Simulation results showed that the proposed perturbation computation alters a feature vector so that it matches the one obtained from a scaled image. Experimental results revealed that the proposed off-chip training improved the recognition accuracy from 78.9% to 94.3%.
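One plausible reading of the perturbation step is sketched below: the training set is augmented by multiplicatively scaling each feature vector, which is far cheaper than rescaling whole images. The multiplicative form and the scale factors are assumptions for illustration, not necessarily the exact transform used in the paper.

```python
import numpy as np

def perturb_features(vectors, scales=(0.9, 0.95, 1.05, 1.1)):
    """Augment the SOM-Hebb training set by scaling each feature vector,
    emulating hand images captured at slightly different sizes."""
    vectors = np.asarray(vectors, dtype=float)
    return np.concatenate([vectors] + [vectors * s for s in scales], axis=0)
```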
Taichi YOSHIDA Masahiro IWAHASHI Hitoshi KIYA
In this paper, we propose a 2-layer lossless coding method for high dynamic range (HDR) images based on range compression and adaptive inverse tone-mapping. Recently, HDR images, which have a wider range of luminance than conventional low dynamic range (LDR) ones, have been frequently used in various fields. Since commonly used devices cannot yet display HDR images, 2-layer coding methods that decode not only HDR images but also their LDR versions have been proposed. We previously proposed a state-of-the-art 2-layer lossless coding method for HDR images that unfortunately produces a huge HDR file size. Hence, we introduce two ideas to reduce the HDR file size below that of the previous method. The proposed method achieves a high compression ratio, and experiments show that it outperforms the previous method and other conventional methods.
This paper reports the development of a landmine visualization system based on a complex-valued self-organizing map (CSOM) employing a one-dimensional (1-D) array of taper-walled tapered slot antennas (TSAs). Previously, we constructed a high-density two-dimensional array system to observe and classify the complex-amplitude texture of the scattered wave. That system is superior in its ability to adaptively distinguish landmines from other clutter. However, it used so many (144) antenna elements, mechanical radio-frequency (RF) switches, and cables that it is difficult to maintain and requires a long measurement time. The 1-D array system proposed here uses only 12 antennas and adopts electronic RF switches, resulting in easy maintenance and a quarter of the measurement time. Although we observe stripe noise specific to this 1-D system, we succeed in visualization with effective countermeasures.
Jung Hee CHEON Changmin LEE Hansol RYU
Multilinear maps have many cryptographic applications, including multipartite key exchange and indistinguishability obfuscation. Since the concept of a multilinear map was suggested, three kinds of candidate multilinear maps have been constructed. However, the security of these multilinear maps suffers from various attacks. In this paper, we overview the suggested multilinear maps and their cryptanalysis in diverse cases.
Daisuke YAMAMOTO Masaki MURASE Naohisa TAKAHASHI
A fisheye map lets users view both detailed and wide areas. The Focus+Glue+Context map is a fisheye map suited to Web map systems; it consists of a detailed map (i.e., Focus), a wide-area map (i.e., Context), and an area that absorbs the difference in scale between Focus and Context (i.e., Glue). Because Glue is compressed, the road density is too high to draw all of the roads in this area. Although existing methods can filter the roads to be drawn, they have problems with road density and connectivity in Glue. This paper proposes an improved method for filtering roads in Glue by applying a generalization method based on weighted strokes. In addition, a technique to speed up the proposed method by using a weighted-stroke database is described. A highly responsive prototype Web map system was developed and evaluated in terms of its connectivity, road density, and response time.
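As a simplified illustration of weighted-stroke filtering, the sketch below keeps the highest-weighted strokes until a road-length budget for the compressed Glue area is exhausted. The stroke schema, the weight field, and the budget rule are assumptions; the paper's generalization method additionally accounts for connectivity between Focus and Context.

```python
def filter_glue_strokes(strokes, budget_length):
    """Keep the highest-weighted strokes until a total road-length
    budget for the compressed Glue area is exhausted. Each stroke is
    a dict with 'weight' and 'length' fields (illustrative schema)."""
    kept, total = [], 0.0
    for stroke in sorted(strokes, key=lambda s: s["weight"], reverse=True):
        if total + stroke["length"] > budget_length:
            continue
        kept.append(stroke)
        total += stroke["length"]
    return kept
```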
Natsuki TAKAYAMA Hiroki TAKAHASHI
Partial blur segmentation is one of the most interesting topics in computer vision, and it has practical value. The generation of blur maps is a crucial part of partial blur segmentation, which involves producing a blur map and applying a segmentation algorithm to it. In this study, we address two important issues in order to improve the discrimination of blur maps: (1) estimating a local blur feature that is robust to variations in the intensity amplitude and (2) designing a scheme for generating blur maps. We propose the ANGHS (Amplitude-Normalized Gradient Histogram Span) as a local blur feature. ANGHS represents the heavy-tailedness of a gradient distribution, where the gradient is calculated from an image normalized by the intensity amplitude. ANGHS is robust to variations in the intensity amplitude and can handle local regions more appropriately than previously proposed local blur features. Blur maps are affected not only by local blur features but also by the contents and sizes of local regions and by the assignment of blur feature values to pixels. Thus, multiple-sized grids and edge-aware interpolation (EAI) are employed for these tasks to improve the discrimination of blur maps. The discrimination of the generated blur maps is evaluated visually and statistically on numerous partial blur images. Comparisons with the results obtained by state-of-the-art methods demonstrate the high discrimination of the blur maps generated using the proposed method.
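The following sketch conveys the spirit of the ANGHS feature: gradients are computed on a local patch normalized by its intensity amplitude, and the spread of the resulting gradient-magnitude distribution serves as a heavy-tailedness proxy. Using the standard deviation as the span measure is an assumption for illustration; the paper's exact histogram-span computation may differ.

```python
import numpy as np

def anghs_like_feature(patch):
    """Blur feature in the spirit of ANGHS: normalize the patch by its
    intensity amplitude, compute gradients, and return the spread of
    the gradient-magnitude distribution as a heavy-tailedness proxy
    (larger value -> sharper local region)."""
    patch = patch.astype(float)
    amplitude = patch.max() - patch.min() + 1e-8    # intensity amplitude
    gy, gx = np.gradient(patch / amplitude)
    return float(np.std(np.hypot(gx, gy)))
```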
Mingye JU Zhenfei GU Dengyin ZHANG Jian LIU
In this letter, we propose a novel technique to increase the visibility of hazy images. Benefiting from the atmospheric scattering model and the invariance principle for scene structure, we formulate structure constraint equations derived from two simulated inputs obtained by performing gamma correction on the input image. Relying on the inherent boundary constraint of the scattering function, the expected scene albedo can be well restored via these constraint equations. Extensive experimental results verify the power of the proposed dehazing technique.
Takayuki TOMIOKA Kazu MISHIBA Yuji OYAMADA Katsuya KONDO
Depth estimation for a lens-array type light field camera is a challenging problem because of sensor noise and radiometric distortion, i.e., a global brightness change among sub-aperture images caused by the vignetting effect of the micro-lenses. We propose a depth map estimation method that is robust against sensor noise and radiometric distortion. Our method first binarizes the sub-aperture images by applying the census transform. Next, the binarized images are matched by computing majority operations between corresponding bits and summing up the Hamming distances. The initial depth obtained by matching is ambiguous because of the extremely short baselines among sub-aperture images. After the initial depth estimation, we refine the result with the following refinement steps: we first approximate the initial depth as a set of depth planes, and then optimize the result of plane fitting with an edge-preserving smoothness term. Experiments show that our method outperforms conventional methods.
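A minimal sketch of the binarization and matching steps, assuming grayscale NumPy images: each pixel is census-transformed against its neighborhood, and matching costs are per-pixel Hamming distances between the resulting bit strings. The window size and the wrap-around border handling are simplifications, and the majority operation across sub-aperture images used in the paper is omitted.

```python
import numpy as np

def census_transform(img, window=3):
    """Binarize each pixel's neighborhood: a bit is 1 where the neighbor
    is brighter than the center (borders wrap around in this sketch)."""
    r = window // 2
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = np.roll(np.roll(img, dy, axis=0), dx, axis=1)
            bits.append((shifted > img).astype(np.uint8))
    return np.stack(bits, axis=-1)          # shape (H, W, window**2 - 1)

def hamming_cost(census_a, census_b):
    """Per-pixel Hamming distance between two census-transformed images."""
    return np.sum(census_a != census_b, axis=-1)
```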
Yuma KINOSHITA Sayaka SHIOTA Hitoshi KIYA
This paper proposes a new inverse tone mapping operator (TMO) with estimated parameters. The proposed inverse TMO is based on Reinhard's global operator, which is a well-known TMO. Inverse TM operations have two applications: generating an HDR image from an existing LDR one, and reconstructing an original HDR image from its mapped LDR version. The proposed operator can be applied to both. In the latter application, two parameters used in Reinhard's TMO, i.e., the key value α, which controls the brightness of the mapped LDR image, and the geometric mean $\overline{L}_w$ of the original HDR image, are generally required for carrying out the Reinhard-based inverse TMO. In this paper, we show that it is possible to estimate $\overline{L}_w$ from α under some conditions, while α can also be estimated from $\overline{L}_w$, so that a new inverse TMO with estimated parameters is proposed. Experimental results show that the proposed method outperforms conventional ones for both applications in terms of high structural similarity and low computational cost.
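For reference, Reinhard's global operator and its direct inversion are sketched below following the standard formulation; the parameter estimation contributed by the paper (recovering $\overline{L}_w$ from α, or vice versa) is not shown, and the small epsilon constants are numerical-safety assumptions.

```python
import numpy as np

def reinhard_forward(Lw, alpha):
    """Reinhard's global operator: scale the world luminance Lw by
    alpha over its geometric mean, then compress to [0, 1)."""
    Lw_bar = np.exp(np.mean(np.log(Lw + 1e-8)))     # geometric mean
    L = alpha * Lw / Lw_bar
    return L / (1.0 + L), Lw_bar

def reinhard_inverse(Ld, alpha, Lw_bar):
    """Invert the global operator given alpha and the geometric mean,
    the two parameters the proposed method estimates."""
    L = Ld / np.maximum(1.0 - Ld, 1e-8)
    return L * Lw_bar / alpha
```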
The problem of reproducing high dynamic range (HDR) images on devices with a restricted dynamic range has gained a lot of interest in the computer graphics community. Various approaches to this issue exist, spanning several research areas, including computer graphics, image processing, color vision, and physiology. However, most of these approaches suffer from several serious, well-known color distortion problems. Accordingly, this article presents a tone-mapping method that comprises a tone-mapping operator and a chromatic adaptation transform. The tone-mapping operator combines linear and non-linear mapping using a visual gamma based on the contrast sensitivity function (CSF) and the key value of the scene; the visual gamma is adopted to automatically control the dynamic range without parameters and to avoid both luminance and hue shifts in the displayed images, while the key value of the scene represents whether the scene is subjectively light, normal, or dark. The resulting image is then processed through a chromatic adaptation transform that emphasizes human visual perception (HVP). The experimental results show that the proposed method yields better color rendering performance than conventional methods in terms of subjective and quantitative quality and color reproduction.