Young-Sik KIM Hosung PARK Sang-Hyo KIM
To construct good DNA codes under biologically motivated constraints, it is important that they have a large minimum Hamming distance and a constant GC-content. It is also desirable to maximize the number of codewords for a given code length, minimum Hamming distance, and GC-content. Most previous constructions of DNA codes directly used quaternary constant-weight codes, because the alphabet of DNA strands is quaternary. In this paper, we propose new coding-theoretic constructions of DNA codes based on the binary Hadamard matrix obtained from a binary sequence with ideal autocorrelation. The proposed DNA codes have at least as many codewords as existing DNA codes constructed from quaternary constant-weight codes. In addition, we show numerically that, for codes of length 8 or 16, the proposed DNA code sets contain the largest number of codewords for given minimum reverse-complement Hamming distances among all previously known results.
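The two constraints above (constant GC-content and a large minimum reverse-complement Hamming distance, in one common variant of that definition) can be checked mechanically. The following is a minimal Python sketch; the two toy codewords are illustrative and not taken from the proposed construction.

```python
# Toy sketch of the design constraints above; the two codewords are
# illustrative examples, not codewords of the proposed construction.

COMPLEMENT = {"A": "T", "T": "A", "C": "G", "G": "C"}

def gc_content(word):
    """Number of G/C symbols in a DNA word (kept constant in a DNA code)."""
    return sum(1 for s in word if s in "GC")

def reverse_complement(word):
    return "".join(COMPLEMENT[s] for s in reversed(word))

def hamming(x, y):
    return sum(1 for a, b in zip(x, y) if a != b)

def min_rc_distance(code):
    """Minimum Hamming distance between each codeword and the reverse
    complement of every codeword (including itself)."""
    return min(hamming(x, reverse_complement(y)) for x in code for y in code)

code = ["AAACGGGT", "ACGGATCT"]
assert all(gc_content(w) == 4 for w in code)   # constant GC-content
print(min_rc_distance(code))                   # -> 4
```

A larger code would simply add more words to `code`; the two checks stay the same.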
Shota KASAI Yusuke KAMEDA Tomokazu ISHIKAWA Ichiro MATSUDA Susumu ITOH
We propose a method of interframe prediction for depth map coding that uses pixel-wise 3D motion estimated from encoded textures and depth maps. Using the 3D motion, an approximation of the depth map frame to be encoded is generated and used as a reference frame for block-wise motion compensation.
An Autonomous Underwater Vehicle (AUV) can be used to directly measure the geomagnetic map in the deep sea. Traditional map interpolation algorithms based on sampling continuation above sea level yield low resolution and accuracy, which restricts applications such as deep-sea geomagnetic positioning, navigation, search, and surveillance. In this letter, we propose a Three-Dimensional (3D) Compressive Sensing (CS) algorithm based on the real trajectory of the AUV, which can be optimized to achieve the required accuracy. The geomagnetic map recovered with the CS algorithm shows high precision compared with traditional interpolation schemes, and the magnetic positioning accuracy can thereby be greatly improved.
Ai ISHIDA Keita EMURA Goichiro HANAOKA Yusuke SAKAI Keisuke TANAKA
Group signatures are a class of digital signatures with enhanced privacy. Using this type of signature, a user can sign a message on behalf of a specific group without revealing his identity, but in the case of a dispute, an authority can expose the identity of the signer. However, we do not always need to know the specific identity of the signer of a signature. In this paper, we propose the notion of deniable group signatures, in which the authority can issue a proof showing that a specified user is NOT the signer of a signature, without revealing the actual signer. We point out that existing efficient non-interactive zero-knowledge proof systems cannot be straightforwardly applied to prove such a statement. We circumvent this problem by giving a fairly practical construction that extends the Groth group signature scheme (ASIACRYPT 2007). In particular, a denial proof in our scheme consists of 96 group elements, which is about twice the size of a signature in the Groth scheme. The proposed scheme is provably secure under the same assumptions as those of the Groth scheme.
Md. Al-Amin KHANDAKER Yasuyuki NOGAMI
Scalar multiplication over higher-degree rational point groups is often regarded as the bottleneck for faster pairing-based cryptography. This paper presents a skew Frobenius mapping technique in the sub-field isomorphic sextic twisted curve of the Kachisa-Schaefer-Scott (KSS) pairing-friendly curve of embedding degree 18, in the context of Ate-based pairing. Utilizing the skew Frobenius map along with a multi-scalar multiplication procedure, an efficient scalar multiplication method for the KSS curve is proposed. In addition to the theoretical proposal, this paper also presents a comparative simulation of the proposed approach against the plain binary method, the sliding window method, and the non-adjacent form (NAF) method for scalar multiplication. The simulation shows that the proposed method is about 60 times faster than plain implementations of the compared methods.
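As a small self-contained illustration of one of the baselines mentioned above (not of the skew Frobenius technique itself), the non-adjacent form recodes a scalar with digits in {-1, 0, 1} so that no two adjacent digits are nonzero:

```python
def naf(k):
    """Non-adjacent form of a positive integer k, least-significant digit first."""
    digits = []
    while k > 0:
        if k & 1:
            d = 2 - (k % 4)   # +1 or -1, chosen so that (k - d) is divisible by 4
            k -= d
        else:
            d = 0
        digits.append(d)
        k >>= 1
    return digits

k = 55                       # binary 110111: five nonzero bits
digits = naf(k)              # [-1, 0, 0, -1, 0, 0, 1]: only three nonzero digits
assert sum(d * 2**i for i, d in enumerate(digits)) == k
```

In a double-and-add loop, each nonzero digit costs a point addition (or subtraction, since negating an elliptic-curve point is essentially free), so fewer nonzero digits means fewer additions.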
Naotake KAMIURA Shoji KOBASHI Manabu NII Takayuki YUMOTO Ichiro YAMAMOTO
In this paper, we present a method of analyzing relationships between items in specific health examination data, as part of basic research addressing the increase in lifestyle-related diseases. We use self-organizing maps, and pick up data from the examination dataset according to a condition specified by some item values. We then focus on twelve items such as hemoglobin A1c (HbA1c), aspartate transaminase (AST), alanine transaminase (ALT), gamma-glutamyl transpeptidase (γ-GTP), and triglyceride (TG). We generate the training data presented to a map by calculating the difference between item values in two successive years and normalizing the results. We label neurons in the map using one of the item values of the training data as a parameter. We finally examine the relationships between items by comparing the results of labeling (clusters formed in the map) with each other. From experimental results, we separately reveal the relationships among HbA1c, AST, ALT, γ-GTP, and TG in the unfavorable case of increasing HbA1c values and in the favorable case of decreasing HbA1c values.
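For readers unfamiliar with self-organizing maps, the following is a minimal generic SOM training loop in Python (a sketch only; the paper's map size, items, and labeling procedure are not reproduced here). The training vectors would correspond to the normalized year-to-year differences described above.

```python
import random

def train_som(data, rows, cols, epochs=50, seed=1):
    """Train a rows x cols self-organizing map on a list of equal-length vectors."""
    rng = random.Random(seed)
    dim = len(data[0])
    w = [[[rng.uniform(-1, 1) for _ in range(dim)] for _ in range(cols)]
         for _ in range(rows)]
    for e in range(epochs):
        lr = 0.5 * (1 - e / epochs)                          # decaying learning rate
        radius = max(rows, cols) / 2 * (1 - e / epochs) + 1  # shrinking neighborhood
        for x in data:
            # find the best-matching unit (BMU) for this training vector
            br, bc = min(((r, c) for r in range(rows) for c in range(cols)),
                         key=lambda rc: sum((w[rc[0]][rc[1]][i] - x[i]) ** 2
                                            for i in range(dim)))
            # pull the BMU and its grid neighbours toward the vector
            for r in range(rows):
                for c in range(cols):
                    if (r - br) ** 2 + (c - bc) ** 2 <= radius ** 2:
                        for i in range(dim):
                            w[r][c][i] += lr * (x[i] - w[r][c][i])
    return w
```

After training, each neuron's weight vector summarizes a cluster of inputs, which is what the labeling step in the paper compares across items.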
Yousuke SANO Kazuaki TAKEDA Satoshi NAGATA Takehiro NAKAMURA Xiaohang CHEN Anxin LI Xu ZHANG Jiang HUILING Kazuhiko FUKAWA
Non-orthogonal multiple access (NOMA) is a promising multiple access scheme for further improving the spectrum efficiency compared to orthogonal multiple access (OMA) in the 5th Generation (5G) mobile communication systems. As inter-user interference cancellers for NOMA, two kinds of receiver structures are considered. One is the reduced-complexity maximum likelihood receiver (R-ML) and the other is the codeword level interference canceller (CWIC). In this paper, we show that the R-ML is superior to the CWIC in terms of scheduling flexibility. In addition, we propose a link-to-system (L2S) mapping scheme for the R-ML to conduct a system-level evaluation, and show that the proposed scheme accurately predicts the block error rate (BLER) performance of the R-ML. The proposed L2S mapping scheme also demonstrates that the system-level throughput performance of the R-ML is higher than that of the CWIC thanks to its scheduling flexibility.
JinAn XU Yufeng CHEN Kuang RU Yujie ZHANG Kenji ARAKI
Named Entity translation equivalent extraction plays a critical role in machine translation (MT) and cross-language information retrieval (CLIR). Traditional methods are often based on large-scale parallel or comparable corpora. However, the applicability of these studies is constrained, mainly by the scarcity of parallel corpora of the required scale, especially for the Chinese-Japanese language pair. In this paper, we propose a method that considers the characteristics of Chinese and Japanese to automatically extract Chinese-Japanese Named Entity (NE) translation equivalents from monolingual corpora based on inductive learning (IL). The method adopts the Chinese Hanzi and Japanese Kanji Mapping Table (HKMT) to calculate the similarity of NE instances between Japanese and Chinese. Then, we use IL to obtain partial translation rules for NEs by extracting the differing parts from high-similarity NE instances in Chinese and Japanese. Finally, a feedback process updates the Chinese-Japanese NE similarity and the rule sets. Experimental results show that our simple, efficient method overcomes the insufficiency of traditional methods, which depend heavily on bilingual resources. Compared with other methods, our method combines the language features of Chinese and Japanese with IL to automatically extract NE pairs. Our use of weakly correlated bilingual text sets and minimal additional knowledge to extract NE pairs effectively reduces the cost of building corpora and the need for additional knowledge. Our method may help to build a large-scale Chinese-Japanese NE translation dictionary from monolingual corpora.
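To make the Hanzi-Kanji mapping idea concrete, here is a deliberately tiny Python sketch. The five-entry mapping table and the similarity formula (fraction of mapped characters shared) are hypothetical simplifications for illustration, not the paper's actual table or measure.

```python
# Hypothetical miniature of the HKMT: Japanese Kanji -> simplified Chinese Hanzi.
HKMT = {"図": "图", "書": "书", "館": "馆", "東": "东", "京": "京"}

def similarity(ja, zh):
    """Fraction of Japanese characters whose mapped Hanzi appears in the
    Chinese NE; a simplified stand-in for the HKMT-based similarity."""
    mapped = [HKMT.get(ch, ch) for ch in ja]
    hits = sum(1 for ch in mapped if ch in zh)
    return hits / max(len(ja), len(zh))

print(similarity("東京図書館", "北京图书馆"))  # -> 0.8
```

High-similarity pairs like this one would then feed the IL step, which extracts the differing parts (here 東京 vs. 北京) as candidate partial translation rules.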
Junsu KIM Kyong-Ha LEE Myoung-Ho KIM
With the rapid increase in the number of applications as well as the sizes of data, multi-query processing on the MapReduce framework has gained much attention. Meanwhile, there has been much interest in skyline query processing due to its power in multi-criteria decision making and analysis. Recently, there have been attempts to optimize multi-query processing in MapReduce. However, they are not appropriate for processing multiple skyline queries efficiently, and they also require modifications of the Hadoop internals. In this paper, we propose an efficient method for processing multiple skyline queries with MapReduce without any modification of the Hadoop internals. Through various experiments, we show that our approach outperforms previous studies by orders of magnitude.
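As background, a single skyline query keeps exactly the points not dominated by any other point. A minimal single-machine sketch follows (the MapReduce distribution and multi-query sharing are the paper's contribution and are not shown); here smaller values are better in every dimension:

```python
def dominates(p, q):
    """p dominates q if p is at least as good everywhere and strictly
    better somewhere (smaller is better)."""
    return all(a <= b for a, b in zip(p, q)) and any(a < b for a, b in zip(p, q))

def skyline(points):
    return [p for p in points if not any(dominates(q, p) for q in points)]

hotels = [(100, 5), (80, 3), (120, 1), (90, 4), (80, 2)]  # (price, distance)
print(skyline(hotels))  # -> [(120, 1), (80, 2)]
```

Each MapReduce-based method essentially distributes this dominance test over partitions of the data; processing many such queries at once is what makes sharing work across queries attractive.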
Yining XU Ittetsu TANIGUCHI Hiroyuki TOMIYAMA
Task mapping is one of the most important design processes in embedded manycore systems. This paper proposes a static task mapping technique for manycore real-time systems. The technique minimizes the number of cores while satisfying deadline constraints of individual tasks.
This paper proposes the 0-1-A-Ā LUT, a new programmable logic using atom switches, and a delay-optimal mapping algorithm for it. An atom switch is a non-volatile memory device of very small geometry that is fabricated between the metal layers of a VLSI, and it can be used as a switch with very small on-resistance and parasitic capacitance. While considerable area reduction of the Look-Up Tables (LUTs) used in conventional Field Programmable Gate Arrays (FPGAs) has been achieved by simply replacing each SRAM element with a memory element using a pair of atom switches, our 0-1-A-Ā LUT achieves further area and delay reduction. Unlike the conventional atom-switch-based LUT, in which all k input signals are fed to a MUX, one of the input signals is fed to the switch array, resulting in area reduction due to the reduced number of MUX inputs from 2^k to 2^(k-1), as well as delay reduction due to the reduced fanout load of the input buffers. Since the fanout of these input buffers depends on the mapped logic function, this paper also proposes technology mapping algorithms that select logic functions with fewer input-buffer fanouts to achieve further delay reduction. In our experiments, the circuit delay using our k-LUT is 0.94% smaller in the best case compared with the conventional atom-switch-based k-LUT.
Ting WANG Tiansheng XU Zheng TANG Yuki TODO
Schema-level Linked Open Data (LOD) and knowledge described in Chinese are an important part of the LOD project. Previous work generally ignored word-order sensitivity and polysemy in Chinese, or could not deal with the out-of-vocabulary (OOV) mapping task. There is still no efficient system for large-scale Chinese ontology mapping. To solve this problem, this study proposes a novel TongYiCiCiLin (TYCCL) and sequence-alignment-based Chinese ontology mapping model, called TongSACOM, to evaluate Chinese concept similarity in the LOD environment. Firstly, an improved TYCCL-based similarity algorithm is proposed to compute the similarity between atomic Chinese concepts included in TYCCL. Secondly, a combined algorithm based on global sequence alignment and the improved TYCCL is proposed to evaluate the similarity between Chinese OOV concepts. Finally, we compare TongSACOM with other typical similarity computing algorithms, and the results show that it has higher overall performance and usability. This study may have important practical significance for promoting Chinese knowledge sharing, reuse, and interoperation, and it can be widely applied in related areas of Chinese information processing.
Hironori TAKIMOTO Syuhei HITOMI Hitoshi YAMAUCHI Mitsuyoshi KISHIHARA Kensuke OKUBO
It is estimated that 80% of the information entering the human brain is obtained through the eyes. Therefore, it is commonly believed that drawing human attention to particular objects is effective in assisting human activities. In this paper, we propose a novel image modification method for guiding user attention to specific regions of interest by using a novel saliency map model based on spatial frequency components. We modify the frequency components on the basis of the obtained saliency map to decrease the visual saliency outside the specified region. By applying our modification method to an image, human attention can be guided to the specified region because the saliency inside the region is higher than that outside the region. Using gaze measurements, we show that the proposed saliency map matches well with the distribution of actual human attention. Moreover, we evaluate the effectiveness of the proposed modification method by using an eye tracking system.
Ngoc-Giao PHAM Suk-Hwan LEE Ki-Ryong KWON
Nowadays, vector map content is widely used in everyday life, science, and the military. Because vector maps have great value and their production is expensive, a large volume of vector map data is attacked, stolen, and illegally distributed by pirates. Thus, vector map data must be encrypted before being stored and transmitted, in order to control access and prevent illegal copying. This paper presents a novel perceptual encryption algorithm for the secure storage and transmission of vector map data. Polyline data of vector maps are extracted to interpolate a spline curve, which is represented by an interpolating vector, curvature degree coefficients, and control points. The control points of the spline curve are transformed and selectively encrypted in the frequency domain of the discrete cosine transform, and are then used in an inverse interpolation to generate the encrypted vector map. Experimental results show that the entire vector map is altered after the encryption process, and that the proposed algorithm is very effective for large datasets of vector maps.
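The flow of such a scheme can be sketched as follows. This is a simplified stand-in, not the paper's algorithm: it applies a DCT directly to polyline vertex coordinates and perturbs the low-frequency half of the coefficients with a key-derived stream, whereas the paper encrypts spline control points and uses its own selection and encryption rules.

```python
import math, random

def dct(x):   # DCT-II, unnormalized
    n = len(x)
    return [sum(v * math.cos(math.pi * (i + 0.5) * k / n) for i, v in enumerate(x))
            for k in range(n)]

def idct(c):  # exact inverse of the DCT-II above
    n = len(c)
    return [c[0] / n + (2 / n) * sum(c[k] * math.cos(math.pi * (i + 0.5) * k / n)
                                     for k in range(1, n))
            for i in range(n)]

def encrypt(points, key, sign=+1):
    """Selectively perturb low-frequency DCT coefficients of each coordinate
    with a key-derived stream; calling again with sign=-1 decrypts."""
    rng = random.Random(key)
    out = []
    for coord in zip(*points):           # process all x's, then all y's
        c = dct(list(coord))
        for k in range(len(c) // 2):     # 'selective': low half of the band
            c[k] += sign * rng.uniform(-100, 100)  # large amplitude scrambles shape
        out.append(idct(c))
    return list(zip(*out))
```

Because only the transform coefficients are scrambled, decryption with the correct key restores the original vertices exactly (up to floating-point error), while the encrypted map is geometrically meaningless.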
Code division multiple access (CDMA) based on direct-sequence (DS) spread spectrum modulation using spreading codes is one of the standard technologies for multiple-access communications. In asynchronous DS/CDMA communications, spreading codes with appropriate negative auto-correlation can reduce the bit error rate (BER) compared with uncorrelated sequences. In this letter, we design new binary functions for generating chaotic binary sequences with negative auto-correlation using the Bernoulli chaotic map. Such binary functions can be applied to the generation of spreading codes with negative auto-correlation based on existing spreading codes (e.g., shift register sequences).
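To illustrate the idea with a hand-made binary function (not one of the letter's designs): the Bernoulli map x → 2x mod 1 shifts the binary expansion of x, so its symbolic dynamics is an i.i.d. fair-bit stream; the sketch below simulates that stream directly, since naive double-precision iteration of 2x mod 1 collapses to zero within about 53 steps. The function below is piecewise constant on the eighths of [0,1), i.e., a function of the first three binary digits, and a direct enumeration over the 16 equiprobable digit patterns shows its lag-one autocorrelation is exactly -1/2, which the empirical estimate confirms.

```python
import random

# f(d1, d2, d3) in {+1, -1}: the value of the binary function on the
# interval [0.d1 d2 d3, 0.d1 d2 d3 + 1/8) of [0, 1).
F = {
    (0, 0, 0): -1, (0, 0, 1): +1, (0, 1, 0): -1, (0, 1, 1): -1,
    (1, 0, 0): +1, (1, 0, 1): +1, (1, 1, 0): -1, (1, 1, 1): +1,
}

def chaotic_binary_sequence(length, seed=0):
    """Apply F along a Bernoulli-map orbit, simulated as its i.i.d. bit stream."""
    rng = random.Random(seed)
    bits = [rng.randrange(2) for _ in range(length + 2)]
    return [F[tuple(bits[n:n + 3])] for n in range(length)]

def autocorrelation(seq, lag):
    """Raw product average; F has zero mean, so no mean subtraction is needed."""
    n = len(seq) - lag
    return sum(seq[i] * seq[i + lag] for i in range(n)) / n

seq = chaotic_binary_sequence(200000)
print(autocorrelation(seq, 1))   # close to -0.5
```

A spreading code built from such a sequence inherits the negative lag-one correlation, which is the property exploited to lower the BER in asynchronous DS/CDMA.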
Seungtae HONG Kyongseok PARK Chae-Deok LIM Jae-Woo CHANG
To analyze large-scale data efficiently, studies on Hadoop, one of the most popular MapReduce frameworks, have been actively conducted. Meanwhile, most large-scale data analysis applications, e.g., data clustering, must execute the same map and reduce functions repeatedly. However, Hadoop cannot provide optimal performance for iterative MapReduce jobs because it derives a result with a single phase of map and reduce functions. To solve this problem, in this paper, we propose a new, efficient resource management framework for iterative MapReduce processing in large-scale data analysis. First, we design an iterative job state machine for managing iterative MapReduce jobs. Second, we propose an invariant data caching mechanism for reducing the I/O costs of data accesses. Third, we propose an iterative resource management technique for efficiently managing the resources of a Hadoop cluster. Fourth, we devise a stop-condition check mechanism for preventing unnecessary computation. Finally, we show the performance superiority of the proposed framework by comparing it with existing frameworks.
Arata KAWAMURA Hiro IGARASHI Youji IIGUNI
Image-to-sound mapping is a technique that transforms an image into a sound signal, which is subsequently treated as a sound spectrogram. In general, the transformed sound differs from a human speech signal. Herein, an efficient image-to-sound mapping method that provides an understandable speech signal without any training is proposed. To synthesize such a speech signal, the proposed method utilizes a multi-column image and a speech spectral phase obtained from a long-time observation of the speech. The original image can be retrieved from the sound spectrogram of the synthesized speech signal. The synthesized speech and the reconstructed image qualities are evaluated using objective tests.
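A bare-bones version of the basic mapping (not the proposed method) can be written as follows: each image column becomes the magnitude spectrum of one frame, a phase is attached, and an inverse DFT per frame with overlap-add yields the signal. Here a fixed pseudo-random phase per frequency bin stands in for the observed speech spectral phase that the authors use to make the output intelligible.

```python
import cmath, random

def idft(spectrum):
    """Inverse DFT (real part), x_t = (1/n) * sum_k X_k e^{2*pi*i*k*t/n}."""
    n = len(spectrum)
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n
            for t in range(n)]

def image_to_sound(image, hop):
    """image: list of columns, each a list of non-negative magnitudes
    (lowest frequency first); returns the synthesized sample list."""
    height = len(image[0])
    n = 2 * height                       # frame length
    frames = []
    for col in image:
        spec = [0j] * n
        for k, mag in enumerate(col):
            if k == 0:
                spec[0] = complex(mag)   # DC bin kept real
            else:
                phase = cmath.exp(1j * random.Random(k).uniform(0, 2 * cmath.pi))
                spec[k] = mag * phase
                spec[n - k] = (mag * phase).conjugate()  # Hermitian -> real frame
        frames.append(idft(spec))
    out = [0.0] * (hop * (len(frames) - 1) + n)
    for i, frame in enumerate(frames):   # overlap-add the frames
        for t, v in enumerate(frame):
            out[i * hop + t] += v
    return out

image = [[1.0, 0.5, 0.2, 0.1]] * 3       # a tiny 4-pixel-high, 3-column "image"
sound = image_to_sound(image, hop=4)
```

Because the phase per bin is fixed and known, the magnitudes (and hence the image) can be read back off the spectrogram of the output, which is the invertibility property the abstract mentions.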
Brahmastro KRESNARAMAN Yasutomo KAWANISHI Daisuke DEGUCHI Tomokazu TAKAHASHI Yoshito MEKADA Ichiro IDE Hiroshi MURASE
This paper addresses the attribute recognition problem, a field of research dominated by studies in the visible spectrum. Only a few works are available in the thermal spectrum, which is fundamentally different from the visible one. This research performs recognition specifically on wearable attributes, such as glasses and masks. These attributes are usually small relative to the human body, which itself exhibits large intra-class variation, so recognizing them is not an easy task. Our method utilizes a decomposition framework based on Robust Principal Component Analysis (RPCA) to extract the attribute information for recognition. However, because it is difficult to separate the body and the attributes without any prior knowledge, noise is also extracted along with the attributes, hampering the recognition capability. We therefore make use of prior knowledge, namely the location where each attribute is likely to be present. This knowledge, referred to as the Probability Map, is incorporated as a weight in the decomposition by RPCA. Using the Probability Map, we achieve an attribute-wise decomposition. The results show a significant improvement of this approach over the baseline, and the proposed method achieved the highest performance on average, with a 0.83 F-score.
To address the heavy rendering load and unstable frame rate when rendering large terrains, this paper proposes an algorithm based on geometry clipmaps. Triangle meshes are generated from a few tessellation control points in the GPU tessellation shader. 'Cracks' caused by resolution differences between adjacent levels are eliminated by modifying the outer tessellation level factor of the edges shared between levels. Experimental results show that the algorithm improves rendering efficiency and frame-rate stability in terrain navigation.
This paper presents a set of procedures that blend GNSS and V2V communication to improve the performance of a stand-alone on-board GNSS receiver and to ensure mutual positioning with a bounded error. A particle filter algorithm is applied to enhance the mutual positioning of vehicles; it fuses the information provided by the GNSS receiver, wireless measurements in vehicular environments, the odometer, and digital road map data including reachability and zone probabilities. A measurement-based statistical model of relative distance as a function of Time-of-Arrival is obtained experimentally. The effect of the number of vehicles collaborating in the mutual positioning procedure is investigated in terms of positioning accuracy and network performance through realistic simulation studies, and the proposed mutual positioning procedure is experimentally evaluated with a fleet of five vehicles equipped with IEEE 802.11p radio modems. Collaboration in a VANET improves the availability of position measurements and their accuracy by up to 40% compared with the stand-alone GNSS receiver.
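The core predict-weight-resample cycle of a particle filter can be sketched in one dimension as follows. This is a generic illustration only; the paper fuses GNSS, V2V Time-of-Arrival measurements, odometry, and road map priors, none of which are modeled here.

```python
import math, random

def particle_filter_step(particles, control, measurement, rng,
                         sigma_u=0.2, sigma_z=0.5):
    # predict: propagate each particle with the odometry control plus noise
    particles = [p + control + rng.gauss(0, sigma_u) for p in particles]
    # update: weight each particle by the likelihood of the range-like measurement
    weights = [math.exp(-0.5 * ((measurement - p) / sigma_z) ** 2) for p in particles]
    total = sum(weights)
    weights = [w / total for w in weights]
    # resample: draw a new particle set proportionally to the weights
    return rng.choices(particles, weights=weights, k=len(particles))

rng = random.Random(7)
true_pos = 0.0
particles = [rng.uniform(-10, 10) for _ in range(1000)]
for _ in range(20):
    true_pos += 1.0                      # vehicle moves 1 m per step
    z = true_pos + rng.gauss(0, 0.5)     # noisy position-like measurement
    particles = particle_filter_step(particles, 1.0, z, rng)
estimate = sum(particles) / len(particles)
```

In the paper's setting, the likelihood step would combine several sensors at once (GNSS, Time-of-Arrival, map constraints), each contributing a factor to the particle weights.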