
Keyword Search Result

[Keyword] REST (332 hits)

Results 41-60 of 332

  • A Multiple Cyclic-Route Generation Method with Route Length Constraint Considering Point-of-Interests

    Tensei NISHIMURA  Kazuaki ISHIKAWA  Toshinori TAKAYAMA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER-Intelligent Transport System
    Vol: E102-A No:4  Page(s): 641-653

    With the spread of map applications, route generation has become a familiar function. Most route generation methods search for a route from a starting point to a destination with the shortest time or shortest length, but more enjoyable route generation has recently attracted attention. In particular, cyclic-route generation for strolling requires suggesting to a user more than one route passing through several POIs (points of interest) so as to satisfy the user's preferences as much as possible. In this paper, we propose a multiple cyclic-route generation method with a route length constraint that considers POIs. First, our method finds a set of reference points based on the route length constraint. Second, we search for a non-cyclic route from one reference point to the next, and finally we generate a cyclic route by connecting these non-cyclic routes. Compared with previous methods, our method generates cyclic routes closer to the route length constraint, reduces the number of points passed through more than once by approximately 80%, and increases the number of POIs passed by approximately 1.49 times.
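
    A minimal sketch of the connect-the-reference-points idea described above, assuming a networkx road graph with a "length" edge attribute; the reference points here are chosen at random purely for illustration, not by the authors' selection procedure.

```python
# Hedged sketch: chain shortest paths through reference points into a cycle.
# Reference-point selection is a placeholder (random nodes), not the paper's method.
import random
import networkx as nx

def cyclic_route_sketch(G, start, reference_points, weight="length"):
    """Build a cyclic route by chaining shortest paths through reference points."""
    waypoints = [start] + list(reference_points) + [start]
    route = [start]
    for u, v in zip(waypoints, waypoints[1:]):
        leg = nx.shortest_path(G, u, v, weight=weight)
        route.extend(leg[1:])          # drop the repeated first node of each leg
    return route

# Toy usage on a grid graph standing in for a road network.
G = nx.grid_2d_graph(10, 10)
nx.set_edge_attributes(G, 1.0, "length")
refs = random.sample([n for n in G if n != (0, 0)], 3)
print(cyclic_route_sketch(G, (0, 0), refs))
```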

  • Recursive Nearest Neighbor Graph Partitioning for Extreme Multi-Label Learning

    Yukihiro TAGAMI  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2018/11/30  Vol: E102-D No:3  Page(s): 579-587

    As the data size of Web-related multi-label classification problems continues to increase, the label space has also grown extremely large. For example, the number of labels appearing in Web page tagging and E-commerce recommendation tasks reaches hundreds of thousands or even millions. In this paper, we propose a graph partitioning tree (GPT), which is a novel approach for extreme multi-label learning. At an internal node of the tree, the GPT learns a linear separator to partition the feature space, considering an approximate k-nearest neighbor graph of the label vectors. We also developed a simple sequential optimization procedure for learning the linear binary classifiers. Extensive experiments on large-scale real-world data sets showed that our method achieves better prediction accuracy than state-of-the-art tree-based methods, while maintaining fast prediction.

  • Meet-in-the-Middle Key Recovery Attacks on a Single-Key Two-Round Even-Mansour Cipher

    Takanori ISOBE  Kyoji SHIBUTANI  

     
    PAPER
    Vol: E102-A No:1  Page(s): 17-26

    We propose new key recovery attacks on the two-round single-key n-bit Even-Mansour ciphers (2SEM), which were proved by Chen et al. to be secure up to 2^(2n/3) queries against distinguishing attacks. Our attacks are based on the meet-in-the-middle technique, which can significantly reduce the data complexity. In particular, we introduce novel matching techniques which enable us to compute one of the two permutations without knowing part of the key information. Moreover, we present two improvements of the proposed attack: one significantly reduces the data complexity and the other reduces the time complexity. Compared with the previously known attacks, our attack is the first to break the birthday barrier on the data complexity, although it requires chosen plaintexts. When the block size is 64 bits, our attack reduces the required data from 2^45 known plaintexts to 2^26 chosen plaintexts while keeping the time complexity of the previous attacks. Furthermore, by increasing the time complexity up to 2^62, the required data is further reduced to 2^8, giving DT=2^70, where DT is the product of the data and time complexities. We show that our data-optimized attack requires DT=2^(n+6) in general. Since the proved lower bound on DT for the single-key one-round n-bit Even-Mansour cipher is 2^n, our results imply that adding one round to the one-round construction does not sufficiently improve the security against key recovery attacks. Finally, we propose a time-optimized attack on 2SEM, in which we aim to minimize the number of invocations of the internal permutations.
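
    For reference, a toy sketch of the two-round single-key Even-Mansour construction (2SEM) targeted above, using a 16-bit block and seeded random permutations as stand-ins for the public permutations; it illustrates the construction only, not the meet-in-the-middle attack itself.

```python
# Hedged sketch of 2SEM: E_k(x) = P2(P1(x XOR k) XOR k) XOR k with public
# random permutations P1, P2. Toy block size chosen for illustration only.
import random

N_BITS = 16
SIZE = 1 << N_BITS

def random_permutation(seed):
    rng = random.Random(seed)
    table = list(range(SIZE))
    rng.shuffle(table)
    return table

P1 = random_permutation(1)
P2 = random_permutation(2)

def encrypt_2sem(x, k):
    y = P1[x ^ k] ^ k
    return P2[y] ^ k

key = 0xBEEF
print(hex(encrypt_2sem(0x1234, key)))
```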

  • Development of License Plate Recognition on Complex Scene with Plate-Style Classification and Confidence Scoring Based on KNN

    Vince Jebryl MONTERO  Yong-Jin JEONG  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2018/08/24  Vol: E101-D No:12  Page(s): 3181-3189

    This paper presents an approach for developing an automatic license plate recognition (ALPR) system for complex scenes. A plate-style classification method is also proposed to address the inherent challenges for ALPR in a system that uses multiple plate styles (e.g., different fonts, multiple plate layouts, variations in character sequences), which is the case in the current Philippine license plate system. Methods are proposed for each ALPR module: plate detection, character segmentation, and character recognition. K-nearest neighbor (KNN) is used as the classifier for character recognition, together with a proposed confidence scoring to rate the decisions made by the classifier. A small dataset of Philippine license plates, containing the features of complex scenarios relevant to ALPR, is prepared. Using the proposed system on this dataset, the performance is evaluated on different categories of complex scenes. The proposed algorithm shows promising results and yields an overall accuracy higher than that of existing ALPR systems on the dataset, which consists mostly of complex scenes.
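
    A minimal sketch of KNN character recognition with a simple confidence score (the fraction of the k nearest neighbors agreeing with the predicted label); the feature extraction and the exact scoring rule proposed in the paper are not reproduced here.

```python
# Hedged sketch: KNN classification plus a neighbor-agreement confidence score.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

def knn_with_confidence(train_x, train_y, test_x, k=5):
    clf = KNeighborsClassifier(n_neighbors=k)
    clf.fit(train_x, train_y)
    proba = clf.predict_proba(test_x)          # per-class neighbor fractions
    pred_idx = proba.argmax(axis=1)
    labels = clf.classes_[pred_idx]
    confidence = proba[np.arange(len(test_x)), pred_idx]
    return labels, confidence

# Toy usage with random "character image" feature vectors.
rng = np.random.default_rng(0)
X, y = rng.normal(size=(60, 16)), rng.integers(0, 3, size=60)
print(knn_with_confidence(X, y, rng.normal(size=(2, 16))))
```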

  • Unrestricted-Rate Parallel Random Input-Output Codes for Multilevel Flash Memory

    Shan LU  Hiroshi KAMABE  Jun CHENG  Akira YAMAWAKI  

     
    PAPER-Coding theory for storage
    Vol: E101-A No:12  Page(s): 2135-2140

    Recent years have seen increasing efforts to improve the input/output performance of multilevel flash memory. In this regard, we propose a coding scheme for a two-page unrestricted-rate parallel random input-output (P-RIO) code, which enables different code rates to be used for each page of multilevel memory. On the second page, the set of cell-state vectors for each message consists of two complementary vectors of length n. There are a total of 2^(n-1) disjoint sets, which guarantees that the 2^(n-1) messages are uniquely decodable. On the first page, the set of cell-state vectors for each message consists of all weight-u vectors whose non-zero elements are restricted to the same (2u-1) positions, where the non-negative integer u is at most half of the code length. Finding disjoint cell-state vector sets on the first page is equivalent to constructing constant-weight codes, and the number of disjoint sets equals the best-known number of codewords in such constant-weight codes. Our coding scheme is constructive, and the code length is arbitrary. The sum rates of our proposed codes are higher than those of previous work.
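
    A small sketch of the second-page encoding described above, assuming the canonical representative of each complementary pair is the vector whose leading bit is 0 (an indexing choice made here for illustration); it checks that all 2^(n-1) messages are uniquely decodable.

```python
# Hedged sketch: each message maps to a pair of complementary length-N binary
# vectors, giving 2^(N-1) disjoint, uniquely decodable sets.
N = 4  # code length

def encode_page2(message):
    """Return the pair {v, complement(v)} for message in [0, 2^(N-1))."""
    assert 0 <= message < 1 << (N - 1)
    v = message                      # canonical representative has leading bit 0
    return v, v ^ ((1 << N) - 1)     # complement flips every bit

def decode_page2(cell_state):
    """Recover the message from either vector of the pair."""
    if cell_state >> (N - 1):        # leading bit 1 -> take the complement
        cell_state ^= (1 << N) - 1
    return cell_state

for m in range(1 << (N - 1)):
    v, vbar = encode_page2(m)
    assert decode_page2(v) == decode_page2(vbar) == m
print("all", 1 << (N - 1), "messages round-trip correctly")
```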

  • Speeding up Extreme Multi-Label Classifier by Approximate Nearest Neighbor Search

    Yukihiro TAGAMI  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2018/08/06  Vol: E101-D No:11  Page(s): 2784-2794

    Extreme multi-label classification methods have been widely used in Web-scale classification tasks such as Web page tagging and product recommendation. In this paper, we present a novel graph embedding method called “AnnexML”. At the training step, AnnexML constructs a k-nearest neighbor graph of label vectors and attempts to reproduce the graph structure in the embedding space. The prediction is efficiently performed by using an approximate nearest neighbor search method that efficiently explores the learned k-nearest neighbor graph in the embedding space. We conducted evaluations on several large-scale real-world data sets and compared our method with recent state-of-the-art methods. Experimental results show that our AnnexML can significantly improve prediction accuracy, especially on data sets that have a larger label space. In addition, AnnexML improves the trade-off between prediction time and accuracy. At the same level of accuracy, the prediction time of AnnexML was up to 58 times faster than that of SLEEC, a state-of-the-art embedding-based method.
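
    A minimal sketch of the prediction step described above: embed a query, retrieve nearby training points, and aggregate their label vectors. A fixed random projection stands in for the learned embedding, and exact nearest-neighbor search stands in for AnnexML's approximate kNN-graph exploration.

```python
# Hedged sketch of embedding-based multi-label prediction via neighbor voting.
import numpy as np
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)
W = rng.normal(size=(100, 32))            # placeholder "learned" embedding matrix

def embed(X):
    return X @ W

X_train = rng.normal(size=(500, 100))
Y_train = rng.random(size=(500, 1000)) < 0.01   # sparse multi-label matrix

index = NearestNeighbors(n_neighbors=10).fit(embed(X_train))

def predict_scores(x):
    _, idx = index.kneighbors(embed(x.reshape(1, -1)))
    return Y_train[idx[0]].mean(axis=0)   # label scores = neighbor vote fractions

print(predict_scores(rng.normal(size=100)).argsort()[-5:])  # top-5 label ids
```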

  • Single Image Haze Removal Using Hazy Particle Maps

    Geun-Jun KIM  Seungmin LEE  Bongsoon KANG  

     
    LETTER-Image
    Vol: E101-A No:11  Page(s): 1999-2002

    Haze with various properties spreads widely across flat areas with depth continuities and corner areas with depth discontinuities. Removing haze from a single hazy image is difficult due to its ill-posed nature. To solve this problem, this study proposes a modified hybrid median filter that applies a median filter to preserve the edges of flat areas and a hybrid median filter to preserve corners at depth discontinuities. The recovered scene radiance, obtained by removing hazy particles, restores image visibility using adaptive nonlinear curves for dynamic range expansion. Through comparative studies and quantitative evaluations, this study shows that the proposed method achieves similar or better results than other state-of-the-art methods.
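
    A minimal sketch of a plain 3x3 hybrid median filter (the median of the cross-neighborhood median, the diagonal-neighborhood median, and the center pixel); the paper's modified filter and the surrounding dehazing pipeline are not reproduced.

```python
# Hedged sketch of a classic 3x3 hybrid median filter.
import numpy as np

def hybrid_median_3x3(img):
    out = img.copy().astype(float)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            cross = [img[i-1, j], img[i+1, j], img[i, j-1], img[i, j+1], img[i, j]]
            diag  = [img[i-1, j-1], img[i-1, j+1], img[i+1, j-1], img[i+1, j+1], img[i, j]]
            out[i, j] = np.median([np.median(cross), np.median(diag), img[i, j]])
    return out

noisy = np.random.default_rng(0).integers(0, 256, size=(8, 8))
print(hybrid_median_3x3(noisy))
```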

  • Restricted Access Window Based Hidden Node Problem Mitigating Algorithm in IEEE 802.11ah Networks

    Ruoyu WANG  Min LIN  

     
    PAPER-Network
    Publicized: 2018/03/29  Vol: E101-B No:10  Page(s): 2162-2171

    IEEE 802.11ah is a specification being developed for sub-1GHz license-exempt operation. It is intended to provide Low Power Wide Area (LPWA) communication services and to support Internet of Things (IoT) features such as large-scale networks and an extended transmission range. However, these features also make 802.11ah networks highly susceptible to channel contention and the hidden node problem (HNP). To address these problems, the 11ah Task Group proposed a Restricted Access Window (RAW) mechanism. It shows outstanding performance in alleviating channel contention, but its effect on solving the HNP is unsatisfactory. In this paper, we propose a simple and effective hidden node grouping algorithm (HNGA) based on the IEEE 802.11ah RAW. The algorithm collects hidden node information by taking advantage of the 802.11 association process and then performs two-stage uniform grouping to prevent hidden node collisions (HNCs). The performance of the proposed algorithm is evaluated in comparison with other existing schemes in a hidden node situation. The results show that our algorithm eliminates most of the hidden node pairs inside a RAW group with a low overhead penalty, thereby improving network performance. Moreover, the algorithm is immune to HNCs caused by cross-slot-boundary transmissions.
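
    A toy sketch of the grouping goal stated above: assign stations to RAW groups so that hidden-node pairs do not share a group while keeping group sizes roughly uniform. The greedy rule below is an illustration of that goal, not the paper's two-stage HNGA.

```python
# Hedged sketch: greedily separate hidden-node pairs across RAW groups.
def group_stations(stations, hidden_pairs, num_groups):
    hidden = {s: set() for s in stations}
    for a, b in hidden_pairs:
        hidden[a].add(b)
        hidden[b].add(a)
    groups = [set() for _ in range(num_groups)]
    for s in stations:
        # Prefer the smallest group containing none of s's hidden peers.
        candidates = [g for g in groups if not (g & hidden[s])] or groups
        min(candidates, key=len).add(s)
    return groups

stations = list(range(8))
hidden_pairs = [(0, 1), (2, 3), (4, 5), (0, 6)]
print(group_stations(stations, hidden_pairs, num_groups=3))
```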

  • Low Bit-Rate Compression Image Restoration through Subspace Joint Regression Learning

    Zongliang GAN  

     
    LETTER-Image Processing and Video Processing
    Publicized: 2018/06/28  Vol: E101-D No:10  Page(s): 2539-2542

    In this letter, an effective low bit-rate image restoration method is proposed, in which image denoising and subspace regression learning are combined. The proposed framework has two parts: image main-structure estimation by classical NLM denoising and texture-component prediction by subspace joint regression learning. The local regression functions are learned from denoised patches to original patches in each subspace, where the corresponding compressed image patches are employed to generate anchoring points by a dictionary learning approach. Moreover, we extend Extreme Support Vector Regression (ESVR) to multi-variable nonlinear regression to obtain more robust results. Experimental results demonstrate that the proposed method achieves favorable performance compared with other leading methods.

  • Recovery Performance of IHT and HTP Algorithms under General Perturbations

    Xiaobo ZHANG  Wenbo XU  Yupeng CUI  Jiaru LIN  

     
    LETTER-Digital Signal Processing
    Vol: E101-A No:10  Page(s): 1698-1702

    In compressed sensing, most previous research has studied the recovery performance of a sparse signal x based on the acquisition model y=Φx+n, where n denotes the noise vector. There are also related studies for the general perturbation environment, i.e., y=(Φ+E)x+n, where E is the measurement perturbation. The IHT and HTP algorithms are classical algorithms for sparse signal reconstruction in compressed sensing. Under general perturbations, this paper derives the required sufficient conditions and the error bounds of the IHT and HTP algorithms.
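
    A minimal numpy sketch of iterative hard thresholding (IHT) for the unperturbed model y=Φx+n, for reference alongside the analysis above; the step size and iteration count are illustrative, and the perturbed-model error bounds are not reproduced.

```python
# Hedged sketch of IHT: x <- H_s(x + step * Phi^T (y - Phi x)).
import numpy as np

def hard_threshold(x, s):
    out = np.zeros_like(x)
    keep = np.argsort(np.abs(x))[-s:]     # indices of the s largest magnitudes
    out[keep] = x[keep]
    return out

def iht(y, Phi, s, n_iter=300):
    step = 1.0 / np.linalg.norm(Phi, 2) ** 2   # conservative step size
    x = np.zeros(Phi.shape[1])
    for _ in range(n_iter):
        x = hard_threshold(x + step * Phi.T @ (y - Phi @ x), s)
    return x

rng = np.random.default_rng(0)
Phi = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 62]] = [1.0, -2.0, 0.5]
y = Phi @ x_true + 0.01 * rng.normal(size=40)
print(np.round(iht(y, Phi, s=3)[[3, 17, 62]], 2))
```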

  • Deep Reinforcement Learning with Sarsa and Q-Learning: A Hybrid Approach

    Zhi-xiong XU  Lei CAO  Xi-liang CHEN  Chen-xi LI  Yong-liang ZHANG  Jun LAI  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2018/05/22  Vol: E101-D No:9  Page(s): 2315-2322

    The commonly used Deep Q Network (DQN) is known to overestimate action values under certain conditions, and it has been proved that such overestimations harm performance and may cause instability and divergence of learning. In this paper, we present the Deep Sarsa and Q Networks (DSQN) algorithm, which can be considered an enhancement of the Deep Q Networks algorithm. First, the DSQN algorithm takes advantage of the experience replay and target network techniques in Deep Q Networks to improve the stability of the neural networks. Second, a double estimator is utilized for Q-learning to reduce overestimations. In particular, we introduce Sarsa learning into Deep Q Networks to suppress overestimations further. Finally, the DSQN algorithm is evaluated on the cart-pole balancing, mountain car, and lunar lander control tasks from the OpenAI Gym. The empirical results show that the proposed method leads to reduced overestimations, a more stable learning process, and improved performance.
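
    A toy sketch of one way to blend a Sarsa backup with a double-Q backup to temper overestimation, in the spirit of the idea above; the mixing rule and parameter values are illustrative assumptions, not the exact DSQN update.

```python
# Hedged sketch: convex combination of Sarsa and double-Q backup values.
import numpy as np

def mixed_target(reward, q_target_next, q_online_next, action_next, done,
                 gamma=0.99, beta=0.5):
    """Blended backup value for a single transition."""
    if done:
        return reward
    sarsa_backup = q_target_next[action_next]          # value of the action actually taken
    greedy_a = int(np.argmax(q_online_next))           # action chosen by the online net
    double_q_backup = q_target_next[greedy_a]          # evaluated by the target net
    return reward + gamma * (beta * sarsa_backup + (1 - beta) * double_q_backup)

q_t = np.array([0.2, 1.0, 0.4])   # target-network Q-values at the next state
q_o = np.array([0.3, 0.8, 0.9])   # online-network Q-values at the next state
print(mixed_target(reward=1.0, q_target_next=q_t, q_online_next=q_o,
                   action_next=1, done=False))
```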

  • ECG Delineation with Randomly Selected Wavelet Feature and Random Forest Classifier

    Dapeng FU  Zhourui XIA  Pengfei GAO  Haiqing WANG  Jianping LIN  Li SUN  

     
    PAPER-Pattern Recognition
    Publicized: 2018/05/09  Vol: E101-D No:8  Page(s): 2082-2091

    Objective: Detection of Electrocardiogram (ECG) characteristic points can provide critical diagnostic information about heart diseases. We proposed a novel feature extraction and machine learning scheme for automatic detection of ECG characteristic points. Methods: A new feature, termed the randomly selected wavelet transform (RSWT) feature, was devised to represent ECG characteristic points. A random forest classifier was adapted to infer the positions of the characteristic points with high sensitivity and precision. Results: Compared with the results of other state-of-the-art algorithms on the QT database, the detection results of our RSWT scheme showed comparable performance (similar sensitivity, precision, and detection error for each characteristic point). Testing RSWT on the MIT-BIH database also demonstrated promising cross-database performance. Conclusion: A novel RSWT feature and a new detection scheme were developed for ECG characteristic points. The RSWT proved to be a robust and trustworthy feature for representing ECG morphologies. Significance: Based on the effectiveness of the proposed RSWT feature, we presented a novel machine-learning-based scheme that automatically detects all types of ECG characteristic points at once. Furthermore, our algorithm achieved better performance than other reported machine-learning-based methods.
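
    A minimal sketch of "randomly selected wavelet transform" features feeding a random forest; the wavelet, window length, and random selection rule below are illustrative choices, not the paper's exact configuration.

```python
# Hedged sketch: random subset of wavelet coefficients as features for a random forest.
import numpy as np
import pywt
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
SELECTED = None  # coefficient indices reused for every window

def rswt_features(window, n_selected=32):
    global SELECTED
    coeffs = np.concatenate(pywt.wavedec(window, "db4", level=3))
    if SELECTED is None:
        SELECTED = rng.choice(len(coeffs), size=n_selected, replace=False)
    return coeffs[SELECTED]

# Toy training data: short "ECG windows" labeled as containing a fiducial point or not.
X = np.stack([rswt_features(rng.normal(size=250)) for _ in range(200)])
y = rng.integers(0, 2, size=200)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```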

  • Forecasting Service Performance on the Basis of Temporal Information by the Conditional Restricted Boltzmann Machine

    Jiali YOU  Hanxing XUE  Yu ZHUO  Xin ZHANG  Jinlin WANG  

     
    PAPER-Network
    Publicized: 2017/11/10  Vol: E101-B No:5  Page(s): 1210-1221

    Predicting the service performance of Internet applications is important for service selection, especially for video services. In order to design a predictor for forecasting video service performance in third-party applications, two well-known service providers in China, Iqiyi and Letv, are monitored and analyzed. The study highlights that the performance measured over the observation period is time-series data with strong autocorrelation, which means it is predictable. In order to incorporate the temporal information and map the measured data to a proper feature space, the authors propose a predictor based on a Conditional Restricted Boltzmann Machine (CRBM), which can capture the potential temporal relationships in the historical information. Meanwhile, the measured data from different sources are combined to enhance the training process, which enlarges the training set and avoids overfitting. Experiments show that combining the measured results from different resolutions of a video raises prediction performance, and that the CRBM algorithm shows better prediction ability and more stable performance than the baseline algorithms.

  • Person Identification Using Pose-Based Hough Forests from Skeletal Action Sequence

    Ju Yong CHANG  Ji Young PARK  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2017/12/04  Vol: E101-D No:3  Page(s): 767-777

    The present study considers an action-based person identification problem, in which an input action sequence consists of 3D skeletal data from multiple frames. Unlike previous approaches, the type of action is not pre-defined in this work, which requires the subject classifier to possess cross-action generalization capabilities. To achieve that, we present a novel pose-based Hough forest framework, in which each per-frame pose feature casts a probabilistic vote to the Hough space. Pose distribution is estimated from training data and then used to compute the reliability of the vote to deal with the unseen poses in the test action sequence. Experimental results with various real datasets demonstrate that the proposed method provides effective person identification results especially for the challenging cross-action person identification setting.

  • An Efficient Content Search Method Based on Local Link Replacement in Unstructured Peer-to-Peer Networks

    Nagao OGINO  Takeshi KITAHARA  

     
    PAPER-Network
    Publicized: 2017/09/14  Vol: E101-B No:3  Page(s): 740-749

    Peer-to-peer overlay networks can easily achieve a large-scale content sharing system on the Internet. Although unstructured peer-to-peer networks are suitable for finding entire partial-match content, flooding-based search is an inefficient way to obtain target content. When the shared content is semantically specified by a great number of attributes, it is difficult to derive the semantic similarity of peers beforehand. This means that content search methods relying on interest-based locality are more advantageous than those based on the semantic similarity of peers. Existing search methods that exploit interest-based locality organize multiple peer groups, in each of which peers with common interests are densely connected using short-cut links. However, content searches among multiple peer groups are still inefficient when the number of incident links at each peer is limited due to the capacity of the peer. This paper proposes a novel content search method that exploits interest-based locality. The proposed method can organize an efficient peer-to-peer network similar to the semantic small-world random graph, which can be organized by the existing methods based on the semantic similarity of peers. In the proposed method, topology transformation based on local link replacement maintains the numbers of incident links at all the peers. Simulation results confirm that the proposed method can achieve a significantly higher ratio of obtainable partial-match content than existing methods that organize peer groups.

  • An FPGA Realization of a Random Forest with k-Means Clustering Using a High-Level Synthesis Design

    Akira JINGUJI  Shimpei SATO  Hiroki NAKAHARA  

     
    PAPER-Emerging Applications
    Publicized: 2017/11/17  Vol: E101-D No:2  Page(s): 354-362

    A random forest (RF) is an ensemble machine learning algorithm used for classification and regression. It consists of multiple decision trees that are built from randomly sampled data. The RF offers simple and fast learning and identification compared with other machine learning algorithms, and it is widely applied to various recognition systems. Since each tree requires an unbalanced traversal and communication among all the trees is necessary, the random forest is not well suited to SIMD architectures such as GPUs. Although FPGA-based accelerators have been proposed, such implementations were based on HDL design and thus required a longer design time than software-based realizations. In previous work, we presented a high-level synthesis design of the RF including a fully pipelined architecture and all-to-all communication. In this paper, to further reduce the amount of hardware, we use k-means clustering to share the comparators of the branch nodes in the decision trees. Also, we develop the krange tool flow, which generates the bitstream from a small number of hyperparameters. Since the proposed tool flow is based on high-level synthesis design, we can obtain a high-performance RF with a short design time compared with conventional HDL design. We implemented the RF on the Xilinx Inc. ZC702 evaluation board. Compared with the CPU (Intel Xeon E5607) and the GPU (NVIDIA GeForce Titan) implementations, the FPGA realization was 8.4 times faster than the CPU and 62.8 times faster than the GPU. In terms of power consumption efficiency, the FPGA realization was 7.8 times better than the CPU and 385.9 times better than the GPU.
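
    A small software sketch of the comparator-sharing idea above: cluster the branch-node thresholds of a trained tree with k-means and snap each threshold to its nearest centroid, so a few comparators can be shared in hardware. The RF tool flow and FPGA mapping are not reproduced.

```python
# Hedged sketch: quantize decision-tree thresholds to k shared values via k-means.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(random_state=0).fit(X, y)

internal = tree.tree_.children_left != -1                 # internal (branch) nodes
thresholds = tree.tree_.threshold[internal].reshape(-1, 1)

k = 4                                                      # number of shared comparators
km = KMeans(n_clusters=k, n_init=10, random_state=0).fit(thresholds)
shared = km.cluster_centers_[km.labels_].ravel()           # quantized thresholds

print("original:", np.round(thresholds.ravel(), 2))
print("shared:  ", np.round(shared, 2))
```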

  • Scalable and Parameterized Architecture for Efficient Stream Mining

    Li ZHANG  Dawei LI  Xuecheng ZOU  Yu HU  Xiaowei XU  

     
    PAPER-Systems and Control
    Vol: E101-A No:1  Page(s): 219-231

    With billions of sensor-based devices added every year, there is an urgent need for stream mining of the massive data streams these devices produce. Cloud computing, with its powerful computational capabilities, is a competitive choice for this task, but it sacrifices real-time responsiveness and energy efficiency. Application-specific integrated circuits (ASICs) offer high performance and efficiency but are not cost-effective for diverse applications, while general-purpose microcontrollers offer only low performance. Therefore, performing stream mining on such low-cost devices with scalability and efficiency is a challenge. In this paper, we introduce an FPGA-based scalable and parameterized architecture for stream mining. In particular, Dynamic Time Warping (DTW) based k-Nearest Neighbor (kNN) is adopted in the architecture. Two processing element (PE) rings for DTW and kNN are designed to achieve parameterization and scalability with high performance. We implement the proposed architecture on an FPGA and perform a comprehensive performance evaluation. The experimental results indicate that, compared to a multi-core CPU-based implementation, our approach achieves over one order of magnitude speedup and three orders of magnitude better energy efficiency.
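
    A plain software reference for the kernel the PE rings accelerate: DTW distance computed by dynamic programming and a 1-nearest-neighbor decision; the FPGA architecture itself is not modeled here.

```python
# Hedged sketch of DTW-based nearest-neighbor classification (1-NN for brevity).
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW on 1-D sequences."""
    D = np.full((len(a) + 1, len(b) + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[len(a), len(b)]

def knn_dtw_predict(query, templates, labels):
    dists = [dtw_distance(query, t) for t in templates]
    return labels[int(np.argmin(dists))]

templates = [np.sin(np.linspace(0, 6, 50)), np.linspace(-1, 1, 50)]
labels = ["sine", "ramp"]
print(knn_dtw_predict(np.sin(np.linspace(0, 6, 45)) + 0.05, templates, labels))
```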

  • Reliable Transmission Parameter Signalling Detection for DTMB-A Standard

    Jingjing LIU  Chao ZHANG  Changyong PAN  

     
    PAPER-Terrestrial Wireless Communication/Broadcasting Technologies
    Publicized: 2017/06/07  Vol: E100-B No:12  Page(s): 2156-2163

    In the advanced digital terrestrial/television multimedia broadcasting (DTMB-A) standard, a preamble based on distance detection (PBDD) is adopted for robust synchronization and signalling transmission. However, the traditional signalling detection method completely fails under severely frequency-selective channels with ultra-long-delay 0 dB echoes. In this paper, a novel transmission parameter signalling detection method is proposed for the preamble in DTMB-A. Compared with the conventional signalling detection method, the proposed scheme works much better when the maximum channel delay is close to the length of the guard interval (GI). Both theoretical analyses and simulation results demonstrate that the proposed algorithm significantly improves the accuracy and robustness of detecting the transmitted signalling.

  • Error Recovery for Massive MIMO Signal Detection via Reconstruction of Discrete-Valued Sparse Vector

    Ryo HAYAKAWA  Kazunori HAYASHI  

     
    PAPER-Communication Theory and Systems
    Vol: E100-A No:12  Page(s): 2671-2679

    In this paper, we propose a novel error recovery method for massive multiple-input multiple-output (MIMO) signal detection, which improves an estimate of transmitted signals by taking advantage of the sparsity and the discreteness of the error signal. We firstly formulate the error recovery problem as the maximum a posteriori (MAP) estimation and then relax the MAP estimation into a convex optimization problem, which reconstructs a discrete-valued sparse vector from its linear measurements. By using the restricted isometry property (RIP), we also provide a theoretical upper bound of the size of the reconstruction error with the optimization problem. Simulation results show that the proposed error recovery method has better bit error rate (BER) performance than that of the conventional error recovery method.
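
    A toy sketch of recovering a discrete-valued sparse vector by convex relaxation, in the spirit of the formulation above: a weighted sum of absolute values pulls entries toward the candidate values {-1, 0, +1}, with the zero value weighted most heavily to encode sparsity. The weights, noise bound, and use of cvxpy are illustrative assumptions, not the paper's exact optimization problem.

```python
# Hedged sketch: weighted sum-of-absolute-values relaxation for a {-1, 0, +1}-valued
# sparse error vector observed through a linear measurement.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(0)
n, m = 50, 30
A = rng.normal(size=(m, n)) / np.sqrt(m)
e_true = np.zeros(n)
e_true[rng.choice(n, 4, replace=False)] = rng.choice([-1.0, 1.0], 4)
y = A @ e_true + 0.01 * rng.normal(size=m)

x = cp.Variable(n)
objective = cp.Minimize(0.9 * cp.norm1(x) +
                        0.05 * cp.norm1(x - 1) +
                        0.05 * cp.norm1(x + 1))
problem = cp.Problem(objective, [cp.norm(y - A @ x, 2) <= 0.05])
problem.solve()
print(np.round(x.value, 2)[np.abs(e_true) > 0])   # recovered values at true support
```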

  • Trojan-Net Feature Extraction and Its Application to Hardware-Trojan Detection for Gate-Level Netlists Using Random Forest

    Kento HASEGAWA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER
    Vol: E100-A No:12  Page(s): 2857-2868

    It has been reported that malicious third-party IC vendors often insert hardware Trojans into their IC products, and how to detect them is a critical concern in the IC design process. Machine-learning-based hardware-Trojan detection provides a strong solution to this problem. Hardware-Trojan infected nets (or Trojan nets) in ICs must have particular Trojan-net features, which differ from those of normal nets. In order to classify all the nets in a netlist designed by third-party vendors into Trojan nets and normal ones by machine learning, we have to extract effective Trojan-net features from Trojan nets. In this paper, we first propose 51 Trojan-net features that describe Trojan nets well. After that, we select random forest as one of the best candidate classifiers and optimize it for hardware-Trojan detection. Based on the importance values obtained from the optimized random forest classifier, we extract the best set of 11 Trojan-net features out of the 51 that effectively classifies nets into Trojan ones and normal ones, maximizing the F-measures. Using the 11 extracted Trojan-net features, our optimized random forest classifier achieved up to a 100% true positive rate as well as a 100% true negative rate on several Trust-HUB benchmarks, and it obtained an average F-measure of 79.3% and an accuracy of 99.2%, which are the best values among existing machine-learning-based hardware-Trojan detection methods.
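
    A minimal sketch of the importance-based feature selection step described above: train a random forest, rank features by impurity importance, and keep the top 11. The 51 actual net features and the Trust-HUB netlists are replaced by synthetic data here.

```python
# Hedged sketch: rank 51 synthetic "net features" by random-forest importance and keep 11.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n_nets, n_features = 2000, 51
X = rng.normal(size=(n_nets, n_features))
y = (X[:, 0] + 0.5 * X[:, 7] - X[:, 20] + 0.1 * rng.normal(size=n_nets)) > 1.5  # toy "Trojan" label

forest = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top11 = np.argsort(forest.feature_importances_)[::-1][:11]
print("selected feature indices:", sorted(top11.tolist()))

# Retrain using only the selected features.
reduced = RandomForestClassifier(n_estimators=200, random_state=0).fit(X[:, top11], y)
print("training accuracy with 11 features:", reduced.score(X[:, top11], y))
```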

Results 41-60 of 332