
Keyword Search Result

[Keyword] MPO (945 hits)

Showing 121-140 of 945 hits

  • Efficient Reusable Collections

    Davud MOHAMMADPUR  Ali MAHJUR  

     
    PAPER-Fundamentals of Information Systems
    Publicized: 2018/08/20   Vol: E101-D No:11   Page(s): 2710-2719

    The efficiency and flexibility of collections have a significant impact on the overall performance of applications. Current approaches to implementing collections have two main drawbacks: (i) they limit the efficiency of collections and (ii) they lack adequate support for collection composition. Consequently, when the efficiency and flexibility of collections are important, programmers must implement collections themselves, which sacrifices reusability. This article presents neoCollection, a novel approach to encapsulating collections. neoCollection has two distinguishing features: (i) it can be applied to data elements efficiently and flexibly, and (ii) collections can be composed efficiently and flexibly, a capability that existing approaches lack. To demonstrate its effectiveness, neoCollection is implemented as an extension to Java and C++.

  • Accelerating a Lloyd-Type k-Means Clustering Algorithm with Summable Lower Bounds in a Lower-Dimensional Space

    Kazuo AOYAMA  Kazumi SAITO  Tetsuo IKEDA  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2018/08/02   Vol: E101-D No:11   Page(s): 2773-2783

    This paper presents an efficient acceleration algorithm for Lloyd-type k-means clustering that is suitable for large-scale, high-dimensional data sets with potentially numerous classes. The algorithm employs a novel projection-based filter (PRJ) to avoid unnecessary distance calculations, achieving high speed while producing exactly the same results as the standard Lloyd's algorithm. The PRJ exploits a summable lower bound on a squared distance defined in a lower-dimensional space onto which the data points are projected. Whereas the lower bounds used in existing acceleration algorithms act only once as fixed filters, the summable lower bound can be tightened dynamically within each iteration by incrementally adding components in the lower-dimensional space. Experimental results on large-scale, high-dimensional real image data sets demonstrate that, for large values of k, the proposed algorithm runs at high speed and with low memory consumption compared with state-of-the-art algorithms.
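
    The core trick — skipping a full distance computation whenever a cheap lower bound evaluated in a projected space already exceeds the current best distance — can be illustrated with a minimal sketch. This is an illustration only, not the authors' PRJ filter: the axis-aligned projection and the parameter n_proj are assumptions.

    ```python
    # Minimal sketch of a projection-based lower-bound filter in the assignment step of
    # Lloyd's k-means (illustrative; not the paper's PRJ filter or its incremental bound).
    import numpy as np

    def assign_with_projection_filter(X, C, n_proj=8):
        """X: (n, d) data, C: (k, d) centroids. Returns the nearest-centroid index per point."""
        Xp, Cp = X[:, :n_proj], C[:, :n_proj]          # project onto the first n_proj axes
        labels = np.empty(len(X), dtype=int)
        for i, (x, xp) in enumerate(zip(X, Xp)):
            best, best_d = 0, np.sum((x - C[0]) ** 2)
            for j in range(1, len(C)):
                lb = np.sum((xp - Cp[j]) ** 2)          # lower bound: ||P(x)-P(c)||^2 <= ||x-c||^2
                if lb >= best_d:                        # bound already rules this centroid out
                    continue
                d = np.sum((x - C[j]) ** 2)             # full distance only when the filter fails
                if d < best_d:
                    best, best_d = j, d
            labels[i] = best
        return labels
    ```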

  • New Constructions of Zero-Difference Balanced Functions

    Zhibao LIN  Zhengqian LI  Pinhui KE  

     
    LETTER-Coding Theory
    Vol: E101-A No:10   Page(s): 1719-1723

    Zero-difference balanced (ZDB) functions, which have many applications in coding theory and sequence design, have received considerable attention in recent years. In this letter, based on two known classes of ZDB functions, a new class of ZDB functions defined on the group $(\mathbb{Z}_{2^e-1}\times\mathbb{Z}_n,+)$ is presented, where $e$ is a prime, $n=p_1^{m_1}p_2^{m_2}\cdots p_k^{m_k}$, and each $p_i$ is an odd prime satisfying $e\,|\,(p_i-1)$ for $1\le i\le k$. In the case of $\gcd(2^e-1,n)=1$, the newly constructed ZDB functions are cyclic.
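
    For reference, the commonly used definition of a ZDB function (not restated in the abstract; the letter should be consulted for the exact parameters of its construction) is the following:

    ```latex
    % Standard definition of an (n, m, lambda)-ZDB function (context only).
    % Let (A, +) be an abelian group with |A| = n and let B be a set with |B| = m.
    f \colon A \to B \text{ is } (n, m, \lambda)\text{-ZDB} \iff
    \bigl|\{\, x \in A : f(x + a) = f(x) \,\}\bigr| = \lambda
    \quad \text{for every } a \in A \setminus \{0\}.
    ```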

  • Finding Important People in a Video Using Deep Neural Networks with Conditional Random Fields

    Mayu OTANI  Atsushi NISHIDA  Yuta NAKASHIMA  Tomokazu SATO  Naokazu YOKOYA  

     
    PAPER-Image Recognition, Computer Vision
    Publicized: 2018/07/20   Vol: E101-D No:10   Page(s): 2509-2517

    Finding important regions is essential for applications such as content-aware video compression and video retargeting, which automatically crops a region in a video for small screens. Since people are one of the main subjects when taking a video, some methods for finding important regions use a visual attention model based on face/pedestrian detection to incorporate the knowledge that people are important. However, such methods usually do not distinguish important people from passers-by and bystanders, which results in false positives. In this paper, we propose a deep neural network (DNN)-based method that classifies each person as important or unimportant, given a video that contains multiple people in a single frame and is captured with a hand-held camera. Intuitively, the important/unimportant labels of people whose spatial motions are similar are highly correlated. Based on this assumption, we propose to boost the performance of our important/unimportant classification by using conditional random fields (CRFs) built upon the DNN, which can be trained in an end-to-end manner. Our experimental results show that our method successfully classifies important people and that the use of a DNN with CRFs improves the accuracy.
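
    For context, a pairwise CRF over per-person labels generally takes the form below; the exact unary and pairwise potentials are defined in the paper, so this is a generic sketch rather than the authors' formulation.

    ```latex
    % Generic pairwise CRF energy over per-person labels y_i (context only).
    E(\mathbf{y} \mid \mathbf{x}) \;=\; \sum_{i} \psi_u\!\left(y_i \mid \mathbf{x}\right)
    \;+\; \sum_{i<j} \psi_p\!\left(y_i, y_j \mid \mathbf{x}\right),
    \qquad y_i \in \{\text{important}, \text{unimportant}\},
    ```

    where the pairwise term would penalize assigning different labels to people whose spatial motions are similar.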

  • Designing Coded Aperture Camera Based on PCA and NMF for Light Field Acquisition

    Yusuke YAGI  Keita TAKAHASHI  Toshiaki FUJII  Toshiki SONODA  Hajime NAGAHARA  

     
    PAPER
    Publicized: 2018/06/20   Vol: E101-D No:9   Page(s): 2190-2200

    A light field, which is often understood as a set of dense multi-view images, has been utilized in various 2D/3D applications. Efficient light field acquisition using a coded aperture camera is the target problem considered in this paper. Specifically, the entire light field, which consists of many images, should be reconstructed from only a few images captured through different aperture patterns. In previous work, this problem has often been discussed in the context of compressed sensing (CS), where sparse representations on a pre-trained dictionary or basis are explored to reconstruct the light field. In contrast, we formulated this problem from the perspective of principal component analysis (PCA) and non-negative matrix factorization (NMF), where only a small number of basis vectors are selected in advance based on an analysis of the training dataset. From this formulation, we derived optimal non-negative aperture patterns and a straightforward reconstruction algorithm. Even though our method is based on conventional techniques, it has proven to be more accurate and much faster than a state-of-the-art CS-based method.
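
    The linear-reconstruction idea can be sketched as follows: learn a small non-negative basis over the angular (view) dimension with NMF, use the normalized basis columns as aperture patterns, and recover the light field by solving a small linear system per pixel. This is an illustration under simplifying assumptions (the shapes, names such as n_shots, and the pattern choice are not taken from the paper).

    ```python
    # Hedged sketch of NMF-based aperture patterns and linear light-field reconstruction.
    import numpy as np
    from sklearn.decomposition import NMF

    def learn_basis(L_train, n_shots):
        """L_train: (n_views, n_pixels), non-negative. Returns a basis B of shape (n_views, n_shots)."""
        model = NMF(n_components=n_shots, init='nndsvda', max_iter=500)
        return model.fit_transform(L_train)

    def capture_and_reconstruct(L, B):
        """Simulate coded captures with patterns derived from B, then reconstruct L."""
        A = (B / B.max(axis=0)).T                # (n_shots, n_views): transmittances in [0, 1]
        y = A @ L                                # coded images: (n_shots, n_pixels)
        coeff = np.linalg.solve(A @ B, y)        # per-pixel coefficients in the learned basis
        return B @ coeff                         # reconstructed light field: (n_views, n_pixels)
    ```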

  • Parameterized Algorithms to Compute Ising Partition Function

    Hidefumi HIRAISHI  Hiroshi IMAI  Yoichi IWATA  Bingkai LIN  

     
    PAPER
    Vol: E101-A No:9   Page(s): 1398-1403

    Computing the partition function of the Ising model on a graph has been investigated from both the computer science and statistical physics sides, producing fertile results on P cases, FPTAS/FPRAS cases, inapproximability, and intractability. Recently, measurement-based quantum computing as well as quantum annealing have opened up another bridge between the two fields by relating a tree tensor network representing a quantum graph state to a rank decomposition of the graph. This paper makes this bridge wider in both directions. An $O^*(2^{\frac{\omega}{2}\,bw(G)})$-time algorithm is developed for the partition function on an n-vertex graph G with a branch decomposition of width bw(G), where $O^*$ ignores a polynomial factor in n and ω is the matrix multiplication exponent, which is less than 2.37287. Related algorithms running in $O^*(4^{rw(\tilde{G})})$ time for the tree tensor network, of interest in quantum computation, are given for a rank decomposition of a subdivided graph $\tilde{G}$ of width $rw(\tilde{G})$. These algorithms are parameter-exponential, i.e., $O^*(c^p)$ for a constant c and parameter p; no such algorithm is known for the more general problem of computing the Tutte polynomial in terms of bw(G) (the current best time is $O^*(\min\{2^n, bw(G)^{O(bw(G))}\})$), with a negative result in terms of the clique-width, which is related to the rank-width, under ETH.
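
    For context, the quantity being computed is the standard Ising partition function; the paper's exact parameterization of couplings and fields may differ from this generic form.

    ```latex
    % Ising partition function on a graph G = (V, E) (generic form, for context only).
    Z(G) \;=\; \sum_{\sigma \in \{-1, +1\}^{V}}
    \exp\!\Bigl(\beta \sum_{\{u, v\} \in E} J_{uv}\, \sigma_u \sigma_v \Bigr).
    ```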

  • Sparse Graph Based Deep Learning Networks for Face Recognition

    Renjie WU  Sei-ichiro KAMATA  

     
    PAPER
    Publicized: 2018/06/20   Vol: E101-D No:9   Page(s): 2209-2219

    In recent years, deep learning based approaches have substantially improved the performance of face recognition. Most existing deep learning techniques work well but neglect effective utilization of face correlation information. The resulting performance loss is noteworthy for personal appearance variations caused by factors such as illumination, pose, occlusion, and misalignment. We believe that face correlation information should be introduced to address this performance problem originating from intra-personal variations. Recently, graph deep learning approaches have emerged for representing structured graph data. A graph is a powerful tool for representing the complex information of a face image. In this paper, we survey recent research related to the graph structure of convolutional neural networks and attempt to devise a definition of the graph structure implicit in compressed sensing and deep learning. We focus on two properties of our graph: sparsity and depth. Sparsity is advantageous because sparse features are more likely to be linearly separable and more robust; depth means that the method is a multi-resolution, multi-channel learning process. We believe that a sparse graph based deep neural network can more effectively make similar objects attract each other and different objects repel each other, akin to improved sparse multi-resolution clustering. Based on this concept, we propose a sparse graph representation based on the face correlation information that is embedded via sparse reconstruction and deep learning within an irregular domain. The resulting classification is remarkably robust. The proposed method achieves high recognition rates of 99.61% (94.67%) on the benchmark LFW (YTF) facial evaluation database.

  • Improving Range Resolution by Triangular Decomposition for Small UAV Radar Altimeters

    Di BAI  Zhenghai WANG  Mao TIAN  Xiaoli CHEN  

     
    PAPER-Sensing
    Publicized: 2018/02/20   Vol: E101-B No:8   Page(s): 1933-1939

    A triangular decomposition-based multipath super-resolution method is proposed to improve the range resolution of small unmanned aerial vehicle (UAV) radar altimeters that use a single channel with a continuous direct-spread waveform. In engineering applications of small UAV radar altimeters, multipath scenarios are quite common. When conventional matched filtering is used in these environments, it is difficult to identify multiple targets in the same range cell because of the overlap between echoes. To improve performance, we decompose the overlapped peaks yielded by matched filtering into a series of basic triangular waveforms so as to identify the various targets from different time-shifted correlations of the pseudo-noise (PN) sequence. Shifting the time scale enables targets in the same range resolution unit to be identified. Both theoretical analysis and experiments show that the range resolution can be improved significantly, outperforming traditional matched filtering.
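
    Because the autocorrelation of a PN sequence is approximately triangular, the overlapped matched-filter output can be decomposed onto time-shifted triangular templates; peaks in the recovered amplitudes then indicate the individual echoes. The sketch below is an illustration only (the template width chip_samples and the non-negative least-squares solver are assumptions, not the paper's exact procedure).

    ```python
    # Hedged sketch: decompose a matched-filter magnitude profile onto shifted triangles.
    import numpy as np
    from scipy.optimize import nnls

    def triangular_decomposition(mf_output, chip_samples):
        """mf_output: 1-D magnitude of the matched-filter output."""
        n = len(mf_output)
        tri = np.bartlett(2 * chip_samples + 1)              # unit triangle, two chips wide
        A = np.zeros((n, n))                                  # column d = triangle centered at delay d
        for d in range(n):
            lo, hi = max(0, d - chip_samples), min(n, d + chip_samples + 1)
            A[lo:hi, d] = tri[lo - d + chip_samples : hi - d + chip_samples]
        amplitudes, _ = nnls(A, mf_output)                    # non-negative amplitude per delay
        return amplitudes                                     # significant entries = resolved echoes
    ```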

  • Decentralized Event-Triggered Control of Composite Systems Using M-Matrices

    Kenichi FUKUDA  Toshimitsu USHIO  

     
    PAPER-Systems and Control
    Vol: E101-A No:8   Page(s): 1156-1161

    A composite system consists of many subsystems that are interconnected with one another. For such a system, we generally utilize decentralized control, where each subsystem is controlled by a local controller. Event-triggered control, on the other hand, is a useful approach for reducing the amount of communication between a controller and a plant. In event-triggered control, an event-triggering mechanism (ETM) monitors the plant and determines when to transmit data. In this paper, we propose a design of ETMs for the decentralized event-triggered control of nonlinear composite systems using an M-matrix. We consider a composite system in which each subsystem has its own ETM that monitors the local state of that subsystem. Each ETM is designed so that the composite system is stabilized. Moreover, we deal with the case of linear systems. Finally, we perform simulations to show that the proposed triggering rules are useful for decentralized control.
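
    A common form of such a triggering rule transmits the local state only when the error since the last transmission grows beyond a fraction of the current state norm. The sketch below illustrates this generic rule for a single subsystem; the dynamics f, gain K, and threshold sigma are illustrative assumptions, not the M-matrix-based design of the paper.

    ```python
    # Hedged sketch: relative-threshold event-triggered control loop for one subsystem.
    import numpy as np

    def simulate_event_triggered(f, K, x0, sigma=0.3, dt=1e-3, steps=10_000):
        x = np.array(x0, dtype=float)
        x_sent = x.copy()                       # last state transmitted to the local controller
        transmissions = 0
        for _ in range(steps):
            e = x - x_sent                      # measurement error since the last transmission
            if np.linalg.norm(e) >= sigma * np.linalg.norm(x):   # event condition
                x_sent = x.copy()               # transmit: the controller gets the fresh state
                transmissions += 1
            u = -K @ x_sent                     # control input computed from the held state
            x = x + dt * f(x, u)                # one Euler step of the subsystem dynamics
        return x, transmissions
    ```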

  • A Reactive Management System for Reliable Power Supply in a Building Microgrid with Vehicle-to-Grid Interaction

    Shoko KIMURA  Yoshihiko SUSUKI  Atsushi ISHIGAME  

     
    PAPER-Systems and Control
    Vol: E101-A No:8   Page(s): 1172-1184

    We address a BEMS (Building Energy Management System) that guarantees the reliability of electric-power supply in dynamic, uncertain environments. The building microgrid targeted by the BEMS has multiple distributed power sources, including a photovoltaic power system and an electric vehicle (EV). The EV can be regarded as an autonomously moving battery, since it primarily serves as a means of transportation, and is hence a source of dynamic uncertainty in the building microgrid. The main objective of the BEMS synthesized in this paper is to guarantee continuous power supply to the most critical load in the building microgrid and to supply power to the other loads according to a ranking of load importance. We synthesize the BEMS as a reactive control system that monitors changes in the dynamic, uncertain environment of the microgrid, including the departure and arrival of the EV, and determines a route of power supply to the most critical load. We also conduct numerical experiments of the reactive BEMS using models of the power flows in the building and of the charging states of the batteries. The experiments incorporate data measured in a practical office building and an EMS demonstration project in Osaka, Japan. We show that the BEMS extends the duration of continuous power supply to the most critical load.

  • Matrix Decomposition of Precoder Matrix in Orthogonal Precoding for Sidelobe Suppression of OFDM Signals

    Hikaru KAWASAKI  Masaya OHTA  Katsumi YAMASHITA  

     
    PAPER-Wireless Communication Technologies
    Publicized: 2018/01/18   Vol: E101-B No:7   Page(s): 1716-1722

    The spectrum sculpting precoder (SSP) is a precoding scheme for sidelobe suppression of orthogonal frequency division multiplexing (OFDM) signals. It can form deep spectral notches at chosen frequencies and is suitable for cognitive radio systems. However, the SSP degrades the error rate as the number of notched frequencies increases. Orthogonal precoding, which improves on the SSP, can achieve both spectrum notching and the ideal error rate, but its computational complexity is very high because the precoder matrix is large. This paper proposes an effective and equivalent decomposition of the precoder matrix based on QR decomposition in order to reduce the computational complexity of orthogonal precoding. Numerical experiments show that the proposed method can drastically reduce the computational complexity with no performance degradation.
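
    The structure of an orthogonal notching precoder can be sketched as follows: its columns form an orthonormal basis of the null space of the notch-constraint matrix, which can be obtained from a QR factorization. This sketch only illustrates that structure under a simplified spectrum model; it is not the complexity-reducing decomposition proposed in the paper.

    ```python
    # Hedged sketch: orthogonal precoder whose output has spectral nulls at chosen frequencies.
    import numpy as np

    def notch_precoder(n_subcarriers, notch_freqs):
        """notch_freqs: normalized frequencies where the (simplified) spectrum must be nulled."""
        k = np.arange(n_subcarriers)
        A = np.exp(-2j * np.pi * np.outer(notch_freqs, k))      # (r, N) constraint matrix
        Q, _ = np.linalg.qr(A.conj().T, mode='complete')        # full QR of A^H
        return Q[:, A.shape[0]:]                                # (N, N - r) orthonormal null-space basis

    # Usage: x = G @ d maps N - r data symbols onto N subcarriers; A @ x = 0 for any d.
    ```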

  • Infants' Pain Recognition Based on Facial Expression: Dynamic Hybrid Descriptions

    Ruicong ZHI  Ghada ZAMZMI  Dmitry GOLDGOF  Terri ASHMEADE  Tingting LI  Yu SUN  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2018/04/20   Vol: E101-D No:7   Page(s): 1860-1869

    The accurate assessment of infants' pain is important for understanding their medical conditions and developing suitable treatment. Pediatric studies have reported that inadequate treatment of infants' pain can cause various neuroanatomical and psychological problems. The fact that infants cannot communicate verbally has motivated increasing interest in developing automatic pain assessment systems that provide continuous and accurate assessment. In this paper, we propose a new set of pain facial activity features to describe infants' facial expressions of pain. Both dynamic facial texture features and dynamic geometric features are extracted from video sequences and utilized to classify infants' facial expressions as pain or no pain. For the dynamic analysis of facial expression, we construct a spatiotemporal-domain representation for the texture features and a time series representation (i.e., time series of frame-level features) for the geometric features. Multiple facial features are combined through both feature fusion and decision fusion schemes to evaluate their effectiveness in infants' pain assessment. Experiments are conducted on videos acquired from NICU infants, and the best accuracy of the proposed pain assessment approaches is 95.6%. Moreover, we find that although decision fusion does not achieve higher accuracy than feature fusion, its false negative rate (6.2%) is much lower than that of feature fusion (25%).

  • Uplink Multiuser MIMO Access with Probe Packets in Distributed Wireless Networks

    Satoshi DENNO  Yusuke MURAKAMI  

     
    PAPER-Wireless Communication Technologies
    Publicized: 2017/12/15   Vol: E101-B No:6   Page(s): 1443-1452

    This paper proposes a novel access technique that enables uplink multiuser multiple-input multiple-output (MU-MIMO) access with small overhead in distributed wireless networks. The proposed technique introduces a probe packet that is sent to all terminals so they can judge whether or not they have the right to transmit their signals. The probe packet guarantees high-quality MU-MIMO signal transmission when a minimum mean square error (MMSE) filter is applied at the access point, which results in high frequency utilization efficiency. Computer simulations reveal that the proposed access achieves more than twice the capacity of traditional carrier sense multiple access/collision avoidance (CSMA/CA) with single-user MIMO when a 5-antenna access point is surrounded by 2-antenna terminals.
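
    For context, the linear MMSE receive filter referred to above has the standard form sketched below; the paper's contribution is the probe-packet access protocol rather than the filter itself, and unit-power symbols with known noise variance are assumed.

    ```python
    # Standard linear MMSE detection for uplink MU-MIMO (context sketch).
    import numpy as np

    def mmse_detect(H, y, noise_var):
        """H: (n_rx, n_streams) uplink channel, y: (n_rx,) received vector."""
        n_streams = H.shape[1]
        G = H.conj().T @ H + noise_var * np.eye(n_streams)
        W = np.linalg.solve(G, H.conj().T)       # W = (H^H H + sigma^2 I)^{-1} H^H
        return W @ y                             # soft estimates of the transmitted streams
    ```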

  • Extreme Learning Machine with Superpixel-Guided Composite Kernels for SAR Image Classification

    Dongdong GUAN  Xiaoan TANG  Li WANG  Junda ZHANG  

     
    LETTER-Pattern Recognition
    Publicized: 2018/03/14   Vol: E101-D No:6   Page(s): 1703-1706

    Synthetic aperture radar (SAR) image classification is a popular yet challenging research topic in the field of SAR image interpretation. This paper presents a new classification method based on the extreme learning machine (ELM) and superpixel-guided composite kernels (SGCK). By introducing the generalized likelihood ratio (GLR) similarity, a modified simple linear iterative clustering (SLIC) algorithm is first developed to generate superpixels for SAR images. Instead of a fixed-size region, the shape-adaptive superpixel is used to exploit spatial information, which is effective for classifying pixels in detailed and near-edge regions. Following the composite kernels framework, the SGCK is constructed based on the spatial information and backscatter intensity information. Finally, the SGCK is incorporated into an ELM classifier. Experimental results on both simulated and real SAR images demonstrate that the proposed framework is superior to some traditional classification methods.
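
    A minimal sketch of the composite-kernel-plus-ELM idea is given below: a weighted sum of a spatial (superpixel-averaged) kernel and a backscatter-intensity kernel is fed to a kernel ELM classifier. This follows a common kernel ELM formulation; the exact SGCK construction and the values of mu, gamma, and C are assumptions here.

    ```python
    # Hedged sketch: composite kernel + kernel ELM classification.
    import numpy as np

    def rbf(A, B, gamma=1.0):
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * d2)

    def composite_kernel(Xs_a, Xs_b, Xp_a, Xp_b, mu=0.5):
        """Xs_*: spatial (superpixel-averaged) features, Xp_*: per-pixel intensity features."""
        return mu * rbf(Xs_a, Xs_b) + (1 - mu) * rbf(Xp_a, Xp_b)

    def kernel_elm_fit_predict(K_train, T, K_test, C=100.0):
        """K_train: (N, N) training kernel, T: (N, n_classes) one-hot targets, K_test: (M, N)."""
        n = K_train.shape[0]
        beta = np.linalg.solve(np.eye(n) / C + K_train, T)   # kernel ELM output weights
        return (K_test @ beta).argmax(axis=1)                # predicted class per test sample
    ```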

  • Image Denoising Using Block-Rotation-Based SVD Filtering in Wavelet Domain

    Min WANG  Shudao ZHOU  

     
    PAPER-Image Processing and Video Processing
    Publicized: 2018/03/14   Vol: E101-D No:6   Page(s): 1621-1628

    This paper proposes an image denoising method using singular value decomposition (SVD) with block-rotation-based operations in the wavelet domain. First, we divide a noisy image into sub-blocks and use the single-level discrete 2-D wavelet transform to decompose each sub-block into a low-frequency part and high-frequency parts. Then, we use SVD and rotation-based SVD with the rank-1 approximation to filter the noise in the different high-frequency parts and obtain the denoised sub-blocks. Finally, we reconstruct each sub-block from the low-frequency part and the filtered high-frequency parts by the inverse wavelet transform and reassemble the denoised sub-blocks to obtain the final denoised image. Experiments show the effectiveness of this method compared with relevant methods.
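
    A simplified sketch of the per-block pipeline (with the block-rotation step omitted) is shown below: the high-frequency subbands of a single-level 2-D DWT are filtered by a rank-1 SVD approximation and the block is then re-synthesized. The wavelet choice 'haar' is an assumption.

    ```python
    # Hedged sketch: rank-1 SVD filtering of wavelet high-frequency subbands for one sub-block.
    import numpy as np
    import pywt

    def rank1_filter(band):
        U, s, Vt = np.linalg.svd(band, full_matrices=False)
        s[1:] = 0.0                                  # keep only the largest singular value
        return (U * s) @ Vt

    def denoise_block(block):
        cA, (cH, cV, cD) = pywt.dwt2(block, 'haar')  # single-level 2-D DWT of the sub-block
        cH, cV, cD = rank1_filter(cH), rank1_filter(cV), rank1_filter(cD)
        return pywt.idwt2((cA, (cH, cV, cD)), 'haar')
    ```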

  • A Real-Time Subtask-Assistance Strategy for Adaptive Services Composition

    Li QUAN  Zhi-liang WANG  Xin LIU  

     
    PAPER-Data Engineering, Web Information Systems
    Publicized: 2018/01/30   Vol: E101-D No:5   Page(s): 1361-1369

    Reinforcement learning has been applied to adaptive service composition. However, traditional algorithms are not suitable for large-scale service composition. Based on the Q-learning algorithm, a multi-task-oriented algorithm named multi-Q learning is proposed to realize a subtask-assistance strategy for large-scale, adaptive service composition. Differing from previous studies that focus on a single task, we take the relationships among multiple service composition tasks into account. We decompose a complex service composition task into multiple subtasks according to graph theory. Different tasks with the same subtasks can assist each other to improve their learning speed. Experimental results show that our algorithm learns noticeably faster than the traditional Q-learning algorithm. Compared with multi-agent Q-learning, our algorithm also converges faster. Moreover, for all involved service composition tasks that share subtasks, our algorithm can simultaneously improve the speed at which they learn optimal policies in real time.
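
    The subtask-assistance idea can be sketched as tabular Q-learning in which tasks that contain the same subtask share one Q-table for it, so experience gathered by one task also speeds up the others. The names and hyper-parameters below are assumptions, not the paper's multi-Q learning algorithm.

    ```python
    # Hedged sketch: Q-learning with Q-tables shared per subtask across composition tasks.
    from collections import defaultdict
    import random

    shared_q = defaultdict(lambda: defaultdict(float))   # shared_q[subtask][(state, action)]

    def q_update(subtask, s, a, reward, s_next, actions, alpha=0.1, gamma=0.9):
        q = shared_q[subtask]                            # every task with this subtask uses this table
        best_next = max((q[(s_next, a2)] for a2 in actions), default=0.0)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])

    def choose_action(subtask, s, actions, epsilon=0.1):
        if random.random() < epsilon:
            return random.choice(actions)                # explore
        q = shared_q[subtask]
        return max(actions, key=lambda a: q[(s, a)])     # exploit the shared estimates
    ```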

  • Retweeting Prediction Based on Social Hotspots and Dynamic Tensor Decomposition

    Qian LI  Xiaojuan LI  Bin WU  Yunpeng XIAO  

     
    PAPER-Artificial Intelligence, Data Mining
    Publicized: 2018/01/30   Vol: E101-D No:5   Page(s): 1380-1392

    In social networks, predicting user behavior under social hotspots can aid in understanding the development trend of a topic. In this paper, we propose a retweeting prediction method for social hotspots based on tensor decomposition, using user information, relationship data, and behavioral data. The method can be used to predict the behavior of users and analyze the evolution of topics. First, we propose a tensor-based mechanism for mining user interactions and use the tensor to address the inaccuracy that arises when calculating interaction intensity from sparse user interaction data. At the same time, the tensor's properties under data-space conversion and projection allow us to analyze the influence of following relationships on interactions between users. Second, a time decay function is introduced into the tensor to further quantify the evolution of user behavior in current social hotspots. This function can dynamically fit a user's behavior and accounts for the decay of user interactions over time. Finally, we divide the topic life cycle into discrete time slices and construct a user retweeting prediction model based on logistic regression. In this way, we can explore the temporal characteristics of user behavior in social hotspots while also addressing the problem of uneven interaction behavior between users. Experiments show that the proposed method can effectively improve the accuracy of user behavior prediction and aid in understanding the development trend of a topic.
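
    An exponential decay is one common choice for such a time-decay weighting (shown for context only; the paper defines its own decay function and parameters):

    ```latex
    % A common exponential time-decay weighting for interaction events (context only).
    w(t) \;=\; e^{-\lambda\,(t_{\mathrm{now}} - t)}, \qquad \lambda > 0,
    ```

    so that an interaction observed at time t contributes weight w(t) to the interaction intensity accumulated at prediction time t_now.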

  • Impossible Differential Cryptanalysis of Fantomas and Robin

    Xuan SHEN  Guoqiang LIU  Chao LI  Longjiang QU  

     
    LETTER-Cryptography and Information Security
    Vol: E101-A No:5   Page(s): 863-866

    At FSE 2014, Grosso et al. proposed LS-designs, a family of bitslice ciphers aiming at efficient masked implementations against side-channel analysis. They also presented two specific LS-designs, namely the non-involutive cipher Fantomas and the involutive cipher Robin. The designers claimed that the longest impossible differentials of these two ciphers span only 3 rounds. In this paper, we construct 4-round impossible differentials for the two ciphers, which are one round longer than the longest impossible differentials found by the designers. Furthermore, with the 4-round impossible differentials, we propose impossible differential attacks on Fantomas and Robin reduced to 6 rounds (out of the full 12/16 rounds). Both attacks require $2^{119}$ chosen plaintexts and $2^{101.81}$ 6-round encryptions.

  • Static Representation Exposing Spatial Changes in Spatio-Temporal Dependent Data

    Hiroki CHIBA  Yuki HYOGO  Kazuo MISUE  

     
    PAPER-Elemental Technologies for human behavior analysis
    Publicized: 2018/01/19   Vol: E101-D No:4   Page(s): 933-943

    Spatio-temporal dependent data, such as weather observation data, are data whose attribute values depend on both time and space. Typical methods for visualizing such data plot the attribute values at each point in time on a map and display the series of maps either in chronological order as an animation or juxtaposed horizontally or vertically. However, these methods are problematic in that they compel readers interested in grasping the spatial changes of the attribute values to memorize the representations on the maps, and the longer the time period covered by the data, the higher the cognitive load. To solve these problems, the authors propose a visualization method capable of overlaying representations of multiple instantaneous values on a single static map. This paper explains the design of the proposed method and reports two experiments conducted by the authors to investigate its usefulness. The experimental results show that the proposed method is useful in terms of the speed and accuracy with which readers can grasp the spatial changes and its ability to present data with long time series efficiently.

  • Multiple Speech Source Separation with Non-Sparse Components Recovery by Using Dual Similarity Determination

    Maoshen JIA  Jundai SUN  Feng DENG  Junyue SUN  

     
    PAPER-Elemental Technologies for human behavior analysis
    Publicized: 2018/01/19   Vol: E101-D No:4   Page(s): 925-932

    In this work, a multiple-source separation method that jointly recovers sparse and non-sparse components is proposed using dual similarity determination. Specifically, a dual similarity coefficient is designed based on normalized cross-correlation and Jaccard coefficients, and its validity is confirmed via a statistical analysis of a quantitative effectiveness measure. Thereafter, regarding the sparse components as a guide, the non-sparse components are recovered using the dual similarity coefficient. Finally, each separated signal is obtained by synthesizing its sparse and non-sparse components. Experimental results demonstrate that the separation quality of the proposed method outperforms several existing BSS methods, including sparse-component-separation based methods, independent component analysis based methods, and soft-threshold based methods.
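
    One plausible way to combine normalized cross-correlation with a Jaccard coefficient into a single score is sketched below; the paper's exact combination rule, the activity masks, and the weight w are not specified in the abstract and are assumptions here.

    ```python
    # Hedged sketch: a "dual similarity" combining waveform correlation and support overlap.
    import numpy as np

    def normalized_cross_correlation(a, b):
        a = (a - a.mean()) / (a.std() + 1e-12)
        b = (b - b.mean()) / (b.std() + 1e-12)
        return float(np.mean(a * b))                    # in [-1, 1]

    def jaccard(mask_a, mask_b):
        mask_a, mask_b = mask_a.astype(bool), mask_b.astype(bool)
        union = np.logical_or(mask_a, mask_b).sum()
        return float(np.logical_and(mask_a, mask_b).sum() / union) if union else 0.0

    def dual_similarity(a, b, mask_a, mask_b, w=0.5):
        """Weighted combination of waveform correlation and activity-mask overlap."""
        return w * normalized_cross_correlation(a, b) + (1 - w) * jaccard(mask_a, mask_b)
    ```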

Showing 121-140 of 945 hits