Keyword Search Result

[Keyword] OMP (3,945 hits)

Results 521-540 of 3,945

  • Cube-Based Encryption-then-Compression System for Video Sequences

    Kosuke SHIMIZU  Taizo SUZUKI  Keisuke KAMEYAMA  

     
    PAPER-Image

      Vol:
    E101-A No:11
      Page(s):
    1815-1822

    We propose cube-based perceptual encryption (C-PE), which consists of cube scrambling, cube rotation, cube negative/positive transformation, and cube color component shuffling, and describe its application to an encryption-then-compression (ETC) system for Motion JPEG (MJPEG). In particular, unlike conventional block-based perceptual encryption (B-PE), cube rotation replaces blocks in the original frames with blocks not only from other frames but also from the depth-wise cube sides (spatiotemporal sides). Since this makes intra-block observation more difficult and prevents unauthorized decryption from a single frame alone, C-PE is more robust than B-PE against attacks that do not use the decryption key. However, because encrypted frames that include blocks from the spatiotemporal sides slightly degrade MJPEG compression performance, we also devise a version of C-PE with no spatiotemporal sides (NSS-C-PE) that hardly affects compression performance. C-PE makes the encrypted video sequence robust against a single-frame-based algorithmic brute-force (ABF) attack with only 21 cubes. The experimental results show the compression efficiency and encryption robustness of the C-PE/NSS-C-PE-based ETC system. The C-PE-based ETC system shows mixed results depending on the video, whereas the NSS-C-PE-based ETC system keeps the BD-PSNR loss to about -0.03dB regardless of the video.
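
    As a toy illustration only: the four cube operations named above (cube scrambling, rotation, negative/positive transformation, and color component shuffling) could be sketched roughly as follows in Python/NumPy. The cube size, key handling, and coupling to MJPEG are assumptions, not the authors' implementation.

        import numpy as np

        def cube_perceptual_encrypt(video, cube=16, seed=0):
            """Toy sketch of cube-based perceptual encryption (C-PE).

            video: uint8 array of shape (T, H, W, 3); T, H and W are assumed
            to be multiples of `cube`.  Returns a volume of the same shape.
            """
            rng = np.random.default_rng(seed)      # stand-in for a secret key
            T, H, W, C = video.shape
            t, h, w = T // cube, H // cube, W // cube

            # Split the volume into t*h*w cubes of shape (cube, cube, cube, C).
            cubes = (video.reshape(t, cube, h, cube, w, cube, C)
                          .transpose(0, 2, 4, 1, 3, 5, 6)
                          .reshape(-1, cube, cube, cube, C))

            # 1) Cube scrambling: permute cube positions.
            cubes = cubes[rng.permutation(len(cubes))]

            out = np.empty_like(cubes)
            for i, c in enumerate(cubes):
                # 2) Cube rotation in a randomly chosen spatiotemporal plane.
                axes = tuple(rng.choice(3, size=2, replace=False))
                c = np.rot90(c, k=int(rng.integers(4)), axes=axes)
                # 3) Negative/positive transformation (with probability 1/2).
                if rng.integers(2):
                    c = 255 - c
                # 4) Color component shuffling.
                c = c[..., rng.permutation(C)]
                out[i] = c

            # Reassemble the cubes into a video volume.
            return (out.reshape(t, h, w, cube, cube, cube, C)
                       .transpose(0, 3, 1, 4, 2, 5, 6)
                       .reshape(T, H, W, C))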

  • Deterministic Constructions of Compressed Sensing Matrices Based on Affine Singular Linear Space over Finite Fields

    Gang WANG  Min-Yao NIU  Jian GAO  Fang-Wei FU  

     
    LETTER-Coding Theory

      Vol:
    E101-A No:11
      Page(s):
    1957-1963

    Compressed sensing theory provides a new approach to data acquisition as a sampling technique and ensures that a sparse signal can be reconstructed from few measurements. The construction of compressed sensing matrices is a central problem in compressed sensing (CS) theory. In this paper, deterministic constructions of compressed sensing matrices based on affine singular linear space over finite fields are presented and compared with the compressed sensing matrices constructed by DeVore based on polynomials over finite fields. With appropriate parameter choices, our sparse compressed sensing matrices are superior to DeVore's matrices. We then use a new formulation of support recovery to recover the support sets of signals with sparsity at most k, based on binary compressed sensing matrices satisfying the disjunct and inclusive properties.
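
    For reference, the DeVore construction cited above as the baseline (binary matrices built from polynomials over a prime field) admits a compact sketch; the parameters below are illustrative assumptions, and the authors' new construction from affine singular linear spaces is not reproduced here.

        import itertools
        import numpy as np

        def devore_matrix(p, r):
            """DeVore-style binary sensing matrix from polynomials over F_p.

            Rows are indexed by pairs (a, b) in F_p x F_p and columns by the
            polynomials Q of degree <= r; column Q has a 1 in row (a, Q(a))
            for every a, so the matrix is p^2 x p^(r+1) with p ones per column.
            """
            cols = []
            for coeffs in itertools.product(range(p), repeat=r + 1):
                col = np.zeros(p * p)
                for a in range(p):
                    b = sum(c * pow(a, k, p) for k, c in enumerate(coeffs)) % p
                    col[a * p + b] = 1.0
                cols.append(col)
            # Dividing by sqrt(p) yields unit-norm columns with coherence <= r/p.
            return np.stack(cols, axis=1) / np.sqrt(p)

        Phi = devore_matrix(p=5, r=2)   # 25 x 125, coherence at most 2/5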

  • Low-Complexity Detection Based on Landweber Method in the Uplink of Massive MIMO Systems

    Xu BAO  Wence ZHANG  Jisheng DAI  Jianxin DAI  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2018/05/16
      Vol:
    E101-B No:11
      Page(s):
    2340-2347

    In this paper, we devise low-complexity uplink detection algorithms for Massive MIMO systems. We treat uplink detection as an ill-posed problem and adopt the Landweber method to solve it. To reduce the computational complexity and increase the convergence rate, we propose an improved Landweber method with optimal relaxation factor (ILM-O) algorithm. In addition, we propose a reduced-order Landweber method (ROLM) algorithm, which reduces the order of the Landweber method by introducing a set of coefficients. An analysis of the convergence and the complexity is provided. Numerical results demonstrate that the proposed algorithms outperform the existing algorithm.
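
    For context, the plain Landweber iteration underlying the ILM-O/ROLM algorithms can be sketched as below; the relaxation factor, iteration count, and dimensions are assumptions, and the paper's optimal relaxation factor and order-reduction coefficients are not reproduced.

        import numpy as np

        def landweber_detect(H, y, n_iter=50, relax=None):
            """Basic Landweber iteration for the uplink model y = H x + n.

            Iterates x <- x + relax * H^H (y - H x), which converges for
            0 < relax < 2 / sigma_max(H)^2.
            """
            if relax is None:
                smax = np.linalg.norm(H, 2)        # largest singular value
                relax = 1.0 / (smax * smax)
            x = np.zeros(H.shape[1], dtype=complex)
            for _ in range(n_iter):
                x = x + relax * (H.conj().T @ (y - H @ x))
            return x

        # Toy uplink: 8 single-antenna users, 64 base-station antennas, QPSK.
        rng = np.random.default_rng(1)
        H = (rng.standard_normal((64, 8)) + 1j * rng.standard_normal((64, 8))) / np.sqrt(2)
        x_true = rng.choice([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j], size=8) / np.sqrt(2)
        y = H @ x_true + 0.05 * (rng.standard_normal(64) + 1j * rng.standard_normal(64))
        x_hat = landweber_detect(H, y)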

  • Geometric Deformation Analysis of Ray-Sampling Plane Method for Projection-Type Holographic Display Open Access

    Koki WAKUNAMI  Yasuyuki ICHIHASHI  Ryutaro OI  Makoto OKUI  Boaz Jessie JACKIN  Kenji YAMAMOTO  

     
    INVITED PAPER

      Vol:
    E101-C No:11
      Page(s):
    863-869

    A computer-generated hologram based on the ray-sampling plane method was newly applied to a projection-type holographic display that consists of a holographic projection and a holographic optical element screen. In the proposed method, the geometric deformation characteristic of the holographic image through the display system was derived mathematically and canceled out by a coordinate transformation of the ray-sampling condition to avoid image distortion. In the experiment, holographic image reconstruction with arbitrary depth expression and without image distortion was demonstrated optically.

  • Efficient Reusable Collections

    Davud MOHAMMADPUR  Ali MAHJUR  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2018/08/20
      Vol:
    E101-D No:11
      Page(s):
    2710-2719

    The efficiency and flexibility of collections have a significant impact on the overall performance of applications. Current approaches to implementing collections have two main drawbacks: (i) they limit the efficiency of collections, and (ii) they do not adequately support collection composition. Consequently, when the efficiency and flexibility of collections matter, programmers must implement them themselves, which leads to a loss of reusability. This article presents neoCollection, a novel approach to encapsulating collections. neoCollection has several distinguishing features: (i) it can be applied to data elements efficiently and flexibly, and (ii) collections can be composed efficiently and flexibly, a feature that does not exist in current approaches. To demonstrate its effectiveness, neoCollection is implemented as an extension to Java and C++.

  • Accelerating a Lloyd-Type k-Means Clustering Algorithm with Summable Lower Bounds in a Lower-Dimensional Space

    Kazuo AOYAMA  Kazumi SAITO  Tetsuo IKEDA  

     
    PAPER-Artificial Intelligence, Data Mining

      Publicized:
    2018/08/02
      Vol:
    E101-D No:11
      Page(s):
    2773-2783

    This paper presents an efficient acceleration algorithm for Lloyd-type k-means clustering that is suitable for large-scale, high-dimensional data sets with potentially numerous classes. The algorithm employs a novel projection-based filter (PRJ) to avoid unnecessary distance calculations, resulting in high-speed performance while producing exactly the same results as the standard Lloyd's algorithm. The PRJ exploits a summable lower bound on a squared distance, defined in a lower-dimensional space to which the data points are projected. The summable lower bound can be tightened dynamically by incrementally adding components in the lower-dimensional space within each iteration, whereas the lower bounds used in existing acceleration algorithms work only once as a fixed filter. Experimental results on large-scale, high-dimensional real image data sets demonstrate that the proposed algorithm works at high speed and with low memory consumption for large k values, compared with state-of-the-art algorithms.
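
    A minimal sketch of an incrementally tightened, summable lower bound of the kind described above, assuming the points have already been projected to a lower-dimensional space (e.g., by PCA); the blocking scheme and the exact PRJ filter of the paper are not reproduced.

        import numpy as np

        def assign_with_partial_bounds(X, C, block=8):
            """Nearest-centroid assignment that skips full distance computations.

            The partial sum of squared differences over the first coordinates is
            a lower bound on the full squared distance; it is accumulated block
            by block, and a centroid is pruned as soon as the bound exceeds the
            best distance found so far.
            """
            n, d = X.shape
            labels = np.empty(n, dtype=int)
            for i, x in enumerate(X):
                best_j, best_d = 0, float(np.sum((x - C[0]) ** 2))
                for j in range(1, len(C)):
                    lb, pruned = 0.0, False
                    for s in range(0, d, block):
                        lb += float(np.sum((x[s:s + block] - C[j, s:s + block]) ** 2))
                        if lb >= best_d:        # bound already too large: skip
                            pruned = True
                            break
                    if not pruned:              # lb is now the exact distance
                        best_j, best_d = j, lb
                labels[i] = best_j
            return labels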

  • NEST: Towards Extreme Scale Computing Systems

    Yunfeng LU  Huaxi GU  Xiaoshan YU  Kun WANG  

     
    LETTER-Information Network

      Publicized:
    2018/08/20
      Vol:
    E101-D No:11
      Page(s):
    2827-2830

    High-performance computing (HPC) has penetrated various research fields, yet the increase in computing power is limited by conventional electrical interconnections. The proposed architecture, NEST, exploits wavelength routing in arrayed waveguide grating routers (AWGRs) to achieve a scalable, low-latency, high-throughput network. For intra-pod and inter-pod communication, the symmetrical topology of NEST reduces the network diameter, which improves latency. Moreover, the proposed architecture enables exponential growth of the network size. Simulation results demonstrate that NEST achieves, on average, a 36% latency improvement and a 30% throughput improvement over the dragonfly topology.

  • High-Speed Spelling in Virtual Reality with Sequential Hybrid BCIs

    Zhaolin YAO  Xinyao MA  Yijun WANG  Xu ZHANG  Ming LIU  Weihua PEI  Hongda CHEN  

     
    LETTER-Biological Engineering

      Publicized:
    2018/07/25
      Vol:
    E101-D No:11
      Page(s):
    2859-2862

    A new hybrid brain-computer interface (BCI), based on sequential control by eye tracking and steady-state visual evoked potentials (SSVEPs), has been proposed for high-speed spelling in virtual reality (VR) with a 40-target virtual keyboard. During target selection, the gaze point was first detected by an eye-tracking accessory. A 4-target block was then selected, and the final target was chosen by a 4-class SSVEP BCI. The system can type at a speed of 1.25 characters/s in a cue-guided target selection task. Online experiments on three subjects achieved an average information transfer rate (ITR) of 360.7 bits/min.

  • Mobile Network Architectures and Context-Aware Network Control Technology in the IoT Era Open Access

    Takanori IWAI  Daichi KOMINAMI  Masayuki MURATA  Ryogo KUBO  Kozo SATODA  

     
    INVITED PAPER

      Publicized:
    2018/04/13
      Vol:
    E101-B No:10
      Page(s):
    2083-2093

    As IoT services become more popular, mobile networks will have to accommodate a wide variety of devices with different requirements, such as bandwidth limitations and latencies. This paper describes edge-distributed mobile network architectures for the IoT era based on dedicated network technology and multi-access edge computing technology, which have been discussed in 3GPP and ETSI. Furthermore, it describes two context-aware control methods that make mobile networks built on these architectures more efficient, reliable, and real-time: autonomous and distributed mobility management, and bandwidth-guaranteed transmission rate control in a networked control system.

  • New Constructions of Zero-Difference Balanced Functions

    Zhibao LIN  Zhengqian LI  Pinhui KE  

     
    LETTER-Coding Theory

      Vol:
    E101-A No:10
      Page(s):
    1719-1723

    Zero-difference balanced (ZDB) functions, which have many applications in coding theory and sequence design, have received a lot of attention in recent years. In this letter, based on two known classes of ZDB functions, a new class of ZDB functions defined on the group $(\mathbb{Z}_{2^e-1}\times\mathbb{Z}_n,+)$ is presented, where e is a prime and $n=p_1^{m_1}p_2^{m_2}\cdots p_k^{m_k}$ with each $p_i$ an odd prime satisfying $e\mid(p_i-1)$ for $1\le i\le k$. When $\gcd(2^e-1,n)=1$, the newly constructed ZDB functions are cyclic.

  • Incremental Environmental Monitoring for Revealing the Ecology of Endangered Fish Open Access

    Yoshinari SHIRAI  Yasue KISHINO  Shin MIZUTANI  Yutaka YANAGISAWA  Takayuki SUYAMA  Takuma OTSUKA  Tadao KITAGAWA  Futoshi NAYA  

     
    INVITED PAPER

      Publicized:
    2018/04/13
      Vol:
    E101-B No:10
      Page(s):
    2070-2082

    This paper proposes a novel environmental monitoring strategy, incremental environmental monitoring, that enables scientists to reveal the ecology of wild animals in the field. We applied this strategy to the habitat of endangered freshwater fish. Specifically, we designed and implemented a network-based system that uses distributed sensors to continuously monitor and record the habitat of endangered fish. Moreover, we developed a set of analytical tools for exploiting a variety of sensor data, including environmental time-series data, such as the amount of dissolved oxygen, as well as underwater video capturing the interaction between the fish and their environment. We also describe the current state of monitoring the behavior and habitat of endangered fish and discuss solutions for making such environmental monitoring in the field more efficient.

  • Wideband Waveguide Short-Slot 2-Plane Coupler Using Frequency Shift of Propagating Modes

    Dong-Hun KIM  Jiro HIROKAWA  Makoto ANDO  

     
    PAPER-Microwaves, Millimeter-Waves

      Vol:
    E101-C No:10
      Page(s):
    815-821

    A wideband waveguide short-slot 2-plane coupler with 2×2 input/output ports is designed, fabricated, and evaluated. By using the coupling coefficients of the complementary propagating modes, namely the TE11, TE21, and TE30 modes, the flatness of the output amplitudes of the 2-plane coupler is improved. The coupler operates from 4.96GHz to 5.27GHz (6.1% bandwidth), which is wider than that of the former coupler designed without considering the complementary propagating modes, which operates from 5.04GHz to 5.17GHz (2.5% bandwidth).

  • Improving Per-Node Computing Efficiency by an Adaptive Lock-Free Scheduling Model

    Zhishuo ZHENG  Deyu QI  Naqin ZHOU  Xinyang WANG  Mincong YU  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2018/07/06
      Vol:
    E101-D No:10
      Page(s):
    2423-2435

    Job scheduling on many-core computers with tens or even hundreds of processing cores is one of the key technologies in High Performance Computing (HPC) systems. Although many scheduling algorithms have been proposed, scheduling jobs that are assigned to a single computing node with diverse scheduling objectives and must execute with high efficiency remains a challenge. Moreover, the increasing scale and the need for rapid response to changing requirements are hard to meet with existing scheduling models in an HPC node. To address these issues, we propose a novel adaptive scheduling model for a single node with a many-core processor; this model solves the problems of scheduling efficiency and scalability through an adaptive optimistic control mechanism. The mechanism exposes information such that all the cores are provided with jobs and with the tools necessary to take advantage of that information, and thus compete for resources in an uncoordinated manner. At the same time, the mechanism is equipped with adaptive control, allowing it to adjust the number of running tools dynamically when conflicts occur frequently. We justify this scheduling model and present simulation results for synthetic and real-world HPC workloads, comparing our proposed model with two widely used scheduling models, i.e., multi-path monolithic and two-level scheduling. The proposed approach outperforms the other models in scheduling efficiency and scalability. Our results demonstrate that adaptive optimistic control significantly improves the parallelism and performance of the node-level scheduling model for HPC workloads.

  • Twofold Correlation Filtering for Tracking Integration

    Wei WANG  Weiguang LI  Zhaoming CHEN  Mingquan SHI  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2018/07/10
      Vol:
    E101-D No:10
      Page(s):
    2547-2550

    In general, effectively integrating the advantages of different trackers can yield a unified performance improvement. In this work, we study the integration of multiple correlation filter (CF) trackers and propose a novel but simple tracking integration method that combines different trackers at the filter level. Because different CF trackers use different correlation filters and features, their tracking results are not directly comparable for integration. To tackle this, we propose a twofold CF that unifies the various response maps so that the results of different tracking algorithms can be compared, boosting the tracking performance in a manner similar to ensemble learning. Experiments integrating two CF methods on the OTB data sets demonstrate that the proposed method is effective and promising.
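
    As a hedged illustration of the idea (not the paper's exact twofold-CF formulation), heterogeneous CF response maps can be made comparable by normalizing each map before a weighted fusion:

        import numpy as np

        def fuse_response_maps(maps, weights=None):
            """Fuse correlation-filter response maps from different trackers.

            Each map is rescaled to [0, 1] so that maps produced by different
            filters and features become comparable, then combined by a weighted
            sum; the peak of the fused map gives the predicted target position.
            Assumes all maps share the same spatial size.
            """
            maps = [np.asarray(m, dtype=float) for m in maps]
            if weights is None:
                weights = np.full(len(maps), 1.0 / len(maps))
            fused = np.zeros_like(maps[0])
            for w, m in zip(weights, maps):
                span = m.max() - m.min()
                fused += w * ((m - m.min()) / span if span > 0 else np.zeros_like(m))
            return np.unravel_index(np.argmax(fused), fused.shape)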

  • Low Bit-Rate Compression Image Restoration through Subspace Joint Regression Learning

    Zongliang GAN  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2018/06/28
      Vol:
    E101-D No:10
      Page(s):
    2539-2542

    In this letter, an effective low bit-rate image restoration method is proposed that combines image denoising and subspace regression learning. The proposed framework has two parts: estimation of the main image structure by classical NLM denoising, and prediction of the texture component by subspace joint regression learning. A local regression function is learned from denoised patches to original patches in each subspace, where the corresponding compressed image patches are employed to generate anchoring points by a dictionary learning approach. Moreover, we extend Extreme Support Vector Regression (ESVR) to multi-variable nonlinear regression to obtain more robust results. Experimental results demonstrate that the proposed method achieves favorable performance compared with other leading methods.

  • Recovery Performance of IHT and HTP Algorithms under General Perturbations

    Xiaobo ZHANG  Wenbo XU  Yupeng CUI  Jiaru LIN  

     
    LETTER-Digital Signal Processing

      Vol:
    E101-A No:10
      Page(s):
    1698-1702

    In compressed sensing, most previous research has studied the recovery performance of a sparse signal x based on the acquisition model y=Φx+n, where n denotes the noise vector. There are also related studies for the general perturbation environment, i.e., y=(Φ+E)x+n, where E is the measurement perturbation. The IHT and HTP algorithms are classical algorithms for sparse signal reconstruction in compressed sensing. Under general perturbations, this paper derives the required sufficient conditions and the error bounds of the IHT and HTP algorithms.
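
    For reference, the unperturbed-model IHT iteration analyzed above can be sketched as follows; the step size, iteration count, and hard-thresholding convention are assumptions, and HTP would additionally solve a least-squares problem on the selected support at each iteration.

        import numpy as np

        def iht(Phi, y, k, n_iter=100, step=1.0):
            """Iterative hard thresholding for y = Phi x + n with sparsity k.

            x <- H_k( x + step * Phi^T (y - Phi x) ), where H_k keeps the k
            largest-magnitude entries and zeroes the rest.
            """
            x = np.zeros(Phi.shape[1])
            for _ in range(n_iter):
                g = x + step * (Phi.T @ (y - Phi @ x))
                support = np.argsort(np.abs(g))[-k:]
                x = np.zeros_like(x)
                x[support] = g[support]
            return x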

  • DCD-Based Branch and Bound Detector with Reduced Complexity for MIMO Systems

    Zhi QUAN  Ting TIAN  

     
    PAPER-Wireless Communication Technologies

      Publicized:
    2018/04/09
      Vol:
    E101-B No:10
      Page(s):
    2230-2238

    In many communications applications, maximum-likelihood decoding reduces to solving an integer least-squares problem, which is NP-hard in the worst case. It has recently been shown that, over a wide range of dimensions and SNRs, the branch and bound (BB) algorithm can be used to find the exact solution with an expected complexity that is roughly cubic in the dimension of the problem. However, the computational complexity becomes prohibitive if the SNR is too low and/or the dimension of the problem is too large. The dichotomous coordinate descent (DCD) algorithm provides low complexity, but its detection performance is not as good as that of the BB detector. In this paper, two methods are developed to bound the optimal detector cost and thereby reduce the complexity of BB. These methods are DCD-based detectors for MIMO and multiuser detection in scenarios with a large number of transmitting antennas/users. First, a combined detection technique based on the BB and DCD algorithms is proposed. The technique maintains the advantages of both algorithms and achieves a good trade-off between performance and complexity compared to using only the BB or DCD algorithm. Second, since the first feasible solution obtained from the BB search is the solution of the decorrelating decision feedback (DF) method, and because DCD yields better accuracy than the decorrelating DF solution, we propose that the first feasible solution of the BB algorithm be obtained by the box-constrained DCD algorithm rather than the decorrelating DF detector. This method improves the precision of the initial solution and identifies more branches that can be eliminated from the search tree. The results show that the DCD-based BB detector provides optimal detection with reduced worst-case complexity compared to that of the decorrelating DF-based BB detector.

  • Weighting Estimation Methods for Opponents' Utility Functions Using Boosting in Multi-Time Negotiations

    Takaki MATSUNE  Katsuhide FUJITA  

     
    PAPER-Information Network

      Publicized:
    2018/07/10
      Vol:
    E101-D No:10
      Page(s):
    2474-2484

    Recently, multi-issue closed negotiations have attracted attention in multi-agent systems. In particular, multi-time and multilateral negotiation strategies are important topics in multi-issue closed negotiations. In multi-issue closed negotiations, an automated negotiating agent needs strategies for estimating an opponent's utility function by learning the opponent's behaviors, since the opponent's utility information is not disclosed to others. However, estimating an opponent's utility function is difficult for the following reasons: (1) training datasets for estimating opponents' utility functions cannot be obtained, and (2) it is difficult to apply a learned model to different negotiation domains and opponents. In this paper, we propose a novel method for estimating opponents' utility functions using boosting, based on the least-squares method and nonlinear programming. Our proposed method weights each utility function estimated by several existing utility function estimation methods and outputs an improved utility function by summing the weighted functions. The existing methods used in the boosting are based on the frequency-based method, which counts the number of times each value is offered while considering the time elapsed when it was offered. Our experimental results demonstrate that the accuracy of estimating opponents' utility functions is significantly improved under various conditions compared with the existing utility function estimation methods without boosting.
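
    A minimal sketch of the weighting step only, assuming hypothetical base estimators and training pairs; the paper's actual boosting formulation with nonlinear programming is not reproduced, and SciPy's non-negative least squares is used here as a stand-in for the least-squares fit.

        import numpy as np
        from scipy.optimize import nnls

        def fit_estimator_weights(base_estimates, target_utilities):
            """Fit non-negative weights combining several base utility estimators.

            base_estimates: (n_bids, n_estimators) array; column m is the utility
            that base estimator m assigns to each observed bid.
            target_utilities: (n_bids,) reference utilities (hypothetical
            training data for this sketch).
            """
            w, _residual = nnls(base_estimates, target_utilities)
            return w

        def combined_utility(base_estimates_for_bid, w):
            """Improved utility of a bid: weighted sum of the base estimates."""
            return float(np.dot(base_estimates_for_bid, w))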

  • Designing Coded Aperture Camera Based on PCA and NMF for Light Field Acquisition

    Yusuke YAGI  Keita TAKAHASHI  Toshiaki FUJII  Toshiki SONODA  Hajime NAGAHARA  

     
    PAPER

      Publicized:
    2018/06/20
      Vol:
    E101-D No:9
      Page(s):
    2190-2200

    A light field, which is often understood as a set of dense multi-view images, has been utilized in various 2D/3D applications. Efficient light field acquisition using a coded aperture camera is the target problem considered in this paper. Specifically, the entire light field, which consists of many images, should be reconstructed from only a few images that are captured through different aperture patterns. In previous work, this problem has often been discussed in the context of compressed sensing (CS), where sparse representations over a pre-trained dictionary or basis are explored to reconstruct the light field. In contrast, we formulated this problem from the perspective of principal component analysis (PCA) and non-negative matrix factorization (NMF), where only a small number of basis vectors are selected in advance based on an analysis of the training dataset. From this formulation, we derived optimal non-negative aperture patterns and a straightforward reconstruction algorithm. Even though our method is based on conventional techniques, it has proven to be more accurate and much faster than a state-of-the-art CS-based method.
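
    The NMF side of the formulation can be illustrated with a toy sketch: learn a small non-negative basis over the angular (view) dimension from training light fields, use the normalized basis rows as coded aperture patterns, and recover the views of a new scene from the few coded shots by a small linear solve. The data layout, normalization, and training data below are assumptions, not the authors' pipeline.

        import numpy as np
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(0)
        n_pixels, n_views, n_shots = 5000, 25, 4

        # Training light fields: rows = pixels, columns = views (angular dim).
        L_train = rng.random((n_pixels, n_views))

        # Learn a non-negative angular basis B of shape (n_shots, n_views).
        nmf = NMF(n_components=n_shots, init="nndsvda", max_iter=500)
        nmf.fit(L_train)
        B = nmf.components_

        # Use the normalized basis rows as aperture transmittances in [0, 1].
        patterns = B / B.max(axis=1, keepdims=True)

        # Capture: each coded shot is a pattern-weighted sum over the views.
        L_test = rng.random((n_pixels, n_views))     # unseen scene (toy stand-in)
        shots = L_test @ patterns.T                  # (n_pixels, n_shots)

        # Reconstruction: per-pixel angular coefficients, then back to views.
        M = B @ patterns.T                           # (n_shots, n_shots), assumed well-conditioned
        coeffs = shots @ np.linalg.inv(M)            # solves shots ~= coeffs @ M
        L_hat = coeffs @ B                           # reconstructed light field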

  • Parameterized Algorithms to Compute Ising Partition Function

    Hidefumi HIRAISHI  Hiroshi IMAI  Yoichi IWATA  Bingkai LIN  

     
    PAPER

      Vol:
    E101-A No:9
      Page(s):
    1398-1403

    Computing the partition function of the Ising model on a graph has been investigated from both the computer science and the statistical physics sides, producing fertile results on P cases, FPTAS/FPRAS cases, inapproximability, and intractability. Recently, measurement-based quantum computing as well as quantum annealing have opened up another bridge between the two fields by relating a tree tensor network representing a quantum graph state to a rank decomposition of the graph. This paper makes this bridge wider in both directions. An $O^*(2^{\frac{\omega}{2}\mathrm{bw}(G)})$-time algorithm is developed for the partition function on an n-vertex graph G with a branch decomposition of width bw(G), where O* ignores a polynomial factor in n and ω is the matrix multiplication exponent, less than 2.37287. Related algorithms running in $O^*(4^{\mathrm{rw}(\tilde{G})})$ time for the tree tensor network, which are of interest in quantum computation, are given, assuming a rank decomposition of width $\mathrm{rw}(\tilde{G})$ of a subdivided graph $\tilde{G}$. These algorithms are parameter-exponential, i.e., $O^*(c^p)$ for a constant c and parameter p; no such algorithm is known for the more general case of computing the Tutte polynomial in terms of bw(G) (the current best time is $O^*(\min\{2^n, \mathrm{bw}(G)^{O(\mathrm{bw}(G))}\})$), with a negative result in terms of the clique-width, related to the rank-width, under ETH.
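
    For very small graphs, the quantity being parameterized above can be checked by direct enumeration; the following O*(2^n) brute force is only a correctness baseline with an assumed coupling convention, not the branch-decomposition or tensor-network algorithm of the paper.

        import itertools
        import math

        def ising_partition_function(n, edges, beta=1.0, J=1.0):
            """Brute-force partition function of the Ising model on a graph.

            Z = sum over all spin assignments s in {-1, +1}^n of
                exp(beta * J * sum over edges (i, j) of s_i * s_j).
            Exponential in n, so only usable for very small graphs.
            """
            Z = 0.0
            for spins in itertools.product((-1, 1), repeat=n):
                interaction = sum(spins[i] * spins[j] for i, j in edges)
                Z += math.exp(beta * J * interaction)
            return Z

        print(ising_partition_function(3, [(0, 1), (1, 2), (0, 2)]))  # triangle graph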
