
Keyword Search Result

[Keyword] PAR (2,741 hits), showing results 241-260

  • Fast and Robust Disparity Estimation from Noisy Light Fields Using 1-D Slanted Filters

    Gou HOUBEN  Shu FUJITA  Keita TAKAHASHI  Toshiaki FUJII  

     
    PAPER

    Publicized: 2019/07/03
    Vol: E102-D No:11
    Page(s): 2101-2109

    Depth (disparity) estimation from a light field (a set of dense multi-view images) is currently attracting much research interest. This paper focuses on how to handle a noisy light field for disparity estimation, because, if left untreated, the noise degrades the accuracy of the estimated disparity maps. Several researchers have worked on this problem, e.g., by introducing disparity cues that are robust to noise. However, it is not easy to break the trade-off between accuracy and computational speed. To tackle this trade-off, we integrate a fast denoising scheme into a fast disparity estimation framework that works in the epipolar plane image (EPI) domain. Specifically, we found that a simple 1-D slanted filter is very effective at reducing noise while preserving the underlying structure of an EPI. Moreover, this simple filtering does not require elaborate parameter configuration for the target noise level. Experimental results, including those on real-world inputs, show that our method achieves good accuracy with much less computational time than some state-of-the-art methods.
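
    A rough sketch of the idea behind a 1-D slanted filter on an epipolar plane image (EPI) is given below: it only illustrates the general operation (averaging EPI samples along a line of candidate slope, i.e., disparity) in NumPy. It is not the authors' implementation, and the function name and interpolation choice are my own.

```python
import numpy as np

def slanted_filter_1d(epi, slope):
    """Average an EPI along lines of a given slope (candidate disparity).

    epi is a 2-D array (views s x pixels u). For every pixel of the centre
    view we average the samples lying on the slanted line through it; when
    the slope matches the true disparity this suppresses noise while
    preserving the EPI structure.
    """
    S, U = epi.shape
    s0 = S // 2
    u = np.arange(U, dtype=float)
    acc = np.zeros(U)
    for s in range(S):
        pos = u + (s - s0) * slope              # positions on the slanted line
        acc += np.interp(pos, u, epi[s], left=epi[s, 0], right=epi[s, -1])
    return acc / S

epi = np.tile(np.sin(np.linspace(0, 8, 128)), (9, 1))   # toy EPI with zero disparity
noisy = epi + 0.2 * np.random.default_rng(0).standard_normal(epi.shape)
print(np.abs(slanted_filter_1d(noisy, 0.0) - epi[4]).mean())   # residual noise level
```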

  • Structural Compressed Network Coding for Data Collection in Cluster-Based Wireless Sensor Networks

    Yimin ZHAO  Song XIAO  Hongping GAN  Lizhao LI  Lina XIAO  

     
    PAPER-Network

    Publicized: 2019/05/21
    Vol: E102-B No:11
    Page(s): 2126-2138

    To efficiently collect sensor readings in cluster-based wireless sensor networks, we propose a structural compressed network coding (SCNC) scheme that jointly considers structural compressed sensing (SCS) and network coding (NC). The proposed scheme exploits the structural compressibility of sensor readings for data compression and reconstruction. Random linear network coding (RLNC) is used to re-project the measurements and thus enhance network reliability. Furthermore, we calculate the energy consumption of intra- and inter-cluster transmission and analyze the effect of cluster size on the total transmission energy consumption. In addition, we introduce an iterative reweighted sparsity recovery algorithm to address the all-or-nothing effect of RLNC and decrease the recovery error. Experiments show that the SCNC scheme can decrease the number of measurements required for decoding and improve the network's robustness, particularly when the loss rate is high. Moreover, the proposed recovery algorithm has better reconstruction performance than several other state-of-the-art recovery algorithms.
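
    As a concrete illustration of the RLNC re-projection step mentioned above, the NumPy sketch below encodes a cluster head's compressed measurements with random coding coefficients and recovers them at the sink from a lossy subset of packets. This is my own toy example of the general RLNC idea, not the paper's SCNC construction (which works jointly with structural compressed sensing and an iterative reweighted recovery).

```python
import numpy as np

rng = np.random.default_rng(0)

def rlnc_encode(y, n_packets):
    """Re-project measurement vector y into n_packets random linear combinations."""
    C = rng.standard_normal((n_packets, y.size))   # random coding coefficients
    return C, C @ y                                # coefficients travel with the packets

def rlnc_decode(C_rx, packets_rx):
    """Least-squares recovery of y from whichever packets survived."""
    return np.linalg.lstsq(C_rx, packets_rx, rcond=None)[0]

y = rng.standard_normal(20)                        # compressed measurements at a cluster head
C, coded = rlnc_encode(y, 30)
received = rng.choice(30, size=22, replace=False)  # simulate packet loss on the way to the sink
y_hat = rlnc_decode(C[received], coded[received])
print(np.allclose(y, y_hat))                       # True: enough surviving packets suffice
```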

  • Accelerating Stochastic Simulations on GPUs Using OpenCL

    Pilsung KANG  

     
    LETTER-Fundamentals of Information Systems

    Publicized: 2019/07/23
    Vol: E102-D No:11
    Page(s): 2253-2256

    Since it was first introduced in 2008 with the 1.0 specification, OpenCL has steadily evolved over the past decade to increase its support for heterogeneous parallel systems. In this paper, we accelerate stochastic simulation of biochemical reaction networks on modern GPUs (graphics processing units) using OpenCL. In implementing the OpenCL version of the stochastic simulation algorithm, we carefully apply its data-parallel execution model to exploit the hardware parallelism of modern GPUs. To evaluate our OpenCL implementation of the stochastic simulation algorithm, we compare its performance with a CPU-based cluster implementation and an NVIDIA CUDA implementation. Beyond this initial report on the performance of OpenCL on GPUs, we also discuss the applicability and programmability of OpenCL in the context of GPU-based scientific computing.
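
    For readers unfamiliar with the underlying algorithm, the sketch below is a plain CPU implementation of a standard stochastic simulation algorithm (the Gillespie direct method) on my own toy birth-death model; each trajectory is independent, which is what makes the problem map well onto GPU work-items. It is a reference sketch only, not the paper's OpenCL code.

```python
import numpy as np

def gillespie_direct(x0, stoich, propensities, t_end, rng):
    """Gillespie direct-method SSA for a small reaction network.

    Sample the waiting time and the next reaction from the current
    propensities, apply its stoichiometry, repeat until t_end.
    """
    t, x = 0.0, np.array(x0, dtype=float)
    while t < t_end:
        a = propensities(x)                     # propensity of each reaction
        a0 = a.sum()
        if a0 <= 0.0:
            break
        t += rng.exponential(1.0 / a0)          # time to the next reaction event
        j = rng.choice(len(a), p=a / a0)        # which reaction fires
        x += stoich[j]
    return x

rng = np.random.default_rng(1)
stoich = np.array([[+1], [-1]])                          # birth, death of species X
propensities = lambda x: np.array([5.0, 0.1 * x[0]])     # k_birth, k_death * X
print(gillespie_direct([0], stoich, propensities, 100.0, rng))   # copy number near 5.0/0.1 = 50
```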

  • Further Results on the Separating Redundancy of Binary Linear Codes

    Haiyang LIU  Lianrong MA  

     
    LETTER-Coding Theory

    Vol: E102-A No:10
    Page(s): 1420-1425

    In this letter, we investigate the separating redundancy of binary linear codes. Using analytical techniques, we provide a general lower bound on the first separating redundancy of binary linear codes and show the bound is tight for a particular family of binary linear codes, i.e., cycle codes. In other words, the first separating redundancy of cycle codes can be determined. We also derive a deterministic and constructive upper bound on the second separating redundancy of cycle codes, which is shown to be better than the general deterministic and constructive upper bounds for the codes.

  • An Efficient Parallel Triangle Enumeration on the MapReduce Framework

    Hongyeon KIM  Jun-Ki MIN  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2019/07/11
    Vol: E102-D No:10
    Page(s): 1902-1915

    Triangle enumeration is one of the fundamental problems on graph data. Although several MapReduce-based triangle enumeration algorithms have been proposed, they still generate a large amount of intermediate data. In this paper, we propose efficient MapReduce algorithms that enumerate every triangle in a massive graph based on vertex partitioning. Since a triangle is composed of an edge and a wedge, our algorithms check for the existence of an edge connecting the end-nodes of each wedge. To generate every triangle of a graph in parallel, we first split the graph into several vertex partitions and group its edges and wedges for each pair of vertex partitions. Then, we form the triangles appearing in each group. Furthermore, to enhance performance, we remove duplicated wedges that exist in several groups. Our experimental evaluation shows that the proposed algorithm outperforms the state-of-the-art algorithm in diverse environments.
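
    The wedge-plus-edge idea in the entry above can be mimicked in a few lines of Python: wedges are grouped by their end pair (the map/shuffle step), and a group whose key is itself an edge closes its wedges into triangles (the reduce step). This in-memory dictionary version only illustrates the logic; the paper's contribution is the vertex-partitioned, duplicate-free MapReduce formulation.

```python
from collections import defaultdict
from itertools import combinations

# Toy graph: two triangles (1,2,3) and (2,3,4) sharing the edge (2,3).
edges = {(1, 2), (1, 3), (2, 3), (2, 4), (3, 4)}
adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# "map": emit every wedge u-center-w keyed by its ordered end pair (u, w)
wedge_groups = defaultdict(list)
for center, nbrs in adj.items():
    for u, w in combinations(sorted(nbrs), 2):
        wedge_groups[(u, w)].append(center)

# "reduce": an end pair that is itself an edge closes each of its wedges into a triangle
triangles = {tuple(sorted((u, w, c)))
             for (u, w), centers in wedge_groups.items()
             if (u, w) in edges
             for c in centers}
print(triangles)   # {(1, 2, 3), (2, 3, 4)}
```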

  • Analysis of Relevant Quality Metrics and Physical Parameters in Softness Perception and Assessment System

    Zhiyu SHAO  Juan WU  Qiangqiang OUYANG  

     
    PAPER-Rehabilitation Engineering and Assistive Technology

    Publicized: 2019/06/11
    Vol: E102-D No:10
    Page(s): 2013-2024

    Many quality metrics have been proposed for compliance perception to assess the performance of haptic devices and the perceived results. Perceived compliance may be influenced by factors such as object properties, experimental conditions, and human perceptual habits. In this paper, softness perception was analyzed to identify the quality metrics that dominate the compliance perception system and their correlation with perception results, by expressing these metrics in terms of basic physical parameters that characterize these factors. Based on three psychophysical experiments, just noticeable differences (JNDs) for the perceived softness of combinations of different stiffness coefficients and damping levels rendered by haptic devices were analyzed. Data recorded during the interaction process were analyzed as well. Preliminary experimental results show that the discrimination ability of softness perception changes with the ratio of damping to stiffness when subjects explore at their habitual speed. The analysis indicates that the quality metrics of Rate-hardness, Extended Rate-hardness, and the ratio of damping to stiffness correlate strongly with the perceived results. Further analysis shows that parameters reflecting object properties (stiffness, damping), experimental conditions (force bandwidth), and human perceptual habits (initial speed, maximum force change rate) drive the changes in these quality metrics, which then produce different perceptual sensations and finally change discrimination ability. The findings in this paper may provide a better understanding of softness perception and useful guidance for improving haptic and teleoperation devices.

  • Hardware-Based Principal Component Analysis for Hybrid Neural Network Trained by Particle Swarm Optimization on a Chip

    Tuan Linh DANG  Yukinobu HOSHINO  

     
    PAPER-Neural Networks and Bioengineering

    Vol: E102-A No:10
    Page(s): 1374-1382

    This paper presents a hybrid architecture for a neural network (NN) trained by a particle swarm optimization (PSO) algorithm. The NN is implemented on the hardware side, while the PSO is executed by a processor on the software side. In addition, principal component analysis (PCA) is applied to remove correlated information. The PCA module is implemented in hardware in the SystemVerilog hardware description language to increase operating speed. Experimental results showed that the proposed architecture was successfully implemented. In addition, the hardware-based NN trained by PSO (NN-PSO) was faster than the software-based NN trained by PSO. The proposed NN-PSO with PCA also obtained better recognition rates than the NN-PSO without PCA.
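
    A quick software sketch of the PCA stage described above is given below, using NumPy: the input features are projected onto the top-k principal components before being fed to the classifier. This mirrors only the mathematical operation that the paper implements in SystemVerilog hardware; the dimensions and data here are made up.

```python
import numpy as np

def pca_fit(X, k):
    """Return the mean and the top-k principal directions of the rows of X."""
    mu = X.mean(axis=0)
    cov = np.cov(X - mu, rowvar=False)
    _, vecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
    return mu, vecs[:, ::-1][:, :k]        # keep the k leading eigenvectors

def pca_transform(X, mu, W):
    """Project the data onto the principal directions (the PCA preprocessing step)."""
    return (X - mu) @ W

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16))         # toy feature vectors
mu, W = pca_fit(X, k=4)
Z = pca_transform(X, mu, W)                # decorrelated 4-D features for the NN-PSO classifier
print(Z.shape)                             # (200, 4)
```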

  • Device-Free Targets Tracking with Sparse Sampling: A Kronecker Compressive Sensing Approach

    Sixing YANG  Yan GUO  Dongping YU  Peng QIAN  

     
    PAPER

    Publicized: 2019/04/26
    Vol: E102-B No:10
    Page(s): 1951-1959

    We study a device-free (DF) multi-target tracking scheme in this paper. Existing localization and tracking algorithms usually focus on a single target and need to collect a large amount of localization information. In this paper, we exploit the sparsity of the multiple target locations to recover target traces accurately with far fewer samples over both the wireless links and the time slots. The proposed approach consists of a target localization part and a target trace recovery part. In the target localization part, Compressive Sensing (CS) is utilized to reduce the number of wireless links deployed, by exploiting the inherent sparsity of the number of targets. In the target trace recovery part, we exploit the compressibility of the target traces, and design the measurement matrix and the sparse matrix, to reduce the number of samples in the time domain. Additionally, Kronecker Compressive Sensing (KCS) theory is used to simultaneously recover the traces in both the X and Y dimensions. Finally, simulations show that the proposed approach achieves effective recovery performance.
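
    The Kronecker structure that allows joint recovery over the spatial and temporal dimensions rests on the identity vec(A X B^T) = (B kron A) vec(X). The sketch below illustrates that identity with a toy sparse trace matrix and a generic orthogonal matching pursuit recovery; it is my own illustration, not the paper's measurement design or recovery algorithm.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedy recovery of a k-sparse x from y = Phi @ x."""
    residual, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ residual))))
        coef, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ coef
    x = np.zeros(Phi.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((8, 16))            # measurements over wireless links
B = rng.standard_normal((6, 12))            # measurements over time slots
X = np.zeros((16, 12))                      # sparse "trace" matrix (few active locations)
X[3, 2], X[10, 7] = 1.0, -0.5

y = (A @ X @ B.T).flatten(order='F')        # vec(A X B^T); column-major flatten is vec()
Phi = np.kron(B, A)                         # Kronecker measurement matrix (B kron A)
X_hat = omp(Phi, y, 2).reshape(16, 12, order='F')
print(np.abs(X - X_hat).max())              # reconstruction error of the joint recovery
```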

  • Underdetermined Direction of Arrival Estimation Based on Signal Sparsity

    Peng LI  Zhongyuan ZHOU  Mingjie SHENG  Peng HU  Qi ZHOU  

     
    PAPER-Antennas and Propagation

    Publicized: 2019/04/12
    Vol: E102-B No:10
    Page(s): 2066-2072

    An underdetermined direction of arrival estimation method based on signal sparsity is proposed for the case in which independent and coherent signals coexist. First, an estimate of the mixing matrix of the impinging signals is obtained by clustering the single-source points, which are detected from the ratio of the time-frequency transforms of the received signals. Then, each column vector of the mixing matrix is processed in turn by exploiting the forward and backward vectors to obtain the directions of arrival of all signals. The number of independent signals and coherent signal groups that can be estimated by the proposed method can exceed the number of sensors. The validity of the method is demonstrated by simulations.

  • Polarization Filtering Based Transmission Scheme for Wireless Communications

    Zhangkai LUO  Zhongmin PEI  Bo ZOU  

     
    LETTER-Digital Signal Processing

    Vol: E102-A No:10
    Page(s): 1387-1392

    In this letter, a polarization filtering based transmission (PFBT) scheme is proposed to enhance spectrum efficiency in wireless communications. In this scheme, the information is divided into several parts, each of which is conveyed by a polarized signal with a unique polarization state (PS). The polarized signals are then summed and transmitted by a dual-polarized antenna. At the receiver side, oblique projection polarization filters (OPPFs) are adopted to separate the polarized signals so that they can be demodulated individually. We mainly focus on construction methods for the OPPF matrix when the number of separate parts is 2 or 3 and evaluate the performance in terms of capacity and bit error rate. In addition, we discuss the probability of signal separation when the number of separate parts is equal to or greater than 4. Theoretical and simulation results demonstrate the performance of the proposed scheme.
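
    One standard way to build an oblique projection operator that passes one polarization state while nulling another is shown below (the Behrens-Scharf form). It is included only to make the filtering idea concrete; the paper's OPPF matrices for the 2- and 3-stream cases are constructed for its specific signal model.

```python
import numpy as np

def oblique_projection(H, S):
    """Oblique projector onto range(H) along range(S): E @ H = H and E @ S = 0."""
    n = S.shape[0]
    P_S_perp = np.eye(n) - S @ np.linalg.pinv(S.conj().T @ S) @ S.conj().T
    return H @ np.linalg.pinv(H.conj().T @ P_S_perp @ H) @ H.conj().T @ P_S_perp

def ps_vector(gamma, eta):
    """Dual-polarized steering vector of a signal with polarization state (gamma, eta)."""
    return np.array([[np.cos(gamma)], [np.sin(gamma) * np.exp(1j * eta)]])

h1, h2 = ps_vector(0.3, 0.5), ps_vector(1.1, -0.4)   # two distinct polarization states
E1 = oblique_projection(h1, h2)                      # filter that keeps stream 1 only
x = 2.0 * h1 + 0.7 * h2                              # superposed dual-polarized signal
print(np.allclose(E1 @ x, 2.0 * h1))                 # True: the second stream is nulled
```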

  • A Fast Iterative Check Polytope Projection Algorithm for ADMM Decoding of LDPC Codes by Bisection Method Open Access

    Yan LIN  Qiaoqiao XIA  Wenwu HE  Qinglin ZHANG  

     
    LETTER-Information Theory

    Vol: E102-A No:10
    Page(s): 1406-1410

    Linear programming (LP) decoding based on the alternating direction method of multipliers (ADMM) for low-density parity-check (LDPC) codes has lower complexity than the original LP decoding. However, the ADMM-LP decoding algorithm is still limited by the computational complexity of the Euclidean projection onto the parity check polytope. In this paper, we propose a bisection method iterative algorithm (BMIA) for the projection onto the parity check polytope that avoids sorting operations and has linear complexity. In addition, the proposed algorithm converges more than three times as fast as the existing algorithm, and up to ten times as fast for high input dimensions.
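
    The sorting-free flavor of such a projection can be seen on a simpler set, the probability simplex, where the Euclidean projection reduces to finding a scalar threshold by bisection. The sketch below is not the paper's BMIA for the check polytope; it only illustrates how bisection on the Lagrange multiplier replaces the usual sorting step at linear cost per iteration.

```python
import numpy as np

def project_simplex_bisection(v, z=1.0, tol=1e-10, max_iter=100):
    """Euclidean projection of v onto {x >= 0, sum(x) = z} by bisection on the threshold.

    The projection is max(v - tau, 0) for the tau solving sum(max(v - tau, 0)) = z;
    that sum is monotone in tau, so bisection finds tau without sorting v.
    """
    lo, hi = v.min() - z, v.max()              # tau is bracketed by these two values
    for _ in range(max_iter):
        tau = 0.5 * (lo + hi)
        s = np.maximum(v - tau, 0.0).sum()
        if abs(s - z) < tol:
            break
        if s > z:
            lo = tau                           # threshold too small: raise it
        else:
            hi = tau
    return np.maximum(v - tau, 0.0)

v = np.array([0.8, 1.4, -0.3, 0.1])
x = project_simplex_bisection(v)
print(x, x.sum())                              # projection sums to 1
```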

  • A Taxonomy of Secure Two-Party Comparison Protocols and Efficient Constructions

    Nuttapong ATTRAPADUNG  Goichiro HANAOKA  Shinsaku KIYOMOTO  Tomoaki MIMOTO  Jacob C. N. SCHULDT  

     
    PAPER-Cryptography and Information Security

    Vol: E102-A No:9
    Page(s): 1048-1060

    Secure two-party comparison plays a crucial role in many privacy-preserving applications, such as privacy-preserving data mining and machine learning. In particular, the available comparison protocols with the appropriate input/output configuration have a significant impact on the performance of these applications. In this paper, we first describe a taxonomy of secure two-party comparison protocols, which allows us to describe the different configurations used for these protocols in a systematic manner. This taxonomy leads to a total of 216 types of comparison protocols. We then describe conversions among these types. While these conversions are based on known techniques and have been considered explicitly or implicitly before, we show that a combination of these conversion techniques can be used to convert a perhaps less-known two-party comparison protocol by Nergiz et al. (IEEE SocialCom 2010) into a very efficient protocol in a configuration where the two parties hold shares of the values being compared and obtain a share of the comparison result. This setting is often used in multi-party computation protocols, and hence in many privacy-preserving applications as well. We furthermore implement the protocol and measure its performance. Our measurements suggest that the protocol outperforms the previously proposed protocols for this input/output configuration when off-line pre-computation is not permitted.

  • Eye Movement Measurement of Gazing at the Rim of a Column in Stereo Images with Yellow-Blue Equiluminance Random Dots Open Access

    Shinya MOCHIDUKI  Ayaka NUNOMURA  Hiroaki KUDO  Mitsuho YAMADA  

     
    PAPER

    Vol: E102-A No:9
    Page(s): 1196-1204

    We studied the detection of incongruence between the two eyes' retinal images arising from occlusion perception. We previously analyzed the evasive action caused by occlusion using green-red equiluminance, which is processed by parvocellular cells. Here we analyzed this action using yellow-blue equiluminance, which is said to be processed by koniocellular cells and parvocellular cells. We observed that there were cases in which the subject could perceive the incongruence caused by the occlusion and other cases in which the subject could not. Significant differences were not seen in all conditions. Because a difference in the evasive action was seen while gazing at the occluded rim when comparing the results for yellow-blue equiluminance with those for green-red equiluminance, it is suggested that the response to each equiluminance condition is different. We were able to clarify the characteristic difference between parvocellular cells and koniocellular cells from the occlusion experiment.

  • Fast Hyperspectral Unmixing via Reweighted Sparse Regression Open Access

    Hongwei HAN  Ke GUO  Maozhi WANG  Tingbin ZHANG  Shuang ZHANG  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2019/05/28
    Vol: E102-D No:9
    Page(s): 1819-1832

    The sparse unmixing of hyperspectral data has attracted much attention in recent years because it requires neither an estimate of the number of endmembers nor the presence of pure pixels in a given hyperspectral scene. However, the high mutual coherence of spectral libraries strongly affects the practicality of sparse unmixing. The collaborative sparse unmixing via variable splitting and augmented Lagrangian (CLSUnSAL) algorithm is a classic sparse unmixing algorithm that performs better than other sparse unmixing methods. In this paper, we propose a CLSUnSAL-based hyperspectral unmixing method built on dictionary pruning and reweighted sparse regression. First, the algorithm identifies a subset of the original library elements using a dictionary pruning strategy. Second, we present a weighted sparse regression algorithm based on CLSUnSAL to further enhance the sparsity of the endmember spectra in a given library. Third, we apply the weighted sparse regression algorithm to the pruned spectral library. The effectiveness of the proposed algorithm is demonstrated on both simulated and real hyperspectral datasets. For simulated data cubes (DC1, DC2 and DC3), the dictionary pruning reduces the number of spectral library elements by at least 94% and the runtime of the proposed algorithm is less than 10% of that of CLSUnSAL. For simulated DC4 and DC5, the runtime of the proposed algorithm is less than 15% of that of CLSUnSAL. For the real hyperspectral datasets, the pruned spectral library successfully reduces the original dictionary size by 76% and the runtime of the proposed algorithm is 11.21% of that of CLSUnSAL. These experimental results show that our proposed algorithm not only substantially improves the accuracy of the unmixing solutions but is also much faster than some other state-of-the-art sparse unmixing algorithms.
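
    The reweighting idea mentioned above can be sketched with a small iteratively reweighted l1 solver (a plain ISTA inner loop): coefficients that stay small receive larger weights, which drives them to zero and sharpens the abundance sparsity. This is a generic sketch on synthetic data, not the CLSUnSAL-based algorithm or its dictionary-pruning step.

```python
import numpy as np

def reweighted_l1(A, y, lam=0.05, outer=5, inner=500, eps=1e-3):
    """Iteratively reweighted l1 regression with a weighted-ISTA inner solver."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the data-fit gradient
    x, w = np.zeros(A.shape[1]), np.ones(A.shape[1])
    for _ in range(outer):
        for _ in range(inner):                 # weighted soft-thresholding iterations
            z = x - A.T @ (A @ x - y) / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)
        w = 1.0 / (np.abs(x) + eps)            # small coefficients get large weights
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 120))             # toy spectral library (columns = signatures)
x_true = np.zeros(120)
x_true[[5, 40, 77]] = [0.5, 0.3, 0.2]          # three active endmembers
y = A @ x_true + 0.001 * rng.standard_normal(50)
x_hat = reweighted_l1(A, y)
print(np.round(x_hat[[5, 40, 77]], 2))         # estimated abundances at the true indices
```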

  • Pyramid Predictive Attention Network for Medical Image Segmentation Open Access

    Tingxiao YANG  Yuichiro YOSHIMURA  Akira MORITA  Takao NAMIKI  Toshiya NAKAGUCHI  

     
    PAPER

    Vol: E102-A No:9
    Page(s): 1225-1234

    In this paper, we propose a Pyramid Predictive Attention Network (PPAN) for medical image segmentation. In the medical field, the size of available datasets generally restricts the performance of deep CNNs, and there is a strong expectation that trained networks, despite their large number of parameters, can be deployed on terminal devices with limited memory. Our team targets future home medical diagnosis and therefore seeks a lightweight medical image segmentation network. We designed PPAN mainly from Xception blocks, which are modified from DeepLab v3+ and consist of depthwise separable convolutions that speed up computation and reduce the number of parameters. Meanwhile, pyramid predictions from each dimensional stage guide the network and make the training process easier to optimize towards the final segmentation target without degrading performance. The IoU metric is used for evaluation on the test dataset. We compared the performance of our network with that of current state-of-the-art segmentation networks on our RGB tongue dataset, which was captured by the TIAS system developed for tongue diagnosis. Our network reduces the number of parameters by 80% compared to U-Net, the network most widely used in medical image segmentation, while achieving similar or better performance. Any terminal with limited storage that needs to segment RGB images can therefore adopt the proposed PPAN.
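
    The building block behind the Xception-style design mentioned above is the depthwise separable convolution: a per-channel 3x3 convolution followed by a 1x1 pointwise convolution, which is what cuts the parameter count. The PyTorch sketch below shows only this generic block, not the authors' PPAN architecture or its pyramid prediction heads.

```python
import torch
import torch.nn as nn

class SeparableConvBlock(nn.Module):
    """Depthwise 3x3 convolution followed by a pointwise 1x1 convolution."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size=3,
                                   padding=1, groups=in_ch, bias=False)
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(self.bn(self.pointwise(self.depthwise(x))))

x = torch.randn(1, 32, 64, 64)                      # N x C x H x W feature map
print(SeparableConvBlock(32, 64)(x).shape)          # torch.Size([1, 64, 64, 64])
```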

  • Exploiting Packet-Level Parallelism of Packet Parsing for FPGA-Based Switches

    Junnan LI  Biao HAN  Zhigang SUN  Tao LI  Xiaoyan WANG  

     
    PAPER-Transmission Systems and Transmission Equipment for Communications

    Publicized: 2019/03/18
    Vol: E102-B No:9
    Page(s): 1862-1874

    FPGA-based switches are appealing nowadays because of their balance between hardware performance and software flexibility. The packet parser, the foundational component of an FPGA-based switch, identifies and extracts the specific fields used in forwarding decisions, e.g., the destination IP address. However, traditional parsers are too rigid to accommodate new protocols. In addition, FPGAs usually have a much lower clock frequency and fewer hardware resources than ASICs. In this paper, we present PLANET, a programmable packet-level parallel parsing architecture for FPGA-based switches, to overcome these two limitations. First, PLANET offers flexible programmability, allowing parsing algorithms to be updated at run-time. Second, PLANET heavily exploits parallelism inside packet parsing to compensate for the FPGA's low clock frequency, and reduces resource consumption with a one-block recycling design. We implemented PLANET on an FPGA-based switch prototype with well-integrated datacenter protocols. Evaluation results show that our design can parse packets at up to 100 Gbps, while maintaining relatively low parsing latency and using fewer hardware resources than existing proposals.
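
    To make the parse-graph abstraction concrete, the toy Python model below walks a packet through a small Ethernet-to-IPv4 graph, extracting header fields and choosing the next node from a lookup on an extracted field. PLANET's contribution is how such a graph is compiled into parallel, run-time-updatable FPGA stages; the model and node layout below are my own illustration.

```python
# Each node names the fields to extract (offset, length within the header)
# and how to select the next node from one of those fields.
PARSE_GRAPH = {
    'ethernet': {'hdr_len': 14,
                 'fields': {'eth_type': (12, 2)},
                 'select': 'eth_type',
                 'next': {0x0800: 'ipv4'}},
    'ipv4':     {'hdr_len': 20,
                 'fields': {'protocol': (9, 1), 'dst_ip': (16, 4)},
                 'select': None,
                 'next': {}},
}

def parse(packet):
    state, offset, out = 'ethernet', 0, {}
    while state is not None:
        node = PARSE_GRAPH[state]
        for name, (pos, length) in node['fields'].items():
            out[name] = int.from_bytes(packet[offset + pos: offset + pos + length], 'big')
        nxt = node['next'].get(out[node['select']]) if node['select'] else None
        offset += node['hdr_len']
        state = nxt
    return out

# Minimal Ethernet + IPv4 frame: EtherType 0x0800, protocol 6 (TCP), dst IP 10.0.0.1
pkt = bytes(12) + b'\x08\x00' + bytes(9) + b'\x06' + bytes(6) + b'\x0a\x00\x00\x01'
print(parse(pkt))
```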

  • Compressed Sensing in Magnetic Resonance Imaging Using Non-Randomly Under-Sampled Signal in Cartesian Coordinates

    Ryo KAZAMA  Kazuki SEKINE  Satoshi ITO  

     
    PAPER-Biological Engineering

    Publicized: 2019/05/31
    Vol: E102-D No:9
    Page(s): 1851-1859

    Image quality depends on the randomness of the k-space signal under-sampling in compressed sensing MRI (CS-MRI), especially for two-dimensional image acquisition. We investigate the feasibility of non-random signal under-sampling CS-MRI to stabilize the quality of reconstructed images and avoid arbitrariness in sampling point selection. Regular signal under-sampling in the phase-encoding direction is adopted, in which sampling points are chosen at equal intervals in the phase-encoding direction while the sampling density is varied. The curvelet transform was adopted to remove the aliasing artifacts caused by regular signal under-sampling. To increase the incoherence between the measurement matrix and the sparsifying transform function, the scale of the curvelet transform was varied in each iterative image reconstruction step. We evaluated the obtained images by the peak signal-to-noise ratio and the root mean squared error in localized 3×3 pixel regions. Simulation studies and experiments showed that the signal-to-noise ratio and the structural similarity index of the reconstructed images were comparable to those of standard random under-sampling CS. This study demonstrated the feasibility of non-random under-sampling based CS using the multi-scale curvelet transform as a sparsifying transform function. The technique may help to stabilize the obtained image quality in CS-MRI.
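
    The non-random sampling pattern described above can be pictured with a small mask generator: phase-encode lines taken at equal intervals plus a fully sampled low-frequency band. The density schedule below is my own simple choice for illustration; the paper varies the sampling density in its own way.

```python
import numpy as np

def regular_undersampling_mask(n_pe, n_ro, accel=4, center_frac=0.125):
    """Non-random k-space mask: equally spaced phase-encode lines plus a dense centre band."""
    mask = np.zeros((n_pe, n_ro), dtype=bool)
    mask[::accel, :] = True                            # every accel-th phase-encode line
    c, half = n_pe // 2, int(n_pe * center_frac) // 2
    mask[c - half:c + half, :] = True                  # fully sample the low frequencies
    return mask

mask = regular_undersampling_mask(256, 256)
print(f"sampling rate: {mask.mean():.2f}")             # fraction of k-space actually acquired
```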

  • Multi-Party Computation for Modular Exponentiation Based on Replicated Secret Sharing

    Kazuma OHARA  Yohei WATANABE  Mitsugu IWAMOTO  Kazuo OHTA  

     
    PAPER-Cryptography and Information Security

    Vol: E102-A No:9
    Page(s): 1079-1090

    In recent years, multi-party computation (MPC) frameworks based on replicated secret sharing schemes (RSSS) have attracted attention as a way to achieve high efficiency among known MPCs. However, RSSS-based MPCs are still inefficient for several heavy computations such as algebraic operations, as they require an amount and a number of communications proportional to the number of multiplications in the operation (which is not the case with other secret-sharing-based MPCs). In this paper, we propose RSSS-based three-party computation protocols for modular exponentiation, one of the most popular algebraic operations, for the case where the base is public and the exponent is private. Our proposed schemes are simple and efficient in both the asymptotic and practical senses. Regarding asymptotic efficiency, the proposed schemes require O(n)-bit communication and O(1) rounds, where n is the secret-value size, in the best setting, whereas the previous scheme requires O(n^2)-bit communication and O(n) rounds. Regarding practical efficiency, we show the performance of our protocol through experiments on a distributed-signature scenario, which is useful for secure key management in distributed environments (e.g., distributed ledgers). In one case, our implementation performs a modular exponentiation on a 3,072-bit discrete-log group with a 256-bit exponent in roughly 300 ms, an acceptable parameter for 128-bit security, even in the WAN setting.
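
    Why a public base makes shared exponentiation cheap can be seen from the fact that g^(x1+x2+x3) = g^x1 * g^x2 * g^x3: each party can exponentiate its own share locally. The sketch below uses plain additive sharing over a toy group for illustration only; it is not the paper's (2,3)-replicated secret sharing protocol, its share conversions, or its communication pattern.

```python
import secrets

p = 2**127 - 1                  # toy prime modulus (illustration only, not a secure parameter)
q = p - 1                       # exponents live modulo the group order
g = 3

x = secrets.randbelow(q)                              # the private exponent
x1, x2 = secrets.randbelow(q), secrets.randbelow(q)
x3 = (x - x1 - x2) % q                                # additive shares: x1 + x2 + x3 = x (mod q)

partials = [pow(g, share, p) for share in (x1, x2, x3)]   # each party computes g^share locally
combined = partials[0] * partials[1] * partials[2] % p    # the product reconstructs g^x
print(combined == pow(g, x, p))                           # True
```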

  • A Space-Efficient Separator Algorithm for Planar Graphs

    Ryo ASHIDA  Sebastian KUHNERT  Osamu WATANABE  

     
    PAPER-Graph algorithms

    Vol: E102-A No:9
    Page(s): 1007-1016

    Miller [9] proposed a linear-time algorithm for computing small separators of 2-connected planar graphs. We explain his algorithm and present a way to modify it into a space-efficient version. Our algorithm can be regarded as a log-space reduction from separator construction to breadth-first search tree construction.

  • A Fast Cross-Validation Algorithm for Kernel Ridge Regression by Eigenvalue Decomposition

    Akira TANAKA  Hideyuki IMAI  

     
    LETTER-Numerical Analysis and Optimization

    Vol: E102-A No:9
    Page(s): 1317-1320

    A fast cross-validation algorithm for model selection in kernel ridge regression is proposed, which aims to further reduce the computational cost of the algorithm proposed by An et al. by means of the eigenvalue decomposition of a Gram matrix.
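
    A common way such a speed-up works is that one eigenvalue decomposition of the Gram matrix makes the leave-one-out residuals cheap for every candidate regularization parameter. The sketch below shows that generic shortcut for kernel ridge regression; it is not necessarily the exact algorithm of the letter or of An et al.

```python
import numpy as np

def loocv_krr(K, y, lambdas):
    """Leave-one-out CV scores for kernel ridge regression from one eigendecomposition.

    With K = V diag(d) V^T, the hat matrix for each lambda is
    H = V diag(d / (d + lambda)) V^T, and the LOO residual of sample i is
    (y_i - yhat_i) / (1 - H_ii), so every lambda costs only O(n^2) extra work.
    """
    d, V = np.linalg.eigh(K)                       # decompose the Gram matrix once
    Vy = V.T @ y
    scores = []
    for lam in lambdas:
        s = d / (d + lam)
        yhat = V @ (s * Vy)
        h_diag = np.einsum('ij,j,ij->i', V, s, V)  # diagonal of the hat matrix
        scores.append(np.mean(((y - yhat) / (1.0 - h_diag)) ** 2))
    return np.array(scores)

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 1))
y = np.sin(2 * X[:, 0]) + 0.1 * rng.standard_normal(60)
K = np.exp(-0.5 * (X - X.T) ** 2)                  # Gaussian kernel Gram matrix
lambdas = np.logspace(-4, 1, 6)
print(lambdas[np.argmin(loocv_krr(K, y, lambdas))])   # lambda with the best LOO score
```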
