
Keyword Search Result

[Keyword] OMP(3945hit)

881-900hit(3945hit)

  • One-bit Matrix Compressed Sensing Algorithm for Sparse Matrix Recovery

    Hui WANG  Sabine VAN HUFFEL  Guan GUI  Qun WAN  

     
    LETTER-Digital Signal Processing

      Vol:
    E99-A No:2
      Page(s):
    647-650

    This paper studies the problem of recovering an arbitrarily distributed sparse matrix from its one-bit (1-bit) compressive measurements. We propose a matrix-sketching-based binary iterative hard thresholding (MSBIHT) algorithm, which combines the two-dimensional version of BIHT (2DBIHT) with the matrix sketching method, to solve the sparse matrix recovery problem in matrix form. In contrast to traditional one-dimensional binary iterative hard thresholding (BIHT), the proposed algorithm reduces computational complexity. In addition, MSBIHT also improves the recovery performance compared to the 2DBIHT method. A brief theoretical analysis and numerical experiments show that the proposed algorithm outperforms traditional ones.
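
    For orientation, a minimal one-dimensional binary iterative hard thresholding (BIHT) recovery sketch in Python is shown below. It illustrates only the 1-bit sign-measurement model that MSBIHT builds on, not the proposed matrix-sketching algorithm; all names and parameter values are illustrative assumptions.

        import numpy as np

        def biht(y_sign, A, k, n_iter=200, tau=0.1):
            """Recover a k-sparse vector from 1-bit measurements y_sign = sign(A @ x)."""
            m, n = A.shape
            x = np.zeros(n)
            for _ in range(n_iter):
                # Gradient step on the sign-consistency residual.
                x = x - tau * (A.T @ (np.sign(A @ x) - y_sign))
                # Hard thresholding: keep only the k largest-magnitude entries.
                small = np.argsort(np.abs(x))[:-k]
                x[small] = 0.0
            norm = np.linalg.norm(x)
            return x / norm if norm > 0 else x   # amplitude is lost in 1-bit sensing

        rng = np.random.default_rng(0)
        n, m, k = 128, 400, 5
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n))
        x_hat = biht(np.sign(A @ x_true), A, k)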

  • Performance of an Inline RZ-DPSK Pulse Compression Using Raman Amplifier and Its Application in OTDM Tributary

    Quynh NGUYEN QUANG NHU  Hung NGUYEN TAN  Quang NGUYEN-THE  Motoharu MATSUURA  Naoto KISHI  

     
    PAPER

      Vol:
    E99-C No:2
      Page(s):
    227-234

    We experimentally investigate the performance of a distributed Raman amplifier (DRA)-based pulse compressor for a phase-modulated signal. A 10 Gb/s return-to-zero (RZ) differential phase shift keying (DPSK) signal is compressed to the picosecond range after transmission. The pulsewidth is continuously compressed over a wide range from 20 to 3.2 ps by changing the pump power of the DRA, while the compressed waveforms remain well matched to a sech² function. Error-free operation at a bit-error rate (BER) of 10⁻⁹ is achieved for the compressed signals of various pulsewidths, with low power penalties within 2.3 dB compared to the back-to-back case. After compression, the 10 Gb/s signal is used to generate a 40 Gb/s RZ-DPSK optical time division multiplexing (OTDM) signal. This 40 Gb/s OTDM signal is then successfully demultiplexed to a 10 Gb/s DPSK signal using an optical gate based on four-wave mixing (FWM) in a highly nonlinear fiber (HNLF).

  • A Fast Quantum Computer Simulator Based on Register Reordering

    Masaki NAKANISHI  Miki MATSUYAMA  Yumi YOKOO  

     
    PAPER-Computer System

      Publicized:
    2015/11/19
      Vol:
    E99-D No:2
      Page(s):
    332-340

    Quantum computer simulators play an important role when we evaluate quantum algorithms. Quantum computation can be regarded as parallel computation in some sense, so it is natural to implement a simulator on hardware that can process many operations in parallel. In this paper, we propose a hardware quantum computer simulator. The proposed simulator is based on the register reordering method, which shifts and swaps registers containing probability amplitudes so that the probability amplitudes of target basis states can be quickly selected. This reduces the number of large multiplexers and improves the clock frequency. We implement the simulator on an FPGA. Experiments show that the proposed simulator is scalable in the number of quantum bits and can simulate quantum algorithms faster than software simulators.
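
    For context, the software statevector sketch below shows how applying a gate amounts to selecting and recombining the probability amplitudes of target basis states, which is the operation the proposed hardware speeds up by register reordering. It is a plain Python/NumPy illustration, not the FPGA design described in the paper.

        import numpy as np

        def apply_single_qubit_gate(state, gate, target, n_qubits):
            """Apply a 2x2 gate to one qubit of an n-qubit statevector (length 2**n_qubits)."""
            # Reshape so the amplitudes that differ only in the target bit share an axis;
            # this is the amplitude selection that register reordering performs in hardware.
            psi = state.reshape(2 ** (n_qubits - target - 1), 2, 2 ** target)
            psi = np.einsum('ab,ibj->iaj', gate, psi)
            return psi.reshape(-1)

        H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)      # Hadamard gate
        state = np.zeros(2 ** 3, dtype=complex)
        state[0] = 1.0                                     # |000>
        state = apply_single_qubit_gate(state, H, target=0, n_qubits=3)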

  • The Controllability of Power Grids in Comparison with Classical Complex Network Models

    Yi-Jia ZHANG  Zhong-Jian KANG  Xin-Feng LI  Zhe-Ming LU  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2015/10/20
      Vol:
    E99-D No:1
      Page(s):
    279-282

    The controllability of complex networks has attracted increasing attention in various scientific fields. Many power grids are complex networks with common topological characteristics such as small-world and scale-free features. This Letter investigates the controllability of several real power grids in comparison with classical complex network models having the same number of nodes. Several conclusions are drawn from detailed analyses of several real power grids together with Erdős-Rényi (ER) random networks, Watts-Strogatz (WS) small-world networks, Barabási-Albert (BA) scale-free networks, and configuration model (CM) networks. The main conclusion is that most driver nodes of power grids are hub-free nodes with low nodal degrees of 1 or 2. The controllability of power grids is determined by degree distribution and heterogeneity: power grids are harder to control than WS networks and CM networks but easier than BA networks. Some power grids are relatively difficult to control because they require a far higher ratio of driver nodes than ER networks, whereas other power grids are easier to control because they require a driver-node ratio less than or equal to that of ER random networks.
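
    As a reminder of how driver nodes are counted, the sketch below uses the maximum-matching formulation of structural controllability (Liu et al.) with networkx. The toy chain graph is an illustrative assumption, not one of the power grids analyzed in the Letter.

        import networkx as nx

        def num_driver_nodes(digraph):
            """Minimum number of driver nodes: N_D = max(N - |maximum matching|, 1)."""
            B = nx.Graph()
            out_nodes = [('out', v) for v in digraph.nodes]
            B.add_nodes_from(out_nodes, bipartite=0)
            B.add_nodes_from((('in', v) for v in digraph.nodes), bipartite=1)
            B.add_edges_from((('out', u), ('in', v)) for u, v in digraph.edges)
            matching = nx.bipartite.hopcroft_karp_matching(B, top_nodes=out_nodes)
            matched_edges = len(matching) // 2        # the dict stores both directions
            return max(digraph.number_of_nodes() - matched_edges, 1)

        chain = nx.DiGraph([(1, 2), (2, 3), (3, 4)])  # a directed chain needs one driver node
        print(num_driver_nodes(chain))                # -> 1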

  • A Collision Attack on a Double-Block-Length Compression Function Instantiated with 8-/9-Round AES-256

    Jiageng CHEN  Shoichi HIROSE  Hidenori KUWAKADO  Atsuko MIYAJI  

     
    PAPER

      Vol:
    E99-A No:1
      Page(s):
    14-21

    This paper presents the first non-trivial collision attack on the double-block-length compression function presented at FSE 2006 instantiated with round-reduced AES-256: f_0(h_0||h_1, M) || f_1(h_0||h_1, M) such that f_0(h_0||h_1, M) = E_{h_1||M}(h_0) ⊕ h_0 and f_1(h_0||h_1, M) = E_{h_1||M}(h_0 ⊕ c) ⊕ h_0 ⊕ c, where || represents concatenation, E is AES-256 and c is a 16-byte non-zero constant. The proposed attack is a free-start collision attack using the rebound attack proposed by Mendel et al. The success of the proposed attack largely depends on the configuration of the constant c: the number of its non-zero bytes and their positions. For the instantiation with AES-256 reduced from 14 rounds to 8 rounds, it is effective if the constant c has at most four non-zero bytes at some specific positions, and the time complexity is 2^64 or 2^96. For the instantiation with AES-256 reduced to 9 rounds, it is effective if the constant c has four non-zero bytes at some specific positions, and the time complexity is 2^120. The space complexity is negligible in both cases.

  • Turbidity Underwater Image Restoration Using Spectral Properties and Light Compensation

    Huimin LU  Yujie LI  Shota NAKASHIMA  Seiichi SERIKAWA  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2015/10/20
      Vol:
    E99-D No:1
      Page(s):
    219-227

    Absorption, scattering, and color distortion are three major issues in underwater optical imaging. Light rays traveling through water are scattered and absorbed according to their wavelength. Scattering is caused by large suspended particles that degrade underwater optical images. Color distortion occurs because different wavelengths are attenuated to different degrees in water; consequently, images of ambient underwater environments are dominated by a bluish tone. In the present paper, we propose a novel underwater imaging model that compensates for the attenuation discrepancy along the propagation path. In addition, we develop a fast weighted guided normalized convolution domain filtering algorithm for enhancing underwater optical images. The enhanced images are characterized by a reduced noise level, better exposure in dark regions, and improved global contrast, and the finest details and edges are enhanced significantly.
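
    As a rough, generic illustration of compensating wavelength-dependent attenuation (a simple gray-world channel rebalance, not the authors' imaging model or their weighted guided normalized convolution filter), one might start from something like the following sketch.

        import numpy as np

        def gray_world_compensate(img):
            """Rebalance RGB channels so each channel mean matches the global mean.

            img: float array in [0, 1] with shape (H, W, 3). This is a crude stand-in
            for compensating the stronger attenuation of long wavelengths under water.
            """
            channel_means = img.reshape(-1, 3).mean(axis=0)
            gains = channel_means.mean() / np.maximum(channel_means, 1e-6)
            return np.clip(img * gains, 0.0, 1.0)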

  • Ontology Based Framework for Interactive Self-Assessment of e-Health Applications Open Access

    Wasin PASSORNPAKORN  Sinchai KAMOLPHIWONG  

     
    INVITED PAPER

      Publicized:
    2015/10/21
      Vol:
    E99-D No:1
      Page(s):
    2-9

    Personal e-healthcare services are growing significantly. A large number of personal e-health measuring and monitoring devices are now on the market. However, to achieve better health outcomes, various devices or services need to work together. This coordination among services remains a challenge because of their variety and complexity. To address this issue, we have proposed an ontology-based framework for interactive self-assessment of RESTful e-health services. Unlike existing e-health service frameworks, in which services are tightly coupled and data schemas are difficult to change and extend, our work achieves loose coupling among services and flexibility of each service through a design and implementation based on the HYDRA vocabulary and REST principles. We have implemented clinical knowledge through a combination of OWL-DL and SPARQL rules. All of these services evolve independently; their interfaces are based on REST principles, especially the HATEOAS constraint. We have demonstrated how to apply our framework for interactive self-assessment in e-health applications, and have shown that it allows medical knowledge to drive the system workflow according to event-driven principles. New data schemas can be maintained at run-time. This is an essential feature for supporting the arrival of IoT (Internet of Things)-based medical devices, which have their own data schemas and evolve over time.

  • The Optimal MMSE-Based OSIC Detector for MIMO System

    Yunchao SONG  Chen LIU  Feng LU  

     
    PAPER-Wireless Communication Technologies

      Vol:
    E99-B No:1
      Page(s):
    232-239

    The ordered successive interference cancellation (OSIC) detector based on the minimum mean square error (MMSE) criterion has proved to be a low-complexity detector with good bit error rate (BER) performance. The best-known MMSE-based OSIC detector, the MMSE-based vertical Bell Laboratories Layered Space-Time (VBLAST) detector, has cubic computational complexity but cannot attain the minimum BER. Several approaches to reducing the BER of the MMSE-based VBLAST detector have been proposed; however, these improvements incur large computational complexity. In this paper, a low-complexity MMSE-based OSIC detector called MMSE-OBEP (ordering based on error probability) is proposed to improve the BER performance of previous MMSE-based OSIC detectors, and it retains cubic complexity. The proposed detector derives the near-exact error probability of each symbol in the MMSE-based OSIC detector; giving priority to detecting the symbol with the smallest error probability minimizes error propagation and enhances the BER performance. We show that, although the computational complexity of the proposed detector is cubic, it provides better BER performance than previous MMSE-based OSIC detectors.
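
    A compact NumPy sketch of a generic MMSE-based OSIC detector is given below for context. The ordering here uses the per-stream MMSE error variance as a stand-in for the per-symbol error probability; it is not the exact MMSE-OBEP ordering derived in the paper, and the QPSK constellation is an illustrative assumption.

        import numpy as np

        def mmse_osic(y, H, noise_var, constellation):
            """Detect x from y = H x + n by MMSE filtering with ordered cancellation."""
            y = y.astype(complex)
            n_tx = H.shape[1]
            remaining = list(range(n_tx))
            x_hat = np.zeros(n_tx, dtype=complex)
            for _ in range(n_tx):
                Hr = H[:, remaining]
                Minv = np.linalg.inv(Hr.conj().T @ Hr + noise_var * np.eye(len(remaining)))
                G = Minv @ Hr.conj().T                 # MMSE filter for remaining streams
                err = np.real(np.diag(Minv))           # proportional to per-stream MSE
                j = int(np.argmin(err))                # detect the most reliable stream first
                z = G[j] @ y
                s = constellation[np.argmin(np.abs(constellation - z))]   # hard slicing
                k = remaining.pop(j)
                x_hat[k] = s
                y = y - H[:, k] * s                    # cancel the detected symbol
            return x_hat

        qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)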

  • Wavelet Pyramid Based Multi-Resolution Bilateral Motion Estimation for Frame Rate Up-Conversion

    Ran LI  Hongbing LIU  Jie CHEN  Zongliang GAN  

     
    PAPER-Image Processing and Video Processing

      Publicized:
    2015/06/03
      Vol:
    E99-D No:1
      Page(s):
    208-218

    The conventional bilateral motion estimation (BME) for motion-compensated frame rate up-conversion (MC-FRUC) avoids the problem of overlapped areas and holes but usually produces many inaccurate motion vectors (MVs), because 1) the MV of an object between the previous and following frames often lacks temporal symmetry with respect to the target block of the interpolated frame, and 2) repetitive patterns in a video frame lead to mismatches since the interpolated block itself is unavailable. In this paper, a new BME algorithm with low computational complexity is proposed to resolve these problems. The proposed algorithm incorporates multi-resolution search into BME, since it can easily exploit the MV consistency between two adjacent pyramid levels and spatially neighboring MVs to correct the inaccurate MVs caused by the lack of temporal symmetry while keeping the computational cost low. Moreover, the multi-resolution search uses the fast wavelet transform to construct the wavelet pyramid, which not only guarantees low computational complexity but also preserves the high-frequency components of the image at each level during sub-sampling. The high-frequency components are used to regularize the traditional block-matching criterion, reducing the probability of mismatch in BME. Experiments show that the proposed algorithm significantly improves both the objective and subjective quality of the interpolated frame with low computational complexity and provides better performance than existing BME algorithms.
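
    A minimal sketch of building such a wavelet pyramid with PyWavelets is shown below; the bilateral motion estimation itself is omitted. The use of pywt and the Haar wavelet are assumptions for illustration, not choices stated in the paper.

        import numpy as np
        import pywt

        def wavelet_pyramid(frame, levels=3, wavelet='haar'):
            """Return a list [(approx, (cH, cV, cD)), ...] from finest to coarsest level.

            The approximation images support coarse-to-fine block matching, while the
            detail (high-frequency) coefficients can regularize the matching criterion.
            """
            pyramid = []
            approx = np.asarray(frame, dtype=float)
            for _ in range(levels):
                approx, details = pywt.dwt2(approx, wavelet)
                pyramid.append((approx, details))
            return pyramid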

  • Using Bregman Divergence Regularized Machine for Comparison of Molecular Local Structures

    Raissa RELATOR  Nozomi NAGANO  Tsuyoshi KATO  

     
    LETTER-Artificial Intelligence, Data Mining

      Publicized:
    2015/10/06
      Vol:
    E99-D No:1
      Page(s):
    275-278

    Although many 3D protein structures have been solved to date, the functions of some proteins remain unknown. To predict protein functions, local structures of proteins are widely compared with pre-defined model structures whose functions have been elucidated. For this comparison, the root mean square deviation (RMSD) has been used as the conventional index. In this work, an adaptive deviation is incorporated, together with the Bregman Divergence Regularized Machine, to detect local structures analogous to such model structures more effectively than the conventional index.
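
    For reference, the conventional RMSD index mentioned here can be computed after optimal superposition (Kabsch alignment) as in the short NumPy sketch below; the adaptive deviation and the Bregman Divergence Regularized Machine of the proposed method are not shown.

        import numpy as np

        def kabsch_rmsd(P, Q):
            """RMSD between two (N, 3) coordinate sets after optimal rigid alignment."""
            P = P - P.mean(axis=0)
            Q = Q - Q.mean(axis=0)
            U, _, Vt = np.linalg.svd(P.T @ Q)
            d = np.sign(np.linalg.det(U @ Vt))            # guard against reflections
            R = U @ np.diag([1.0, 1.0, d]) @ Vt            # optimal rotation (transposed form)
            diff = P @ R - Q
            return np.sqrt((diff ** 2).sum() / len(P))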

  • Model-Based Compressive Sensing Applied to Landmine Detection by GPR Open Access

    Riafeni KARLINA  Motoyuki SATO  

     
    PAPER

      Vol:
    E99-C No:1
      Page(s):
    44-51

    We propose an effective technique for estimating targets with ground penetrating radar (GPR) using model-based compressive sensing (CS). We demonstrate the technique's performance by applying it to the detection of buried landmines. Conventional CS algorithms enable the reconstruction of sparse subsurface images from greatly reduced measurements by exploiting their sparsity. For landmine detection, however, CS faces some challenges because a landmine is not exactly a point target and because of the high-level clutter arising from propagation in the medium. By exploiting the physical characteristics of the landmine through model-based CS, the probability of landmine detection can be increased. With a small pixel size, the landmine reflection in the image is represented by several pixels grouped in a three-dimensional plane. This block structure can be used in model-based CS processing for imaging buried landmines. An evaluation using laboratory data and datasets obtained from an actual minefield in Cambodia shows that model-based CS reconstructs landmine images better than conventional CS.
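
    Since block sparsity is the structural prior exploited here, a generic block orthogonal matching pursuit (Block-OMP) sketch is given below. It illustrates recovering grouped coefficients jointly, but it is not the authors' model-based CS reconstruction, and the equal, consecutive block layout is an assumption.

        import numpy as np

        def block_omp(y, A, block_size, n_blocks_to_pick):
            """Greedy recovery of a block-sparse x from y = A x (consecutive equal blocks)."""
            m, n = A.shape
            blocks = [np.arange(i, min(i + block_size, n)) for i in range(0, n, block_size)]
            chosen = []
            residual = y.copy()
            for _ in range(n_blocks_to_pick):
                # Select the block whose columns correlate most strongly with the residual.
                scores = [np.linalg.norm(A[:, b].conj().T @ residual) for b in blocks]
                chosen.append(int(np.argmax(scores)))
                support = np.concatenate([blocks[i] for i in sorted(set(chosen))])
                x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
                residual = y - A[:, support] @ x_s
            x = np.zeros(n, dtype=complex)
            x[support] = x_s
            return x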

  • Wideband Power Spectrum Sensing and Reconstruction Based on Single Channel Sub-Nyquist Sampling

    Weichao SUN  Zhitao HUANG  Fenghua WANG  Xiang WANG  Shaoyi XIE  

     
    PAPER

      Vol:
    E99-A No:1
      Page(s):
    167-176

    A major challenge in wideband spectrum sensing, for example in cognitive radio systems, is the requirement of a high sampling rate, which may exceed the front-end bandwidths of today's best analog-to-digital converters (ADCs). Compressive sampling is an attractive way to reduce the sampling rate. The recently proposed modulated wideband converter (MWC) is one of the most successful compressive sampling hardware architectures, but it has high hardware complexity owing to its parallel-channel structure. In this paper, we design a single-channel sub-Nyquist sampling scheme that brings substantial savings in terms of not only sampling rate but also hardware complexity, and we also present a wideband power spectrum sensing and reconstruction method for bandlimited wide-sense stationary (WSS) signals. The total sampling rate equals the rate of only a single MWC channel. We evaluate the performance of the sensing model by computing the probability of detecting signal occupancy in terms of the signal-to-noise ratio (SNR) and other practical parameters. Simulation results underline the promising performance of the proposed approach.

  • A Refined Estimator of Multicomponent Third-Order Polynomial Phase Signals

    GuoJian OU  ShiZhong YANG  JianXun DENG  QingPing JIANG  TianQi ZHANG  

     
    PAPER-Fundamental Theories for Communications

      Vol:
    E99-B No:1
      Page(s):
    143-151

    This paper describes a fast and effective algorithm for refining the parameter estimates of multicomponent third-order polynomial phase signals (PPSs). The efficiency of the proposed algorithm is accompanied by a lower signal-to-noise ratio (SNR) threshold and lower computational complexity. A two-step procedure is used to estimate the parameters of multicomponent third-order PPSs. In the first step, initial estimates of the phase parameters are obtained using the fast Fourier transform (FFT), the k-means algorithm, and three time positions. In the second step, these initial estimates are refined by a simple moving-average filter and singular value decomposition (SVD). The SNR threshold of the proposed algorithm is lower than those of the non-linear least squares (NLS) method and the estimation refinement method even though it uses a simple moving-average filter. In addition, the proposed method has significantly lower complexity than computationally intensive NLS methods. Simulations confirm the effectiveness of the proposed method.
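
    As a point of comparison, the classical high-order ambiguity function (HAF) estimate of the cubic coefficient of a single, noise-free third-order PPS is sketched below. The two-step method of the paper (FFT and k-means initialization followed by moving-average/SVD refinement) and the multicomponent case are not reproduced; the signal parameters are illustrative.

        import numpy as np

        # Single third-order PPS: s(n) = exp(j*2*pi*(a1*n + a2*n**2 + a3*n**3)).
        N, tau = 1024, 64
        a1, a2, a3 = 0.05, 5e-5, 1e-7
        n = np.arange(N)
        s = np.exp(2j * np.pi * (a1 * n + a2 * n ** 2 + a3 * n ** 3))

        # Two lag products reduce the cubic phase to a single complex sinusoid
        # whose frequency is 6 * a3 * tau**2 cycles/sample.
        x2 = s[tau:] * np.conj(s[:-tau])
        x3 = x2[tau:] * np.conj(x2[:-tau])
        spec = np.abs(np.fft.fft(x3, 8 * len(x3)))         # zero-padded for a finer grid
        freqs = np.fft.fftfreq(8 * len(x3))
        a3_hat = freqs[np.argmax(spec)] / (6 * tau ** 2)    # close to a3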

  • New Current-Mode Multipliers by CNTFET-Based n-Valued Binary Converters

    Mona MORADI  Reza FAGHIH MIRZAEE  Keivan NAVI  

     
    PAPER-Electronic Circuits

      Vol:
    E99-C No:1
      Page(s):
    100-107

    This paper presents new Binary Converters (or current-mode compressors) based on carbon nanotube field-effect transistors. The new designs consist of three parts: 1) conversion of the input currents to voltage; 2) threshold detectors; and 3) output current-flow paths. An 8×8-bit multiplier is used as a benchmark to estimate their efficiency. The first approach is based on high-order Binary Converters, and the second is composed only of 4BCs and Half Adders.

  • Optimization of Multicast Delivery for Threshold Secret Shared Content

    Nagao OGINO  Yuto NAKAMURA  Shigehiro ANO  

     
    PAPER-Network

      Vol:
    E98-B No:12
      Page(s):
    2419-2430

    A threshold secret sharing scheme can realize reliable delivery of important content using redundant routes through a network. Furthermore, multicast delivery of threshold secret shared content can achieve efficient resource utilization thanks to the application of multicast and network coding techniques to multiple pieces of the content. Nevertheless, a tradeoff exists between reliability and efficiency if multicast content delivery uses network coding. This paper proposes a flexible multicast delivery scheme for threshold secret shared content that can control the tradeoff between reliability and efficiency. The proposed scheme classifies all the pieces obtained from the original content into multiple groups, and each group is subjected to network coding independently. An optimization procedure is proposed for the multicast delivery scheme, which involves two different heuristic delivery route computation methods applicable to large-scale networks. Evaluation results show that the optimized multicast delivery scheme adopting an appropriate grouping method and classifying the pieces into a suitable number of groups can minimize the required link bandwidth while satisfying a specified content loss probability requirement.
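
    For readers unfamiliar with threshold secret sharing, a minimal Shamir (k, n) sketch over a prime field is given below. It shows only how content can be split into pieces and reconstructed from any k of them; the paper's multicast routing, grouping, and network-coding optimization are not modeled, and the modulus is an illustrative choice.

        import random

        P = 2 ** 127 - 1   # a Mersenne prime used as the field modulus

        def split(secret, k, n):
            """Split an integer secret into n shares; any k shares reconstruct it."""
            coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
            poly = lambda x: sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
            return [(x, poly(x)) for x in range(1, n + 1)]

        def reconstruct(shares):
            """Lagrange interpolation at x = 0 over GF(P)."""
            secret = 0
            for i, (xi, yi) in enumerate(shares):
                num, den = 1, 1
                for j, (xj, _) in enumerate(shares):
                    if i != j:
                        num = num * (-xj) % P
                        den = den * (xi - xj) % P
                secret = (secret + yi * num * pow(den, -1, P)) % P
            return secret

        shares = split(123456789, k=3, n=5)
        assert reconstruct(shares[:3]) == 123456789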

  • Almost Sure Convergence Coding Theorems of One-Shot and Multi-Shot Tunstall Codes for Stationary Memoryless Sources

    Mitsuharu ARIMURA  

     
    PAPER-Source Coding

      Vol:
    E98-A No:12
      Page(s):
    2393-2406

    Almost sure convergence coding theorems for one-shot and multi-shot Tunstall codes are proved for stationary memoryless sources. The coding theorem for the one-shot Tunstall code is proved for the case in which the leaf count of the Tunstall tree increases. On the other hand, the coding theorem for the multi-shot Tunstall code is proved for an increasing parsing count, under the assumption that the Tunstall tree grows as the parsing proceeds. These results clarify that the theorem for the one-shot Tunstall code is not a corollary of the theorem for the multi-shot Tunstall code. In the multi-shot case, the coding theorem can be regarded as being proved for a sequential algorithm in which parsing and coding are performed repeatedly. The Cartesian concatenation of trees and the geometric mean of the leaf counts of trees are newly introduced; they play crucial roles in the analysis of the multi-shot Tunstall code.
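
    As a concrete reminder of how a Tunstall parsing tree is grown for a memoryless source, a short sketch is given below; the one-shot/multi-shot distinction and the almost sure convergence analysis of the paper are not captured by it.

        import heapq

        def tunstall_dictionary(probs, num_leaves):
            """Grow a Tunstall parsing tree by repeatedly expanding the most probable leaf.

            probs: dict mapping source symbol -> probability (memoryless source).
            Returns the parsed words (tree leaves) with their probabilities.
            """
            leaves = [(-p, (sym,)) for sym, p in probs.items()]   # max-heap via negation
            heapq.heapify(leaves)
            # Each expansion replaces one leaf with len(probs) children.
            while len(leaves) + len(probs) - 1 <= num_leaves:
                neg_p, word = heapq.heappop(leaves)               # most probable leaf
                for sym, p in probs.items():
                    heapq.heappush(leaves, (neg_p * p, word + (sym,)))
            return sorted((word, -neg_p) for neg_p, word in leaves)

        words = tunstall_dictionary({'a': 0.7, 'b': 0.3}, num_leaves=6)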

  • Off-Grid DOA Estimation Based on Analysis of the Convexity of Maximum Likelihood Function

    Liang LIU  Ping WEI  Hong Shu LIAO  

     
    LETTER-Digital Signal Processing

      Vol:
    E98-A No:12
      Page(s):
    2705-2708

    Spatial compressive sensing (SCS) has recently been applied to direction-of-arrival (DOA) estimation, owing to its advantages over conventional approaches. However, the performance of compressive sensing (CS)-based estimation methods degrades when the true DOAs do not lie exactly on the discretized sampling grid. We solve the off-grid DOA estimation problem using the deterministic maximum likelihood (DML) estimation method. In this letter, on the basis of the convexity of the DML function, we propose a computationally efficient algorithmic framework for off-grid DOA estimation. Numerical experiments demonstrate the superior performance of the proposed methods in terms of accuracy, robustness, and speed.

  • The Error Exponent of Zero-Rate Multiterminal Hypothesis Testing for Sources with Common Information

    Makoto UEDA  Shigeaki KUZUOKA  

     
    PAPER-Shannon Theory

      Vol:
    E98-A No:12
      Page(s):
    2384-2392

    The multiterminal hypothesis testing problem with a zero-rate constraint is considered. For this problem, an upper bound on the optimal error exponent was given by Shalaby and Papamarcou, provided that the positivity condition holds. Our contribution is to prove that Shalaby and Papamarcou's upper bound is valid under a weaker condition: (i) two remote observations have a common random variable in the sense of Gács and Körner, and (ii) when the value of the common random variable is fixed, the conditional distribution of the remaining random variables satisfies the positivity condition. Moreover, a generalization of the main result is also given.

  • Development and Evaluation of Near Real-Time Automated System for Measuring Consumption of Seasonings

    Kazuaki NAKAMURA  Takuya FUNATOMI  Atsushi HASHIMOTO  Mayumi UEDA  Michihiko MINOH  

     
    PAPER-Human-computer Interaction

      Publicized:
    2015/09/07
      Vol:
    E98-D No:12
      Page(s):
    2229-2241

    The amount of seasonings used during food preparation is important information that helps people cook delicious dishes as well as take care of their health. In this paper, we propose a near real-time automated system for measuring and recording the amount of seasonings used during food preparation. Our proposed system is equipped with two devices: electronic scales and a camera. Seasoning bottles are normally placed on the electronic scales, which continually measure the total weight of the bottles placed on them. When a chef uses a certain seasoning, he/she first picks up the bottle containing it from the scales, adds the seasoning to a dish, and then returns the bottle to the scales. During this process, the chef's picking and returning actions are monitored by the camera. The consumed amount of each seasoning is calculated as the difference in the total weight before and after it is used. We evaluated the performance of the proposed system through experiments on 301 trials of actual food preparation performed by seven participants. The results revealed that our system successfully measured the consumption of seasonings in 60.1% of all trials.
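
    The weight-difference bookkeeping described above is simple enough to state as code. The sketch below assumes a hypothetical event log of (bottle id, total weight before pick-up, total weight after return) entries produced by the camera and scales; it is not the authors' implementation.

        def seasoning_consumption(events):
            """Sum, per bottle, the drop in total scale weight across pick-up/return events."""
            used = {}
            for bottle_id, weight_before, weight_after in events:
                used[bottle_id] = used.get(bottle_id, 0.0) + (weight_before - weight_after)
            return used

        log = [('soy_sauce', 1520.4, 1503.1), ('salt', 1503.1, 1501.6)]
        print(seasoning_consumption(log))   # approx. {'soy_sauce': 17.3, 'salt': 1.5}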

  • An Energy-Efficient 24T Flip-Flop Consisting of Standard CMOS Gates for Ultra-Low Power Digital VLSIs

    Yuzuru SHIZUKU  Tetsuya HIROSE  Nobutaka KUROKI  Masahiro NUMA  Mitsuji OKADA  

     
    PAPER-Circuit Design

      Vol:
    E98-A No:12
      Page(s):
    2600-2606

    In this paper, we propose a low-power circuit-shared static flip-flop (CS2FF) for extremely low power digital VLSIs. The CS2FF consists of five static NOR gates and two inverters (INVs). It uses the positive edge of a buffered clock signal, generated from the root clock, to take data into the master latch, and the negative edge of the root clock to hold the data in the slave latch. The total number of transistors is only 24, the same as the conventional transmission-gate flip-flop (TGFF) used in most standard cell libraries. SPICE simulations in a 0.18-µm standard CMOS process demonstrated that the proposed CS2FF achieves a clock-to-Q delay of 18.3 ns, a setup time of 10.0 ns, a hold time of 5.5 ns, and a power dissipation of 9.7 nW at a 1-MHz clock frequency and a 0.5-V power supply. The physical design area increased by 16% and the power dissipation was reduced by 13% compared with those of the conventional TGFF. Measurement results demonstrated that the proposed CS2FF can operate at 0.352 V with an extremely low energy of 5.93 fJ.
