
Keyword Search Result

[Keyword] density (275 hits)

101-120 of 275 hits

  • Real-Time Counting People in Crowded Areas by Using Local Empirical Templates and Density Ratios

    Dao-Huu HUNG  Gee-Sern HSU  Sheng-Luen CHUNG  Hideo SAITO  

     
    PAPER-Recognition

    Vol: E95-D No:7, Page(s): 1791-1803

    In this paper, a fast and automated method for counting pedestrians in crowded areas is proposed, with three contributions. First, we propose Local Empirical Templates (LET), which outline the foregrounds typically produced by single pedestrians in a scene. LET are extracted by clustering the foregrounds of single pedestrians with similar silhouette features; this is done automatically for unknown scenes. Second, comparing the size of a group foreground, made by a group of pedestrians, to that of the appropriate LET captured in the same image patch yields the density ratio. Because of this local scale normalization, the density ratio falls within a bound closely related to the number of pedestrians who induce the group foreground. Finally, to extract the bounds of density ratios for groups of different numbers of pedestrians, we propose a simulation based on 3D human models in which camera viewpoints and pedestrians' proximity are easily manipulated. We collect hundreds of typical occluded-people patterns with distinct degrees of human proximity under a variety of camera viewpoints. Distributions of density ratios with respect to the number of pedestrians are built from the computed density ratios of these patterns, and the density-ratio bounds are extracted from them. The simulation is performed in an offline learning phase; the extracted bounds are then used to count pedestrians online. We observe that the bounds appear to be invariant to camera viewpoint and human proximity. The performance of the proposed method is evaluated on our collected videos and on the PETS 2009 datasets. On our videos, at a resolution of 320 × 240, the method runs in real time with good accuracy at a frame rate of around 30 fps while consuming few computing resources. On the PETS 2009 datasets, the proposed method achieves results competitive with other methods tested on the same data [1],[2].
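
    A minimal sketch of the counting step once the density-ratio bounds have been learned offline; the sizes and bounds below are hypothetical, and the LET extraction and bound learning are the paper's actual contribution:

    ```python
    def density_ratio(group_fg_size, let_size):
        """Ratio of a group foreground's size (in pixels) to that of the
        Local Empirical Template (LET) captured in the same image patch."""
        return group_fg_size / let_size

    def count_from_ratio(ratio, bounds):
        """Map a density ratio onto pre-computed per-count bounds.
        `bounds` is a list of (lower, upper, count) tuples, extracted
        offline from the simulated occluded-people patterns."""
        for lower, upper, count in bounds:
            if lower <= ratio <= upper:
                return count
        return None  # ratio falls outside all learned bounds

    # e.g. two pedestrians whose merged blob is ~1.8x a single-person LET
    print(count_from_ratio(density_ratio(5400, 3000), [(0.8, 1.3, 1), (1.4, 2.2, 2)]))
    ```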

  • Dynamic Bubble-Check Algorithm for Check Node Processing in Q-Ary LDPC Decoders

    Wei LIN  Baoming BAI  Xiao MA  Rong SUN  

     
    LETTER-Fundamental Theories for Communications

    Vol: E95-B No:5, Page(s): 1815-1818

    A simplified algorithm for the check node processing of extended min-sum (EMS) q-ary LDPC decoders is presented in this letter. Compared with the bubble check algorithm, the proposed dynamic bubble-check (DBC) algorithm further reduces the computational complexity of elementary check node (ECN) processing. By introducing two flag vectors into ECN processing, the DBC algorithm uses the minimum number of comparisons at each step. Simulation results show that the DBC algorithm requires significantly fewer comparison operations than the bubble check algorithm and suffers no performance loss compared with the standard EMS algorithm on AWGN channels.

  • Further Results on the Stopping Distance of Array LDPC Matrices

    Haiyang LIU  Lu HE  Jie CHEN  

     
    PAPER-Coding Theory

    Vol: E95-A No:5, Page(s): 918-926

    Given an odd prime q and an integer m ≤ q, an array-based parity-check matrix H(m,q) can be constructed for a quasi-cyclic low-density parity-check (LDPC) code C(m,q). For m=4 and q ≥ 11, we prove that the stopping distance of H(4,q) is 10, which equals the minimum Hamming distance of the associated code C(4,q). In addition, a tighter lower bound on the stopping distance of H(m,q) is given for m > 4 and q ≥ 11.
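
    For reference, H(m,q) here refers to the familiar array-code construction built from the q × q circulant permutation matrix P; a sketch in the common convention (the paper's exact indexing may differ):

    ```latex
    % I is the q x q identity; P is the q x q circulant permutation
    % matrix that cyclically shifts coordinates by one position.
    H(m,q) \;=\;
    \begin{pmatrix}
    I & I       & I          & \cdots & I \\
    I & P       & P^{2}      & \cdots & P^{q-1} \\
    I & P^{2}   & P^{4}      & \cdots & P^{2(q-1)} \\
    \vdots & \vdots & \vdots & \ddots & \vdots \\
    I & P^{m-1} & P^{2(m-1)} & \cdots & P^{(m-1)(q-1)}
    \end{pmatrix}
    ```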

  • Importance Sampling for Turbo Codes over Slow Rayleigh Fading Channels

    Takakazu SAKAI  Koji SHIBATA  

     
    LETTER-Coding Theory

    Vol: E95-A No:5, Page(s): 982-985

    This study presents a fast simulation method for turbo codes over slow Rayleigh fading channels. The reduction in simulation time is achieved by applying importance sampling (IS). The conventional IS method for turbo codes over Rayleigh fading channels modifies only the additive white Gaussian noise (AWGN) sequences. The proposed IS method biases not only the AWGN but also the channel gains of the Rayleigh fading channel. The computer runtime of the proposed method is about 1/5 of that of the conventional IS method when evaluating a frame error rate of 10⁻⁶. Compared with the Monte Carlo simulation method, the proposed method needs only 1/100 of the runtime at the same estimator accuracy.
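
    The estimator has the standard importance-sampling form, written here with both of the quantities the method biases; this is a sketch of the generic form, not the paper's specific biasing densities:

    ```latex
    % Samples (z_i, g_i) of noise and fading gain are drawn from biased
    % densities f*, and each frame-error event is weighted by the
    % likelihood ratio so the estimator stays unbiased.
    \hat{P}_{\mathrm{FE}}
      = \frac{1}{N} \sum_{i=1}^{N}
        \mathbf{1}\!\left\{ \mathrm{error}(z_i, g_i) \right\}
        \frac{f(z_i)\, f(g_i)}{f^{*}(z_i)\, f^{*}(g_i)},
    \qquad (z_i, g_i) \sim f^{*}
    ```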

  • On the Hardness of Subset Sum Problem from Different Intervals

    Jun KOGURE  Noboru KUNIHIRO  Hirosuke YAMAMOTO  

     
    PAPER-Cryptography and Information Security

    Vol: E95-A No:5, Page(s): 903-908

    The subset sum problem, often called the knapsack problem, is known to be NP-hard, and several cryptosystems are based on it. Assuming an oracle for the shortest vector problem in lattices, the low-density attack by Lagarias and Odlyzko and its variants solve the subset sum problem efficiently when the “density” of the given problem is smaller than some threshold. When the density is defined in the context of knapsack-type cryptosystems, the weights are usually assumed to be chosen uniformly at random from the same interval. In this paper, we focus on general subset sum problems, where this assumption may not hold. We assume that the weights are chosen from different intervals and analyze the effect on the success probability of the above algorithms both theoretically and experimentally. A possible application of our result in the context of knapsack cryptosystems is the security analysis when the data size of public keys is reduced.
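
    For orientation, the density referred to above is commonly defined as below; with an SVP oracle, the Lagarias-Odlyzko attack solves almost all instances with d < 0.6463, and the improved variant of Coster et al. raises the threshold to d < 0.9408:

    ```latex
    % Density of a subset sum instance with n weights a_1, ..., a_n:
    d = \frac{n}{\log_2 \max_i a_i}
    ```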

  • Time-Domain Processing of Frequency-Domain Data and Its Application

    Wen-Long CHIN  

     
    LETTER-Fundamental Theories for Communications

    Vol: E95-B No:4, Page(s): 1406-1409

    Based on our previous work, this work presents a complete method for the time-domain processing of frequency-domain data with evenly spaced frequency indices, together with its application. The proposed method can be used to calculate the cross-spectral and power-spectral densities for the frequency indices of interest. A promising application, calculating the summation of frequency-domain cross- and auto-correlations in orthogonal frequency-division multiplexing (OFDM) systems, is studied. The advantages of the time-domain processing of frequency-domain data are 1) rapid acquisition of properties that are readily available in the frequency domain and 2) reduced complexity. The proposed fast algorithm operates directly on time-domain samples and hence needs no fast Fourier transform (FFT) operation. It has a lower complexity (about O(N) complex multiplications) than conventional techniques.
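
    The identity that makes such time-domain evaluation possible is Parseval's relation; a sketch for the full set of bins (evenly spaced subsets of bins admit an analogous identity via periodic aliasing of the time-domain sequences):

    ```latex
    % With the DFT convention X[k] = \sum_n x[n] e^{-j 2\pi kn/N}, the sum
    % of frequency-domain cross-correlations over all bins is available
    % directly from time-domain samples, with no FFT required.
    \sum_{k=0}^{N-1} X[k]\, Y^{*}[k] \;=\; N \sum_{n=0}^{N-1} x[n]\, y^{*}[n]
    ```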

  • A Reduced Complexity Linear QC-LDPC Encoding with Parity Vector Correction Technique

    Chanho YOON  Hoojin LEE  Joonhyuk KANG  

     
    LETTER-Fundamental Theories for Communications

    Vol: E95-B No:4, Page(s): 1402-1405

    A new approach for encoding one class of quasi-cyclic low-density parity-check (QC-LDPC) codes is proposed. The proposed encoding method is applicable to parity-check matrices having a dual-diagonal parity structure with a single weight-three column in the parity generation region. Instead of finding the parity bits directly, the proposed method finds them through vector correction. While the proposed LDPC encoding scheme is readily applicable to the matrices defined in the IEEE physical layer standards, the post-processing operation for extracting the correction vector requires less effort than solving the linear equations involved in finding the parity bits, as proposed by Myung et al.
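
    For context, encoding over a dual-diagonal parity structure reduces to simple back-substitution once the first parity block is known; a minimal generic sketch with hypothetical names, not the paper's parity-vector-correction step (which concerns how that structure is exploited with lower post-processing cost):

    ```python
    import numpy as np

    def encode_dual_diagonal(H_sys, info, p0):
        """Back-substitution through a dual-diagonal parity part: row i of H
        gives s[i] + p[i-1] + p[i] = 0 (mod 2), so p[i] = p[i-1] XOR s[i].
        The first parity block p0 (tied to the weight-three column) is
        assumed to have been obtained separately."""
        s = (H_sys @ info) % 2      # per-row syndromes of the systematic part
        p = np.empty_like(s)
        p[0] = p0
        for i in range(1, len(s)):
            p[i] = p[i - 1] ^ s[i]
        return p
    ```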

  • DOA Estimation of Multiple Speech Sources from a Stereophonic Mixture in Underdetermined Case

    Ning DING  Nozomu HAMADA  

     
    PAPER-Engineering Acoustics

    Vol: E95-A No:4, Page(s): 735-744

    This paper proposes a direction-of-arrival (DOA) estimation method for multiple speech sources in a stereophonic mixture in the underdetermined case, where the number of sources exceeds the number of sensors. The method relies on the sparseness of speech signals in the time-frequency (T-F) domain, which means that multiple independent speakers overlap little. First, T-F cells bearing reliable spatial information are selected using a reliability index defined by the interaural phase difference estimated at each T-F cell. Then, a statistical model of the error propagation from the phase difference at a T-F cell to its consequent DOA is introduced. Using this model and the T-F-domain sparseness, the DOA estimation problem is recast as finding the local peaks of the probability density function of the DOA. Finally, a kernel density estimator based on the proposed statistical model is applied. The performance of the proposed method is assessed by experiments. Our method outperforms others both in accuracy on real observed data and in robustness in simulations with additional diffuse noise.
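
    As an illustration of the phase-to-DOA mapping underlying such methods, a minimal far-field two-sensor sketch; this is the standard textbook relation, on top of which the paper builds its statistical error-propagation model:

    ```python
    import numpy as np

    def doa_from_ipd(ipd, freq_hz, mic_dist_m, c=343.0):
        """Map an interaural phase difference (radians) at a T-F cell to a
        DOA (radians), far-field model: ipd = 2*pi*f*d*sin(theta)/c."""
        arg = c * ipd / (2.0 * np.pi * freq_hz * mic_dist_m)
        return np.arcsin(np.clip(arg, -1.0, 1.0))

    # e.g. a 0.5 rad phase difference at 1 kHz with 10 cm microphone spacing
    print(np.degrees(doa_from_ipd(0.5, 1000.0, 0.10)))
    ```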

  • PSD Map Construction Scheme Based on Compressive Sensing in Cognitive Radio Networks

    Javad Afshar JAHANSHAHI  Mohammad ESLAMI  Seyed Ali GHORASHI  

     
    PAPER

    Vol: E95-B No:4, Page(s): 1056-1065

    Of late, many researchers have been interested in the sparse representation of signals and its applications, such as compressive sensing in cognitive radio (CR) networks, as a way of overcoming limited bandwidth. Compressive-sensing-based wideband spectrum sensing is a novel approach in cognitive radio systems. In these systems, spatial-frequency opportunistic reuse is enabled by constructing and deploying spatial-frequency power spectral density (PSD) maps. Since the CR sensors are distributed over the region of support, the PSD sensed by each sensor must be transmitted to a master node (base station) to construct the PSD maps in the space and frequency domains. When the number of sensors is large, the data transmission required to construct the PSD map can be challenging. In this paper, a compressive-sensing-based scheme is used to transmit the CR sensors' data to the master node, so the measurements are sampled at a rate lower than the Nyquist rate. With the proposed method, a PSD map acceptable for cognitive radio purposes can be achieved with only 30% of the full data transmission. Simulation results also show the robustness of the proposed method against channel variations in comparison with classical methods. Different solution schemes, such as Basis Pursuit, Lasso, Lars, and Orthogonal Matching Pursuit, are used, and their quality is evaluated in several simulations over a Rician channel for several compression ratios and signal-to-noise ratios. It is also shown that the Basis Pursuit and Lasso methods outperform the other methods, particularly at higher compression rates.
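
    A toy sketch of the sparse-recovery step with one of the named solvers (Lasso, via scikit-learn); the dimensions, sensing matrix, and regularization weight are illustrative, not the paper's:

    ```python
    import numpy as np
    from sklearn.linear_model import Lasso

    rng = np.random.default_rng(0)
    n, m, k = 256, 77, 8                 # signal length, ~30% measurements, sparsity
    x = np.zeros(n)
    x[rng.choice(n, size=k, replace=False)] = rng.standard_normal(k)

    A = rng.standard_normal((m, n)) / np.sqrt(m)   # random sensing matrix
    y = A @ x                                      # compressed measurements

    x_hat = Lasso(alpha=1e-3, max_iter=100000).fit(A, y).coef_
    print("relative error:", np.linalg.norm(x - x_hat) / np.linalg.norm(x))
    ```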

  • SIS Junctions for Millimeter and Submillimeter Wave Mixers Open Access

    Takashi NOGUCHI  Toyoaki SUZUKI  Tomonori TAMURA  

     
    INVITED PAPER

    Vol: E95-C No:3, Page(s): 320-328

    We have developed a process for fabricating high-quality Nb/AlOx/Nb tunnel junctions with small area and high current density for heterodyne mixers at millimeter and submillimeter wavelengths. Their dc I-V curves are studied numerically, including the broadening of the quasiparticle density of states that results from an imaginary part of the gap energy of Nb. We have found, both experimentally and numerically, that the subgap current depends strongly on bias voltage at temperatures below 4.2 K, unlike the prediction of the BCS tunneling theory. Calculated dc I-V curves that take the complex gap energy into account agree well with those of Nb/AlOx/Nb junctions measured at temperatures from 0.4 to 4.2 K. Employing these high-quality Nb/AlOx/Nb junctions, we have built receivers at millimeter and submillimeter wavelengths with noise temperatures as low as four times the quantum photon noise. These low-noise receivers are to be installed in the ALMA (Atacama Large Millimeter/Submillimeter Array) telescope and are now going into series production.

  • Design of Quasi-Cyclic Cycle LDPC Codes over GF(q)

    ShuKai HU  Chao CHEN  Rong SUN  XinMei WANG  

     
    LETTER-Fundamental Theories for Communications

    Vol: E95-B No:3, Page(s): 983-986

    Quasi-cyclic (QC) low-density parity-check (LDPC) codes have several appealing properties regarding decoding, storage requirements, and encoding. In this paper, we focus on QC LDPC codes over GF(q) whose parity-check matrices have fixed column weight j = 2. By investigating two subgraphs in the Tanner graphs of the corresponding base matrices, we derive two upper bounds on the minimum Hamming distance for this class of codes. In addition, a method is proposed to construct QC LDPC codes over GF(q) with good Hamming distance distributions. Simulations show that the designed codes perform well.

  • Low-Complexity Memory Access Architectures for Quasi-Cyclic LDPC Decoders

    Ming-Der SHIEH  Shih-Hao FANG  Shing-Chung TANG  Der-Wei YANG  

     
    PAPER-Computer System

    Vol: E95-D No:2, Page(s): 549-557

    Partially parallel decoding architectures are widely used in the design of low-density parity-check (LDPC) decoders, especially for quasi-cyclic (QC) LDPC codes. To comply with the structure of the parity-check matrices of QC-LDPC codes, many small memory blocks are conventionally employed in this architecture, and the total memory area usually dominates the area requirement of LDPC decoders. This paper proposes a low-complexity memory access architecture that merges small memory blocks into memory groups to relax the peripheral overhead of small memory blocks. A simple but efficient algorithm is also presented to handle the additional delay elements introduced by the memory merging. Experimental results on a rate-1/2 parity-check matrix defined in the IEEE 802.16e standard show that the LDPC decoder designed with the proposed memory access architecture has the lowest area complexity among related studies. Compared to a design with the same specifications, the decoder implemented with the proposed architecture requires 33% fewer gates and is more power-efficient. The proposed memory access architecture is thus suitable for the design of low-complexity LDPC decoders.

  • Lowering Error Floors of Irregular LDPC Codes by Combining Construction and Decoding

    Xiaopeng JIAO  Jianjun MU  Fan FANG  Rong SUN  

     
    LETTER-Fundamental Theories for Communications

    Vol: E95-B No:1, Page(s): 271-274

    Irregular low-density parity-check (LDPC) codes generally have good decoding performance in the waterfall region, but they exhibit higher error floors than regular ones. In this letter, we present a hybrid method that combines code construction and the iterative decoding algorithm to tackle this problem. Simulation results show that the proposed scheme significantly lowers the error floor of irregular LDPC codes over the binary-input additive white Gaussian noise (BIAWGN) channel.

  • A Novel Bayes' Theorem-Based Saliency Detection Model

    Xin HE  Huiyun JING  Qi HAN  Xiamu NIU  

     
    LETTER-Image Recognition, Computer Vision

    Vol: E94-D No:12, Page(s): 2545-2548

    We propose a novel saliency detection model based on Bayes' theorem. The model integrates the two parts of Bayes' equation to measure saliency, each of which was considered separately in previous models. The proposed model measures saliency by computing a local kernel density estimate of the features in the center-surround region and a global kernel density estimate of the features at each pixel across the whole image. Under this model, a saliency detection method is presented that uses the DCT (Discrete Cosine Transform) magnitude of the local region around each pixel as the feature. Experiments show that the proposed model not only performs competitively on psychological patterns and better than current state-of-the-art models on human visual fixation data, but is also robust against signal uncertainty.
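
    One plausible reading of how two such density estimates combine into a score; a sketch of the general idea, not necessarily the paper's exact formula:

    ```latex
    % A local center-surround density over a global density on the feature
    % f_x at pixel x, both obtained by kernel density estimation with
    % kernel K_h over the n feature samples f_1, ..., f_n.
    S(x) \;\propto\; \frac{\hat{p}_{\mathrm{local}}(f_x)}{\hat{p}_{\mathrm{global}}(f_x)},
    \qquad
    \hat{p}(f) \;=\; \frac{1}{n} \sum_{i=1}^{n} K_h\!\left(f - f_i\right)
    ```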

  • Single-Layer Trunk Routing Using Minimal 45-Degree Lines

    Kyosuke SHINODA  Yukihide KOHIRA  Atsushi TAKAHASHI  

     
    PAPER-Physical Level Design

    Vol: E94-A No:12, Page(s): 2510-2518

    In recent printed circuit boards (PCBs), the design size and density have increased, and improved routing tools for PCBs are required. Several routing tools generate high-quality routing patterns when the connection requirements can be realized by horizontal and vertical segments only. In high-density PCBs, however, the connection requirements cannot all be realized with only horizontal and vertical segments; up to one third of the nets may be unroutable without non-orthogonal segments. In this paper, a routing method for a single-layer routing area is introduced that handles higher-density designs by using 45-degree segments locally to relax the routing density. The proposed method extracts critical zones in which non-orthogonal segments are required to realize the connection requirements, and uses 45-degree segments only in these zones. By extracting minimal critical zones, the remaining area, which can be used to improve the quality of the routing pattern without worrying about connectivity, is maximized. The proposed method can use routing methods that generate high-quality routing patterns as subroutines, even if they handle only horizontal and vertical segments. Experiments show that the proposed method analyzes a routing problem properly and realizes the routing by using 45-degree segments effectively.

  • Effects of Additive Elements on TFT Characteristics in Amorphous IGZO Films under Light Illumination Stress Open Access

    Shinya MORITA  Satoshi YASUNO  Aya MIKI  Toshihiro KUGIMIYA  

     
    INVITED PAPER

    Vol: E94-C No:11, Page(s): 1739-1744

    We have studied the effects of adding elements to the channel layers of amorphous IGZO TFTs on the threshold voltage (Vth) shift under light illumination stress. With the addition of Hf or Si, the Vth shift under light illumination stress and under negative bias-temperature stress with illumination was drastically suppressed, whereas TFTs using IGZO with Mn or Cu showed no switching operation. We found that adding Si or Hf to the IGZO channel layer reduces the hole trap sites formed at or near the gate insulator/IGZO channel interface.

  • Boosting Learning Algorithm for Pattern Recognition and Beyond Open Access

    Osamu KOMORI  Shinto EGUCHI  

     
    INVITED PAPER

    Vol: E94-D No:10, Page(s): 1863-1869

    This paper discusses recent developments in pattern recognition, focusing on the boosting approach in machine learning. Statistical properties such as Bayes risk consistency for several loss functions are discussed in a probabilistic framework. A number of loss functions have been proposed for different purposes and targets. A unified derivation is given by a generator function U, which naturally defines an entropy, a divergence, and a loss function. The class of U-loss functions is associated with boosting learning algorithms for loss minimization, which include AdaBoost and LogitBoost, a twin generated from the Kullback-Leibler divergence, and the (partial) area under the ROC curve. We extend boosting to unsupervised learning, typically density estimation employing a U-loss function. Finally, a future perspective on machine learning is discussed.
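
    For concreteness, the two familiar instances named above, with labels y_i ∈ {-1,+1} and discriminant F; AdaBoost minimizes the exponential loss and LogitBoost the binomial negative log-likelihood:

    ```latex
    L_{\exp}(F) = \sum_{i=1}^{n} \exp\!\left(-y_i F(x_i)\right),
    \qquad
    L_{\mathrm{logit}}(F) = \sum_{i=1}^{n} \log\!\left(1 + e^{-2 y_i F(x_i)}\right)
    ```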

  • Optimized Relay Selection Strategy Based on GF(2p) for Adaptive Network Coded Cooperation

    Kaibin ZHANG  Liuguo YIN  Jianhua LU  

     
    LETTER-Wireless Communication Technologies

    Vol: E94-B No:10, Page(s): 2912-2915

    The adaptive network coded cooperation (ANCC) scheme can achieve excellent performance for data transmission from a large collection of terminals to a common destination in wireless networks. However, the random relay selection strategy of the ANCC protocol may generate distributed low-density parity-check (LDPC) codes with many short cycles, which can cause error floors and performance degradation. In this paper, an optimized relay selection strategy for ANCC is proposed. Before data communication, by exploiting low-cost information interaction between the destination and the terminals, the proposed method generates good ensembles of distributed LDPC codes while dramatically reducing the storage requirement. Simulation results demonstrate that the proposed relay selection protocol significantly outperforms the random relay selection strategy.

  • Nonbinary Quasi-Cyclic LDPC Cycle Codes with Low-Density Systematic Quasi-Cyclic Generator Matrices

    Yang YANG  Chao CHEN  Jianjun MU  Jing WANG  Rong SUN  Xinmei WANG  

     
    LETTER-Fundamental Theories for Communications

    Vol: E94-B No:9, Page(s): 2620-2623

    In this letter, we propose an appealing class of nonbinary quasi-cyclic low-density parity-check (QC-LDPC) cycle codes. The parity-check matrix is carefully designed so that the corresponding generator matrix has some nice properties: 1) systematic, 2) quasi-cyclic, and 3) sparse, which allows parallel encoding with low complexity. Simulation results show that the performance of the proposed encoding-aware LDPC codes is comparable to that of progressive-edge-growth (PEG) constructed nonbinary LDPC cycle codes.

  • Multi-Stage Decoding Scheme with Post-Processing for LDPC Codes to Lower the Error Floors

    Beomkyu SHIN  Hosung PARK  Jong-Seon NO  Habong CHUNG  

     
    LETTER-Fundamental Theories for Communications

    Vol: E94-B No:8, Page(s): 2375-2377

    In this letter, we propose a multi-stage decoding scheme with post-processing for low-density parity-check (LDPC) codes, which remedies the rapid performance degradation at high signal-to-noise ratio (SNR) known as the error floor. In the proposed scheme, the words decoded unsuccessfully in the previous stage are re-decoded after manipulating the received log-likelihood ratios (LLRs) of properly selected variable nodes. Two effective criteria for selecting the probably erroneous variable nodes are also presented. Numerical results show that the proposed scheme corrects most of the unsuccessfully decoded words of the first stage that exhibit oscillatory behavior, which is regarded as a main cause of the error floor.
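
    A minimal sketch of the LLR-manipulation step before re-decoding; the selection criteria and the exact manipulation are the letter's contribution, and the `mode` values here are illustrative:

    ```python
    import numpy as np

    def post_process_llrs(llr, suspect_idx, mode="erase"):
        """Prepare re-decoding: manipulate the received LLRs of variable
        nodes flagged as probably erroneous, then run belief propagation
        again on the returned vector."""
        out = np.asarray(llr, dtype=float).copy()
        if mode == "erase":
            out[suspect_idx] = 0.0            # treat suspects as erasures
        elif mode == "flip":
            out[suspect_idx] = -out[suspect_idx]
        return out
    ```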

101-120 of 275 hits