In this paper we extend hyperparameter-free sparse signal reconstruction approaches to permit high-resolution time delay estimation of spread spectrum signals, and demonstrate their feasibility, in terms of both performance and computational complexity, by applying them to the ISO/IEC 24730-2.1 real-time locating system (RTLS). Numerical examples show that the sparse asymptotic minimum variance (SAMV) approach outperforms other sparse algorithms and multiple signal classification (MUSIC) regardless of the signal correlation, especially when the incoming signals are closely spaced within a Rayleigh resolution limit. The performance difference among the hyperparameter-free approaches decreases significantly as the signals become more widely separated. SAMV is sometimes strongly influenced by noise correlation, but the degrading effect of correlated noise can be mitigated through a noise-whitening process. The computational complexity of SAMV can be made feasible for practical system use by properly setting the power update threshold and the grid size, and/or via parallel implementation.
Bei ZHAO Chen CHENG Zhenguo MA Feng YU
Cross-correlation is a general way to estimate the time delay of arrival (TDOA), with a computational complexity of O(n log n) using the fast Fourier transform (FFT). However, since only one spike is required for time delay estimation, the complexity can be reduced further. Guided by the Chinese Remainder Theorem (CRT), this paper presents a new approach called Co-prime Aliased Sparse FFT (CASFFT), which requires O(n^(1-1/d) log n) multiplications and O(mn) additions, where m is the smoothing factor and d is the number of stages. By adjusting these parameters, it can strike a balance between runtime and noise robustness. Furthermore, it has a clear advantage in parallelism and runtime over a large range of signal-to-noise ratio (SNR) conditions. The accuracy and feasibility of the algorithm are analyzed in theory and verified by experiment.
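The FFT-based cross-correlation baseline that CASFFT improves upon can be sketched as follows (a minimal numpy illustration of O(n log n) TDOA estimation, not the CASFFT algorithm itself; the function name is ours):

```python
import numpy as np

def tdoa_crosscorr(x, y):
    """Integer-sample TDOA of y relative to x via FFT cross-correlation."""
    nfft = 1 << (len(x) + len(y) - 1).bit_length()   # zero-pad to a power of two
    # Circular cross-correlation computed in the frequency domain.
    cc = np.fft.irfft(np.fft.rfft(y, nfft) * np.conj(np.fft.rfft(x, nfft)), nfft)
    lag = int(np.argmax(cc))
    return lag - nfft if lag > nfft // 2 else lag    # map wrap-around to negative lags
```

Only the single spike at the correlation peak is used, which is the observation that motivates reducing the cost below a full FFT.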
Zhixin LIU Dexiu HU Yongjun ZHAO Chengcheng LIU
Considering the obvious bias of traditional interpolation methods, a novel time delay estimation (TDE) interpolation method with sub-sample accuracy is presented in this paper. The proposed method uses a generalized extended approximation method to obtain the objective function. The optimized interpolation curve is then generated by second-order cone programming (SOCP), and the optimal TDE is obtained from the interpolation curve. The delay estimate of the proposed method is not forced to lie on discrete samples, and the sample points need not lie on the interpolation curve. At an acceptable computational cost, computer simulation results clearly indicate that the proposed method is less biased and outperforms other interpolation algorithms in terms of estimation accuracy.
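For context, the standard baseline that such sub-sample methods refine, three-point parabolic interpolation of the correlation peak, can be sketched as (a minimal illustration, not the SOCP-based method of the paper):

```python
import numpy as np

def parabolic_subsample_peak(cc, k):
    """Refine an integer peak index k of a correlation sequence cc to
    sub-sample accuracy by fitting a parabola through cc[k-1..k+1]."""
    y0, y1, y2 = cc[k - 1], cc[k], cc[k + 1]
    denom = y0 - 2.0 * y1 + y2
    if denom == 0.0:                 # flat triple: no refinement possible
        return float(k)
    return k + 0.5 * (y0 - y2) / denom
```

The bias of this simple fit when the true correlation shape is not parabolic is exactly the kind of error the abstract's optimized interpolation curve targets.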
Traditional time delay estimation methods usually implicitly assume that the observed signals either propagate along the direct path only or are received coherently. In practice, multipath propagation and incoherent reception always exist simultaneously. In response to this situation, a joint maximum likelihood (ML) estimator of the multipath delays and the system error is proposed, and estimation of the number of multipath components is also considered for the specific incoherent signal model. Furthermore, an algorithm based on Gibbs sampling is developed to solve the multi-dimensional nonlinear ML estimation problem. The efficiency of the proposed estimator is demonstrated by simulation results.
Bo WU Yan WANG Xiuying CAO Pengcheng ZHU
Attenuated and delayed versions of a pulse signal overlap under multipath propagation. Previous algorithms can resolve them only if the signal sampling is ideal, and fail to separate two overlapping components under non-ideal sampling. In this paper, we propose a novel method that can resolve general non-ideally sampled pulse signals in the time domain via Taylor series expansion (TSE) and estimate the precise time delays and amplitudes of the multipath signals. In combination with the CLEAN algorithm, the overlapped pulse signal parameters are estimated one by one in an iterative manner. Simulation results verify the effectiveness of the proposed method.
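The CLEAN-style iteration mentioned above, restricted to integer-sample resolution and without the TSE refinement, can be sketched as (a numpy illustration with names of our choosing):

```python
import numpy as np

def clean_delays(y, pulse, k):
    """CLEAN-style iteration: repeatedly find the strongest pulse copy in y,
    record its (integer) delay and amplitude, and subtract it from the residual."""
    r = y.astype(float).copy()
    energy = np.dot(pulse, pulse)
    est = []
    for _ in range(k):
        cc = np.correlate(r, pulse, mode="valid")   # match pulse against residual
        d = int(np.argmax(np.abs(cc)))
        a = cc[d] / energy                          # LS amplitude at that delay
        est.append((d, a))
        r[d:d + len(pulse)] -= a * pulse            # remove the detected copy
    return est
```

The paper's contribution is to push each detected delay to sub-sample precision via TSE before subtraction, which this integer-lag sketch omits.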
The generalized cross-correlation (GCC) method is the most commonly used approach for time delay estimation (TDE). However, the GCC method can produce false peak errors (FPEs), especially at a low signal-to-noise ratio (SNR). These FPEs significantly degrade TDE, since the estimation error, i.e., the difference between the true and estimated time delays, becomes at least one sampling period. This paper introduces an algorithm that estimates two peaks from two cross-correlation functions using three signals: a reference signal, a delayed signal, and a delayed signal with an additional time delay of half a sampling period. A peak selection algorithm is also proposed to identify which peak is closer to the true time delay using subsample TDE methods. This paper presents simulations that compare the algorithms' performance for varying amounts of noise and delay. The proposed algorithms display better performance in terms of the probability of integer TDE errors, as well as the mean and standard deviation of the absolute time delay estimation errors.
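A common GCC variant, GCC-PHAT, which the two-peak scheme above builds on, can be sketched as (a minimal illustration; the half-sample-shift and peak-selection steps of the paper are not reproduced):

```python
import numpy as np

def gcc_phat(x, y, nfft=None):
    """GCC with phase-transform weighting: whitening the cross-spectrum
    sharpens the correlation peak used for TDE."""
    if nfft is None:
        nfft = 1 << (len(x) + len(y) - 1).bit_length()
    X = np.fft.rfft(x, nfft)
    Y = np.fft.rfft(y, nfft)
    R = Y * np.conj(X)
    R /= np.maximum(np.abs(R), 1e-12)      # PHAT weighting: keep phase only
    cc = np.fft.irfft(R, nfft)
    lag = int(np.argmax(cc))
    return lag - nfft if lag > nfft // 2 else lag
```

At low SNR the whitened spectrum amplifies noise-only bins, which is one source of the false peaks the abstract's selection algorithm is designed to reject.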
Seong-Hyun JANG Yeong-Sam KIM Sang-Hoon YOON Jong-Wha CHONG
In this letter, we analyze the effect of the size of the observed data on the performance of time delay estimation (TDE) in the chirp spread spectrum (CSS) system. By adjusting the size of the observed data, we reduce the effect of DC offsets, which would otherwise degrade the performance of CSS-based TDE, and we optimize the performance of TDE in the CSS system. Finally, we derive the optimal size of the observed data for TDE in the CSS system.
Kenneth Wing Kin LUI Hing Cheung SO
In this Letter, the problem of estimating the time-difference-of-arrival between signals received at two spatially separated sensors is addressed. By taking the discrete Fourier transform of the sensor outputs, time delay estimation corresponds to finding the frequency of a noisy sinusoid with time-varying amplitude. The generalized weighted linear predictor is utilized to estimate the time delay, and it is shown that its estimation accuracy attains the Cramér-Rao lower bound.
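The DFT view described above, in which the delay appears as the slope of the cross-spectrum phase, can be illustrated with a simple least-squares fit (not the generalized weighted linear predictor of the Letter; function and parameter names are ours):

```python
import numpy as np

def delay_from_phase_slope(x, y, kmax=None):
    """Estimate a (possibly fractional) delay d from the slope of the
    cross-spectrum phase: angle(Y[k] conj(X[k])) ~ -2*pi*k*d/N."""
    N = len(x)
    X = np.fft.rfft(x)
    Y = np.fft.rfft(y)
    k = np.arange(1, (kmax or N // 4) + 1)           # low bins, skip DC
    phase = np.unwrap(np.angle(Y[k] * np.conj(X[k])))
    slope = np.sum(k * phase) / np.sum(k * k)        # LS line through the origin
    return -slope * N / (2.0 * np.pi)
```

An unweighted fit like this ignores the per-bin SNR; weighting the bins, as the Letter's predictor effectively does, is what pushes the accuracy toward the Cramér-Rao bound.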
Yanxin YAO Qishan ZHANG Dongkai YANG
A method is proposed for estimating the code and carrier phase parameters of GNSS reflected signals in low-SNR (signal-to-noise ratio) environments. Simulation results show that the multipath impact on code and carrier with a delay of 0.022 C/A chips can be estimated at 0 dB SNR with a 46 MHz sampling rate.
Jang Sub KIM Ho Jin SHIN Dong Ryeol SHIN
In this paper, a multiuser receiver based on a Gaussian Mixture Sigma Point Particle Filter (GMSPPF), which can be used for joint channel coefficient estimation and time delay tracking in CDMA communication systems, is introduced. The proposed algorithm achieves better estimation performance than either the Extended Kalman Filter (EKF) or the Particle Filter (PF). The Cramer-Rao Lower Bound (CRLB) is derived for the estimator, and simulation results demonstrate that it is almost completely near-far resistant. For this reason, it is believed that the proposed estimator can replace well-known filters such as the EKF or PF.
A new explicitly adaptive time delay estimation (EATDE) algorithm is proposed for estimating a time-varying delay parameter. The proposed method is based on the Haar wavelet transform of cross-correlations. The algorithm can be viewed as a gradient-based optimization of lowpass-filtered cross-correlations, but requires less computational power. It exhibits a global convergence property for wide-band signals with uncorrelated noise. A convergence analysis covering the mean behavior, mean-square-error behavior, and steady-state error of the delay estimate is given. Simulation results are also provided to demonstrate the performance of the proposed algorithm.
This paper addresses the estimation of the time delay between two spatially separated noisy signals by system identification modeling, with both the input and output corrupted by additive white Gaussian noise. The proposed method is based on a modified adaptive Butler-Cantoni equalizer that decouples noise variance estimation from channel estimation. The bias in the time delay estimates induced by input noise is reduced by an IIR whitening filter whose coefficients are found by the Burg algorithm. For step time-variant delays, a dual-mode operation scheme is adopted, with a normal operating (tracking) mode and an interrupt operating (optimization) mode. In the tracking mode, only a few coefficients of the impulse response vector are monitored through L1-normed finite forward difference tracking, while in the optimization mode the time delay estimate is re-optimized. Simulation results confirm the superiority of the proposed approach at low signal-to-noise ratios.
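The system-identification view of TDE, in which the delay is read off the peak of an estimated impulse response, can be sketched as (a minimal batch least-squares illustration, not the adaptive Butler-Cantoni scheme with noise whitening):

```python
import numpy as np

def sysid_delay(x, y, L):
    """Fit an L-tap FIR model y ~ h * x by least squares and return the
    index of the dominant tap as the integer delay estimate."""
    N = len(x)
    # Regression matrix: column k holds x delayed by k samples.
    X = np.column_stack(
        [np.concatenate([np.zeros(k), x[:N - k]]) for k in range(L)]
    )
    h, *_ = np.linalg.lstsq(X, y, rcond=None)
    return int(np.argmax(np.abs(h)))
```

With noisy input x, plain least squares yields a biased h, which is why the paper whitens the input noise before identification.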
Feng-Xiang GE Qun WAN Jian YANG Ying-Ning PENG
The problem of super-resolution time delay estimation of real stationary signals is addressed in this paper. The time delay estimation is first converted into a frequency estimation problem. A MUSIC-type algorithm is then proposed to estimate the resulting frequency from single-experiment data, which not only avoids mathematical model mismatch but also retains the advantages of subspace-based methods. The mean square errors (MSEs) of the time delay estimate of the MUSIC-type method, for varying signal-to-noise ratio (SNR) and separation of the two received signal components, are shown to approximately coincide with the corresponding Cramer-Rao bound (CRB). Finally, a comparison between the MUSIC-type method and other conventional methods is presented to show the advantages of the proposed method.
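A generic spectral MUSIC frequency estimator that works from a single data record via subarray smoothing can be sketched as (an illustrative baseline under a complex-exponential model, not the exact algorithm of the paper; parameter names are ours):

```python
import numpy as np

def music_freq(x, p=1, m=16, grid=8192):
    """Spectral MUSIC estimate of the dominant frequency (cycles/sample)
    from a single record x, using length-m sliding subarrays as snapshots."""
    K = len(x) - m + 1
    V = np.array([x[i:i + m] for i in range(K)]).T       # m x K snapshot matrix
    R = V @ V.conj().T / K                               # smoothed covariance
    _, E = np.linalg.eigh(R)                             # eigenvalues ascending
    En = E[:, :m - p]                                    # noise subspace
    f = np.arange(grid) / grid                           # candidate frequencies
    A = np.exp(2j * np.pi * np.outer(np.arange(m), f))   # steering vectors
    P = 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
    return f[np.argmax(P)]                               # pseudospectrum peak
```

The pseudospectrum peak sits where the candidate steering vector is nearly orthogonal to the noise subspace, which is what yields resolution beyond the Fourier limit.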