Zhangkai LUO Huali WANG Wanghan LV Hui TIAN
In this letter, a novel mainlobe anti-jamming method via eigen-projection processing and covariance matrix reconstruction is proposed. The present work mainly focuses on two aspects: the first is to accurately obtain the eigenvector associated with the mainlobe interference in order to form the eigen-projection matrix that suppresses the mainlobe interference; the second is to reconstruct the covariance matrix used to calculate the adaptive weight vector for forming an ideal beam pattern. In addition, the self-nulling effect caused by the signal of interest and the elimination of sidelobe interferences are also considered in the proposed method. Theoretical analysis and simulation results demonstrate that the proposed method can suppress the mainlobe interference effectively and achieve superior performance.
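A minimal NumPy sketch of the general idea (not the authors' exact recipe): take the dominant eigenvector of the sample covariance as the mainlobe-interference subspace, project it out, and compute MVDR-style weights from a reconstructed, diagonally loaded covariance. All function names and the loading constant are illustrative assumptions.

```python
import numpy as np

def steering_vector(n_elems, theta_deg, d=0.5):
    """Uniform linear array steering vector (d in wavelengths)."""
    n = np.arange(n_elems)
    return np.exp(1j * 2 * np.pi * d * n * np.sin(np.deg2rad(theta_deg)))

def eigenprojection_weights(snapshots, theta_s_deg, loading=1e-2):
    """Illustrative eigen-projection beamformer (not the paper's exact recipe).

    snapshots   : (n_elems, n_snapshots) complex array of received data.
    theta_s_deg : assumed direction of the signal of interest.
    """
    n_elems = snapshots.shape[0]
    a_s = steering_vector(n_elems, theta_s_deg)

    # Sample covariance and its eigen-decomposition.
    R = snapshots @ snapshots.conj().T / snapshots.shape[1]
    eigvals, eigvecs = np.linalg.eigh(R)

    # Take the dominant eigenvector as the mainlobe-interference subspace
    # (a simplification; the paper identifies it more carefully).
    u = eigvecs[:, -1:]
    B = np.eye(n_elems) - u @ u.conj().T      # eigen-projection matrix

    # Project the data, rebuild a covariance, and add diagonal loading
    # as a stand-in for the paper's covariance-matrix reconstruction.
    Y = B @ snapshots
    R_rec = Y @ Y.conj().T / Y.shape[1] + loading * np.eye(n_elems)

    w = np.linalg.solve(R_rec, a_s)
    return w / (a_s.conj() @ w)               # MVDR-style normalization
```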
Han-Byul LEE Jae-Eun LEE Hae-Seung LIM Seong-Hee JEONG Seong-Cheol KIM
In this paper, we propose an efficient clutter suppression algorithm for automotive radar systems in iron-tunnel environments. In general, the clutter in iron tunnels makes it highly likely that automotive radar systems will fail to detect targets. To overcome this drawback, we first analyze the cepstral characteristics of iron-tunnel clutter to determine the periodic properties of the clutter in the frequency domain. Based on this observation, we propose removing the periodic components induced by the clutter in the cepstral domain by using a cepstrum editing process. To verify the clutter suppression of the proposed method experimentally, we performed measurements with 77 GHz frequency-modulated continuous-wave radar sensors for an adaptive cruise control (ACC) system. Experimental results show that the proposed method effectively suppresses clutter in iron-tunnel environments, in the sense that it significantly improves early target detection for ACC.
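A hedged sketch of the cepstrum-editing step, assuming the quefrency band occupied by the tunnel-induced periodic ripple is already known; the peak detection and radar front-end processing described in the paper are omitted.

```python
import numpy as np

def cepstrum_edit(spectrum_mag, q_min, q_max):
    """Illustrative cepstrum editing: suppress periodic ripple in a magnitude
    spectrum by zeroing a quefrency band of its real cepstrum.

    spectrum_mag : magnitude spectrum of the radar beat signal (1-D array,
                   symmetric because the beat signal is real, so the
                   cepstrum below is real).
    q_min, q_max : quefrency indices bracketing the clutter-induced peaks,
                   with 1 <= q_min <= q_max < len(spectrum_mag).
    """
    log_spec = np.log(spectrum_mag + 1e-12)
    ceps = np.fft.ifft(log_spec).real          # real cepstrum

    # Edit: zero the clutter band and its mirror (the cepstrum is symmetric).
    n = len(ceps)
    ceps_edit = ceps.copy()
    ceps_edit[q_min:q_max + 1] = 0.0
    ceps_edit[n - q_max:n - q_min + 1] = 0.0

    return np.exp(np.fft.fft(ceps_edit).real)  # edited magnitude spectrum
```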
Katsuya NAKAHIRA Jun MASHINO Jun-ichi ABE Daisuke MURAYAMA Tadao NAKAGAWA Takatoshi SUGIYAMA
This paper proposes a dynamic spectrum controlled (DSTC) channel allocation algorithm to increase the total throughput of satellite communication (SATCOM) systems. To effectively use satellite resources such as the satellite's maximum transponder bandwidth and maximum transmission power and to handle the propagation gain variation at all earth stations, the DSTC algorithm uses two new transmission techniques: spectrum compression and spectrum division. The algorithm controls various transmission parameters, such as the spectrum compression ratio, number of spectrum divisions, combination of modulation method and FEC coding rate (MODCOD), transmission power, and spectrum bandwidth to ensure a constant transmission bit rate under variable propagation conditions. Simulation results show that the DSTC algorithm achieves up to 1.6 times higher throughput than a simple MODCOD-based algorithm.
Ying-Ren CHIEN Po-Yu CHEN Shih-Hau FANG
Powerful jammers are able to disable consumer-grade global navigation satellite system (GNSS) receivers under normal operating conditions. Conventional time-domain anti-jamming techniques are unable to effectively suppress wide-band interference such as chirp-like jammers. This paper proposes a novel anti-jamming architecture that combines wavelet packet signal analysis with adaptive filtering theory to mitigate chirp interference. Exploiting the excellent time-frequency resolution of wavelet technologies makes it possible to generate a reference chirp signal, which is essentially a “de-noised” jamming signal. The reference jamming signal is then fed into an adaptive predictor, which produces a refined replica of the jammer present in the received signal. The refined chirp signal is then subtracted from the received signal to achieve anti-jamming. Simulation results demonstrate the effectiveness of the proposed method in combating chirp interference in Galileo receivers. We achieved a jamming-to-signal power ratio (JSR) of 50 dB with an acquisition probability exceeding 90%, which is superior to many time-domain anti-jamming techniques, such as conventional adaptive notch filters. The proposed method was also implemented in a software-defined GPS receiver for further validation.
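The following sketch illustrates the two-stage idea under simplifying assumptions: an ordinary wavelet decomposition (PyWavelets) rather than the paper's wavelet packet analysis yields a rough "de-noised" jammer reference, and a normalized-LMS predictor subtracts the predicted jammer from the received samples. The threshold rule, filter order, and step size are placeholders.

```python
import numpy as np
import pywt

def wavelet_denoise(x, wavelet="db8", level=5):
    """Rough jammer reference: keep only the large wavelet coefficients,
    since the strong chirp dominates the received signal."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    thr = np.median(np.abs(coeffs[-1])) / 0.6745 * np.sqrt(2 * np.log(len(x)))
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(x)]

def lms_cancel(received, reference, order=16, mu=0.01):
    """Adaptive predictor: estimate the jammer from the reference and
    subtract the prediction from each received sample."""
    w = np.zeros(order)
    out = np.zeros_like(received, dtype=float)
    for n in range(order, len(received)):
        r = reference[n - order:n][::-1]
        y = w @ r                      # predicted jammer sample
        e = received[n] - y            # jammer-suppressed output
        w += mu * e * r / (r @ r + 1e-12)
        out[n] = e
    return out
```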
Huyen T. T. TRAN Nam PHAM NGOC Yong Ju JUNG Anh T. PHAM Truong Cong THANG
HTTP Adaptive Streaming (HAS) has become a popular solution for multimedia delivery. Because of throughput variations, video quality fluctuates during a streaming session, so a main challenge in HAS is how to evaluate the overall video quality of a session. In this paper, we explore the impacts of quality values and quality variations in HAS. We propose to use the histogram of segment quality values and the histogram of quality gradients in a session to model the overall video quality. Subjective test results show that the proposed model has very high prediction performance for different videos. In particular, the proposed model provides insight into the factors influencing the overall quality, leading to suggestions for improving the quality of streaming video.
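A minimal sketch of a histogram-based session-quality predictor in the spirit of the proposal; the bin settings and per-bin weights below are placeholders, whereas the paper fits its model to subjective scores.

```python
import numpy as np

def session_quality(segment_q, n_bins=5, w_q=None, w_g=None):
    """Illustrative overall-quality predictor from two histograms.

    segment_q : per-segment quality values (e.g., on a 1-5 scale).
    w_q, w_g  : per-bin weights; in the paper these would be fitted to
                subjective scores, here they are placeholders.
    """
    q = np.asarray(segment_q, dtype=float)
    grad = np.diff(q)                              # quality variations

    hist_q, _ = np.histogram(q, bins=n_bins, range=(1, 5), density=True)
    hist_g, _ = np.histogram(grad, bins=n_bins, range=(-4, 4), density=True)

    if w_q is None:
        w_q = np.linspace(1, 5, n_bins)            # placeholder: bin centers
    if w_g is None:
        w_g = -np.abs(np.linspace(-4, 4, n_bins))  # placeholder: penalize big jumps

    return hist_q @ w_q + hist_g @ w_g
```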
Periodic interference frequently affects the measurement of small signals and causes problems in clinical diagnostics. Adaptive filters are potential tools for cancelling such interference. However, when the interference has a frequency fluctuation, the ideal adaptive-filter coefficients for cancelling the interference also fluctuate. When the adaptation of the algorithm is slow compared with the frequency fluctuation, the interference-cancelling performance is degraded; if the adaptation is too quick, the performance is degraded by the target signal instead. To overcome this problem, we propose an adaptive filter that suppresses the fluctuation of the ideal coefficients by utilizing a $\frac{\pi}{2}$ phase-delay device. This method assumes that the frequency response characterizing the transmission path from the interference source to the main input signal is sufficiently smooth. In the numerical examples, the proposed method exhibits good performance in the presence of a frequency fluctuation when the forgetting factor is large. Moreover, we show that the proposed method reduces the calculation cost.
We study the use of Gaussian kernels with a wide range of scales for nonlinear function estimation. The estimation task can then be split into two sub-tasks: (i) model selection and (ii) learning (parameter estimation) under the selected model. We propose a fully adaptive, all-in-one scheme that jointly carries out the two sub-tasks based on the multikernel adaptive filtering framework. The task is cast as an asymptotic minimization problem of an instantaneous fidelity function penalized by two types of block l1-norm regularizers. These regularizers enhance the sparsity of the solution in two different block structures, leading to efficient model selection and dictionary refinement. The adaptive generalized forward-backward splitting method is derived to deal with the asymptotic minimization problem. Numerical examples show that the scheme achieves model selection and learning simultaneously, and demonstrate its striking advantages over the multiple kernel learning (MKL) method called SimpleMKL.
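A toy single-sample update illustrating the flavor of the scheme, assuming a fixed grid of Gaussian centers and scales: an LMS-type gradient step on the instantaneous error followed by per-scale block soft-thresholding (the proximal step of a block l1 penalty), which can drive whole scales to zero and thereby performs model selection. The paper's second regularizer and the full generalized forward-backward splitting are omitted.

```python
import numpy as np

def gauss(x, c, s):
    return np.exp(-(x - c) ** 2 / (2 * s ** 2))

def multikernel_step(coef, x_n, d_n, centers, scales, mu=0.1, lam=1e-3):
    """One online update of a multi-scale Gaussian kernel expansion.

    coef : (n_scales, n_centers) coefficient matrix.
    The gradient step reduces the instantaneous error; the block
    soft-thresholding (one block per scale) drives unnecessary scales
    to exactly zero.
    """
    K = np.array([[gauss(x_n, c, s) for c in centers] for s in scales])
    err = d_n - np.sum(coef * K)
    coef = coef + mu * err * K                     # LMS-type gradient step

    for i in range(coef.shape[0]):                 # per-scale block shrinkage
        norm = np.linalg.norm(coef[i])
        coef[i] = max(0.0, 1.0 - lam / (norm + 1e-12)) * coef[i]
    return coef
```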
The overdrive technique is widely used to eliminate motion blur in liquid-crystal displays (LCDs). However, this technique requires a large frame memory to store the previous frame. Reducing the frame memory requires an image compression algorithm suitable for real-time data processing. In this paper, we present an algorithm based on multimode-color-conversion block truncation coding (MCC-BTC) to obtain a constant output bit rate and high overdrive performance. The MCC-BTC algorithm uses four compression modes, one of which is selected for each block. The four compression modes use either the single-bitmap-generation method or the subsampling method for chrominance. As shown in the simulation results, the proposed algorithm improves the performance of both coding (by up to 2.73 dB) and overdrive (by up to 2.61 dB), and the visual quality is improved in comparison to other competing algorithms in the literature.
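For reference, a sketch of the plain BTC primitive that MCC-BTC builds on (not the multimode color-conversion logic itself): each block is represented by a one-bit bitmap and two reconstruction levels chosen to preserve the block mean and standard deviation.

```python
import numpy as np

def btc_encode(block):
    """Basic BTC of one luminance block: a 1-bit bitmap plus two
    reconstruction levels preserving the block mean and variance."""
    x = block.astype(float).ravel()
    m, s = x.mean(), x.std()
    bitmap = x >= m
    q, n = bitmap.sum(), x.size
    if q in (0, n):                       # flat block: one level is enough
        return bitmap.reshape(block.shape), m, m
    low = m - s * np.sqrt(q / (n - q))    # level for pixels below the mean
    high = m + s * np.sqrt((n - q) / q)   # level for pixels at/above the mean
    return bitmap.reshape(block.shape), low, high

def btc_decode(bitmap, low, high):
    return np.where(bitmap, high, low)
```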
Taishi HASHIMOTO Koji NISHIMURA Toru SATO
The design and performance evaluation of a partially adaptive array that suppresses clutter from low elevation angles in atmospheric radar observations are presented. The norm-constrained and directionally constrained minimization of power (NC-DCMP) algorithm has been widely used to suppress clutter in atmospheric radars because it can limit the signal-to-noise ratio (SNR) loss to a designated amount, which is the most important design factor for atmospheric radars. To suppress clutter from low elevation angles, adding supplemental antennas with a high response toward the incoming directions of clutter has been considered more efficient than uniformly dividing the high-gain main array. However, the proper handling of the gain differences between the main array and the sub-arrays has not been well studied. We performed numerical simulations to show that, with proper gain weighting, the sub-array configuration has better clutter suppression capability per unit SNR loss than uniformly divided arrays of the same size. The developed method is also applied to an actual observation dataset from the MU radar at Shigaraki, Japan. The properly gain-weighted NC-DCMP algorithm suppresses the ground clutter sufficiently, with an average SNR loss about 1 dB less than that of the uniform-gain configuration.
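A simplified NC-DCMP sketch, assuming the norm constraint is enforced through diagonal loading found by bisection (a common realization); the sub-array gain weighting and the radar-specific processing are outside this snippet.

```python
import numpy as np

def nc_dcmp_weights(R, c, norm_bound, tol=1e-6):
    """Illustrative NC-DCMP: minimize w^H R w subject to w^H c = 1 and
    ||w||^2 <= norm_bound, realized here by bisection on a diagonal
    loading level.  Assumes norm_bound >= 1/||c||^2 so a feasible
    loading exists.

    R : (N, N) sample covariance; c : (N,) look-direction constraint vector.
    """
    def dcmp(load):
        Rl = R + load * np.trace(R).real / R.shape[0] * np.eye(R.shape[0])
        w = np.linalg.solve(Rl, c)
        return w / (c.conj() @ w)

    w = dcmp(0.0)
    if np.linalg.norm(w) ** 2 <= norm_bound:
        return w                           # norm constraint inactive
    lo, hi = 0.0, 1.0
    while np.linalg.norm(dcmp(hi)) ** 2 > norm_bound:
        hi *= 2.0                          # grow loading until the bound holds
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if np.linalg.norm(dcmp(mid)) ** 2 > norm_bound:
            lo = mid
        else:
            hi = mid
    return dcmp(hi)
```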
Ramesh KUMAR Abdul AZIZ Inwhee JOE
In this paper, we propose and analyze an opportunistic amplify-and-forward (AF) relaying scheme using antenna selection in conjunction with different adaptive transmission techniques over Rayleigh fading channels. In this scheme, the best antenna of a source and the best relay are selected for communication between the source and destination. Closed-form expressions for the outage probability and average symbol error rate (SER) are derived to confirm that increasing the number of antennas is a better option than increasing the number of relays. We also obtain closed-form expressions for the average channel capacity under three different adaptive transmission techniques: 1) optimal power and rate adaptation; 2) constant power with optimal rate adaptation; and 3) channel inversion with a fixed rate. The channel capacity performance of the considered adaptive transmission techniques is evaluated and compared for different numbers of relays and various antenna configurations for each adaptive technique. Our derived analytical results are verified through extensive Monte Carlo simulations.
Lijing MA Huihui BAI Mengmeng ZHANG Yao ZHAO
In this paper, a novel adaptive sampling scheme for block compressive sensing of natural images is proposed. In view of the image content, the edge proportion in a block is used to represent its sparsity. According to the edge proportion, the sampling rate is then adaptively allocated for better compressive sensing recovery. Because there are many blocks in an image, recording the measurement ratio of each block individually would incur a large overhead. Therefore, the K-means method is applied to classify the blocks into clusters, and a measurement ratio is allocated to each cluster. In addition, we design an iterative termination condition to reduce the time consumed in the iterations of compressive sensing recovery. The experimental results show that, compared with the corresponding methods, the proposed scheme can acquire a better reconstructed image at the same sampling rate.
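A rough sketch of the rate-allocation stage under stated assumptions (a crude gradient-threshold edge map, image dimensions divisible by the block size, and a linear mapping from cluster centroids to sampling rates); the compressive measurement and recovery stages are not shown.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def allocate_rates(image, block=32, k=4, base_rate=0.1, max_rate=0.5):
    """Illustrative adaptive rate allocation: edge proportion per block,
    K-means clustering of the proportions, and one sampling rate per
    cluster, so only k rates need to be signalled instead of one per block.
    Assumes image dimensions are multiples of the block size."""
    img = image.astype(float)
    gy, gx = np.gradient(img)
    mag = np.hypot(gx, gy)
    edges = mag > 0.1 * mag.max()          # crude edge map

    h, w = img.shape
    props = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            props.append(edges[i:i + block, j:j + block].mean())
    props = np.array(props).reshape(-1, 1)

    centroids, labels = kmeans2(props, k, minit="points")
    # Map each cluster's mean edge proportion linearly onto [base_rate, max_rate].
    c = centroids.ravel()
    rates = base_rate + (max_rate - base_rate) * (c - c.min()) / (c.ptp() + 1e-12)
    return labels.reshape((h // block, w // block)), rates
```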
Hiroki KURODA Masao YAMAGISHI Isao YAMADA
For nonlinear acoustic echo cancellation, we present an algorithm that estimates the threshold of the clipping effect and the room impulse response vector by suppressing their time-varying cost function. A common way to suppress a time-varying cost function of a pair of parameters is to alternately minimize the function with respect to each parameter while keeping the other fixed, which we refer to as adaptive alternating minimization. However, since the cost function for the threshold is nonconvex, conventional methods approximate the exact minimizations by gradient descent updates, which in some cases causes serious degradation of the estimation accuracy. In this paper, by exploiting the fact that the cost function for the threshold is piecewise quadratic, we propose to exactly minimize it in closed form while suppressing the cost function for the impulse response vector in an online manner, which we call exact-online adaptive alternating minimization. The proposed method is expected to approximate the adaptive alternating minimization strategy more closely than the conventional methods. Numerical experiments demonstrate the efficacy of the proposed method.
Peeramed CHODKAVEEKITYADA Hajime FUKUCHI
Rain attenuation can drastically impact the service availability of satellite communication, especially in the higher frequency bands above 20 GHz, such as the Ka-band. Several countermeasures, including site and time diversity, have been proposed to maintain satellite link service. In this paper, we evaluate the performance of a power boost beam method, an adaptive satellite power control technology that uses rain radar data obtained throughout Japan to forecast the power margin. Boost beam analysis is carried out for different beam sizes (50, 100, 150, and 200 km) and beam numbers (1-4 beams), for a total of 16 cases. Moreover, we used a constant boost power corresponding to a rainfall rate of 20 mm/h. The obtained results show that, in comparison to the case with no boost, the effective rain intensity in each boost case was reduced.
Xuyang WANG Pengyuan ZHANG Qingwei ZHAO Jielin PAN Yonghong YAN
The introduction of deep neural networks (DNNs) has led to a significant improvement in automatic speech recognition (ASR) performance. However, the whole ASR system remains sophisticated because of its dependence on the hidden Markov model (HMM). Recently, a new end-to-end ASR framework was proposed that uses recurrent neural networks (RNNs) to directly model context-independent targets with the connectionist temporal classification (CTC) objective function, and it achieves results comparable with the hybrid HMM/DNN system. In this paper, we investigate per-dimension learning rate methods, including ADAGRAD and ADADELTA, to improve the recognition performance of the end-to-end system, based on the fact that the blank symbol used in the CTC technique dominates the output and these methods give frequent features small learning rates. Experimental results show that a relative reduction in word error rate (WER) of more than 4% and an absolute improvement in label accuracy of 5% on the training set are achieved when using ADADELTA, and fewer training epochs are needed.
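For concreteness, the standard ADADELTA update rule (Zeiler, 2012) that the paper applies; the class below is a generic NumPy implementation, not the authors' training code.

```python
import numpy as np

class Adadelta:
    """Per-dimension learning rates (ADADELTA): parameters whose gradients
    are frequently large -- such as those feeding the dominant CTC blank
    output -- automatically receive smaller effective steps."""
    def __init__(self, shape, rho=0.95, eps=1e-6):
        self.rho, self.eps = rho, eps
        self.acc_grad = np.zeros(shape)     # running average of squared gradients
        self.acc_delta = np.zeros(shape)    # running average of squared updates

    def step(self, grad):
        self.acc_grad = self.rho * self.acc_grad + (1 - self.rho) * grad ** 2
        delta = -np.sqrt(self.acc_delta + self.eps) / np.sqrt(self.acc_grad + self.eps) * grad
        self.acc_delta = self.rho * self.acc_delta + (1 - self.rho) * delta ** 2
        return delta                        # add this to the parameters
```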
This paper is a sequel to [4]; here, the system is generalized by including unknown time-varying delays in both the states and the input. Regarding the controller, the design of the adaptive gain is simplified by using only x1 and u, whereas the full state is used in [4]. Moreover, it is shown that the proposed controller is also applicable to a class of upper-triangular nonlinear systems. An example is given for illustration.
Tsubasa OCHIAI Shigeki MATSUDA Hideyuki WATANABE Xugang LU Chiori HORI Hisashi KAWAI Shigeru KATAGIRI
Among various training concepts for speaker adaptation, Speaker Adaptive Training (SAT) has been successfully applied to the standard Hidden Markov Model (HMM) speech recognizer, whose states are associated with Gaussian Mixture Models (GMMs). On the other hand, focusing on the high discriminative power of Deep Neural Networks (DNNs), a new type of speech recognizer structure that combines DNNs and HMMs has been vigorously investigated in the speaker adaptation research field. Along these two lines, it is natural to conceive of further improving a DNN-HMM recognizer by employing the training concept of SAT. In this paper, we propose a novel speaker adaptation scheme that applies SAT to a DNN-HMM recognizer. Our SAT scheme allocates a Speaker Dependent (SD) module to one of the intermediate layers of the DNN, treats its remaining layers as a Speaker Independent (SI) module, and jointly trains the SD and SI modules while switching the SD module in a speaker-by-speaker manner. We implement the scheme using a DNN-HMM recognizer whose DNN has seven layers, and evaluate its utility on the TED Talks corpus. Our experimental results show that, in the supervised adaptation scenario, our Speaker-Adapted (SA) SAT-based recognizer reduces the word error rate of the baseline SI recognizer and the lowest word error rate of the SA SI recognizer by 8.4% and 0.7%, respectively, and by 6.4% and 0.6% in the unsupervised adaptation scenario. The error reductions gained by our SA SAT-based recognizers were shown to be statistically significant. The results also show that our SAT-based adaptation outperforms its SI-based counterpart regardless of the SD module layer selection, and that the inner layers of the DNN appear more suitable for SD module allocation than the outer layers.
Motohiro NAKAMURA Shinnosuke OYA Takahiro OKABE Hendrik P. A. LENSCH
Self-luminous light sources in the real world often have nonnegligible sizes and radiate light inhomogeneously. Acquiring a model of such a light source is highly important for accurate image synthesis and understanding. In this paper, we propose an approach to measuring the 4D light fields of self-luminous extended light sources by using a liquid crystal (LC) panel, i.e., a programmable optical filter, and a diffuse-reflection board. The proposed approach recovers the 4D light field from images of the board illuminated by the light radiated from the source and passing through the LC panel. We make use of the fact that the transmittance of the LC panel can be controlled both spatially and temporally. The approach enables multiplexed and adaptive sensing, and is therefore able to acquire 4D light fields more efficiently and densely than the straightforward method. We implemented a prototype setup and confirmed through a number of experiments that our approach is effective for modeling self-luminous extended light sources in the real world.
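A minimal sketch of the multiplexed-sensing view, assuming the mapping from LC-panel cells to board pixels has been calibrated: each captured image is a known linear mixture of per-cell contributions, so the light field is recovered by least squares. The adaptive pattern selection described in the paper is not modeled.

```python
import numpy as np

def recover_lightfield(images, patterns):
    """Illustrative multiplexed recovery: each board pixel sees a linear
    combination of the rays passing through the LC-panel cells, weighted by
    the programmed transmittance pattern.  With K >= M patterns, the per-pixel
    ray intensities are recovered by least squares.

    images   : (K, P) stack of board images (P pixels) under K patterns.
    patterns : (K, M) transmittance of the M LC-panel cells for each pattern.
    Returns  : (M, P) estimated intensity of each ray at each board pixel.
    """
    # Solve patterns @ L = images for L in the least-squares sense.
    L, *_ = np.linalg.lstsq(patterns, images, rcond=None)
    return np.clip(L, 0.0, None)            # radiance is non-negative
```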
In this paper, we address the problem of projective template matching, which aims to estimate the parameters of a projective transformation. Although a homography can be estimated by combining key-point-based local features and RANSAC, this approach can hardly handle feature-less images or images with high outlier rates. Estimating the projective transformation remains a difficult problem because of its high dimensionality and strong non-convexity. Our approach is to quantize the parameters of the projective transformation over a binary finite field and to search the resulting discrete sampling set for an appropriate solution as the final result. The benefit is that we avoid searching among a huge number of potential candidates. Furthermore, in order to approximate the global optimum more efficiently, we develop a level-wise adaptive sampling (LAS) method under the genetic algorithm framework. With LAS, individuals are uniformly selected from each fitness level, and the elite solution finally converges to the global optimum. In the experiments, we compare our method against popular projective-estimation solutions and systematically analyse our method. The results show that our method provides convincing performance and holds a wider application scope.
Jie LIU Zhuochen XIE Huijie LIU Zhengmin ZHANG
In this paper, a new non-uniform weight-updating scheme for adaptive digital beamforming (DBF) is proposed. The unique feature of this letter is that the effective working range of the beamformer is extended and the computational complexity is reduced by introducing robust DBF based on worst-case performance optimization. The robust parameter for each weight update is chosen by analyzing the rate of change of the Direction of Arrival (DOA) of the desired signal in LEO satellite communication. Simulation results demonstrate the improved performance of the new Non-Uniform Weight-Updating Beamformer (NUWUB).
Estimation of the time difference of arrival (TDOA) is important for acoustic source localization. The TDOA estimation problem is defined as finding the relative delay of the direct sound between several microphone signals. The generalized cross-correlation (GCC) method is most frequently used to estimate the TDOA, but it performs poorly in reverberant environments. To overcome this problem, the adaptive eigenvalue decomposition (AED) method has been developed, which estimates the room transfer functions and finds the direct-path delay. However, the algorithm does not take into account the fact that a room transfer function is a sparse channel, so the estimated transfer function is sometimes too dense, resulting in failure to extract the exact direct path and delay. In this paper, an enhanced AED algorithm that uses a proportionate step-size control and a direct-path constraint, instead of a constant step size and the L2-norm constraint, is proposed. The simulation results show that the proposed algorithm outperforms both the conventional AED method and the phase-transform (PHAT) algorithm.
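A sketch of the AED update with a PNLMS-style proportionate step size, under simplifying assumptions: the unit-norm constraint is kept in place of the paper's direct-path constraint, and the initialization and step-size values are placeholders.

```python
import numpy as np

def aed_tdoa(x1, x2, L=128, mu=0.01, delta=0.01):
    """Sketch of adaptive eigenvalue decomposition for TDOA estimation with a
    proportionate step size (larger steps for large taps, which suits sparse
    room impulse responses).  u stacks the two channel estimates [h1; h2] and
    the cross-relation error h1*x2 - h2*x1 is driven toward zero."""
    u = np.zeros(2 * L)
    u[L // 2] = 1.0                         # crude initialization
    for n in range(L, len(x1)):
        x = np.concatenate((x2[n - L:n][::-1], -x1[n - L:n][::-1]))
        e = u @ x                           # cross-relation error
        g = (np.abs(u) + delta) / (np.abs(u).sum() + 2 * L * delta)  # proportionate gains
        u = u - mu * e * g * x / (x @ (g * x) + 1e-12)
        u /= np.linalg.norm(u)              # unit-norm (eigenvector) constraint
    h1, h2 = u[:L], u[L:]
    return np.argmax(np.abs(h2)) - np.argmax(np.abs(h1))   # relative direct-path delay
```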