A quasi-periodic signal is a periodic signal with period and amplitude variations. Several physiological signals, including the electrocardiogram (ECG), can be treated as quasi-periodic. Vector quantization (VQ) is a valuable and universal tool for signal compression. However, compressing quasi-periodic signals with VQ presents several problems. First, a pre-trained codebook adapts poorly to signal variations, so the quality of the reconstructed signal cannot be controlled. Second, the periodicity of the signal causes data redundancy in the codebook, where many codevectors are highly correlated. These two problems are solved by the proposed codebook replenishment VQ (CRVQ) scheme based on a bar-shaped (BS) codebook structure. In the CRVQ, codevectors can be updated online according to signal variations, and the quality of the reconstructed signal can be specified. With the BS codebook structure, codebook redundancy is reduced significantly and considerable codebook storage space is saved; moreover, variable-dimension (VD) codevectors can be used to minimize the coding bit rate subject to a distortion constraint. The theoretical rationale and implementation scheme of the VD-CRVQ are given. ECG data from the MIT/BIH arrhythmia database are tested, and the results are substantially better than those of other VQ compression methods.
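The core of codebook replenishment is simple: encode with the nearest codevector when it meets the distortion bound, otherwise transmit the vector itself and append it to the codebook at both ends. The sketch below illustrates only that control loop; the function names, the MSE distortion measure, and the fixed threshold `d_max` are illustrative assumptions, not the paper's BS or variable-dimension machinery.

```python
import numpy as np

def crvq_encode(frames, codebook, d_max):
    """Quality-controlled VQ: fall back to replenishment whenever no
    existing codevector meets the distortion bound d_max (illustrative)."""
    indices, updates = [], []
    for x in frames:
        d = np.mean((codebook - x) ** 2, axis=1)   # MSE to every codevector
        k = int(np.argmin(d))
        if d[k] <= d_max:
            indices.append(k)                      # cheap case: index only
        else:
            indices.append(len(codebook))          # signal a new codevector
            updates.append(x)                      # costly case: raw vector
            codebook = np.vstack([codebook, x])    # online replenishment
    return indices, updates, codebook
```

Because the decoder appends the same transmitted vectors in the same order, the two codebooks stay synchronized, which is what lets the codebook adapt online while the distortion bound guarantees reconstruction quality.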
Takashi KOHAMA Shogo NAKAMURA Hiroshi HOSHINO
The recording of electrocardiogram (ECG) signals for the purpose of finding arrhythmias takes 24 hours. Generally speaking, changes in R-R intervals are used to detect arrhythmias. Our purpose is to develop an algorithm which detects R-R intervals efficiently. The system uses R-wave positions to calculate R-R intervals and then detects any arrhythmias. To detect the next R-wave efficiently, the algorithm searches only a short time span estimated from the most recent R-wave position. We call this span a WINDOW. The WINDOW is set by the proposed search algorithm so that the next R-wave can be expected to fall within it. When the S-wave is enhanced for some reason, such as the way the electrodes are attached, the S-wave positions are used instead of the R-wave positions to calculate the peak intervals. However, baseline wander and noise contained in the ECG signal degrade the accuracy with which the R-wave or S-wave position is determined. To improve detection, the ECG signal is preprocessed with a Band-Pass Filter (BPF) composed of simple Cascaded Integrator Comb (CIC) filters. The American Heart Association (AHA) database was used in the simulation of the proposed algorithm. Accurate detection of the R-wave position was achieved in 99% of cases, and efficient extraction of R-R intervals was possible.
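As a rough illustration of the WINDOW idea, the sketch below predicts the next R-wave position from the most recent R-R interval and searches only a short span around that prediction. The window bounds, the two-second bootstrap, and the bare argmax peak pick are assumptions made for brevity; the CIC band-pass preprocessing described above is omitted.

```python
import numpy as np

def detect_r_waves(ecg, fs):
    """WINDOW-style R-wave search (illustrative): each beat is sought
    only in a span predicted from the latest R-R interval."""
    pos = int(np.argmax(ecg[: 2 * fs]))    # bootstrap on the first two seconds
    peaks = [pos]
    rr = fs                                # initial R-R guess: one second
    while pos + rr + rr // 2 < len(ecg):
        lo = pos + rr - rr // 4            # WINDOW around the expected position
        hi = pos + rr + rr // 2
        pos = lo + int(np.argmax(ecg[lo:hi]))
        rr = pos - peaks[-1]               # adapt the prediction beat by beat
        peaks.append(pos)
    return peaks
```

Searching a WINDOW of a fraction of one R-R interval, rather than the whole record, is what makes the per-beat cost small.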
In this paper, we propose a new denoising algorithm based on the dyadic wavelet transform (DWT) for ECG signals corrupted with different types of synthesized noise. Using the property that the DWT is overcomplete, we define convex sets in the space of wavelet coefficients and give an iterative method of projection onto these convex sets. The results show that the noise is not only removed from the ECG signals but the signals are also reconstructed in a form suitable for detecting the QRS complex. The performance of the proposed algorithm is demonstrated by experiments in comparison with conventional methods.
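The following sketch conveys the flavor of such an iteration using PyWavelets' stationary (overcomplete) wavelet transform: clipping the detail coefficients into a box is a projection onto a convex set, and re-applying the analysis/synthesis pair re-projects onto the transform's range. The wavelet, level, threshold, and the particular convex sets are assumptions for illustration; the paper defines its own sets.

```python
import numpy as np
import pywt

def pocs_denoise(x, wavelet="db4", level=3, thr=0.1, n_iter=10):
    """Alternating projections on an overcomplete DWT (illustrative).
    len(x) must be divisible by 2**level for pywt.swt."""
    y = np.asarray(x, dtype=float).copy()
    for _ in range(n_iter):
        coeffs = pywt.swt(y, wavelet, level=level)
        # projection 1: clip each detail band into the box |c| <= thr
        coeffs = [(cA, np.clip(cD, -thr, thr)) for cA, cD in coeffs]
        # projection 2: back to signal space (range of the transform)
        y = pywt.iswt(coeffs, wavelet)
    return y
```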
Takanori UCHIYAMA Kenzo AKAZAWA Akira SASAMORI
This paper proposes a new data compression algorithm for ambulatory ECG in which no distortion is introduced into the reconstructed signal, templates are constructed selectively from detected beats, and the categorized ECG morphologies (templates) can be displayed when the compressed data are decoded. The algorithm consists of subtracting a best-fit template from each detected beat with the aid of multi-template matching, first-differencing the resulting residuals, and modified Huffman coding. It was evaluated in terms of bit rate by applying it to ECG signals from the American Heart Association (AHA) database. The following features were indicated. (1) The decompressed signal coincided completely with the original sampled ECG data. (2) The bit rate was approximately 800 bps at the appropriate threshold of 50-60 units (1 unit = 2.4 µV) for the template matching. This bit rate was almost the same as that of direct compression (encoding the first-differenced original signal). (3) The decompressed templates made it easy to classify beats into normal and abnormal; this could be done without fully decompressing the ECG signal.
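A minimal sketch of the encoding path is given below, assuming beats are already detected and aligned to a common length. The max-error matching criterion, the threshold semantics, and the plain integer symbols (which would feed the paper's modified Huffman coder) are illustrative choices.

```python
import numpy as np

def encode_beat(beat, templates, match_thr):
    """Template subtraction + first difference (lossless, illustrative).
    k == -1 means no template matched: the beat itself is encoded and
    both ends add the decoded beat to their template lists."""
    k, best = -1, match_thr + 1
    for i, t in enumerate(templates):
        e = int(np.abs(beat - t).max())
        if e < best:
            k, best = i, e
    base = templates[k] if k >= 0 else np.zeros_like(beat)
    symbols = np.diff(beat - base, prepend=0)   # invertible first difference
    return k, symbols

def decode_beat(k, symbols, templates):
    """Exact inverse: cumulative summing undoes the first difference."""
    base = templates[k] if k >= 0 else np.zeros_like(symbols)
    beat = base + np.cumsum(symbols)
    if k < 0:
        templates.append(beat)                  # stay synchronized
    return beat
```

Since the residual is transmitted exactly, the decoded beat coincides sample for sample with the original, which is the lossless property claimed above.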
Nitish V. THAKOR Yi-chun SUN Hervé RIX Pere CAMINAL
The MultiWave data compression algorithm is based on the multiresolution wavelet technique for decomposing electrocardiogram (ECG) signals into their coarse and successively more detailed components. At each successive resolution, or scale, the data are convolved with appropriate filters and then the alternate samples are discarded. This procedure results in a data compression rate that increases on a dyadic scale with successive wavelet resolutions. ECG signals recorded from patients with normal sinus rhythm, supraventricular tachycardia, and ventricular tachycardia are analyzed. The data compression rates and the percentage distortion levels at each resolution are obtained. The performance of the MultiWave data compression algorithm is shown to be superior to that of another algorithm (the Turning Point algorithm) that also carries out data reduction on a dyadic scale.
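The decomposition step is easy to picture with PyWavelets: each `pywt.dwt` call convolves the data with the analysis filters and keeps every other sample, so retaining only the coarse branch halves the data per scale. This sketch assumes the db2 wavelet and discards all detail bands, which is only a caricature of MultiWave's scale-by-scale evaluation.

```python
import pywt

def coarse_representation(x, wavelet="db2", levels=4):
    """Filter + dyadic downsample per level; keeping only the coarse
    branch gives compression rates 2, 4, 8, ... (illustrative)."""
    kept = x
    for _ in range(levels):
        kept, _detail = pywt.dwt(kept, wavelet)  # halves the sample count
    return kept    # reconstruct by repeated pywt.idwt with zero details
```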
This paper describes and analyzes several indices for assessing electrocardiogram data compression algorithms, such as the cross correlation (CC), the percent root mean square difference (PRD), and a new measure, the standardized root mean square difference (SRD). Although these indices are helpful for objectively evaluating the algorithms, visual examination of the reconstructed waveform is indispensable for deciding the optimal compression ratio. This paper presents the clinical significance of selected waveforms which are prone to distortion or omission in the restored waveforms but are crucial for cardiologists in diagnosing the patient. A database of electrocardiograms is also proposed for the comparative evaluation of compression algorithms.
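The two standard indices named above have well-known definitions, reproduced below in one common convention (PRD without mean removal); the SRD is the paper's new measure and is defined there, so it is not reproduced here.

```python
import numpy as np

def prd(x, y):
    """Percent root mean square difference between original x and
    reconstruction y."""
    return 100.0 * np.sqrt(np.sum((x - y) ** 2) / np.sum(x ** 2))

def cc(x, y):
    """Cross correlation coefficient between x and y."""
    xm, ym = x - x.mean(), y - y.mean()
    return np.sum(xm * ym) / np.sqrt(np.sum(xm ** 2) * np.sum(ym ** 2))
```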
Yoshiaki SAITOH Yasushi HASEGAWA Tohru KIRYU Jun'ichi HORI
We use B-spline functions and apply the Oslo algorithm to minimize the number of control points in electrocardiogram (ECG) waveform compression under limits on evaluation indexes. The method uses dynamic programming matching to transfer the control points of a reference ECG waveform to the succeeding ECG waveforms, which reduces the execution time for beat-to-beat processing. We also reduced the processing time at several compression stages. When the percent normalized root mean square difference is around 10, our method gives the highest compression ratio at a sampling frequency of 250 Hz.
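The Oslo algorithm and the dynamic-programming transfer of control points are not reproduced here, but the underlying trade-off, fewer control points versus reconstruction error, can be sketched with SciPy's smoothing B-spline fit, where the smoothing factor `s` plays the role of the evaluation-index limit:

```python
import numpy as np
from scipy.interpolate import splrep, splev

def bspline_compress(beat, fs=250.0, smooth=1e-3):
    """Cubic B-spline fit: (knots, coefficients) is the compressed form;
    a larger `smooth` means fewer knots and a higher PRD (illustrative)."""
    t = np.arange(len(beat)) / fs
    tck = splrep(t, beat, k=3, s=smooth)   # knot count shrinks as s grows
    recon = splev(t, tck)
    prd = 100.0 * np.sqrt(np.sum((beat - recon) ** 2) / np.sum(beat ** 2))
    return tck, recon, prd
```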
Hiroyoshi MORITA Kingo KOBAYASHI
A method for the compression of ECG data is presented. The method is based on the edit distance algorithm developed for file comparison problems. The edit distance between two sequences of symbols is defined as the number of edit operations required to transform one sequence into the other. We adopt the edit distance algorithm to obtain a list of edit operations, called an edit script, which transforms a reference pulse into a pulse selected from the ECG data. If the decoder knows the same reference, it can reproduce the original pulse from the edit script alone. The edit script is expected to be smaller than the original pulse when the two pulses look alike, and thereby we can reduce the amount of space needed to store the data. Applying the proposed scheme to raw ECG data, we have achieved a high compression ratio of about 14:1 without losing the significant features of the signals.
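Python's difflib can stand in for the file-comparison machinery to show what an edit script looks like: runs that match the reference are encoded as copy operations, everything else as literal samples. The opcode vocabulary below is a simplification, not the paper's coding scheme.

```python
import difflib

def edit_script(reference, pulse):
    """Edit script turning `reference` into `pulse` (illustrative)."""
    script = []
    sm = difflib.SequenceMatcher(a=reference, b=pulse, autojunk=False)
    for tag, i1, i2, j1, j2 in sm.get_opcodes():
        if tag == "equal":
            script.append(("copy", i1, i2))             # cheap back-reference
        else:
            script.append(("lit", list(pulse[j1:j2])))  # literal samples
    return script

def apply_script(reference, script):
    """The decoder reproduces the pulse exactly from the script alone."""
    out = []
    for op in script:
        out.extend(reference[op[1]:op[2]] if op[0] == "copy" else op[1])
    return out
```

When the pulse closely resembles the reference, the script is dominated by short copy tuples, which is exactly the redundancy the method exploits.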
In the present paper, we focus on the turning point (TP) algorithm proposed by Mueller and evaluate its performance when applied to a Gaussian signal with a given covariance function. The ECG wave is then modeled by Gaussian signals: namely, the ECG is divided into two segments, the baseline segment and the QRS segment. The baseline segment is modeled by a Gaussian signal with a Butterworth spectrum and the QRS segment by a narrow-band Gaussian signal. The performance of the TP algorithm is evaluated and compared when it is applied to a real ECG signal and to its Gaussian model. The compression rate (CR) and the normalized mean square error (NMSE) are used as measures of performance. These measures show good agreement with each other when applied to Gaussian signals with the above spectra. Our results suggest that performance evaluation of compression algorithms based on a stochastic-process model of ECG waves may be effective.
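Mueller's TP rule itself is short enough to state in code: samples are taken in pairs, and the one that preserves a slope sign change (a turning point) is retained, giving a fixed 2:1 reduction.

```python
def _sgn(v):
    return (v > 0) - (v < 0)

def turning_point(x):
    """Turning Point compression: fixed 2:1 reduction retaining local
    extrema (after Mueller)."""
    out = [x[0]]
    ref = x[0]
    for i in range(1, len(x) - 1, 2):
        x1, x2 = x[i], x[i + 1]
        # keep x1 if the slope changes sign there, otherwise keep x2
        keep = x1 if _sgn(x1 - ref) * _sgn(x2 - x1) < 0 else x2
        out.append(keep)
        ref = keep
    return out
```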
Jie CHEN Shuichi ITOH Takeshi HASHIMOTO
A new method for the compression of electrocardiographic (ECG) data is presented. The method is based on the orthonormal wavelet analysis recently developed in applied mathematics. Using the wavelet transform, the original signal is decomposed into a set of sub-signals with different frequency channels corresponding to different physical features of the signal. Using an optimum bit allocation scheme, each decomposed sub-signal is treated according to its contribution to the total reconstruction distortion and to the bit rate. In our experiments, compression ratios (CR) from 13.5:1 to 22.9:1, with corresponding percent rms difference (PRD) between 5.5% and 13.3%, were obtained at a clinically acceptable signal quality. The experimental results show that the proposed method seems suitable for the compression of ECG data, offering a high compression ratio and high speed.
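The bit allocation step can be illustrated with the classical high-rate rule, which assigns each sub-signal bits in proportion to the log of its variance. The wavelet, depth, and average rate below are arbitrary assumptions, and the unequal subband lengths that a careful implementation must weight for are ignored here.

```python
import numpy as np
import pywt

def allocate_bits(x, wavelet="db4", level=4, avg_bits=4.0):
    """Classical allocation b_k = avg + 0.5*log2(var_k / geo_mean_var)
    across wavelet subbands (illustrative)."""
    bands = pywt.wavedec(x, wavelet, level=level)
    var = np.array([np.var(b) + 1e-12 for b in bands])
    geo_mean = np.exp(np.mean(np.log(var)))
    bits = avg_bits + 0.5 * np.log2(var / geo_mean)
    return bands, np.clip(bits, 0.0, None)    # no negative allocations
```

Bands that contribute little to the reconstruction distortion receive few or zero bits, which is how the scheme trades PRD against bit rate.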
Susumu TSUDA Koichi SHIMIZU Goro MATSUMOTO
A technique was developed to reduce ECG data efficiently within a controlled accuracy. The sampled and digitized original ECG waveform is transformed in three major processes: the calculation of a beat-to-beat variation, a polygonal approximation, and the calculation of the differences between consecutive node points. An adaptive coding technique is then applied to minimize redundancies in the data. It was demonstrated that an ECG waveform sampled at 200 Hz, 10 bits/sample, 5 µV/digit could be reduced with a bit reduction ratio of about 10% and a reconstruction error of about 2.5%. A polygonal approximation method, called MSAPA, was newly developed as a modification of the well-known SAPA method. It was shown that MSAPA gave better reduction efficiency and smaller reconstruction error than SAPA when applied to the beat-to-beat variation waveform. The importance of low-pass filtering as preprocessing for the polygonal approximation was confirmed with concrete examples. The efficiency of the proposed technique was compared with the case in which the polygonal approximation was not used. Through these analyses, it was found that the redundancy elimination of the coding technique worked effectively in the proposed technique.
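MSAPA's specific modifications are not reproduced here; the sketch below is the generic first-order fan (SAPA-style) polygonal approximation, which emits a node whenever the incoming sample can no longer be reached by a line staying within ±eps of all samples since the last node.

```python
def fan_polygonal(x, eps):
    """SAPA-style fan algorithm (illustrative): returns (index, value)
    node points; linear interpolation between nodes reconstructs the
    waveform to within roughly ±eps."""
    nodes = [(0, x[0])]
    oi, ov = 0, x[0]                    # current origin node
    hi, lo = float("inf"), float("-inf")
    for i in range(1, len(x)):
        s_hi = (x[i] + eps - ov) / (i - oi)
        s_lo = (x[i] - eps - ov) / (i - oi)
        if s_lo > hi or s_hi < lo:      # slope cone collapsed: emit a node
            oi, ov = i - 1, x[i - 1]
            nodes.append((oi, ov))
            hi, lo = float("inf"), float("-inf")
            s_hi = (x[i] + eps - ov) / (i - oi)
            s_lo = (x[i] - eps - ov) / (i - oi)
        hi, lo = min(hi, s_hi), max(lo, s_lo)
    nodes.append((len(x) - 1, x[-1]))
    return nodes
```

The node indices and values, first-differenced as described above, are what the adaptive coder then compresses.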