
Keyword Search Result

[Keyword] OMP (3945 hits)

3761-3780 hits (of 3945)

  • Multiwave: A Wavelet-Based ECG Data Compression Algorithm

    Nitish V. THAKOR  Yi-chun SUN  Hervé RIX  Pere CAMINAL  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1462-1469

    The MultiWave data compression algorithm is based on the multiresolution wavelet technique for decomposing electrocardiogram (ECG) signals into their coarse and successively more detailed components. At each successive resolution, or scale, the data are convolved with appropriate filters and the alternate samples are then discarded. This procedure yields a data compression rate that increases on a dyadic scale with successive wavelet resolutions. ECG signals recorded from patients with normal sinus rhythm, supraventricular tachycardia, and ventricular tachycardia are analyzed. The data compression rates and the percentage distortion levels at each resolution are obtained. The performance of the MultiWave data compression algorithm is shown to be superior to that of another algorithm (the Turning Point algorithm) that also carries out data reduction on a dyadic scale.
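
    The dyadic decomposition described above (convolve with filters at each scale, then discard alternate samples) can be illustrated with a short sketch. The Python fragment below is a generic illustration, not the paper's implementation; the Haar filter pair and the function names are assumptions made for the example.

        # One dyadic wavelet decomposition step per level: convolve with
        # low- and high-pass filters, then keep every other sample.
        # The Haar filters are an illustrative choice, not the paper's filters.
        import numpy as np

        LOW = np.array([1.0, 1.0]) / np.sqrt(2.0)    # coarse (approximation) filter
        HIGH = np.array([1.0, -1.0]) / np.sqrt(2.0)  # detail filter

        def decompose(signal, levels=3):
            """Return (coarse, details) after `levels` dyadic decomposition steps."""
            details = []
            coarse = np.asarray(signal, dtype=float)
            for _ in range(levels):
                details.append(np.convolve(coarse, HIGH, mode="full")[1::2])
                coarse = np.convolve(coarse, LOW, mode="full")[1::2]
            return coarse, details

        ecg = np.sin(np.linspace(0, 8 * np.pi, 256))   # stand-in for an ECG trace
        coarse, details = decompose(ecg)
        print("input samples:", len(ecg), "coarsest-scale samples:", len(coarse))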

  • Load Balancing Based on Load Coherence between Continuous Images for an Object-Space Parallel Ray-Tracing System

    Hiroaki KOBAYASHI  Hideyuki KUBOTA  Susumu HORIGUCHI  Tadao NAKAMURA  

     
    PAPER-Computer Systems

    Vol: E76-D No:12  Page(s): 1490-1499

    The ray-tracing algorithm can synthesize very realistic images. However, ray tracing is very time consuming. To solve this problem, a load balancing strategy using temporal coherence between images in an animation is presented for balancing computational loads among processing elements of a parallel processing system. Our parallel processing model is based on a space subdivision method for the ray-tracing algorithm. A subdivided object space is distributed among processing elements of the parallel system. To clarify the effectiveness of the load balancing strategy, we examine the system performance by computer simulation.

  • Data Compression of ECG Based on the Edit Distance Algorithms

    Hiroyoshi MORITA  Kingo KOBAYASHI  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1443-1453

    A method for the compression of ECG data is presented. The method is based on the edit distance algorithm developed for file comparison problems. The edit distance between two sequences of symbols is defined as the number of edit operations required to transform one sequence into the other. We adopt the edit distance algorithm to obtain a list of edit operations, called an edit script, which transforms a reference pulse into a pulse selected from the ECG data. If the decoder knows the same reference, it can reproduce the original pulse from the edit script alone. The edit script is expected to be smaller than the original pulse when the two pulses look alike, and thereby we can reduce the amount of space needed to store the data. Applying the proposed scheme to raw ECG data, we have achieved a high compression ratio of about 14:1 without losing the significant features of the signals.
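
    A minimal sketch of the encode/decode idea follows: compute an edit script from the reference pulse to the observed pulse, and reproduce the pulse from the reference plus the script. The operation names and the dynamic-programming formulation are generic assumptions for illustration, not the authors' encoding.

        # Sketch: encode a sequence as an edit script (keep/subst/insert/delete)
        # against a reference, then reproduce it from reference + script.
        def edit_script(ref, seq):
            n, m = len(ref), len(seq)
            d = [[0] * (m + 1) for _ in range(n + 1)]
            for i in range(n + 1):
                d[i][0] = i
            for j in range(m + 1):
                d[0][j] = j
            for i in range(1, n + 1):
                for j in range(1, m + 1):
                    cost = 0 if ref[i - 1] == seq[j - 1] else 1
                    d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1, d[i - 1][j - 1] + cost)
            ops, i, j = [], n, m        # backtrace into a list of operations
            while i > 0 or j > 0:
                if i > 0 and j > 0 and ref[i - 1] == seq[j - 1] and d[i][j] == d[i - 1][j - 1]:
                    ops.append(("keep",))
                    i, j = i - 1, j - 1
                elif i > 0 and j > 0 and d[i][j] == d[i - 1][j - 1] + 1:
                    ops.append(("subst", seq[j - 1]))
                    i, j = i - 1, j - 1
                elif j > 0 and d[i][j] == d[i][j - 1] + 1:
                    ops.append(("insert", seq[j - 1]))
                    j -= 1
                else:
                    ops.append(("delete",))
                    i -= 1
            return list(reversed(ops))

        def apply_script(ref, ops):
            out, i = [], 0
            for op in ops:
                if op[0] == "keep":
                    out.append(ref[i]); i += 1
                elif op[0] == "subst":
                    out.append(op[1]); i += 1
                elif op[0] == "insert":
                    out.append(op[1])
                else:                       # delete
                    i += 1
            return out

        reference = [0, 1, 4, 9, 4, 1, 0]
        pulse = [0, 1, 5, 9, 4, 1, 0, 0]
        script = edit_script(reference, pulse)
        assert apply_script(reference, script) == pulse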

  • Electrocardiogram Data Compression by the Oslo Algorithm and DP Matching

    Yoshiaki SAITOH  Yasushi HASEGAWA  Tohru KIRYU  Jun'ichi HORI  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1411-1418

    We use the B-spline function and apply the Oslo algorithm to minimize the number of control points in electrocardiogram (ECG) waveform compression under the limitation of evaluation indexes. The method uses dynamic programming matching to transfer the control points of a reference ECG waveform to the succeeding ECG waveforms, which reduces the execution time for beat-to-beat processing. We also reduced the processing time at several compression stages. When the percent normalized root mean square difference is around 10, our method gives the highest compression ratio at a sampling frequency of 250 Hz.

  • Performance Evaluation of ECG Compression Algorithms by Reconstruction Error and Diagnostic Response

    Kohro TAKAHASHI  Satoshi TAKEUCHI  Norihito OHSAWA  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1404-1410

    An electrocardiogram (ECG) data compression algorithm using a polygonal approximation and the template beat variation method (TBV) has been evaluated by reconstruction error and automatic interpretation. The algorithm combining SAPA3 with TBV (SAPA3/TBV) has superior compression performance in terms of PRD and compression ratio. The reconstruction errors, defined as the differences in amplitude and time duration between the original ECG and the reconstructed one, are large for waves with small amplitude and/or gradual slopes, such as the P wave. Tracings rebuilt from the compressed ECG have been analyzed using the automatic interpretation program, and the diagnostic answers with the related measurements have been compared with the results obtained on the original ECG. The data compression algorithms (SAPA3 and SAPA3/TBV) have been tested on 100 cases in the database produced by CSE. The reconstruction errors are related to the diagnostic errors. The TBV method suppresses these errors, and more than 90% diagnostic agreement at the error limit of 15 µV can be obtained.

  • Data Compression of Long Time ECG Recording Using BP and PCA Neural Networks

    Yasunori NAGASAKA  Akira IWATA  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1434-1442

    The performance of BPNN (a neural network trained by back propagation) and PCANN (a neural network that computes principal component analysis) for ECG data compression has been investigated from several points of view. We have compared them with an existing data compression method, TOMEK. We used the MIT/BIH arrhythmia database as ECG data. Both BPNN and PCANN showed better results than TOMEK. They achieved 1.1 to 1.4 times higher compression than TOMEK for the same accuracy of reproduction (13.0% PRD and 99.0% CC). While PCANN showed better learning ability than BPNN in a simple learning task, BPNN was slightly better than PCANN regarding compression rates. Observing the reproduced waveforms, BPNN and PCANN had almost the same performance, and both were superior to TOMEK. The following characteristics were obtained from the experiments. Since PCANN is sensitive to the learning rate, we had to control the learning rate precisely while learning was in progress. We also found that PCANN tends to need a larger number of iterations than BPNN to reach the same performance. PCANN showed better learning ability than BPNN; however, the total learning cost was almost the same for BPNN and PCANN owing to the large number of iterations. We analyzed the connection weight patterns. Since PCANN has a clear mathematical background, its behavior can be explained theoretically. BPNN sometimes generated connection weights that were similar to the principal components. We suppose that BPNN may occasionally generate such patterns, and performs well when doing so. Finally, we conclude as follows. Although the difference in performance is small, it was always observed, and PCANN never exceeded BPNN. When ease of analysis or the relation to mathematics is important, PCANN is suitable. It will be useful for the study of recorded data, such as statistical analysis.
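
    The principal-component idea behind PCANN can be sketched without a neural network: project fixed-length beats onto a few leading components and store only the projection coefficients. The sketch below uses a plain SVD on synthetic beats; all names and parameter values are illustrative assumptions, not the paper's setup.

        # Rough sketch of PCA-based beat compression: keep only the leading
        # principal components of a matrix of fixed-length beats.
        import numpy as np

        def pca_compress(beats, n_components=8):
            """beats: (n_beats, beat_len) array. Returns (mean, basis, coeffs)."""
            mean = beats.mean(axis=0)
            centered = beats - mean
            _, _, vt = np.linalg.svd(centered, full_matrices=False)
            basis = vt[:n_components]          # (n_components, beat_len)
            coeffs = centered @ basis.T        # (n_beats, n_components) -- stored values
            return mean, basis, coeffs

        def pca_reconstruct(mean, basis, coeffs):
            return coeffs @ basis + mean

        rng = np.random.default_rng(0)
        beats = np.sin(np.linspace(0, 2 * np.pi, 200)) + 0.05 * rng.standard_normal((50, 200))
        mean, basis, coeffs = pca_compress(beats)
        recon = pca_reconstruct(mean, basis, coeffs)
        prd = 100 * np.linalg.norm(beats - recon) / np.linalg.norm(beats)
        print(f"stored values per beat: {coeffs.shape[1]} of {beats.shape[1]}, PRD = {prd:.2f}%")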

  • The Application of a Data-Driven Processor to Automotive Engine Control

    Kenji SHIMA  Koichi MUNAKATA  Shoichi WASHINO  Shinji KOMORI  Yasuya KAJIWARA  Setsuhiro SHIMOMURA  

     
    PAPER

    Vol: E76-C No:12  Page(s): 1794-1803

    Automotive electronics technology has become extremely advanced in areas such as engine control and anti-skid brake control. These control systems require highly advanced control performance and high-speed microprocessors that can rapidly execute interrupt processing. Automotive engine control systems are now widely used in cars with high-speed, high-power engines. At present, it is generally acknowledged that such high-performance engine control for 10,000 rpm, 12-cylinder engines requires three or more conventional microprocessors. We fabricated an engine control system prototype incorporating a data-driven processor under development and installed it in an actual automobile. In this paper, the characteristics of the engine control program and simulation results are discussed first. Second, the structure of the engine control system prototype and its control performance in the actual automobile are shown. Finally, from the results of software simulation and the installation of the engine control system prototype with the data-driven processor, we conclude that a single-chip data-driven microprocessor can control a high-speed, high-power, 10,000 rpm, 12-cylinder automobile engine.

  • Fundamentals of the Decision of Optimum Factors in the ECG Data Compression

    Masa ISHIJIMA  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1398-1403

    This paper describes and analyzes several indices for assessing ECG data compression algorithms, such as the cross correlation (CC), the percent root mean square difference (PRD), and a new measure, the standardized root mean square difference (SRD). Although these indices are helpful for objectively evaluating the algorithms, visual examination of the reconstructed waveform is indispensable for deciding the optimal compression ratio. This paper presents the clinical significance of selected waveforms that are prone to be distorted or neglected in the restored waveforms but are crucial for cardiologists to diagnose the patient. A database of electrocardiograms is also proposed for the comparative evaluation of compression algorithms.
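
    For reference, two of the indices named above are commonly computed as follows; these are the usual definitions and may differ in detail (for example, whether the mean is removed in PRD) from the definitions used in the paper.

        # Common reconstruction-quality indices for ECG compression:
        # percent root mean square difference (PRD) and cross correlation (CC).
        import numpy as np

        def prd(original, reconstructed):
            original = np.asarray(original, float)
            reconstructed = np.asarray(reconstructed, float)
            return 100.0 * np.sqrt(np.sum((original - reconstructed) ** 2)
                                   / np.sum(original ** 2))

        def cc(original, reconstructed):
            x = np.asarray(original, float) - np.mean(original)
            y = np.asarray(reconstructed, float) - np.mean(reconstructed)
            return float(np.sum(x * y) / np.sqrt(np.sum(x ** 2) * np.sum(y ** 2)))

        x = np.sin(np.linspace(0, 2 * np.pi, 100))
        y = x + 0.02 * np.cos(np.linspace(0, 20 * np.pi, 100))   # small distortion
        print(f"PRD = {prd(x, y):.2f}%, CC = {cc(x, y):.4f}")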

  • Efficient Application of Coding Technique for Data Compression of ECG

    Susumu TSUDA  Koichi SHIMIZU  Goro MATSUMOTO  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1425-1433

    A technique was developed to reduce ECG data efficiently within a controlled accuracy. The sampled and digitized data of the original ECG waveform are transformed in three major processes: calculation of a beat-to-beat variation, a polygonal approximation, and calculation of the difference between consecutive node points. Then, an adaptive coding technique is applied to minimize redundancies in the data. It was demonstrated that an ECG waveform sampled at 200 Hz, 10 bit/sample, 5 µV/digit could be reduced with a bit reduction ratio of about 10% and within a reconstruction error of about 2.5%. A polygonal approximation method, called MSAPA, was newly developed as a modification of the well-known method SAPA. It was shown that MSAPA gave better reduction efficiency and smaller reconstruction error than SAPA when applied to the beat-to-beat variation waveform. The importance of low-pass filtering as preprocessing for the polygonal approximation was confirmed with concrete examples. The efficiency of the proposed technique was compared with the case in which the polygonal approximation was not used. Through these analyses, it was found that the redundancy elimination of the coding technique worked effectively in the proposed technique.
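
    Two of the stages described, polygonal approximation and differencing of consecutive node values, can be sketched as below. The greedy tolerance test is a generic stand-in, not the MSAPA method, and all names and thresholds are assumptions for illustration.

        # Sketch: tolerance-driven polygonal approximation followed by first
        # differencing of the node amplitudes (small integers, cheap to code).
        import numpy as np

        def polygonal_nodes(signal, tol=5.0):
            """Keep a node whenever the straight line from the last node would
            miss any intermediate sample by more than `tol`."""
            nodes = [0]
            for i in range(2, len(signal)):
                start = nodes[-1]
                line = np.linspace(signal[start], signal[i], i - start + 1)
                if np.max(np.abs(line - signal[start:i + 1])) > tol:
                    nodes.append(i - 1)
            nodes.append(len(signal) - 1)
            return nodes

        signal = np.round(500 * np.exp(-np.linspace(-3, 3, 200) ** 2))  # QRS-like bump
        nodes = polygonal_nodes(signal)
        amplitudes = signal[nodes].astype(int)
        diffs = np.diff(amplitudes)
        print(len(signal), "samples ->", len(nodes), "nodes; first diffs:", diffs[:5])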

  • A Specific Design Approach for Automotive Microcomputers

    Nobusuke ABE  Shozo SHIROTA  

     
    PAPER

    Vol: E76-C No:12  Page(s): 1788-1793

    When used for automotive applications, microcomputers have to meet two requirements more demanding than those for general use. One of these requirements is to respond to external events within a time scale of microseconds; the other is the high quality and high reliability necessary for the severe environmental operating conditions and the ambitious market requirements inherent to automotive applications. These needs, especially the latter, have been addressed by further refinement of each basic technology involved in semiconductor manufacturing. At the same time, various logic parts have been built into the microcomputer. This paper deals with several design approaches to the high-quality, high-reliability objective. First, testability improvement by a logical separation method, focusing on the logic simulation model for generating test vectors, which cuts the time required for test vector development in half. Next, noise suppression methods to achieve electromagnetic compatibility (EMC). Then, a simplified memory transistor analysis to evaluate the V/I characteristics directly via external pins, without opening the mold seal, removing the passivation, or placing a probe needle on the chip. Finally, increased reliability of on-chip EPROM using a special circuit that raises the threshold value by approximately 1 V compared to EPROMs without such a circuit.

  • A CMOS Time-to-Digital Converter LSI with Half-Nanosecond Resolution Using a Ring Gate Delay Line

    Takamoto WATANABE  Yasuaki MAKINO  Yoshinori OHTSUKA  Shigeyuki AKITA  Tadashi HATTORI  

     
    PAPER

    Vol: E76-C No:12  Page(s): 1774-1779

    The development of highly accurate and durable control systems is becoming a must for today's high-performance automobiles. For example, it is necessary to upgrade today's materials and methods, creating more sensitive sensors, higher-speed processors, and more accurate actuators that are also more durable. Thus, a CMOS time-to-digital converter LSI with half-nanosecond resolution, which handles only pulse signals, was developed using 1.5 µm CMOS technology. The new signal detecting circuit, 1.1 mm2 in size, converts time to numerical values over a wide measurement range (13 bits). The compact digital circuit employs a newly developed "ring gate delay system". Being fully digital, the circuit within the LSI is highly durable, which allows it to be used even under severe conditions (for example, an operating ambient temperature of 130°C). In order to measure time accurately, a method of correcting the variation of measurement time data employing a real-time, fully digital conversion circuit is described. This method allows fully automatic correction with a microcomputer, so no manual adjustment is required. In addition to sensor circuit applications, the LSI has great potential for application-specific integrated circuits (ASICs), for example as a function cell providing a completely new method of measuring time.

  • Data Compression of Ambulatory ECG by Using Multi-Template Matching and Residual Coding

    Takanori UCHIYAMA  Kenzo AKAZAWA  Akira SASAMORI  

     
    PAPER

    Vol: E76-D No:12  Page(s): 1419-1424

    This paper proposes a new data compression algorithm for ambulatory ECG in which no distortion is introduced into the reconstructed signal, templates are constructed selectively from detected beats, and categorized ECG morphologies (templates) can be displayed when decoding the compressed data. The algorithm consists of subtracting a best-fit template from the detected beat with the aid of multi-template matching, first differencing of the resulting residuals, and modified Huffman coding. The algorithm was evaluated in terms of bit rate by applying it to ECG signals from the American Heart Association (AHA) database. The following features were observed. (1) The decompressed signal coincides completely with the original sampled ECG data. (2) The bit rate is approximately 800 bps at the appropriate threshold of 50-60 units (1 unit ≈ 2.4 µV) for the template matching. This bit rate is almost the same as that of direct compression (encoding the first-differenced original signal). (3) The decompressed templates make it easy to classify the templates into normal and abnormal beats; this can be done without fully decompressing the ECG signal.
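
    The lossless part of this pipeline, subtracting the best-fitting template and first-differencing the residual before entropy coding, can be sketched as follows. The sum-of-absolute-differences matching criterion is an assumption for illustration, and the modified Huffman stage is omitted.

        # Sketch: pick the best-fitting template for a detected beat, subtract
        # it, and first-difference the residual; decoding reverses the steps
        # exactly (the Huffman stage that would follow is omitted here).
        import numpy as np

        def best_template(beat, templates):
            errors = [np.sum(np.abs(beat - t)) for t in templates]
            return int(np.argmin(errors))

        def encode_beat(beat, templates):
            k = best_template(beat, templates)
            residual = beat - templates[k]
            return k, np.diff(residual, prepend=0)     # first difference

        def decode_beat(k, diffs, templates):
            return np.cumsum(diffs) + templates[k]     # exact reconstruction

        templates = [np.zeros(8, dtype=int), np.array([0, 2, 8, 20, 8, 2, 0, 0])]
        beat = np.array([0, 2, 9, 21, 8, 2, 0, 0])
        k, diffs = encode_beat(beat, templates)
        assert np.array_equal(decode_beat(k, diffs, templates), beat)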

  • Tree-Based Approaches to Automatic Generation of Speech Synthesis Rules for Prosodic Parameters

    Yoichi YAMASHITA  Manabu TANAKA  Yoshitake AMAKO  Yasuo NOMURA  Yoshikazu OHTA  Atsunori KITOH  Osamu KAKUSHO  Riichiro MIZOGUCHI  

     
    PAPER

    Vol: E76-A No:11  Page(s): 1934-1941

    This paper describes the automatic generation of speech synthesis rules that predict a stress level for each bunsetsu in long noun phrases. The rules are inductively inferred from a large amount of speech data by using two kinds of tree-based methods, the conventional decision tree and the SBR-tree method. The rule sets automatically generated by the two methods have almost the same performance and decrease the prediction error of the accent component value from 23 Hz to about 14 Hz. The rate of correct reproduction of the change for adjacent bunsetsu pairs is also used as a measure for evaluating the generated rule sets, and they correctly reproduce about 80% of the changes. The effectiveness of the rule sets is verified through a listening test. With regard to the comprehensibility of the generated rules, the rules produced by the SBR-tree method are very compact, easy for human experts to interpret, and consistent with earlier studies.

  • A Framework for a Responsive Network Protocol for Internetworking Environments

    Atsushi SHIONOZAKI  Mario TOKORO  

     
    PAPER

    Vol: E76-D No:11  Page(s): 1365-1374

    A responsive network architecture is essential in future open distributed systems. In this paper, a framework that provides the foundations for a responsive network architecture for an internetworking environment is proposed. It is called the Virtually Separated Link (VSL) model. By incorporating this framework, communication of both data and control information can be completed in bounded time. Consequently, a protocol can initiate a recovery mechanism in bounded time, or allow an application to do the same. Its functionalities augment existing resource reservation protocols that support multimedia communication. An overview of a real-time network protocol that is based on this framework is also presented.

  • Changing Operational Modes in the Context of Pre Run-Time Scheduling

    Gerhard FOHLER  

     
    PAPER

    Vol: E76-D No:11  Page(s): 1333-1340

    Typical processes controlled by hard real-time computer systems undergo several mutually exclusive modes of operation. By deterministically switching among a number of static schedules, a pre run-time scheduled system is able to adapt to changing environmental situations. This paper presents concepts for the specification of mode changes, the construction of static schedules for modes and transitions, and the timely run-time execution of mode changes. We propose concepts for mode changes in the context of pre run-time scheduled hard real-time systems. While MARS is used to illustrate the concepts' application, they are applicable to a variety of systems. Our methods adhere closely to those established for single modes. By decomposing the system into a set of disjoint modes, the design process and its comprehension are facilitated, testing efforts are reduced significantly, and solutions are enabled that do not exist if all system activities of all modes are combined into a single schedule.

  • A Reconfigurable Parallel Processor Based on a TDLCA Model

    Masahiro TSUNOYAMA  Masataka KAWANAKA  Sachio NAITO  

     
    PAPER

    Vol: E76-D No:11  Page(s): 1358-1364

    This paper proposes a reconfigurable parallel processor based on a two-dimensional linear cellular automaton model. A processor based on this model can be reconfigured quickly by exploiting the characteristics of the automaton used as its model. Moreover, the processor has a shorter data path length between processing elements than a processor based on the one-dimensional linear cellular automaton model, which has already been discussed. The processing elements of the processor based on the two-dimensional linear cellular automaton model are regarded as cells, and the operational states of the processor are treated as the states of the automaton. When faults are detected, the processor can be reconfigured by changing its state under the state transition function of the processor, which is determined by the weighting function of the automaton model. The processor can be reconfigured within the clock period required for making a state transition. This processor is extremely effective for real-time data processing systems requiring high reliability.
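
    As a toy illustration of the underlying model, one state transition of a two-dimensional linear cellular automaton can be written as a weighted sum of neighbouring cell states. The neighbourhood, weights, and modulus below are illustrative assumptions, not the weighting function used in the paper.

        # Toy sketch of one state transition of a 2-D linear cellular automaton:
        # each cell's next state is a weighted sum (mod 2) of its von Neumann
        # neighbours; wrap-around boundaries via np.roll.
        import numpy as np

        def step(state, weights=(1, 1, 1, 1, 1), modulus=2):
            """state: 2-D integer array; weights for (self, up, down, left, right)."""
            w_self, w_up, w_down, w_left, w_right = weights
            nxt = (w_self * state
                   + w_up * np.roll(state, 1, axis=0)
                   + w_down * np.roll(state, -1, axis=0)
                   + w_left * np.roll(state, 1, axis=1)
                   + w_right * np.roll(state, -1, axis=1))
            return nxt % modulus

        state = np.zeros((4, 4), dtype=int)
        state[1, 2] = 1            # a single active cell
        print(step(state))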

  • A Consensus-Based Model for Responsive Computing

    Miroslaw MALEK  

     
    INVITED PAPER

    Vol: E76-D No:11  Page(s): 1319-1324

    The emerging discipline of responsive systems demands fault-tolerant and real-time performance in uniprocessor, parallel, and distributed computing environments. A new proposal for a responsiveness measure is presented, followed by an introduction of a model for responsive computing. The model, called CONCORDS (CONsensus/COmputation for Responsive Distributed Systems), is based on the integration of various forms of consensus and computation (progress or recovery). The consensus tasks include clock synchronization, diagnosis, checkpointing, scheduling, and resource allocation.

  • An Investigation on Space-Time Tradeoff of Routing Schemes in Large Computer Networks

    Kenji ISHIDA  

     
    PAPER

    Vol: E76-D No:11  Page(s): 1341-1347

    The space-time tradeoff is a fundamental issue in designing a fault-tolerant real-time (called responsive) system. Routing a message in large computer networks is efficient when each node knows the full topology of the whole network. However, in hierarchical routing schemes, no node knows the full topology. In this paper, a tradeoff between the optimality of path length (message delay: time) and the amount of topology information held in each node (routing table size: space) is presented. The schemes analyzed include the K-scheme (by Kamoun and Kleinrock), the G-scheme (by Garcia and Shacham), and the I-scheme (by the authors). The analysis is performed by simulation experiments. The results show that, with respect to average path length, the I-scheme is superior to both the K-scheme and the G-scheme, and that the K-scheme is better than the G-scheme. Additionally, the average path length in the I-scheme is about 20% longer than the optimal path length. On the other hand, for routing table size, the three schemes rank in the reverse order. However, with respect to the order of the routing table size, the schemes have the same complexity, O(log n), where n is the number of nodes in the network.
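
    A back-of-the-envelope illustration of the space side of the tradeoff: a flat (full-topology) scheme keeps one routing entry per destination, while a k-ary hierarchical scheme keeps roughly k entries per level over about log_k n levels. The formula is the standard estimate for hierarchical routing, not a figure from the paper.

        # Compare routing table sizes: flat scheme (one entry per node) versus a
        # k-ary hierarchical scheme (about k entries per level, log_k(n) levels).
        import math

        def flat_table_size(n):
            return n

        def hierarchical_table_size(n, k=4):
            levels = max(1, math.ceil(math.log(n, k)))
            return k * levels

        for n in (64, 1024, 16384):
            print(n, flat_table_size(n), hierarchical_table_size(n))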

  • A Note on One-Way Multicounter Machines and Cooperating Systems of One-Way Finite Automata

    Yue WANG  Katsushi INOUE  Itsuo TAKANAMI  

     
    LETTER-Automaton, Language and Theory of Computing

    Vol: E76-D No:10  Page(s): 1302-1306

    For each pair of positive integers r, s, let [1DCM(r)-Time(n^s)] ([1NCM(r)-Time(n^s)]) and [1DCM(r)-Space(n^s)] ([1NCM(r)-Space(n^s)]) be the classes of languages accepted in time n^s and in space n^s, respectively, by one-way deterministic (nondeterministic) r-counter machines. We show that for each X ∈ {D, N}, [1XCM(r)-Time(n^s)] ⊊ [1XCM(r+1)-Time(n^s)] and [1XCM(r)-Space(n^s)] ⊊ [1XCM(r+1)-Space(n^s)]. We also investigate the relationships between one-way multicounter machines and cooperating systems of one-way finite automata. In particular, it is shown that one-way (one-)counter machines and cooperating systems of two one-way finite automata are equivalent in accepting power.

  • COACH: A Computer Aided Design Tool for Computer Architects

    Hiroki AKABOSHI  Hiroto YASUURA  

     
    PAPER

    Vol: E76-A No:10  Page(s): 1760-1769

    A modern architect cannot design a high-performance computer architecture without considering all factors of performance, from the hardware level (logic/layout design) to the system level (application programs, operating systems, and compilers). For computer architecture design, there are few practical CAD tools that support the design activities of the architect. In this paper, we propose a CAD tool, called COACH, for computer architecture design. COACH supports architecture design from the hardware level to the system level. To make a high-performance general-purpose computer system, the architect evaluates system-level performance as well as hardware-level performance. To evaluate hardware-level performance accurately, logic/layout synthesis tools and simulators are used: the synthesis tools translate the architecture design into logic circuits and a layout pattern, and the simulator provides accurate information on hardware-level performance, such as clock frequency, the number of transistors, and power consumption. To evaluate system-level performance, a compiler generator is introduced. The compiler generator generates a compiler for a programming language from the description of the architecture design. The designed architecture is simulated at the behavior level with programs compiled by this compiler, and the architect obtains information on system-level performance, such as the number of program execution steps. From both hardware-level and system-level performance, the architect can evaluate and revise his/her architecture, considering it from the hardware level to the system level. In this paper, we propose a new design methodology that uses (1) logic/layout synthesis tools and simulators as tools for architecture design and (2) a compiler generator for system-level evaluation. COACH, a CAD system based on this methodology, is discussed, and a prototype of COACH has been implemented. Using this design methodology, two processors were designed. The results show that the proposed design methodology is effective in architecture design.
