IEICE TRANSACTIONS on Fundamentals

Volume E83-A No.8  (Publication Date:2000/08/25)

    Special Section on Digital Signal Processing
  • FOREWORD

    Akinori NISHIHARA  

     
    FOREWORD

      Page(s):
    1497-1497
  • Blind Separation of Sources: Methods, Assumptions and Applications

    Ali MANSOUR  Allan Kardec BARROS  Noboru OHNISHI  

     
    SURVEY PAPER

      Page(s):
    1498-1512

    The blind separation of sources is a recent and important problem in signal processing. It has been studied by many authors since 1984, and many algorithms have been proposed. In this paper, the description of the problem, its assumptions, its current applications, and some algorithms and ideas are discussed.

  • The Synthesis of Low-Peak Orthogonal-Base-Set Sequences Using Trigonometric Function Aliasing

    Takafumi HAYASHI  William L. MARTENS  

     
    PAPER-Theory of Signals

      Page(s):
    1513-1522

    This paper presents a new technique for the synthesis of orthogonal-base-set sequences suitable for applications requiring sets of uncorrelated pseudo-white-noise sources. The synthesized sequences (vectors) are orthogonal to each other, and each sequence also has a flat power spectrum and a low peak factor. In order to construct the orthogonal-base-set sequences, the new application of the ta-sequence (trigonometric function aliasing sequence) introduced in this paper uses Latin squares and Walsh-Hadamard sequences. The ta-sequence itself is a very new concept, and the method presented here provides the means for generating various orthogonal-base-set sequences at the sizes required for such applications as system measurement (needing uncorrelated test signals), pseudo-noise synthesis for spread spectrum communication, and audio signal processing (needing synthesis of stereo or multichannel signals from mono sources).
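
    As a point of reference for the orthogonality and peak-factor properties discussed above, the following minimal NumPy sketch builds Sylvester-type Walsh-Hadamard sequences and checks both properties; the ta-sequence and Latin-square construction that are the paper's actual contribution are not reproduced here.

      # Hedged illustration only: Sylvester-type Walsh-Hadamard sequences and
      # checks of mutual orthogonality and peak (crest) factor.
      import numpy as np

      def hadamard(order):
          """Sylvester construction; order must be a power of two."""
          H = np.array([[1.0]])
          while H.shape[0] < order:
              H = np.block([[H, H], [H, -H]])
          return H

      H = hadamard(8)                             # 8 mutually orthogonal +/-1 sequences
      print(np.allclose(H @ H.T, 8 * np.eye(8)))  # True: rows are orthogonal
      rms = np.sqrt(np.mean(H**2, axis=1))
      print(np.max(np.abs(H), axis=1) / rms)      # peak factor 1.0 for +/-1 sequences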

  • A Design of Near Perfect Reconstruction Linear-Phase QMF Banks Based on Hybrid Steepest Descent Method

    Hiroshi HASEGAWA  Isao YAMADA  Kohichi SAKANIWA  

     
    PAPER-Filter Banks

      Page(s):
    1523-1530

    In this paper, we propose a projection-based design of near perfect reconstruction QMF banks. An advantage of this method is that additional design specifications are easily implemented by defining new convex sets. In applying convex projection techniques, the main difficulty is how to approximate the design specifications by closed convex sets. In this paper, by introducing the notion of a Magnitude Product Space, in which a pair of magnitude responses of the analysis filters is expressed as a point, we approximate the design requirements of QMF banks by multiple closed convex sets in this space. The proposed method iteratively applies a convex projection technique, the Hybrid Steepest Descent Method, to find a point corresponding to the optimal analysis filters at each stage, where the closed convex sets are dynamically improved. Design examples show that the proposed method leads to significant improvement over conventional design methods.

  • Studies on the Convergence Speed of Over-Sampled Subband Adaptive Digital Filters

    Shuichi OHNO  

     
    PAPER-Adaptive Signal Processing

      Page(s):
    1531-1538

    To evaluate or compare the convergence speed of adaptive digital filters (ADFs) with the least mean square (LMS) algorithm, the condition numbers of the correlation matrices of the tap-input vectors are often used. In this paper, however, the comparison of the conventional fullband ADF and the subband ADF based on their condition numbers is shown to be invalid. In some cases, the over-sampled subband ADF converges faster than the fullband ADF, although the former has larger condition numbers. To explain this phenomenon, an expression for the convergence behavior of the subband ADF and simulation results are provided.
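
    For readers who want to reproduce the condition-number comparison that the abstract argues can be misleading, a minimal NumPy sketch of the eigenvalue spread of a tap-input correlation matrix is given below; the AR(1) input model and filter length are illustrative assumptions, not taken from the paper.

      # Hedged sketch: condition number (eigenvalue spread) of the tap-input
      # correlation matrix for an assumed AR(1) input.
      import numpy as np

      rho, taps = 0.9, 8                         # assumed input correlation and filter length
      r = rho ** np.arange(taps)                 # autocorrelation lags of a unit-variance AR(1) process
      R = np.array([[r[abs(i - j)] for j in range(taps)] for i in range(taps)])
      print(np.linalg.cond(R))                   # large value = slow LMS convergence (classical argument)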

  • Analysis on Convergence Property of INLMS Algorithm Suitable for Fixed Point Processing

    Kensaku FUJII  Juro OHGA  

     
    PAPER-Adaptive Signal Processing

      Page(s):
    1539-1544

    The individually normalized least mean square (INLMS) algorithm has been proposed as an adaptive algorithm suitable for fixed-point processing. The convergence property of the INLMS algorithm, however, has not yet been analyzed sufficiently. This paper first derives an equation describing the convergence property by exploiting the technique of expressing the INLMS algorithm as a first-order infinite impulse response (IIR) filter. According to the equation derived in this way, the decreasing process of the estimation error is represented as the response of another IIR filter expression. Using this representation, the paper then derives the convergence condition of the INLMS algorithm as the range of the step size that makes the latter IIR filter a low-pass filter. The paper also derives the step size maximizing the convergence speed as the maximum coefficient of the latter IIR filter and finally clarifies the range of the step size recommended in practical system design.
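
    The exact INLMS recursion is defined in the paper; purely as an illustration of the kind of per-coefficient normalization involved, a hedged NumPy sketch of a generic individually normalized LMS step is shown below (the normalization used here is an assumption, not the paper's algorithm).

      # Hedged sketch (assumption): per-coefficient normalized LMS step.
      import numpy as np

      def inlms_like_step(w, x, d, mu, p, beta=0.99, eps=1e-8):
          """w: weights, x: tap-input vector, d: desired sample, p: per-tap power estimate."""
          e = d - w @ x                          # a priori estimation error
          p = beta * p + (1.0 - beta) * x**2     # running power of each tap input
          w = w + mu * e * x / (p + eps)         # each coefficient normalized individually
          return w, p, e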

  • Fast Implementation Technique for Improving Throughput of RLS Adaptive Filters

    Kiyoshi NISHIKAWA  Hitoshi KIYA  

     
    PAPER-Adaptive Signal Processing

      Page(s):
    1545-1550

    This paper proposes a fast implementation technique for RLS adaptive filters. The technique has an adjustable parameter to trade off the throughput against the rate of convergence of the filter according to the application. The conventional methods for improving the throughput do not have this kind of adjustability, so the proposed technique will expand the range of applications for the RLS algorithm. We show that the improvement in throughput can be easily achieved by rearranging the formula of the RLS algorithm and that no faster PEs are needed for the improvement.
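
    For comparison, a minimal NumPy sketch of the conventional exponentially weighted RLS recursion (the baseline whose formula the paper rearranges) is given below; the paper's rearranged, high-throughput form is not shown.

      # Conventional RLS update (baseline form, not the paper's rearrangement).
      import numpy as np

      def rls_step(w, P, x, d, lam=0.99):
          """w: weights, P: inverse correlation estimate, x: tap inputs, d: desired sample."""
          Px = P @ x
          k = Px / (lam + x @ Px)              # gain vector
          e = d - w @ x                        # a priori error
          w = w + k * e
          P = (P - np.outer(k, Px)) / lam      # update of the inverse correlation matrix
          return w, P, e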

  • Analysis of the Sign-Sign Algorithm Based on Gaussian Distributed Tap Weights

    Shin'ichi KOIKE  

     
    PAPER-Adaptive Signal Processing

      Page(s):
    1551-1558

    In this paper, a new set of difference equations is derived for transient analysis of the convergence of adaptive FIR filters using the Sign-Sign Algorithm with a Gaussian reference input and additive Gaussian noise. The analysis is based on the assumption that the tap weights are jointly Gaussian distributed. The residual mean squared error after convergence and simpler approximate difference equations are further developed. Experimental results exhibit good agreement between the theoretically calculated convergence and that of simulation for a wide range of adaptive filter parameter values.
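
    For reference, the sign-sign adaptation step whose convergence is analyzed here is extremely simple; a minimal NumPy sketch is shown below (the Gaussian tap-weight analysis itself is, of course, in the paper).

      # Sign-sign LMS step: only the signs of the error and tap inputs are used.
      import numpy as np

      def sign_sign_step(w, x, d, mu):
          e = d - w @ x                        # a priori error
          w = w + mu * np.sign(e) * np.sign(x)
          return w, e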

  • Extraction of Subimages by Lifting Wavelet Filters

    Shigeru TAKANO  Koichi NIIJIMA  

     
    PAPER-Image/Visual Signal Processing

      Page(s):
    1559-1565

    This paper proposes a method for extracting subimages from a huge reference image by learning lifting wavelet filters. Lifting wavelet filters are biorthogonal wavelet filters containing free parameters, developed by Sweldens. Our method learns these free parameters from training subimages so that their high-frequency components in the y- and x-directions vanish. The learnt wavelet filters thus capture the features of the training subimages. Applying such wavelet filters to the reference image, we can detect the locations where the high-frequency components are almost the same as those of the target subimage.
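
    As a hedged illustration of what a lifting step with a free parameter looks like (the actual parameterization and learning rule are defined in the paper), a minimal one-dimensional sketch follows.

      # Hedged illustration: one lifting step with a single free predict parameter a.
      import numpy as np

      def lifting_forward(x, a):
          """Split/predict/update on a 1-D signal of even length."""
          even, odd = x[0::2].astype(float), x[1::2].astype(float)
          detail = odd - a * even              # predict step with free parameter a
          approx = even + 0.5 * detail         # fixed update step, for illustration only
          return approx, detail

      x = np.arange(16, dtype=float)           # a simple ramp signal
      approx, detail = lifting_forward(x, a=1.0)
      print(np.max(np.abs(detail)))            # small detail: the predictor fits the signal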

  • Accelerated Image Halftoning Technique Using Improved Genetic Algorithm

    Hernan AGUIRRE  Kiyoshi TANAKA  Tatsuo SUGIMURA  

     
    PAPER-Image/Visual Signal Processing

      Page(s):
    1566-1574

    This paper presents an accelerated image halftoning technique using an improved genetic algorithm with tiny populations. The algorithm is based on a new cooperative model for genetic operators in GA. Two kinds of operators are used in parallel to produce offspring: (i) SRM (Self-Reproduction with Mutation) to introduce diversity by means of Adaptive Dynamic-Block (ADB) mutation, inducing the appearance of beneficial mutations; (ii) CM (Crossover and Mutation) to promote the increase of beneficial mutations in the population. SRM applies qualitative mutation only to the bits inside a mutation block and controls the required exploration-exploitation balance through its adaptive mechanism. An extinctive selection mechanism subjects SRM's and CM's offspring to competition for survival. The simulation results show that our scheme impressively reduces the computer memory and processing time required to obtain high quality halftone images. For example, compared to the conventional image halftoning technique with GA, the proposed algorithm using only a 2% population size required about 15% of the evaluations to generate high quality images. These results make our scheme appealing for practical implementations of the image halftoning technique using GA.

  • IFS Coding of Non-Homogeneous Fractal Images Using Gröbner Basis Techniques

    Toshimizu ABIKO  Masayuki KAWAMATA  

     
    PAPER-Image/Visual Signal Processing

      Page(s):
    1575-1581

    This paper proposes a moment based encoding algorithm for iterated function system (IFS) coding of non-homogeneous fractal images with unequal probabilities. Moment based encoding algorithms for IFS coding of non-homogeneous fractal images require a solution of simultaneous algebraic equations that are difficult to handle with numerical root-finding methods. The proposed algorithm employs a variable elimination method using Gröbner bases with floating-point coefficients in order to derive a numerically solvable equation with a single unknown. The algorithm also employs a varying associated-probabilities method for the purpose of decreasing the computational complexity of calculating Gröbner bases. Experimental results show that the average computation time for encoding a non-homogeneous fractal image of 256 × 256 pixels and 256 gray levels is about 200 seconds on a PC with a 400 MHz AMD K6-III processor.
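
    As a hedged, toy-scale illustration of the variable-elimination idea (exact coefficients, not the floating-point Gröbner bases or the moment equations used in the paper), the following SymPy snippet eliminates one unknown from a small polynomial system.

      # Toy example of elimination via a lexicographic Groebner basis (SymPy).
      from sympy import symbols, groebner

      x, y = symbols('x y')
      G = groebner([x**2 + y**2 - 4, x*y - 1], x, y, order='lex')
      print(list(G)[-1])    # univariate eliminant in y: y**4 - 4*y**2 + 1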

  • A Method of Extracting Embedded Binary Data from JPEG Bitstreams Using Standard JPEG Decoder

    Yoshihiro NOGUCHI  Hiroyuki KOBAYASHI  Hitoshi KIYA  

     
    PAPER-Image/Visual Signal Processing

      Page(s):
    1582-1588

    We propose a method for embedding binary data into JPEG bitstreams and extracting the embedded data from those bitstreams using a standard JPEG decoder. In the proposed method, an image into which binary data has been embedded is first decoded from the JPEG bitstream using the traditional standard JPEG decoder, and the embedded binary data is then extracted perfectly from the decoded JPEG image by post-processing. For the post-processing, we use only the decoded image data to extract the embedded binary data. Namely, we do not need any of the particular parameters used for JPEG decoding, such as the quantization table values. Thus, the traditional standard JPEG decoder can serve as the front end of the binary data extraction. Furthermore, we address the effect of the calculation bit accuracy of the discrete cosine transform (DCT) and inverse discrete cosine transform (IDCT) on extracting the embedded binary data perfectly in the post-processing. Simulations of extracting the embedded binary data by post-processing are presented to quantify the performance factors concerned, and they confirm that the proposed method is of practical use.

  • Repeating Image Watermarking Technique by the Visual Cryptography

    Chuen-Ching WANG  Shen-Chuan TAI  Chong-Shou YU  

     
    PAPER-Image/Visual Signal Processing

      Page(s):
    1589-1598

    A repeating watermarking technique based on a visual secret sharing (VSS) scheme repeats the watermark throughout the image so as to withstand image cropping. In this paper, the watermark is divided into a public watermark and a secret watermark by using the VSS scheme to improve the security of the proposed watermarking technique. Unlike traditional methods, the original watermark does not have to be embedded into the host image directly and, thus, it is hard for pirates or hackers to detect or remove. Extracting the watermark from the watermarked image does not require the complete original image, but only the secret watermark. Furthermore, the technique accommodates binary watermarks of adaptive size in the design of the watermarking system. The experimental results show that the proposed method can withstand common image processing operations such as filtering, lossy compression, and cropping attacks. The embedded watermark is imperceptible, and the extracted watermark clearly identifies the owner's copyright.
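
    A hedged sketch of a basic two-share secret splitting of a binary watermark is given below to illustrate the public/secret division mentioned in the abstract; it is a simplified stand-in, not the VSS construction actually used in the paper.

      # Hedged sketch (assumption): split a binary watermark into a random public
      # share and a secret share; comparing the shares pixel-wise recovers it.
      import numpy as np

      rng = np.random.default_rng(0)

      def split_watermark(wm):
          public = rng.integers(0, 2, size=wm.shape)       # random share, reveals nothing by itself
          secret = np.where(wm == 1, public, 1 - public)   # agrees with public only where wm == 1
          return public, secret

      wm = rng.integers(0, 2, size=(8, 8))                 # toy binary watermark
      s1, s2 = split_watermark(wm)
      print(np.array_equal((s1 == s2).astype(int), wm))    # True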

  • Approximation of Chaotic Dynamics by Using Smaller Number of Data Based upon the Genetic Programming and Its Applications

    Yoshikazu IKEDA  Shozo TOKINAGA  

     
    PAPER-Nonlinear Signal Processing

      Page(s):
    1599-1607

    This paper deals with the identification of the system equation of chaotic dynamics from a small number of data based upon genetic programming (GP). The problem of estimating the system equation from chaotic data is important for analyzing the structure of the dynamics in fields such as business and economics. In particular, for the prediction of chaotic dynamics, if the number of data is restricted, we cannot use conventional numerical methods such as linear reconstruction of attractors or prediction using neural networks. In this paper we utilize an efficient method to identify the system equation by using GP. In the GP, the performance (fitness) of each individual is defined as the inverse of the root mean square error between the spectra of the original and predicted time series, in order to suppress the effect of the initial values of the variables. A conventional GA (genetic algorithm) is combined to optimize the constants in the equations and to select the primitives in the GP representation. By selecting a pair of individuals having higher fitness, the crossover operation is applied to generate new individuals. The crossover operation used here means the replacement of a part of the tree in individual A by a part of the tree in individual B. To avoid meaningless genetic operations, the validity of the prefix representation of the subtree to be embedded into the other tree is checked by using the stack count. These newly generated individuals replace old individuals with lower fitness. The mutation operation is also used to avoid convergence to a local minimum. In the simulation study, the identification method is applied first to well-known chaotic dynamics such as the logistic map and the Henon map. Then, the method is applied to the identification of chaotic data of various time series using one-dimensional and higher-dimensional systems. The results show better prediction than conventional methods in cases where the number of data is small.
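
    The spectrum-based fitness mentioned above can be illustrated with a small NumPy sketch using the logistic map; the GP machinery and the GA constant optimization are not reproduced, and the series lengths and parameters are illustrative assumptions.

      # Hedged illustration: fitness as the inverse RMSE between magnitude spectra,
      # which downplays the effect of differing initial values.
      import numpy as np

      def logistic_series(x0, r, n):
          x = np.empty(n)
          x[0] = x0
          for k in range(1, n):
              x[k] = r * x[k - 1] * (1.0 - x[k - 1])
          return x

      def spectrum_fitness(observed, predicted):
          So = np.abs(np.fft.rfft(observed))
          Sp = np.abs(np.fft.rfft(predicted))
          return 1.0 / np.sqrt(np.mean((So - Sp) ** 2))

      obs = logistic_series(0.2, 3.9, 256)
      cand = logistic_series(0.3, 3.9, 256)    # same map, different initial value
      print(spectrum_fitness(obs, cand))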

  • Motion Estimation with Power Scalability and Its VHDL Model

    Ayuko TAKAGI  Shogo MURAMATSU  Hitoshi KIYA  

     
    PAPER-Implementations of Signal Processing Systems

      Page(s):
    1608-1613

    In the MPEG standard, motion estimation (ME) is used to eliminate the temporal redundancy of video frames. ME is the most time-consuming task in the encoding of video sequences and is also the one using the most power. Using low-bit images can reduce the power consumed by ME, but a conventional architecture is fixed to a certain bit width for the low-bit motion estimator. It is known that there is a trade-off between power and image quality. ME may be used in various situations, and the relative demands for power and image quality depend on those circumstances. We therefore developed an architecture for a low-bit motion estimator with adjustable power consumption. In this architecture, we can select the bit width of the input image and thereby adjust the amount of power used for ME. To evaluate its effectiveness, we designed the motion estimator in VHDL and used the synthesis results to estimate its performance.
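
    A hedged software sketch of full-search block matching with a selectable input bit width is given below, only to illustrate the bit-width/quality knob described above; the block size, search range, and quantization are illustrative assumptions, and the paper's VHDL architecture is not reproduced.

      # Hedged sketch: SAD full-search block matching on bit-reduced 8-bit images.
      import numpy as np

      def quantize(img, bits):
          """Keep only the top `bits` bits of an 8-bit image."""
          return (img.astype(np.uint8) >> (8 - bits)).astype(np.int32)

      def full_search(cur_blk, ref, top, left, search=4, bits=8):
          ref_q, cur_q = quantize(ref, bits), quantize(cur_blk, bits)
          h, w = cur_blk.shape
          best, best_mv = None, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = top + dy, left + dx
                  if y < 0 or x < 0 or y + h > ref.shape[0] or x + w > ref.shape[1]:
                      continue
                  sad = np.abs(ref_q[y:y + h, x:x + w] - cur_q).sum()
                  if best is None or sad < best:
                      best, best_mv = sad, (dy, dx)
          return best_mv, best

      rng = np.random.default_rng(0)
      ref = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)
      cur = ref[10:26, 12:28]                                # block displaced by (2, 2) from (8, 10)
      print(full_search(cur, ref, top=8, left=10, bits=4))   # expected motion vector (2, 2)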

  • An Architectural Study of an MPEG-2 422P@HL Encoder Chip Set

    Ayako HARADA  Shin-ichi HATTORI  Tadashi KASEZAWA  Hidenori SATO  Tetsuya MATSUMURA  Satoshi KUMAKI  Kazuya ISHIHARA  Hiroshi SEGAWA  Atsuo HANAMI  Yoshinori MATSUURA  Ken-ichi ASANO  Toyohiko YOSHIDA  Masahiko YOSHIMOTO  Tokumichi MURAKAMI  

     
    PAPER-Implementations of Signal Processing Systems

      Page(s):
    1614-1623

    An MPEG-2 422P@HL encoder chip set composed of a preprocessing LSI, an encoding LSI, and a motion estimation LSI is described. This chip set realizes two types of scalability, in picture resolution and in picture quality, and executes hierarchical coding control over the whole encoder system. Due to its scalable architecture, the chip set realizes a 422P@HL video encoder in a multi-chip configuration. A single encoding LSI achieves 422P@ML video, audio, and system encoding in real time. It employs an advanced hybrid architecture with a 162 MHz media processor and dedicated video processing hardware. It also has dual communication ports for parallel processing in a multi-chip configuration. Reconstructed data and macroblock characteristic data are transferred between neighboring encoder modules via these ports. The preprocessing LSI is fabricated using 0.25 micron three-layer metal CMOS technology and integrates 560 K gates in an area of 12.0 mm × 12.0 mm. The encoding LSI is fabricated using 0.25 micron four-layer metal CMOS technology and integrates 11 million transistors in an area of 14.2 mm × 14.2 mm. The motion estimation LSI is fabricated using 0.35 micron three-layer metal CMOS technology. It integrates 1.9 million transistors in an area of 8.5 mm × 8.5 mm. This chip set makes various system configurations possible and allows for a compact and cost-effective video encoder with high picture quality.

  • Higher-Order Cyclostationarity Based Direction Estimation of Coherent Narrow-Band Signals

    Jingmin XIN  Hiroyuki TSUJI  Akira SANO  

     
    PAPER-Applications of Signal Processing

      Page(s):
    1624-1633

    To improve the resolution capability of direction-of-arrival (DOA) estimation, some subspace-based methods have recently been developed that exploit specific signal properties (e.g., the non-Gaussian property and cyclostationarity) of communication signals. However, these methods perform as poorly as the ordinary subspace-based methods in multipath propagation situations, which are often encountered in mobile communication systems because of various reflections. In this paper, we investigate the direction estimation of coherent signals by jointly exploiting the merits of higher-order statistics and cyclostationarity to enhance the performance of DOA estimation and to effectively reject interference and noise. For estimating the DOAs of narrow-band coherent signals impinging on a uniform linear array, a new higher-order cyclostationarity based approach is proposed by incorporating a subarray scheme into a linear prediction technique. This method can improve the resolution capability and alleviate the difficulty of choosing the optimal lag parameter. It is shown numerically that the proposed method is superior to the conventional ones.

  • A Sample Correlation Method for Source Number Detection

    Hsien-Tsai WU  

     
    PAPER-Applications of Signal Processing

      Page(s):
    1634-1640

    In this paper, the effective use of the Gerschgorin radii of the similarity-transformed covariance matrix for source number estimation is introduced. A heuristic approach is used for developing the detection criteria. The heuristic approach, applying the visual Gerschgorin disk method (VGD) developed from the projection concept, overcomes the problems arising in cases of small data samples, an unknown noise model, and data dependency. Furthermore, the Gerschgorin disks can be formed into two distinct, non-overlapping collections: one for the signals and the other for the noise. The number of sources can be visually determined by counting the number of Gerschgorin disks for signals. The proposed method is based on the sample correlation coefficient, which normalizes the signal Gerschgorin radii for source number detection. The VGD method shows improved detection capability over the Gerschgorin disk estimator (GDE) in a Gaussian white noise process and was applied successfully to measured experimental data.
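
    For readers unfamiliar with Gerschgorin radii, a hedged NumPy sketch of computing them from a sample covariance matrix is given below; the array scenario is an illustrative assumption, and the unitary transform and normalized (VGD) criterion of the paper are not reproduced.

      # Hedged illustration: Gerschgorin centers and radii of a sample covariance
      # matrix for an assumed 6-sensor array with two narrow-band sources.
      import numpy as np

      rng = np.random.default_rng(1)
      m, n = 6, 200                                    # sensors, snapshots (assumed values)
      a = np.exp(1j * np.pi * np.outer(np.arange(m), np.sin([0.3, 0.8])))
      s = rng.standard_normal((2, n)) + 1j * rng.standard_normal((2, n))
      noise = 0.1 * (rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n)))
      x = a @ s + noise
      R = x @ x.conj().T / n                           # sample covariance matrix

      centers = np.abs(np.diag(R))
      radii = np.abs(R).sum(axis=1) - centers          # Gerschgorin radius of each row
      print(np.round(centers, 2), np.round(radii, 2))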

  • A Hierarchical Bayesian Approach to Regularization Problems with Multiple Hyperparameters

    Ryo TAKEUCHI  Susumu NAKAZAWA  Kazuma KOIZUMI  Takashi MATSUMOTO  

     
    PAPER-Applications of Signal Processing

      Page(s):
    1641-1650

    The Tikhonov regularization theory converts ill-posed inverse problems into well-posed problems by putting a penalty on the solution sought. Instead of solving an inverse problem directly, the regularization theory minimizes a weighted sum of a "data error" term and a "penalty" function, and it has been successfully applied to a variety of problems including tomography, inverse scattering, detection of radiation sources, and early vision algorithms. Since the function to be minimized is a weighted sum of functions, one should estimate appropriate weights. This is a problem of hyperparameter estimation, on which a vast literature exists. Another problem is how one should compare a particular penalty function (regularizer) with another. This is a special class of model comparison problems, which are generally difficult. A hierarchical Bayesian scheme with multiple hyperparameters is proposed in order to cope with data containing subsets of different degrees of smoothness. The scheme outperforms the previous scheme with a single hyperparameter.
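
    A minimal NumPy sketch of a Tikhonov-type objective with two penalty terms and two fixed hyperparameters follows; the hierarchical Bayesian estimation of those weights, which is the subject of the paper, is not reproduced, and the toy denoising setup is an assumption.

      # Hedged sketch: minimize ||A x - b||^2 + sum_i lam_i ||L_i x||^2 in closed form.
      import numpy as np

      def tikhonov_solve(A, b, penalties):
          lhs = A.T @ A
          for lam, L in penalties:
              lhs = lhs + lam * (L.T @ L)
          return np.linalg.solve(lhs, A.T @ b)

      n = 50
      A = np.eye(n)                                           # toy denoising problem
      b = np.sin(np.linspace(0, 3, n))
      b += 0.1 * np.random.default_rng(2).standard_normal(n)  # noisy observations
      D1 = np.diff(np.eye(n), axis=0)                         # first-difference smoothness penalty
      x_hat = tikhonov_solve(A, b, [(5.0, D1), (0.1, np.eye(n))])
      print(np.round(x_hat[:5], 3))                           # smoothed estimate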

  • A Simple Nonlinear Pre-Filtering for a Set-Theoretic Linear Blind Deconvolution Scheme

    Masanori KATO  Isao YAMADA  Kohichi SAKANIWA  

     
    LETTER-Multidimensional Signal Processing

      Page(s):
    1651-1653

    In this letter, we remark that a well-known nonlinear filtering technique has an immediate effect in suppressing the influence of additive measurement noise at the input to a set-theoretic linear blind deconvolution scheme. Numerical examples show that ε-separating nonlinear pre-filtering techniques work well for this noisy blind deconvolution problem.

  • Regular Section
  • A Low-Cost Floating Point Vectoring Algorithm Based on CORDIC

    Jeong-A LEE  Kees-Jan van der KOLK  Ed F. A. DEPRETTERE  

     
    PAPER-Digital Signal Processing

      Page(s):
    1654-1662

    In this paper we develop a CORDIC-based floating-point vectoring algorithm which significantly reduces the number of microrotation steps compared to the conventional algorithm. The overhead required to accomplish this is minimized by the introduction of an angle selection function which considers only a few of the total number of bits used to represent the vector being rotated. At the same time, the cost of the individual microrotations is kept low by the use of a fast-rotation angle base.
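
    As background, a minimal Python sketch of conventional CORDIC vectoring (driving y to zero to obtain the magnitude and angle) is shown below; the paper's angle selection function and fast-rotation angle base are not reproduced.

      # Conventional CORDIC vectoring, for reference only.
      import math

      def cordic_vectoring(x, y, iterations=16):
          z, K = 0.0, 1.0
          for i in range(iterations):
              d = -1.0 if y > 0 else 1.0       # microrotation sign: drive y toward zero
              x, y, z = x - d * y * 2**-i, y + d * x * 2**-i, z - d * math.atan(2**-i)
              K *= math.sqrt(1 + 4**-i)        # accumulated CORDIC gain
          return x / K, z                      # (magnitude, angle)

      mag, ang = cordic_vectoring(3.0, 4.0)
      print(round(mag, 4), round(ang, 4))      # about 5.0 and atan2(4, 3) = 0.9273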

  • A New FPGA Architecture for High Performance Bit-Serial Pipeline Datapath

    Akihisa OHTA  Tsuyoshi ISSHIKI  Hiroaki KUNIEDA  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    1663-1672

    In this paper, we present our work on the design of a new FPGA architecture targeted at high-performance bit-serial pipeline datapaths. Bit-parallel systems require a large amount of routing resources, which is especially critical when using FPGAs; their device utilization and operating frequency become low because of the large routing penalty. Bit-serial circuits, in contrast, are very efficient in routing and are therefore able to achieve very high logic utilization. Our proposed FPGA architecture is designed taking the structure of bit-serial circuits into account in order to optimize the logic and routing architecture. Our FPGA guarantees near 100% logic utilization with a straightforward place-and-route tool, owing to the high routability of bit-serial circuits and the simple routing interconnect architecture. The FPGA chip core which we designed consists of around 200k transistors on a 3.5 mm square substrate using a 0.5 µm 2-metal CMOS process technology.

  • Modelling Integer Programming with Logic: Language and Implementation

    Qiang LI  Yike GUO  Tetsuo IDA  

     
    PAPER-Numerical Analysis and Optimization

      Page(s):
    1673-1680

    The classical algebraic modelling approach to integer programming (IP) is not suitable for some real-world IP problems, since algebraic formulations allow only for the description of mathematical relations, not logical relations. In this paper, we present a language, +, for IP, in which we write logical specifications of an IP problem. + is a language based on predicate logic, but is extended with meta-predicates such as at_least(m,S), where m is a non-negative integer, meaning that at least m predicates in the set S of formulas hold. The meta-predicates facilitate reasoning about a model of an IP problem rigorously and logically. + is executable in the sense that formulas in + are mechanically translated into a set of mathematical formulas, called IP formulas, which most existing IP solvers accept. We give a systematic method for translating formulas in + to IP formulas. The translation is rigorously defined, verified, and implemented in Mathematica 3.0. Our work follows the approach of McKinnon and Williams, and elaborates on the language in that (1) it is rigorously defined, (2) the transformation to IP formulas is more optimised and verified, and (3) the transformation is completely given in Mathematica 3.0 and is integrated into an IP solving environment as a tool for IP.
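
    As a hedged illustration of the kind of translation involved (only the standard reification of at_least over 0-1 indicator variables, not the paper's general translation), a tiny Python sketch follows.

      # Hedged sketch: at_least(m, S) over 0-1 indicator variables becomes a linear inequality.
      def at_least(m, indicators):
          """Return an IP constraint string: the sum of the 0-1 indicators is at least m."""
          return " + ".join(indicators) + f" >= {m}"

      # Example: at least two of three candidate conditions must hold.
      print(at_least(2, ["y1", "y2", "y3"]))   # y1 + y2 + y3 >= 2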

  • An Optimization of Credit-Based Payment for Electronic Toll Collection Systems

    Goichiro HANAOKA  Tsuyoshi NISHIOKA  Yuliang ZHENG  Hideki IMAI  

     
    PAPER-Information Security

      Page(s):
    1681-1690

    Credit-based electronic payment systems are considered to play important roles in future automated payment systems. Like most other types of payment systems, however, credit-based systems proposed so far generally involve computationally expensive cryptographic operations. Such a relatively heavy computational load is preventing credit-based systems from being used in applications which require very fast processing. A typical example is admission-fee payment at the toll gate of an expressway without stopping a vehicle that travels at a high speed. In this article, we propose a very fast credit-based electronic payment protocol for admission-fee payment. More specifically, we propose a payment system between a high-speed vehicle and a toll gate which uses only very simple and fast computations. The proposed system makes use of an optimized Key Pre-distribution System (or KPS) to obtain high resistance against collusion attacks.

  • Coding Theorems for Secret-Key Authentication Systems

    Hiroki KOGA  Hirosuke YAMAMOTO  

     
    PAPER-Information Theory

      Page(s):
    1691-1703

    This paper provides Shannon-theoretic coding theorems on the success probabilities of the impersonation attack and the substitution attack against secret-key authentication systems. Though there are many studies that develop lower bounds on the success probabilities, their tight upper bounds are rarely discussed. This paper characterizes the tight upper bounds in an extended secret-key authentication system that includes blocklength K and permits the decoding error probability to tend to zero as K → ∞. In the extended system an encoder encrypts K source outputs to K cryptograms under K keys and transmits the K cryptograms to a decoder through a public channel in the presence of an opponent. The decoder judges whether the K cryptograms received from the public channel are legitimate or not under the K keys shared with the encoder. It is shown that 2^{-KI(W;E)} is the minimal attainable upper bound on the success probability of the impersonation attack, where I(W;E) denotes the mutual information between a cryptogram W and a key E. In addition, 2^{-KH(E|W)} is proved to be the tight upper bound on the probability that the opponent can correctly guess the K keys from the transmitted K cryptograms, where H(E|W) denotes the conditional entropy of E given W.

  • Tradeoffs between Error Performance and Decoding Complexity in Multilevel 8-PSK Codes with UEP Capabilities and Multistage Decoding

    Motohiko ISAKA  Robert H. MORELOS-ZARAGOZA  Marc P. C. FOSSORIER  Shu LIN  Hideki IMAI  

     
    PAPER-Coding Theory

      Page(s):
    1704-1712

    In this paper, we investigate multilevel coding and multistage decoding for satellite broadcasting with moderate decoding complexity. An unconventional signal set partitioning is used to achieve unequal error protection capabilities. Two possibilities are shown and analyzed for practical systems: (i) linear block component codes with near optimum decoding, (ii) punctured convolutional component codes with a common trellis structure.

  • An Evaluation of the Physiological Effects of CRT Displays on Computer Users

    Sufang CHEN  Xiangshi REN  HunSoo KIM  Yoshio MACHI  

     
    PAPER-General Fundamentals and Boundaries

      Page(s):
    1713-1719

    An experiment was conducted to measure and compare the physiological effects of three types of CRT on users. We propose a new strategy for measuring the user's level of relaxation. In this strategy, called "Task Break Monitoring (TBM)," the subjects took a break with eyes closed after each interaction with the computer. During each break, the electroencephalogram (EEG), especially alpha-1 waves, the electrocardiogram (ECG), and the galvanic skin resistance (GSR) were monitored and recorded. The results show that the type of CRT display which emits far-infrared rays modulated by a FIR-fan induces less fatigue in users while they are working and reduces the recovery time after the task is completed. We believe "TBM" to be an important innovation in human-computer research and development because the after-effects of computer use have an obvious bearing on recovery time, user endurance, and the psychological attitude to the technology in general.

  • A 3 V Low Power 156/622/1244 Mbps CMOS Parallel Clock and Data Recovery Circuit for Optical Communications

    Hae-Moon SEO  Chang-Gene WOO  Sang-Won OH  Sung-Wook JUNG  Pyung CHOI  

     
    PAPER-General Fundamentals and Boundaries

      Page(s):
    1720-1727

    This paper presents the implementation of a 3 V low-power multi-rate (156, 622, and 1244 Mbps) clock and data recovery circuit (CDR) for optical communication transceivers, using a new parallel clock recovery architecture based on a dual charge-pump PLL. The designed circuit recovers eight-phase clock signals whose frequency is one-eighth that of the input signal. While a typical system compares the input data with the recovered clock, the proposed circuit compares the input data, delayed by half a bit, with the serial data generated by the recovered eight-phase clock signals. The advantage of the circuit is that it is easy to implement, since each sub-block operates at one-eighth the frequency of the input data signal. Moreover, since the circuit works at one-eighth the frequency of the input data, it dissipates less power than a conventional CMOS recovery circuit. Simulation results show that this recovery circuit can work with a power dissipation of less than 40 mW from a single 3 V supply. All the simulations are based on HYUNDAI 0.65 µm N-Well CMOS double-poly double-metal technology.

  • Dynamic Power Dissipation of Track/Hold Circuit

    Hiroyuki SATO  Haruo KOBAYASHI  

     
    LETTER-Analog Signal Processing

      Page(s):
    1728-1731

    This paper describes the formula for dynamic power dissipation of a track/hold circuit as a function of the input frequency, the input amplitude, the sampling frequency, the track/hold duty cycle, the power supply voltage and the hold capacitance for a sinusoidal input.

  • Trade off between Page Number and Number of Edge-Crossings on the Spine of Book Embeddings of Graphs

    Miki Shimabara MIYAUCHI  

     
    LETTER-Graphs and Networks

      Page(s):
    1732-1734

    This paper studies the problem of book-embeddings of graphs. When each edge is allowed to appear in one or more pages by crossing the spine of a book, it is well known that every graph G can be embedded in a 3-page book. Recently, it has been shown that there exists a 3-page book embedding of G in which each edge crosses the spine O(log_2 n) times. This paper considers a book with more than three pages. In this case, it is known that a complete graph Kn with n vertices can be embedded in a ⌈n/2⌉-page book without any edge-crossings on the spine. Thus it becomes an interesting problem to devise book-embeddings of G so as to reduce both the number of pages used and the number of edge-crossings over the spine. This paper shows that there exists a d-page book embedding of G in which each edge crosses the spine O(log_d n) times. As a direct corollary, for any real number s, there is an n^s-page book embedding of G in which each edge crosses the spine a constant number of times. In another paper, Enomoto-Miyauchi-Ota show that for an integer d, if n is sufficiently large compared with d, then for any embedding of Kn into a d-page book, there must exist Ω(n^2 log_d n) points at which edges cross over the spine. This means our result is the best possible for Kn in this case.