IEICE TRANSACTIONS on Fundamentals

  • Impact Factor: 0.48
  • Eigenfactor: 0.003
  • Article Influence: 0.1
  • CiteScore: 1.1

Volume E90-A No.8 (Publication Date: 2007/08/01)

    Special Section on Papers Selected from the 21st Symposium on Signal Processing
  • FOREWORD

    Hitoshi KIYA  

     
    FOREWORD

      Page(s):
    1503-1503
  • Adaptive Processing over Distributed Networks

    Ali H. SAYED  Cassio G. LOPES  

     
    INVITED PAPER

      Page(s):
    1504-1510

    The article describes recent adaptive estimation algorithms over distributed networks. The algorithms rely on local collaborations and exploit the space-time structure of the data. Each node is allowed to communicate with its neighbors in order to exploit the spatial dimension, while it also evolves locally to account for the time dimension. Algorithms of the least-mean-squares and least-squares types are described. Both incremental and diffusion strategies are considered.
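
    The diffusion strategy mentioned above can be sketched as an adapt-then-combine diffusion LMS update, in which each node first runs a local LMS step and then averages its neighbors' intermediate estimates. The combination matrix and variable names below are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def diffusion_lms_step(W, X, d, A, mu=0.01):
    """One adapt-then-combine diffusion LMS iteration.

    W : (N, M) current weight estimate at each of N nodes
    X : (N, M) regressor observed at each node at this time instant
    d : (N,)   desired response at each node
    A : (N, N) combination matrix; A[l, k] is the weight node k gives to
               neighbor l's intermediate estimate (each column sums to 1)
    """
    psi = np.empty_like(W)
    for k in range(W.shape[0]):          # adaptation: local LMS update
        e = d[k] - X[k] @ W[k]
        psi[k] = W[k] + mu * e * X[k]
    W_new = np.empty_like(W)
    for k in range(W.shape[0]):          # combination: average neighbors' estimates
        W_new[k] = sum(A[l, k] * psi[l] for l in range(W.shape[0]))
    return W_new
```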

  • Explicit Formula for Predictive FIR Filters and Differentiators Using Hahn Orthogonal Polynomials

    Saed SAMADI  Akinori NISHIHARA  

     
    PAPER

      Page(s):
    1511-1518

    An explicit expression for the impulse response coefficients of predictive FIR digital filters is derived. The formula specifies a four-parameter family of smoothing FIR digital filters containing the Savitzky-Golay filters, the Heinonen-Neuvo polynomial predictors, and smoothing differentiators of arbitrary integer order. The Hahn polynomials, which are orthogonal with respect to a discrete variable, are the main tool employed in the derivation of the formula. A recursive formula for the computation of the transfer function of the filters, which is the z-transform of a terminated sequence of polynomial ordinates, is also introduced. The formula can be used to design structures with low computational complexity for filters of any order.
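
    Since the family described above contains the Savitzky-Golay filters as a special case, one member of that family can be generated directly by local least-squares polynomial fitting, for example with SciPy. This is a generic illustration of such smoothing and differentiating FIR coefficients, not the closed-form Hahn-polynomial expression derived in the paper; the window length and polynomial order below are arbitrary choices.

```python
import numpy as np
from scipy.signal import savgol_coeffs

# 9-tap smoothing filter that fits a cubic polynomial to each window.
h_smooth = savgol_coeffs(9, 3)

# First-order smoothing differentiator from the same least-squares family.
h_diff = savgol_coeffs(9, 3, deriv=1, delta=1.0)

# Apply the smoother to a noisy test signal.
x = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.1 * np.random.randn(200)
y = np.convolve(x, h_smooth, mode="same")
```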

  • POCS-Based Texture Reconstruction Method Using Clustering Scheme by Kernel PCA

    Takahiro OGAWA  Miki HASEYAMA  

     
    PAPER

      Page(s):
    1519-1527

    A new framework for the reconstruction of missing textures in digital images is introduced in this paper. The framework is based on a projection onto convex sets (POCS) algorithm including a novel constraint. In the proposed method, a nonlinear eigenspace of each cluster, obtained by classification of the known textures within the target image, is applied to the constraint. The main advantage of this approach is that the eigenspace can approximate the textures classified into the same cluster in the least-squares sense. Furthermore, by monitoring the convergence errors of the POCS algorithm, the optimal cluster for reconstructing the target texture containing missing intensities can be selected. This POCS-based approach solves the problem of traditional methods, which cannot select the optimal cluster because of the missing intensities within the target texture. Consequently, all of the missing textures are successfully reconstructed from the selected clusters' eigenspaces, which correctly approximate the same kinds of textures. Experimental results show subjective and quantitative improvement of the proposed reconstruction technique over previously reported techniques.

  • Players Clustering Based on Graph Theory for Tactics Analysis Purpose in Soccer Videos

    Hirofumi KON  Miki HASEYAMA  

     
    PAPER

      Page(s):
    1528-1533

    In this paper, a new method for clustering players in order to analyze games in soccer videos is proposed. The proposed method classifies players who are closely related in terms of soccer tactics into one group. In terms of soccer tactics, the players in one group are located near each other, so the Euclidean distance between players is an effective measurement for clustering. However, distance alone is not sufficient to extract tactics-based groups. Therefore, we utilize a modified version of a community extraction method, which finds community structure by dividing an undirected graph. The use of this method in addition to the distance enables accurate clustering of players.
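
    The community-extraction idea can be illustrated by building a graph whose edges connect nearby players and applying a modularity-based community detection routine, for example the greedy method in NetworkX. The distance threshold and edge weighting below are illustrative assumptions, and this generic routine stands in for the authors' modified extraction method.

```python
import itertools
import numpy as np
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def cluster_players(positions, radius=15.0):
    """positions: (N, 2) array of player coordinates on the pitch."""
    G = nx.Graph()
    G.add_nodes_from(range(len(positions)))
    for i, j in itertools.combinations(range(len(positions)), 2):
        d = np.linalg.norm(positions[i] - positions[j])
        if d < radius:                        # connect players that are close
            G.add_edge(i, j, weight=1.0 / (d + 1e-6))
    # Each returned set is one tactics-related group of player indices.
    return [set(c) for c in greedy_modularity_communities(G, weight="weight")]
```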

  • Image Magnification by a Compact Method with Preservation of Preferential Components

    Akira HIRABAYASHI  

     
    PAPER

      Page(s):
    1534-1541

    Bicubic interpolation is one of the standard approaches for image magnification since it can be easily computed and requires neither a priori knowledge nor a complicated model. In spite of this convenience, images enlarged by bicubic interpolation are blurry, in particular for large magnification factors. This can be explained by four constraints of bicubic interpolation. Hence, by relaxing or replacing these constraints, we propose a new magnification method that performs better than bicubic interpolation but retains its compactness. One of the constraints concerns the optimization criterion, which we replace by a criterion requiring that all pixel values be reproduced and that preferential components of the input image be perfectly reconstructed. We show that, by choosing the low-frequency components or edge-enhancement components of the DCT basis as the preferential components, the proposed method performs better than bicubic interpolation with the same, or even less, computation.
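
    For reference, the conventional bicubic interpolation that serves as the baseline uses a piecewise-cubic convolution kernel; a common choice is the Keys kernel with a = -0.5, sketched below for one dimension (the 2-D case applies it separably along rows and columns). This is the standard baseline, not the proposed preferential-component method.

```python
import numpy as np

def keys_kernel(t, a=-0.5):
    """Piecewise-cubic convolution kernel of standard bicubic interpolation."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    near = t <= 1
    far = (t > 1) & (t < 2)
    out[near] = (a + 2) * t[near]**3 - (a + 3) * t[near]**2 + 1
    out[far] = a * t[far]**3 - 5 * a * t[far]**2 + 8 * a * t[far] - 4 * a
    return out

def bicubic_1d(samples, x):
    """Interpolate a 1-D sequence at fractional position x using 4 neighbors."""
    samples = np.asarray(samples, dtype=float)
    i = int(np.floor(x))
    offsets = np.arange(i - 1, i + 3)
    idx = np.clip(offsets, 0, len(samples) - 1)   # replicate border samples
    w = keys_kernel(x - offsets)
    return float(np.dot(w, samples[idx]))
```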

  • Audio-Based Shot Classification for Audiovisual Indexing Using PCA, MGD and Fuzzy Algorithm

    Naoki NITANDA  Miki HASEYAMA  

     
    PAPER

      Page(s):
    1542-1548

    An audio-based shot classification method for audiovisual indexing is proposed in this paper. The proposed method mainly consists of two parts, an audio analysis part and a shot classification part. In the audio analysis part, the proposed method utilizes both principal component analysis (PCA) and Mahalanobis generalized distance (MGD). The effective features for the analysis can be automatically obtained by using PCA, and these features are analyzed based on MGD, which can take into account the correlations of the data set. Thus, accurate analysis results can be obtained by the combined use of PCA and MGD. In the shot classification part, the proposed method utilizes a fuzzy algorithm. By using the fuzzy algorithm, the mixing rate of the multiple audio sources can be roughly measured, and thereby accurate shot classification can be attained. Results of experiments performed by applying the proposed method to actual audiovisual materials are shown to verify the effectiveness of the proposed method.
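
    The combination of PCA feature reduction and Mahalanobis generalized distance described above can be sketched with scikit-learn and NumPy as follows; the audio feature extraction and the fuzzy shot-classification stage are omitted, and the number of components is an arbitrary illustrative choice.

```python
import numpy as np
from sklearn.decomposition import PCA

def fit_class_model(features, n_components=5):
    """Fit a PCA subspace and a Gaussian model for one audio class.

    features: (n_frames, n_features) training feature vectors of the class.
    """
    pca = PCA(n_components=n_components).fit(features)
    z = pca.transform(features)
    mean = z.mean(axis=0)
    cov_inv = np.linalg.inv(np.cov(z, rowvar=False))
    return pca, mean, cov_inv

def mahalanobis_distance(x, pca, mean, cov_inv):
    """Mahalanobis generalized distance of one feature vector to the class."""
    z = pca.transform(x.reshape(1, -1))[0] - mean
    return float(np.sqrt(z @ cov_inv @ z))
```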

  • A New Adaptive Filter Algorithm for System Identification Using Independent Component Analysis

    Jun-Mei YANG  Hideaki SAKAI  

     
    PAPER

      Page(s):
    1549-1554

    This paper proposes a new adaptive filter algorithm for system identification using an independent component analysis (ICA) technique, which separates the signal from the noisy observation under the assumption that the signal and noise are independent. We first introduce an augmented state-space expression of the observed signal, representing the problem in terms of ICA. By using a nonparametric Parzen window density estimator and the stochastic information gradient, we derive an adaptive algorithm to separate the noise from the signal. The proposed ICA-based algorithm does not suppress the noise in the least-mean-square sense but instead maximizes the independence between the signal part and the noise. The computational complexity of the proposed algorithm is compared with that of the standard NLMS algorithm. The stationary point of the proposed algorithm is analyzed by an averaging method. The new ICA-based algorithm can be used directly in an acoustic echo canceller without a double-talk detector. Simulation results show the superiority of the ICA-based method over the conventional NLMS algorithm.
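
    For comparison, the standard NLMS algorithm against which the ICA-based method is evaluated is the textbook update sketched below; this is the conventional baseline, not the proposed algorithm, and the filter length and step size are arbitrary.

```python
import numpy as np

def nlms(x, d, M=64, mu=0.5, eps=1e-6):
    """Normalized LMS system identification: estimate an M-tap FIR filter.

    x: input (far-end) signal, d: desired (observed) signal, same length.
    """
    w = np.zeros(M)
    y = np.zeros(len(x))
    for n in range(M, len(x)):
        u = x[n - M:n][::-1]                  # most recent M input samples
        y[n] = w @ u
        e = d[n] - y[n]
        w += (mu / (eps + u @ u)) * e * u     # step size normalized by input power
    return w, y
```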

  • Improvement of the Stability and Cancellation Performance for the Active Noise Control System Using the Simultaneous Perturbation Method

    Yukinobu TOKORO  Yoshinobu KAJIKAWA  Yasuo NOMURA  

     
    PAPER

      Page(s):
    1555-1563

    In this paper, we propose the introduction of a frequency domain variable perturbation control and a leaky algorithm to the frequency domain time difference simultaneous perturbation (FDTDSP) method in order to improve the cancellation performance and the stability of the active noise control (ANC) system using the perturbation method. Since the ANC system using the perturbation method does not need the secondary path model, it has an advantage of being able to track the secondary path changes. However, the conventional perturbation method has the problem that the cancellation performance deteriorates over the entire frequency band when the frequency response of the secondary path has dips because the magnitude of the perturbation is controlled in the time domain. Moreover, the stability of this method also deteriorates in consequence of the dips. On the other hand, the proposed method can improve the cancellation performance by providing the appropriate magnitude of the perturbation over the entire frequency band and stabilizing the system operation. The effectiveness of the proposed method is demonstrated through simulation and experimental results.

  • Bandwidth Extension with Hybrid Signal Extrapolation for Audio Coding

    Chatree BUDSABATHON  Akinori NISHIHARA  

     
    PAPER

      Page(s):
    1564-1569

    In this paper, we propose a blind method using hybrid signal extrapolation at the decoder to regenerate lost high-frequency components that are removed by encoders. First, the spectral resolution of the decoded signal is enhanced by time-domain linear predictive extrapolation, and then the cutoff frequency of each frame is estimated to avoid a spectral gap between the end of the original low-frequency spectrum and the beginning of the reconstructed high-frequency spectrum. By exploiting the correlation between the high-frequency and low-frequency spectra, the low-frequency spectral components are employed to reconstruct the high-frequency spectral components by frequency-domain linear predictive extrapolation. Experimental results show the effectiveness of the proposed method in terms of SNR and listening tests. The proposed method can be used to reconstruct the lost high-frequency components to improve the perceptual quality of audio independently of the compression method.

  • Stereophonic Acoustic Echo Canceler Based on Two-Filter Scheme

    Noriaki MURAKOSHI  Akinori NISHIHARA  

     
    PAPER

      Page(s):
    1570-1578

    This paper presents a novel stereophonic acoustic echo canceling scheme without preprocessing. To accurately estimate the echo path while maintaining a high level of echo-cancellation performance, this scheme uses two filters: one is utilized as a guideline that does not cancel echo but helps update the other filter, which actually cancels the echo. In addition, we propose a new filter dividing technique to apply to the filter divide scheme and utilize it as the guideline. Numerical examples demonstrate that the proposed scheme improves the convergence behavior compared to conventional methods in terms of both system mismatch (i.e., normalized coefficient error) and Echo Return Loss Enhancement (ERLE).

  • Robust F0 Estimation Based on Complex LPC Analysis for IRS Filtered Noisy Speech

    Keiichi FUNAKI  Tatsuhiko KINJO  

     
    PAPER

      Page(s):
    1579-1586

    This paper proposes a novel robust fundamental frequency (F0) estimation algorithm based on complex-valued speech analysis of an analytic speech signal. Since the analytic signal provides spectra only over positive frequencies, spectra can be accurately estimated at low frequencies. Consequently, F0 estimation using the residual signal extracted by complex-valued speech analysis is expected to perform better than estimation from the residual signal extracted by conventional real-valued LPC analysis. In this paper, the autocorrelation function weighted by the AMDF is adopted as the F0 estimation criterion, and four signals are evaluated for F0 estimation: the speech signal, the analytic speech signal, the LPC residual, and the complex LPC residual. The speech signals used in the experiments were IRS-filtered speech corrupted by additive white Gaussian noise or pink noise at levels of 10, 5, 0, and -5 dB. The experimental results demonstrate that the proposed algorithm based on the complex LPC residual performs better than the other methods in noisy environments.
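
    The criterion above, an autocorrelation function weighted by the AMDF, is commonly computed per frame by dividing the autocorrelation by the AMDF plus a small constant and picking the lag with the largest score. The sketch below applies this to a single real-valued frame; it does not include the complex LPC analysis, and the search range and constant k are illustrative.

```python
import numpy as np

def f0_weighted_autocorr(frame, fs, fmin=60.0, fmax=400.0, k=1.0):
    """Pick the lag maximizing autocorrelation / (AMDF + k).

    The frame should be longer than fs / fmin samples.
    """
    frame = frame - np.mean(frame)
    N = len(frame)
    lags = np.arange(int(fs / fmax), int(fs / fmin) + 1)
    score = np.empty(len(lags))
    for i, tau in enumerate(lags):
        a, b = frame[:N - tau], frame[tau:]
        acf = np.dot(a, b)                    # autocorrelation at lag tau
        amdf = np.mean(np.abs(a - b))         # average magnitude difference
        score[i] = acf / (amdf + k)
    return fs / lags[np.argmax(score)]        # F0 estimate in Hz
```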

  • Speech Enhancement Based on MAP Estimation Using a Variable Speech Distribution

    Yuta TSUKAMOTO  Arata KAWAMURA  Youji IIGUNI  

     
    PAPER

      Page(s):
    1587-1593

    In this paper, a novel speech enhancement algorithm based on MAP estimation is proposed. The proposed speech enhancer adaptively changes the speech spectral density used in the MAP estimation according to the sum of the observed power spectra. In a speech segment, the speech spectral density approaches a Rayleigh distribution to keep the quality of the enhanced speech, while in a non-speech segment it approaches an exponential distribution to reduce noise effectively. Furthermore, when the noise is super-Gaussian, we modify the width of the Gaussian so that the Gaussian model with the modified width approximates the distribution of the super-Gaussian noise. This technique is effective in suppressing residual noise. Computer experiments confirm the effectiveness of the proposed method.

  • Simple but Efficient Antenna Selection for MISO-OFDM Systems

    Shuichi OHNO  Kenichi YAMAGUCHI  Kok Ann Donny TEO  

     
    PAPER

      Page(s):
    1594-1600

    Simple but efficient antenna selection schemes are proposed for the downlink of Orthogonal Frequency Division Multiplexing (OFDM) transmission with multiple transmit antennas over frequency-selective fading channels, where the transmit antennas are selected at the mobile terminal and the base station is informed of the selected antennas through a feedback channel. To obtain the optimal antenna selection, channel frequency responses are required and performance has to be evaluated at all subcarriers. To reduce the computational complexity at the mobile terminal, time-domain channels are utilized for antenna selection in place of channel frequency responses. Our scheme does not guarantee the optimal antenna selection but is shown by numerical simulations to yield reasonable selections. Moreover, by using a specially designed pilot OFDM preamble, an antenna selection without channel estimation is developed. The efficiency of the proposed suboptimal antenna selection schemes, which have lower computational complexity, is verified by numerical simulations.
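
    The idea of selecting antennas from time-domain channel estimates instead of per-subcarrier frequency responses can be illustrated with a simple energy-based rule: keep the antennas whose channel impulse responses carry the most energy. This is a simplified stand-in; the paper's actual selection criterion may differ.

```python
import numpy as np

def select_antennas(h_time, n_select):
    """h_time: (n_tx, L) time-domain channel taps of each transmit antenna.

    Returns the indices of the n_select antennas with the largest
    channel-impulse-response energy.
    """
    energy = np.sum(np.abs(h_time) ** 2, axis=1)   # per-antenna tap energy
    return np.argsort(energy)[::-1][:n_select]
```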

  • New Simultaneous Timing and Frequency Synchronization Utilizing Matched Filters for OFDM Systems

    Shigenori KINJO  Hiroshi OCHI  

     
    PAPER

      Page(s):
    1601-1610

    Orthogonal frequency division multiplexing (OFDM) is an attractive technique to accomplish wired or wireless broadband communications. Since it has been adopted as the terrestrial digital-video-broadcasting standard in Europe, it has also subsequently been embedded into many broadband communication standards. Many techniques for frame timing and frequency synchronization of OFDM systems have been studied as a result of its increasing importance. We propose a new technique of simultaneously synchronizing frame timing and frequency utilizing matched filters. First, a new short preamble consisting of short sequences multiplied by a DBPSK coded sequence is proposed. Second, we show that the new short preamble results in a new structure for matched filters consisting of a first matched filter, a DBPSK decoder, and a second matched filter. We can avoid the adverse effects of carrier frequency offset (CFO) when frame timing is synchronized because a DBPSK decoder has been deployed between the first and second matched filters. In addition, we show that the CFO can be directly estimated from the peak value of matched filter output. Finally, our simulation results demonstrate that the proposed scheme outperforms the conventional schemes.

  • High Accuracy Bicubic Interpolation Using Image Local Features

    Shuai YUAN  Masahide ABE  Akira TAGUCHI  Masayuki KAWAMATA  

     
    LETTER

      Page(s):
    1611-1615

    In this paper, we propose a novel bicubic method for digital image interpolation. Since the conventional bicubic method does not consider image local features, the interpolated images obtained by the conventional bicubic method often have a blurring problem. In this paper, the proposed bicubic method adopts both the local asymmetry features and the local gradient features of an image in the interpolation processing. Experimental results show that the proposed method can obtain high accuracy interpolated images.

  • Linearization of Loudspeaker Systems Using a Subband Parallel Cascade Volterra Filter

    Hideyuki FURUHASHI  Yoshinobu KAJIKAWA  Yasuo NOMURA  

     
    LETTER

      Page(s):
    1616-1619

    In this paper, we propose a low complexity realization method for compensating for nonlinear distortion. Generally, nonlinear distortion is compensated for by a linearization system using a Volterra kernel. However, this method has a problem of requiring a huge computational complexity for the convolution needed between an input signal and the 2nd-order Volterra kernel. The Simplified Volterra Filter (SVF), which removes the lines along the main diagonal of the 2nd-order Volterra kernel, has been previously proposed as a way to reduce the computational complexity while maintaining the compensation performance for the nonlinear distortion. However, this method cannot greatly reduce the computational complexity. Hence, we propose a subband linearization system which consists of a subband parallel cascade realization method for the 2nd-order Volterra kernel and subband linear inverse filter. Experimental results show that this proposed linearization system can produce the same compensation ability as the conventional method while reducing the computational complexity.
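
    For context, the fullband convolution with a second-order Volterra kernel whose cost motivates the subband structure can be written directly as below; this reference implementation illustrates the complexity (quadratic in the kernel length for the second-order part), not the proposed parallel-cascade realization.

```python
import numpy as np

def volterra2_output(x, h1, h2):
    """y[n] = sum_k h1[k] x[n-k] + sum_{k1<=k2} h2[k1,k2] x[n-k1] x[n-k2].

    h1: (N1,) linear kernel; h2: (N2, N2) upper-triangular quadratic kernel.
    """
    N1, N2 = len(h1), h2.shape[0]
    y = np.zeros(len(x))
    for n in range(len(x)):
        for k in range(min(N1, n + 1)):                 # linear part
            y[n] += h1[k] * x[n - k]
        for k1 in range(min(N2, n + 1)):                # quadratic part
            for k2 in range(k1, min(N2, n + 1)):
                y[n] += h2[k1, k2] * x[n - k1] * x[n - k2]
    return y
```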

  • Regular Section
  • Vibration Modeling and Design of Piezoelectric Floating Mass Transducer for Implantable Middle Ear Hearing Devices

    Eung-Pyo HONG  Min-Kyu KIM  Il-Yong PARK  Seung-ha LEE  Yongrae ROH  Jin-Ho CHO  

     
    PAPER-Engineering Acoustics

      Page(s):
    1620-1627

    In this paper, a simple piezoelectric floating mass transducer (PFMT) for implantable middle ear hearing devices (IMEHDs) is proposed, and its modeling and design are studied. The transducer, which can be implanted in the middle ear, consists of a PMN-PT multi-layered piezoelectric actuator, an elastic material, and a metal case. The proposed transducer has a simple structure, and the force generated by the piezoelectric actuator is efficiently transferred to the ossicles of the middle ear. For the analysis of the vibration characteristics, the transducer attached to the ossicle is simplified into a mechanical model that accounts for the mass of the incus, and the vibration displacement of the model is calculated by computer simulation and verified by experimental results. It is shown that the designed PFMT allows implantation in the middle ear cavity and provides a sufficiently high output of more than 100 nm of vibration displacement. In addition, it is verified that the vibration characteristics of the PFMT can be controlled by adjusting the metal case size and the elastic material of the transducer.

  • An Efficient Speech Enhancement Algorithm for Digital Hearing Aids Based on Modified Spectral Subtraction and Companding

    Young Woo LEE  Sang Min LEE  Yoon Sang JI  Jong Shill LEE  Young Joon CHEE  Sung Hwa HONG  Sun I. KIM  In Young KIM  

     
    PAPER-Speech and Hearing

      Page(s):
    1628-1635

    Digital hearing aid users often complain of difficulty in understanding speech in the presence of background noise. To improve speech perception in noisy environments, various speech enhancement algorithms have been applied in digital hearing aids. In this study, a speech enhancement algorithm using modified spectral subtraction and companding is proposed for digital hearing aids. We adjust the bias of the estimated noise spectrum, based on a subtraction factor, to decrease the residual noise. Companding is applied to the channel of the formant frequency, based on a speech presence indicator, to enhance the formants. Noise suppression is achieved while retaining weak speech components and avoiding residual-noise phenomena. Objective and subjective evaluation under various environmental conditions confirmed the improvement due to the proposed algorithm. We tested segmental SNR and the Log Likelihood Ratio (LLR), which have higher correlation with subjective measures; of the measures tested, segmental SNR has the highest and LLR the lowest correlation. In addition, we confirmed by spectrogram that the proposed method significantly reduces the residual noise and enhances the formants. A mean opinion score representing global perceived quality was also tested; the proposed method produced the highest-quality speech. The results show that the proposed speech enhancement algorithm is beneficial for hearing aid users in noisy environments.
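
    The spectral subtraction stage can be illustrated in its basic magnitude-domain form with an over-subtraction factor and a spectral floor, as sketched below; the companding and formant-enhancement parts of the proposed algorithm are not shown, and the parameter values are illustrative.

```python
import numpy as np

def spectral_subtract_frame(noisy_spec, noise_est, alpha=2.0, beta=0.01):
    """Subtract a scaled noise estimate from one STFT frame.

    noisy_spec: complex STFT frame; noise_est: estimated noise magnitude
    spectrum; alpha: over-subtraction factor; beta: spectral floor.
    """
    mag = np.abs(noisy_spec)
    phase = np.angle(noisy_spec)
    clean_mag = mag - alpha * noise_est             # subtract scaled noise estimate
    clean_mag = np.maximum(clean_mag, beta * mag)   # floor to limit musical noise
    return clean_mag * np.exp(1j * phase)           # reuse the noisy phase
```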

  • Design of M-Channel Perfect Reconstruction Filter Banks with IIR-FIR Hybrid Building Blocks

    Shunsuke IWAMURA  Taizo SUZUKI  Yuichi TANAKA  Masaaki IKEHARA  

     
    PAPER-Digital Signal Processing

      Page(s):
    1636-1643

    This paper discusses a new structure of M-channel IIR perfect reconstruction filter banks. A novel building block, defined as a cascade connection of IIR building blocks and FIR building blocks, is presented. An IIR building block is expressed in state-space representation, so a stable filter bank is easily obtained by placing the eigenvalues of the state transition matrix inside the unit circle. Owing to the cascade connection of building blocks, we are able to design a system with a larger number of free parameters while preserving stability. We introduce a condition under which the new building block is obtained without increasing the filter order despite the cascade connection. Simulation results show that this implementation achieves better stopband attenuation than conventional methods.
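
    The stability condition cited above, namely that all eigenvalues of the state transition matrix lie inside the unit circle, is straightforward to check numerically; the example matrix below is an arbitrary second-order section, not one of the paper's designs.

```python
import numpy as np

def is_stable(A, tol=1e-12):
    """A discrete-time state-space system x[n+1] = A x[n] + B u[n] is stable
    iff every eigenvalue of the state transition matrix A lies strictly
    inside the unit circle."""
    return bool(np.all(np.abs(np.linalg.eigvals(A)) < 1.0 - tol))

# Example: companion matrix of a 2nd-order section with poles at radius 0.9.
r, theta = 0.9, np.pi / 5
A = np.array([[2 * r * np.cos(theta), -r ** 2],
              [1.0, 0.0]])
print(is_stable(A))   # True
```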

  • VLSI Architecture for the Low-Computation Cycle and Power-Efficient Recursive DFT/IDFT Design

    Lan-Da VAN  Chin-Teng LIN  Yuan-Chu YU  

     
    PAPER-Digital Signal Processing

      Page(s):
    1644-1652

    In this paper, we propose a low-computation-cycle and power-efficient recursive discrete Fourier transform (DFT)/inverse DFT (IDFT) architecture adopting a hybrid of input strength reduction, the Chebyshev polynomial, and register-splitting schemes. Compared with existing recursive DFT/IDFT architectures, the proposed recursive architecture halves the number of computation cycles. Applying this low-computation-cycle architecture, we can double the throughput rate and the channel density without increasing the operating frequency of the dual-tone multi-frequency (DTMF) detector in the high-channel-density voice over packet (VoP) application. From the chip implementation results, the proposed architecture is capable of processing over 128 channels, and each channel consumes 9.77 µW under 1.2 V at 20 MHz in a TSMC 0.13 µm 1P8M CMOS process. The proposed VLSI implementation demonstrates the power-efficiency advantage of the low-computation-cycle architecture.
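
    As a point of reference, the classical recursive single-bin DFT used in DTMF detection is the Goertzel algorithm, whose second-order recursion coefficient 2cos(2πk/N) is a Chebyshev-style term of the kind alluded to above. The sketch below is the conventional algorithm, not the proposed low-computation-cycle architecture; the tone frequency and frame length follow common DTMF practice.

```python
import numpy as np

def goertzel_power(x, k):
    """Squared magnitude of the k-th DFT bin of x via the Goertzel recursion."""
    N = len(x)
    coeff = 2.0 * np.cos(2.0 * np.pi * k / N)
    s1 = s2 = 0.0
    for sample in x:
        s0 = sample + coeff * s1 - s2     # second-order recursion
        s2, s1 = s1, s0
    return s1 * s1 + s2 * s2 - coeff * s1 * s2

# DTMF-style example: check for a 770 Hz tone in an 8 kHz, 205-sample frame.
fs, N, f = 8000, 205, 770
t = np.arange(N) / fs
frame = np.sin(2 * np.pi * f * t)
k = int(round(f * N / fs))
print(goertzel_power(frame, k))
```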

  • A Parallel Implementation of the PBSGDS Method for Solving CBAU Optimization Problems

    Shieh-Shing LIN  

     
    PAPER-Systems and Control

      Page(s):
    1653-1660

    In previous research, we proposed a parallel block scaled gradient with decentralized step-size (PBSGDS) method. The method circumvents the difficulty of determining a step-size in a distributed computing environment and enables the proposed parallel algorithm to execute in a distributed computer network with a limited amount of data transfer. In this paper, we implement the parallel algorithm within two real Independent System Operator (ISO) networks, including homogeneous and heterogeneous PC-network environments, and demonstrate the computational efficiency and numerical stability through numerous simulation test results for solving a Convex Block Additive Unconstrained (CBAU) optimization problem. Furthermore, the test results show that the performance of the proposed parallel algorithm is even more attractive due to the asynchronous effect in the distributed computing environment.

  • A SPICE-Oriented Nonexistence Test for DC Solutions of Nonlinear Circuits

    Wataru KUROKI  Kiyotaka YAMAMURA  

     
    PAPER-Nonlinear Problems

      Page(s):
    1661-1668

    As a powerful computational test for nonexistence of a DC solution of a nonlinear circuit, the LP test is well-known. This test is useful for finding all solutions of nonlinear circuits; it is also useful for verifying the nonexistence of a DC operating point in a given region where operating points should not exist. However, the LP test has not been widely used in practical circuit simulation because the programming is not easy for non-experts or beginners. In this paper, we propose a new LP test that can be easily implemented on SPICE without programming. The proposed test is useful because we can easily check the nonexistence of a solution using SPICE only.

  • A Dual-Mode Bluetooth Transceiver with a Two-Point-Modulated Polar-Loop Transmitter and a Frequency-Offset-Compensated Receiver

    Takashi OSHIMA  Masaru KOKUBO  

     
    PAPER-Circuit Theory

      Page(s):
    1669-1678

    A complete dual-mode transceiver supporting both conventional GFSK-modulated Bluetooth and Medium-Rate π/4-DQPSK-modulated Bluetooth has been investigated and reported. The transmitter introduces a novel two-point-modulated polar-loop technique without global feedback to realize reduced power consumption, small chip area, and high modulation accuracy. The receiver shares all circuits between the two operating modes except the demodulators and also features a newly proposed cancellation technique for the carrier-frequency offset. The transceiver has been confirmed by system and circuit simulations to meet all the dual-mode Bluetooth specifications. The simulation results show that the transmit power can exceed 10 dBm while achieving a total power efficiency above 30% and an RMS DEVM of 0.050. It was also confirmed by simulation that the receiver is expected to attain a sensitivity of -85 dBm in both modes while satisfying the image-rejection and blocker-suppression specifications. The proposed transceiver will provide a low-cost, low-power single-chip RF-IC solution for next-generation Bluetooth communication.

  • A Fast Computational Optimization Method: Univariate Dynamic Encoding Algorithm for Searches (uDEAS)

    Jong-Wook KIM  Sang Woo KIM  

     
    PAPER-Numerical Analysis and Optimization

      Page(s):
    1679-1689

    This paper proposes a new computational optimization method modified from the dynamic encoding algorithm for searches (DEAS). Despite the successful optimization performance of DEAS on both benchmark functions and parameter identification, the problem of exponential computation time becomes serious as the problem dimension increases. The proposed optimization method, named univariate DEAS (uDEAS), is implemented specifically to reduce the computation time using a univariate local search scheme. To verify the algorithmic feasibility for global optimization, several test functions are optimized as benchmarks. Despite its simpler structure and shorter code, the function optimization results show that uDEAS is capable of fast and reliable global search even for high-dimensional problems.

  • Comparison of Maude and SAL by Conducting Case Studies Model Checking a Distributed Algorithm

    Kazuhiro OGATA  Kokichi FUTATSUGI  

     
    PAPER-Concurrent Systems

      Page(s):
    1690-1703

    SAL is a toolkit for analyzing transition systems, providing several different tools. Among the tools are a BDD-based symbolic model checker (SMC) and an SMT-based infinite bounded model checker (infBMC). The unique functionality provided by SAL is k-induction, which is supported by infBMC. Given appropriate lemmas, infBMC can prove automatically by k-induction that an infinite-state transition system satisfies invariant properties. Maude is a specification language and system based on membership equational logic and rewriting logic. Maude is equipped with an on-the-fly explicit-state model checker. The unique functionality provided by the Maude model checker is its support for inductive data types. We compare SAL (especially SMC and infBMC) and the Maude model checker by conducting case studies in which the Suzuki-Kasami distributed mutual exclusion algorithm is analyzed. The purpose of the comparison is to clarify some of the two tools' functionalities, especially the unique ones, through the case studies.

  • Delayed Perturbation Bounds for Receding Horizon Controls

    ChoonKi AHN  PyungSoo KIM  

     
    LETTER-Systems and Control

      Page(s):
    1704-1706

    This letter presents delayed perturbation bounds (DPBs) for receding horizon controls (RHCs) of continuous-time systems. The proposed DPBs are obtained easily by solving convex problems represented by linear matrix inequalities (LMIs). We show, by numerical examples, that the RHCs have larger DPBs than conventional linear quadratic regulators (LQRs).

  • Separatrix Conception for Trajectory Analysis of Analog Networks Design in Minimal Time

    Alexander M. ZEMLIAK  

     
    LETTER-VLSI Design Technology and CAD

      Page(s):
    1707-1712

    Various design trajectories arising from the new methodology of analog network design are analyzed. Several major criteria are suggested for the optimal selection of the initial approximation of the design process, permitting the minimization of computer time. The initial approximation point is selected with regard to the previously revealed acceleration effect of the design process. The concept of a separatrix is defined, making it possible to determine the optimal position of the initial approximation. The numerical results obtained for passive and active networks demonstrate the possibility of an optimal choice of the initial point in the design process.

  • Theoretical Investigation on Required Number of Bits for Monochrome Density Images on High-Luminance Electronic Display

    Junji SUZUKI  Isao FURUKAWA  

     
    LETTER-Image

      Page(s):
    1713-1716

    This paper proposes a design method for representing monochrome medical X-ray images on an electronic display. The required quantizing resolutions of the input density and the output voltage are theoretically clarified. The proposed method makes it easier to determine the required quantizing resolution, which is important in an X-ray diagnostic system.

  • Lossless Data Hiding Based on Companding Technique and Difference Expansion of Triplets

    ShaoWei WENG  Yao ZHAO  Jeng-Shyang PAN  

     
    LETTER-Image

      Page(s):
    1717-1718

    A reversible data hiding scheme based on the companding technique and the difference expansion (DE) of triplets is proposed in this paper. The companding technique is employed to increase the number of the expandable triplets. The capacity consumed by the location map recording the expanded positions is largely decreased. As a result, the hiding capacity is considerably increased. The experimental results reveal that high hiding capacity can be achieved at low embedding distortion.
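
    As background, the classical pair-based difference expansion on which triplet schemes build embeds one bit into a pixel pair as sketched below; overflow and expandability checks, the triplet extension, the companding step, and the location map of the proposed scheme are all omitted.

```python
def de_embed(x, y, bit):
    """Embed one bit into pixel pair (x, y) by classical difference expansion.

    Overflow/expandability checks are omitted for brevity.
    """
    l = (x + y) // 2          # integer average (kept invariant)
    h = x - y                 # difference
    h_exp = 2 * h + bit       # expanded difference carries the payload bit
    return l + (h_exp + 1) // 2, l - h_exp // 2

def de_extract(x_new, y_new):
    """Recover the embedded bit and the original pixel pair."""
    l = (x_new + y_new) // 2
    h_exp = x_new - y_new
    bit = h_exp & 1
    h = h_exp >> 1            # undo the expansion
    return (l + (h + 1) // 2, l - h // 2), bit
```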

  • Building Systolic Messy Arrays for Infinite Iterative Algorithms

    Makio ISHIHARA  

     
    LETTER-General Fundamentals and Boundaries

      Page(s):
    1719-1723

    The size-dependent array problem is a problem with systolic arrays in which the size of the array limits the size of the calculation, which in a do-loop structure corresponds to how many times the loop is repeated and how deeply the loops are nested. A systolic array cannot deal with larger calculations. For the size-dependent array problem, spiral systolic arrays have been studied so far. They have non-adjacent connections between PEs, such as loop paths for sending data back, so that data flows over the array independently of its size. This paper takes an approach to the problem without non-adjacent connections. It discusses systolic messy arrays for infinite iterative algorithms so that they are independent of the size of the calculation. First, a systolic messy array called the two-square shape is introduced; then its properties are summarized: memory function, cyclic addition, and cyclic multiplication. Finally, a way of building systolic messy arrays that calculate infinite iterative algorithms is illustrated with concrete examples such as an arithmetic progression, a geometric progression, N factorial, and the Fibonacci numbers.