
IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.40

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • CiteScore

    1.1


Volume E96-A No.8  (Publication Date: 2013/08/01)

    Regular Section
  • Two Dimensional M-Channel Non-separable Filter Banks Based on Cosine Modulated Filter Banks with Diagonal Shifts

    Taichi YOSHIDA  Seisuke KYOCHI  Masaaki IKEHARA  

     
    PAPER-Digital Signal Processing

      Page(s):
    1685-1694

    In this paper, we propose a new class of two-dimensional (2D) M-channel (M-ch) non-separable filter banks (FBs) based on cosine modulated filter banks (CMFBs) via a new diagonal modulation scheme. Many 2D non-separable CMFBs have been proposed, but efficient direction-selective CMFBs have not yet been realized. Thanks to the new modulation with diagonal shifts, the proposed CMFBs achieve several frequency supports, including direction-selective ones that cannot be realized by conventional CMFBs. In a simulation, we show design examples of the proposed CMFBs and their various directional frequency supports.
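
    As background for the modulation above, the sketch below builds a standard one-dimensional cosine modulated filter bank from a lowpass prototype. It is only the conventional 1D building block, not the paper's 2D diagonally shifted modulation, and the Hanning prototype is a placeholder assumption rather than an optimized design.

    import numpy as np

    def cmfb_analysis_filters(prototype, M):
        """One common 1D cosine modulation: h_k[n] = 2 p[n] cos((pi/M)(k+0.5)(n-(N-1)/2) + (-1)^k pi/4)."""
        N = len(prototype)
        n = np.arange(N)
        filters = []
        for k in range(M):
            phase = ((-1) ** k) * np.pi / 4.0
            h_k = 2.0 * prototype * np.cos((np.pi / M) * (k + 0.5) * (n - (N - 1) / 2.0) + phase)
            filters.append(h_k)
        return np.array(filters)

    M = 8
    prototype = np.hanning(4 * M)        # placeholder lowpass prototype, length 4M
    prototype /= prototype.sum()
    H = cmfb_analysis_filters(prototype, M)
    print(H.shape)                       # (8, 32): one bandpass filter per channel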

  • Stochastic Asymptotic Stabilizers for Deterministic Input-Affine Systems Based on Stochastic Control Lyapunov Functions

    Yuki NISHIMURA  Kanya TANAKA  Yuji WAKASA  Yuh YAMASHITA  

     
    PAPER-Systems and Control

      Page(s):
    1695-1702

    In this paper, a stochastic asymptotic stabilization method is proposed for deterministic input-affine control systems, which are randomized by including Gaussian white noise in the control inputs. A sufficient condition on the diffusion coefficients is derived so that stochastic control Lyapunov functions exist for the systems. To illustrate the usefulness of the sufficient condition, the authors propose a stochastic continuous feedback law that makes the origin of the Brockett integrator globally asymptotically stable in probability.
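
    To make the setting concrete, here is a minimal Euler-Maruyama sketch for an input-affine system dx = (f(x) + g(x)u) dt in which Gaussian white noise is injected through the control input. The drift, the deterministic feedback, and the noise gain are illustrative placeholders, not the feedback law constructed in the paper.

    import numpy as np

    def simulate(f, g, u_det, sigma, x0, dt=1e-3, steps=20000, seed=0):
        """Euler-Maruyama for dx = (f(x) + g(x) u_det(x)) dt + g(x) sigma dW:
        a deterministic input-affine system randomized through its control input."""
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        traj = [x.copy()]
        for _ in range(steps):
            dW = rng.normal(0.0, np.sqrt(dt))
            drift = f(x) + g(x) * u_det(x)
            x = x + drift * dt + g(x) * sigma * dW   # noise enters through the same input channel
            traj.append(x.copy())
        return np.array(traj)

    # Illustrative scalar system: dx = (x + u) dt with u = -2x and noise gain sigma = 0.5.
    f = lambda x: x
    g = lambda x: 1.0
    u = lambda x: -2.0 * x
    traj = simulate(f, g, u, sigma=0.5, x0=[1.0])
    print(traj[-1])                                  # trajectory ends near the origin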

  • Physical Architecture and Model-Based Evaluation of Electric Power System with Multiple Homes

    Yoshihiko SUSUKI  Ryoya KAZAOKA  Takashi HIKIHARA  

     
    PAPER-Nonlinear Problems

      Page(s):
    1703-1711

    This paper proposes a physical architecture for an electric power system with multiple homes. Here a home is a unit of small-scale power system that includes a local energy source, energy storage, loads, power conversion circuits, and control systems. The entire power system consists of multiple homes that are interconnected via a distribution network and connected to the commercial power grid. The interconnection is achieved autonomously with a recently developed technology of grid-connected inverters. A mathematical model of the slow dynamics of the power system is also developed in this paper; it enables the evaluation of steady-state and transient characteristics of such power systems.

  • Heuristic and Exact Resource Binding Algorithms for Storage Optimization Using Flip-Flops and Latches

    Keisuke INOUE  Mineo KANEKO  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    1712-1722

    A mixed storage-type design using flip-flops and latches (FF/latch-based design) has advantages, such as in area and power, over a single storage-type design (only flip-flops or only latches). Considering FF/latch-based design during high-level synthesis is necessary because the resource binding process significantly affects the quality of the resulting circuits. One fundamental aspect of FF/latch-based design is that different resource binding solutions can lead to different numbers of latch-replaceable registers. Therefore, as a first step, this paper addresses a datapath design problem in which resource binding and the selection of storage types for registers are optimized simultaneously to minimize datapath area (i.e., to maximize latch replacement). An efficient algorithm based on compatibility path decomposition and an exact approach based on integer linear programming are presented. Experiments confirm the effectiveness of the proposed approaches.

  • A Fast Power Estimation Method for Content Addressable Memory by Using SystemC Simulation Environment

    Kun-Lin TSAI  I-Jui TUNG  Feipei LAI  

     
    PAPER-VLSI Design Technology and CAD

      Page(s):
    1723-1729

    Content addressable memory (CAM) is widely used for fast lookup-table data searching, but it often consumes considerable power. Moreover, designing a suitable CAM architecture for a specific application also takes considerable time, since behavioral simulation is often done at the transistor level. SystemC is a system-level modeling language and simulation platform that provides high simulation efficiency for hardware/software co-design. Unfortunately, SystemC does not provide a function for estimating the power dissipation of a structural design. In this paper, a SystemC-based fast CAM power estimation method is presented for estimating the power dissipation of the match-line circuit, the search-line circuit, and the storage cell array of a CAM in the early design stage. Mathematical equations and behavioral patterns are used as the inputs of the power estimation model. Simulation results based on 10 MiBench benchmarks show that the simulation time of the proposed method is on average 1233 times faster than that of the HSPICE simulator, with only a 3.51% error rate.
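
    The abstract above drives a power model with behavioral search patterns instead of transistor-level simulation. The miniature sketch below illustrates that style of estimation: count search-line toggles and match-line discharges from a stream of search keys and weight them by per-event energy constants. The event model and the energy numbers are placeholders, not the paper's characterized model.

    # Behavioral CAM power sketch; all energy constants are arbitrary assumptions.
    E_SEARCHLINE_TOGGLE = 1.0     # energy per search-line bit toggle
    E_MATCHLINE_DISCHARGE = 4.0   # energy per mismatching (discharged) match line

    def cam_search_energy(stored_words, search_keys):
        energy, prev_key = 0.0, 0
        for key in search_keys:
            toggles = bin(prev_key ^ key).count("1")               # search-line bits that change
            mismatches = sum(1 for w in stored_words if w != key)  # match lines that discharge
            energy += toggles * E_SEARCHLINE_TOGGLE + mismatches * E_MATCHLINE_DISCHARGE
            prev_key = key
        return energy

    table = [0x1F, 0x2A, 0x3C, 0x05]
    keys = [0x2A, 0x2B, 0x05, 0x05]
    print(cam_search_energy(table, keys))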

  • Dynamic Fault Tree Analysis for Systems with Nonexponential Failure Components

    Tetsushi YUGE  Shigeru YANAGI  

     
    PAPER-Reliability, Maintainability and Safety Analysis

      Page(s):
    1730-1736

    A method is proposed for calculating the top event probability of a fault tree that includes dynamic gates and repeated events and in which the occurrences of basic events follow nonexponential distributions. The method is based on the Bayesian network formulation for a dynamic fault tree (DFT) proposed by Yuge and Yanagi [1]. That formulation has difficulty calculating a sequence probability when components have nonexponential failure distributions. We propose an alternative method to obtain the sequence probability in this paper. First, a method for the case of the Erlang distribution is discussed. Then, Tijms's fitting procedure is applied to deal with a general distribution: given the mean and standard deviation, the procedure yields a mixture of two Erlang distributions as an approximation of the general distribution. A numerical example shows that our method works well for complex systems.
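
    The two-moment fitting step mentioned above can be made concrete. The sketch below implements the standard Tijms procedure for a distribution whose squared coefficient of variation cv^2 is at most 1: choose k with 1/k <= cv^2 <= 1/(k-1) and fit a mixture of Erlang(k-1) and Erlang(k) with a common rate. Whether the paper uses exactly this parameterization is an assumption.

    import math

    def tijms_erlang_mixture(mean, std):
        """Fit p*Erlang(k-1, mu) + (1-p)*Erlang(k, mu) to a given mean and standard deviation
        (standard Tijms two-moment fit, valid for cv^2 <= 1)."""
        cv2 = (std / mean) ** 2
        if cv2 > 1.0:
            raise ValueError("this branch of the fit assumes cv^2 <= 1")
        k = math.ceil(1.0 / cv2)                   # smallest k with 1/k <= cv^2
        p = (k * cv2 - math.sqrt(k * (1 + cv2) - k * k * cv2)) / (1 + cv2)
        mu = (k - p) / mean
        return k, p, mu

    k, p, mu = tijms_erlang_mixture(mean=10.0, std=6.0)
    fit_mean = (k - p) / mu                        # mixture mean
    fit_var = k * (k + 1 - 2 * p) / mu ** 2 - fit_mean ** 2
    print(k, p, mu)
    print(fit_mean, math.sqrt(fit_var))            # reproduces 10.0 and 6.0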

  • Scalar Linear Solvability of Matroidal Error Correction Network

    Hang ZHOU  Xubo ZHAO  Xiaoyuan YANG  

     
    PAPER-Coding Theory

      Page(s):
    1737-1743

    In this paper, we further study linear network error correction codes on a multicast network and attempt to establish a connection between linear network error correction codes and representable matroids. We propose a similar but more precise definition of a matroidal error correction network, a concept introduced by K. Prasad et al. Moreover, we extend this concept to the more general situation in which the given linear network error correction codes have different error-correcting capacities at different sinks. More importantly, using a different method, we show that a multicast error correction network is scalar-linearly solvable if and only if it is a matroidal error correction network.

  • Proximity Based Object Segmentation in Natural Color Images Using the Level Set Method

    Tran Lan Anh NGUYEN  Gueesang LEE  

     
    PAPER-Image

      Page(s):
    1744-1751

    Segmenting indicated objects from natural color images remains a challenging problem for researchers in image processing. In this paper, a novel level set approach is presented to address this issue. In the segmentation algorithm, a contour lying inside a particular region of the object of interest is first initialized by a user. The level set model is then applied to extract the object, of arbitrary shape and size, containing this initial region. Constrained by the position of the initial contour, the proposed framework combines two energy terms, a local energy and a global energy, in its energy functional to drive the contour toward object boundaries. These energy terms are based on graph partitioning active contour models and on a Bhattacharyya flow, respectively; the flow measures dissimilarities between the region of interest and its surroundings. Experimental results on our image collection show that the suggested method yields accurate performance comparable to or better than a number of segmentation algorithms when applied to various natural images.

  • Separate Color Correction for Tone Compression in HDR Image Rendering

    Hwi-Gang KIM  Sung-Hak LEE  

     
    PAPER-Image

      Page(s):
    1752-1758

    Many high-dynamic-range (HDR) rendering techniques have been developed. Of these, the image color appearance model, iCAM, is a representative HDR image rendering algorithm. HDR rendering methods normally require a tone compression process and include many color space transformations, from the RGB signal of an input image to the RGB signal of output devices, for the realistic depiction of a captured image. iCAM06, a refined iCAM, also contains a tone compression step and several color space conversions for HDR image reproduction. However, the tone compression and frequent color space changes in iCAM06 cause color distortion, such as hue shifts and saturation reduction in the output image. To solve these problems, this paper proposes a separate color correction method that has no effect on the output luminance values, controlling only the saturation and hue of the color attributes. The color saturation of the output image is compensated using a compensation gain, and the hue shift is corrected using a rotation matrix. The separate color correction reduces the color changes present in iCAM06. The compensation gain and rotation matrix are formulated based on the relationship between the input and output tristimulus values through the tone compression. Experimental results show that the revised iCAM06 with the proposed method outperforms the default iCAM06.
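
    The compensation-gain and rotation-matrix corrections described above can be illustrated generically: scale the chroma and rotate the hue of an opponent color pair while leaving the lightness channel untouched. The color space, gain, and angle below are placeholders; in the paper, the gain and rotation are derived from the tone compression itself.

    import numpy as np

    def correct_color(lab, gain, hue_shift_deg):
        """Apply a saturation gain and a hue rotation to the (a, b) opponent pair,
        leaving the lightness channel L (and hence luminance) untouched."""
        L, a, b = lab[..., 0], lab[..., 1], lab[..., 2]
        theta = np.deg2rad(hue_shift_deg)
        rot = np.array([[np.cos(theta), -np.sin(theta)],
                        [np.sin(theta),  np.cos(theta)]])
        ab = np.stack([a, b], axis=-1) @ rot.T     # hue correction via rotation matrix
        ab *= gain                                 # saturation correction via compensation gain
        return np.concatenate([L[..., None], ab], axis=-1)

    # Hypothetical example: restore 15% of lost saturation and undo a 5-degree hue shift.
    pixel = np.array([[55.0, 20.0, -10.0]])
    print(correct_color(pixel, gain=1.15, hue_shift_deg=-5.0))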

  • Comparative Study on Required Bit Depth of Gamma Quantization for Digital Cinema Using Contrast and Color Difference Sensitivities

    Junji SUZUKI  Isao FURUKAWA  

     
    PAPER-Image

      Page(s):
    1759-1767

    A specification for digital cinema systems, which handle movies digitally from production to delivery as well as projection on screens, is recommended by DCI (Digital Cinema Initiatives), and systems based on this specification have already been developed and installed in theaters. The system parameters that play an important role in determining image quality include image resolution, quantization bit depth, color space, gamma characteristics, and data compression methods. This paper comparatively discusses the relation between required bit depth and gamma quantization, using a human visual contrast sensitivity model for grayscale images and two color difference models for color images. The required bit depth obtained from a contrast sensitivity function for grayscale images decreases monotonically as the gamma value increases, while it has a minimum at a gamma of 2.9 to 3.0 for both the CIE 1976 L*a*b* and CIEDE2000 color difference models. It is also shown that the bit depth derived from the contrast sensitivity function is one bit greater than that derived from the color difference models at a gamma value of 2.6. Moreover, a comparison between the color differences computed with CIE 1976 L*a*b* and with CIEDE2000 leads to the same result from the viewpoint of the required bit depth for digital cinema systems.
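
    The color-difference side of the comparison can be sketched as follows: quantize a neutral gray ramp with a given gamma and bit depth, compute the CIE 1976 L* step between adjacent code values, and take the smallest bit depth whose largest step stays below a just-noticeable threshold. The 1.0 dE threshold and the restriction to a neutral ramp are simplifying assumptions; the paper also uses CIEDE2000 and a contrast sensitivity function.

    import numpy as np

    def lstar(Y):
        """CIE 1976 lightness L* of relative luminance Y in [0, 1] (white point Y_n = 1)."""
        eps = (6.0 / 29.0) ** 3
        f = np.where(Y > eps, np.cbrt(Y), Y / (3 * (6.0 / 29.0) ** 2) + 4.0 / 29.0)
        return 116.0 * f - 16.0

    def max_gray_step(bits, gamma):
        """Largest L* difference between adjacent codes of a gamma-quantized gray ramp."""
        codes = np.arange(2 ** bits) / (2 ** bits - 1.0)
        return np.max(np.diff(lstar(codes ** gamma)))

    def required_bits(gamma, threshold=1.0):
        for bits in range(6, 17):
            if max_gray_step(bits, gamma) <= threshold:   # assumed 1 dE just-noticeable step
                return bits
        return None

    for gamma in (2.2, 2.6, 3.0):
        print(gamma, required_bits(gamma))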

  • Test-Retest Reliability and Criterion-Related Validity of the Implicit Association Test for Measuring Shyness

    Tsutomu FUJII  Takafumi SAWAUMI  Atsushi AIKAWA  

     
    PAPER-Human Communications

      Page(s):
    1768-1774

    This study investigated the test-retest reliability and criterion-related validity of an Implicit Association Test (IAT [1]) developed for measuring shyness among Japanese people. The IAT has been used to measure implicit stereotypes as well as self-concepts such as implicit shyness and implicit self-esteem. We administered the shyness IAT and the self-esteem IAT to participants (N = 59) on two occasions separated by a one-week interval (Time 1 and Time 2) and examined test-retest reliability by correlating the shyness IAT scores between the two time points. We also assessed criterion-related validity by calculating the correlation between implicit shyness and implicit self-esteem. The results indicated a sufficiently high positive correlation between the implicit shyness scores over the one-week interval (r = .67, p < .01). Moreover, a strong negative correlation was found between implicit shyness and implicit self-esteem (r = -.72, p < .01). These results confirm the test-retest reliability and criterion-related validity of the Japanese version of the shyness IAT, supporting its validity for assessing implicit shyness.

  • Learning of Simple Dynamic Binary Neural Networks

    Ryota KOUZUKI  Toshimichi SAITO  

     
    PAPER-Neural Networks and Bioengineering

      Page(s):
    1775-1782

    This paper studies a simple dynamic binary neural network characterized by the signum activation function, ternary weighting parameters, and integer threshold parameters. The network can be regarded as a digital version of the recurrent neural network and can output a variety of binary periodic orbits. The network dynamics can be reduced to a return map from a set of lattice points to itself. In order to store a desired periodic orbit, we present two learning algorithms, based on correlation learning and on a genetic algorithm. The algorithms are applied to three examples, including a periodic orbit corresponding to the switching signal of a DC-AC inverter and an artificial periodic orbit. Using the return map, we investigate the storage of the periodic orbits and the stability of the stored orbits.
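
    The network class above is concrete enough to sketch: a vector of +/-1 bits updated by the signum of a ternary-weighted sum minus an integer threshold, with weights obtained by sign-thresholding a correlation sum over the target orbit. The rule below is a generic correlation (Hebbian-style) version and only approximates the learning algorithms compared in the paper.

    import numpy as np

    def correlation_learn(orbit, tau=0):
        """Ternary weights from a sign-thresholded correlation over consecutive orbit points x(t) -> x(t+1)."""
        orbit = np.asarray(orbit)
        corr = sum(np.outer(orbit[(t + 1) % len(orbit)], orbit[t]) for t in range(len(orbit)))
        return (np.sign(corr) * (np.abs(corr) > tau)).astype(int)   # entries in {-1, 0, +1}

    def step(W, x, threshold):
        y = W @ x - threshold                      # integer thresholds
        return np.where(y >= 0, 1, -1)             # signum activation (zero mapped to +1)

    # Hypothetical 4-bit period-3 orbit to be stored.
    orbit = [np.array(v) for v in ([1, -1, -1, 1], [-1, 1, -1, -1], [-1, -1, 1, 1])]
    W = correlation_learn(orbit)
    x = orbit[0]
    for _ in range(6):                             # iterate the learned map from the first orbit point
        x = step(W, x, np.zeros(4, dtype=int))
        print(x)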

  • The Liveness of WS3PR: Complexity and Decision

    GuanJun LIU  ChangJun JIANG  MengChu ZHOU  Atsushi OHTA  

     
    PAPER-Concurrent Systems

      Page(s):
    1783-1793

    Petri nets are a formal model widely applied to concurrent systems involving resource allocation, owing to their ability to naturally describe resource allocation and precisely characterize deadlock. The Weighted System of Simple Sequential Processes with Resources (WS3PR) is an important subclass of Petri nets that can model many resource allocation systems in which 1) multiple processes may run in parallel and 2) each execution step of each process may use multiple units of a single resource type but cannot use multiple resource types. We first prove that the liveness problem of WS3PR is co-NP-hard, via a reduction from the partition problem. Furthermore, we present a necessary and sufficient condition for the liveness of a WS3PR based on two new concepts, Structurally Circular Wait (SCW) and Blocking Marking (BM): a WS3PR is live iff no SCW has a BM. A sufficient condition guaranteeing that an SCW has no BM is also proposed. Additionally, we show some advantages of using SCW to analyze deadlock compared with siphon-based approaches, and discuss the relation between SCW and siphons. These results are valuable for further research on deadlock prevention and avoidance for WS3PR.

  • A Control Method of Dynamic Selfish Routing Based on a State-Dependent Tax

    Takafumi KANAZAWA  Takurou MISAKA  Toshimitsu USHIO  

     
    PAPER-Concurrent Systems

      Page(s):
    1794-1802

    A selfish routing game is a simple model of selfish behavior in networks. Braess's paradox is said to occur in a selfish routing game if the equilibrium flow achieved by the players' selfish behavior is not the optimal, minimum-latency flow. In order to make the minimum latency flow a Nash equilibrium, a marginal cost tax has been proposed. Braess graphs have also been proposed to discuss Braess's paradox. In a large population of selfish players, conflicts between the purposes of each player and those of the population cause social dilemmas. In game theory, a capitation tax and/or a subsidy has been introduced to resolve such social dilemmas, and players' dynamical behavior has been formulated by replicator dynamics. In this paper, we formulate replicator dynamics on Braess graphs and investigate the stability of the minimum latency flow with and without the marginal cost tax. We also show that the marginal cost tax introduces an additional latency. To resolve this problem, we extend the capitation tax and the subsidy to a state-dependent tax and apply it to the stabilization problem of the minimum latency flow.
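
    The interplay of the Braess network, replicator dynamics, and the marginal cost tax can be sketched numerically. The latencies below form the textbook four-node Braess example, the replicator update and the marginal cost tax x_e * l_e'(x_e) are the standard definitions, and the state-dependent tax proposed in the paper is not reproduced here.

    import numpy as np

    # Classic Braess network: per-edge latency and its derivative as functions of edge flow.
    latency = [lambda f: f, lambda f: 1.0, lambda f: 1.0, lambda f: f, lambda f: 0.0]
    lat_deriv = [lambda f: 1.0, lambda f: 0.0, lambda f: 0.0, lambda f: 1.0, lambda f: 0.0]
    routes = [[0, 1], [2, 3], [0, 4, 3]]           # A-C-B, A-D-B, and the zigzag A-C-D-B

    def route_costs(shares, marginal_tax):
        flow = np.zeros(5)
        for share, route in zip(shares, routes):
            for e in route:
                flow[e] += share
        cost = np.array([latency[e](flow[e]) for e in range(5)])
        if marginal_tax:                           # marginal cost tax: x_e * l_e'(x_e) on each edge
            cost += np.array([flow[e] * lat_deriv[e](flow[e]) for e in range(5)])
        return np.array([sum(cost[e] for e in r) for r in routes])

    def replicator(shares, marginal_tax, dt=0.01, steps=20000):
        x = np.array(shares, dtype=float)
        for _ in range(steps):
            c = route_costs(x, marginal_tax)
            x += dt * x * (x @ c - c)              # low-cost routes grow, high-cost routes shrink
            x = np.clip(x, 1e-9, None)
            x /= x.sum()
        return x

    print(replicator([1/3, 1/3, 1/3], marginal_tax=False))  # flow piles onto the zigzag route (paradox)
    print(replicator([1/3, 1/3, 1/3], marginal_tax=True))   # tax steers flow back to the two direct routes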

  • Coherent Doppler Processing Using Interpolated Doppler Data in Bistatic Radar

    Jaehyuk YOUN  Hoongee YANG  Yongseek CHUNG  Wonzoo CHUNG  Myungdeuk JEONG  

     
    LETTER-Digital Signal Processing

      Page(s):
    1803-1807

    In order to execute coherent Doppler processing in a high range-rate scenario, whether for detection, estimation, or imaging, the range walk embedded in the target return should be compensated first. In the case of a bistatic radar geometry, where the transmitter, the receiver, and the target may all be moving, the extent of range walk depends on their relative positions and velocities. This paper presents a coherent Doppler processing algorithm for target detection and Doppler frequency estimation under a bistatic radar geometry. The algorithm is based on the assumption that the target has a constant Doppler frequency during a coherent processing interval (CPI). We first show under what conditions this assumption is valid. We then develop the algorithm, along with implementation procedures in which the region of range walk, called a window, is manipulated. Finally, the performance of the proposed algorithm is examined through simulations.

  • Basic Dynamics of the Digital Logistic Map

    Akio MATOBA  Narutoshi HORIMOTO  Toshimichi SAITO  

     
    LETTER-Nonlinear Problems

      Page(s):
    1808-1811

    This letter studies a digital return map, that is, a mapping from a set of lattice points to itself. The digital map can exhibit various periodic orbits. As a typical example, we present the digital logistic map, derived from the logistic map. Two fundamental results are shown. When the logistic map has a unique periodic orbit, the digital map can have plural periodic orbits. When the logistic map has an unstable period-3 orbit that causes chaos, the digital map can have a stable period-3 orbit with various domains of attraction.
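
    A minimal version of the construction studied above: restrict the logistic map to a uniform lattice of points on [0, 1] by rounding its output back to the lattice, then enumerate the periodic orbits of the resulting finite map. The lattice size and parameter value are arbitrary illustrative choices.

    def digital_logistic_map(num_points, a):
        """Return map on the lattice {0, 1/(N-1), ..., 1}: apply the logistic map, round back to the lattice."""
        N = num_points
        f = lambda x: a * x * (1.0 - x)
        return [int(round(f(i / (N - 1)) * (N - 1))) for i in range(N)]

    def periodic_orbits(next_point):
        """Every trajectory of a finite map falls onto a cycle; collect the distinct cycles."""
        orbits = set()
        for start in range(len(next_point)):
            path, x = [], start
            while x not in path:
                path.append(x)
                x = next_point[x]
            cycle = path[path.index(x):]
            shift = cycle.index(min(cycle))        # canonical rotation so duplicates collapse
            orbits.add(tuple(cycle[shift:] + cycle[:shift]))
        return sorted(orbits, key=len)

    m = digital_logistic_map(num_points=64, a=3.9)  # a = 3.9: a chaotic parameter of the real-valued map
    for orbit in periodic_orbits(m):
        print(len(orbit), orbit)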

  • A Memory Access Decreased Decoding Scheme for Double Binary Convolutional Turbo Code

    Ming ZHAN  Jun WU  Liang ZHOU  Zhenyu ZHOU  

     
    LETTER-Coding Theory

      Page(s):
    1812-1816

    To decrease the memory access of the decoder for double binary convolutional turbo codes (DB CTC), an iterative decoding scheme is proposed. Instead of accessing all of the backward state metrics from the state metric cache (SMC), part of them are computed by a recalculation unit (RU) in the forward direction. Analysis and simulations show that both the amount of memory access and the size of the SMC are reduced by about 45%. Moreover, combined with a scaling technique, the proposed scheme achieves decoding performance close to that of the well-known Log-MAP algorithm.
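
    The store-versus-recompute trade-off behind this letter can be shown generically: cache only every K-th backward metric and recompute the missing ones on demand from the nearest cached value. The toy recursion below stands in for the actual backward state-metric recursion of the DB CTC trellis, which is not reproduced here.

    # Toy backward recursion beta[t] = g(beta[t+1], t) with checkpointed storage.
    def g(beta_next, t):
        return 0.5 * beta_next + t                 # stand-in for the real backward metric update

    def backward_metrics(T, beta_T):
        beta = {T: beta_T}
        for t in range(T - 1, -1, -1):
            beta[t] = g(beta[t + 1], t)
        return beta                                # full storage: T + 1 metrics in the cache

    def cached_lookup(T, beta_T, K):
        full = backward_metrics(T, beta_T)
        cache = {t: v for t, v in full.items() if (T - t) % K == 0}   # roughly 1/K of the storage

        def recall(t):
            t_cp = t + (T - t) % K                 # nearest cached index at or above t
            b = cache[t_cp]
            for s in range(t_cp - 1, t - 1, -1):   # recompute the gap instead of reading it
                b = g(b, s)
            return b

        return recall

    full = backward_metrics(16, 1.0)
    recall = cached_lookup(16, 1.0, K=4)
    print(all(abs(recall(t) - full[t]) < 1e-9 for t in range(17)))    # True: same metrics, smaller cache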

  • Bit-Plane Coding of Lattice Codevectors

    Wisarn PATCHOO  Thomas R. FISCHER  

     
    LETTER-Coding Theory

      Page(s):
    1817-1820

    In a sign-magnitude representation of binary lattice codevectors, only a few least significant bit-planes are constrained by the structure of the lattice, while there is no restriction on the more significant bit-planes. Hence, any convenient bit-plane coding method can be used to encode the lattice codevectors, with modification required only for the lattice-defining, least significant bit-planes. Simple encoding methods for the lattice-defining bit-planes of the D4, RE8, and 16-dimensional Barnes-Wall lattices are described. Simulation results for the encoding of a uniform source show that standard bit-plane coding together with the proposed encoding provides about the same performance as integer lattice vector quantization when the bit-stream is truncated. When the entire bit-stream is fully decoded, the granular gain of the lattice is realized.
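
    For the D4 lattice (integer vectors whose coordinates sum to an even number), the lattice constraint shows up only in the least significant magnitude bit-plane: its four bits must have even parity, so that plane carries 3 bits of information rather than 4. The sketch below quantizes to D4 and encodes/decodes that plane; the RE8 and Barnes-Wall cases from the paper are not covered.

    import numpy as np

    def quantize_D4(x):
        """Nearest D4 point (integer vector with even coordinate sum): round, then fix the parity
        by adjusting the coordinate with the largest rounding error (a standard D_n decoding trick)."""
        r = np.round(x).astype(int)
        if r.sum() % 2:
            i = np.argmax(np.abs(x - r))
            r[i] += 1 if x[i] > r[i] else -1
        return r

    def encode_lsb_plane(v):
        """The 4 magnitude LSBs of a D4 vector always have even parity, so 3 bits suffice."""
        lsb = np.abs(v) & 1
        assert lsb.sum() % 2 == 0                  # the lattice-defining constraint
        return tuple(int(b) for b in lsb[:3])      # drop the redundant parity bit

    def decode_lsb_plane(bits):
        return np.array(list(bits) + [sum(bits) % 2])   # restore the dropped bit from parity

    v = quantize_D4(np.array([0.7, -1.2, 2.4, 0.1]))
    print(v, encode_lsb_plane(v), decode_lsb_plane(encode_lsb_plane(v)))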

  • Pixel-Wise Noise Level Estimation for Images

    Yusuke AMANO  Gosuke OHASHI  Yoshifumi SHIMODAIRA  

     
    LETTER-Image

      Page(s):
    1821-1823

    The purpose of this study is to estimate the noise level of every pixel in a single noisy image onto which independent and non-identically distributed random variables with a normal distribution are superimposed. The method builds a set of pixels in the local region that are similar to the pixel of interest, using an approximate function of the noise variance, and estimates the noise level from this set. The results show that the proposed method is effective in estimating the noise level of every pixel for a variety of images.
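
    A generic form of the idea above: for each pixel, gather the pixels in a local window whose intensities are close to the pixel of interest and use their spread as a per-pixel noise level estimate. The window size, similarity threshold, and plain standard-deviation estimator are placeholders; the paper's approximate noise-variance function is not reproduced.

    import numpy as np

    def pixelwise_noise_level(img, half_window=3, similarity=10.0):
        """Per-pixel noise estimate: standard deviation of locally similar pixels."""
        img = img.astype(float)
        H, W = img.shape
        sigma = np.zeros_like(img)
        for y in range(H):
            for x in range(W):
                y0, y1 = max(0, y - half_window), min(H, y + half_window + 1)
                x0, x1 = max(0, x - half_window), min(W, x + half_window + 1)
                patch = img[y0:y1, x0:x1]
                similar = patch[np.abs(patch - img[y, x]) <= similarity]   # pixels similar to the center
                sigma[y, x] = similar.std()
        return sigma

    rng = np.random.default_rng(0)
    noisy = np.full((32, 32), 128.0) + rng.normal(0.0, 5.0, (32, 32))      # flat image, sigma = 5 noise
    print(pixelwise_noise_level(noisy).mean())     # a little below 5 because of the similarity cut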

  • Quality Evaluation of Decimated Images Using Visual Difference Predictor

    Ryo MATSUOKA  Takao JINNO  Masahiro OKUDA  

     
    LETTER-Image

      Page(s):
    1824-1827

    This paper proposes a method for evaluating the visual differences caused by decimation. In many applications it is important to evaluate the visual difference between two images. Many image assessment methods utilize a model of the human visual system (HVS), such as the visual difference predictor (VDP) and the Sarnoff visual discrimination model. In this paper, we extend and elaborate on a conventional image assessment method for the purpose of evaluating the visual difference caused by image decimation. Our method matches actual human evaluation more closely and requires less computation than the conventional method.