IEICE TRANSACTIONS on Fundamentals

  • Impact Factor

    0.48

  • Eigenfactor

    0.003

  • Article Influence

    0.1

  • CiteScore

    1.1

Volume E96-A No.6  (Publication Date:2013/06/01)

    Special Section on Discrete Mathematics and Its Applications
  • FOREWORD

    Hisashi KOGA  

     
    FOREWORD

      Page(s):
    1023-1023
  • On the Zeta Function of a Periodic-Finite-Type Shift

    Akiko MANADA  Navin KASHYAP  

     
    PAPER

      Page(s):
    1024-1031

    Periodic-finite-type shifts (PFT's) are sofic shifts which forbid the appearance of finitely many pre-specified words in a periodic manner. The class of PFT's strictly includes the class of shifts of finite type (SFT's). The zeta function of a PFT is a generating function for the number of periodic sequences in the shift. For a general sofic shift, there exists a formula, attributed to Manning and Bowen, which computes the zeta function of the shift from certain auxiliary graphs constructed from a presentation of the shift. In this paper, we derive an interesting alternative formula computable from certain “word-based graphs” constructed from the periodically-forbidden word description of the PFT. The advantages of our formula over the Manning-Bowen formula are discussed.

  • A Compact Encoding of Rectangular Drawings with Edge Lengths

    Shin-ichi NAKANO  Katsuhisa YAMANAKA  

     
    PAPER

      Page(s):
    1032-1035

    A rectangular drawing is a plane drawing of a graph in which every face is a rectangle. Rectangular drawings have applications in floorplans, which may have a huge number of faces, so a compact code to store the drawings is desired. The most compact code for rectangular drawings needs at most 4f-4 bits, where f is the number of inner faces of the drawing. The code stores only the graph structure of rectangular drawings, so the length of each edge is not encoded. A grid rectangular drawing is a rectangular drawing in which each vertex has integer coordinates. To store grid rectangular drawings, we need to store some information on lengths or coordinates. One can store a grid rectangular drawing by the code for rectangular drawings together with the width and height of each inner face. Such a code needs 4f-4 + f⌈log W⌉ + f⌈log H⌉ + o(f) + o(W) + o(H) bits, where W and H are the maximum width and the maximum height of inner faces, respectively. In this paper we design a simple and compact code for grid rectangular drawings. The code needs 4f-4 + (f+1)⌈log L⌉ + o(f) + o(L) bits for each grid rectangular drawing, where L is the maximum length of edges in the drawing. Note that L ≤ max{W,H} holds. Our encoding and decoding algorithms run in O(f) time.
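
    As a concrete comparison of the two code lengths, the short Python sketch below (hypothetical helper names, not from the paper) evaluates the face-based count 4f-4 + f⌈log W⌉ + f⌈log H⌉ against the edge-length-based count 4f-4 + (f+1)⌈log L⌉, ignoring the lower-order o(·) terms.

        import math

        def face_based_bits(f, W, H):
            # 4f-4 bits for the structure plus the width and height of each inner face
            return 4 * f - 4 + f * math.ceil(math.log2(W)) + f * math.ceil(math.log2(H))

        def edge_length_bits(f, L):
            # 4f-4 bits for the structure plus (f+1) edge-length fields
            return 4 * f - 4 + (f + 1) * math.ceil(math.log2(L))

        # Example: 1000 inner faces on a 512 x 512 grid, longest edge 512
        print(face_based_bits(1000, 512, 512))   # 21996
        print(edge_length_bits(1000, 512))       # 13005

    In this example the edge-length-based count is smaller; the omitted o(·) terms would shift both totals only slightly.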

  • Partitioning Trees with Supply, Demand and Edge-Capacity

    Masaki KAWABATA  Takao NISHIZEKI  

     
    PAPER

      Page(s):
    1036-1043

    Let T be a given tree. Each vertex of T is either a supply vertex or a demand vertex, and is assigned a positive number, called the supply or the demand. Each demand vertex v must be supplied an amount of “power,” equal to the demand of v, from exactly one supply vertex through edges in T. Each edge is assigned a positive number called the capacity. One wishes to partition T into subtrees by deleting edges from T so that each subtree contains exactly one supply vertex whose supply is no less than the sum of all demands in the subtree and the power flow through each edge is no more than the capacity of the edge. The “partition problem” is a decision problem that asks whether T has such a partition. The “maximum partition problem” is an optimization version of the partition problem. In this paper, we give three algorithms for these problems: a linear-time algorithm for the partition problem, a pseudo-polynomial-time algorithm for the maximum partition problem, and a fully polynomial-time approximation scheme (FPTAS) for the maximum partition problem.
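
    To make the decision version concrete, the Python sketch below (hypothetical data structures, not the authors' algorithm) verifies a candidate solution: given the set of deleted edges, it checks that every resulting subtree contains exactly one supply vertex whose supply covers the subtree's total demand and that the flow over every remaining edge respects its capacity.

        from collections import defaultdict

        def check_partition(edges, supply, demand, deleted):
            # edges: dict (u, v) -> capacity; supply: dict vertex -> supply value;
            # demand: dict vertex -> demand value; deleted: set of removed edges.
            adj = defaultdict(list)
            for (u, v), cap in edges.items():
                if (u, v) not in deleted and (v, u) not in deleted:
                    adj[u].append((v, cap))
                    adj[v].append((u, cap))
            visited = set()

            def subtree_demand(v, parent):
                # total demand in the subtree rooted at v; the flow on the edge
                # to a child equals the child's subtree demand
                visited.add(v)
                total = demand.get(v, 0)
                for w, cap in adj[v]:
                    if w != parent:
                        sub = subtree_demand(w, v)
                        if sub > cap:                # edge capacity violated
                            raise ValueError
                        total += sub
                return total

            for root in supply:
                if root in visited:
                    return False                     # two supply vertices in one subtree
                try:
                    if subtree_demand(root, None) > supply[root]:
                        return False                 # supply cannot cover the demands
                except ValueError:
                    return False
            return set(demand) <= visited            # every demand vertex is served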

  • A Small-Space Algorithm for Removing Small Connected Components from a Binary Image

    Tetsuo ASANO  Revant KUMAR  

     
    PAPER

      Page(s):
    1044-1050

    Given a binary image I and a threshold t, the size-thresholded binary image I(t) defined by I and t is the binary image obtained after removing all connected components consisting of at most t pixels. This paper presents space-efficient algorithms for computing the size-thresholded binary image of a binary image of n pixels, assuming that the image is stored in a read-only array with random access. There are two cases depending on how large the threshold t is, namely a relatively large threshold with t = Ω() and a relatively small threshold with t = O(). In this paper, a new algorithmic framework for the problem is presented. From an algorithmic point of view, the problem can be solved in O() time and O() work space. We propose new algorithms for both of the above cases which compute the size-thresholded binary image of any binary image of n pixels in O(n log n) time using only O() work space.
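
    For reference, the straightforward baseline for the same task is an ordinary connected-component labelling pass followed by a size filter, as in the Python sketch below; it needs work space proportional to the image size, which is exactly the kind of cost the paper's small-work-space algorithms are designed to avoid.

        from collections import deque

        def remove_small_components(image, t):
            # Return a copy of the binary image with every 4-connected component
            # of at most t foreground pixels erased (a linear-space baseline,
            # not the paper's small-work-space algorithm).
            h, w = len(image), len(image[0])
            out = [row[:] for row in image]
            seen = [[False] * w for _ in range(h)]
            for y in range(h):
                for x in range(w):
                    if image[y][x] and not seen[y][x]:
                        # breadth-first flood fill collecting one component
                        comp, queue = [], deque([(y, x)])
                        seen[y][x] = True
                        while queue:
                            cy, cx = queue.popleft()
                            comp.append((cy, cx))
                            for ny, nx in ((cy-1, cx), (cy+1, cx), (cy, cx-1), (cy, cx+1)):
                                if 0 <= ny < h and 0 <= nx < w and image[ny][nx] and not seen[ny][nx]:
                                    seen[ny][nx] = True
                                    queue.append((ny, nx))
                        if len(comp) <= t:           # erase components of at most t pixels
                            for cy, cx in comp:
                                out[cy][cx] = 0
            return out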

  • A Linear-Time Algorithm for Constructing a Spanning Tree on Circular Trapezoid Graphs

    Hirotoshi HONMA  Yoko NAKAJIMA  Haruka AOSHIMA  Shigeru MASUYAMA  

     
    PAPER

      Page(s):
    1051-1058

    Given a simple connected graph G with n vertices, the spanning tree problem involves finding a tree that connects all the vertices of G. Solutions to this problem have applications in electrical power provision, computer network design, and circuit analysis, among others. It is known that highly efficient sequential or parallel algorithms can be developed by restricting the class of graphs. Circular trapezoid graphs form a proper superclass of trapezoid graphs. In this paper, we propose an O(n)-time algorithm for the spanning tree problem on circular trapezoid graphs. Moreover, this algorithm can be implemented in O(log n) time with O(n/log n) processors on the EREW PRAM computation model.

  • Ranking and Unranking of Non-regular Trees in Gray-Code Order

    Ro-Yu WU  Jou-Ming CHANG  An-Hang CHEN  Ming-Tat KO  

     
    PAPER

      Page(s):
    1059-1065

    A non-regular tree T with a prescribed branching sequence (s_1, s_2, ..., s_n) is a rooted and ordered tree such that its internal nodes are numbered from 1 to n in preorder and every internal node i in T has s_i children. Recently, Wu et al. (2010) introduced a concise representation called RD-sequences to represent all non-regular trees and proposed a loopless algorithm for generating all non-regular trees in a Gray-code order. In this paper, based on such a Gray-code order, we present efficient ranking and unranking algorithms for non-regular trees with n internal nodes. Moreover, we show that the ranking algorithm and the unranking algorithm can be run in O(n^2) time and O(n^2 + n·S_{n-1}) time, respectively, provided that a preprocessing taking O(n^2·S_{n-1}) time and space is carried out in advance.

  • Reporting All Segment Intersections Using an Arbitrary Sized Work Space

    Matsuo KONAGAYA  Tetsuo ASANO  

     
    PAPER

      Page(s):
    1066-1071

    This paper presents an efficient algorithm for reporting all intersections among n given segments in the plane using a work space of arbitrarily given size. More exactly, given a parameter s between Ω(1) and O(n) specifying the size of the work space, the algorithm reports all the segment intersections in roughly O(n^2/√s + K) time using O(s) words of O(log n) bits, where K is the total number of intersecting pairs. The time complexity can be improved to O((n^2/s) log s + K) when the input segments have only a limited number of different slopes.
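
    For comparison, the constant-work-space end of this trade-off is the brute-force method that tests every pair of segments with a standard orientation predicate, as in the Python sketch below (a textbook pairwise test, not the paper's algorithm).

        def orient(p, q, r):
            # sign of the cross product (q - p) x (r - p)
            v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
            return (v > 0) - (v < 0)

        def on_segment(p, q, r):
            # r is collinear with segment p-q: check that it lies in the bounding box
            return min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and \
                   min(p[1], q[1]) <= r[1] <= max(p[1], q[1])

        def segments_intersect(a, b, c, d):
            d1, d2 = orient(c, d, a), orient(c, d, b)
            d3, d4 = orient(a, b, c), orient(a, b, d)
            if d1 != d2 and d3 != d4:
                return True
            return (d1 == 0 and on_segment(c, d, a)) or (d2 == 0 and on_segment(c, d, b)) or \
                   (d3 == 0 and on_segment(a, b, c)) or (d4 == 0 and on_segment(a, b, d))

        def report_all_intersections(segments):
            # O(n^2)-time pairwise test using O(1) extra work space
            for i in range(len(segments)):
                for j in range(i + 1, len(segments)):
                    if segments_intersect(*segments[i], *segments[j]):
                        yield i, j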

  • Time-Optimal Gathering Algorithm of Mobile Robots with Local Weak Multiplicity Detection in Rings

    Tomoko IZUMI  Taisuke IZUMI  Sayaka KAMEI  Fukuhito OOSHITA  

     
    PAPER

      Page(s):
    1072-1080

    The gathering problem for anonymous and oblivious mobile robots is one of the fundamental problems in theoretical mobile robotics. We consider the gathering problem in unoriented and anonymous rings, which requires that all robots eventually keep their positions at a common non-predefined node. Since the gathering problem cannot be solved without giving the robots some additional capability, all previous results assume some capability of the robots, such as agreement on the local view. In this paper, we focus on the multiplicity detection capability. This paper presents a deterministic gathering algorithm with local-weak multiplicity detection, which provides a robot only with information about whether its current node hosts more than one robot or not. This assumption is strictly weaker than those in previous works. Our algorithm achieves gathering from an aperiodic and asymmetric configuration with 2 < k < n/2 robots, where n is the number of nodes. We also show that our algorithm is asymptotically time-optimal, i.e., its time complexity is O(n). Interestingly, despite the weaker assumption, it achieves a significant improvement over the previous algorithm, which takes O(kn) time for k robots.

  • Root Computation in Finite Fields

    Ryuichi HARASAWA  Yutaka SUEYOSHI  Aichi KUDO  

     
    PAPER

      Page(s):
    1081-1087

    We consider the computation of r-th roots in finite fields. For the computation of square roots (i.e., the case r=2), there are two typical methods: the Tonelli-Shanks method [7],[10] and the Cipolla-Lehmer method [3],[5]. The former method can be extended to the case of r-th roots with r prime, which is called the Adleman-Manders-Miller method [1]. In this paper, we generalize the Cipolla-Lehmer method to the case of r-th roots in Fq with r prime satisfying r | q-1, and provide an efficient computational procedure for our method. Furthermore, we implement our method and the Adleman-Manders-Miller method, and compare the results.
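
    As background for the square-root case (r = 2), the Tonelli-Shanks procedure mentioned above can be sketched in Python for a prime modulus p; this is the textbook algorithm, not the authors' generalized Cipolla-Lehmer method.

        def tonelli_shanks(a, p):
            # Return x with x*x % p == a % p for an odd prime p, or None if a is a non-residue.
            a %= p
            if a == 0:
                return 0
            if pow(a, (p - 1) // 2, p) != 1:         # Euler criterion: no square root exists
                return None
            q, s = p - 1, 0                          # write p - 1 = q * 2^s with q odd
            while q % 2 == 0:
                q //= 2
                s += 1
            if s == 1:                               # p = 3 (mod 4): direct formula
                return pow(a, (p + 1) // 4, p)
            z = 2                                    # find a quadratic non-residue z
            while pow(z, (p - 1) // 2, p) != p - 1:
                z += 1
            m, c, t, x = s, pow(z, q, p), pow(a, q, p), pow(a, (q + 1) // 2, p)
            while t != 1:
                i, t2 = 0, t                         # least i with t^(2^i) = 1
                while t2 != 1:
                    t2 = t2 * t2 % p
                    i += 1
                b = pow(c, 1 << (m - i - 1), p)
                m, c, t, x = i, b * b % p, t * b * b % p, x * b % p
            return x

        print(tonelli_shanks(10, 13))                # 7, and 7*7 % 13 == 10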

  • Characterization of Strongly Secure Authenticated Key Exchanges without NAXOS Technique

    Atsushi FUJIOKA  

     
    PAPER

      Page(s):
    1088-1099

    This paper examines two-pass authenticated key exchange (AKE) protocols that are secure without the NAXOS technique under the gap Diffie-Hellman assumption in the random oracle model: FHMQV [18], KFU1 [21], SMEN- [13], and UP [17]. We introduce two protocols, the biclique DH protocol and the multiplied biclique DH protocol, to analyze the subject protocols, and show that the subject protocols use the multiplied biclique DH protocol as an internal protocol. The biclique DH protocol is secure; the multiplied biclique DH protocol, however, is not. We show the relations between the subject protocols from the viewpoint of how they overcome the insecurity of the multiplied biclique DH protocol:
    FHMQV virtually executes two multiplied biclique DH protocols in sequence with the same ephemeral key on two randomized static keys.
    KFU1 executes two multiplied biclique DH protocols in parallel with the same ephemeral key.
    UP is a version of KFU1 in which one of the static public keys is generated with a random oracle.
    SMEN- can be thought of as a combined execution of two multiplied biclique DH protocols.
    In addition, this paper provides ways to characterize the AKE protocols and defines two parameters: one consists of the number of static keys, the number of ephemeral keys, and the number of shared secrets, and the other is defined as the total sum of these numbers. When an AKE protocol is constructed based on some group, these two parameters indicate the number of elements in the group, i.e., they are related to the sizes of the storage and communication data.

  • Leakage-Resilience of Stateless/Stateful Public-Key Encryption from Hash Proofs

    Manh Ha NGUYEN  Kenji YASUNAGA  Keisuke TANAKA  

     
    PAPER

      Page(s):
    1100-1111

    We consider the problem of constructing public-key encryption (PKE) schemes that are resilient to a-posteriori chosen-ciphertext and key-leakage attacks (LR-CCA2). In CRYPTO'09, Naor and Segev proved that the Naor-Yung generic construction of PKE that is secure against chosen-ciphertext attack (CCA2) is also secure against key-leakage attacks. They also presented a variant of the Cramer-Shoup cryptosystem, and showed that this PKE scheme is LR-CCA2-secure under the decisional Diffie-Hellman assumption. In this paper, we apply the generic construction of “Universal Hash Proofs and a Paradigm for Adaptive Chosen Ciphertext Secure Public-Key Encryption” (EUROCRYPT'02) to generalize the above work of Naor-Segev. Compared to the first construction of Naor-Segev, ours is more efficient because it does not use simulation-sound NIZK proofs. We also extend it to stateful PKE schemes. Concretely, we present the notion of an LR-CCA2 attack in the case of stateful PKE, and a generic construction of stateful PKE that is secure against this attack.

  • Generic Construction of Two-Party Round-Optimal Attribute-Based Authenticated Key Exchange without Random Oracles

    Kazuki YONEYAMA  

     
    PAPER

      Page(s):
    1112-1123

    In this paper, we propose a generic construction of one-round attribute-based (implicitly) authenticated key exchange (ABAKE). The construction is based on a chosen-ciphertext (CCA) secure attribute-based KEM and the decisional Diffie-Hellman (DDH) assumption. If the underlying attribute-based KEM scheme allows expressive access controls and is secure in the standard model (StdM), an instantiated ABAKE scheme also achieves them. Our scheme enjoys the best of both worlds: efficiency and security. The number of rounds is one (optimal), whereas the known secure scheme in the StdM is not a one-round protocol. Our scheme is comparable in communication complexity with the most efficient known scheme, which is not proved in the StdM. Also, our scheme is proved to satisfy security against advanced attacks such as key compromise impersonation.

  • One-Round Authenticated Key Exchange with Strong Forward Secrecy in the Standard Model against Constrained Adversary

    Kazuki YONEYAMA  

     
    PAPER

      Page(s):
    1124-1138

    Forward secrecy (FS) is a central security requirement of authenticated key exchange (AKE). In particular, strong FS (sFS) is desirable because it can guarantee security against a very realistic attack scenario in which an adversary is allowed to be active in the target session. However, most AKE schemes cannot achieve sFS, and the currently known schemes with sFS are only proved in the random oracle model. In this paper, we propose a generic construction of an AKE protocol with sFS in the standard model against a constrained adversary. The constraint is that session-specific intermediate computation results (i.e., the session state) cannot be revealed to the adversary for achieving sFS, which is shown to be inevitable by Boyd and González Nieto. However, our scheme maintains weak FS (wFS) if the session state is available to the adversary. Thus, our scheme satisfies one of the strongest security definitions, the CK+ model, which includes wFS and session state reveal. The main idea for achieving sFS is to use a signcryption KEM, whereas the previous CK+ secure construction uses an ordinary KEM. We show a possible instantiation of our construction from Diffie-Hellman problems.

  • id-eCK Secure ID-Based Authenticated Key Exchange on Symmetric and Asymmetric Pairing

    Atsushi FUJIOKA  Fumitaka HOSHINO  Tetsutaro KOBAYASHI  Koutarou SUZUKI  Berkant USTAOĞLU  Kazuki YONEYAMA  

     
    PAPER

      Page(s):
    1139-1155

    In this paper, we propose an identity-based authenticated key exchange (ID-AKE) protocol that is secure in the identity-based extended Canetti-Krawczyk (id-eCK) model in the random oracle model under the gap Bilinear Diffie-Hellman assumption. The proposed ID-AKE protocol is the most efficient among the existing ID-AKE protocols that are id-eCK secure, and it can be extended for use with asymmetric pairings.

  • Methods for Restricting Message Space in Public-Key Encryption

    Yusuke SAKAI  Keita EMURA  Goichiro HANAOKA  Yutaka KAWAI  Kazumasa OMOTE  

     
    PAPER

      Page(s):
    1156-1168

    This paper proposes methods for “restricting the message space” of public-key encryption, by allowing a third party to verify whether a given ciphertext does not encrypt some message which has previously been specified as a “bad” (or “problematic”) message. Public-key encryption schemes are normally designed not to leak even partial information about encrypted plaintexts, but this can be problematic in some circumstances. The high level of confidentiality could be abused: malicious parties could communicate with each other, or discuss illegal topics, using an ordinary public-key encryption scheme with the help of the public-key infrastructure, which is undesirable considering the public nature of PKI. The primitive of restrictive public-key encryption helps in this situation by allowing a trusted authority to specify a set of “bad” plaintexts, and allowing every third party to detect ciphertexts that encrypt one of the specified “bad” plaintexts. The primitive also provides strong confidentiality (of indistinguishability type) for a plaintext that is not specified as “bad.” In this way, a third party (possibly a gateway node of the network) can examine whether a ciphertext coming from the network contains allowable content or not, and only when the ciphertext does not contain a forbidden message does the gateway transfer the ciphertext to the next node. In this paper, we formalize the above requirements and provide two constructions that satisfy the formalization. The first construction is based on the techniques of Teranishi et al. (IEICE Trans. Fundamentals E92-A, 2009), Boudot (EUROCRYPT 2000), and Nakanishi et al. (IEICE Trans. Fundamentals E93-A, 2010), which were developed in the context of (revocation of) group signatures. The other construction is based on the OR-proof technique. The first construction has better performance when very few messages are specified as bad, while the other does when almost all messages are specified as bad (and only very few messages are allowed to be encrypted).

  • On the Security of the Verifiably Encrypted Signature Scheme of Boneh, Gentry, Lynn and Shacham Revisited

    Bennian DOU  

     
    LETTER

      Page(s):
    1169-1170

    At Eurocrypt'03, Boneh, Gentry, Lynn and Shacham proposed a pairing based verifiably encrypted signature scheme (the BGLS-VES scheme). In 2004, Hess mounted an efficient rogue-key attack on the BGLS-VES scheme in the plain public-key model. In this letter, we show that the BGLS-VES scheme is not secure in the proof of possession (POP) model.

  • Message and Key Substitution Attacks on Verifiably Encrypted Signature Schemes

    Bennian DOU  

     
    LETTER

      Page(s):
    1171-1172

    In 2004, Menezes and Smart left open the problem of whether there exists a realistic scenario in which message and key substitution (MKS) attacks can have damaging consequences. In this letter, we show that MKS attacks can have damaging consequences in practice, by pointing out that a verifiably encrypted signature (VES) scheme is not opaque if MKS attacks are possible.

  • Special Section on Circuit, System, and Computer Technologies
  • FOREWORD

    Qi-Wei GE  

     
    FOREWORD

      Page(s):
    1173-1173
  • Floorplanning and Topology Synthesis for Application-Specific Network-on-Chips

    Wei ZHONG  Song CHEN  Bo HUANG  Takeshi YOSHIMURA  Satoshi GOTO  

     
    PAPER

      Page(s):
    1174-1184

    Application-Specific Network-on-Chips (ASNoCs) have been proposed as a more promising solution than regular NoCs to the global communication challenges of particular applications in nanoscale System-on-Chip (SoC) designs. In ASNoC design, one of the key challenges is to generate the most suitable and power-efficient NoC topology under the constraints of the application specification. In this work, we present a two-step floorplanning (TSF) algorithm, integrating topology synthesis into the floorplanning phase, to automate the synthesis of such ASNoC topologies. In the first-step floorplanning, during simulated annealing, we explore the optimal positions and clustering of cores and implement an incremental path allocation algorithm to predictively evaluate the power consumption of the generated NoC topology. In the second-step floorplanning, we explore the optimal positions of switches and network interfaces on the floorplan. A power- and timing-aware path allocation algorithm is also integrated into this step to determine the connectivity across different switches. Experimental results on a variety of benchmarks show that our algorithm produces greatly improved solutions over the latest works.

  • LDR Image to HDR Image Mapping with Overexposure Preprocessing

    Yongqing HUO  Fan YANG  Vincent BROST  Bo GU  

     
    PAPER

      Page(s):
    1185-1194

    Due to the growing popularity of High Dynamic Range (HDR) images and HDR displays, a large number of existing Low Dynamic Range (LDR) images need to be converted to HDR format to benefit from HDR advantages, which has given rise to several LDR-to-HDR algorithms. Most of these algorithms apply special treatment to overexposed areas during expansion, which can make the image quality worse than before processing and can introduce artifacts. To avoid these problems, we present a new LDR-to-HDR approach that, unlike existing techniques, avoids sophisticated treatment of overexposed areas in the dynamic range expansion step. Based on a separation principle, firstly, according to the familiar types of overexposure, the overexposed areas are classified into two categories, which are removed and corrected respectively by two kinds of techniques. Secondly, to maintain color consistency, color recovery is carried out on the preprocessed images. Finally, the LDR image is expanded to HDR. Experiments show that the proposed approach performs well and the produced images are more favorable and suitable for applications. The image quality metric also illustrates that we can reveal more details without causing the artifacts introduced by other algorithms.

  • Joint Feature Based Rain Detection and Removal from Videos

    Xinwei XUE  Xin JIN  Chenyuan ZHANG  Satoshi GOTO  

     
    PAPER

      Page(s):
    1195-1203

    Adverse weather, such as rain or snow, can cause difficulties in the processing of video streams. Because the appearance of raindrops can affect the performance of human tracking and reduce the efficiency of video compression, the detection and removal of rain is a challenging problem in outdoor surveillance systems. In this paper, we propose a new algorithm for rain detection and removal based on both spatial and wavelet domain features. Our system involves fewer frames during detection and removal, and is robust to moving objects in the rain. Experimental results demonstrate that the proposed algorithm outperforms existing approaches in terms of subjective and objective quality.

  • Bidirectional Local Template Patterns: An Effective and Discriminative Feature for Pedestrian Detection

    Jiu XU  Ning JIANG  Satoshi GOTO  

     
    PAPER

      Page(s):
    1204-1213

    In this paper, a novel feature named bidirectional local template patterns (B-LTP) is proposed for pedestrian detection in still images. B-LTP is a combination and modification of two features, the histogram of templates (HOT) and center-symmetric local binary patterns (CS-LBP). For each pixel, B-LTP defines four templates, each of which contains the pixel itself and two neighboring center-symmetric pixels. For each template, it then calculates information from the relationships among these three pixels and from the two directional transitions across these pixels. Moreover, because the feature length of B-LTP is small, it consumes less memory and computational power. Experimental results on the INRIA dataset show that the speed and detection rate of our proposed B-LTP feature outperform those of other features such as the histogram of oriented gradients (HOG), HOT, and the covariance matrix (COV).
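
    For context, the CS-LBP ingredient mentioned above compares the four center-symmetric pixel pairs of a 3×3 neighbourhood and packs the results into a 4-bit code; a minimal Python sketch of that standard descriptor (not of B-LTP itself) is given below.

        def cs_lbp_code(patch, threshold=0):
            # 4-bit center-symmetric LBP code of a 3x3 patch (list of 3 rows of 3 pixels).
            # Each bit compares one of the four center-symmetric neighbour pairs.
            pairs = [((0, 0), (2, 2)),   # top-left  vs bottom-right
                     ((0, 1), (2, 1)),   # top       vs bottom
                     ((0, 2), (2, 0)),   # top-right vs bottom-left
                     ((1, 2), (1, 0))]   # right     vs left
            code = 0
            for bit, ((y1, x1), (y2, x2)) in enumerate(pairs):
                if patch[y1][x1] - patch[y2][x2] > threshold:
                    code |= 1 << bit
            return code                  # value in 0..15

        # example: a horizontal step edge yields a non-zero code
        print(cs_lbp_code([[200, 200, 200],
                           [200, 200, 200],
                           [ 10,  10,  10]]))   # 7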

  • A Method of Data Embedding and Extracting for Information Retrieval Considering Mobile Devices

    Mitsuji MUNEYASU  Hiroshi KUDO  Takafumi SHONO  Yoshiko HANADA  

     
    PAPER

      Page(s):
    1214-1221

    In this paper, we propose an improved data embedding and extraction method for information retrieval that considers the use of mobile devices. Although the conventional method has demonstrated good results for images captured by cellular phones, some problems remain. One problem is that the conventional code grouping method does not adequately consider how the code groups are constructed. In this paper, a new construction method for code grouping is proposed, and it is shown that a suitable grouping of the codes can be found. Another problem is that the lens distortion correction method is time-consuming. Therefore, to improve the processing speed, the golden section search method is adopted to estimate the distortion coefficients. In addition, a new tuning algorithm for the gain coefficient in the embedding process is also proposed. Experimental results show an increase in the detection rate for the embedded data and a reduction in processing time.
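
    The golden section search used for estimating the distortion coefficients is a standard one-dimensional minimizer; a generic Python sketch is shown below (with a hypothetical objective function, not the paper's distortion model).

        import math

        def golden_section_minimize(f, lo, hi, tol=1e-6):
            # Minimize a unimodal function f on [lo, hi] by golden section search.
            inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi, about 0.618
            a, b = lo, hi
            c, d = b - inv_phi * (b - a), a + inv_phi * (b - a)
            fc, fd = f(c), f(d)
            while b - a > tol:
                if fc < fd:                           # minimum lies in [a, d]
                    b, d, fd = d, c, fc
                    c = b - inv_phi * (b - a)
                    fc = f(c)
                else:                                 # minimum lies in [c, b]
                    a, c, fc = c, d, fd
                    d = a + inv_phi * (b - a)
                    fd = f(d)
            return (a + b) / 2

        # e.g. k_hat = golden_section_minimize(lambda k: registration_error(k), -0.5, 0.5)
        # where registration_error is a hypothetical cost measuring decoding mismatch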

  • A Design of High Performance Parallel Architecture and Communication for Multi-ASIP Based Image Processing Engine

    Hsuan-Chun LIAO  Mochamad ASRI  Tsuyoshi ISSHIKI  Dongju LI  Hiroaki KUNIEDA  

     
    PAPER

      Page(s):
    1222-1235

    The image processing engine is crucial for generating high-quality images in a video system. While an Application Specific Integrated Circuit (ASIC) is dedicated to specific standards, an Application Specific Instruction-set Processor (ASIP), which provides high flexibility and high performance, has advantages in supporting nonstandard pre/post image processing in video systems. In our previous work, we designed ASIPs that can perform several image processing algorithms with a reconfigurable datapath. An ASIP is as efficient as a DSP, but its area is considerably smaller. As the resolution of images and the complexity of processing increase, the performance requirement also increases accordingly. In this paper, we present a novel multi-ASIP based image processing unit (IPU) which can provide sufficient performance for emerging very-high-resolution applications. In order to provide a high-performance image processing engine, we propose several new techniques and architectures, such as a multi block-pipes architecture, pixel direct transmission, and boundary pixel write-through. The multi block-pipes architecture has flexible scalability in supporting a wide range of resolutions, from low to high. The boundary pixel write-through technique provides highly efficient parallel processing, and the pixel direct transmission technique is implemented in each ASIP to further reduce the data transmission time. Cycle-accurate SystemC simulations are performed, and the experimental results show that the maximum bandwidth of the proposed communication approach can reach 1580 Mbyte/s at 400 MHz. Moreover, communication overhead can be reduced by up to 88% compared to our previous works.

  • Sensor Scheduling Algorithms for Extending Battery Life in a Sensor Node

    Qian ZHAO  Yukikazu NAKAMOTO  Shimpei YAMADA  Koutaro YAMAMURA  Makoto IWATA  Masayoshi KAI  

     
    PAPER

      Page(s):
    1236-1244

    Wireless sensor nodes are becoming more and more common in various settings and require a long battery life for better maintainability. Since most sensor nodes are powered by batteries, energy efficiency is a critical problem. In an experiment, we observed that when peak power consumption is high, the battery voltage drops quickly, and the sensor stops working even though some useful charge remains in the battery. We propose three off-line algorithms that extend battery life by scheduling the sensors' execution times so as to reduce peak power consumption as much as possible under a deadline constraint. We also developed a simulator to evaluate the effectiveness of these algorithms. The simulation results showed that one of the three algorithms can extend battery life dramatically, to approximately three times as long as with simultaneous sensor activation.

  • An Image Trading System Using Amplitude-Only Images for Privacy- and Copyright-Protection

    Shenchuan LIU  Masaaki FUJIYOSHI  Hitoshi KIYA  

     
    PAPER

      Page(s):
    1245-1252

    This paper introduces amplitude-only images into image trading systems in which not only the copyright of images but also the privacy of consumers is protected. In the latest framework for image trading systems, an image is divided into an unrecognizable piece and a recognizable but distorted piece to simultaneously protect the privacy of the consumer and the copyright of the image. The proposed scheme uses amplitude-only images, which are completely unrecognizable, as the former piece, whereas conventional schemes leave recognizable parts in this piece, which degrades the privacy protection performance. Moreover, the proposed scheme improves the robustness against copyright violation regardless of the digital fingerprinting technique used, because an amplitude-only image is larger than the corresponding piece in the conventional schemes. In addition, since a phase-only image is used as the second piece in the proposed scheme, the consumer can confirm what he/she has bought. Experimental results show the effectiveness of the proposed scheme.
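
    A minimal NumPy sketch of the two kinds of pieces as described here and in the related paper later in this issue: the amplitude-only image is the inverse DFT of the amplitude spectrum alone, while the phase-only image keeps only the phase (this illustrates the definitions, not the full trading protocol).

        import numpy as np

        def amplitude_and_phase_only(image):
            # Split an image into an amplitude-only image (unrecognizable) and
            # a phase-only image (recognizable but distorted).
            spectrum = np.fft.fft2(image.astype(float))
            amplitude = np.abs(spectrum)                         # phase discarded
            phase = np.exp(1j * np.angle(spectrum))              # amplitude set to one
            amplitude_only = np.real(np.fft.ifft2(amplitude))
            phase_only = np.real(np.fft.ifft2(phase))
            return amplitude_only, phase_only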

  • A Drift-Constrained Frequency-Domain Ultra-Low-Delay H.264/SVC to H.264/AVC Transcoder with Medium-Grain Quality Scalability for Videoconferencing

    Lei SUN  Zhenyu LIU  Takeshi IKENAGA  

     
    PAPER

      Page(s):
    1253-1263

    Scalable Video Coding (SVC) is an extension of H.264/AVC, aiming to provide the ability to adapt to heterogeneous networks or requirements. It offers great flexibility for bitstream adaptation in multi-point applications such as videoconferencing. However, transcoding between SVC and AVC is necessary due to the existence of legacy AVC-based systems. The straightforward re-encoding method requires a great computational cost, and delay-sensitive applications like videoconferencing require a much faster transcoding scheme. This paper proposes an ultra-low-delay SVC-to-AVC MGS (Medium-Grain quality Scalability) transcoder for videoconferencing applications. Transcoding is performed purely in the frequency domain with partial decoding/encoding in order to achieve a significant speed-up. Three fast frequency-domain transcoding methods are proposed for macroblocks with different coding modes in non-KEY pictures. KEY pictures are transcoded by reusing the base layer motion data, and error propagation is constrained between KEY pictures. Simulation results show that the proposed transcoder achieves an average speed-up of 38.5 times over the re-encoding method, while introducing merely 0.71 dB of BDPSNR coding quality loss for videoconferencing sequences compared with the re-encoding algorithm.

  • Write Control Method for Nonvolatile Flip-Flops Based on State Transition Analysis

    Naoya OKADA  Yuichi NAKAMURA  Shinji KIMURA  

     
    PAPER

      Page(s):
    1264-1272

    Nonvolatile flip-flops enable leakage power reduction in logic circuits and a quick return from standby mode. However, they have limited write endurance, and their power consumption for writing is larger than that of a conventional D flip-flop (DFF). For this reason, it is important to reduce the number of write operations. Write operations can be reduced by stopping the clock signal to synchronous flip-flops, because write operations are executed only when the clock is applied to the flip-flops. In such clock gating, a method using the Exclusive OR (XOR) of the current value and the new value as the control signal is well known. The XOR-based method is effective, but there are several cases where write operations can be avoided even if the current value and the new value are different. This paper proposes a method to detect such unnecessary write operations based on state transition analysis, and proposes a write control method to save power consumption of nonvolatile flip-flops. In the method, redundant bits are detected to reduce the number of write operations: if the next state and the outputs do not depend on some current bit, the bit is redundant and does not need to be written. The method is based on Binary Decision Diagram (BDD) calculation. We construct write control circuits that stop the clock signal by converting BDDs representing the set of states where write operations are unnecessary. The proposed method can be combined with the XOR-based method to further reduce the total number of write operations. We apply the combined method to benchmark circuits and estimate the power consumption with Synopsys NanoSim. On average, power consumption is reduced by 15.0% compared with the XOR-based method alone.
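
    To illustrate the XOR-based baseline that the proposal builds on, the toy Python sketch below counts write operations when the clock is gated by a comparison of the current and next register values, versus writing on every cycle (a behavioural model only, not the proposed BDD-based controller).

        def count_writes(trace, xor_gating=True):
            # trace: sequence of register values, one per clock cycle.
            # Returns the number of writes into the nonvolatile flip-flops.
            writes = 0
            current = trace[0]
            for nxt in trace[1:]:
                if xor_gating and nxt == current:
                    continue                  # XOR of current and new value is zero: skip
                writes += 1                   # otherwise the flip-flops are written
                current = nxt
            return writes

        trace = [0b0101, 0b0101, 0b0110, 0b0110, 0b0110, 0b0001]
        print(count_writes(trace, xor_gating=False))   # 5 writes without gating
        print(count_writes(trace, xor_gating=True))    # 2 writes with XOR gating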

  • Content-Aware Write Reduction Mechanism of 3D Stacked Phase-Change RAM Based Frame Store in H.264 Video Codec System

    Sanchuan GUO  Zhenyu LIU  Guohong LI  Takeshi IKENAGA  Dongsheng WANG  

     
    PAPER

      Page(s):
    1273-1282

    An H.264 video codec system requires a large-capacity, high-bandwidth Frame Store (FS) for buffering reference frames. The up-to-date three-dimensional (3D) stacked Phase-change Random Access Memory (PRAM) is a promising approach for on-chip caching of the reference signals, as 3D stacking offers high memory bandwidth, while PRAM possesses advantages in terms of high density and low leakage power. However, the write endurance problem, namely that a PRAM cell can tolerate only a limited number of write operations, is the main barrier in practical applications. This paper studies wear reduction techniques for a PRAM-based FS in an H.264 codec system. On the basis of rate-distortion theory, content-oriented selective writing mechanisms are proposed to reduce bit updates in the reference frame buffers. With the proposed control parameter a, our methods make a quantitative trade-off between quality degradation and PRAM lifetime prolongation. Specifically, taking a in the range [0.2, 2], experimental results demonstrate that our methods save 29.9–35.5% of the bit-wise write operations on average and reduce power by 52–57%, at the cost of a 12.95–20.57% BDBR bit-rate increase.

  • A High-Speed Trace-Driven Cache Configuration Simulator for Dual-Core Processor L1 Caches

    Masashi TAWADA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER

      Page(s):
    1283-1292

    Recently, multi-core processors have come to be used in embedded systems very often. Since the application programs running on an embedded system are quite limited, there must exist an optimal cache memory configuration in terms of power and area. Simulating application programs on various cache configurations is one of the best ways to determine the optimal one. Multi-core cache configuration simulation, however, is much more complicated and takes much more time than single-core cache configuration simulation. In this paper, we propose a very fast dual-core L1 cache configuration simulation algorithm. We first propose a new data structure in which a single data structure represents two or more multi-core cache configurations with different cache associativities. After that, we propose a new multi-core cache configuration simulation algorithm using our new data structure together with new theorems. Experimental results demonstrate that our algorithm obtains exact simulation results but runs 20 times faster than a conventional approach.
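
    For background, a single-configuration trace-driven simulator for a set-associative LRU cache fits in a few lines of Python, as sketched below; the paper's contribution is to evaluate many configurations (and two cores) at once with one shared data structure, which this naive version does not attempt.

        from collections import OrderedDict

        def simulate_lru_cache(trace, num_sets, associativity, block_size):
            # Count the misses of one cache configuration for a trace of byte addresses.
            sets = [OrderedDict() for _ in range(num_sets)]
            misses = 0
            for addr in trace:
                block = addr // block_size
                index = block % num_sets
                tag = block // num_sets
                ways = sets[index]
                if tag in ways:
                    ways.move_to_end(tag)            # hit: refresh the LRU position
                else:
                    misses += 1
                    ways[tag] = True
                    if len(ways) > associativity:    # evict the least recently used way
                        ways.popitem(last=False)
            return misses

        # e.g. misses = simulate_lru_cache(addresses, num_sets=64, associativity=2, block_size=32)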

  • Bayesian Theory Based Adaptive Proximity Data Accessing for CMP Caches

    Guohong LI  Zhenyu LIU  Sanchuan GUO  Dongsheng WANG  

     
    PAPER

      Page(s):
    1293-1305

    As the number of cores and the working sets of parallel workloads increase, shared L2 caches exhibit fewer misses than private L2 caches by making better use of the total available cache capacity, but they induce higher overall L1 miss latencies because of the longer average distance between the requestor and the home node, and potential congestion at certain nodes. We observed that there is a high probability that the target data of an L1 miss resides in the L1 cache of a neighboring node. In such cases, these long-distance accesses to the home nodes can potentially be avoided. In order to leverage this property, we propose Bayesian Theory based Adaptive Proximity Data Accessing (APDA). In our proposal, we organize the multi-core into clusters of 2×2 nodes, and introduce the Proximity Data Prober (PDP) to detect whether an L1 miss can be served by one of the cluster's L1 caches. Furthermore, we devise the Bayesian Decision Classifier (BDC) to adaptively select the remote L2 cache or the neighboring L1 node as the server according to the minimum miss cost. We evaluate this approach on a 64-node multi-core using the SPLASH-2 and PARSEC benchmarks, and find that APDA can reduce the execution time by 20% and the energy by 14% compared to a standard multi-core with a shared L2. The experimental results demonstrate that our proposal outperforms up-to-date mechanisms such as ASR, DCC and RNUCA.

  • An Integrated Hole-Filling Algorithm for View Synthesis

    Wenxin YU  Weichen WANG  Minghui WANG  Satoshi GOTO  

     
    PAPER

      Page(s):
    1306-1314

    Multi-view video can provide users with three-dimensional (3-D) and virtual-reality perception through multiple viewing angles. In recent years, depth image-based rendering (DIBR) has been widely used to synthesize virtual view images in free-viewpoint television (FTV) and 3-D video. To conceal the zero-region more accurately and improve the quality of a synthesized virtual view frame, an integrated hole-filling algorithm for view synthesis is proposed in this paper. The proposed algorithm contains five parts: an algorithm for distinguishing different regions, foreground and background boundary detection, texture-image isophote detection, a textural and structural isophote prediction algorithm, and an in-painting algorithm with a gradient priority order. Based on the texture isophote prediction with a geometrical principle and the in-painting algorithm with a gradient priority order, the boundary information of the foreground is considerably clearer and the texture information in the zero-region can be concealed much more accurately than in previous works. The visual quality mainly depends on the distortion of the structural information. Experimental results indicate that the proposed algorithm considerably improves not only the objective quality of the virtual image but also its subjective quality; the perceived visual quality is also clearly improved according to the subjective results. In particular, the algorithm preserves the boundary contours of the foreground objects and the textural and structural information.

  • Facial Image Super-Resolution Reconstruction Based on Separated Frequency Components

    Hyunduk KIM  Sang-Heon LEE  Myoung-Kyu SOHN  Dong-Ju KIM  Byungmin KIM  

     
    PAPER

      Page(s):
    1315-1322

    Super-resolution (SR) reconstruction is the process of fusing a sequence of low-resolution images into one high-resolution image. Many researchers have introduced various SR reconstruction methods. However, these traditional methods are limited in the extent to which they allow recovery of high-frequency information. Moreover, due to the self-similarity of face images, most facial SR algorithms are machine-learning based. In this paper, we introduce a facial SR algorithm that combines learning-based and regularized SR image reconstruction algorithms. Our approach involves two main ideas. First, we employ separated frequency components to reconstruct high-resolution images. In addition, we separate the training face image into regions. These approaches help to recover high-frequency information. In our experiments, we demonstrate the effectiveness of these ideas.

  • A Generation Method of Amplitude-Only Images with Low Intensity Ranges

    Wannida SAE-TANG  Masaaki FUJIYOSHI  Hitoshi KIYA  

     
    PAPER

      Page(s):
    1323-1330

    In this paper, 1) it is shown that amplitude-only images (AOIs) have quite wide intensity ranges (IRs), and 2) an IR reduction method for AOIs is proposed. An AOI is the inversely transformed amplitude spectrum of an image, and it is used in the privacy- and copyright-protected image trading system because of its invisibility. Since an AOI is the coherent summation of cosine waves with the same phase, the IR of the AOI is too large to be stored and/or transmitted. In the proposed method, random signs are applied to the discrete Fourier transformed amplitude coefficients to obtain AOIs with significantly lower IRs, without distortion and while keeping the invisibility of the images. With reasonable processing time, the proposed method with a linear quantizer achieves high correct watermark extraction rates, inversely quantized AOIs with low mean squared errors, and reconstructed images with high peak signal-to-noise ratios.

  • Computing-Based Performance Analysis of Approximation Algorithms for the Minimum Weight Vertex Cover Problem of Graphs

    Satoshi TAOKA  Daisuke TAKAFUJI  Toshimasa WATANABE  

     
    PAPER

      Page(s):
    1331-1339

    A vertex cover of a given graph G = (V,E) is a subset N of V such that N contains either u or v for any edge (u,v) of E. The minimum weight vertex cover problem (MWVC for short) is the problem of finding a vertex cover N of a given graph G = (V,E), with weight w(v) for each vertex v of V, such that the sum w(N) of w(v) over all v of N is minimum. In this paper, we consider MWVC with w(v) being a positive integer for every v of V. We propose simple procedures as postprocessing for MWVC algorithms. Furthermore, five existing approximation algorithms, with and without the proposed procedures incorporated, are implemented and evaluated through computational experiments.
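
    One classical algorithm of the kind evaluated in such studies is the primal-dual (pricing) 2-approximation of Bar-Yehuda and Even, sketched below in Python (a generic textbook algorithm, not necessarily one of the five implemented in the paper).

        def pricing_vertex_cover(edges, weight):
            # Primal-dual 2-approximation for minimum weight vertex cover.
            # edges: iterable of (u, v) pairs; weight: dict vertex -> positive integer.
            residual = dict(weight)          # remaining "price" each vertex can absorb
            cover = set()
            for u, v in edges:
                if u in cover or v in cover:
                    continue                 # edge already covered
                pay = min(residual[u], residual[v])
                residual[u] -= pay
                residual[v] -= pay
                if residual[u] == 0:
                    cover.add(u)
                if residual[v] == 0:
                    cover.add(v)
            return cover

        # example: the path a-b-c with weights 3, 1, 3 is covered by {'b'}
        print(pricing_vertex_cover([('a', 'b'), ('b', 'c')], {'a': 3, 'b': 1, 'c': 3}))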

  • Novel THP Scheme with Minimum Noise Enhancement for Multi-User MIMO Systems

    Shogo FUJITA  Leonardo LANANTE Jr.  Yuhei NAGAO  Masayuki KUROSAKI  Hiroshi OCHI  

     
    PAPER

      Page(s):
    1340-1347

    In this paper, we propose a modified Tomlinson-Harashima precoding (THP) method with only a small increase in computational complexity for the multi-user MIMO downlink system. The proposed THP scheme minimizes the influence of noise enhancement at the receivers by placing square-root diagonal weighting filters at both the transmitter side and the receiver side. Compared to previously proposed non-linear precoding methods, including vector perturbation (VP), the proposed THP achieves good BER performance. Furthermore, we show that the proposed THP method can be implemented with lower computational complexity than the existing modified THP and VP methods in the literature.

  • An Effective Overlap Removable Objective for Analytical Placement

    Syota KUWABARA  Yukihide KOHIRA  Yasuhiro TAKASHIMA  

     
    PAPER

      Page(s):
    1348-1356

    In recent LSI design, it is difficult to obtain a placement that satisfies both design constraints and specifications due to the increase in circuit size, the progress of manufacturing technology, and the speed-up of circuit performance. Analytical placement methods are promising for obtaining placements that satisfy both design constraints and specifications. However, although existing analytical placement methods obtain placements with short wire length, the obtained placements contain overlaps. In this paper, we propose Overlap Removable Area as an overlap evaluation method for analytical placement. Experiments show that the proposed evaluation method is effective for removing overlaps in an analytical placement method.

  • Concurrent Detection and Recognition of Individual Object Based on Colour and p-SIFT Features

    Jienan ZHANG  Shouyi YIN  Peng OUYANG  Leibo LIU  Shaojun WEI  

     
    PAPER

      Page(s):
    1357-1365

    In this paper we propose a method that uses the features of an individual object to locate and recognize the object concurrently in a static image, with multi-feature fusion based on a multiple-object sample library. This method is motivated by the observation that much previous work focuses on category recognition and takes advantage of the common characteristics of a specific category to detect its presence. However, these algorithms cease to be effective if we search for individual objects instead of categories in a complex background. To solve this problem, we abandon the concept of category and propose an effective way to directly use the features of an individual object as clues for detection and recognition. In our system, we import a multi-feature fusion method based on a colour histogram and the prominent SIFT (p-SIFT) feature to improve the detection and recognition accuracy rate. The p-SIFT feature, which we propose, is an improved SIFT feature acquired by further extracting correlation information based on a Feature Matrix, aiming at low computational complexity with a good matching rate. In the object detection process, we abandon conventional methods and instead make full use of multiple features, starting with a simple but effective step: using colour features to reduce the number of patches of interest (POI). Our method is evaluated on several publicly available datasets including the Pascal VOC 2005 dataset, Objects101 and the datasets provided by Achanta et al.

  • A Dual-Mode Deblocking Filter Design for HEVC and H.264/AVC

    Muchen LI  Jinjia ZHOU  Dajiang ZHOU  Xiao PENG  Satoshi GOTO  

     
    PAPER

      Page(s):
    1366-1375

    As the successor to the H.264/AVC video compression standard, High Efficiency Video Coding (HEVC) will play an important role in the video coding area. In the deblocking filter part, HEVC inherits the basic properties of H.264/AVC and adds some new features. Based on this variation, this paper introduces a novel dual-mode deblocking filter architecture that supports both the HEVC and H.264/AVC standards. For the HEVC standard, the proposed symmetric unified-cross unit (SUCU) based filtering scheme greatly reduces the design complexity. As a result, processing a 16×16 block needs 24 clock cycles. For the H.264/AVC standard, it takes 48 clock cycles for a 16×16 macro-block (MB). In the synthesis results, the proposed architecture occupies 41.6k equivalent gates at a frequency of 200 MHz in the SMIC 65 nm library, which satisfies the throughput requirement of super hi-vision (SHV) at 60 fps. With the filter reusing scheme, the universal design for the two standards saves 30% of the gate count in the filter part compared with dedicated designs. In addition, the total power consumption can be reduced by 57.2% with the skipping mode when edges need not be filtered.

  • Low Complexity Keypoint Extraction Based on SIFT Descriptor and Its Hardware Implementation for Full-HD 60 fps Video

    Takahiro SUZUKI  Takeshi IKENAGA  

     
    PAPER

      Page(s):
    1376-1383

    Scale-Invariant Feature Transform (SIFT) has lately attracted attention in computer vision as a robust keypoint detection algorithm which is invariant to scale, rotation and illumination changes. However, its computational complexity is too high for practical real-time applications. This paper proposes a low-complexity keypoint extraction algorithm based on the SIFT descriptor and the utilization of a database, together with its real-time hardware implementation for Full-HD resolution video. The proposed algorithm computes the SIFT descriptor on keypoints obtained by corner detection and selects a scale from the database. In the hardware, the keypoint detection and descriptor computation modules can be parallelized, because these modules do not depend on each other in the proposed algorithm, in contrast with SIFT, which computes a scale. The processing time of descriptor computation in this hardware is independent of the number of keypoints because descriptor generation is pipelined at the pixel level. Evaluation results show that the proposed algorithm in software is 12 times faster than SIFT. Moreover, the proposed hardware on an FPGA is 427 times faster than SIFT and 61 times faster than the proposed algorithm in software. The proposed hardware performs keypoint extraction and matching at 60 fps for Full-HD video.

  • Parameterization of All Stabilizing Two-Degrees-of-Freedom Simple Repetitive Controllers with Specified Frequency Characteristics

    Tatsuya SAKANUSHI  Jie HU  Kou YAMADA  

     
    PAPER

      Page(s):
    1384-1392

    The simple repetitive control system proposed by Yamada et al. is a type of servomechanism for periodic reference inputs. This system follows a periodic reference input with a small steady-state error, even if there is periodic disturbance or uncertainty in the plant. In addition, simple repetitive control systems ensure that transfer functions from the periodic reference input to the output and from the disturbance to the output have finite numbers of poles. Yamada et al. clarified the parameterization of all stabilizing simple repetitive controllers. Recently, Yamada et al. proposed the parameterization of all stabilizing two-degrees-of-freedom (TDOF) simple repetitive controllers that can specify the input-output characteristic and the disturbance attenuation characteristic separately. However, when using the method of Yamada et al., it is complex to specify the low-pass filter in the internal model for the periodic reference input that specifies the frequency characteristics. This paper extends the results of Yamada et al. and proposes the parameterization of all stabilizing TDOF simple repetitive controllers with specified frequency characteristics in which the low-pass filter can be specified beforehand.

  • Parallelization of Computing-Intensive Tasks of SIFT Algorithm on a Reconfigurable Architecture System

    Peng OUYANG  Shouyi YIN  Hui GAO  Leibo LIU  Shaojun WEI  

     
    PAPER

      Page(s):
    1393-1402

    The Scale Invariant Feature Transform (SIFT) algorithm is an excellent approach to feature detection. It is characterized by data-intensive computation. Current studies on accelerating the SIFT algorithm mainly follow three approaches: optimizing the parallel parts of the algorithm on general-purpose multi-core processors, designing customized multi-core processors dedicated to SIFT, and implementing it on FPGA platforms. The real-time performance of SIFT has been greatly improved. However, some solutions restrict factors such as the input image size and the numbers of octaves and scale factors in the SIFT algorithm, so the flexibility that ensures high execution performance under variable factors should be improved. This paper proposes a reconfigurable solution to solve this problem. We fully exploit the algorithm and adopt several techniques, such as fully parallel execution, block computation and the CORDIC transformation, to improve the execution efficiency on a REconfigurable MUltimedia System called REMUS. Experimental results show that the execution performance of SIFT is improved by 33%, 50%, and 8 times compared with execution on a multi-core platform, an FPGA, and an ASIC, respectively. The dynamic reconfiguration scheme in this work can configure the circuits to meet the computation requirements for different input image sizes and different numbers of octaves and scale factors during computation.

  • A Low Power Tone Recognition for Automatic Tonal Speech Recognizer

    Jirabhorn CHAIWONGSAI  Werapon CHIRACHARIT  Kosin CHAMNONGTHAI  Yoshikazu MIYANAGA  Kohji HIGUCHI  

     
    PAPER

      Page(s):
    1403-1411

    This paper proposes a low-power tone recognition method suitable for an automatic tonal speech recognizer (ATSR). The tone recognition estimates the fundamental frequency (F0) only from vowels by using a new magnitude difference function (MDF), called the vowel-MDF. Accordingly, the number of operations is considerably reduced. In order to apply the tone recognition in portable electronic equipment, it is designed using a parallel and pipelined architecture. Thanks to the pipelined and parallel computations, the architecture achieves high throughput and consumes low power. In addition, the architecture is able to reduce the number of input frames depending on the vowels, making it more adaptable to the maximum number of frames. The proposed architecture is evaluated with words selected from voice activation for GPS systems, phone dialing options, and words having the same phonemes but different tones. In comparison with the autocorrelation method, the experimental results show a 35.7% reduction in power consumption and a 27.1% improvement in tone recognition accuracy (110 words comprising 187 syllables). In comparison with ATSR without tone recognition, the speech recognition accuracy of ATSR with tone recognition shows a 25.0% improvement (2,250 training data and 45 testing words).
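
    The magnitude difference function underlying the proposal is a standard pitch estimator: for each candidate lag it sums the absolute differences between a frame and its shifted copy, and the lag minimizing that sum gives the F0 estimate. A generic Python sketch (a plain MDF over a whole frame, not the authors' vowel-MDF) follows.

        import math

        def estimate_f0_mdf(frame, fs, f0_min=80.0, f0_max=400.0):
            # Estimate the fundamental frequency of one speech frame with a
            # magnitude difference function. frame: list of samples, fs: sample rate.
            lag_min = int(fs / f0_max)
            lag_max = min(int(fs / f0_min), len(frame) - 1)
            best_lag, best_score = None, float('inf')
            for lag in range(lag_min, lag_max + 1):
                n = len(frame) - lag
                # average magnitude difference between the frame and its lagged copy
                score = sum(abs(frame[i] - frame[i + lag]) for i in range(n)) / n
                if score < best_score:
                    best_lag, best_score = lag, score
            return fs / best_lag if best_lag else 0.0

        # example: a 200 Hz sine sampled at 8 kHz gives an estimate of 200 Hz
        frame = [math.sin(2 * math.pi * 200 * t / 8000) for t in range(400)]
        print(estimate_f0_mdf(frame, 8000, f0_min=150.0, f0_max=300.0))   # 200.0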

  • Regular Section
  • Recovery of Missing Samples from Oversampled Bandpass Signals and Its Stability

    Sinuk KANG  Kil Hyun KWON  Dae Gwan LEE  

     
    PAPER-Digital Signal Processing

      Page(s):
    1412-1420

    We present a multi-channel sampling expansion for signals with selectively tiled band-region. From this we derive an oversampling expansion for any bandpass signal, and show that any finitely many missing samples from two-channel oversampling expansion can always be uniquely recovered. In addition, we find a sufficient condition under which some infinitely many missing samples can be recovered. Numerical stability of the recovery process is also discussed in terms of the oversampling rate and distribution of the missing samples.

  • High Precision Analog Data Acquisition System with Signal Transformer Isolation Technique

    Yoshihiro AKEBOSHI  Seiichi SAITO  Hideyuki OHASHI  

     
    PAPER-Analog Signal Processing

      Page(s):
    1421-1428

    In the fields of Factory Automation (FA), process control, and Supervisory Control and Data Acquisition (SCADA), analog data acquisition systems using isolation transformers are commonly used to measure and record analog signals through isolated inputs. In order to improve the input precision of the acquisition system, circuit techniques and a design method for the analog front-end circuit with signal transformers are proposed in this paper. A circuit technique that compensates for the droop of the pulse signal due to the characteristics of the signal transformer is employed. In addition, a numerical analysis of a non-linear circuit equation, which represents the behavior of the core saturation of the signal transformer, is performed in order to determine the parameters of the circuit. Using a small signal transformer developed specifically for this acquisition system, the linearity error is experimentally confirmed to be within +0.0204%/-0.0215%.

  • Relaxed Stability Condition for T-S Fuzzy Systems Using a New Fuzzy Lyapunov Function

    Sangsu YEH  Sangchul WON  

     
    PAPER-Systems and Control

      Page(s):
    1429-1436

    This paper presents a stability analysis for continuous-time Takagi-Sugeno fuzzy systems using a fuzzy Lyapunov function. The proposed fuzzy Lyapunov function involves the time derivatives of the states in order to include new free matrices in the Linear Matrix Inequality (LMI) stability conditions. These free matrices extend the solution space of the LMI problems. Numerical examples illustrate the effectiveness of the proposed methods.

  • Improved Key Recovery Attack on the BEAN Stream Cipher

    Hui WANG  Martin HELL  Thomas JOHANSSON  Martin ÅGREN  

     
    PAPER-Cryptography and Information Security

      Page(s):
    1437-1444

    BEAN is a newly proposed lightweight stream cipher adopting Fibonacci FCSRs. It is designed for very constrained environments and aims at providing a balance between security, efficiency and cost. A weakness in BEAN was first found by Ågren and Hell in 2011, resulting in a key recovery attack slightly better than brute force. In this paper, we present new correlations between state and keystream with a large statistical advantage, leading to a much more efficient key recovery attack. The time and data complexities of this attack are 2^57.53 and 2^59.94, respectively. Moreover, two new output functions are provided as alternatives, which are more efficient than the function used in BEAN and are immune to all attacks proposed on the cipher. Also, suggestions for improving the FCSRs are given.

  • Lower Bounds on the Aperiodic Hamming Correlations of Frequency Hopping Sequences

    Xing LIU  Daiyuan PENG  Xianhua NIU  Fang LIU  

     
    PAPER-Spread Spectrum Technologies and Applications

      Page(s):
    1445-1450

    The periodic Hamming correlation function is used as an important measure for evaluating frequency hopping (FH) sequence designs, but it is the aperiodic Hamming correlation of FH sequences that matters in real applications, and it has received comparatively little attention in the literature. In this paper, new lower bounds on the aperiodic Hamming correlation of FH sequences are established with respect to the size of the frequency slot set, the sequence length, the family size, the maximum aperiodic Hamming autocorrelation and the maximum aperiodic Hamming crosscorrelation. The new aperiodic bounds are tighter than the Peng-Fan bounds. In addition, the new bounds involve the squares of the maximum aperiodic Hamming autocorrelation and crosscorrelation, whereas the Peng-Fan bounds do not. For a given sequence length, family size and frequency slot set size, the maximum aperiodic Hamming autocorrelation and crosscorrelation lie inside an ellipse determined by the new aperiodic bounds.
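
    For reference, the sketch below computes the standard aperiodic Hamming correlation used throughout such bounds; the bounds themselves are not reproduced here.

    ```python
    import numpy as np

    # Standard definition: H_{X,Y}(tau) = sum_{t=0}^{N-1-tau} h[x_t, y_{t+tau}],
    # where h[a, b] = 1 iff a = b (frequency slots coincide), for 0 <= tau < N.
    def aperiodic_hamming_correlation(x, y, tau):
        x, y = np.asarray(x), np.asarray(y)
        n = len(x)
        assert len(y) == n and 0 <= tau < n
        return int(np.sum(x[:n - tau] == y[tau:]))

    # toy FH sequences over the frequency slot set {0, 1, 2, 3}
    X = [0, 1, 2, 1, 0, 3]
    Y = [1, 1, 2, 0, 3, 3]
    print([aperiodic_hamming_correlation(X, Y, t) for t in range(len(X))])
    ```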

  • Object Detection Using RSSI with Road Surface Reflection Model for Intersection Safety

    Shoma HISAKA  Shunsuke KAMIJO  

     
    PAPER-Intelligent Transport System

      Page(s):
    1451-1459

    We have developed a dedicated on-board “sensor” utilizing wireless communication devices for collision avoidance around road intersections. The “sensor” estimates the positions of transmitters carried by traffic participants by comparing the strengths of the signals received by four ZigBee receivers installed at the four corners of a vehicle. On-board sensors such as cameras cannot detect objects in non-line-of-sight (NLOS) areas caused by buildings and other vehicles. Although infrastructure sensors for vehicle-to-infrastructure (V2I) cooperative systems can detect such hidden objects, they are substantially more expensive than on-board sensors. The on-board wireless “sensor” developed in this work can therefore serve as an alternative tool for collision avoidance around intersections. Herein, we extend our previous work by introducing a road surface reflection model to improve the estimation accuracy. Using this model, we reduced the mismatch between the observed data and the calibration data of the estimation algorithm. The proposed system will be realized on the basis of these enhancements.
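
    The paper's calibrated road-surface reflection model is not reproduced here; as a hedged illustration of why such a model matters for RSSI-based positioning, the sketch below evaluates the classical two-ray ground-reflection model, with assumed values for the carrier frequency, antenna heights and reflection coefficient.

    ```python
    import numpy as np

    # Hedged sketch (assumed two-ray ground-reflection model, not necessarily the
    # paper's exact road-surface model): RSSI versus distance, combining the
    # direct path and the ground-reflected path with unit antenna gains.
    def two_ray_rssi_dbm(d, pt_dbm=0.0, f=2.4e9, ht=0.5, hr=0.5, gamma=-1.0):
        c = 3e8
        lam = c / f
        d_los = np.sqrt(d**2 + (ht - hr)**2)          # direct path length
        d_ref = np.sqrt(d**2 + (ht + hr)**2)          # reflected path length
        k = 2 * np.pi / lam
        e = (np.exp(-1j * k * d_los) / d_los
             + gamma * np.exp(-1j * k * d_ref) / d_ref)
        pr = (lam / (4 * np.pi))**2 * np.abs(e)**2    # relative received power
        return pt_dbm + 10 * np.log10(pr)

    for d in (1.0, 5.0, 10.0, 20.0):                  # illustrative distances [m]
        print(d, round(two_ray_rssi_dbm(d), 1))
    ```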

  • A Modified Pulse Coupled Neural Network with Anisotropic Synaptic Weight Matrix for Image Edge Detection

    Zhan SHI  Jinglu HU  

     
    PAPER-Image

      Page(s):
    1460-1467

    The pulse coupled neural network (PCNN) is a type of artificial neural network designed for image processing applications. It is a single-layer, two-dimensional network whose neurons are in one-to-one correspondence with the pixels of an input image, which makes it convenient to process the intensities and spatial locations of image pixels simultaneously. We therefore propose a modified PCNN with an anisotropic synaptic weight matrix for image edge detection, based on the intensity similarity of pixels to their neighborhoods. With the anisotropic synaptic weight matrix, interconnections are established only between the central neuron and the neighboring neurons corresponding to pixels with similar intensity values in a 3×3 neighborhood. Neurons corresponding to edge pixels and non-edge pixels thus receive different input signals from their neighboring neurons, and by setting appropriate threshold conditions, image step edges can be detected effectively. Compared with conventional PCNN-based edge detection methods, the proposed modified PCNN is much easier to control, and the result is obtained as soon as all neurons have pulsed. Furthermore, the proposed method is shown to distinguish isolated pixels from step-edge pixels better than derivative edge detectors.
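
    A hedged illustration of the anisotropy idea (not the paper's full neuron model): linking weights in a 3×3 neighbourhood are set only toward neighbours with similar intensity, so edge pixels end up with a different linking input than interior pixels. The similarity threshold `thr` is an assumed parameter.

    ```python
    import numpy as np

    # Build an "anisotropic" 3x3 linking-weight mask for the pixel at (r, c):
    # a connection is established only to neighbours whose intensity differs
    # from the centre pixel by less than `thr`.
    def anisotropic_weights(img, r, c, thr=10):
        w = np.zeros((3, 3))
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]:
                    if abs(int(img[rr, cc]) - int(img[r, c])) < thr:
                        w[dr + 1, dc + 1] = 1.0     # link established
        return w

    img = np.array([[10, 10, 200],
                    [10, 12, 210],
                    [11, 11, 205]], dtype=np.uint8)
    print(anisotropic_weights(img, 1, 1))           # links only toward the dark side
    ```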

  • An Adaptation Method for Morphological Opening Filters with a Smoothness Penalty on Structuring Elements

    Makoto NAKASHIZUKA  Yu ASHIHARA  Youji IIGUNI  

     
    PAPER-Image

      Page(s):
    1468-1477

    This paper proposes an adaptation method for the structuring elements of morphological filters. The structuring element of a morphological filter specifies the shape of the local structures that are eliminated or preserved in the output, so adapting the structuring element is a crucial problem in image denoising with morphological filters. Existing adaptation methods for structuring elements require preliminary training on example images. We propose an adaptation method for the structuring elements of morphological opening filters that does not require such training. In our approach, the opening filter is interpreted as approximating the input by a union of the structuring elements. In order to eliminate noise components, a penalty derived from an image-smoothness assumption is imposed on the structuring element. Image denoising is then achieved by decreasing an objective function that is the sum of an approximation error term and the penalty. In experiments, we demonstrate the removal of positive impulsive noise from images with the proposed method.
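
    The adaptation under a smoothness penalty is the paper's contribution and is not reproduced here; the sketch below only illustrates the baseline operation, a grey-scale opening with a fixed flat structuring element that removes positive impulsive noise.

    ```python
    import numpy as np
    from scipy import ndimage

    # Hedged sketch: plain grey-scale opening with a fixed 3x3 flat structuring
    # element; the paper adapts the structuring element instead of fixing it.
    rng = np.random.default_rng(1)
    img = np.zeros((64, 64))
    img[16:48, 16:48] = 1.0                              # smooth object
    impulses = rng.random((64, 64)) < 0.02               # positive impulsive noise
    noisy = np.clip(img + impulses.astype(float), 0, 1)

    opened = ndimage.grey_opening(noisy, size=(3, 3))    # removes isolated positive spikes
    print(float(np.abs(opened - img).mean()))            # small residual error
    ```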

  • Adaptive Feedback Cancellation on Improved DCD Algorithms

    Chao DONG  Li GAO  Ying HONG  Chengpeng HAO  

     
    LETTER-Digital Signal Processing

      Page(s):
    1478-1481

    The dichotomous coordinate descent (DCD) iteration method has been proposed for adaptive feedback cancellation; it uses a fixed number of iterations and a fixed amplitude range. In this paper, improved DCD algorithms are proposed, which replace the constant number of iterations and the constant amplitude range with a variable number of iterations (VI) and/or a variable amplitude range (VA), yielding the VI-DCD, VA-DCD and VIA-DCD algorithms. Computer simulations are used to compare the proposed algorithms against the original DCD algorithm, and the results demonstrate significant improvements in convergence speed and accuracy. Further simulations show that the proposed algorithms also achieve superior performance with a real speech segment as the input.
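
    A hedged sketch of a basic leading-element DCD solver for the normal equations R h = p with a fixed amplitude range and a fixed iteration budget, roughly the baseline the letter improves on; the proposed VI/VA/VIA modifications are not reproduced, and all parameter values are illustrative.

    ```python
    import numpy as np

    # Leading-element DCD: refine the solution bit by bit with power-of-two steps,
    # updating only the coordinate with the largest residual at each step.
    def dcd_solve(R, p, H=1.0, n_bits=16, max_updates=256):
        M = len(p)
        h = np.zeros(M)
        r = p.copy()                       # residual p - R h
        alpha = H / 2.0                    # current step amplitude
        updates = 0
        for _ in range(n_bits):
            while updates < max_updates:
                n = int(np.argmax(np.abs(r)))
                if np.abs(r[n]) <= (alpha / 2.0) * R[n, n]:
                    break                  # no coordinate passes -> halve the amplitude
                s = np.sign(r[n])
                h[n] += s * alpha
                r -= s * alpha * R[:, n]
                updates += 1
            alpha /= 2.0
            if updates >= max_updates:
                break
        return h

    A = np.random.default_rng(2).normal(size=(200, 8))
    R, p = A.T @ A, A.T @ np.ones(200)
    print(np.max(np.abs(dcd_solve(R, p, H=2.0) - np.linalg.solve(R, p))))  # misalignment
    ```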

  • Partial-Update Normalized Sign LMS Algorithm Employing Sparse Updates

    Seong-Eun KIM  Young-Seok CHOI  Jae-Woo LEE  Woo-Jin SONG  

     
    LETTER-Digital Signal Processing

      Page(s):
    1482-1487

    This paper presents a novel normalized sign least-mean-square (NSLMS) algorithm that updates only a part of the filter coefficients and simultaneously performs sparse updates, with the goal of reducing computational complexity. A combination of the partial-update scheme and the set-membership framework is incorporated into L-norm adaptive filtering, yielding computational efficiency. For stabilized convergence, we formulate a robust update recursion by imposing an upper bound on the step size. Furthermore, we analyze the mean-square stability of the proposed algorithm for white input signals. Experimental results show that the proposed low-complexity NSLMS algorithm has convergence performance similar to that of the partial-update NSLMS with greatly reduced computational complexity, and is comparable to the set-membership partial-update NLMS.
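
    A hedged sketch of one textbook form of a partial-update normalized sign LMS step (sign applied to the error, normalization by the regressor energy, M-max coefficient selection); the letter's set-membership sparse-update test and step-size bound are not reproduced, and its exact recursion may differ.

    ```python
    import numpy as np

    # One possible partial-update NSLMS step: only the m coefficients with the
    # largest |x| entries are adapted, and the error enters only through its sign.
    def pu_nslms_step(w, x, d, mu=0.5, m=4, eps=1e-8):
        e = d - w @ x                          # a-priori error
        sel = np.argsort(np.abs(x))[-m:]       # M-max selection of coefficients
        w = w.copy()
        w[sel] += mu * np.sign(e) * x[sel] / (x @ x + eps)
        return w, e

    rng = np.random.default_rng(3)
    w_true = rng.normal(size=16)               # unknown system to identify
    w = np.zeros(16)
    for _ in range(20000):
        x = rng.normal(size=16)
        d = w_true @ x + 1e-3 * rng.normal()
        w, e = pu_nslms_step(w, x, d)
    print(float(np.linalg.norm(w - w_true)))   # residual misalignment (sign-algorithm floor)
    ```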

  • Iterative Learning Control with Advanced Output Data Using an Estimation of the Impulse Response

    Gu-Min JEONG  Sang-Hoon JI  

     
    LETTER-Systems and Control

      Page(s):
    1488-1491

    This letter proposes an iterative learning control with advanced output data (ADILC) scheme that uses an estimate of the impulse response for non-minimum-phase (NMP) systems whose model is unknown except for the relative degree and the number of NMP zeros. Although ADILC has a simple learning structure that can be applied to both minimum-phase and NMP systems, at least a partial model must be known in order to apply it. Considering this, we propose a new ADILC method based on an estimate of the impulse response for NMP systems whose model is unknown, and present an estimation method for the learning matrix together with the resulting ADILC scheme.
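
    A hedged sketch of the kind of impulse-response (Markov-parameter) estimate from which a learning matrix could be built; the plant coefficients and noise level are illustrative assumptions, and the ADILC update law itself is not reproduced.

    ```python
    import numpy as np

    # Estimate the first L impulse-response parameters of an unknown discrete-time
    # plant from one input-output experiment by least squares.
    rng = np.random.default_rng(4)
    g_true = np.array([0.0, 1.0, -1.5, 0.6, 0.25, 0.1, 0.04])  # assumed plant (illustrative)
    L, N = len(g_true), 400
    u = rng.normal(size=N)
    y = np.convolve(u, g_true)[:N] + 1e-3 * rng.normal(size=N)

    # Toeplitz regression matrix: y[n] ~= sum_k g[k] u[n-k]
    U = np.column_stack([np.r_[np.zeros(k), u[:N - k]] for k in range(L)])
    g_hat, *_ = np.linalg.lstsq(U, y, rcond=None)
    print(np.round(g_hat, 3))                                  # close to g_true
    ```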

  • An Object Based Cooperative Spectrum Sensing Scheme with Best Relay

    Meiling LI  Anhong WANG  

     
    LETTER-Numerical Analysis and Optimization

      Page(s):
    1492-1495

    The performance of cooperative spectrum sensing (CSS) is limited not only by imperfect sensing channels but also by imperfect reporting channels. In order to improve the transmission reliability of the reporting channels, an object-based cooperative spectrum sensing scheme with best relay (Pe-BRCS) is proposed, in which the best relay is selected by minimizing the total reporting error probability. Numerical results show that the Pe-BRCS scheme reduces the total reporting error probability and improves the sensing performance.
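
    A hedged illustration of best-relay selection when each reporting hop is modelled as a binary symmetric channel; the error probabilities below are made-up values and the rule shown is not necessarily the exact Pe-BRCS criterion.

    ```python
    import numpy as np

    # Assumed per-hop reporting error probabilities for four candidate relays.
    p_su_relay = np.array([0.08, 0.03, 0.12, 0.05])   # SU -> relay hop
    p_relay_fc = np.array([0.02, 0.09, 0.01, 0.06])   # relay -> fusion centre hop

    # Cascade of two binary symmetric channels: a reported bit is wrong iff
    # exactly one of the two hops flips it.
    p_total = p_su_relay * (1 - p_relay_fc) + (1 - p_su_relay) * p_relay_fc
    best = int(np.argmin(p_total))                    # relay with minimal total error
    print(best, p_total.round(4))
    ```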

  • MacWilliams Type Identity for M-Spotty Rosenbloom-Tsfasman Weight Enumerator of Linear Codes over Finite Ring

    Jianzhang CHEN  Wenguang LONG  Bo FU  

     
    LETTER-Coding Theory

      Page(s):
    1496-1500

    Error control codes have become an essential technique for improving the reliability of various digital systems. A new type of error control code, called m-spotty byte error control codes, is applied to computer memory systems and is essential for making them reliable. Here, we introduce the m-spotty Rosenbloom-Tsfasman weight and the m-spotty Rosenbloom-Tsfasman weight enumerator of linear codes over F_q[u]/(u^k) with u^k = 0. We also derive a MacWilliams-type identity for the m-spotty Rosenbloom-Tsfasman weight enumerator.

  • Geometric Predicted Unscented Kalman Filtering in Rotate Magnetic Ranging

    Chao ZHANG  Keke PANG  Yaxin ZHANG  

     
    LETTER-Measurement Technology

      Page(s):
    1501-1504

    A rotating magnetic field can be used for ranging, especially in environments where the electric field suffers deep fading and attenuation, such as underground drilling. However, the magnetic field is still affected by ferromagnetic materials, e.g., oil casing pipes, and the resulting error of a single measurement is not tolerable. In this paper, the Geometric Predicted Unscented Kalman Filtering (GP-UKF) algorithm is developed for an underground rotating-magnetic ranging system. With GP-UKF, the Root Mean Square Error (RMSE) can be suppressed, which is particularly important for long-range magnetic detection, i.e., beyond 50 meters.
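
    The geometric prediction step specific to GP-UKF is not reproduced here; for context, the sketch below shows the unscented transform at the core of any UKF, propagating a position estimate through a nonlinear range measurement. All numerical values are illustrative.

    ```python
    import numpy as np

    # Unscented transform: propagate a mean and covariance through a nonlinear
    # function f using 2n+1 sigma points and weighted sample statistics.
    def unscented_transform(mean, cov, f, alpha=0.5, beta=2.0, kappa=0.0):
        n = len(mean)
        lam = alpha**2 * (n + kappa) - n
        S = np.linalg.cholesky((n + lam) * cov)
        sigma = np.vstack([mean, mean + S.T, mean - S.T])      # 2n+1 sigma points
        wm = np.full(2 * n + 1, 1.0 / (2 * (n + lam)))
        wc = wm.copy()
        wm[0] = lam / (n + lam)
        wc[0] = lam / (n + lam) + (1 - alpha**2 + beta)
        ysig = np.array([f(s) for s in sigma])
        ymean = wm @ ysig
        ycov = (wc * (ysig - ymean).T) @ (ysig - ymean)
        return ymean, ycov

    mean = np.array([30.0, 40.0])                              # assumed 2-D position [m]
    cov = np.diag([4.0, 4.0])
    range_fun = lambda p: np.array([np.hypot(p[0], p[1])])     # range to the origin
    print(unscented_transform(mean, cov, range_fun))
    ```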