
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor: 0.72
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4


Volume E91-D No.7 (Publication Date: 2008/07/01)

    Special Section on Machine Vision and its Applications
  • FOREWORD Open Access

    Hiroshi SAKO  

     
    FOREWORD

      Page(s):
    1847-1847
  • View Invariant Human Action Recognition Based on Factorization and HMMs

    Xi LI  Kazuhiro FUKUI  

     
    PAPER

      Page(s):
    1848-1854

    This paper addresses the problem of view-invariant action recognition using 2D trajectories of landmark points on the human body. It is a challenging task since, for a specific action category, the 2D observations of different instances might be extremely different due to varying viewpoints and changes in speed. By assuming that the execution of an action can be approximated by a dynamic linear combination of a set of basis shapes, a novel view-invariant human action recognition method is proposed based on non-rigid matrix factorization and Hidden Markov Models (HMMs). We show that the low-dimensional weight coefficients of the basis shapes, obtained by non-rigid factorization of the measurement matrix, contain the key information for action recognition regardless of viewpoint changes. Based on the extracted discriminative features, HMMs are used for temporal dynamic modeling and robust action classification. The proposed method is tested on real-life sequences and achieves promising performance.
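
    As a rough illustration of the HMM classification stage described above (one HMM per action class, classification by maximum likelihood), the following sketch uses the hmmlearn library. It is not the authors' factorization pipeline: the per-frame feature vectors are assumed to be the already-extracted low-dimensional weight coefficients, and the number of hidden states is an arbitrary choice.

```python
# Generic per-class HMM training and maximum-likelihood classification sketch.
# Assumes each training sample is a (T, D) sequence of feature vectors.
import numpy as np
from hmmlearn import hmm

def train_action_hmms(train_data, n_states=4):
    """train_data: dict mapping action label -> list of (T_i, D) feature sequences."""
    models = {}
    for label, sequences in train_data.items():
        X = np.vstack(sequences)                    # stack all frames of this class
        lengths = [len(seq) for seq in sequences]   # per-sequence frame counts
        model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag", n_iter=50)
        model.fit(X, lengths)
        models[label] = model
    return models

def classify_action(models, sequence):
    """Return the action label whose HMM gives the highest log-likelihood."""
    return max(models, key=lambda label: models[label].score(sequence))
```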

  • A Single Camera Motion Capture System for Human-Computer Interaction

    Ryuzo OKADA  Bjorn STENGER  

     
    PAPER

      Page(s):
    1855-1862

    This paper presents a method for markerless human motion capture using a single camera. It uses tree-based filtering to efficiently propagate a probability distribution over poses of a 3D body model. The pose vectors and associated shapes are arranged in a tree, which is constructed by hierarchical pairwise clustering, in order to efficiently evaluate the likelihood in each frame. A new likelihood function based on silhouette matching is proposed that improves the pose estimation of thinner body parts, i.e., the limbs. The dynamic model takes self-occlusion into account by increasing the variance of occluded body parts, thus allowing for recovery when the body part reappears. We present two applications of our method that work in real time on a Cell Broadband Engine™: a computer game and a virtual clothing application.

  • Study of Facial Features Combination Using a Novel Adaptive Fuzzy Integral Fusion Model

    M. Mahdi GHAZAEI ARDAKANI  Shahriar BARADARAN SHOKOUHI  

     
    PAPER

      Page(s):
    1863-1870

    A new adaptive model based on fuzzy integrals is presented and used to combine three well-known methods, Eigenface, Fisherface, and SOMface, for face classification. After training the competence estimation functions, the adaptive mechanism enables our system to filter out unsure judgments of the classifiers for a specific input. Comparison with classical and non-adaptive approaches demonstrates the superiority of this model. We also examine how these features contribute to the combined result and whether together they can establish a more robust feature.

  • An Efficient 3D Geometrical Consistency Criterion for Detection of a Set of Facial Feature Points

    Mayumi YUASA  Tatsuo KOZAKAYA  Osamu YAMAGUCHI  

     
    PAPER

      Page(s):
    1871-1877

    We propose a novel, efficient three-dimensional geometrical consistency criterion for detection of a set of facial feature points. Many face recognition methods employing a single image require localization of particular facial feature points, and their performance is highly dependent on the localization accuracy of these feature points. The proposed method can calculate the alignment error of a point set rapidly because the calculation is not iterative. Moreover, the method does not depend on the type of point detection method used, and no learning is needed. Independently detected point sets are evaluated through matching to a three-dimensional generic face model. The correspondence error is defined by the distance between the feature points defined in the model and those detected. The proposed criterion is evaluated through experiments using various facial feature point sets on face images.

  • Face Recognition Based on Mutual Projection of Feature Distributions

    Akira INOUE  Atsushi SATO  

     
    PAPER

      Page(s):
    1878-1884

    This paper proposes a new face recognition method based on mutual projection of feature distributions. The proposed method introduces a new robust measure between two feature distributions. This measure is computed as the harmonic mean of two distance values obtained by projecting each mean value onto the opposite feature distribution. The proposed method does not require eigenvalue analysis of the two subspaces. The method was applied to the face recognition task on temporal image sequences. Experimental results demonstrate that the computational cost was reduced without degrading identification performance in comparison with the conventional method.

  • 3D Precise Inspection of Terminal Lead for Electronic Devices by Single Camera Stereo Vision

    Takashi WATANABE  Akira KUSANO  Takayuki FUJIWARA  Hiroyasu KOSHIMIZU  

     
    PAPER

      Page(s):
    1885-1892

    It is very important to guarantee the quality of industrial products by means of visual inspection. In order to reduce soldering defects caused by terminal deformation and terminal burrs in the manufacturing process, this paper proposes a 3D visual inspection system based on single-camera stereo vision. The baseline of this single-camera stereo setup was precisely calibrated by an image processing procedure, and an original algorithm reduces the error in extracting the measuring-point coordinates used to compute disparity. Comparing its performance with that of human inspection using an industrial microscope, the proposed 3D inspection could be an alternative in both precision and processing cost. Since the practical specification for 3D precision is less than 1 pixel and the experimental performance was around the same level, the proposed system demonstrated that soldering defects with terminal deformation and terminal burrs were decreased, especially in 3D inspection. Toward in-line inspection, this paper also suggests how human inspection of the products could be modeled and implemented by a computer system in the manufacturing process.

  • Precise Top View Image Generation without Global Metric Information

    Hiroshi KANO  Keisuke ASARI  Yohei ISHII  Hitoshi HONGO  

     
    PAPER

      Page(s):
    1893-1898

    We describe a practical and precise calibration method for generating a top view image that is transformed so that a planar object such as the road can be observed from a direction perpendicular to its surface. The geometric relation between the input and output images is described by a 3 × 3 homography matrix. Conventional methods use large planar calibration patterns to achieve precise transformations. The proposed method uses much smaller element patterns that are placed in arbitrary positions within the view of the camera. One of the patterns is used to obtain an initial homography. Then, the information from all of the patterns is used by a non-linear optimization scheme to reach a global optimum homography. The experiment done to evaluate the method showed that the precision of the proposed method is comparable to that of the conventional method where a large calibration pattern is used, making it more practical for automotive applications.
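
    As a generic illustration of how a 3 × 3 homography maps camera-image coordinates onto the ground plane for top-view generation, the sketch below uses OpenCV. The point correspondences, image, and output size are placeholders, and the paper's multi-pattern non-linear refinement is not reproduced.

```python
# Estimate a homography from point correspondences and warp to a top view (OpenCV).
import cv2
import numpy as np

# Placeholder correspondences between camera-image pixels and ground-plane coordinates.
src_pts = np.float32([[320, 400], [620, 410], [600, 700], [300, 690]])  # camera image
dst_pts = np.float32([[100, 100], [400, 100], [400, 500], [100, 500]])  # top-view plane

H, _ = cv2.findHomography(src_pts, dst_pts, method=0)   # initial 3x3 homography estimate

img = np.full((800, 800, 3), 128, np.uint8)             # synthetic stand-in for a frame
top_view = cv2.warpPerspective(img, H, (500, 600))      # bird's-eye (top view) image
```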

  • Overtaking Vehicle Detection Method and Its Implementation Using IMAPCAR Highly Parallel Image Processor

    Kazuyuki SAKURAI  Shorin KYO  Shin'ichiro OKAZAKI  

     
    PAPER

      Page(s):
    1899-1905

    This paper describes the real-time implementation of a vision-based overtaking vehicle detection method for driver assistance systems using IMAPCAR, a highly parallel SIMD linear array processor. The implemented overtaking vehicle detection method is based on optical flows detected by block matching using SAD and detection of the flows' vanishing point. The implementation is done efficiently by taking advantage of the parallel SIMD architecture of IMAPCAR. As a result, video-rate (33 frames/s) implementation could be achieved.
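
    The block-matching step underlying the optical-flow computation (minimizing the sum of absolute differences, SAD, over a local search window) can be written generically. The sketch below is a plain NumPy illustration, not the parallel IMAPCAR implementation; the block size and search radius are arbitrary choices.

```python
# SAD block matching between two grayscale frames (illustrative, not SIMD-optimized).
import numpy as np

def sad_block_match(prev, curr, y, x, block=8, radius=4):
    """Return the (dy, dx) displacement minimizing SAD for the block at (y, x)."""
    ref = prev[y:y + block, x:x + block].astype(np.int32)
    best_sad, best_disp = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > curr.shape[0] or xx + block > curr.shape[1]:
                continue                               # candidate block out of bounds
            cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
            sad = int(np.abs(ref - cand).sum())
            if best_sad is None or sad < best_sad:
                best_sad, best_disp = sad, (dy, dx)
    return best_disp
```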

  • Recognition of Plain Objects Using Local Region Matching

    Al MANSUR  Katsutoshi SAKATA  Dipankar DAS  Yoshinori KUNO  

     
    PAPER

      Page(s):
    1906-1913

    Conventional interest-point-based matching requires computationally expensive patch preprocessing and is not appropriate for recognition of plain objects with negligible detail. This paper presents a method for extracting distinctive interest regions from images that can be used to perform reliable matching between different views of plain objects or scenes. We formulate the correspondence problem in a Naive Bayesian classification framework with simple correlation-based matching, which makes our system fast, simple, efficient, and robust. To facilitate matching using a very small number of interest regions, we also propose a method to reduce the search area inside a test scene. Using this method, it is possible to robustly identify objects among clutter and occlusion while achieving near real-time performance. Our system performs remarkably well on plain objects where some state-of-the-art methods fail. Since our system is particularly suitable for the recognition of plain objects, we refer to it as the Simple Plane Object Recognizer (SPOR).

  • An Intelligent Active Video Surveillance System Based on the Integration of Virtual Neural Sensors and BDI Agents

    Massimo DE GREGORIO  

     
    PAPER

      Page(s):
    1914-1921

    In this paper we present an intelligent active video surveillance system currently adopted in two different application domains: railway tunnels and outdoor storage areas. The system takes advantage of the integration of Artificial Neural Networks (ANN) and symbolic Artificial Intelligence (AI). This hybrid system is formed by virtual neural sensors (implemented as WiSARD-like systems) and BDI agents. The coupling of virtual neural sensors with symbolic reasoning for interpreting their outputs makes this approach both very light from a computational and hardware point of view and rather robust in performance. The system works in different scenarios and under difficult lighting conditions.

  • Robust Small-Object Detection for Outdoor Wide-Area Surveillance

    Daisuke ABE  Eigo SEGAWA  Osafumi NAKAYAMA  Morito SHIOHARA  Shigeru SASAKI  Nobuyuki SUGANO  Hajime KANNO  

     
    PAPER

      Page(s):
    1922-1928

    In this paper, we present a robust small-object detection method, which we call "Frequency Pattern Emphasis Subtraction (FPES)", for wide-area surveillance such as that of harbors, rivers, and plant premises. For achieving robust detection under changes in environmental conditions, such as illuminance level, weather, and camera vibration, our method distinguishes target objects from background and noise based on the differences in frequency components between them. The evaluation results demonstrate that our method detected more than 95% of target objects in images of large surveillance areas ranging from 30 to 75 meters at their center.

  • Estimating Anomality of the Video Sequences for Surveillance Using 1-Class SVM

    Kyoko SUDO  Tatsuya OSAWA  Kaoru WAKABAYASHI  Hideki KOIKE  Kenichi ARAKAWA  

     
    PAPER

      Page(s):
    1929-1936

    We have proposed a method to detect and quantitatively extract anomalies from surveillance videos. Using our method, anomalies are detected as patterns based on spatio-temporal features that are outliers in a new feature space. Conventional anomaly detection methods use features such as tracks or local spatio-temporal features, both of which provide insufficient timing information. Using our method, the principal components of spatio-temporal features of change are extracted from the frames of video sequences of several seconds' duration. This enables anomalies based on movement irregularity, in both position and speed, to be determined and thus permits the automatic detection of anomalous events in sequences of constant length without regard to their start and end. We used a 1-class SVM, which is an unsupervised outlier detection method. The output from the SVM indicates the distance between the outlier and the concentrated base pattern. We demonstrated that the anomalies extracted using our method subjectively matched perceived irregularities in the pattern of movements. Our method is useful in surveillance services because the captured images can be shown in order of anomality, which significantly reduces the time needed.
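
    The 1-class SVM scoring stage can be illustrated with scikit-learn. The feature vectors below are random placeholders standing in for the principal components of the spatio-temporal change features, and the kernel parameters are arbitrary.

```python
# One-class SVM outlier scoring sketch (scikit-learn); features are placeholders.
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
normal_features = rng.normal(size=(500, 16))   # stand-in for features of normal clips
test_features = rng.normal(size=(20, 16))      # stand-in for clips to be ranked

ocsvm = OneClassSVM(kernel="rbf", nu=0.05, gamma="scale").fit(normal_features)

# decision_function is larger for more "normal" samples; negate it so that a larger
# score means more anomalous, then present clips in descending order of that score.
anomaly_score = -ocsvm.decision_function(test_features)
ranking = np.argsort(-anomaly_score)
```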

  • Random Texture Defect Detection Using 1-D Hidden Markov Models Based on Local Binary Patterns

    Hadi HADIZADEH  Shahriar BARADARAN SHOKOUHI  

     
    PAPER

      Page(s):
    1937-1945

    In this paper, a novel method for random texture defect detection using a collection of 1-D HMMs is presented. The sound textural content of a set of training texture images is first encoded by a compressed LBP histogram, and the local patterns of the input training textures are then learned, in a multiscale framework, through a series of HMMs according to the LBP codes that belong to each bin of this compressed LBP histogram. The hidden states of these HMMs at different scales are used as a texture descriptor that can model the normal behavior of the local texture units inside the training images. The optimal number of these HMMs (models) is determined in an unsupervised manner as a model selection problem. Finally, at the testing stage, the local patterns of the input test image are first predicted by the trained HMMs and a prediction error is calculated for each pixel position in order to obtain a defect map at each scale. The detection results are then merged by an inter-scale post-fusion method for novelty detection. The proposed method is tested on a database of grayscale ceramic tile images.
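
    For reference, the basic local binary pattern (LBP) codes mentioned above can be computed as follows. This single-scale NumPy sketch covers only the 8-neighbour LBP code and its 256-bin histogram; the paper's histogram compression, multiscale framework, and HMM modelling are not included.

```python
# Basic 8-neighbour LBP codes and their histogram (single scale, illustrative).
import numpy as np

def lbp_histogram(img):
    """img: 2-D uint8 array. Returns the 256-bin histogram of basic LBP codes."""
    c = img[1:-1, 1:-1]                      # centre pixels
    neighbours = [img[0:-2, 0:-2], img[0:-2, 1:-1], img[0:-2, 2:],
                  img[1:-1, 2:],   img[2:, 2:],     img[2:, 1:-1],
                  img[2:, 0:-2],   img[1:-1, 0:-2]]
    codes = np.zeros_like(c, dtype=np.uint8)
    for bit, n in enumerate(neighbours):
        codes |= (n >= c).astype(np.uint8) << bit   # set bit if neighbour >= centre
    hist, _ = np.histogram(codes, bins=256, range=(0, 256))
    return hist
```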

  • Image Enhancement by Analysis on Embedded Surfaces of Images and a New Framework for Enhancement Evaluation

    Li TIAN  Sei-ichiro KAMATA  

     
    PAPER

      Page(s):
    1946-1954

    Image enhancement plays an important role in many machine vision applications where images are captured in low-contrast and low-illumination conditions. In this study, we propose a new method for image enhancement based on analysis of the embedded surfaces of images. The proposed method gives an insight into the relationship between image intensity and image enhancement. In our method, the scaled surface area and the surface volume are proposed and used to reconstruct the image iteratively for contrast enhancement, and the illumination of the reconstructed image can also be adjusted simultaneously. In conventional works, the most common measures of enhanced-image quality are Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR). These two measures have been recognized as inadequate because they do not evaluate results the way the human visual system does. This paper therefore also presents a new framework for evaluating image enhancement using both objective and subjective measures. This framework can also be used for other image quality evaluations, such as denoising evaluation. We compare our enhancement method with some well-known enhancement algorithms, including wavelet and curvelet methods, using the new evaluation framework. The results show that our method gives better performance in most objective and subjective criteria than the conventional methods.

  • Automatic Cell Segmentation Using a Shape-Classification Model in Immunohistochemically Stained Cytological Images

    Shishir SHAH  

     
    PAPER

      Page(s):
    1955-1962

    This paper presents a segmentation method for detecting cells in immunohistochemically stained cytological images. A two-phase approach to segmentation is used where an unsupervised clustering approach coupled with cluster merging based on a fitness function is used as the first phase to obtain a first approximation of the cell locations. A joint segmentation-classification approach incorporating ellipse as a shape model is used as the second phase to detect the final cell contour. The segmentation model estimates a multivariate density function of low-level image features from training samples and uses it as a measure of how likely each image pixel is to be a cell. This estimate is constrained by the zero level set, which is obtained as a solution to an implicit representation of an ellipse. Results of segmentation are presented and compared to ground truth measurements.

  • Regular Section
  • Quantum-Behaved Particle Swarm Optimization with Chaotic Search

    Kaiqiao YANG  Hirosato NOMURA  

     
    PAPER-Algorithm Theory

      Page(s):
    1963-1970

    Chaotic search is introduced into Quantum-behaved Particle Swarm Optimization (QPSO) to increase the diversity of the swarm in the latter period of the search, so as to help the system escape from local optima. Taking full advantage of the ergodicity and randomicity of chaotic variables, the chaotic search is carried out in the neighborhoods of particles that are trapped in local optima. Experimental results on test functions show that QPSO with chaotic search outperforms both standard Particle Swarm Optimization (PSO) and QPSO.
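
    A generic sketch of this idea, a standard QPSO position update followed by a logistic-map chaotic local search around the best particle, is given below. It is an illustration under assumed settings (the contraction-expansion coefficient beta, the search radius, the chaotic-search budget, and the sphere test function are all placeholders), not the exact scheme of the paper.

```python
# Generic QPSO update plus a logistic-map chaotic local search (illustrative).
import numpy as np

rng = np.random.default_rng(1)

def sphere(x):                                   # placeholder test function
    return float(np.sum(x ** 2))

def qpso_with_chaotic_search(f=sphere, dim=10, n=20, iters=200, beta=0.75, radius=0.1):
    x = rng.uniform(-5.0, 5.0, (n, dim))
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    for _ in range(iters):
        g = int(np.argmin(pbest_val))
        gbest = pbest[g].copy()
        mbest = pbest.mean(axis=0)               # mean of the personal best positions
        phi = rng.random((n, dim))
        u = rng.random((n, dim)) + 1e-12         # avoid log(0)
        sign = np.where(rng.random((n, dim)) < 0.5, 1.0, -1.0)
        attractor = phi * pbest + (1.0 - phi) * gbest
        x = attractor + sign * beta * np.abs(mbest - x) * np.log(1.0 / u)
        for i in range(n):
            v = f(x[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = x[i].copy(), v
        # Chaotic local search: logistic-map candidates in a neighbourhood of gbest.
        z = rng.random(dim)
        for _ in range(20):
            z = 4.0 * z * (1.0 - z)              # logistic map, ergodic in (0, 1)
            cand = gbest + radius * (2.0 * z - 1.0)
            v = f(cand)
            if v < pbest_val[g]:
                pbest[g], pbest_val[g] = cand, v
    g = int(np.argmin(pbest_val))
    return pbest[g], float(pbest_val[g])
```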

  • An Efficient Index Dissemination in Unstructured Peer-to-Peer Networks

    Yusuke TAKAHASHI  Taisuke IZUMI  Hirotsugu KAKUGAWA  Toshimitsu MASUZAWA  

     
    PAPER-Algorithm Theory

      Page(s):
    1971-1981

    Using Bloom filters is one of the most popular and efficient lookup methods in P2P networks. A Bloom filter is a representation of data item indices that achieves a small memory requirement by allowing one-sided errors (false positives). In the lookup scheme based on the Bloom filter, each peer disseminates in advance a Bloom filter representing the indices of the data items it owns. Using the information of the disseminated Bloom filters as a clue, each query can find a short path to its destination. In this paper, we propose an efficient extension of the Bloom filter, called the Deterministic Decay Bloom Filter (DDBF), and an index dissemination method based on it. While index dissemination based on a standard Bloom filter suffers performance degradation from containing information on too many data items when its dissemination radius is large, the DDBF can circumvent such degradation by limiting information according to the distance between the filter holder and the item holders, i.e., a DDBF contains less information for faraway items and more information for nearby items. Interestingly, the construction of DDBFs requires no extra cost beyond that of standard filters. We also show by simulation that our method achieves better lookup performance than existing ones.
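
    For reference, a minimal Bloom filter (insertion and membership tests with one-sided, false-positive-only errors) is sketched below. It does not implement the deterministic decay of the proposed DDBF, and the bit-array size and hash construction are arbitrary choices.

```python
# Minimal Bloom filter: set membership with one-sided (false-positive) errors.
import hashlib

class BloomFilter:
    def __init__(self, m=1024, k=4):
        self.m, self.k = m, k              # m bits, k hash functions (arbitrary sizes)
        self.bits = bytearray(m)

    def _positions(self, item):
        for i in range(self.k):            # derive k positions from salted SHA-256
            h = hashlib.sha256(f"{i}:{item}".encode()).hexdigest()
            yield int(h, 16) % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def __contains__(self, item):
        return all(self.bits[pos] for pos in self._positions(item))

bf = BloomFilter()
bf.add("movie.avi")                         # index of a data item the peer owns
print("movie.avi" in bf, "song.mp3" in bf)  # True, and almost certainly False
```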

  • An Effective GML Documents Compressor

    Jihong GUAN  Shuigeng ZHOU  Yan CHEN  

     
    PAPER-Database

      Page(s):
    1982-1990

    As GML is becoming the de facto standard for geographic data storage, transmission, and exchange, more and more geographic data exists in GML format. In applications, GML documents are usually very large because they contain a large number of verbose markup tags and a large amount of spatial coordinate data. In order to speed up data transmission and reduce network cost, it is essential to develop effective and efficient GML compression tools. Although GML is a special case of XML, current XML compressors are not effective when directly applied to GML, because these compressors have been designed for general XML data. In this paper, we propose GPress, a compressor for effectively compressing GML documents. To the best of our knowledge, GPress is the first compressor specifically for GML document compression. GPress exploits the unique characteristics of GML documents to achieve good performance. Extensive experiments over real-world GML documents show that GPress evidently outperforms XMill (one of the best existing XML compressors) in compression ratio, while its compression efficiency is comparable to that of existing XML compressors.

  • A New Dimension Analysis on Blocking Behavior in Banyan-Based Optical Switching Networks

    Chen YU  Yasushi INOGUCHI  Susumu HORIGUCHI  

     
    PAPER-Networks

      Page(s):
    1991-1998

    Vertically stacked optical banyan (VSOB) is an attractive architecture for constructing banyan-based optical switches. Blocking behavior analysis is an effective approach to studying network performance and finding a graceful compromise among hardware cost, blocking probability, and crosstalk tolerance; however, little has been done on analyzing the blocking behavior of VSOB networks under a crosstalk constraint, which adds a new dimension to switching performance. In this paper, we study the overall blocking behavior of a VSOB network under various degrees of crosstalk, and an upper bound on the blocking probability of the network is developed. The upper bound accurately depicts the overall blocking behavior of a VSOB network, as verified by extensive simulation results, and it agrees with the strictly nonblocking condition of the network. The derived upper bound is significant because it reveals the inherent relationship between blocking probability and network hardware cost, by which a desirable tradeoff can be made between them under various degrees of crosstalk constraint. The upper bound also shows how crosstalk adds a new dimension to the theory of switching systems.

  • On NoC Bandwidth Sharing for the Optimization of Area Cost and Test Application Time

    Fawnizu Azmadi HUSSIN  Tomokazu YONEDA  Hideo FUJIWARA  

     
    PAPER-Dependable Computing

      Page(s):
    1999-2007

    Current NoC test scheduling methodologies in the literature are based on a dedicated path approach; a physical path through the NoC routers and interconnects is allocated for the transportation of test data from an external tester to a single core during the whole duration of the core test. This approach unnecessarily limits test concurrency of the embedded cores because a physical channel bandwidth is typically larger than the scan rate of any core under test. We propose a bandwidth sharing approach that divides the physical channel bandwidth into multiple smaller virtual channel bandwidths. The test scheduling is performed with the objective of co-optimizing the wrapper area cost and the resulting test application time using two complementary NoC wrappers. Experimental results show that the area overhead can be optimized (to an extent) without compromising the test application time. Compared to other NoC scheduling approaches based on dedicated paths, our bandwidth sharing approach can reduce the test application time by up to 75.4%.

  • NoC-Compatible Wrapper Design and Optimization under Channel-Bandwidth and Test-Time Constraints

    Fawnizu Azmadi HUSSIN  Tomokazu YONEDA  Hideo FUJIWARA  

     
    PAPER-Dependable Computing

      Page(s):
    2008-2017

    The IEEE 1500 standard wrapper requires that its inputs and outputs be interfaced directly to the chip's primary inputs and outputs for controllability and observability. This is typically achieved by providing a dedicated Test Access Mechanism (TAM) between the wrapper and the primary inputs and outputs. However, when reusing the embedded Network-on-Chip (NoC) interconnect instead of the dedicated TAM, the standard wrapper cannot be used as is because of the packet-based transfer mechanism and other functional requirements by the NoC. In this paper, we describe two NoC-compatible wrappers, which overcome these limitations of the 1500 wrapper. The wrappers (Type 1 and Type 2) complement each other to optimize NoC bandwidth utilization while minimizing the area overhead. The Type 2 wrapper uses larger area overhead to increase bandwidth efficiency, while Type 1 takes advantage of some special configurations which may not require a complex and high-cost wrapper. Two wrapper optimization algorithms are applied to both wrapper designs under channel-bandwidth and test-time constraints, resulting in very little or no increase in the test application time compared to conventional dedicated TAM approaches.

  • Mobile 3D Game Contents Watermarking Based on Buyer-Seller Watermarking Protocol

    Seong-Geun KWON  Suk-Hwan LEE  Ki-Ryong KWON  Eung-Joo LEE  Soo-Yol OK  Sung-Ho BAE  

     
    PAPER-Application Information Security

      Page(s):
    2018-2026

    This paper presents a watermarking method for the copyright protection and the prevention of illegal copying of mobile 3D contents. The proposed method embeds the copyright information and user's phone number into the spatial and encryption domains of the mobile animation data using the Buyer-Seller watermarking protocol. In addition, a user operation key is also inserted, so only the authorized user can play the 3D animation game on the mobile device. The proposed method was implemented using a mobile animation tool, and experimental results verified that the proposed method was capable of copyright protection and preventing illegal copying, as the watermarks were also accurately extracted in the case of geometrical attacks, such as noise addition, data accuracy variation, and data up/down scaling.

  • Robust Object-Based Watermarking Using Feature Matching

    Viet-Quoc PHAM  Takashi MIYAKI  Toshihiko YAMASAKI  Kiyoharu AIZAWA  

     
    PAPER-Application Information Security

      Page(s):
    2027-2034

    We present a robust object-based watermarking algorithm using the scale-invariant feature transform (SIFT) in conjunction with a data embedding method based on Discrete Cosine Transform (DCT). The message is embedded in the DCT domain of randomly generated blocks in the selected object region. To recognize the object region after being distorted, its SIFT features are registered in advance. In the detection scheme, we extract SIFT features from the distorted image and match them with the registered ones. Then we recover the distorted object region based on the transformation parameters obtained from the matching result using SIFT, and the watermarked message can be detected. Experimental results demonstrated that our proposed algorithm is very robust to distortions such as JPEG compression, scaling, rotation, shearing, aspect ratio change, and image filtering.
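
    The DCT-domain embedding step can be illustrated generically: the sketch below hides one bit per 8 × 8 block by quantizing a mid-frequency coefficient (quantization-index-modulation style). This is not the authors' scheme and omits the SIFT registration and recovery stages; the coefficient position and quantization step are arbitrary.

```python
# Embed/extract one bit per 8x8 block via the parity of a quantized DCT coefficient.
import numpy as np
from scipy.fft import dctn, idctn

def embed_bit(block, bit, pos=(3, 4), step=24.0):
    """block: 8x8 float array; bit: 0 or 1. Returns the watermarked block."""
    coeffs = dctn(block, norm="ortho")
    c = coeffs[pos]
    q = np.round(c / step)                 # nearest quantization level
    if int(q) % 2 != bit:                  # adjust level so its parity encodes the bit
        q += 1 if c >= q * step else -1
    coeffs[pos] = q * step
    return idctn(coeffs, norm="ortho")

def extract_bit(block, pos=(3, 4), step=24.0):
    coeffs = dctn(block, norm="ortho")
    return int(np.round(coeffs[pos] / step)) % 2
```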

  • Fast Searching Algorithm for Vector Quantization Based on Subvector Technique

    ShanXue CHEN  FangWei LI  WeiLe ZHU  TianQi ZHANG  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    2035-2040

    A fast algorithm to speed up the search process of vector quantization encoding is presented. Using the sum and the partial norms of a vector, several eliminating inequalities are constructed. First, the inequality based on the sum is used to determine the search bounds for candidate codewords. Then, using an inequality based on the subvector norm and another inequality combining the partial distance with the subvector norm, more unnecessary codewords are eliminated without the full distance calculation. The proposed algorithm can reject many codewords while introducing no extra distortion compared to the conventional full search algorithm. Experimental results show that the proposed algorithm outperforms existing state-of-the-art search algorithms in reducing both the computational complexity and the number of distortion calculations.
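
    The elimination idea rests on inequalities that allow most codewords to be rejected before a full distance computation. The sketch below shows only the classical partial-distance elimination test (not the paper's sum and subvector-norm inequalities); like those tests, it returns exactly the same nearest codeword as an exhaustive full search.

```python
# Nearest-codeword search with classical partial-distance elimination (PDE).
import numpy as np

def vq_encode(x, codebook):
    """x: (D,) input vector; codebook: (N, D) array. Returns index of nearest codeword."""
    best_idx, best_dist = 0, float("inf")
    for i, c in enumerate(codebook):
        dist = 0.0
        for d in range(len(x)):
            dist += (x[d] - c[d]) ** 2
            if dist >= best_dist:          # partial distance already too large: reject
                break
        else:                              # completed the loop: new best codeword
            best_idx, best_dist = i, dist
    return best_idx
```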

  • Real-Time Tracking Error Estimation for Augmented Reality for Registration with Linecode Markers

    Zhiqiang BIAN  Hirotake ISHII  Hiroshi SHIMODA  Masanori IZUMI  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    2041-2050

    Augmented reality tasks require a highly reliable tracking method, as large tracking errors cause many problems in AR applications. Tracking error estimation should therefore be integrated with tracking methods to improve their reliability. Although some tracking error estimation methods have been developed, they are not feasible to integrate because of limitations in computational speed and accuracy. In this study, a tracking error estimation algorithm based on screen error estimation, exploiting the characteristics of linecode markers, was applied; it can estimate the tracking error rapidly. An evaluation experiment was conducted to compare the estimated tracking error with the actual measured tracking error. Results show that the algorithm is reliable and sufficiently fast to be used for real-time tracking error warning or tracking accuracy improvement methods.

  • Introducing a Translation Dictionary into Phrase-Based SMT

    Hideo OKUMA  Hirofumi YAMAMOTO  Eiichiro SUMITA  

     
    PAPER-Natural Language Processing

      Page(s):
    2051-2057

    This paper presents a method to effectively introduce a translation dictionary into phrase-based SMT. Though SMT systems can be built with only a parallel corpus, translation dictionaries are more widely available and have many more entries than parallel corpora. A simple and low-cost way to introduce a translation dictionary is to attach each dictionary entry to the phrase table. This, however, does not work well: the target word order and even whole target sentences are often incorrect. To solve this problem, the proposed method uses high-frequency words in the training corpus. The high-frequency words may already be trained well; in other words, they may appear in the phrase table and therefore be translated with correct word order. Experimental results show that the proposed method is far superior to simply attaching dictionary entries to phrase tables.

  • Efficient VLSI Design of Residue-to-Binary Converter for the Moduli Set (2^n, 2^(n+1) - 1, 2^n - 1)

    Su-Hon LIN  Ming-Hwa SHEU  Chao-Hsiang WANG  

     
    LETTER-Computer Systems

      Page(s):
    2058-2060

    The moduli set (2^n, 2^(n+1)-1, 2^n-1), which is free of a (2^n+1)-type modulus, is advantageous for constructing a high-performance residue number system (RNS). In this paper, we derive a reduced-complexity residue-to-binary conversion algorithm for the moduli set (2^n, 2^(n+1)-1, 2^n-1) using the New Chinese Remainder Theorem (CRT). The resulting converter architecture consists mainly of simple adders and multiplexers (MUXes), which makes it suitable for efficient VLSI implementation. For various dynamic range (DR) requirements, experimental results show that the proposed converter achieves at least a 23.3% average Area-Time (AT) saving compared with the latest designs. Based on UMC 0.18 µm CMOS cell-based technology, the chip area of the 16-bit residue-to-binary converter is 931 × 931 µm² and its working frequency is about 135 MHz including I/O pads.
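
    For reference, the residue-to-binary mapping for the moduli set (2^n, 2^(n+1)-1, 2^n-1) can be cross-checked with a textbook Chinese Remainder Theorem reconstruction, as sketched below. This is the plain CRT, not the reduced-complexity adder/MUX formulation derived in the paper; n = 4 is an arbitrary example.

```python
# Textbook CRT reconstruction for the moduli set (2^n, 2^(n+1)-1, 2^n-1).
def crt_reconstruct(residues, moduli):
    """residues[i] = X mod moduli[i]; moduli must be pairwise coprime."""
    M = 1
    for m in moduli:
        M *= m                                   # dynamic range
    x = 0
    for r, m in zip(residues, moduli):
        Mi = M // m
        x = (x + r * Mi * pow(Mi, -1, m)) % M    # pow(..., -1, m): modular inverse
    return x

n = 4
moduli = (2 ** n, 2 ** (n + 1) - 1, 2 ** n - 1)  # (16, 31, 15), pairwise coprime
X = 2008
residues = tuple(X % m for m in moduli)
assert crt_reconstruct(residues, moduli) == X
```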

  • Indexing of Continuously Moving Objects on Road Networks

    Kyoung Soo BOK  Ho Won YOON  Dong Min SEO  Myoung Ho KIM  Jae Soo YOO  

     
    LETTER-Database

      Page(s):
    2061-2064

    In this paper, a new access method is proposed for indexing the current positions of moving objects on road networks so that their positions can be updated efficiently. In existing index structures, the connectivity of edges is lost because intersection points at which three or more edges meet are split. The proposed index structure preserves network connectivity by using an intersection-oriented network model that does not split intersection nodes where three or more edges meet, thereby preserving the connectivity of adjacent road segments. Each data node stores not only the positions of moving objects but also the network connectivity.

  • Fuzzy Adaptive Partitioning Method for the Statistical Filtering

    Sang Ryul KIM  Hae Young LEE  Tae Ho CHO  

     
    LETTER-Networks

      Page(s):
    2065-2067

    This paper presents a fuzzy partitioning method that adaptively divides a global key pool into multiple partitions using fuzzy logic in statistical filtering-based sensor networks. Compared to the original statistical filtering scheme, the proposed method is more resilient against node compromise.

  • Reversible Watermarking Method for JPEG Images

    Akira SHIOZAKI  Motoi IWATA  Akio OGIHARA  

     
    LETTER-Application Information Security

      Page(s):
    2068-2071

    In this letter, we propose a simple reversible watermarking method for JPEG images that uses the characteristics of JPEG compression. The method embeds a watermark into a JPEG image; it can then extract the watermark from the watermarked JPEG image and at the same time recover the original unwatermarked JPEG image. Moreover, we investigate the number of embeddable blocks, the quality of watermarked images, and the increase in file size caused by embedding a watermark.

  • Intelligent Extraction of a Digital Watermark from a Distorted Image

    Asifullah KHAN  Syed Fahad TAHIR  Tae-Sun CHOI  

     
    LETTER-Application Information Security

      Page(s):
    2072-2075

    We present a novel approach to developing Machine Learning (ML) based decoding models for extracting a watermark in the presence of attacks. Statistical characterization of the components of various frequency bands is exploited to allow blind extraction of the watermark. Experimental results show that the proposed ML based decoding scheme can adapt to suit the watermark application by learning the alterations in the feature space incurred by the attack employed.

  • Executable Code Recognition in Network Flows Using Instruction Transition Probabilities

    Ikkyun KIM  Koohong KANG  Yangseo CHOI  Daewon KIM  Jintae OH  Jongsoo JANG  Kijun HAN  

     
    LETTER-Application Information Security

      Page(s):
    2076-2078

    The ability to quickly determine whether the content of a network flow is executable is a prerequisite for malware detection. For this purpose, we introduce an instruction transition probability matrix (ITPX), built over the IA-32 instruction set, that reveals the characteristic instruction transition patterns of executable code. We then propose a simple algorithm to detect executable code inside network flows using a reference ITPX learned from known Windows Portable Executable files. We have tested the algorithm with thousands of executable and non-executable code samples. The results show that it is promising enough for real-world use.
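
    The core idea, scoring a stream by how well its instruction transitions match a transition-probability matrix learned from known executables, can be illustrated with a generic first-order Markov model. The opcode sequences, smoothing constant, and probability floor below are placeholders, and the disassembly step itself is omitted.

```python
# Generic first-order transition-probability scoring of opcode sequences
# (an illustrative stand-in for the ITPX idea; no real disassembler included).
import math
from collections import defaultdict

def train_transition_matrix(sequences, alpha=1.0):
    """sequences: iterable of opcode-mnemonic lists taken from known executables."""
    counts = defaultdict(lambda: defaultdict(float))
    for seq in sequences:
        for a, b in zip(seq, seq[1:]):
            counts[a][b] += 1.0
    probs = {}
    for a, row in counts.items():
        total = sum(row.values()) + alpha * len(row)      # additive smoothing
        probs[a] = {b: (c + alpha) / total for b, c in row.items()}
    return probs

def avg_log_likelihood(seq, probs, floor=1e-6):
    """Higher (less negative) values suggest executable-like transition patterns."""
    ll = sum(math.log(probs.get(a, {}).get(b, floor)) for a, b in zip(seq, seq[1:]))
    return ll / max(1, len(seq) - 1)

reference = train_transition_matrix([["push", "mov", "call", "add", "ret"]] * 10)
print(avg_log_likelihood(["push", "mov", "call", "ret"], reference))
```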

  • Fast Fine Granularity Scalability Decoding Scheme for Low-Delay Scalable Video Coding Applications

    Nae-Ri PARK  Joo-Hee MOON  Jong-Ki HAN  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    2079-2082

    The Fine Grain Scalability (FGS) technique used in the SVC codec encodes and decodes the quantization error of the QBL (quality base layer) along a cyclic scanning path. The FGS technique provides the scalability property to the compressed bit stream. However, the cyclic scanning procedure of the FGS method may require substantial computing time. In this paper, we propose a fast FGS decoding scheme that has lower decoding complexity without sacrificing image quality.

  • A Novel Hardware Architecture of Intra-Predictor Generator for H.264/AVC Codec

    Sanghoon KWAK  Jinwook KIM  Dongsoo HAR  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    2083-2086

    The intra-prediction unit is an essential part of H.264 codec, since it reduces the amount of data to be encoded by predicting pixel values (luminance and chrominance) from their neighboring blocks. A dedicated hardware implementation for the intra-prediction unit is required for real-time encoding and decoding of high resolution video data. To develop a cost-effective intra-prediction unit this paper proposes a novel architecture of intra-predictor generator, the core part of intra-prediction unit. The proposed intra-predictor generator enables the intra-prediction unit to achieve significant clock cycle reduction with approximately the same gate count, as compared to Huang's work [3].

  • Adaptively Combining Local with Global Information for Natural Scenes Categorization

    Shuoyan LIU  De XU  Xu YANG  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    2087-2090

    This paper proposes the Extended Bag-of-Visterms (EBOV) to represent semantic scenes. In previous methods, most representations are bag-of-visterms (BOV), where visterms refer to quantized local texture information. Our new representation is built by introducing global texture information to extend the standard bag-of-visterms. In particular, we apply adaptive weights to fuse the local and global information together in order to provide a better visterm representation. Given these representations, scene classification can be performed by the pLSA (probabilistic Latent Semantic Analysis) model. The experimental results show that the appropriate use of global information improves the performance of scene classification, compared with the BOV representation, which takes only local information into account.
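
    The underlying bag-of-visterms representation (quantizing local descriptors against a learned codebook and histogramming the assignments) can be sketched generically. The descriptors below are random placeholders and the vocabulary size is arbitrary; neither the adaptive local/global fusion weights nor the pLSA model are included.

```python
# Bag-of-visterms sketch: quantize local descriptors with k-means, then histogram them.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
train_descriptors = rng.normal(size=(5000, 32))   # placeholder local texture descriptors
vocab_size = 200

codebook = KMeans(n_clusters=vocab_size, n_init=4, random_state=0).fit(train_descriptors)

def bov_histogram(image_descriptors):
    """Return the L1-normalized visterm histogram of one image."""
    words = codebook.predict(image_descriptors)
    hist = np.bincount(words, minlength=vocab_size).astype(float)
    return hist / max(1.0, hist.sum())

image_vector = bov_histogram(rng.normal(size=(300, 32)))
```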

  • Color Constancy Based on Effective Regions

    Rui LU  De XU  Xinbin YANG  Bing LI  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    2091-2094

    None of the existing color constancy algorithms can be considered universal. Furthermore, they use all the image pixels, although in fact not all pixels are effective for illumination estimation. Consequently, how to select a proper color constancy algorithm from existing algorithms and how to select effective (or useful) pixels from an image are the two most important problems for color constancy in natural images. In this paper, a novel Color Constancy method using Effective Regions (CCER) is proposed, which consists of proper algorithm selection and effective region selection. For a given image, the most proper algorithm is selected according to its Weibull distribution, while its effective regions are chosen based on image similarity. The experiments show promising results compared with state-of-the-art methods.

  • Midpoint-Validation Method for Support Vector Machine Classification

    Hiroki TAMURA  Koichi TANNO  

     
    LETTER-Biocybernetics, Neurocomputing

      Page(s):
    2095-2098

    In this paper, we propose a midpoint-validation method that improves the generalization of the Support Vector Machine. The proposed method creates midpoint data, as well as a tuning adjustment parameter of the Support Vector Machine, using the midpoint data and the previous training data. We compared its performance with that of the original Support Vector Machine, the Multilayer Perceptron, and the Radial Basis Function Neural Network, and tested our proposed method on several benchmark problems. The simulation results show the effectiveness of the proposed method.