
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4

Advance publication (published online immediately after acceptance)

Volume E98-D No.3  (Publication Date:2015/03/01)

    Special Section on Foundations of Computer Science---New Spirits in Theory of Computation and Algorithm---
  • FOREWORD Open Access

    Yushi UNO  

     
    FOREWORD

      Page(s):
    485-485
  • Secure Sets and Defensive Alliances in Graphs: A Faster Algorithm and Improved Bounds

    Kazuyuki AMANO  Kyaw May OO  Yota OTACHI  Ryuhei UEHARA  

     
    PAPER

      Page(s):
    486-489

    Secure sets and defensive alliances in graphs are studied. They are sets of vertices that are safe in some sense. In this paper, we first present a fixed-parameter algorithm for finding a small secure set, whose running time is much faster than the previously known one. We then present improved bounds on the smallest sizes of defensive alliances and secure sets for hypercubes. These results settle some open problems posed recently.

  • Faster Enumeration of All Maximal Cliques in Unit Disk Graphs Using Geometric Structure

    Taisuke IZUMI  Daisuke SUZUKI  

     
    PAPER

      Page(s):
    490-496

    This paper considers the problem of enumerating all maximal cliques in unit disk graphs, which is a plausible setting for applications of finding similar data groups. Our primary interest is to develop a faster algorithm using the geometric structure of the metric space in which the input unit disk graph is embedded. Assuming that the distance between any two vertices is available, we propose a new algorithm based on two well-known algorithms called Bron-Kerbosch and Tomita-Tanaka-Takahashi. The key idea of our algorithm is to find a good pivot quickly using geometric proximity. We validate the practical impact of our algorithm via experimental evaluations.
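
    A minimal sketch of maximal-clique enumeration with pivoting, the Bron-Kerbosch family the paper builds on. The degree-based pivot below is a placeholder for the authors' geometric-proximity pivot, and the adjacency-dict representation is an assumption, not from the paper.

```python
# Bron-Kerbosch with pivoting: enumerate all maximal cliques of a graph
# given as an adjacency dict {vertex: set of neighbors}.
def bron_kerbosch_pivot(adj, R=None, P=None, X=None, out=None):
    if R is None:
        R, P, X, out = set(), set(adj), set(), []
    if not P and not X:
        out.append(set(R))          # R is a maximal clique
        return out
    # Pivot choice: vertex of P|X with the most neighbors in P. The paper's
    # contribution is finding a good pivot quickly via geometric proximity.
    pivot = max(P | X, key=lambda u: len(adj[u] & P))
    for v in list(P - adj[pivot]):  # skip vertices covered by the pivot
        bron_kerbosch_pivot(adj, R | {v}, P & adj[v], X & adj[v], out)
        P.remove(v)
        X.add(v)
    return out

# Toy graph: a triangle {0, 1, 2} with a pendant vertex 3 attached to 2.
adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3}, 3: {2}}
print(bron_kerbosch_pivot(adj))     # [{0, 1, 2}, {2, 3}]
```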

  • The Biclique Cover Problem and the Modified Galois Lattice

    Hideaki OTSUKI  Tomio HIRATA  

     
    PAPER

      Page(s):
    497-502

    The minimum biclique cover problem is known to be NP-hard for general bipartite graphs. It can be solved in polynomial time for C4-free bipartite graphs, bipartite distance hereditary graphs and bipartite domino-free graphs. In this paper, we define the modified Galois lattice Gm(B) for a bipartite graph B and introduce the redundant parameter R(B). We show that R(B)=0 if and only if B is domino-free. Furthermore, for an input graph such that R(B)=1, we show that the minimum biclique cover problem can be solved in polynomial time.

  • Some Reduction Procedure for Computing Pathwidth of Undirected Graphs

    Masataka IKEDA  Hiroshi NAGAMOCHI  

     
    PAPER

      Page(s):
    503-511

    Computing an invariant of a graph such as treewidth or pathwidth is one of the fundamental problems in graph algorithms. In general, determining the pathwidth of a graph is NP-hard. In this paper, we propose several reduction methods for decreasing the instance size without changing the pathwidth, and implement them together with an exact algorithm for computing the pathwidth of graphs. Our experimental results show that our reduction methods decrease the number of vertices in all chemical graphs in the NCI database by 53.81% on average.

  • Node Query Preservation for Deterministic Linear Top-Down Tree Transducers

    Kazuki MIYAHARA  Kenji HASHIMOTO  Hiroyuki SEKI  

     
    PAPER

      Page(s):
    512-523

    This paper discusses the decidability of node query preservation problems for tree transducers. We assume a transformation given by a deterministic linear top-down data tree transducer (abbreviated as DLTV) and an n-ary query based on runs of a tree automaton. We say that a DLTV Tr strongly preserves a query Q if there is a query Q' such that for every tree t, the answer set of Q' for Tr(t) is equal to the answer set of Q for t. We also say that Tr weakly preserves Q if there is a query Q' such that for every t, the answer set of Q' for Tr(t) includes the answer set of Q for t. We show that the weak preservation problem is coNP-complete and the strong preservation problem is in 2-EXPTIME. We also show that the problems are decidable when a given transducer is a functional extended linear top-down data tree transducer with regular look-ahead, which is a more expressive transducer than DLTV.

  • Candidate Boolean Functions towards Super-Quadratic Formula Size

    Kenya UENO  

     
    PAPER

      Page(s):
    524-531

    In this paper, we explore possibilities and difficulties in proving super-quadratic formula size lower bounds, from the following aspects. First, we consider recursive Boolean functions and prove general upper bounds on their formula size. We also discuss recursive Boolean functions based on exact 2-bit functions. We show that their formula complexity is at least Ω(n^2). Hence they are candidate Boolean functions for proving super-quadratic formula size lower bounds. Next, we consider why resolving the formula complexity of the majority function is difficult, in contrast with the parity function. In particular, we discuss the structure of an optimal protocol partition for the Karchmer-Wigderson communication game.

  • A Fourier-Analytic Approach to List-Decoding for Sparse Random Linear Codes

    Akinori KAWACHI  Ikko YAMANE  

     
    PAPER

      Page(s):
    532-540

    It is widely known that decoding problems for random linear codes are computationally hard in general. Surprisingly, Kopparty and Saraf proved query-efficient list-decodability of sparse random linear codes by showing a reduction from a decoding problem for sparse random linear codes to one for the Hadamard code with a small number of queries, even under a high error rate [11]. In this paper, we give a more direct list-decoding algorithm for sparse random linear codes with a small number of queries, based on a Fourier-analytic approach.

  • Computational Complexity of Generalized Golf Solitaire

    Chuzo IWAMOTO  

     
    LETTER

      Page(s):
    541-544

    Golf is a solitaire game, where the object is to move all cards from a 5×8 rectangular layout of cards to the foundation. A top card in each column may be moved to the foundation if it is either one rank higher or lower than the top card of the foundation. If no cards may be moved, then the top card of the stock may be moved to the foundation. We prove that the generalized version of Golf Solitaire is NP-complete.
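
    A minimal sketch of the move rule described above, assuming ranks 1-13 with no wrap-around (whether ranks wrap between ace and king varies by Golf variant); the NP-completeness argument itself is not reproducible in a few lines.

```python
# Golf solitaire move rule: a column's top card may go to the foundation
# iff its rank is exactly one higher or lower than the foundation's top.
# Ranks are 1..13 with no wrap-around (a variant-dependent assumption).
def can_play(card_rank, foundation_rank):
    return abs(card_rank - foundation_rank) == 1

def playable_columns(columns, foundation_rank):
    """Indices of columns whose top card may move to the foundation."""
    return [i for i, col in enumerate(columns)
            if col and can_play(col[-1], foundation_rank)]

columns = [[5, 9], [2, 7], [4]]      # last element = top card of a column
print(playable_columns(columns, 8))  # [0, 1]: top cards 9 and 7 fit on 8
```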

  • Special Section on the Architectures, Protocols, and Applications for the Future Internet
  • FOREWORD Open Access

    Yusheng JI  

     
    FOREWORD

      Page(s):
    545-545
  • New Directions for a Japanese Academic Backbone Network Open Access

    Shigeo URUSHIDANI  Shunji ABE  Kenjiro YAMANAKA  Kento AIDA  Shigetoshi YOKOYAMA  Hiroshi YAMADA  Motonori NAKAMURA  Kensuke FUKUDA  Michihiro KOIBUCHI  Shigeki YAMADA  

     
    INVITED PAPER

      Publicized:
    2014/12/11
      Page(s):
    546-556

    This paper describes an architectural design and related services of a new Japanese academic backbone network, called SINET5, which will be launched in April 2016. The network will cover all 47 prefectures with 100-Gigabit Ethernet technology and connect each pair of prefectures with a minimized latency. This will enable users to leverage evolving cloud-computing powers as well as draw on a high-performance platform for data-intensive applications. The transmission layer will form a fully meshed, SDN-friendly, and reliable network. The services will evolve to be more dynamic and cloud-oriented in response to user demands. Cyber-security measures for the backbone network and tools for performance acceleration and visualization are also discussed.

  • Local Tree Hunting: Finding Closest Contents from In-Network Cache

    Hiroshi SHIMIZU  Hitoshi ASAEDA  Masahiro JIBIKI  Nozomu NISHINAGA  

     
    PAPER-Internet Architecture and Protocols

      Publicized:
    2014/12/11
      Page(s):
    557-564

    How to retrieve the closest content from an in-network cache is one of the most important issues in Information-Centric Networking (ICN). This paper proposes a novel content discovery scheme called Local Tree Hunting (LTH). By adding branch-cast functionality to the local tree that carries content requests toward a Content-Centric Network (CCN) response node, the discovery area for caching nodes expands. Since the location of the branch-casting node moves closer to the requesting node as the content becomes more widely cached, the discovery range, i.e., the branch size of the local tree, becomes smaller. Thus, the discovery area is autonomously adjusted depending on content dissemination. With this feature, LTH is able to find the “almost true closest” caching node without checking all the caching nodes in the in-network cache. A performance analysis employing a Zipf's-law content distribution model and the Least Recently Used eviction rule shows the superiority of LTH in identifying the almost exact closest cache.

  • Adaptive TTL Control to Minimize Resource Cost in Hierarchical Caching Networks

    Satoshi IMAI  Kenji LEIBNITZ  Masayuki MURATA  

     
    PAPER-Internet Architecture and Protocols

      Publicized:
    2014/12/11
      Page(s):
    565-577

    Content caching networks like Information-Centric Networking (ICN) are beneficial for reducing network traffic by storing content data on routers near users. In ICN, it becomes an important issue to manage system resources, such as storage and network bandwidth, which are influenced by the cache characteristics of each cache node. Meanwhile, cache aging techniques based on the Time-To-Live (TTL) of content facilitate analyzing cache characteristics and can realize appropriate resource management by setting efficient TTLs. However, it is difficult to find efficient TTLs in a distributed cache system composed of multiple interconnected cache nodes. Therefore, we propose an adaptive control mechanism for the TTL value of content in distributed cache systems, using predictive models that estimate the impact of TTL values on network resources and cache performance. Furthermore, we show the effectiveness of the proposed mechanism.
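
    A minimal sketch of the TTL-based cache aging that the proposed controller tunes, assuming a fixed TTL per cache; the paper's predictive models, which would adjust the TTL adaptively, are not reproduced here.

```python
import time

# TTL-based cache aging: an entry silently expires once its TTL elapses.
# An adaptive controller in the spirit of the paper would adjust `ttl`
# from predictive models; here it is a fixed constructor argument.
class TTLCache:
    def __init__(self, ttl):
        self.ttl = ttl
        self.store = {}            # content name -> (data, expiry time)

    def put(self, name, data):
        self.store[name] = (data, time.monotonic() + self.ttl)

    def get(self, name):
        entry = self.store.get(name)
        if entry is None:
            return None            # miss: never cached
        data, expiry = entry
        if time.monotonic() >= expiry:
            del self.store[name]   # aged out: evict and report a miss
            return None
        return data

cache = TTLCache(ttl=0.05)
cache.put("/video/seg1", b"...")
print(cache.get("/video/seg1") is not None)  # True: still fresh
time.sleep(0.06)
print(cache.get("/video/seg1"))              # None: expired
```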

  • Improved Resilience through Extended KVS-Based Messaging System

    Masafumi KINOSHITA  Osamu TAKADA  Izumi MIZUTANI  Takafumi KOIKE  Kenji LEIBNITZ  Masayuki MURATA  

     
    PAPER-Internet Operation and Management

      Publicized:
    2014/12/11
      Page(s):
    578-587

    In the big data era, messaging systems are required to process large volumes of message traffic with high scalability and availability. However, conventional systems have two issues regarding availability. The first is that failover processing itself has a risk of failure. The second is the trade-off between consistency and availability. We propose a resilient messaging system based on a distributed in-memory key-value store (KVS). Its servers are interconnected with each other, and messages are distributed to multiple servers in the normal processing state. This architecture can continue messaging services regardless of where in the system server or process failures occur, without resorting to failover processing. Furthermore, we propose two methods for improved resilience: a round-robin method with slowdown-KVS exclusion, and two logical KVS counter-rotating rings, to provide short-term availability in the messaging system. Evaluation results demonstrate that the proposed system can continue service without failover processing. Compared with the conventional method, our proposed distribution method reduced error responses to clients caused by server failures by 92%.
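
    A minimal sketch of the first proposed method, round-robin distribution with exclusion of slowed-down KVS servers, under assumed server names and an externally supplied health signal; the counter-rotating-ring method is not shown.

```python
# Round-robin message distribution that skips KVS servers currently
# marked as slowed down, so no failover step is needed when a node
# degrades: traffic simply flows around it.
class RoundRobinDistributor:
    def __init__(self, servers):
        self.servers = servers
        self.slow = set()          # servers excluded from the rotation
        self._idx = 0

    def mark_slow(self, server):
        self.slow.add(server)

    def mark_healthy(self, server):
        self.slow.discard(server)

    def next_server(self):
        for _ in range(len(self.servers)):
            server = self.servers[self._idx % len(self.servers)]
            self._idx += 1
            if server not in self.slow:
                return server
        raise RuntimeError("no healthy KVS server available")

d = RoundRobinDistributor(["kvs1", "kvs2", "kvs3"])
d.mark_slow("kvs2")
print([d.next_server() for _ in range(4)])  # ['kvs1', 'kvs3', 'kvs1', 'kvs3']
```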

  • Detecting Anomalies in Massive Traffic Streams Based on S-Transform Analysis of Summarized Traffic Entropies

    Sirikarn PUKKAWANNA  Hiroaki HAZEYAMA  Youki KADOBAYASHI  Suguru YAMAGUCHI  

     
    PAPER-Internet Operation and Management

      Publicized:
    2014/12/11
      Page(s):
    588-595

    Detecting traffic anomalies is an indispensable component of an overall security architecture. As the Internet and traffic data with more sophisticated attacks grow exponentially, signature-based traffic analyzers and analyzers that do not support massive traffic are no longer sufficient to preserve security. In this paper, we propose a novel method, based on a combined sketch technique and S-transform analysis, for detecting anomalies in massive traffic streams. The method does not require any prior knowledge such as attack patterns or models representing normal traffic behavior. To detect anomalies, we summarize the entropy of traffic data over time and maintain the summarized data in sketches. The entropy fluctuation of the traffic data aggregated into the same bucket is observed by the S-transform to detect spectral changes, referred to as anomalies in this work. We evaluated the performance of the method on real-world backbone traffic collected on a United States-Japan transit link, in terms of both accuracy and false positive rate. We also explored the influence of the method's parameters on detection performance. Furthermore, we compared the performance of our method to S-transform-based and Wavelet-based methods. The results demonstrated that our method was capable of detecting anomalies and outperformed both methods. We also found that our method was not sensitive to its parameter settings.
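
    A minimal sketch of the entropy-summarization step only, assuming per-window lists of some traffic key (e.g. source address); the sketch (hash-bucket) summarization and the S-transform stage are omitted.

```python
from collections import Counter
from math import log2

# Shannon entropy of a traffic feature within one time window. A sudden
# entropy change across windows is the kind of spectral event the
# S-transform stage would flag; sketch summarization is omitted here.
def window_entropy(keys):
    counts = Counter(keys)
    total = len(keys)
    return -sum((c / total) * log2(c / total) for c in counts.values())

windows = [
    ["10.0.0.1", "10.0.0.2", "10.0.0.3", "10.0.0.4"],  # diverse traffic
    ["10.0.0.9", "10.0.0.9", "10.0.0.9", "10.0.0.1"],  # one source dominates
]
print([round(window_entropy(w), 2) for w in windows])  # [2.0, 0.81]
```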

  • Dual-Band Sensor Network for Accurate Device-Free Localization in Indoor Environment with WiFi Interference

    Manyi WANG  Zhonglei WANG  Enjie DING  Yun YANG  

     
    PAPER-Network Computing and Applications

      Publicized:
    2014/12/11
      Page(s):
    596-606

    Radio-Frequency-based Device-Free Localization (RFDFL) is an emerging localization technique that does not require attaching any electronic device to a target. The target can be localized by measuring the shadowing of the received signal strength caused by the target. However, the accuracy of RFDFL deteriorates seriously in environments with WiFi interference, and state-of-the-art methods do not efficiently solve this problem. In this paper, we propose a dual-band method to improve the accuracy of RFDFL in environments with or without severe WiFi interference. We introduce an algorithm for fusing dual-band images to obtain an enhanced image that infers a more precise location, and we propose a timestamp-based synchronization method that associates the dual-band images to ensure their one-to-one correspondence. With real-world experiments, we show that our method outperforms traditional single-band localization methods and improves localization accuracy by up to 40.4% in a real indoor environment with high WiFi interference.

  • Regular Section
  • Pseudo Polynomial Time Algorithms for Optimal Longcut Route Selection

    Yuichi SUDO  Toshimitsu MASUZAWA  Gen MOTOYOSHI  Tutomu MURASE  

     
    PAPER-Fundamentals of Information Systems

      Publicized:
    2014/11/25
      Page(s):
    607-616

    Users of wireless mobile devices need Internet access not only when they stay at home or in the office, but also when they travel. It may be desirable for such users to select a "longcut route" from their current location to their destination that has a longer travel time than the shortest route but provides a better mobile wireless environment. In this paper, we formulate this situation as the optimization problem of “optimal longcut route selection”, which requires finding the best route with respect to the wireless environment subject to a travel time constraint. For this new problem, we show NP-hardness, propose two pseudo-polynomial time algorithms, and evaluate the algorithms experimentally.
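
    A minimal sketch of one way a pseudo-polynomial dynamic program for this kind of problem can look, assuming integer travel times and an additive per-edge wireless-quality score; the authors' two algorithms are not reproduced here.

```python
# DP over (node, elapsed travel time): best total wireless quality
# achievable at each state. Pseudo-polynomial because the table grows
# with the numeric time budget T, not just with the graph size.
def best_longcut(n, edges, src, dst, T):
    NEG = float("-inf")
    best = [[NEG] * (T + 1) for _ in range(n)]
    best[src][0] = 0.0
    for t in range(T + 1):            # travel times are positive integers,
        for u in range(n):            # so a single pass over t suffices
            if best[u][t] == NEG:
                continue
            for v, time_uv, quality_uv in edges.get(u, []):
                nt = t + time_uv
                if nt <= T and best[u][t] + quality_uv > best[v][nt]:
                    best[v][nt] = best[u][t] + quality_uv
    return max(best[dst])             # best quality within the time budget

# Edge format: u -> [(v, travel_time, wireless_quality), ...]
edges = {0: [(1, 2, 1.0), (2, 1, 0.2)], 1: [(3, 2, 1.0)], 2: [(3, 1, 0.2)]}
print(best_longcut(4, edges, src=0, dst=3, T=2))  # 0.4: only the short route fits
print(best_longcut(4, edges, src=0, dst=3, T=4))  # 2.0: the longcut route wins
```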

  • A Scenario-Based Reliability Analysis Approach for Component-Based Software

    Chunyan HOU  Chen CHEN  Jinsong WANG  Kai SHI  

     
    PAPER-Software Engineering

      Publicized:
    2014/12/04
      Page(s):
    617-626

    With the rise of component-based software development, its reliability has attracted much attention from both the academic and industrial communities. Component-based software development focuses on architecture design, and thus it is important for reliability analysis to emphasize software architecture. Existing approaches to architecture-based software reliability analysis do not model the usage profile explicitly, and they ignore the difference between the testing profile and the practical profile of components, which limits their applicability and accuracy. In response to these issues, a new reliability modeling and prediction approach is introduced. The approach considers reliability-related architecture factors by explicitly modeling the system usage profile, and transforms the testing profile into the practical usage profile of components by representing the profile with input sub-domains. Finally, an evaluation experiment shows the potential of the approach.

  • Method Verb Recommendation Using Association Rule Mining in a Set of Existing Projects

    Yuki KASHIWABARA  Takashi ISHIO  Hideaki HATA  Katsuro INOUE  

     
    PAPER-Software Engineering

      Publicized:
    2014/12/16
      Page(s):
    627-636

    It is well known that program readability is important for maintenance tasks. Method names are important identifiers for program readability because they are used to understand the behavior of methods without reading a part of the program. Although developers can create a method name by arbitrarily choosing a verb and objects, the names are expected to represent the behavior consistently. However, it is not easy for developers to choose verbs and objects consistently, since each developer may have a different notion of a suitable lexicon for method names. In this paper, we propose a technique to recommend candidate verbs for a method name so that developers can use various verbs consistently. We recommend candidate verbs that are likely to be used as a part of a method name, using association rules extracted from existing methods. To evaluate our technique, we extracted rules from 445 open source projects written in Java and confirmed the accuracy of our approach by applying the extracted rules to several open source applications. As a result, for 84.9% of the methods considered in four projects, the existing verb was among the recommended candidates. Moreover, for 73.2% of the methods actually renamed in six projects, the correct verb was among the recommendations.
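
    A minimal sketch of association-rule-style verb recommendation, assuming rules of the form "method attributes → verb" with the attribute pair (return type, object noun) chosen for illustration; the authors' mining over 445 projects is far richer.

```python
from collections import Counter, defaultdict

# Mine rules "method attributes -> verb" from existing methods, then rank
# candidate verbs for a new method by rule confidence. The attribute pair
# (return type, object noun) is an illustrative assumption.
def mine_rules(methods):
    rules = defaultdict(Counter)       # attributes -> verb frequencies
    for verb, attributes in methods:
        rules[attributes][verb] += 1
    return rules

def recommend(rules, attributes, k=3):
    verbs = rules.get(attributes, Counter())
    total = sum(verbs.values())
    return [(v, c / total) for v, c in verbs.most_common(k)]

existing = [
    ("get",   ("String", "name")),
    ("get",   ("String", "name")),
    ("fetch", ("String", "name")),
    ("set",   ("void",   "name")),
]
rules = mine_rules(existing)
print(recommend(rules, ("String", "name")))
# [('get', 0.666...), ('fetch', 0.333...)]: 'get' is the consistent choice
```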

  • A Quantitative Model for Evaluating the Efficiency of Proactive and Reactive Security Countermeasures

    Yoon-Ho CHOI  Han-You JEONG  Seung-Woo SEO  

     
    PAPER-Information Network

      Page(s):
    637-648

    During the investment process for enhancing the level of IT security, organizations typically rely on two kinds of security countermeasures, i.e., proactive security countermeasures (PSCs) and reactive security countermeasures (RSCs). The PSCs are known to prevent security incidents before their occurrence, while the RSCs identify security incidents and recover the damaged hardware and software during or after their occurrence. Some researchers studied the effect of the integration of PSCs and RSCs, and showed that the integration can control unwanted incidents better than a single type of security countermeasure. However, the studies were made mostly in a qualitative manner, not in a quantitative manner. In this paper, we focus on deriving a quantitative model that analyzes the influence of different conditions on the efficiency of the integrated security countermeasures. Using the proposed model, we analyze for the first time how vulnerability and the potential exploits resulting from such vulnerability can affect the efficiency of the integrated security countermeasures; furthermore, we analytically verify that as the efficiency of PSCs increases, the burden of RSCs decreases, and vice versa. Also, we describe how to select possibly optimal configurations of the integrated security countermeasures.

  • A Novel Statistical Approach to Detect Card Frauds Using Transaction Patterns

    Chae Chang LEE  Ji Won YOON  

     
    PAPER-Information Network

      Page(s):
    649-660

    In this paper, we present new methods for learning the individual patterns of a card user's transaction amount and the region in which he or she uses the card, for a given period, and for determining whether the specified transaction is allowable in accordance with these learned user transaction patterns. Then, we classify legitimate transactions and fraudulent transactions by setting thresholds based on the learned individual patterns.
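
    A minimal sketch of the threshold idea on the amount pattern alone, assuming a per-user Gaussian model and a 3-sigma threshold; the authors' actual statistical method, and the region pattern, are not reproduced.

```python
from statistics import mean, stdev

# Learn a per-user transaction-amount pattern and flag transactions that
# deviate from it by more than k standard deviations. The Gaussian model
# and k=3 threshold are illustrative assumptions; the paper also learns
# a regional usage pattern.
def learn_pattern(amounts):
    return mean(amounts), stdev(amounts)

def is_suspicious(amount, pattern, k=3.0):
    mu, sigma = pattern
    return abs(amount - mu) > k * sigma

history = [42.0, 55.0, 38.0, 61.0, 47.0, 52.0]   # past legitimate amounts
pattern = learn_pattern(history)
print(is_suspicious(58.0, pattern))    # False: consistent with the pattern
print(is_suspicious(900.0, pattern))   # True: far outside the pattern
```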

  • The Case for Network Coding for Collective Communication on HPC Interconnection Networks Open Access

    Ahmed SHALABY  Ikki FUJIWARA  Michihiro KOIBUCHI  

     
    PAPER-Information Network

      Publicized:
    2014/12/11
      Page(s):
    661-670

    Network bandwidth has recently become a performance concern, particularly for collective communication, since the bisection bandwidths of supercomputers have become far less than their full bisection bandwidths. In this context, we propose the use of a network coding technique to reduce the number of unicasts and the size of data transferred in latency-sensitive collective communications in supercomputers. Our proposed network coding scheme has a hierarchical multicasting structure with intra-group and inter-group unicasts. Quantitative analysis shows that the aggregate path hop counts under our hierarchical network coding decrease by as much as 94% compared to conventional unicast-based multicasts. We validate these results by cycle-accurate network simulations. In 1,024-switch networks, the scheme reduces the execution time of collective communications by as much as 70%. We also show that our hierarchical network coding is beneficial for any packet size.
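
    A minimal sketch of the XOR coding primitive that underlies such schemes: one coded transmission replaces two unicasts, and each receiver decodes using a packet it already holds. The hierarchical multicast structure itself is not reproduced.

```python
# XOR network coding: a node combines two equal-length packets into one
# transmission; a receiver holding either original recovers the other.
def xor_packets(a: bytes, b: bytes) -> bytes:
    assert len(a) == len(b), "packets must have equal length"
    return bytes(x ^ y for x, y in zip(a, b))

p1 = b"data-from-node-A"
p2 = b"data-from-node-B"
coded = xor_packets(p1, p2)    # one coded packet instead of two unicasts

# Receiver 1 already holds p2; receiver 2 already holds p1.
print(xor_packets(coded, p2) == p1)   # True: receiver 1 recovers p1
print(xor_packets(coded, p1) == p2)   # True: receiver 2 recovers p2
```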

  • Multiple Binary Codes for Fast Approximate Similarity Search

    Shinichi SHIRAKAWA  

     
    PAPER-Pattern Recognition

      Publicized:
    2014/12/11
      Page(s):
    671-680

    One of the fast approximate similarity search techniques is binary hashing, which transforms a real-valued vector into a binary code. The similarity between two binary codes is measured by their Hamming distance. In this method, a hash table is often used to undertake a constant-time similarity search. The number of accesses to the hash table, however, increases as the code length grows. In this paper, we consider a method that avoids accessing data at a large Hamming radius by using multiple binary codes. Further, we integrate the proposed approach with the existing multi-index hashing (MIH) method to accelerate similarity search in the Hamming space. We then propose a method for learning the binary hash functions for multiple binary codes. We conduct experiments on similarity search over a dataset of up to 50 million items and show that our proposed method achieves a faster similarity search than the conventional linear scan and hash table search.
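
    A minimal sketch of the multi-index hashing (MIH) idea the paper integrates with: each code is split into m substrings, each indexed in its own table, so any code within Hamming distance r < m of a query matches it exactly in at least one substring (pigeonhole). The learned hash functions and the multiple-codes extension are not shown.

```python
from collections import defaultdict

# Multi-index hashing over binary codes given as bit strings.
def build_index(codes, m):
    part = len(next(iter(codes.values()))) // m
    tables = [defaultdict(set) for _ in range(m)]
    for name, code in codes.items():
        for i in range(m):
            tables[i][code[i * part:(i + 1) * part]].add(name)
    return tables, part

def candidates(tables, part, query):
    """Union of buckets matching the query in at least one substring."""
    found = set()
    for i, table in enumerate(tables):
        found |= table[query[i * part:(i + 1) * part]]
    return found

codes = {"a": "00110101", "b": "00111111", "c": "11000000"}
tables, part = build_index(codes, m=2)
# 'a' is at Hamming distance 1 from the query, so it must share a substring.
print(candidates(tables, part, "00110111"))  # {'a', 'b'}
```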

  • Analysis of Noteworthy Issues in Illumination Processing for Face Recognition

    Min YAO  Hiroshi NAGAHASHI  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    681-691

    Face recognition under variable illumination conditions is a challenging task. A number of approaches have been developed to solve the illumination problem. In this paper, we summarize and analyze some noteworthy issues in illumination processing for face recognition by reviewing various representative approaches. These issues include a principle that associates various approaches with a commonly used reflectance model, and shared considerations such as the contribution of basic processing methods, the processing domain, the feature scale, and a common problem. We also address a more essential question: what to actually normalize. Through the discussion of these issues, we also provide suggestions on potential directions for future research. In addition, we conduct evaluation experiments on 1) the contribution of fundamental illumination correction to illumination-insensitive face recognition and 2) the comparative performance of various approaches. Experimental results show that approaches incorporating fundamental illumination correction methods are less sensitive to extreme illumination than those without them. Tan and Triggs' method (TT) using the L1 norm achieves the best results among the nine tested approaches.

  • Extraction of Blood Vessels in Retinal Images Using Resampling High-Order Background Estimation

    Sukritta PARIPURANA  Werapon CHIRACHARIT  Kosin CHAMNONGTHAI  Hideo SAITO  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2014/12/12
      Page(s):
    692-703

    In retinal blood vessel extraction through background removal, the vessels in a fundus image that appear in an area of higher illumination variance are often missing after the background is removed, because the intensity values of the vessels and the background are nearly the same. Thus, the estimated background should be robust to changes in illumination intensity. This paper proposes retinal blood vessel extraction using background estimation. The background is estimated with a weighted surface-fitting method using a high-degree polynomial. Bright pixels are treated as unwanted data and are set to zero in a weight matrix. To fit a retinal surface with a higher-degree polynomial, the fundus images are reduced in size by different scaling parameters in order to reduce the processing time and computational complexity. The estimated background is then removed from the original image. Candidate vessel pixels are extracted from the image using local threshold values. To identify the true vessel region, the candidate vessel pixels are dilated, after which the active contour without edges method is applied. The experimental results show that the proposed method is more efficient than the conventional low-pass filter and the conventional surface-fitting method. Moreover, rescaling an image down with a scaling parameter of 0.25 before background estimation provides as good a result as a non-rescaled image does; the correlation value between the non-rescaled and rescaled images is 0.99. On the DRIVE database, the proposed method achieves a sensitivity of 0.7994, a specificity of 0.9717, an accuracy of 0.9543, an area under the receiver operating characteristic (ROC) curve (AUC) of 0.9676, and a processing time of 1.8320 seconds per image.
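
    A minimal sketch of the weighted surface-fitting step, assuming a degree-2 polynomial and a binary weight matrix that zeros out bright pixels; the paper fits a higher-degree polynomial to a rescaled image.

```python
import numpy as np

# Weighted least-squares fit of a polynomial surface z(x, y) to an image;
# weight 0 on bright pixels keeps them from pulling the background up.
def fit_background(img, weights, degree=2):
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    x, y = x.ravel() / w, y.ravel() / h          # normalized coordinates
    terms = [x**i * y**j for i in range(degree + 1)
             for j in range(degree + 1 - i)]     # 1, y, y^2, x, xy, x^2
    A = np.stack(terms, axis=1)
    sw = np.sqrt(weights.ravel())
    coef, *_ = np.linalg.lstsq(A * sw[:, None], img.ravel() * sw, rcond=None)
    return (A @ coef).reshape(h, w)

# Synthetic fundus-like ramp: the fit recovers it exactly.
img = np.fromfunction(lambda y, x: 100 + 0.5 * x + 0.2 * y, (64, 64))
weights = (img < np.percentile(img, 95)).astype(float)  # drop bright pixels
background = fit_background(img, weights)
print(float(np.abs(img - background).max()) < 1e-6)     # True
```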

  • Discriminating Unknown Objects from Known Objects Using Image and Speech Information

    Yuko OZASA  Mikio NAKANO  Yasuo ARIKI  Naoto IWAHASHI  

     
    PAPER-Multimedia Pattern Processing

      Publicized:
    2014/12/16
      Page(s):
    704-711

    This paper deals with a problem where a robot identifies an object that a human asks it to bring by voice, when there is a set of objects that both the human and the robot can see. When the robot knows the requested object, it must identify it; when it does not know the object, it must say so. This paper presents a new method for discriminating unknown objects from known objects using object images and human speech. It uses a confidence measure that integrates image recognition confidences and speech recognition confidences based on logistic regression.
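
    A minimal sketch of the confidence integration, assuming illustrative fixed weights; the paper learns the weights by logistic regression on labeled known/unknown examples.

```python
from math import exp

# Integrate an image-recognition confidence and a speech-recognition
# confidence into one known/unknown decision with a logistic model:
#   P(known) = sigmoid(w0 + w1 * c_image + w2 * c_speech)
# The weights here are illustrative, not learned values from the paper.
def p_known(c_image, c_speech, w=(-6.0, 5.0, 5.0)):
    z = w[0] + w[1] * c_image + w[2] * c_speech
    return 1.0 / (1.0 + exp(-z))

print(round(p_known(0.9, 0.8), 3))  # 0.924: confident -> treat as known
print(round(p_known(0.3, 0.2), 3))  # 0.029: robot should say "I don't know"
```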

  • Split-Jaccard Distance of Hierarchical Decompositions for Software Architecture

    Ki-Seong LEE  Byung-Woo HONG  Youngmin KIM  Jaeyeop AHN  Chan-Gun LEE  

     
    LETTER-Software Engineering

      Publicized:
    2014/11/20
      Page(s):
    712-716

    Most previous approaches to comparing the results of software architecture recovery are designed to handle only flat decompositions. In this paper, we propose a novel distance called the Split-Jaccard Distance of Hierarchical Decompositions. It extends the Jaccard coefficient and incorporates the concept of splits of leaves in a hierarchical decomposition. We analyze the proposed distance and derive its properties, including its lower bound and metric space.

  • Discriminative Pronunciation Modeling Using the MPE Criterion

    Meixu SONG  Jielin PAN  Qingwei ZHAO  Yonghong YAN  

     
    LETTER-Speech and Hearing

      Publicized:
    2014/12/02
      Page(s):
    717-720

    Introducing pronunciation models into decoding has been proven to be beneficial to LVCSR. In this paper, a discriminative pronunciation modeling method is presented within the framework of Minimum Phone Error (MPE) training for HMM/GMM. In order to bring the pronunciation models into MPE training, the auxiliary function is rewritten at the word level and decomposed into two parts: one for co-training the acoustic models, and the other for discriminatively training the pronunciation models. On a Mandarin conversational telephone speech recognition task, compared to a baseline using a canonical lexicon, the discriminative pronunciation models reduced the absolute Character Error Rate (CER) by 0.7% on the LDC test set, and with acoustic model co-training, an additional 0.8% CER reduction was achieved.

  • Making Joint-Histogram-Based Weighted Median Filter Much Faster

    Hanhoon PARK  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2014/12/12
      Page(s):
    721-725

    In this letter, we propose a simple framework for accelerating a state-of-the-art histogram-based weighted median filter at no extra cost. It is based on determining the direction in which the filter processes the image, which is achieved by measuring the local feature variation of the input image. Through experiments with natural images, we verify that, depending on the input image, the filtering speed can be substantially increased by changing the filtering direction.

  • A Uniformity-Approximated Histogram Equalization Algorithm for Image Enhancement

    Pei-Chen WU  Chang Hong LIN  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2014/11/20
      Page(s):
    726-727

    In this letter, we propose a novel Uniformity-Approximated Histogram Equalization (UAHE) algorithm that enhances an image while preserving its features. First, the UAHE algorithm generates the image histogram and computes the average value of all bins as the histogram threshold. To approximate a uniform histogram, the bins greater than this threshold are clipped, and the clipped counts are averaged and uniformly assigned to the remaining bins below the threshold. The approximated uniform histogram is then used to generate the intensity transformation function for image contrast enhancement. Experimental results show that our algorithm achieves the maximum entropy as well as high feature similarity values for image contrast enhancement.
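
    A minimal sketch of the clip-and-redistribute step described above, assuming an 8-bit grayscale NumPy image; details such as how the averaged excess is assigned may differ from the authors' exact procedure.

```python
import numpy as np

# Uniformity-approximated histogram equalization: clip bins above the
# mean bin height, spread the clipped mass uniformly over the bins below
# it, then equalize with the resulting histogram's CDF.
def uahe(img):
    hist = np.bincount(img.ravel(), minlength=256).astype(float)
    thresh = hist.mean()                       # average bin count
    excess = np.sum(hist[hist > thresh] - thresh)
    low = hist < thresh
    hist[hist > thresh] = thresh               # clip the tall bins
    hist[low] += excess / max(low.sum(), 1)    # redistribute uniformly
    cdf = np.cumsum(hist)
    lut = np.round(255.0 * cdf / cdf[-1]).astype(np.uint8)
    return lut[img]

rng = np.random.default_rng(0)
img = np.clip(rng.normal(120, 15, (64, 64)), 0, 255).astype(np.uint8)
out = uahe(img)
print(img.std() < out.std())   # True: contrast is stretched
```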

  • No-Reference Blur Strength Estimation Based on Spectral Analysis of Blurred Images

    Hanhoon PARK  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2014/12/19
      Page(s):
    728-732

    In this letter, we propose a new no-reference blur estimation method in the frequency domain. It is based on computing the cumulative distribution function (CDF) of the Fourier transform spectrum of the blurred image and analyzing the relationship between its shape and the blur strength. From this analysis, we propose and evaluate six curve-shaped analytic metrics for estimating blur strength. We also employ an SVM-based learning scheme to improve the accuracy and robustness of the proposed metrics. In our experiments on Gaussian-blurred images, one of the six metrics outperformed the others, and blur standard deviations between 0 and 6 could be estimated with an average error of 0.31.
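
    A minimal sketch of a spectral CDF in the spirit described above, assuming it accumulates magnitude from low to high radial frequency (one plausible reading of the letter); the six shape metrics and the SVM stage are not reproduced.

```python
import numpy as np

# Cumulative distribution of the Fourier magnitude spectrum, accumulated
# from low to high radial frequency. Blur suppresses high frequencies,
# so a blurred image's CDF rises faster; blur strength is estimated from
# the shape of this curve.
def spectrum_cdf(img):
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w]
    r = np.hypot(y - h // 2, x - w // 2)   # distance from the DC term
    order = np.argsort(r.ravel())          # low frequencies first
    cdf = np.cumsum(mag.ravel()[order])
    return cdf / cdf[-1]

rng = np.random.default_rng(0)
sharp = rng.random((64, 64))
blurred = sharp.copy()
for _ in range(4):                         # crude separable smoothing
    blurred = (blurred + np.roll(blurred, 1, 0) + np.roll(blurred, 1, 1)) / 3
print(spectrum_cdf(blurred)[200] > spectrum_cdf(sharp)[200])   # True
```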

  • An Efficient Filtering Method for Scalable Face Image Retrieval

    Deokmin HAAM  Hyeon-Gyu KIM  Myoung-Ho KIM  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2014/12/11
      Page(s):
    733-736

    This paper presents a filtering method for efficient face image retrieval over large face databases. The proposed method employs a new face image descriptor called a cell-orientation vector (COV), which has a simple form: a 72-dimensional vector of integers from 0 to 8. Despite its simplicity, it achieves high accuracy and efficiency. Our experimental results show that the proposed method based on COVs outperforms a recent approach based on identity-based quantization in terms of both accuracy and efficiency.

  • Automatic Mura Detection for Display Film Using Mask Filtering in Wavelet Transform

    Jong-Seung PARK  Seung-Ho LEE  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2014/11/21
      Page(s):
    737-740

    In this letter, we present a method for automatic mura detection for display film based on efficient determination of the cut-off frequency with the DCT and mask filtering with the wavelet transform. First, the background image, including reflected light, is estimated using the DCT with an adaptive cut-off frequency. Then, the DWT is applied to the background-removed image, and a mura mask is generated by separating the low-frequency noise in the approximation coefficients. Lastly, mura is detected by applying the mura mask filtering to the detail coefficients. In a comparison by the Semu index, the results of the proposed method are superior to those of existing methods, indicating that the proposed method is highly reliable.

  • Displacement Mapping with an Augmented Patch Mesh

    Sungchul JUNG  Chang Ha LEE  

     
    LETTER-Computer Graphics

      Publicized:
    2014/11/27
      Page(s):
    741-744

    Displacement mapping has been widely used to add geometric surface details to 3D mesh models. However, it requires sufficient tessellation of the mesh if fine details are to be represented. In this paper, we propose a method for applying displacement mapping even to coarse models by using an augmented patch mesh. The patch mesh is a regularly tessellated flat square mesh that is mapped onto the target area. Our method applies displacement mapping to the patch mesh both to fit it to the original mesh and to add surface details. We generate a patch map, which stores three-dimensional displacements from the patch mesh to the original mesh, and a displacement map, which defines the new surface feature. The target area in the original mesh is then replaced with the patch mesh; the patch mesh reconstructs the original shape using the patch map, and the new surface detail is added using the displacement map. Our results show that our method conveniently adds surface features to various models. The proposed method is particularly useful when surface features change dynamically, since the original mesh is preserved and the separate patch mesh overwrites the target area at runtime.

  • Detection of S1/S2 Components with Extraction of Murmurs from Phonocardiogram

    Xingri QUAN  Jongwon SEOK  Keunsung BAE  

     
    LETTER-Biological Engineering

      Publicized:
    2014/11/25
      Page(s):
    745-748

    Simplicity is a measure that represents the visual simplicity of a signal, regardless of its amplitude and frequency variation. We propose an algorithm that can detect the major components of a heart sound by applying Gaussian regression to the smoothed simplicity profile of the heart sound signal. The weights and spreads of the Gaussians are used as features to discriminate cardiac murmurs from the major components of the heart sound signal. Experimental results show that the proposed method is very promising for robust and accurate detection of major heart sound components as well as cardiac murmurs.