
IEICE TRANSACTIONS on Information

  • Impact Factor: 0.59
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4


Volume E91-D No.11 (Publication Date: 2008/11/01)

    Special Section on Knowledge, Information and Creativity Support System
  • FOREWORD Open Access

    Masaki NAKAGAWA, Thanaruk THEERAMUNKONG
    FOREWORD
    Page(s): 2543-2544
  • Monotone Increasing Binary Similarity and Its Application to Automatic Document-Acquisition of a Category

    Izumi SUZUKI, Yoshiki MIKAMI, Ario OHSATO
    PAPER-Knowledge Acquisition
    Page(s): 2545-2551

    A technique that acquires documents in the same category as a given short text is introduced. Regarding the given text as a training document, the system marks the most similar document, or sufficiently similar documents, in the document domain (or the entire Web). The system then adds the marked documents to the training set, learns from the enlarged set, and repeats this process until no more documents are marked. Imposing a monotone increasing property on the similarity as the system learns enables it to 1) detect the correct point at which no more documents remain to be marked and 2) decide the threshold value that the classifier uses. In addition, under the condition that normalization is limited to dividing term weights by a p-norm of the weights, the linear classifier in which training documents are indexed in a binary manner is the only instance that satisfies the monotone increasing property. The feasibility of the proposed technique was confirmed through an examination of binary similarity using English and German documents randomly selected from the Web.
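
    As a rough illustration of the acquisition loop described above, the following is a minimal sketch, not the authors' implementation; the particular similarity, the p-norm normalization, and all function names are simplifying assumptions.

```python
# Hypothetical sketch of the iterative document-acquisition loop.
# Assumptions: documents are sets of terms (binary indexing), and the
# similarity divides the term overlap by p-norms of the binary weights.

def binary_similarity(train_terms, doc_terms, p=2):
    """Overlap between the accumulated training terms and a document,
    normalized by p-norms of the binary term weights."""
    overlap = len(train_terms & doc_terms)
    norm = (len(train_terms) ** (1.0 / p)) * (len(doc_terms) ** (1.0 / p))
    return overlap / norm if norm else 0.0

def acquire_category(seed_terms, corpus, threshold=0.5):
    """Mark sufficiently similar documents, add them to the training set,
    and repeat until no more documents are marked."""
    training = set(seed_terms)
    acquired = []
    remaining = [set(d) for d in corpus]
    while True:
        marked = [d for d in remaining
                  if binary_similarity(training, d) >= threshold]
        if not marked:                 # the stopping point the paper detects
            return acquired
        for d in marked:
            training |= d              # learn: union of binary-indexed terms
            acquired.append(d)
            remaining.remove(d)
```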

  • Context-Aware Users' Preference Models by Integrating Real and Supposed Situation Data

    Chihiro ONO, Yasuhiro TAKISHIMA, Yoichi MOTOMURA, Hideki ASOH, Yasuhide SHINAGAWA, Michita IMAI, Yuichiro ANZAI
    PAPER-Knowledge Acquisition
    Page(s): 2552-2559

    This paper proposes a novel approach to constructing statistical preference models for context-aware personalized applications such as recommender systems. In constructing context-aware statistical preference models, one of the most important but difficult problems is acquiring a large amount of training data in various contexts/situations. In particular, some situations require a heavy workload to set up, or to gather subjects capable of answering inquiries under those situations. Because of this difficulty, the usual practice is either to collect a small amount of data in a real situation, or to collect a large amount of data in a supposed situation, i.e., one in which the subject answers inquiries while pretending to be in the specific situation. However, both approaches have problems. With the former, the performance of the constructed preference model is likely to be poor because the amount of data is small. With the latter, the data acquired in the supposed situation may differ from data acquired in the real situation. Nevertheless, this difference has not been taken seriously in existing research. In this paper we propose methods of obtaining a better preference model by integrating a small amount of real-situation data with a large amount of supposed-situation data. The methods are evaluated using data on food preferences. The experimental results show that the precision of the preference model can be improved significantly.

  • Analysis of Eye Movements and Linguistic Boundaries in a Text for the Investigation of Japanese Reading Processes

    Akemi TERA, Kiyoaki SHIRAI, Takaya YUIZONO, Kozo SUGIYAMA
    PAPER-Knowledge Acquisition
    Page(s): 2560-2567

    In order to investigate the reading processes of Japanese language learners, we conducted an experiment recording eye movements during Japanese text reading with an eye-tracking system. We previously showed that Japanese native speakers frequently use "forward and backward jumping eye movements" [13],[14]. In this paper, we further analyze the same eye-tracking data. Our goal is to examine whether Japanese learners fix their eyes at boundaries of linguistic units such as words, phrases, or clauses when they start or end a "backward jump". We consider conventional linguistic boundaries as well as boundaries defined empirically from the entropy of an N-gram model. Another goal is to examine the relation between the entropy of the N-gram model and the depth of the syntactic structures of sentences. Our analysis shows that (1) Japanese learners often fix their eyes at linguistic boundaries, and (2) the average entropy is greatest at the fifth depth of the syntactic structures.
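
    The entropy-based boundaries mentioned above can be illustrated with a small sketch (our own simplification, not the authors' code): under an N-gram model, positions where the next-symbol entropy is high are taken as empirical boundary candidates.

```python
# Hypothetical sketch: next-character entropy under a bigram model as an
# empirical boundary indicator (high entropy after a context suggests a
# linguistic boundary).
import math
from collections import Counter, defaultdict

def next_char_entropy(corpus_text):
    """Map each character to the entropy of its next-character distribution."""
    counts = defaultdict(Counter)
    for a, b in zip(corpus_text, corpus_text[1:]):
        counts[a][b] += 1
    entropy = {}
    for c, nxt in counts.items():
        total = sum(nxt.values())
        entropy[c] = -sum((n / total) * math.log2(n / total)
                          for n in nxt.values())
    return entropy

# Positions whose preceding character has high entropy are candidate
# boundaries.
h = next_char_entropy("わたしはほんをよむ。わたしはがっこうへいく。")
```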

  • Mining Regular Patterns in Transactional Databases

    Syed Khairuzzaman TANBEER, Chowdhury Farhan AHMED, Byeong-Soo JEONG, Young-Koo LEE
    PAPER-Knowledge Discovery and Data Mining
    Page(s): 2568-2577

    The frequency of a pattern may not be a sufficient criterion for identifying meaningful patterns in a database. The temporal regularity of a pattern can be another key criterion for assessing its importance in several applications. A pattern can be said to be regular if it appears at a regular, user-defined interval in the database. Even though there have been some efforts to discover periodic patterns in time-series and sequential data, none of the existing studies provides an appropriate method for discovering patterns that occur regularly in a transactional database. Therefore, in this paper, we introduce the novel concept of mining regular patterns from transactional databases. We also devise an efficient tree-based data structure, called a Regular Pattern tree (RP-tree for short), that captures the database contents in a highly compact manner and enables a pattern-growth-based mining technique to generate the complete set of regular patterns in a database for a user-defined regularity threshold. Our performance study shows that mining regular patterns with an RP-tree is time- and memory-efficient, as well as highly scalable.
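
    The regularity criterion can be illustrated as follows; this is a hedged sketch of the occurrence-interval test implied by the abstract, not the RP-tree mining itself.

```python
# Sketch of a regularity test: a pattern is regular if the maximum interval
# between consecutive occurrences (including the start and end of the
# database) does not exceed the user-defined regularity threshold.

def is_regular(occurrence_tids, db_size, max_interval):
    """occurrence_tids: sorted ids of transactions containing the pattern."""
    if not occurrence_tids:
        return False
    gaps = [occurrence_tids[0]]                           # gap from the start
    gaps += [b - a for a, b in zip(occurrence_tids, occurrence_tids[1:])]
    gaps.append(db_size - occurrence_tids[-1])            # gap to the end
    return max(gaps) <= max_interval

# A pattern seen in transactions 2, 5, and 8 of a 10-transaction database:
print(is_regular([2, 5, 8], db_size=10, max_interval=3))  # True
```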

  • Handling Dynamic Weights in Weighted Frequent Pattern Mining

    Chowdhury Farhan AHMED, Syed Khairuzzaman TANBEER, Byeong-Soo JEONG, Young-Koo LEE
    PAPER-Knowledge Discovery and Data Mining
    Page(s): 2578-2588

    Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider the different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. In real-world scenarios, however, the weight (price or significance) of an item can vary with time. Reflecting such changes in item weight is necessary in several mining applications, such as retail market data analysis and web click-stream analysis. In this paper, we introduce the concept of a dynamic weight for each item, and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can handle situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern-growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, so it is eligible for stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.
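
    As a toy illustration of what a dynamic weight changes, here is a hedged sketch (the exact DWFPM measure may differ): the same item contributes a different weight in each transaction.

```python
# Hypothetical sketch of a dynamically weighted support: each transaction
# records the item's weight (e.g., price) at that time, and a pattern's
# weighted support averages its items' weights over supporting transactions.

def dynamic_weighted_support(pattern, transactions):
    """transactions: list of dicts mapping item -> weight at that moment."""
    support = 0.0
    for t in transactions:
        if all(item in t for item in pattern):
            support += sum(t[item] for item in pattern) / len(pattern)
    return support

db = [{"a": 1.0, "b": 0.5},            # weights of "a" and "b" vary over time
      {"a": 1.2, "b": 0.7, "c": 0.3},
      {"b": 0.6}]
print(dynamic_weighted_support({"a", "b"}, db))   # 0.75 + 0.95 = 1.7
```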

  • Entity Network Prediction Using Multitype Topic Models

    Hitohiro SHIOZAKI, Koji EGUCHI, Takenao OHKAWA
    PAPER-Knowledge Discovery and Data Mining
    Page(s): 2589-2598

    Conveying information about who, what, when, and where is a primary purpose of some genres of documents, typically news articles. Statistical models that capture dependencies between named entities and topics can play an important role in handling such information. Although relationships between who and where should be mentioned in such documents, no statistical topic model has explicitly addressed the textual interactions between a who-entity and a where-entity. This paper presents a statistical model that directly captures the dependencies between an arbitrary number of word types, such as who-entities, where-entities, and topics, mentioned in each document. We show that this multitype topic model performs better at making predictions on entity networks, in which each vertex represents an entity and each edge weight represents how closely the pair of entities at its endpoints is related, through experiments on predicting who-entities and the links between them. We also demonstrate the scale-free property of the weighted networks of entities extracted from written mentions.

  • Anchored Map: Graph Drawing Technique to Support Network Mining

    Kazuo MISUE
    PAPER-Knowledge Discovery and Data Mining
    Page(s): 2599-2606

    Because network diagrams drawn using a spring embedder are not easy to read, this paper proposes the use of "anchored maps", in which some nodes are fixed as anchors. The readability of network diagrams is discussed, anchored maps are proposed, and a method for drawing anchored maps is explained. The method uses indices to decide the order of the anchors, because that order markedly affects the readability of the network diagram. Examples demonstrating the effectiveness of anchored maps are also given.
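
    A minimal sketch of the anchored idea (our own simplification, with only barycentric attraction and none of the paper's anchor-ordering indices): anchors stay fixed on a circle while free nodes settle among them.

```python
# Hypothetical sketch of an anchored layout: anchor nodes are fixed on a
# circle and each free node iteratively moves toward the barycenter of its
# neighbors (a simplified spring model without repulsive forces).
import math
import random

def anchored_layout(nodes, edges, anchors, iters=100):
    pos = {}
    for i, a in enumerate(anchors):                 # anchors: fixed positions
        t = 2 * math.pi * i / len(anchors)
        pos[a] = (math.cos(t), math.sin(t))
    free = [n for n in nodes if n not in anchors]
    for n in free:
        pos[n] = (random.uniform(-1, 1), random.uniform(-1, 1))
    neighbors = {n: [] for n in nodes}
    for u, v in edges:
        neighbors[u].append(v)
        neighbors[v].append(u)
    for _ in range(iters):
        for n in free:                              # only free nodes move
            nbrs = neighbors[n]
            if nbrs:
                pos[n] = (sum(pos[m][0] for m in nbrs) / len(nbrs),
                          sum(pos[m][1] for m in nbrs) / len(nbrs))
    return pos
```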

  • An Evaluation System for End-User Computing Capability in a Computing Business Environment

    Chui Young YOON
    PAPER-Knowledge Representation
    Page(s): 2607-2615

    We describe an evaluation system, consisting of an evaluation model and an interpretation model, for comprehensively assessing and interpreting an end-user's computing capability. It includes four evaluation factors and eighteen items, complex indicators, an evaluation process, and a method. The model construct was verified by factor analysis and reliability analysis through a pilot test. We confirmed the applicability of the developed system by applying the model to evaluating end-users in a computing business environment and presenting the results. This work contributes to developing a practical system for evaluating an end-user's computing capability, and hence to improving the computing capability of end-users.

  • combiSQORE: A Combinative-Ontology Retrieval System for Next Generation Semantic Web Applications

    Rachanee UNGRANGSI, Chutiporn ANUTARIYA, Vilas WUWONGSE
    PAPER-Knowledge Representation
    Page(s): 2616-2625

    In order to respond to user queries in a timely manner at run-time, next-generation Semantic Web applications demand a robust mechanism for dynamically selecting one or more existing ontologies available on the Web and combining them automatically if needed. Although existing ontology retrieval systems return a lengthy list of resultant ontologies, they can neither identify which ones completely meet the query requirements nor determine a minimum set of ontologies that can jointly satisfy the requirements when no single ontology suffices. Therefore, this paper presents an ontology retrieval system, named combiSQORE, which can return single or combinative ontologies that completely satisfy a submitted query whenever the available ontology database is adequate to answer that query. In addition, the proposed system ranks the returned results based on their semantic similarity to the given query and their modification (integration) costs. The experimental results show that the combiSQORE system yields practical combinative ontologies and useful rankings.

  • Novel Topic Maps to RDF/RDF Schema Translation Method

    Shinae SHIN, Dongwon JEONG, Doo-Kwon BAIK
    PAPER-Knowledge Representation
    Page(s): 2626-2637

    We propose an enhanced method for translating Topic Maps to RDF/RDF Schema in order to realize the Semantic Web. A critical issue for the Semantic Web is to efficiently and precisely describe Web information resources, i.e., Web metadata. Two representative standards, Topic Maps and RDF, have been used for Web metadata, and RDF-based standardization and implementation of the Semantic Web have been actively pursued. Since the Semantic Web must accept and understand Web information resources represented with the other methods as well, Topic Maps-to-RDF translation has become an issue. Even though many Topic Maps-to-RDF translation methods have been devised, they still have several problems (e.g., semantic loss and complex expression). Our translation method provides an improved solution to these problems. It shows lower semantic loss than previous methods because it extracts both explicit and implicit semantics. Compared to previous methods, it also reduces the encoding complexity of the resulting RDF. In addition, in terms of reversibility, the proposed method regenerates all Topic Maps constructs of the original source when the translation is reversed.

  • Assisting Pictogram Selection with Categorized Semantics

    Heeryon CHO, Toru ISHIDA, Satoshi OYAMA, Rieko INABA, Toshiyuki TAKASAKI
    PAPER-Knowledge Applications and Intelligent User Interfaces
    Page(s): 2638-2646

    Since participants at both ends of the communication channel must share a common pictogram interpretation in order to communicate, the pictogram selection task must consider both participants' pictogram interpretations. Pictogram interpretation, however, can be ambiguous. To assist in selecting pictograms more likely to be interpreted as intended, we propose a categorical semantic relevance measure, which calculates how relevant a pictogram is to a given interpretation in terms of a given category. The proposed measure defines the similarity measurement and the probability of interpretation words using pictogram interpretations and frequencies gathered from a web survey. Moreover, the proposed measure is applied to categorized pictogram interpretations to enhance pictogram retrieval performance. The five pictogram categories used for categorizing pictogram interpretations are defined based on the five first-level classifications in the Concept Dictionary of the EDR Electronic Dictionary. Retrieval performance was compared among not-categorized interpretations, categorized interpretations, and categorized and weighted interpretations using the semantic relevance measure, and the categorized semantic relevance approaches showed more stable performance than the not-categorized approach.

  • Pen-Based Interface Using Hand Motions in the Air

    Yu SUZUKI, Kazuo MISUE, Jiro TANAKA
    PAPER-Knowledge Applications and Intelligent User Interfaces
    Page(s): 2647-2654

    A system which employs a stylus as an input device is suitable for creative activities like writing and painting. However, such a system does not always provide the user with a GUI that is easy to operate with a stylus. In addition, usability suffers when the stylus is not integrated into the system in a way that takes the features of a pen into consideration. The purpose of our research is to improve the usability of systems that use a stylus as an input device. We propose shortcut actions, interaction techniques for stylus operation that are controlled through hand motions the user makes in the air. We developed the Context Sensitive Stylus, a device that implements the shortcut actions; it consists of an accelerometer and a conventional stylus. We also developed application programs to which we applied the shortcut actions, e.g., a drawing tool and a scroll-supporting tool. Results from our evaluation of the shortcut actions indicate that users can concentrate on their work better when using the shortcut actions than when using conventional menu operations.

  • Regular Section
  • A Retargetable Compiler Based on Graph Representation for Dynamically Reconfigurable Processor Arrays

    Vasutan TUNBUNHENG, Hideharu AMANO
    PAPER-VLSI Systems
    Page(s): 2655-2665

    For developing design environments for various Dynamically Reconfigurable Processor Arrays (DRPAs), the Graph with Configuration Information (GCI) is proposed to represent the configurable resources in the target dynamically reconfigurable architecture. Functional units, constant units, registers, and routing resources can be represented in the graph, along with their configuration information. Hardware restrictions are also expressed in the graph by limiting the possible configurations at a node controlled by another node. A prototype compiler called Black-Diamond, based on the GCI, is now available for three different DRPAs. It translates a data-flow graph from a C-like front-end description, performs placement and routing using the GCI, and generates configuration data for each element of the DRPA. Evaluation results for simple applications show that Black-Diamond can generate reasonable designs for all three architectures. Other target architectures can easily be handled by expressing their architectural properties in a GCI.

  • Non-recursive Discrete Periodized Wavelet Transform Using Segment Accumulation Algorithm and Reversible Round-Off Approach

    Chin-Feng TSAI, Huan-Sheng WANG, King-Chu HUNG, Shih-Chang HSIA
    PAPER-VLSI Systems
    Page(s): 2666-2674

    Wavelet-based features, with their simplicity and high efficacy, have been used in many pattern recognition (PR) applications. These features are usually generated from the wavelet coefficients of coarse levels (i.e., high octaves) in the discrete periodized wavelet transform (DPWT). In this paper, a new 1-D non-recursive DPWT (NRDPWT) is presented for real-time high-octave decomposition. The new 1-D NRDPWT, referred to as the 1-D RRO-NRDPWT, can overcome the word-length-growth (WLG) effect through two strategies: resisting error propagation and applying a reversible round-off linear transformation (RROLT) theorem. A finite-precision performance analysis is also carried out to study word-length suppression efficiency and feature efficacy in breast lesion classification on ultrasonic images. For the realization of high-octave decomposition, a segment accumulation algorithm (SAA) is also presented. The SAA is a new folding technique that can reduce the number of multipliers and adders dramatically without increasing latency.

  • Configuration Sharing to Reduce Reconfiguration Overhead Using Static Partial Reconfiguration

    Sungjoon JUNG, Tag Gon KIM
    PAPER-Computer Systems
    Page(s): 2675-2684

    Reconfigurable architectures are one of the most promising solutions for achieving both performance and flexibility. However, the reconfiguration overhead of these architectures makes them inappropriate for repetitive reconfiguration. In this paper, we introduce a configuration sharing technique that reduces the reconfiguration overhead between similar applications using static partial reconfiguration. Whereas traditional resource sharing configures multiple temporal partitions simultaneously and employs time multiplexing, the proposed configuration sharing reconfigures a device incrementally as the application changes, and requires a back-end adaptation to reuse configurations between applications. Adopting a data-flow intermediate representation, our compiler framework extends a min-cut placer and a negotiation-based router to handle configuration sharing. The results show that the framework reduces configuration time by 20% at the expense of a 1.9% increase in computation time on average.

  • Contract Specification in Java: Classification, Characterization, and a New Marker Method

    Chien-Tsun CHEN, Yu Chin CHENG, Chin-Yun HSIEH
    PAPER-Fundamentals of Software and Theory of Programs
    Page(s): 2685-2692

    Design by Contract (DBC), which originated in the Eiffel programming language, is generally accepted as a practical method for building reliable software. Currently, however, few languages have built-in support for it. In recent years, several methods have been proposed to support DBC in Java. We compare eleven DBC tools for Java by analyzing their impact on the developer's programming activities, which we characterize by seven quality attributes identified in this paper. It is shown that each of the existing tools fails to achieve some of the quality attributes. This motivated us to develop ezContract, an open-source DBC tool for Java that achieves all seven quality attributes. ezContract integrates smoothly with the working environment. Notably, standard Java is used, and advanced IDE features that work for standard Java programs also work for contract-enabled programs, including incremental compilation, automatic refactoring, and code assist.

  • A Fully Consistent Hidden Semi-Markov Model-Based Speech Recognition System

    Keiichiro OURA, Heiga ZEN, Yoshihiko NANKAKU, Akinobu LEE, Keiichi TOKUDA
    PAPER-Speech and Hearing
    Page(s): 2693-2700

    In a hidden Markov model (HMM), state duration probabilities decrease exponentially with time, which fails to adequately represent the temporal structure of speech. One solution to this problem is to integrate state duration probability distributions explicitly into the HMM; the resulting form is known as a hidden semi-Markov model (HSMM). However, although a number of attempts to use HSMMs in speech recognition systems have been reported, they are not consistent, because various approximations were used in both training and decoding. By avoiding these approximations through a generalized forward-backward algorithm, a context-dependent duration modeling technique, and weighted finite-state transducers (WFSTs), we construct a fully consistent HSMM-based speech recognition system. In a speaker-dependent continuous speech recognition experiment, our system achieved about 9.1% relative error reduction over the corresponding HMM-based system.

  • Fast Image Mosaicing Based on Histograms

    Akihiro MORI, Seiichi UCHIDA
    PAPER-Image Processing and Video Processing
    Page(s): 2701-2708

    This paper introduces a fast image mosaicing technique that requires neither a costly search in the image domain (e.g., pixel-to-pixel correspondence search) nor iterative optimization (e.g., gradient-based or random optimization) of the geometric transformation parameters. The proposed technique is organized in two steps, both of which make full use of histograms for high computational efficiency. In the first step, a histogram of pixel feature values is used to detect pairs of pixels with the same rare feature values as candidate corresponding pixel pairs. In the second step, a histogram of transformation parameter values is used to determine the most reliable transformation parameter value. Experimental results showed that the proposed technique provides reasonable mosaicing results in most cases with very modest computation.
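
    The two histogram steps can be sketched for the simplest case of pure translation (a simplifying assumption; the paper's transformation model is more general):

```python
# Hypothetical sketch of the two-step histogram scheme, restricted to pure
# translation between two images given as {(x, y): feature_value} dicts.
from collections import Counter

def estimate_translation(img_a, img_b, rarity=2):
    # Step 1: a histogram of feature values singles out rare values; pixels
    # sharing a rare value across the two images become candidate pairs.
    hist = Counter(img_a.values()) + Counter(img_b.values())
    rare = {v for v, n in hist.items() if n <= rarity}
    pairs = [(pa, pb)
             for pa, fa in img_a.items() if fa in rare
             for pb, fb in img_b.items() if fb == fa]
    # Step 2: a histogram over the implied translations picks the most
    # frequently voted, i.e. most reliable, parameter value.
    votes = Counter((pb[0] - pa[0], pb[1] - pa[1]) for pa, pb in pairs)
    return votes.most_common(1)[0][0] if votes else None
```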

  • A Flexible Video CODEC System for Super High Resolution Video

    Takeshi YOSHITOME, Ken NAKAMURA, Jiro NAGANUMA, Yoshiyuki YASHIMA
    PAPER-Image Processing and Video Processing
    Page(s): 2709-2717

    We propose a flexible video CODEC system for super-high-resolution videos, such as 4k2k-pixel video. It uses a spatially parallel encoding approach and has sufficient scalability for the target video resolution. A video shift and padding function has been introduced to prevent image quality degradation when systems with different numbers of active lines are connected. The switchable cascade multiplexing function of our system enables various super-high resolutions to be encoded and super-high-resolution video streams to be recorded and played back on a conventional PC. A two-stage encoding method using the complexity of each divided image has been introduced to equalize the encoding quality among the multiple divided videos. System Time Clock (STC) sharing has also been implemented to absorb the disparity between channels in the times at which streams are received. These functions enable highly efficient, high-quality encoding of super-high-resolution video.

  • 3D Triangular Mesh Parameterization with Semantic Features Based on Competitive Learning Methods

    Shun MATSUI, Kota AOKI, Hiroshi NAGAHASHI
    PAPER-Computer Graphics
    Page(s): 2718-2726

    In 3D computer graphics, mesh parameterization is a key technique for digital geometry processing tasks such as morphing, shape blending, texture mapping, and re-meshing. Most previous approaches use an identical primitive domain to parameterize a mesh model. Recent work on mesh parameterization has reported more flexible and attractive methods that create direct mappings between two meshes. These mappings, called "cross-parameterizations", typically preserve semantic feature correspondences between the target meshes. This paper proposes a novel approach for parameterizing a mesh directly onto another one. The main idea of our method is to combine competitive learning and least-squares mesh techniques. It suffices to give a few semantic feature correspondences between the target meshes, even if they differ in shape or pose.

  • Continuous Range Query Processing over Moving Objects

    Yong Hun PARK, Kyoung Soo BOK, Jae Soo YOO
    LETTER-Database
    Page(s): 2727-2730

    In this paper, we propose a continuous range query processing method over moving objects. To efficiently process continuous range queries, we design a main-memory-based query index that uses less storage and significantly reduces query processing time. We show through a performance evaluation that the proposed method outperforms existing methods.

  • Full-Index-Embedding Patchwork Algorithm for Audio Watermarking

    Hyunho KANG, Koutarou YAMAGUCHI, Brian KURKOSKI, Kazuhiko YAMAGUCHI, Kingo KOBAYASHI
    LETTER-Application Information Security
    Page(s): 2731-2734

    For the digital watermarking patchwork algorithm originally proposed by Bender et al., this paper proposes two improvements applicable to audio watermarking. First, the watermark embedding strength is psychoacoustically adapted using the Bark frequency scale. Second, whereas previous approaches leave the samples that do not correspond to the data untouched, here these samples are modified to reduce the probability of misdetection, a method called full index embedding. In simulations, the combination of the two proposed methods has higher resistance to a variety of attacks than prior algorithms.
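
    For context, the basic patchwork scheme the letter improves on can be sketched as follows (plain Bender-style patchwork; the letter's Bark-scale adaptation and full index embedding are not reproduced here):

```python
# Sketch of basic patchwork watermarking: raise one key-selected set of
# samples and lower another; detection tests the difference of their means.
import random

def patchwork_embed(samples, key, delta=0.01):
    rng = random.Random(key)
    idx = rng.sample(range(len(samples)), len(samples) // 2)
    half = len(idx) // 2
    out = list(samples)
    for i in idx[:half]:
        out[i] += delta                      # set A is raised
    for i in idx[half:]:
        out[i] -= delta                      # set B is lowered
    return out

def patchwork_detect(samples, key, delta=0.01):
    rng = random.Random(key)                 # same key -> same index sets
    idx = rng.sample(range(len(samples)), len(samples) // 2)
    half = len(idx) // 2
    mean_a = sum(samples[i] for i in idx[:half]) / half
    mean_b = sum(samples[i] for i in idx[half:]) / half
    return (mean_a - mean_b) > delta         # expected gap is about 2*delta
```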

  • Japanese 45 Single Sounds Recognition Using Intraoral Shape

    Takeshi SAITOH, Ryosuke KONISHI
    LETTER-Pattern Recognition
    Page(s): 2735-2738

    This paper describes a recognition method for Japanese single sounds, for application to lip reading. Related research has investigated only five or ten sounds. In this paper, experiments were conducted on all 45 Japanese single sounds, classified into a five-vowel category, a ten-consonant category, and a 45-sound category. Using the trajectory feature, we obtained recognition rates of 94.7%, 30.9%, and 30.0%, respectively.

  • Robust Speaker Clustering Using Affinity Propagation

    Xiang ZHANG, Ping LU, Hongbin SUO, Qingwei ZHAO, Yonghong YAN
    LETTER-Speech and Hearing
    Page(s): 2739-2741

    In this letter, a recently proposed clustering algorithm named affinity propagation is introduced for the task of speaker clustering. This novel algorithm exhibits fast execution speed and finds clusters with low error. However, experiments show that the speaker purity of affinity propagation is not satisfactory. Thus, we propose a hybrid approach that combines affinity propagation with agglomerative hierarchical clustering to improve the clustering performance. Experiments show that, compared with traditional agglomerative hierarchical clustering, the hybrid method achieves better performance on the test corpora.
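
    For reference, the affinity propagation stage itself is available off the shelf; a hedged sketch with scikit-learn follows (the letter's own features and hybrid refinement are not reproduced):

```python
# Hypothetical sketch: clustering speech segments with affinity propagation,
# using stand-in random vectors in place of real per-segment features
# (e.g., averaged MFCCs).
import numpy as np
from sklearn.cluster import AffinityPropagation

segments = np.random.randn(20, 13)       # 20 segments, 13-dim feature each
ap = AffinityPropagation(damping=0.9, random_state=0).fit(segments)
print(ap.labels_)                        # hypothesized speaker id per segment
```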

  • An OFDM-Based Speech Encryption System without Residual Intelligibility

    Der-Chang TSENG, Jung-Hui CHIU
    LETTER-Speech and Hearing
    Page(s): 2742-2745

    Since an FFT-based speech encryption system retains considerable residual intelligibility, such as talk spurts and the original intonation in the encrypted speech, it is easy for eavesdroppers to deduce the information content from the encrypted speech. In this letter, we propose a new technique based on the combination of an orthogonal frequency division multiplexing (OFDM) scheme and an appropriate QAM mapping method to remove the residual intelligibility from the encrypted speech by permuting several frequency components. In addition, the proposed OFDM-based speech encryption system needs only two FFT operations instead of the four required by an FFT-based speech encryption system. Simulation results are presented to show the effectiveness of the proposed technique.

  • Utterance Verification Using Word Voiceprint Models Based on Probabilistic Distributions of Phone-Level Log-Likelihood Ratio and Phone Duration

    Suk-Bong KWON, HoiRin KIM
    LETTER-Speech and Hearing
    Page(s): 2746-2750

    This paper proposes word voiceprint models to verify the recognition results obtained from a speech recognition system. Word voiceprint models contain word-dependent information based on the distributions of phone-level log-likelihood ratios and durations. Thus, we can obtain a more reliable confidence score for a recognized word by using its word voiceprint models, which represent the characteristics of utterance verification for that word. Additionally, for obtaining a log-likelihood-ratio-based word voiceprint score, this paper proposes a new log-scale normalization function using the distribution of the phone-level log-likelihood ratio, instead of the sigmoid function widely used for phone-level log-likelihood ratios. This function serves to emphasize a misrecognized phone in a word, and this word-specific information helps achieve a more discriminative score against out-of-vocabulary words. The proposed method requires additional memory, but it achieves a relative reduction in equal error rate of 16.9% compared to the baseline system using simple phone log-likelihood ratios.

  • Histogram Equalization-Based Thresholding

    Soon Hak KWON, Hye Cheun JEONG, Suk Tae SEO, In Keun LEE, Chang Sik SON
    LETTER-Image Recognition, Computer Vision
    Page(s): 2751-2753

    The thresholding results for gray-level images depend greatly on the thresholding method applied. This letter proposes a histogram equalization-based thresholding algorithm that makes the thresholding results insensitive to the choice of thresholding method. Experimental results are presented to demonstrate the effectiveness of the proposed algorithm.
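
    The idea can be sketched in a few lines (our own reading of the abstract: equalize first, then threshold; the fixed midpoint threshold below is an assumption):

```python
# Hypothetical sketch: histogram-equalize a gray-level image, then apply a
# fixed threshold; equalization makes the result less sensitive to the
# original gray-level distribution.
import numpy as np

def equalize_then_threshold(img, t=128):
    hist = np.bincount(img.ravel(), minlength=256)
    cdf = hist.cumsum()
    lut = np.round(255.0 * (cdf - cdf.min()) / (cdf[-1] - cdf.min()))
    return lut.astype(np.uint8)[img] >= t    # binary result

img = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
mask = equalize_then_threshold(img)
```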