
IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • Cite Score

    1.4

Advance publication (published online immediately after acceptance)

Volume E90-D No.10  (Publication Date:2007/10/01)

    Special Section on Knowledge, Information and Creativity Support System
  • FOREWORD Open Access

    Masaki NAKAGAWA  Thanaruk THEERAMUNKONG  

     
    FOREWORD

      Page(s):
    1491-1492
  • Qualitative, Quantitative Evaluation of Ideas in Brain Writing Groupware

    Ujjwal NEUPANE  Motoki MIURA  Tessai HAYAMA  Susumu KUNIFUJI  

     
    PAPER

      Page(s):
    1493-1500

    The problem with traditional Brain Writing (BW) is that users are restricted from viewing all sets of ideas at one time, and from writing down more than three ideas at a time. In this research we describe a distributed experimental environment for BW that was designed to obtain better results and thus eliminate the problems of the traditional BW technique. The experimental system integrates three BW modes with mutually different features and characteristics. We conducted three different tests in this environment and compared the quality and quantity of ideas generated by three different groups. We confirmed that unrestricted input is effective in generating a large quantity of ideas, whereas limiting the number of sharable/viewable ideas shows a better tendency in some respects. However, the qualitative evaluation was inconclusive, as different functions produced varying results. The evaluation of the functions that support viewing and sharing of ideas shows that synergy is not always an advantage in generating ideas. The correlation between the number of ideas and time shows that 20 minutes is an appropriate duration for conducting BW in a distributed environment.

  • An Intrablog-Based Informal Communication Encouraging System that Seamlessly Links On-Line Communications to Off-Line Ones

    Yoshihito CHIBA  Kazushi NISHIMOTO  

     
    PAPER

      Page(s):
    1501-1508

    In this paper, we propose an intrablog-based informal communication encouraging system named "Attractiblog." It has been pointed out that daily informal communication in a shared public space plays a very important role in information sharing within an organization. In most cases, however, such communication amounts to mere chat. To make it more informative, it is necessary to feed in common and beneficial topics. Attractiblog is a system that extracts articles posted on an intrablog, taking into account who is present in the shared space, and shows them on a large display located in that space. Attractiblog thus attempts to seamlessly link on-line communication to off-line communication. We conducted user studies and confirmed that Attractiblog can achieve a natural correspondence between topics in face-to-face informal communication and issues related to the activities of an organization as reflected in its intrablog.

  • Related Word Lists Effective in Creativity Support

    Eiko YAMAMOTO  Hitoshi ISAHARA  

     
    PAPER

      Page(s):
    1509-1515

    Expansion of imagination is crucial for lively creativity. However, such expansion is sometimes difficult, and an environment that supports creativity is required. Because people attain higher creativity by using words with a thematic relation rather than words with a taxonomical relation, we tried to extract word lists in which the words are thematically related. We first extracted word lists from domain-specific documents by utilizing inclusive relations between words based on modifiee/modifier relationships in the documents. Next, from the extracted word lists, we removed those having taxonomical relations so as to obtain only word lists having thematic relations. Finally, based on the assumption that the kind of knowledge a person associates with a set of words correlates with how effective that word set is for creativity support, we examined whether the word lists direct us to informative pages on the Web, in order to verify the usefulness of the extracted lists.

  • Semi-Supervised Learning to Classify Evaluative Expressions from Labeled and Unlabeled Texts

    Yasuhiro SUZUKI  Hiroya TAKAMURA  Manabu OKUMURA  

     
    PAPER

      Page(s):
    1516-1522

    In this paper, we present a method to automatically acquire a large-scale vocabulary of evaluative expressions from a large corpus of blogs. To this end, this paper presents a semi-supervised method for classifying evaluative expressions, that is, tuples of subjects, their attributes, and evaluative words, that indicate either favorable or unfavorable opinions towards a specific subject. Our semi-supervised method can classify evaluative expressions in a corpus by their polarities, starting from a very small set of seed training examples and using contextual information from the sentences the expressions belong to. Our experimental results, with real Weblog data as our corpus, show that this bootstrapping approach can improve the accuracy of methods for classifying favorable and unfavorable opinions. We also show that a reasonable number of evaluative expressions can indeed be acquired.
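    As a rough illustration of the bootstrapping idea described above (not the authors' actual classifier), the sketch below grows a labeled set from a few seed examples using a simple token-overlap similarity. The phrases, the threshold, and the `jaccard` helper are all invented for the example.

```python
def jaccard(a, b):
    """Token-overlap similarity of two short phrases."""
    sa, sb = set(a.split()), set(b.split())
    return len(sa & sb) / len(sa | sb)

def self_train(seeds, unlabeled, similarity, threshold=0.5, rounds=3):
    """Minimal bootstrapping loop: the labeled set grows by absorbing
    unlabeled items that are similar enough to one class's members."""
    labels = dict(seeds)
    pool = list(unlabeled)
    for _ in range(rounds):
        newly = {}
        for item in pool:
            # Score against each class by best similarity to its members.
            best = {lab: max((similarity(item, o)
                              for o, l in labels.items() if l == lab),
                             default=0.0)
                    for lab in ("pos", "neg")}
            top = max(best, key=best.get)
            if best[top] >= threshold and best[top] > min(best.values()):
                newly[item] = top
        if not newly:
            break                      # nothing confident left to absorb
        labels.update(newly)
        pool = [i for i in pool if i not in newly]
    return labels

labels = self_train(
    {"great picture quality": "pos", "battery died quickly": "neg"},
    ["picture quality is great", "screen quality is great"],
    jaccard, threshold=0.4)
```

    The real method classifies subject-attribute-evaluation tuples with contextual features, but the loop structure (seed, score, absorb, repeat) is the same.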

  • Mining Causality from Texts for Question Answering System

    Chaveevan PECHSIRI  Asanee KAWTRAKUL  

     
    PAPER

      Page(s):
    1523-1533

    This research aims to develop automatic mining of causality knowledge from texts to support an automatic question answering (QA) system in answering 'why' questions, which are among the most crucial forms of questions. The outcome of this research will assist people in diagnosing problems, for example in plant diseases, health, and industry. While previous works extracted causality knowledge from only one or two adjacent EDUs (Elementary Discourse Units), this research focuses on mining causality knowledge that spans multiple EDUs, taking multiple causes and multiple effects into consideration, where adjacency between cause and effect is not required. There are two main problems: how to identify interesting causality events in documents, and how to identify the boundaries of the causative unit and the effective unit in terms of multiple EDUs. Boundary identification in turn involves at least three problems: implicit boundary delimiters, nonadjacent cause-consequence pairs, and effects surrounded by causes. This research proposes using verb-pair rules, learnt by comparing a Naïve Bayes classifier (NB) and a Support Vector Machine (SVM), to identify causality EDUs in Thai agricultural and health news domains. The boundary identification problems are solved by utilizing the verb-pair rules, Centering Theory, and a cue phrase set. The reason for emphasizing verbs in extracting causality is that they explicitly express the consequent events of cause and effect, e.g. 'Aphids suck the sap from rice leaves. Then leaves will shrink. Later, they will become yellow and dry.' The results of the proposed methodology show that the verb-pair rules extracted with NB outperform those extracted with SVM when the corpus contains many occurrences of each verb, while the SVM results are better than the NB results when the corpus contains few occurrences of each verb. 
    The verb-pair rules extracted with NB achieve the highest precision (0.88) with a recall of 0.75 on the plant disease corpus, whereas those extracted with SVM achieve the highest precision (0.89) with a recall of 0.76 on bird flu news. For boundary determination, our methodology performs very well, with approximately 96% accuracy. In addition, the extracted causality results can be generalized as laws in the Inductive-Statistical account of Hempel's theory of explanation, which will be useful for QA and reasoning.

  • Automatic Acquisition of Qualia Structure from Corpus Data

    Ichiro YAMADA  Timothy BALDWIN  Hideki SUMIYOSHI  Masahiro SHIBATA  Nobuyuki YAGI  

     
    PAPER

      Page(s):
    1534-1541

    This paper presents a method to automatically acquire a given noun's telic and agentive roles from corpus data. These relations form part of the qualia structure assumed in the generative lexicon, where the telic role represents a typical purpose of the entity and the agentive role represents the origin of the entity. Our proposed method employs a supervised machine-learning technique that makes use of template-based contextual features derived from token instances of each noun. The output of our method is a ranked list of verbs for each noun, across the different qualia roles. We also propose a variant of Spearman's rank correlation to evaluate the correlation of two top-N ranked lists. Using this correlation measure, we demonstrate the ability of the proposed method to identify qualia structure relative to a conventional template-based method.
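    The paper's top-N variant is not reproduced here; as a reference point, this sketch computes the standard Spearman coefficient for two complete rankings of the same items (the example rankings are invented).

```python
def spearman_rho(rank_a, rank_b):
    """Standard Spearman rank correlation for two rankings of the same
    n items with no ties: rho = 1 - 6 * sum(d_i^2) / (n * (n^2 - 1)),
    where d_i is the rank difference of item i between the two lists."""
    n = len(rank_a)
    pos_b = {item: i for i, item in enumerate(rank_b)}
    d2 = sum((i - pos_b[item]) ** 2 for i, item in enumerate(rank_a))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Identical rankings correlate perfectly; reversed ones anti-correlate.
identical = spearman_rho(["eat", "cook", "buy"], ["eat", "cook", "buy"])
reversed_ = spearman_rho(["eat", "cook", "buy"], ["buy", "cook", "eat"])
```

    A top-N variant must additionally handle items that appear in only one of the two lists, which is the problem the paper addresses.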

  • Automatic Extraction of the Fine Category of Person Named Entities from Text Corpora

    Tri-Thanh NGUYEN  Akira SHIMAZU  

     
    PAPER

      Page(s):
    1542-1549

    Named entities play an important role in many Natural Language Processing applications. Currently, most named entity recognition systems rely on a small set of general named entity (NE) types. Though some efforts have been made to expand the hierarchy of NE types, the number of NE types remains fixed. In real applications, such as question answering or semantic search systems, users may be interested in more diverse, specific NE types. This paper proposes a method to extract categories of person named entities from text documents. Based on the Dual Iterative Pattern Relation Extraction method, we develop a model better suited to our problem and explore the generation of different pattern types. A method for validating candidate categories is proposed to improve performance, and experiments on the Wall Street Journal corpus give promising results.

  • Kernel Trees for Support Vector Machines

    Ithipan METHASATE  Thanaruk THEERAMUNKONG  

     
    PAPER

      Page(s):
    1550-1556

    Support vector machines (SVMs) are among the most effective classification techniques in several knowledge discovery and data mining applications. However, an SVM requires the user to choose the form of its kernel function and the parameters of that function, both of which directly affect the performance of the classifier. This paper proposes a novel method, named a kernel tree, in which the kernel function is composed of multiple kernels arranged in a tree structure. The optimal kernel tree structure and its parameters are determined by genetic programming (GP). For fine tuning of the kernel parameters, the gradient descent method is used. To evaluate the proposed method, benchmark datasets from UCI and a text classification dataset are used. The results indicate that the method can find a better solution than grid search and gradient search.
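    To illustrate why a tree of kernels is still a valid kernel (sums and products of positive-definite kernels remain positive definite), here is a minimal sketch with one hypothetical tree. The base kernels, their parameters, and the tree shape are assumptions for illustration, not the paper's GP-found structure.

```python
import math

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel."""
    return math.exp(-gamma * sum((a - b) ** 2 for a, b in zip(x, y)))

def poly(x, y, degree=2, c=1.0):
    """Polynomial kernel."""
    return (sum(a * b for a, b in zip(x, y)) + c) ** degree

def kernel_tree(x, y):
    """One hypothetical tree a GP search might produce:
    (rbf + poly) * rbf.  Closure under + and * guarantees the
    composite is still a valid (symmetric, positive-definite) kernel."""
    return (rbf(x, y) + poly(x, y)) * rbf(x, y, gamma=0.1)
```

    GP then searches over such tree shapes, and gradient descent tunes the leaf parameters (gamma, degree, c) of whichever tree survives.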

  • Improving Search Performance: A Lesson Learned from Evaluating Search Engines Using Thai Queries

    Shisanu TONGCHIM  Virach SORNLERTLAMVANICH  Hitoshi ISAHARA  

     
    PAPER

      Page(s):
    1557-1564

    This study initiates a systematic evaluation of web search engine performance using queries written in Thai. Statistical testing indicates that there are significant differences in the performance of the search engines. In addition to comparing search performance, we analyze the returned results. This analysis shows that the majority of returned results are unique to a particular search engine, and each system provides quite different results. This encourages the use of metasearch techniques that combine search results in order to improve performance and reliability in finding relevant documents. We examine several metasearch models based on the Borda count and Condorcet voting schemes. We also propose the use of Evolutionary Programming (EP) to optimize the weight vectors used by the voting algorithms. The results show that the metasearch approaches produce superior performance compared to any single search engine on Thai queries.
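    A minimal sketch of Borda-count rank fusion, the first voting scheme the abstract names. The document ids and rankings are invented, and the paper's EP-optimized per-engine weights are omitted here.

```python
def borda_fuse(rankings):
    """Fuse ranked result lists with the Borda count.

    rankings: list of ranked lists of document ids (best first).
    Each list awards n points to its top result, n-1 to the next,
    and so on; a document absent from a list scores 0 for that list.
    Returns documents sorted by total score, best first."""
    scores = {}
    for ranking in rankings:
        n = len(ranking)
        for pos, doc in enumerate(ranking):
            scores[doc] = scores.get(doc, 0) + (n - pos)
    return sorted(scores, key=scores.get, reverse=True)

# Three hypothetical engines answering the same query:
fused = borda_fuse([
    ["d1", "d2", "d3"],
    ["d2", "d1", "d4"],
    ["d2", "d3", "d1"],
])
```

    Weighted variants simply multiply each engine's points by a per-engine weight, which is what the EP optimization tunes.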

  • Statistical-Based Approach to Non-segmented Language Processing

    Virach SORNLERTLAMVANICH  Thatsanee CHAROENPORN  Shisanu TONGCHIM  Canasai KRUENGKRAI  Hitoshi ISAHARA  

     
    PAPER

      Page(s):
    1565-1573

    Several approaches have been studied to cope with the exceptional features of non-segmented languages. When there is no explicit information about word boundaries, segmenting an input text is a formidable task in language processing. Not only a contemporary word list but also word usages have to be maintained to cover the words in current texts. The accuracy and efficiency of higher-level processing rely heavily on this word boundary identification task. In this paper, we introduce statistics-based approaches to tackle the ambiguity in word segmentation. Word boundary identification is then treated as one component of a unified language processing framework. To exhibit the ability to conduct such unified language processing, we selectively study the tasks of language identification, word extraction, and a dictionary-less search engine.

  • Integration of Learning Methods, Medical Literature and Expert Inspection in Medical Data Mining

    Tu Bao HO  Saori KAWASAKI  Katsuhiko TAKABAYASHI  Canh Hao NGUYEN  

     
    PAPER

      Page(s):
    1574-1581

    From lessons learned in medical data mining projects, we show that the integration of advanced computational techniques and human inspection is indispensable in medical data mining. We propose an integrated approach that merges data mining and text mining methods with visualization support for expert evaluation. We also developed appropriate temporal abstraction and text mining methods to exploit the collected data. Furthermore, our visual discovery system D2MS allowed us to work actively and effectively with physicians. Significant findings in a hepatitis study were obtained using this integrated approach.

  • A Model-Based Learning Process for Modeling Coarticulation of Human Speech

    Jianguo WEI  Xugang LU  Jianwu DANG  

     
    PAPER

      Page(s):
    1582-1591

    Machine learning techniques have long been applied in many fields with considerable success. The purpose of a learning process is generally to obtain a set of parameters from a given data set by minimizing an objective function, explaining the data in a maximum likelihood or minimum estimation error sense. However, most learned parameters are highly data dependent and rarely reflect the true physical mechanism behind the observed data. To obtain the inherent knowledge in the observed data, it is necessary to combine physical models with the learning process rather than merely fitting the observations with a black-box model. To reveal the underlying properties of human speech production, we propose a learning process based on a physiological articulatory model and a coarticulation model, both derived from human mechanisms. A two-layer learning framework was designed to learn the parameters at the physiological level using the physiological articulatory model, and the parameters at the motor planning level using the coarticulation model. The learning process was carried out on an articulatory database of human speech production. The learned parameters were evaluated by numerical experiments and listening tests. The phonetic targets obtained in the planning stage provide evidence for understanding the virtual targets of human speech production. As a result, the model-based learning process reveals inherent mechanisms of human speech via learned parameters with physical meaning.

  • Effects of Term Distributions on Binary Classification

    Verayuth LERTNATTEE  Thanaruk THEERAMUNKONG  

     
    PAPER

      Page(s):
    1592-1600

    Text classification is an important tool for supporting decision making. Recently, in addition to term frequency and inverse document frequency, term distributions have been shown to improve classification accuracy in multi-class classification. This paper investigates the performance of these term distributions on binary classification using a centroid-based approach. In such a one-against-the-rest setting, there are only two classes: the positive (focused) class and the negative class. To improve performance, a so-called hierarchical EM method is applied to cluster the negative class, which is usually much larger and more diverse than the positive one, into several homogeneous groups. Experimental results on two collections of web pages, namely Drug Information (DI) and WebKB, show the merits of term distributions and clustering for binary classification. The performance of the proposed method is also investigated on the Thai Herbal collection, whose texts are written in Thai.
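    As a rough illustration of the centroid-based approach (without tf-idf weighting, term distributions, or the hierarchical EM clustering the paper adds), this sketch classifies a text by cosine similarity to per-class mean term-frequency vectors. All example texts and class names are invented.

```python
import math
from collections import Counter

def centroid(docs):
    """Mean term-frequency vector of a class."""
    total = Counter()
    for doc in docs:
        total.update(doc.split())
    n = len(docs)
    return {t: c / n for t, c in total.items()}

def cosine(v, w):
    """Cosine similarity of two sparse term vectors (dicts)."""
    dot = sum(v[t] * w.get(t, 0.0) for t in v)
    nv = math.sqrt(sum(x * x for x in v.values()))
    nw = math.sqrt(sum(x * x for x in w.values()))
    return dot / (nv * nw) if nv and nw else 0.0

# One-against-the-rest: a positive (focused) class vs. everything else.
pos = centroid(["aspirin relieves pain", "aspirin reduces fever"])
neg = centroid(["stocks fell sharply", "markets rallied today"])

query = Counter("aspirin for fever".split())
label = "drug" if cosine(query, pos) > cosine(query, neg) else "other"
```

    The paper's refinement replaces the single negative centroid with several homogeneous cluster centroids, since the "rest" class is too diverse for one mean vector.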

  • A Model of Discourse Segmentation and Segment Title Assignment for Lecture Speech Indexing

    Kazuhiro TAKEUCHI  Yukie NAKAO  Hitoshi ISAHARA  

     
    PAPER

      Page(s):
    1601-1610

    Dividing a lecture speech into segments and providing those segments as learning objects is a quite general and convenient way to construct e-learning resources. However, it is difficult to assign to each object an appropriate title that reflects its content. Since there are various aspects to analyzing discourse segments, researchers inevitably face this diversity when describing the "meanings" of discourse segments. In this paper, we propose assigning discourse segment titles based on a representation of their "meanings." In this assignment procedure, we focus on the speaker's evaluation of the event or the object of the speech. To verify the effectiveness of our idea, we examined the identification of segment boundaries from titles described with our procedure. We confirmed that this identification was more accurate than intuitive identification.

  • OWL/XDD Application Profiles

    Photchanan RATANAJAIPAN  Ekawit NANTAJEEWARAWAT  Vilas WUWONGSE  

     
    PAPER

      Page(s):
    1611-1620

    An application profile specifies a set of terms, drawn from one or more standard namespaces, for annotation of data, and constrains their usage and interpretations in a particular local application. An approach to representation of and reasoning with application profiles based on the OWL and OWL/XDD languages is proposed. The former is a standard Web ontology language, while the latter is a definite-clause-style rule language that employs XML expressions as its underlying data structure. Semantic constraints are defined in terms of rules, which are represented as XDD clauses. Application of the approach to defining application profiles with fine-grained semantic constraints, involving implicit properties of metadata elements, is illustrated. A prototype application profile development environment equipped with metadata validation features has been implemented based on the proposed framework.

  • Improving Employee Performance Appraisal Method through Web-Based Appraisal Support System: System Development from the Study on Thai Companies

    Shruti SHRESTHA  Junalux CHALIDABHONGSE  

     
    PAPER

      Page(s):
    1621-1629

    Employee performance appraisal is an effective way to determine the performance of employees in an organization. A study of companies in Thailand revealed that the majority do not use a computer-based employee appraisal system. Traditional paper-based appraisal involves a lot of manual work; it is time-consuming, insecure, and inflexible, and it makes it difficult to analyze performance and see trends in an employee's improvement. We have developed a web-based performance appraisal system that provides a secure and easy way to perform appraisals. In our system, the competencies are flexible and can be customized to specific job responsibilities. The system is goal-oriented, as it calculates objective scores, and it is connected to an easily accessible database. The first stage of our system is the 'Selection Stage,' in which managers and employees select the competencies and objectives they want to evaluate, according to the job positions. The second stage is the 'Appraisal/Evaluation Stage,' where managers rate employees according to different priority levels of competencies and objectives. Moreover, at this stage, employees can perform self-evaluation and 360-degree evaluation of their colleagues, subordinates, and managers. The final stage is the 'Development Planning Stage,' where managers and employees compare their appraisal results, discuss them, and plan future training or further steps toward reaching the objectives and improving the employee's competencies. In user testing, the system was found to be more efficient than the traditional appraisal system in areas such as evaluating the true abilities of employees, helping employees understand organizational goals, and providing fast and effective feedback. 
    Users found the system easy to understand and use and were more satisfied with its overall effectiveness.

  • Effect of Rearrangement and Annotation in Digitized Note on Remembrance

    Yoshitugu INOUE  Motoki MIURA  Susumu KUNIFUJI  

     
    PAPER

      Page(s):
    1630-1636

    Note taking is a fundamental activity for learning, and many software tools which enable students to take digitized notes have been proposed. Digitized notes are advantageous because they can be easily edited, rearranged, and shared. Although many note-taking tools have been proposed, there has been little research to examine the effect of note annotation and rearrangement with a digitized tool in terms of knowledge acquisition. Therefore, we have investigated the effect of note annotation and rearrangement on how well lecture content is remembered by learners. By annotation, we mean adding both handwritten and typed text, and rearrangement includes moving and deleting handwritten notes. We developed a simple note-taking application specialized for explanation, and evaluated it through a laboratory experiment with eight participants. The results show that note annotation and rearrangement significantly improved how well the participants remembered lecture content. Thus, the effect of annotation and rearrangement on remembrance was confirmed with respect to digitized notes.

  • Applicability of Camera Works to Free Viewpoint Videos with Annotation and Planning

    Ryuuki SAKAMOTO  Itaru KITAHARA  Megumu TSUCHIKAWA  Kaoru TANAKA  Tomoji TORIYAMA  Kiyoshi KOGURE  

     
    PAPER

      Page(s):
    1637-1648

    This paper shows the effectiveness of cinematographic camera work for controlling 3D video by measuring its effects on viewers with several typical camera works. 3D free-viewpoint video allows us to place its virtual camera at arbitrary positions and postures in 3D space. However, there have been no investigations of the adaptability of, or the dependencies between, the virtual camera's parameters (i.e., positions, postures, and transitions) and the impressions of viewers. Although camera work on 3D video based on cinematographic expertise seems important for making intuitively understandable video, it has not yet been considered. When applying camera works to 3D video using the planning techniques proposed in previous research, generating ideal output video is difficult because it may include defects due to image resolution limits, calculation errors, or occlusions, as well as defects caused by positioning errors of the virtual camera in the planning process. We therefore conducted an experiment with 29 subjects using camera-worked 3D videos created with simple annotation and planning techniques to determine the virtual camera parameters. The first aim of the experiment was to examine the effects of defects on viewer impressions; to measure these impressions, we conducted a semantic differential (SD) test. Comparisons between ground truth and 3D videos with planned camera works show that the present defects of camera work do not significantly affect viewers. The second aim was to examine whether the cameras controlled by planning and annotations conveyed intentional direction to the subjects. For this purpose, we conducted a factor analysis of the SD test answers, whose results indicate that the proposed virtual camera control, which exploits annotation and planning techniques, allows us to realize directed camera work on 3D video.

  • Method for Visualizing Complicated Structures Based on Unified Simplification Strategy

    Hiroki OMOTE  Kozo SUGIYAMA  

     
    PAPER

      Page(s):
    1649-1656

    In this paper, we present a novel force-directed method for automatically drawing intersecting compound mixed graphs (ICMGs) that can express complicated relations among elements such as adjacency, inclusion, and intersection. For this purpose, we take a strategy called unified simplification that can transform layout problem for an ICMG into that for an undirected graph. This method is useful for various information visualizations. We describe definitions, aesthetics, force model, algorithm, evaluation, and applications.

  • Utilizing "Wisdom of Crowds" for Handling Multimedia Contents

    Koichiro ISHIKAWA  Yoshihisa SHINOZAWA  Akito SAKURAI  

     
    PAPER

      Page(s):
    1657-1662

    Accumulation of multimedia content on the Internet increases both the need for time-efficient viewing and the possibility of compiling information on many users' viewing experiences. In these circumstances, systems have been proposed that present, in the Internet environment, a kind of summary of the viewing records of many viewers of a multimedia content item; such a summary is expected to show that some parts are seen by many users while other parts are rarely seen. We propose in this paper a SOM-like algorithm that accepts online, as inputs, the starts and ends of viewings of a multimedia content item by many users; a one-dimensional map is then self-organized, providing an approximation of the density distribution showing how many users see each part of the content. In this way, "viewing behavior of crowds" information is accumulated as experience, summarized into one SOM-like network as extracted knowledge, and presented to new users as transmitted knowledge. This function is similar to websites utilizing the "wisdom of crowds" and is facilitated by our proposed algorithm.
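    A crude stand-in for the self-organized one-dimensional map: binning viewing intervals into per-segment viewer counts approximates the density distribution the abstract describes. The intervals, duration, and bin count below are invented, and the actual system updates a SOM-like network online rather than a fixed histogram.

```python
def viewing_density(intervals, duration, bins=10):
    """Approximate how many users watched each part of a content item.

    intervals: list of (start, end) viewing intervals in seconds.
    Returns one viewer count per equal-width time bin."""
    width = duration / bins
    counts = [0] * bins
    for start, end in intervals:
        for b in range(bins):
            lo, hi = b * width, (b + 1) * width
            if start < hi and end > lo:   # interval overlaps this bin
                counts[b] += 1
    return counts

# Three hypothetical viewers of a 60-second clip:
density = viewing_density([(0, 30), (10, 60), (50, 60)], duration=60, bins=6)
```

    Peaks in such a profile mark the parts "seen by many users," which is the summary presented to new viewers.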

  • Regular Section
  • A Static Bug Detector for Uninitialized Field References in Java Programs

    Sunae SEO  Youil KIM  Hyun-Goo KANG  Taisook HAN  

     
    PAPER-Software Engineering

      Page(s):
    1663-1671

    Correctness of Java programs is important because they are executed in distributed computing environments. The object initialization scheme in the Java programming language is complicated, and this complexity may lead to undesirable semantic bugs. Various tools have been developed for detecting program patterns that might cause errors during program execution. However, current tools cannot identify code patterns in which an uninitialized field is accessed when an object is initialized. We refer to such erroneous patterns as uninitialized field references. In this paper, we propose a static pattern detection algorithm for identifying uninitialized field references. We design a sound analysis for this problem and implement an analyzer using the Soot framework. In addition, we apply our algorithm to some real Java applications. From the experiments, we identify 12 suspicious field references in the applications, and among those we find two suspected errors by manual inspection.

  • An Efficient Cache Invalidation Method in Mobile Client/Server Environment

    Hakjoo LEE  Jonghyun SUH  Sungwon JUNG  

     
    PAPER-Database

      Page(s):
    1672-1677

    In mobile computing environments, cache invalidation techniques are widely used. However, these techniques require a large invalidation report and show low cache utilization under high server update rates. In this paper, we propose a new cache-level cache invalidation technique called TTCI (Timestamp Tree-based Cache Invalidation) to overcome these two problems. TTCI also supports selective tuning for cache-level cache invalidation. Our experiments show that our technique requires a much smaller cache invalidation report and improves cache utilization.
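    TTCI's timestamp-tree structure is not reproduced here; this minimal sketch shows only the underlying timestamp comparison that any such invalidation report relies on. The item ids and timestamps are invented.

```python
def stale_items(cache, report):
    """Return cached item ids that must be dropped.

    cache:  dict of item_id -> timestamp at which the client cached it.
    report: server invalidation report as (item_id, update_ts) pairs.
    An item is stale if the server updated it after it was cached."""
    updated = dict(report)
    return [i for i, cached_at in cache.items()
            if i in updated and updated[i] > cached_at]

# Client cached "a" at t=100 and "b" at t=200; the server reports
# both were updated at t=150, so only "a" is stale.
stale = stale_items({"a": 100, "b": 200}, [("a", 150), ("b", 150)])
```

    The tree structure in TTCI exists to compress this report and let a client tune in to only the relevant part of it, rather than to change the comparison itself.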

  • Hiding Secret Information Using Adaptive Side-Match VQ

    Chin-Chen CHANG  Wen-Chuan WU  Chih-Chiang TSOU  

     
    PAPER-Application Information Security

      Page(s):
    1678-1686

    The major application of digital data hiding techniques is to deliver confidential data secretly via public but unreliable computer networks. Most existing data hiding schemes, however, exploit the raw data of cover images to perform secret communication. In this paper, a novel data hiding scheme is presented that manipulates images based on side-match vector quantization (SMVQ) compression. The proposed scheme provides adaptive alternatives for modulating the quantized indices in the compressed domain so that a considerable quantity of secret data can be artfully embedded. As the experimental results demonstrate, the proposed scheme provides a larger payload capacity without noticeable distortion in comparison with earlier schemes, while also delivering satisfactory compression performance.

  • Detecting Mouse Movement with Repeated Visit Patterns for Retrieving Noticed Knowledge Components on Web Pages

    Chen-Chung LIU  Chen-Wei CHUNG  

     
    PAPER-Educational Technology

      Page(s):
    1687-1696

    Educational websites contain rich knowledge components on each web page. Detecting student attention on web pages enables the recommendation of relevant knowledge components to students based on their current interests. Previous studies have shown the application of learner attention in intelligent learning systems. This study proposes a methodology for analyzing students' on-line mouse movement patterns, which indicate their attention. The methodology can be combined with learning systems that implement pedagogical models such as inquiry-based learning and problem-solving learning activities. The feasibility and effectiveness of the proposed methodology have been evaluated on student mouse movements in problem-solving scenarios.

  • Correction Method of Nonlinearity Due to Logarithm Operation for X-Ray CT Projection Data with Noise in Photon-Starved State

    Shin-ichiro IWAMOTO  Akira SHIOZAKI  

     
    PAPER-Biological Engineering

      Page(s):
    1697-1705

    In the acquisition of X-ray CT projection data, the logarithm operation is indispensable. However, the noise distribution is nonlinearly transformed by the logarithm operation, and this deteriorates the precision of CT numbers. The influence becomes particularly remarkable when only a few photons are caught by a detector, generating strong streak artifacts (SA) in the reconstructed image. Previously, we clarified the influence of this nonlinearity by statistical analysis and proposed a correction method for it. However, that method cannot compensate for clamp processing, and its suppression of SA is insufficient in a photon-starved state. In this paper, we propose a new technique for correcting the nonlinearity due to the logarithm operation for noisy data by combining the previously presented method with an adaptive filtering method. The technique applies adaptive filtering only when very few photons are captured. Moreover, we quantitatively evaluate the influence of noise on the reconstructed image in the proposed method through experiments using numerical phantoms. The experimental results show that spatial resolution is little affected despite effective SA suppression, and that CT numbers are hardly dependent on the number of incident photons.
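    The nonlinearity in question can be seen with a few lines of arithmetic: because -ln is convex, averaging log-transformed noisy low photon counts overestimates the projection obtained from the mean intensity (Jensen's inequality). The photon counts below are invented for illustration.

```python
import math

def projection(i0, i):
    """X-ray CT line-integral projection value: p = -ln(I / I0)."""
    return -math.log(i / i0)

# Three noisy detected counts for the same ray, with I0 = 1000 incident
# photons.  The log of the mean is the "true" projection; the mean of
# the logs is what averaging log-transformed data gives, and it is
# biased upward in the photon-starved state.
counts = [5, 10, 15]
mean_of_logs = sum(projection(1000, i) for i in counts) / len(counts)
log_of_mean = projection(1000, sum(counts) / len(counts))
```

    This gap between `mean_of_logs` and `log_of_mean` is the kind of bias the correction targets; the clamp-processing and adaptive-filtering details are beyond this sketch.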

  • ZigBee Based Location Estimation in Home Networking Environments

    Hyunggi CHO  Myungseok KANG  Jonghoon KIM  Hagbae KIM  

     
    LETTER-Networks

      Page(s):
    1706-1708

    This paper presents a Maximum Likelihood Location Estimation (MLLE) algorithm for home network environments. We propose deploying a cluster-tree topology in ZigBee networks and derive the maximum likelihood estimator under a log-normal model for the Received Signal Strength (RSS) measurements. Experiments are also conducted to validate the effectiveness of the proposed algorithm.
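Under the log-normal shadowing model, the RSS in dB is Gaussian around a distance-dependent mean, so with equal noise variances the maximum likelihood position minimizes the sum of squared RSS residuals. The sketch below does this by grid search; the path-loss parameters, anchor layout, and search area are illustrative assumptions, not values from the paper.

```python
import math

# Hypothetical path-loss parameters (the paper's ZigBee values may differ).
P0 = -40.0   # RSS at reference distance 1 m, in dBm
N_EXP = 2.5  # path-loss exponent

def expected_rss(anchor, pos):
    """Mean RSS at pos from an anchor under the log-distance model."""
    d = max(math.hypot(anchor[0] - pos[0], anchor[1] - pos[1]), 1e-6)
    return P0 - 10.0 * N_EXP * math.log10(d)

def ml_locate(anchors, rss, step=0.1, size=10.0):
    """Grid-search ML position under log-normal shadowing.

    With i.i.d. Gaussian shadowing in dB, maximizing the likelihood is
    equivalent to least-squares fitting of the RSS residuals.
    """
    best, best_cost = None, float("inf")
    y = 0.0
    while y <= size:
        x = 0.0
        while x <= size:
            cost = sum((r - expected_rss(a, (x, y))) ** 2
                       for a, r in zip(anchors, rss))
            if cost < best_cost:
                best, best_cost = (x, y), cost
            x += step
        y += step
    return best
```

In practice the grid search would be replaced by a gradient or Gauss-Newton solver, but the objective is the same.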

  • Adaptive Transform Coefficient Scan for H.264 Intra Coding

    Jie JIA  Eun-Ku JUNG  Hae-Kwang KIM  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1709-1711

    This paper presents an adaptive transform coefficient scan method that effectively improves the intra coding efficiency of H.264. Instead of applying a single zig-zag scan to all transform blocks, the proposed method applies a field scan to horizontally predicted blocks, a horizontal scan to vertically predicted blocks, and a zig-zag scan to blocks coded in the other prediction modes. Experiments based on JM9.6 were performed with intra-only coding. The results show that the proposed method yields an average PSNR enhancement of 0.16 dB, and a maximum of 0.31 dB, over standard H.264 with the zig-zag scan.
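The intuition is that a scan order should visit likely-nonzero coefficients first, so that the run of trailing zeros is as long as possible. The toy example below uses the standard H.264 4x4 zig-zag order together with simple raster and column scans as stand-ins for the paper's mode-dependent scans; the block contents are made up for illustration.

```python
# Scan orders over a 4x4 transform block stored row-major (index = 4*row + col).
ZIGZAG = [0, 1, 4, 8, 5, 2, 3, 6, 9, 12, 13, 10, 7, 11, 14, 15]
HORIZONTAL = list(range(16))                                # row by row
VERTICAL = [4 * r + c for c in range(4) for r in range(4)]  # column by column

def scan(block, order):
    """Serialize a 4x4 block (list of 16 coefficients) in the given order."""
    return [block[i] for i in order]

def significant_length(coeffs):
    """Index of the last nonzero coefficient + 1; shorter runs code cheaper."""
    last = -1
    for i, c in enumerate(coeffs):
        if c != 0:
            last = i
    return last + 1

# A vertically predicted block tends to leave residual energy in the top
# row, so a row-first scan reaches all of its nonzeros early.
block = [7, 3, 1, 0,
         0, 0, 0, 0,
         0, 0, 0, 0,
         0, 0, 0, 0]
```

For this block, the horizontal scan covers all nonzeros in 3 positions, while zig-zag and column scans need 6 and 9, showing why matching the scan to the prediction mode pays off.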

  • Improvement of Inter-Layer Motion Prediction in Scalable Video Coding

    Tae Meon BAE  Truong Cong THANG  Yong Man RO  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1712-1715

    In this letter, we propose an enhanced method for inter-layer motion prediction in scalable video coding (SVC). We propose using the refined motion data of the Fine Granular Scalability (FGS) layer, instead of the conventional motion data of the base quality layer, to reduce inter-layer redundancy more efficiently. Experimental results show that the proposed method enhances coding efficiency without increasing the computational complexity of the decoder.

  • A Visual Inpainting Method Based on the Compressed Domain

    Yi-Wei JIANG  De XU  Moon-Ho LEE  Cong-Yan LANG  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1716-1719

    Visual inpainting is an interpolation problem in which an image or frame with missing or damaged parts is restored. Over the past decades, a number of computable models of visual inpainting have been developed, but most of them operate in the pixel domain; little theoretical or computational work on visual inpainting has been done in the compressed domain. In this paper, a visual inpainting model in the discrete cosine transform (DCT) domain is proposed. DCT coefficients of the non-inpainting blocks are used to extract block features, and those features are propagated iteratively into the inpainting region. Experimental results on I frames of MPEG-4 sequences demonstrate the efficiency and accuracy of the proposed algorithm.
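A minimal sketch of the propagation idea: treat each block as one feature value (think of a DC coefficient) and repeatedly fill damaged blocks from their known neighbors. The scalar feature and the plain averaging rule are deliberate simplifications of the paper's DCT-domain feature propagation, chosen only to make the iteration concrete.

```python
def propagate_features(grid, mask, iters=50):
    """Iteratively fill masked block features from known neighbors.

    grid: 2D list of per-block features (floats); mask[i][j] is True where
    the block is damaged. Each pass replaces a damaged block's feature with
    the mean of its in-bounds 4-neighbors, so information flows inward from
    the intact blocks toward the center of the inpainting region.
    """
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        nxt = [row[:] for row in grid]
        for i in range(h):
            for j in range(w):
                if mask[i][j]:
                    nb = [grid[i + di][j + dj]
                          for di, dj in ((-1, 0), (1, 0), (0, -1), (0, 1))
                          if 0 <= i + di < h and 0 <= j + dj < w]
                    nxt[i][j] = sum(nb) / len(nb)
        grid = nxt
    return grid
```

Working on one feature per block rather than per pixel is what makes a compressed-domain approach cheap: the iteration runs over the block grid, not the full image.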

  • Decorative Character Recognition by Graph Matching

    Shinichiro OMACHI  Shunichi MEGAWA  Hirotomo ASO  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    1720-1723

    A practical optical character reader must handle not only common fonts but also complex decorative fonts. However, recognizing the wide variety of decorative character images remains a challenging problem in document image analysis. Since the appearance of such decorative characters is complicated, most general character recognition systems perform poorly on them. In this paper, an algorithm that recognizes decorative characters by structural analysis with a graph-matching technique is proposed. Character structure is extracted using topographical features of multi-scale images, and the extracted structure is represented as a graph. A character image is recognized by matching the graphs of the input and standard patterns. Experimental results show the effectiveness of the proposed algorithm.
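For tiny graphs, matching an input character's graph against a standard pattern can be sketched as searching over node assignments for the one that preserves the most edges. The brute-force routine below is only an illustration of that idea; the paper's matching algorithm must of course scale to realistic character graphs, where exhaustive search is infeasible.

```python
from itertools import permutations

def graph_similarity(edges_a, n_a, edges_b, n_b):
    """Best edge overlap between two small undirected graphs.

    Tries every injective assignment of the smaller graph's nodes to the
    larger graph's nodes and returns the largest number of edges preserved.
    Factorial cost: practical only for tiny illustrative graphs.
    """
    if n_a > n_b:  # always map the smaller graph into the larger one
        edges_a, n_a, edges_b, n_b = edges_b, n_b, edges_a, n_a
    eb = {frozenset(e) for e in edges_b}
    best = 0
    for perm in permutations(range(n_b), n_a):
        score = sum(frozenset((perm[u], perm[v])) in eb for u, v in edges_a)
        best = max(best, score)
    return best
```

Recognition then amounts to scoring the input graph against each standard-pattern graph and picking the class with the highest similarity.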