IEICE TRANSACTIONS on Information

  • Impact Factor

    0.59

  • Eigenfactor

    0.002

  • Article Influence

    0.1

  • CiteScore

    1.4

Volume E88-D No.6 (Publication Date: 2005/06/01)

    Special Section on Software Engineering for Embedded Systems
  • FOREWORD

    Masayuki HIRAYAMA  

     
    FOREWORD

      Page(s):
    1103-1104
  • Highly Reliable Embedded Software Development Using Advanced Software Technologies

    Takuya KATAYAMA  Tatsuo NAKAJIMA  Taiichi YUASA  Tomoji KISHI  Shin NAKAJIMA  Shuichi OIKAWA  Masahiro YASUGI  Toshiaki AOKI  Mitsutaka OKAZAKI  Seiji UMATANI  

     
    INVITED PAPER

      Page(s):
    1105-1116

    We have launched the "Highly Reliable Embedded Software Development" project as part of the e-Society Project supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. The aim of this project is to enable industry to produce highly reliable, advanced software by introducing the latest software technologies into embedded software development. In this paper, we give an overview of the project and of our activities and results so far.

  • Practical and Incremental Maintenance of Software Resources in Consumer Electronics Products

    Kazuma AIZAWA  Haruhiko KAIYA  Kenji KAIJIRI  

     
    PAPER

      Page(s):
    1117-1125

    We introduce a method, called the FC method, for maintaining software resources, such as source code and design documents, in consumer electronics products. Because a consumer electronics product is revised frequently and rapidly, the software components in such a product are revised in the same way. However, it is not easy for software engineers to follow the revision of the product, because requirements changes for the product, including changes to its functionality and its hardware components, are largely independent of the structure of the current software resources. The FC method lets software engineers restructure software resources, especially design documents, stepwise, so that they can follow the requirements changes for the product easily. We report an application of this method in our company to validate it. From this application, we confirmed that software quality improved by about a factor of two and that the efficiency of the development process improved more than fourfold.

  • Extraction of Transformation Rules from UML Diagrams to SpecC

    Tetsuro KATAYAMA  

     
    PAPER

      Page(s):
    1126-1133

    Embedded systems are used in broad fields and have become one of the indispensable, fundamental technologies of today's information society. As embedded systems grow large-scale and complicated, it is advantageous to design and develop them as a system LSI (Large Scale Integration). The structure of system LSIs has been increasing in complexity every year, and with conventional methods and techniques the improvement in design productivity has not caught up with the growth in complexity. Hence, a design flow for system LSIs has already been proposed in which the specifications of a system are described in UML (Unified Modeling Language) and the system is then designed in a system-level language. To realize this flow, it is important to establish how to convert UML into a system-level language during specification description and design. This paper proposes, extracts, and verifies transformation rules from UML to SpecC, one of the system-level languages. As an example to verify the rules, a "headlights control system of a car" is adopted: SpecC code was generated from the elements of the UML diagrams based on the rules, and it was confirmed that the example executes correctly in simulation. By using the transformation rules proposed in this paper, the specification and the implementation of a system can be connected seamlessly. Hence, the rules can improve the design productivity of system LSIs and of embedded systems.

  • Constructing a Bayesian Belief Network to Predict Final Quality in Embedded System Development

    Sousuke AMASAKI  Yasunari TAKAGI  Osamu MIZUNO  Tohru KIKUNO  

     
    PAPER

      Page(s):
    1134-1141

    Recently, software development projects have been required to produce highly reliable systems within a short period and at low cost. In such a situation, software quality prediction helps to confirm that the software product satisfies the required quality expectations. In this paper, using a Bayesian Belief Network (BBN), we construct a prediction model based on relationships elicited from the embedded software development process. Reflecting a characteristic of embedded software development, we propose to classify test and debug activities into two distinct activities, one for software and one for hardware; we call the resulting model "the BBN for an embedded software development process". For comparison, we define "the BBN for a general software development process" as a model that does not make this classification but merges the two into a single activity. Finally, we conducted experimental evaluations by applying the two BBNs to actual project data. The results of our experiments show that the BBN for the embedded software development process is superior to the BBN for the general development process and is effective in practical use.
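
    As an illustration of the kind of inference such a model supports, the sketch below builds a tiny three-node belief network and computes the posterior probability of acceptable final quality by enumeration. The structure, node meanings, and all probabilities are hypothetical placeholders, not the networks constructed in the paper.

    ```python
    # A minimal sketch of quality prediction with a tiny Bayesian Belief Network.
    # Structure and probabilities are hypothetical, not the paper's networks.
    from itertools import product

    # Binary nodes: SW = software test/debug effort is high,
    #               HW = hardware-related test/debug effort is high,
    #               Q  = final quality is acceptable.
    P_SW = {True: 0.6, False: 0.4}              # prior on SW
    P_HW = {True: 0.5, False: 0.5}              # prior on HW
    P_Q = {                                     # P(Q=True | SW, HW)
        (True, True): 0.9, (True, False): 0.7,
        (False, True): 0.6, (False, False): 0.2,
    }

    def joint(sw, hw, q):
        pq = P_Q[(sw, hw)]
        return P_SW[sw] * P_HW[hw] * (pq if q else 1.0 - pq)

    def posterior_quality(evidence):
        """P(Q=True | evidence); evidence maps 'SW'/'HW' to booleans."""
        num = den = 0.0
        for sw, hw, q in product([True, False], repeat=3):
            if "SW" in evidence and sw != evidence["SW"]:
                continue
            if "HW" in evidence and hw != evidence["HW"]:
                continue
            p = joint(sw, hw, q)
            den += p
            if q:
                num += p
        return num / den

    print(posterior_quality({"SW": True}))                # one observation
    print(posterior_quality({"SW": True, "HW": False}))   # two observations
    ```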

  • An Effective Testing Method for Hardware Related Fault in Embedded Software

    Takeshi SUMI  Osamu MIZUNO  Tohru KIKUNO  Masayuki HIRAYAMA  

     
    PAPER

      Page(s):
    1142-1149

    With the proliferation of ubiquitous computing, various products containing large embedded software systems have been developed. One of the most typical features of embedded software is the tight coupling of software and hardware: the software is deeply connected to hardware devices. The variety of hardware involved makes quality assurance of embedded software more difficult. In order to assure the quality of embedded software more effectively, this paper discusses the features of embedded software and an effective method of quality assurance for it. We first analyze a failure distribution of embedded software and discuss the effects of hardware devices on its quality. Currently, reducing hardware-related faults requires a huge testing effort with a large number of test items; thus, one of the most important issues in quality assurance for embedded software is how to reduce the cost and effort of software testing. Next, focusing on hardware constraints as well as software specifications, we propose an evaluation metric for determining the functions that are important for the quality of embedded software. Furthermore, by referring to the metric, undesirable behaviors of the important functions are identified as root nodes for fault tree analysis. From a case study applying the proposed method to actual project data, we confirmed that test items reflecting the properties of embedded software were constructed, and that these test items are appropriate for detecting hardware-related faults in embedded systems.

    Regular Section
  • Inherent Ambiguity of Languages Generated by Spine Grammars

    Ikuo KAWAHARADA  Takumi KASAI  

     
    PAPER-Automata and Formal Language Theory

      Page(s):
    1150-1158

    There have been many arguments that the underlying structure of natural languages is beyond the descriptive capacity of context-free languages. A well-known formalism addressing this is tree adjoining grammars; less common are spine grammars, linear indexed grammars, head grammars, and combinatory categorial grammars. These grammar models are known to have the same generative power with respect to string languages and to fall into the class of mildly context-sensitive grammars. On the automaton side, it is known that the class of languages accepted by transfer pushdown automata is exactly the class of linear indexed languages. In this paper, deterministic transfer pushdown automata are introduced. We show that the language accepted by a deterministic transfer pushdown automaton is generated by an unambiguous spine grammar. Moreover, we show that there exists an inherently ambiguous language.

  • A Performance Driven Module Generator for a Dual-Rail PLA with Embedded 2-Input Logic Cells

    Ulkuhan EKINCIEL  Hiroaki YAMAOKA  Hiroaki YOSHIDA  Makoto IKEDA  Kunihiro ASADA  

     
    PAPER-Computer Components

      Page(s):
    1159-1167

    This paper describes the design and development of a module generator for a dual-rail PLA with embedded 2-input logic cells in a 0.35 µm CMOS technology. In order to automatically generate logic-cell-based PLA layouts from circuit specifications, a module generator is developed as a design automation tool for logic-cell-based PLAs, together with a structural improvement. The module generator is based on a timing-driven design methodology and consists of logic synthesis, transistor sizing and logic cell generation, stimulus generation, and HDL model generation parts. The generator uses a design constraint to achieve flexible transistor sizing in the logic cell generation part; in addition, the generated logic cells can easily be adapted to a layout generator. The layout is generated using a 0.35 µm, 3-metal-layer CMOS technology. Moreover, an HDL model generator is developed to create delay behavior models easily and quickly with precise timing parameters. The design complexity, which is becoming an important issue for VLSI circuits, can be partially reduced, and human-caused errors are minimized, by the module generator. A PLA layout in GDS-II form and an HDL behavior model of a Boolean function with 64-bit input, 1-bit output, and 220 product terms can be generated within 8 minutes on a Sun UltraSPARC-III 900 MHz processor. The very short time required to compile a module makes it feasible for designers to try many different design configurations in order to find a better one.

  • Reducing Processor Usage on Heavily-Loaded Network Servers with POSIX Real-Time Scheduling Control

    Eiji KAWAI  Youki KADOBAYASHI  Suguru YAMAGUCHI  

     
    PAPER-System Programs

      Page(s):
    1168-1177

    Polling I/O mechanisms on Unix platforms, such as select() and poll(), cause high processing overhead when they are used in a heavily-loaded network server with many concurrent open sockets. A large waste of processing power incurs not only service degradation but also various troubles such as high electric power consumption and a worsened MTBF of the server hosts. It is thus a serious issue, especially for large-scale service providers such as Internet data centers (iDCs), where a great number of heavily-loaded network servers are operated. As a solution to this problem, we propose a technique of fine-grained control over the invocation intervals of the polling I/O function. The uniqueness of this study is its use of POSIX real-time scheduling to enable this fine-grained execution control. Although earlier solutions, such as explicit event-delivery mechanisms, also addressed the problem, they require major modifications to the OS kernel and a transition from the traditional polling I/O model to a new explicit event-notification model. In contrast, our technique can be implemented at low cost, because it merely inserts a few small blocks of code into the server program and requires no modification of the OS kernel.
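
    The following sketch shows the general shape of this idea, not the authors' implementation: the server loop runs under POSIX real-time scheduling (Linux-only, and requiring sufficient privileges) and pauses for a fixed, tunable interval between polling passes. The port, interval, and handling code are hypothetical.

    ```python
    # A minimal sketch of rate-limited polling I/O under POSIX RT scheduling.
    import os
    import select
    import socket
    import time

    POLL_INTERVAL = 0.002          # seconds between polling passes (tunable)

    srv = socket.socket()
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind(("127.0.0.1", 8080))
    srv.listen(128)
    srv.setblocking(False)

    # POSIX real-time scheduling (SCHED_FIFO) makes the inserted pause a
    # predictable bound on the polling rate. Linux-only; needs privileges.
    os.sched_setscheduler(0, os.SCHED_FIFO, os.sched_param(1))

    poller = select.poll()
    poller.register(srv.fileno(), select.POLLIN)

    while True:
        for fd, event in poller.poll(0):      # one non-blocking poll pass
            if fd == srv.fileno() and event & select.POLLIN:
                conn, _ = srv.accept()
                conn.close()                  # placeholder for real handling
        time.sleep(POLL_INTERVAL)             # fine-grained invocation interval
    ```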

  • Extracting Components from Object-Oriented System: A Transformational Approach

    Eunjoo LEE  Woochang SHIN  Byungjeong LEE  Chisu WU  

     
    PAPER-Software Engineering

      Page(s):
    1178-1190

    The increasing complexity and shorter life cycles of software have made software reuse necessary. Object-oriented development has not facilitated extensive reuse, and it has become difficult to manage and understand modern object-oriented systems, which have grown very extensive and complex. Components, compared with objects, provide more advanced means of structuring, describing, and developing systems, because they are more coarse-grained and have more domain-specific aspects than objects. They are also well suited to today's distributed environments owing to their reusability, maintainability, and granularity. In this paper, we present a process for extracting components from object-oriented systems. We define static metrics and guidelines that can be applied to transform object-oriented systems into component-based systems. Our process consists of two parts: first, basic components are created based on the composition and inheritance relationships between classes; second, the intermediate system is refined into a component-based system using our proposed static metrics and guidelines.

  • Exploiting Versions for Transactional Cache Consistency

    Heum-Geun KANG  

     
    PAPER-Database

      Page(s):
    1191-1198

    The efficiency of algorithms managing data caches has a major impact on the performance of systems that utilize client-side data caching. In these systems, two versions of data can be maintained without additional overhead by exploiting the replication of data in the server's buffer and clients' caches. In this paper, we present a new cache consistency algorithm employing versions: Two Versions-Callback Locking (2V-CBL). Our experimental results indicate that 2V-CBL provides good performance, and in particular outperforms a leading cache consistency algorithm, Asynchronous Avoidance-based Cache Consistency, when some clients run only read-only transactions.

  • Parallel Image Convolution Processing with Replicas in a Network of Workstations

    Masayoshi ARITSUGI  Hiroki FUKATSU  Yoshinari KANAMORI  

     
    PAPER-Database

      Page(s):
    1199-1209

    Data accessed by many sites are replicated in distributed environments for performance and availability. In this paper, replication schemes are examined in parallel image convolution processing. This paper presents a system architecture that we have developed with CORBA (Common Object Request Broker Architecture) for the processing. Employing CORBA enables us to make use of a cluster of workstations, each of which has a different level of computing power. The paper also describes a parallel and distributed image convolution processing model using replicas stored in a network of workstations, and reports some experimental results showing that our analytical model can agree with practical situations.
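
    As a rough sketch of the computation being distributed (hypothetical, and using local processes rather than the paper's CORBA-based workstation cluster), the code below splits an image into row strips with halo borders, convolves each strip in a separate worker, and reassembles the result.

    ```python
    # A minimal sketch of strip-parallel image convolution; local process
    # workers stand in for the paper's replica-holding workstations.
    from concurrent.futures import ProcessPoolExecutor
    import numpy as np

    def convolve_strip(args):
        """Convolve one halo-padded strip; returns the valid rows only."""
        strip, kernel = args
        kh, kw = kernel.shape
        out = np.empty((strip.shape[0] - kh + 1, strip.shape[1] - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(strip[i:i + kh, j:j + kw] * kernel)
        return out

    def parallel_convolve(image, kernel, workers=4):
        pad = kernel.shape[0] // 2
        padded = np.pad(image, pad, mode="edge")      # halo for border pixels
        bounds = np.linspace(0, image.shape[0], workers + 1, dtype=int)
        strips = [(padded[b0:b1 + 2 * pad], kernel)
                  for b0, b1 in zip(bounds[:-1], bounds[1:])]
        with ProcessPoolExecutor(workers) as ex:
            return np.vstack(list(ex.map(convolve_strip, strips)))

    if __name__ == "__main__":                        # needed for process pools
        img = np.random.rand(64, 64)
        k = np.ones((3, 3)) / 9.0                     # simple averaging kernel
        print(parallel_convolve(img, k).shape)        # (64, 64)
    ```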

  • Defect Level vs. Yield and Fault Coverage in the Presence of an Unreliable BIST

    Yoshiyuki NAKAMURA  Jacob SAVIR  Hideo FUJIWARA  

     
    PAPER-Dependable Computing

      Page(s):
    1210-1216

    Built-in self-test (BIST) hardware is included today in many chips and is used to test the chip's functional circuits. Since this BIST hardware is manufactured in the same technology as the functional circuits themselves, it may itself be faulty. It is important, therefore, to assess the impact of an unreliable BIST on the product defect level after test. Williams and Brown's formula, which expresses the product defect level as a function of the manufacturing yield and the fault coverage, is re-examined in this paper, with special attention to the influence of an unreliable BIST on this relationship. We show that when the BIST hardware is used to screen the functional product, unreliable BIST circuitry tends, in many cases, to reduce the effective fault coverage and increase the corresponding product defect level. The BIST unreliability impact is assessed for both the early-life phase and the product-maturity phase.
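
    For reference, the classical Williams and Brown relation referred to above expresses the defect level DL (the fraction of bad parts that pass the test) in terms of the yield Y and the fault coverage T:

    ```latex
    % Williams-Brown defect-level relation: yield Y, fault coverage T
    DL = 1 - Y^{\,1-T}
    ```

    A perfect test (T = 1) gives DL = 0, while shipping without any testing (T = 0) leaves DL = 1 - Y; the paper examines how an unreliable BIST degrades the effective fault coverage in this relationship.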

  • Dynamic Asset Allocation for Stock Trading Optimized by Evolutionary Computation

    Jangmin O  Jongwoo LEE  Jae Won LEE  Byoung-Tak ZHANG  

     
    PAPER-e-Business Modeling

      Page(s):
    1217-1223

    Effective trading with pattern-based multi-predictors of stock prices requires an intelligent asset allocation strategy. In this paper, we study a method of dynamic asset allocation, called the meta policy, which decides what proportion of the asset should be allocated to each trade recommendation. The meta policy makes its decision by considering both the recommendations of the multi-predictors and the current ratio of stock funds to the total asset. We adopt evolutionary computation to optimize the meta policy. Experimental results on the Korean stock market show that the trading system with the proposed meta policy outperforms systems with fixed asset allocation methods.
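
    The sketch below shows one generic way such a meta policy could be optimized by evolutionary computation. The linear policy encoding, the toy market simulation, and every parameter are hypothetical illustrations; the paper's actual policy representation and fitness function differ in detail.

    ```python
    # A minimal sketch of evolving a meta policy by truncation selection
    # plus Gaussian mutation. All numbers here are hypothetical.
    import random

    def fitness(policy, episodes=200):
        """Toy trading return for a linear policy (w_conf, w_ratio, bias)."""
        rng = random.Random(0)            # fixed episodes: deterministic score
        total = 0.0
        for _ in range(episodes):
            confidence = rng.random()     # predictor's recommendation score
            stock_ratio = rng.random()    # current stock funds / total asset
            alloc = min(max(policy[0] * confidence
                            + policy[1] * stock_ratio + policy[2], 0.0), 1.0)
            move = rng.gauss(0.02 * confidence - 0.005, 0.05)  # toy price move
            total += alloc * move
        return total

    def evolve(pop_size=20, generations=60, sigma=0.1):
        pop = [[random.uniform(-1, 1) for _ in range(3)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[: pop_size // 2]          # truncation selection
            children = [[g + random.gauss(0, sigma)
                         for g in random.choice(parents)]
                        for _ in range(pop_size - len(parents))]
            pop = parents + children                # (mu + lambda) survival
        return max(pop, key=fitness)

    best = evolve()
    print("best meta policy:", [round(w, 3) for w in best])
    ```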

  • Performance Evaluation of the AV CODEC on a Low-Power SPXK5SC DSP Core

    Takahiro KUMURA  Norio KAYAMA  Shinichi SHIONOYA  Kazuo KUMAGIRI  Takao KUSANO  Makoto YOSHIDA  Masao IKEKAWA  Ichiro KURODA  Takao NISHITANI  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    1224-1230

    This paper provides a performance evaluation of our audio and video CODEC using a method for rapidly verifying and evaluating the overall performance, on real-time workloads, of system LSIs integrated with SPXK5SC DSP cores. The SPXK5SC was developed as a DSP core well suited to system LSIs. Although it is very important to evaluate the overall performance of target LSIs on real workloads before actual LSI fabrication, software simulators are too slow to deal with real workloads, and full hardware prototyping cannot respond well to design improvements. Therefore, we have developed a hardware emulation approach for system LSIs integrated with an SPXK5SC DSP core in order to evaluate the overall performance of an audio/video CODEC on a target system. Our emulation system, which uses a DSP core TEG with a bus interface and an FPGA, is suitable for overall system evaluation on real-time workloads as well as for architectural investigation. In this paper, we discuss the use of the emulation system in evaluating performance during AV CODEC execution. In addition, an architecture design based on our emulation system is described.

  • Real-Time Facial and Eye Gaze Tracking System

    Kang Ryoung PARK  Jaihie KIM  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    1231-1238

    The goal of gaze detection is to locate the position (on a monitor) where a user is looking. Previous studies used a single wide-view camera that can capture the user's entire face; however, the image resolution of such a camera is too low, and the fine movements of the user's eyes cannot be detected exactly. We therefore propose a new gaze detection system with dual cameras (a wide-view and a narrow-view camera). In order to locate the user's eye position accurately, the narrow-view camera has auto focusing/panning/tilting functionality based on the 3D eye position detected by the wide-view camera. In addition, we use IR-LED illuminators for the wide- and narrow-view cameras, which ease the detection of facial features and of pupil and iris positions. To overcome the problem of specular reflections on glasses caused by the illuminators, we use dual IR-LED illuminators for each camera and detect the accurate eye position that is not hidden by the specular reflection. Experimental results show that the gaze detection error between the computed positions and the real ones is about 2.89 cm (RMS).

  • Extension of Hidden Markov Models for Multiple Candidates and Its Application to Gesture Recognition

    Yosuke SATO  Tetsuji OGAWA  Tetsunori KOBAYASHI  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    1239-1247

    We propose a modified Hidden Markov Model (HMM) to improve gesture recognition with a moving camera. The conventional HMM is formulated to deal with only one feature candidate per frame. For a mobile robot, however, the background and the lighting conditions are always changing, and feature extraction becomes difficult; it is almost impossible to extract a single reliable feature vector under such conditions. In this paper, we define a new gesture recognition framework in which multiple candidate feature vectors are generated together with confidence measures, and the HMM is extended to deal with these multiple feature vectors. Experimental results, comparing the proposed system with DCT-based feature vectors and with the method of selecting only one candidate feature point, verify the effectiveness of the proposed technique.
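
    One natural reading of this extension, sketched below with hypothetical numbers, replaces the per-frame emission probability with a confidence-weighted sum over the frame's candidate observations; the paper's exact formulation may differ.

    ```python
    # A minimal sketch of Viterbi decoding where each frame carries several
    # candidate observations with confidence weights. All values hypothetical.
    import numpy as np

    A = np.array([[0.8, 0.2],       # state transition probabilities
                  [0.3, 0.7]])
    pi = np.array([0.6, 0.4])       # initial state distribution
    B = np.array([[0.7, 0.2, 0.1],  # P(symbol | state), 3 discrete symbols
                  [0.1, 0.3, 0.6]])

    # Each frame: a list of (candidate symbol, confidence) pairs.
    frames = [
        [(0, 0.9), (1, 0.1)],
        [(1, 0.5), (2, 0.5)],
        [(2, 0.8), (0, 0.2)],
    ]

    def pooled_likelihood(state, candidates):
        # b'_j(t) = sum_k conf_k * b_j(o_k): pool over the frame's candidates
        return sum(conf * B[state, sym] for sym, conf in candidates)

    def viterbi(frames):
        n = len(pi)
        delta = pi * np.array([pooled_likelihood(j, frames[0])
                               for j in range(n)])
        path = [[j] for j in range(n)]
        for cands in frames[1:]:
            new_delta, new_path = np.empty(n), []
            for j in range(n):
                scores = delta * A[:, j]
                best = int(np.argmax(scores))
                new_delta[j] = scores[best] * pooled_likelihood(j, cands)
                new_path.append(path[best] + [j])
            delta, path = new_delta, new_path
        return path[int(np.argmax(delta))]

    print("best state sequence:", viterbi(frames))
    ```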

  • Extracting Partial Parsing Rules from Tree-Annotated Corpus: Toward Deterministic Global Parsing

    Myung-Seok CHOI  Kong-Joo LEE  Key-Sun CHOI  Gil Chang KIM  

     
    PAPER-Natural Language Processing

      Page(s):
    1248-1255

    It is not always possible to find a global parse for an input sentence, owing to problems such as errors in the sentence and the incompleteness of the lexicon and grammar. Partial parsing is an alternative approach for responding to these problems: partial parsing techniques try to recover syntactic information efficiently and reliably by sacrificing completeness and depth of analysis. One of the difficulties in partial parsing is how to extract the grammar automatically. In this paper, we present a method of automatically extracting partial parsing rules from a tree-annotated corpus using the decision tree method. Our goal is deterministic global parsing using partial parsing rules; in other words, we aim to extract partial parsing rules with higher accuracy and broader coverage. First, we define a rule template that makes it possible to learn a subtree for a given substring, so that the resulting rules are more specific and stricter to apply. Second, rule candidates extracted from a training corpus are enriched with contextual and lexical information using the decision tree method and verified through cross-validation. Last, we underspecify non-deterministic rules by merging substructures with ambiguity in those rules. The learned grammar is similar to a phrase structure grammar with contextual and lexical information, but it allows building structures of depth one or more. Thanks to automatic learning, the partial parsing rules can be consistent and domain-independent. Partial parsing with this grammar processes an input sentence deterministically using longest-match heuristics, applying the rules recursively. The experiments showed that the partial parser using the automatically extracted rules is not only accurate and efficient but also achieves reasonable coverage for Korean.

  • Splitting Input for Machine Translation Using N-gram Language Model Together with Utterance Similarity

    Takao DOI  Eiichiro SUMITA  

     
    PAPER-Natural Language Processing

      Page(s):
    1256-1264

    In order to boost the translation quality of corpus-based MT systems for speech translation, the technique of splitting an input utterance appears promising. In previous research, many methods used word-sequence characteristics, such as N-gram clues, around splitting positions. In this paper, to supplement splitting methods based on word-sequence characteristics, we introduce another clue: similarity based on edit distance. In our splitting method, we generate candidate splittings of an utterance based on N-grams and select the best one by measuring the similarity of the resulting utterances against a corpus. This selection is founded on the assumption that a corpus-based MT system can correctly translate an utterance that is similar to an utterance in its training corpus. We conducted experiments using three MT systems: two EBMT systems, one using a phrase as the translation unit and the other using an utterance, and an SMT system. The translation results under various conditions were evaluated by objective measures and a subjective measure. The experimental results demonstrate that the proposed method is valuable for all three systems: using utterance similarity can improve the translation quality.
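
    The edit-distance clue can be sketched as follows (hypothetical corpus and candidates; the real system generates its candidates with an N-gram language model): each candidate splitting is scored by how similar its segments are to training utterances, and the most corpus-like candidate is selected.

    ```python
    # A minimal sketch of selecting among candidate splittings by word-level
    # edit-distance similarity against a training corpus.
    def edit_distance(a, b):
        """Word-level Levenshtein distance via dynamic programming."""
        m, n = len(a), len(b)
        d = [[0] * (n + 1) for _ in range(m + 1)]
        for i in range(m + 1):
            d[i][0] = i
        for j in range(n + 1):
            d[0][j] = j
        for i in range(1, m + 1):
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1, d[i][j - 1] + 1,
                              d[i - 1][j - 1] + cost)
        return d[m][n]

    def similarity(segment, corpus):
        """Similarity to the closest training utterance, in [0, 1]."""
        return max(1.0 - edit_distance(segment, utt)
                   / max(len(segment), len(utt)) for utt in corpus)

    def best_split(candidates, corpus):
        """Pick the candidate whose worst segment is most corpus-like."""
        return max(candidates,
                   key=lambda segs: min(similarity(s, corpus) for s in segs))

    corpus = [["could", "you", "book", "a", "room"],
              ["i", "have", "a", "question"]]
    candidates = [
        [["could", "you", "book", "a", "room", "i", "have", "a", "question"]],
        [["could", "you", "book", "a", "room"], ["i", "have", "a", "question"]],
    ]
    print(best_split(candidates, corpus))   # the two-segment split wins
    ```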

  • Motor Unit Activity in Biceps Brachii Muscle during Voluntary Isovelocity Elbow Flexion

    Ryuhei OKUNO  Kazuya MAEKAWA  Jun AKAZAWA  Masaki YOSHIDA  Kenzo AKAZAWA  

     
    PAPER-Biological Engineering

      Page(s):
    1265-1272

    Eight-channel surface myoelectric signals (EMGs) were recorded simultaneously from the biceps brachii muscles of seven subjects during isovelocity elbow flexion against a constant load torque. The velocity was 10, 15, 20, or 25 degrees/s, and the load torque was 5-15% of the torque obtained at maximum voluntary contraction (MVC). Individual motor units were identified from the eight-channel surface EMG by tracking the waveform changes that originate from changes in the relative position of muscle fiber and electrode. In the low-load (5 and 7% MVC) experiments, 36 instances of recruitment and 22 instances of derecruitment were observed. In the middle-load (10 and 15% MVC) experiments, most of the motor units did not show an obvious change in firing rate with the elbow joint angle. The average firing rate of all the motor units, measured at elbow angles from 0 to 120 degrees (13.3-14.7 Hz), did not depend on the flexion velocity between 10 and 25 degrees/s. It was concluded that the firing rates of the activated motor units were almost constant and that some motor units were recruited and derecruited during the isovelocity flexion movements. These are the first such findings.

  • Eigen Image Recognition of Pulmonary Nodules from Thoracic CT Images by Use of Subspace Method

    Gentaro FUKANO  Yoshihiko NAKAMURA  Hotaka TAKIZAWA  Shinji MIZUNO  Shinji YAMAMOTO  Kunio DOI  Shigehiko KATSURAGAWA  Tohru MATSUMOTO  Yukio TATENO  Takeshi IINUMA  

     
    PAPER-Biological Engineering

      Page(s):
    1273-1283

    We previously proposed a recognition method for pulmonary nodules based on experimentally selected feature values (such as contrast and circularity) of pathologic candidate regions detected by our Variable N-Quoit (VNQ) filter. In this paper, we propose a new recognition method for pulmonary nodules that uses not experimentally selected feature values but each CT value itself in a region of interest (ROI) as a feature. The proposed method has two phases: learning and recognition. In the learning phase, the pathologic candidate regions are first classified into several clusters based on principal component scores. Each score is calculated from the set of CT values in the ROI, which is regarded as a feature vector; eigenvectors and eigenvalues are then calculated for each cluster by applying principal component analysis to the cluster. The eigenvectors (we call them "eigen-images") corresponding to the S largest eigenvalues are used as base vectors of the cluster's subspace in the feature space. In the recognition phase, correlations are measured between the feature vector derived from test data and the subspaces spanned by the eigen-images. If the correlation with a nodule subspace is large, the pathologic candidate region is judged to be a nodule; otherwise, it is judged to be a normal organ. In the experiments, we first determine the optimal number of subspace dimensions, and then demonstrate the robustness of our algorithm using simulated nodule images.
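
    The subspace (eigen-image) idea can be sketched as follows, with synthetic vectors standing in for ROI CT values; the cluster construction, dimensions, and data here are hypothetical, not the paper's. Each class is represented by the span of its top-S principal directions, and a sample is assigned to the class whose subspace it correlates with most strongly.

    ```python
    # A minimal sketch of subspace-method classification (CLAFIC-style,
    # without mean subtraction). Data and parameters are hypothetical.
    import numpy as np

    def class_subspace(samples, S):
        """Top-S principal directions of a (n_samples, n_features) matrix."""
        _, _, Vt = np.linalg.svd(samples, full_matrices=False)
        return Vt[:S]                         # orthonormal basis, shape (S, d)

    def correlation(x, basis):
        """Squared cosine between x and its projection onto the subspace."""
        x = x / np.linalg.norm(x)
        coeffs = basis @ x
        return float(coeffs @ coeffs)

    rng = np.random.default_rng(0)
    nodules = rng.normal(1.0, 0.3, (40, 64))  # stand-ins for nodule ROIs
    normals = rng.normal(0.0, 0.3, (40, 64))  # stand-ins for normal organs
    bases = {"nodule": class_subspace(nodules, S=5),
             "normal": class_subspace(normals, S=5)}

    test = rng.normal(1.0, 0.3, 64)           # an unseen nodule-like sample
    label = max(bases, key=lambda c: correlation(test, bases[c]))
    print("classified as:", label)            # expected: nodule
    ```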

  • A Simple Predictive Method for Discriminating Costly Classes Using Class Size Metric

    Hirohisa AMAN  Naomi MOCHIDUKI  Hiroyuki YAMADA  Matu-Tarow NODA  

     
    LETTER-Software Engineering

      Page(s):
    1284-1288

    Larger object classes often become more costly classes in the maintenance phase of object-oriented software; consequently, classes should be kept small or medium in size. In order to discuss such a desirable size, this paper proposes a simple method for predictively discriminating costly classes across version upgrades, using a class size metric, Stmts. Concretely, a threshold value of class size (in Stmts) is derived through empirical studies of many Java classes. The threshold value succeeded as a predictive discriminator for about 73% of the sample Java classes.

  • Indexing Moving Objects for Future Position Retrieval on Location-Based Services

    Dong-Min SEO  Kyoung-Soo BOK  Jae-Soo YOO  

     
    LETTER-Database

      Page(s):
    1289-1293

    Due to the continuous growth of wireless communication technology and mobile equipment, the need for storing and processing data of moving objects arises in a wide range of location-based applications. In this paper, we propose a new spatio-temporal index structure for moving objects, namely the TPKDB-tree, which supports efficient retrieval of future positions and reduces the update cost.

  • Minimizing the Buffer Size in Fault-Tolerant Video Servers for VBR Streams

    Minseok SONG  Heonshik SHIN  

     
    LETTER-Dependable Computing

      Page(s):
    1294-1298

    To guarantee the high reliability of video services, video servers usually adopt parity-encoding techniques in which data blocks and their associated parity blocks form a parity group. For real-time video service, all the blocks in a parity group are prefetched in order to cope with a possible disk failure, thereby incurring a buffering overhead. In this paper, we propose a new scheme called Round-level Parity Grouping (RPG) to reduce the buffer overhead while restoring VBR video streams in the presence of a faulty disk. RPG allows variable parity group sizes so that the exact amount of data is retrieved during each round. Based on RPG, we have developed a storage allocation algorithm for effective buffer management. Experimental results show that our proposed scheme reduces the buffer requirement by 20% to 25%.

  • Speech Quality Enhancement Using Wavelet Reconstruction Filters

    Seiji HAYASHI  Masahiro SUGUIMOTO  

     
    LETTER-Speech and Hearing

      Page(s):
    1299-1303

    The present paper describes a quality enhancement of band-limited speech signals. In regular telephone communication, the quality of the received speech signal is degraded by band limitation. We propose an effective yet simple scheme for enhancing narrowband speech signals, in which the missing frequency components are estimated from the band-limited signals. The proposed method utilizes the aliasing components generated by the wavelet reconstruction filters in the inverse discrete wavelet transform. The enhancement has been verified by applying this method to speech samples transmitted via telephone lines, obtaining a noticeable improvement in speech quality.
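
    The aliasing mechanism can be illustrated in miniature (this is a toy, with a Haar filter pair and an arbitrary mixing gain, not the authors' filters or scheme): zero-insertion upsampling through a reconstruction filter mirrors the baseband spectrum into the empty upper band, and those image components can serve as an estimate of the missing high frequencies.

    ```python
    # A toy illustration of aliasing images created by wavelet
    # reconstruction (synthesis) filtering of an upsampled signal.
    import numpy as np

    def upsample_reconstruct(x, g):
        """Zero-insertion upsampling followed by synthesis filtering."""
        up = np.zeros(2 * len(x))
        up[::2] = x                   # doubling the rate mirrors the spectrum
        return np.convolve(up, g)[: 2 * len(x)]

    fs = 8000                             # telephone-band rate (assumed)
    t = np.arange(fs) / fs
    narrow = np.sin(2 * np.pi * 1000 * t) # a band-limited 1 kHz tone

    low = np.array([1.0, 1.0]) / np.sqrt(2)    # Haar synthesis low-pass
    high = np.array([1.0, -1.0]) / np.sqrt(2)  # Haar synthesis high-pass
    base = upsample_reconstruct(narrow, low)   # baseband copy at 16 kHz
    alias = upsample_reconstruct(narrow, high) # image near 7 kHz (aliasing)
    enhanced = base + 0.3 * alias              # mix aliasing in as HF estimate

    spec = np.abs(np.fft.rfft(enhanced))
    k = np.argmax(spec[len(spec) // 2:]) + len(spec) // 2
    print("strongest upper-band component near", k * 16000 / len(enhanced), "Hz")
    ```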

  • A Hybrid Method for Vascular Segmentation

    Yongqiang ZHAO  Minglu LI  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1304-1305

    This letter presents a method for extracting vascular structures from magnetic resonance angiography (MRA) volumes based on the geometric variational principle. A minimizing functional couples flux-maximizing geometric flows with the geodesic active surface model, to which a geometrical description of the vessel structure is added. Furthermore, the surface evolution is represented with the level set method, as it is intrinsic and topologically flexible.

  • Improving Data Recovery in MPEG-4

    Liyang XU  Sunil KUMAR  Mrinal MANDAL  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    1306-1309

    In this paper, we present an MPEG-4 decoding scheme based on reversible variable-length codes (RVLCs). The scheme is purely decoder-based, and compliance with the standard is fully maintained; moreover, the data recovery scheme suggested in MPEG-4 can still be used as the default. Simulation results show that, compared with the existing MPEG-4 scheme, the proposed scheme achieves better data recovery, in terms of both PSNR and perceptual quality, from the error-propagation region of a corrupted video packet.
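
    The principle of two-way decoding that underlies RVLC-based recovery can be sketched as follows; the codebook and the packet are hypothetical illustrations, not MPEG-4's actual RVLC tables or resynchronization syntax.

    ```python
    # A minimal sketch of bidirectional RVLC decoding around a corrupted bit:
    # decode forward up to the error and backward from the packet end, so
    # only the symbols spanning the error are lost.
    CODE = {"a": "0", "b": "11", "c": "101"}   # symmetric codewords: reversible
    DECODE = {v: k for k, v in CODE.items()}
    REVERSE = {v[::-1]: k for k, v in CODE.items()}

    def decode(bits, table):
        """Greedy variable-length decoding; undecodable leftovers are dropped."""
        out, buf = [], ""
        for bit in bits:
            buf += bit
            if buf in table:
                out.append(table[buf])
                buf = ""
        return out

    packet = "".join(CODE[s] for s in "abcba")   # "011101110"
    corrupt = packet[:4] + "x" + packet[5:]      # one bit position corrupted

    head, tail = corrupt.split("x")
    fwd = decode(head, DECODE)                   # forward up to the error
    bwd = decode(tail[::-1], REVERSE)            # backward from the packet end
    print("recovered:", fwd, "+ ... +", list(reversed(bwd)))
    # recovered: ['a', 'b'] + ... + ['b', 'a']  (only the corrupted 'c' is lost)
    ```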

  • Screen Pattern Removal for Character Pattern Extraction from High-Resolution Color Document Images

    Hideaki GOTO  Hirotomo ASO  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    1310-1313

    Screen patterns used in offset-printed documents have been one of the great obstacles to developing document recognition systems that handle color documents. This paper proposes a selective smoothing method for filtering out screen patterns/noise in high-resolution color document images. Experimental results show that the method yields significant improvements in character pattern extraction.
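
    A generic instance of selective smoothing (with a hypothetical window size and threshold; the paper's filter conditions differ in detail) is sketched below: neighborhoods whose variance is low, which is typical of screen texture on flat background, are averaged away, while high-variance neighborhoods around character strokes are left untouched.

    ```python
    # A minimal sketch of variance-gated selective smoothing.
    import numpy as np

    def selective_smooth(img, win=5, var_thresh=200.0):
        pad = win // 2
        padded = np.pad(img.astype(float), pad, mode="edge")
        out = img.astype(float).copy()
        for i in range(img.shape[0]):
            for j in range(img.shape[1]):
                patch = padded[i:i + win, j:j + win]
                if patch.var() < var_thresh:   # flat: likely screen texture
                    out[i, j] = patch.mean()   # smooth it away
                # high-variance patch (character edge): keep the pixel as-is
        return out

    rng = np.random.default_rng(0)
    page = np.full((32, 32), 200.0) + rng.normal(0, 10, (32, 32))  # dotted bg
    page[10:22, 15:17] = 20.0                                      # dark stroke
    print("std before/after:", round(page.std(), 1),
          round(selective_smooth(page).std(), 1))
    ```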