Hiroaki AKUTSU Ko ARAI
Lanxi LIU Pengpeng YANG Suwen DU Sani M. ABDULLAHI
Xiaoguang TU Zhi HE Gui FU Jianhua LIU Mian ZHONG Chao ZHOU Xia LEI Juhang YIN Yi HUANG Yu WANG
Yingying LU Cheng LU Yuan ZONG Feng ZHOU Chuangao TANG
Jialong LI Takuto YAMAUCHI Takanori HIRANO Jinyu CAI Kenji TEI
Wei LEI Yue ZHANG Hanfeng XIE Zebin CHEN Zengping CHEN Weixing LI
David CLARINO Naoya ASADA Atsushi MATSUO Shigeru YAMASHITA
Takashi YOKOTA Kanemitsu OOTSU
Xiaokang Jin Benben Huang Hao Sheng Yao Wu
Tomoki MIYAMOTO
Ken WATANABE Katsuhide FUJITA
Masashi UNOKI Kai LI Anuwat CHAIWONGYEN Quoc-Huy NGUYEN Khalid ZAMAN
Takaharu TSUBOYAMA Ryota TAKAHASHI Motoi IWATA Koichi KISE
Chi ZHANG Li TAO Toshihiko YAMASAKI
Ann Jelyn TIEMPO Yong-Jin JEONG
Haruhisa KATO Yoshitaka KIDANI Kei KAWAMURA
Jiakun LI Jiajian LI Yanjun SHI Hui LIAN Haifan WU
Gyuyeong KIM
Hyun KWON Jun LEE
Fan LI Enze YANG Chao LI Shuoyan LIU Haodong WANG
Guangjin Ouyang Yong Guo Yu Lu Fang He
Yuyao LIU Qingyong LI Shi BAO Wen WANG
Cong PANG Ye NI Jia Ming CHENG Lin ZHOU Li ZHAO
Nikolay FEDOROV Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Yukasa MURAKAMI Yuta YAMASAKI Masateru TSUNODA Akito MONDEN Amjed TAHIR Kwabena Ebo BENNIN Koji TODA Keitaro NAKASAI
Kazuya KAKIZAKI Kazuto FUKUCHI Jun SAKUMA
Yitong WANG Htoo Htoo Sandi KYAW Kunihiro FUJIYOSHI Keiichi KANEKO
Waqas NAWAZ Muhammad UZAIR Kifayat ULLAH KHAN Iram FATIMA
Haeyoung Lee
Ji XI Pengxu JIANG Yue XIE Wei JIANG Hao DING
Weiwei JING Zhonghua LI
Sena LEE Chaeyoung KIM Hoorin PARK
Akira ITO Yoshiaki TAKAHASHI
Rindo NAKANISHI Yoshiaki TAKATA Hiroyuki SEKI
Chuzo IWAMOTO Ryo TAKAISHI
Chih-Ping Wang Duen-Ren Liu
Yuya TAKADA Rikuto MOCHIDA Miya NAKAJIMA Syun-suke KADOYA Daisuke SANO Tsuyoshi KATO
Yi Huo Yun Ge
Rikuto MOCHIDA Miya NAKAJIMA Haruki ONO Takahiro ANDO Tsuyoshi KATO
Koichi FUJII Tomomi MATSUI
Yaotong SONG Zhipeng LIU Zhiming ZHANG Jun TANG Zhenyu LEI Shangce GAO
Souhei TAKAGI Takuya KOJIMA Hideharu AMANO Morihiro KUGA Masahiro IIDA
Jun ZHOU Masaaki KONDO
Tetsuya MANABE Wataru UNUMA
Kazuyuki AMANO
Takumi SHIOTA Tonan KAMATA Ryuhei UEHARA
Hitoshi MURAKAMI Yutaro YAMAGUCHI
Jingjing Liu Chuanyang Liu Yiquan Wu Zuo Sun
Zhenglong YANG Weihao DENG Guozhong WANG Tao FAN Yixi LUO
Yoshiaki TAKATA Akira ONISHI Ryoma SENDA Hiroyuki SEKI
Dinesh DAULTANI Masayuki TANAKA Masatoshi OKUTOMI Kazuki ENDO
Kento KIMURA Tomohiro HARAMIISHI Kazuyuki AMANO Shin-ichi NAKANO
Ryotaro MITSUBOSHI Kohei HATANO Eiji TAKIMOTO
Genta INOUE Daiki OKONOGI Satoru JIMBO Thiem Van CHU Masato MOTOMURA Kazushi KAWAMURA
Hikaru USAMI Yusuke KAMEDA
Yinan YANG
Takumi INABA Takatsugu ONO Koji INOUE Satoshi KAWAKAMI
Fengshan ZHAO Qin LIU Takeshi IKENAGA
Naohito MATSUMOTO Kazuhiro KURITA Masashi KIYOMI
Tomohiro KOBAYASHI Tomomi MATSUI
Shin-ichi NAKANO
Ming PAN
Takuya KATAYAMA Tatsuo NAKAJIMA Taiichi YUASA Tomoji KISHI Shin NAKAJIMA Shuichi OIKAWA Masahiro YASUGI Toshiaki AOKI Mitsutaka OKAZAKI Seiji UMATANI
We have launched the "Highly-Reliable Embedded Software Development" project as part of the e-Society Project, supported by the Ministry of Education, Culture, Sports, Science and Technology (MEXT), Japan. The aim of this project is to enable industry to produce highly reliable and advanced software by introducing the latest software technologies into embedded software development. In this paper, we give an overview of the project and of our activities and results so far.
Kazuma AIZAWA Haruhiko KAIYA Kenji KAIJIRI
We introduce a method, called the FC method, for maintaining software resources, such as source code and design documents, in consumer electronics products. Because a consumer electronics product is revised frequently and rapidly, the software components in such a product are revised in the same way. However, it is not easy for software engineers to follow the revisions of the product, because requirements changes for the product, including changes to its functionality and its hardware components, are largely independent of the structure of the current software resources. The FC method lets software engineers restructure software resources, especially design documents, stepwise, so that they can follow the requirements changes for the product easily. We report an application of this method in our company to validate it. From this application, we confirmed that software quality improved by about a factor of two and that the efficiency of the development process improved by more than a factor of four.
Embedded systems are used in a broad range of fields and have become one of the indispensable, fundamental technologies of today's information society. As embedded systems grow large-scale and complicated, it is promising to design and develop them as a system LSI (Large Scale Integration). The structure of system LSIs has been growing more complex every year, and with conventional methods and techniques the improvement in design productivity has not kept pace with this complexity. Hence, a design flow for system LSIs has already been proposed in which the specification of a system is described in UML (Unified Modeling Language) and the system is then designed in a system-level language. With this flow, it is important to establish how to convert specification descriptions and designs from UML to a system-level language. This paper proposes, extracts and verifies transformation rules from UML to SpecC, one of the system-level languages. Based on the rules, SpecC code has actually been generated from the elements of UML diagrams. As an example to verify the rules, a "headlight control system of a car" was adopted, and it was confirmed that the example executes correctly in simulation. By using the transformation rules proposed in this paper, the specification and implementation of a system can be connected seamlessly, which can improve the design productivity of system LSIs and of embedded systems.
Sousuke AMASAKI Yasunari TAKAGI Osamu MIZUNO Tohru KIKUNO
Recently, software development projects have been required to produce highly reliable systems within short periods and at low cost. In such a situation, software quality prediction helps to confirm that the software product satisfies the required quality expectations. In this paper, using a Bayesian Belief Network (BBN), we construct a prediction model based on relationships elicited from the embedded software development process. On the one hand, reflecting a characteristic of embedded software development, we propose to classify test and debug activities into two distinct activities, one for software and one for hardware; we call the resulting model "the BBN for an embedded software development process". On the other hand, we define "the BBN for a general software development process" as a model that does not make this classification, but rather merges the two into a single activity. Finally, we conducted experimental evaluations by applying these two BBNs to actual project data. The results show that the BBN for the embedded software development process is superior to the BBN for the general development process and is effective in practical use.
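To illustrate the kind of computation such a model performs, the sketch below evaluates a toy three-node belief network in which software and hardware test results are separate evidence nodes, echoing the split of test/debug activities described above. All probability tables are hypothetical and are not taken from the paper.

```python
# Toy BBN: design quality Q influences separate software (SW) and
# hardware (HW) test outcomes. All CPT numbers are hypothetical.
P_Q = {"good": 0.7, "poor": 0.3}                # prior over quality
P_SW = {"good": {"pass": 0.9, "fail": 0.1},     # P(sw_test | Q)
        "poor": {"pass": 0.4, "fail": 0.6}}
P_HW = {"good": {"pass": 0.8, "fail": 0.2},     # P(hw_test | Q)
        "poor": {"pass": 0.3, "fail": 0.7}}

def posterior_quality(sw: str, hw: str) -> dict:
    """P(Q | sw_test, hw_test) by direct enumeration over Q."""
    joint = {q: P_Q[q] * P_SW[q][sw] * P_HW[q][hw] for q in P_Q}
    z = sum(joint.values())
    return {q: v / z for q, v in joint.items()}
```

Keeping the software and hardware test nodes separate, as the proposed model does, is what allows the hardware evidence to be weighed independently of the software evidence when the posterior is computed.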
Takeshi SUMI Osamu MIZUNO Tohru KIKUNO Masayuki HIRAYAMA
With the proliferation of ubiquitous computing, various products containing large-scale embedded software have been developed. One of the most typical features of embedded software is the concurrency of software and hardware factors: software is deeply coupled with hardware devices, and the variety of hardware makes quality assurance of embedded software more difficult. To assure the quality of embedded software more effectively, this paper discusses the features of embedded software and an effective method of quality assurance for it. We first analyze a failure distribution of embedded software and discuss the effects of hardware devices on its quality. Currently, reducing hardware-related faults requires a huge testing effort with a large number of test items, so one of the most important issues for quality assurance of embedded software is how to reduce the cost and effort of software testing. Next, focusing on hardware constraints as well as software specifications, we propose an evaluation metric for determining the functions that are important for the quality of embedded software. Furthermore, by referring to the metric, undesirable behaviors of important functions are identified as root nodes for fault tree analysis. From a case study applying the proposed method to actual project data, we confirmed that test items reflecting the properties of embedded software are constructed, and that the constructed test items are appropriate for detecting hardware-related faults in embedded systems.
There have been many arguments that the underlying structure of natural languages is beyond the descriptive capacity of context-free languages. A well-known formalism addressing this is tree adjoining grammars; less common are spine grammars, linear indexed grammars, head grammars, and combinatory categorial grammars. These grammar models are known to have the same generative power in terms of string languages and to fall into the class of mildly context-sensitive grammars. On the automaton side, it is known that the class of languages accepted by transfer pushdown automata is exactly the class of linear indexed languages. In this paper, deterministic transfer pushdown automata are introduced. We show that the language accepted by a deterministic transfer pushdown automaton is generated by an unambiguous spine grammar. Moreover, we show that there exists an inherently ambiguous language.
Ulkuhan EKINCIEL Hiroaki YAMAOKA Hiroaki YOSHIDA Makoto IKEDA Kunihiro ASADA
This paper describes the design and development of a module generator for a dual-rail PLA with embedded 2-input logic cells in a 0.35 µm CMOS technology. In order to automatically generate logic-cell-based PLA layouts from circuit specifications, the module generator is developed as a design automation tool with a structural improvement. It is based on a timing-driven design methodology and consists of logic synthesis, transistor sizing and logic cell generation, stimulus generation, and HDL model generation parts. The generator uses a design constraint to achieve flexible transistor sizing in the logic cell generation part, and the generated logic cells can easily be passed to a layout generator. The layout is generated using a 0.35 µm, 3-metal-layer CMOS technology. Moreover, an HDL model generator is developed to create delay behavior models easily and quickly, with precise timing parameters. The design complexity that is becoming an important issue for VLSI circuits can thus be partially reduced, and human-caused errors are minimized by the module generator. A PLA layout in GDS-II form and an HDL behavior model of a Boolean function with 64-bit input, 1-bit output and 220 product terms can be generated within 8 minutes on a Sun UltraSPARC-III 900 MHz processor. Because very little time is needed to compile a module, designers can try many different design configurations in order to find a better one.
Eiji KAWAI Youki KADOBAYASHI Suguru YAMAGUCHI
Polling I/O mechanisms on the Unix platform, such as select() and poll(), cause high processing overhead when they are used in a heavily loaded network server with many concurrent open sockets. This large waste of processing power causes not only service degradation but also various problems such as high electric power consumption and a worsened MTBF of server hosts. It is thus a serious issue, especially for large-scale service providers such as Internet data centers (iDCs), where a great number of heavily loaded network servers are operated. As a solution to this problem, we propose a technique for fine-grained control of the invocation intervals of polling I/O functions. The uniqueness of this study is the use of POSIX real-time scheduling to enable this fine-grained execution control. Although earlier solutions, such as explicit event delivery mechanisms, also addressed the problem, they require major modifications to the OS kernel and a transition from the traditional polling I/O model to a new explicit event-notification model. In contrast, our technique can be implemented at low cost, because it merely inserts a few small blocks of code into the server program and does not require any modification of the OS kernel.
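The core idea, stretching the interval between successive polling calls, can be sketched in a few lines. The sketch below uses Python's select module and a plain sleep for throttling rather than POSIX real-time scheduling, so it only illustrates the control structure, not the paper's implementation.

```python
import select
import time

def throttled_poll_loop(socks, handle, interval=0.01, max_rounds=None):
    """Poll the sockets with select(), but invoke select() at most once
    per `interval` seconds to bound the polling overhead."""
    rounds = 0
    while max_rounds is None or rounds < max_rounds:
        start = time.monotonic()
        readable, _, _ = select.select(socks, [], [], 0)  # non-blocking poll
        for s in readable:
            handle(s)
        # Sleep away the rest of the interval instead of re-polling
        # immediately, trading a little latency for much less CPU use.
        remaining = interval - (time.monotonic() - start)
        if remaining > 0:
            time.sleep(remaining)
        rounds += 1
```

Lengthening `interval` reduces the select() call rate (and hence CPU waste) at the cost of added response latency, which is exactly the trade-off that fine-grained interval control lets an operator tune.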
Eunjoo LEE Woochang SHIN Byungjeong LEE Chisu WU
The increasing complexity and shorter life cycles of software have made software reuse a necessity. Object-oriented development has not facilitated extensive reuse, and it has become difficult to manage and understand modern object-oriented systems, which have grown very large and complex. Components, however, provide more advanced means of structuring, describing and developing systems than objects, because they are more coarse-grained and have more domain-specific aspects. They are also well suited to today's distributed environments owing to their reusability, maintainability and granularity. In this paper, we present a process for extracting components from object-oriented systems. We define static metrics and guidelines that can be applied to transform object-oriented systems into component-based systems. Our process consists of two parts. First, basic components are created based on the composition and inheritance relationships between classes. Second, the intermediate system is refined into a component-based system using our proposed static metrics and guidelines.
The efficiency of the algorithms managing data caches has a major impact on the performance of systems that utilize client-side data caching. In these systems, two versions of data can be maintained without additional overhead by exploiting the replication of data in the server's buffer and the clients' caches. In this paper, we present a new cache consistency algorithm employing versions: Two-Version Callback Locking (2V-CBL). Our experimental results indicate that 2V-CBL performs well and, in particular, outperforms a leading cache consistency algorithm, Asynchronous Avoidance-based Cache Consistency, when some clients run only read-only transactions.
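The two-version idea itself, keeping the previous committed copy alongside the newest one so read-only transactions need not block on an in-progress update, can be sketched as follows. This is a toy illustration of version retention only, not the 2V-CBL protocol, and the names are hypothetical.

```python
class TwoVersionItem:
    """Retains the current committed version and the previous one.

    Read-only transactions read the previous version, so they never
    conflict with an updater installing a new version.
    """

    def __init__(self, value):
        self._versions = [value, value]   # [previous, current]

    def read_current(self):
        return self._versions[1]

    def read_previous(self):              # what a read-only txn would see
        return self._versions[0]

    def install(self, value):             # commit of an update transaction
        self._versions = [self._versions[1], value]
```

The appeal noted in the abstract is that the second version costs nothing extra when the data is already replicated in the server's buffer and the clients' caches.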
Masayoshi ARITSUGI Hiroki FUKATSU Yoshinari KANAMORI
Data accessed by many sites is replicated in distributed environments for performance and availability. In this paper, replication schemes are examined for parallel image convolution processing. The paper presents a system architecture that we have developed with CORBA (Common Object Request Broker Architecture) for this processing. Employing CORBA enables us to make use of a cluster of workstations, each of which has a different level of computing power. The paper also describes a parallel and distributed image convolution processing model using replicas stored in a network of workstations, and reports experimental results showing that our analytical model agrees with practical situations.
Yoshiyuki NAKAMURA Jacob SAVIR Hideo FUJIWARA
Built-in self-test (BIST) hardware is included in many chips today. This hardware is used to test the chip's functional circuits. Since the BIST hardware is manufactured with the same technology as the functional circuits themselves, it may itself be faulty. It is therefore important to assess the impact of an unreliable BIST on the product defect level after test. Williams and Brown's formula, which relates the product defect level to the manufacturing yield and fault coverage, is re-examined in this paper, with special attention given to the influence of an unreliable BIST on this relationship. We show that when the BIST hardware is used to screen the functional product, unreliable BIST circuitry tends, in many cases, to reduce the effective fault coverage and increase the corresponding product defect level. The impact of BIST unreliability is assessed for both the early-life phase and the product maturity phase.
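The Williams-Brown relation referred to above, DL = 1 - Y^(1-T) for yield Y and fault coverage T, makes the claimed effect easy to see numerically. The sketch below is a direct evaluation of that published formula; the yield and coverage numbers are illustrative choices of our own.

```python
def defect_level(y: float, t: float) -> float:
    """Williams-Brown defect level: DL = 1 - Y**(1 - T),
    where Y is the manufacturing yield and T the fault coverage, both in [0, 1]."""
    return 1.0 - y ** (1.0 - t)

# An unreliable BIST that lowers the *effective* fault coverage raises DL:
yield_ = 0.9
dl_reliable = defect_level(yield_, 0.98)   # coverage with a fault-free BIST
dl_degraded = defect_level(yield_, 0.90)   # effective coverage lost to a faulty BIST
```

With these numbers, dropping the effective coverage from 0.98 to 0.90 roughly quintuples the defect level, which is the qualitative behavior the paper analyzes.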
Jangmin O Jongwoo LEE Jae Won LEE Byoung-Tak ZHANG
Effective trading with given pattern-based multi-predictors of stock prices requires an intelligent asset allocation strategy. In this paper, we study a method of dynamic asset allocation, called the meta policy, which decides what proportion of the asset should be allocated to each trade recommendation. The meta policy makes its decision considering both the information from the multi-predictors and the current ratio of stock funds to total assets. We adopt evolutionary computation to optimize the meta policy. Experimental results on the Korean stock market show that a trading system with the proposed meta policy outperforms systems with fixed asset allocation methods.
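Optimizing a policy parameter by evolutionary computation can be sketched in its simplest form as a (1+1)-evolution strategy over a single allocation ratio. This toy sketch is our own illustration of the general technique, not the paper's meta-policy representation or fitness function.

```python
import random

def optimize_ratio(fitness, generations=200, sigma=0.05, seed=0):
    """(1+1)-evolution strategy over one allocation ratio in [0, 1]:
    mutate the current best with Gaussian noise and keep improvements."""
    rng = random.Random(seed)
    best = 0.5                      # initial allocation ratio
    best_f = fitness(best)
    for _ in range(generations):
        cand = min(1.0, max(0.0, best + rng.gauss(0.0, sigma)))
        f = fitness(cand)
        if f >= best_f:             # survivor selection: keep the better one
            best, best_f = cand, f
    return best
```

In the paper's setting the fitness would come from simulated trading returns, and the policy has more structure than a single ratio; the accept-if-better loop is the part this sketch shares with any evolutionary optimizer.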
Takahiro KUMURA Norio KAYAMA Shinichi SHIONOYA Kazuo KUMAGIRI Takao KUSANO Makoto YOSHIDA Masao IKEKAWA Ichiro KURODA Takao NISHITANI
This paper provides a performance evaluation of our audio and video CODEC, using a method for rapidly verifying and evaluating the overall performance, on real-time workloads, of system LSIs integrated with the SPXK5SC DSP core. The SPXK5SC has been developed as a DSP core well suited to system LSIs. Although it is very important to evaluate the overall performance of target LSIs on real workloads before actual LSI fabrication, software simulators are too slow to deal with real workloads, and full hardware prototyping cannot respond well to design improvements. We have therefore developed a hardware emulation approach for system LSIs integrated with an SPXK5SC DSP core, in order to evaluate the overall performance of an audio/video CODEC on a target system. Our emulation system, which uses a DSP core TEG with a bus interface and an FPGA, is suitable for overall system evaluation on real-time workloads as well as for architectural investigation. In this paper, we discuss the use of the emulation system in evaluating performance during AV CODEC execution. In addition, an architecture design based on our emulation system is also described.
The goal of gaze detection is to locate the position (on a monitor) at which a user is looking. Previous research has used a single wide-view camera that captures the user's entire face. However, the image resolution of such a camera is too low, and the fine movements of the user's eye cannot be detected exactly. We therefore propose a new gaze detection system with dual cameras (a wide-view and a narrow-view camera). In order to locate the user's eye position accurately, the narrow-view camera has auto focusing/panning/tilting functionality based on the 3D eye positions detected by the wide-view camera. In addition, we use IR-LED illuminators for both the wide-view and narrow-view cameras, which ease the detection of facial features and pupil and iris positions. To overcome the problem of specular reflections on glasses caused by an illuminator, we use dual IR-LED illuminators for each camera and detect the accurate eye position that is not hidden by the specular reflection. Experimental results show that the gaze detection error between the computed positions and the real ones is about 2.89 cm (RMS).
Yosuke SATO Tetsuji OGAWA Tetsunori KOBAYASHI
We propose a modified Hidden Markov Model (HMM) with a view to improving gesture recognition from a moving camera. The conventional HMM is formulated to deal with only one feature candidate per frame. For a mobile robot, however, the background and the lighting conditions are always changing, which makes feature extraction difficult: it is almost impossible to extract a reliable feature vector under such conditions. In this paper, we define a new gesture recognition framework in which multiple candidate feature vectors are generated with confidence measures, and the HMM is extended to deal with these multiple feature vectors. Experimental results comparing the proposed system with feature vectors based on DCT, and with the method of selecting only a single candidate feature point, verify the effectiveness of the proposed technique.
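One plausible way to let an HMM consume multiple feature candidates per frame is to take, at each frame, the best confidence-weighted emission over the candidates during Viterbi decoding. The sketch below illustrates that idea on a discrete toy HMM; it is our own simplification, not necessarily the paper's exact extension.

```python
def viterbi_multi(frames, states, start_p, trans_p, emit_p):
    """Viterbi decoding where each frame is a list of
    (feature, confidence) candidates; a state's emission score is the
    best confidence-weighted likelihood over the frame's candidates."""
    def emit(s, frame):
        return max(conf * emit_p[s].get(f, 0.0) for f, conf in frame)

    v = {s: start_p[s] * emit(s, frames[0]) for s in states}
    for frame in frames[1:]:
        v = {s: max(v[t] * trans_p[t][s] for t in states) * emit(s, frame)
             for s in states}
    return max(v, key=v.get)   # most likely final state
```

Because every candidate contributes through its confidence, a noisy frame with one bad and one good candidate no longer forces the decoder to commit to a single, possibly wrong, feature vector.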
Myung-Seok CHOI Kong-Joo LEE Key-Sun CHOI Gil Chang KIM
It is not always possible to find a global parse for an input sentence, owing to problems such as errors in the sentence and incompleteness of the lexicon and grammar. Partial parsing is an alternative approach to these problems: partial parsing techniques try to recover syntactic information efficiently and reliably by sacrificing completeness and depth of analysis. One of the difficulties in partial parsing is how the grammar can be extracted automatically. In this paper we present a method for automatically extracting partial parsing rules from a tree-annotated corpus using the decision tree method. Our goal is deterministic global parsing using partial parsing rules; in other words, we aim to extract partial parsing rules with higher accuracy and broader coverage. First, we define a rule template that allows a subtree to be learned for a given substring, so that the resultant rules are more specific and stricter to apply. Second, rule candidates extracted from a training corpus are enriched with contextual and lexical information using the decision tree method and verified through cross-validation. Last, we underspecify non-deterministic rules by merging ambiguous substructures in those rules. The learned grammar is similar to a phrase structure grammar with contextual and lexical information, but allows building structures of depth one or more. Thanks to automatic learning, the partial parsing rules are consistent and domain-independent. Partial parsing with this grammar processes an input sentence deterministically using a longest-match heuristic, applying the rules to the input sentence recursively. The experiments showed that a partial parser using the automatically extracted rules is not only accurate and efficient but also achieves reasonable coverage for Korean.
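The deterministic longest-match application of such rules can be sketched as follows. The rule table here maps tag sequences to phrase labels and is entirely hypothetical, standing in for the rules the paper learns with decision trees.

```python
def partial_parse(tags, rules, max_len=4):
    """Greedy longest-match chunking: scan left to right and, at each
    position, apply the longest rule whose tag sequence matches."""
    chunks, i = [], 0
    while i < len(tags):
        for n in range(min(max_len, len(tags) - i), 0, -1):
            key = tuple(tags[i:i + n])
            if key in rules:
                chunks.append((rules[key], list(key)))
                i += n
                break
        else:                         # no rule matched: pass the tag through
            chunks.append(("O", [tags[i]]))
            i += 1
    return chunks
```

A single left-to-right pass like this corresponds to one level of analysis; applying the rules recursively over the resulting chunk labels is what lets the learned grammar build structures of depth one or more.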
To boost the translation quality of corpus-based MT systems for speech translation, the technique of splitting an input utterance appears promising. Much previous research has used word-sequence characteristics, such as N-gram clues, to find splitting positions. In this paper, to supplement splitting methods based on word-sequence characteristics, we introduce another clue: similarity based on edit distance. In our splitting method, we generate candidate splittings of an utterance based on N-grams and select the best one by measuring the similarity of the resulting utterances against a corpus. This selection is founded on the assumption that a corpus-based MT system can correctly translate an utterance that is similar to one in its training corpus. We conducted experiments using three MT systems: two EBMT systems, one using a phrase as the translation unit and the other using an utterance, and an SMT system. The translation results under various conditions were evaluated by objective measures and a subjective measure. The experimental results demonstrate that the proposed method is valuable for all three systems: using utterance similarity can improve translation quality.
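The similarity clue can be illustrated with a standard token-level edit distance: each candidate splitting is scored by how close its segments are to utterances in the training corpus, and the closest candidate wins. The sketch below is a minimal version of that selection step; the exact scoring used in the paper may differ.

```python
def edit_distance(a, b):
    """Standard Levenshtein distance between two token sequences."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,               # deletion
                           cur[j - 1] + 1,            # insertion
                           prev[j - 1] + (x != y)))   # substitution
        prev = cur
    return prev[-1]

def best_split(candidates, corpus):
    """Pick the candidate split whose segments are, in total, closest
    (by edit distance) to their nearest corpus utterance."""
    def score(split):
        return sum(min(edit_distance(seg, u) for u in corpus)
                   for seg in split)
    return min(candidates, key=score)
```

The scoring embodies the abstract's assumption directly: segments that nearly match training-corpus utterances are the ones the corpus-based MT system is most likely to translate correctly.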
Ryuhei OKUNO Kazuya MAEKAWA Jun AKAZAWA Masaki YOSHIDA Kenzo AKAZAWA
Eight-channel surface myoelectric signals (EMGs) of the biceps brachii muscles of seven subjects were recorded simultaneously during isovelocity elbow flexion against a constant load torque. The velocity was 10, 15, 20 or 25 degree/s, and the load torque was 5-15% of the torque obtained at maximum voluntary contraction (MVC). Individual motor units were identified from the eight-channel surface EMG by tracking the waveform changes that originate from changes in the relative position of muscle fiber and electrode. In the low-load (5 and 7% MVC) experiment, 36 instances of recruitment and 22 instances of derecruitment were observed. In the middle-load (10 and 15% MVC) experiment, most of the motor units did not show an obvious change in firing rate with the elbow joint angle. The average firing rate of all the motor units measured at elbow angles of 0 to 120 degrees (13.3-14.7 Hz) did not depend on flexion velocity between 10 and 25 degree/s. It was concluded that the firing rates of the activated MUs were almost constant and that some MUs were recruited and derecruited during the isovelocity flexion movements. These are the first such findings.
Gentaro FUKANO Yoshihiko NAKAMURA Hotaka TAKIZAWA Shinji MIZUNO Shinji YAMAMOTO Kunio DOI Shigehiko KATSURAGAWA Tohru MATSUMOTO Yukio TATENO Takeshi IINUMA
We have previously proposed a recognition method for pulmonary nodules based on experimentally selected feature values (such as contrast, circularity, etc.) of pathologic candidate regions detected by our Variable N-Quoit (VNQ) filter. In this paper, we propose a new recognition method for pulmonary nodules that uses, instead of experimentally selected feature values, each CT value itself within a region of interest (ROI) as a feature. The proposed method has two phases: learning and recognition. In the learning phase, the pathologic candidate regions are first classified into several clusters based on a principal component score, calculated from the set of CT values in the ROI regarded as a feature vector; eigenvectors and eigenvalues are then calculated for each cluster by applying principal component analysis to the cluster. The eigenvectors (which we call "eigen-images") corresponding to the S largest eigenvalues are used as base vectors for subspaces of the clusters in a feature space. In the recognition phase, correlations are measured between the feature vector derived from test data and the subspace spanned by the eigen-images. If the correlation with the nodule subspace is large, the pathologic candidate region is judged to be a nodule; otherwise, it is judged to be a normal organ. In the experiments, we first determine the optimal number of subspace dimensions, and then demonstrate the robustness of our algorithm using simulated nodule images.
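The recognition step, measuring the correlation of a feature vector with a subspace spanned by orthonormal eigen-images, amounts to comparing squared projection lengths, as in the classic subspace method. The toy sketch below uses hand-made 3-dimensional vectors in place of real CT-value vectors; the bases are hypothetical.

```python
def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def subspace_correlation(x, basis):
    """Squared cosine between x and the subspace spanned by an
    orthonormal basis: ||projection||^2 / ||x||^2, a value in [0, 1]."""
    return sum(dot(x, e) ** 2 for e in basis) / dot(x, x)

def classify(x, nodule_basis, normal_basis):
    """Assign x to whichever subspace it correlates with more strongly."""
    return ("nodule"
            if subspace_correlation(x, nodule_basis)
            > subspace_correlation(x, normal_basis)
            else "normal")
```

Choosing S, the number of eigen-images kept per cluster, sets the dimension of each subspace and hence how tightly the subspace fits its cluster, which is why the experiments first tune the number of subspace dimensions.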
Hirohisa AMAN Naomi MOCHIDUKI Hiroyuki YAMADA Matu-Tarow NODA
Larger object classes often become more costly in the maintenance phase of object-oriented software; consequently, classes should be constructed at a small or medium size. In order to discuss this desirable size, this paper proposes a simple method for predictively discriminating costly classes across version upgrades, using a class size metric, Stmts. Concretely, a threshold value of class size (in Stmts) is derived through empirical studies of many Java classes. The threshold value succeeded as a predictive discriminator for about 73% of the sample Java classes.
Dong-Min SEO Kyoung-Soo BOK Jae-Soo YOO
Due to the continuous growth of wireless communication technology and mobile equipment, the need to store and process the data of moving objects arises in a wide range of location-based applications. In this paper, we propose a new spatio-temporal index structure for moving objects, namely the TPKDB-tree, which supports efficient retrieval of future positions and reduces the update cost.
To guarantee highly reliable video services, video servers usually adopt parity-encoding techniques in which data blocks and their associated parity blocks form a parity group. For real-time video service, all the blocks in a parity group are prefetched in order to cope with a possible disk failure, thereby incurring a buffering overhead. In this paper, we propose a new scheme called Round-level Parity Grouping (RPG) to reduce this buffer overhead while restoring VBR video streams in the presence of a faulty disk. RPG allows variable parity group sizes so that exactly the required amount of data is retrieved during each round. Based on RPG, we have developed a storage allocation algorithm for effective buffer management. Experimental results show that our proposed scheme reduces the buffer requirement by 20% to 25%.
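The parity-group mechanism the scheme builds on is plain XOR parity: the parity block is the XOR of the data blocks in the group, so any single lost block can be rebuilt from the survivors. A minimal byte-level sketch, assuming equal-sized blocks:

```python
from functools import reduce

def xor_blocks(a: bytes, b: bytes) -> bytes:
    """Bytewise XOR of two equal-length blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def make_parity(data_blocks):
    """Parity block = XOR of all data blocks in the group."""
    return reduce(xor_blocks, data_blocks)

def recover_lost(surviving_blocks, parity):
    """Rebuild the single missing data block from the survivors."""
    return reduce(xor_blocks, surviving_blocks, parity)
```

With round-level grouping, the group size, and hence the number of blocks XORed together, varies from round to round so that each round fetches exactly the data it needs; the recovery arithmetic itself is unchanged.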
Seiji HAYASHI Masahiro SUGUIMOTO
The present paper describes a quality enhancement of band-limited speech signals. In regular telephone communication, the quality of the received speech signal is degraded by band limitation. We propose an effective but simple scheme that estimates, from the band-limited signal, the frequency components of narrowband speech signals. The proposed method utilizes the aliasing components generated by the wavelet reconstruction filters of the inverse discrete wavelet transform. The enhancement has been verified by applying this method to speech samples transmitted via telephone lines, yielding a noticeable improvement in speech quality.
This letter presents a method for extracting vascular structures from magnetic resonance angiography (MRA) volumes based on the geometric variational principle. A minimizing functional is coupled with flux-maximizing geometric flows and the geodesic active surface model, while a geometrical description of the vessel structure is added. Furthermore, the level set method is used to represent the surface evolution, as it is intrinsic and topologically flexible.
Liyang XU Sunil KUMAR Mrinal MANDAL
In this paper, we present an MPEG-4 decoding scheme based on reversible variable length codes. The scheme is purely decoder-based, and compliance with the standard is fully maintained. Moreover, the data recovery scheme suggested in MPEG-4 can still be used as the default. Simulation results show that the proposed scheme achieves better data recovery than the existing MPEG-4 scheme, both in terms of PSNR and perceptual quality, in the error-propagation region of a corrupted video packet.
The screen patterns used in offset-printed documents have been one of the great obstacles in developing document recognition systems that handle color documents. This paper proposes a selective smoothing method for filtering screen patterns/noise in high-resolution color document images. Experimental results show that the method yields significant improvements in character pattern extraction.