IEICE TRANSACTIONS on Information and Systems

  • Impact Factor: 0.72
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4

Volume E89-D No.10 (Publication Date: 2006/10/01)

    Regular Section
  • The Relations among Watson-Crick Automata and Their Relations with Context-Free Languages

    Satoshi OKAWA  Sadaki HIROSE  

     
    PAPER-Automata and Formal Language Theory
    Page(s): 2591-2599

    Watson-Crick automata were introduced as a new computational model and have been intensively investigated with respect to their computational power. In this paper, aiming to completely establish the relations among the language families defined by Watson-Crick automata and the family of context-free languages, we obtain the following results: (1) F1WK = FSWK = FWK, (2) FWK = AWK, (3) there exists a language which is not context-free but belongs to NWK, and (4) there exists a context-free language which does not belong to AWK.
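
    Taken together, results (1)-(4) can be restated in one line (CF is introduced here to denote the family of context-free languages; the other family names are the abstract's):

```latex
\[
\mathrm{F_1WK} = \mathrm{FSWK} = \mathrm{FWK} = \mathrm{AWK},
\qquad
\mathrm{NWK} \not\subseteq \mathrm{CF},
\qquad
\mathrm{CF} \not\subseteq \mathrm{AWK}
\]
```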

  • Node-Disjoint Paths Algorithm in a Transposition Graph

    Yasuto SUZUKI  Keiichi KANEKO  Mario NAKAMORI  

     
    PAPER-Algorithm Theory
    Page(s): 2600-2605

    In this paper, we give an algorithm for the node-to-set disjoint paths problem in a transposition graph. The algorithm runs in time polynomial in n for an n-transposition graph. It is based on recursion and is divided into two cases according to the distribution of the destination nodes. The maximum length of each path and the time complexity of the algorithm are theoretically estimated to be 3n - 5 and O(n^7), respectively, and the average performance is evaluated by computer experiments.
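
    For readers unfamiliar with the structure, here is a minimal Python sketch (background only, not the authors' routing algorithm) of the graph in question: the n-transposition graph has the n! permutations as nodes, two being adjacent when they differ by a swap of two positions.

```python
# Background sketch: the n-transposition graph. Nodes are the n!
# permutations of {0,...,n-1}; two are adjacent when one transposition
# (a swap of two positions) turns one into the other.
from itertools import permutations, combinations

def transposition_graph(n):
    """Adjacency lists of the n-transposition graph."""
    adj = {}
    for p in permutations(range(n)):
        nbrs = []
        for i, j in combinations(range(n), 2):
            q = list(p)
            q[i], q[j] = q[j], q[i]
            nbrs.append(tuple(q))
        adj[p] = nbrs
    return adj

g = transposition_graph(3)
print(len(g))               # 6 nodes (3! permutations)
print(len(g[(0, 1, 2)]))    # 3 neighbours: C(3, 2) transpositions
```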

  • Mining Communities on the Web Using a Max-Flow and a Site-Oriented Framework

    Yasuhito ASANO  Takao NISHIZEKI  Masashi TOYODA  Masaru KITSUREGAWA  

     
    PAPER-Data Mining
    Page(s): 2606-2615

    There are several methods for mining communities on the Web using hyperlinks. One well-known method is the max-flow based method proposed by Flake et al. The method adopts a page-oriented framework; that is, it uses a page on the Web as the unit of information, as do other methods including HITS and trawling. Recently, Asano et al. built a site-oriented framework which uses a site as the unit of information, and they experimentally showed that trawling on the site-oriented framework often outputs significantly better communities than trawling on the page-oriented framework. However, it has not been known whether the site-oriented framework is effective in mining communities through the max-flow based method. In this paper, we first point out several problems of the max-flow based method, mainly owing to the page-oriented framework, and then propose solutions to these problems by utilizing several advantages of the site-oriented framework. Computational experiments reveal that our max-flow based method on the site-oriented framework is much more effective at mining communities related to the topics of given pages than the original max-flow based method on the page-oriented framework.
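
    As a rough illustration of the max-flow approach (a generic rendering of Flake et al.-style community extraction, not the authors' exact construction; the capacities and source/sink wiring are illustrative), note that nothing in it depends on whether a graph node stands for a single page or a whole site:

```python
# Generic max-flow community extraction (Flake et al. style sketch).
# A node may represent a single page or, as the site-oriented
# framework advocates, an entire site; the construction is the same.
import networkx as nx

def maxflow_community(G, seeds, edge_cap=3):
    H = nx.DiGraph()
    for u, v in G.edges():                      # hyperlink graph
        H.add_edge(u, v, capacity=edge_cap)
        H.add_edge(v, u, capacity=edge_cap)
    for s in seeds:                             # virtual source -> seeds
        H.add_edge("SRC", s, capacity=float("inf"))
    for v in G.nodes():                         # all others -> virtual sink
        if v not in seeds:
            H.add_edge(v, "SINK", capacity=1)
    _, (source_side, _) = nx.minimum_cut(H, "SRC", "SINK")
    return source_side - {"SRC"}                # the mined community
```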

  • Compression/Scan Co-design for Reducing Test Data Volume, Scan-in Power Dissipation, and Test Application Time

    Yu HU  Yinhe HAN  Xiaowei LI  Huawei LI  Xiaoqing WEN  

     
    PAPER-Dependable Computing
    Page(s): 2616-2625

    LSI testing is critical to guaranteeing that chips are fault-free before they are integrated into a system, so as to increase the reliability of the system. Although full-scan is a widely adopted design-for-testability technique for LSI design and testing, there is a strong need to reduce the test data Volume, scan-in Power dissipation, and test application Time (VPT) of full-scan testing. Based on an analysis of the characteristics of the variable-to-fixed run-length coding technique and the random access scan architecture, this paper presents a novel design scheme that tackles all VPT issues simultaneously. Experimental results on the ISCAS'89 benchmarks show average reductions of 51.2%, 99.5%, 99.3%, and 85.5% in test data volume, average scan-in power dissipation, peak scan-in power dissipation, and test application time, respectively.
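
    A toy Python sketch of variable-to-fixed run-length coding, the coding family the scheme builds on (the codeword convention and the parameter k here are illustrative, not the paper's):

```python
# Toy variable-to-fixed (VTF) run-length encoder. k-bit codewords:
# values 0 .. 2**k - 2 mean "that many 0s followed by a 1"; the
# all-ones value 2**k - 1 means "2**k - 1 zeros, run continues".
def vtf_encode(bits, k=3):
    M = 2 ** k - 1
    out, run = [], 0
    for b in bits:
        if b:
            out.append(run)          # run of `run` zeros ended by a 1
            run = 0
        else:
            run += 1
            if run == M:
                out.append(M)        # maximal zero run, not yet terminated
                run = 0
    if run:
        out.append(run)              # trailing zeros (toy end handling)
    return out

print(vtf_encode([0, 0, 0, 1, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 1]))  # [3, 1, 7, 1]
```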

  • Effect of BIST Pretest on IC Defect Level

    Yoshiyuki NAKAMURA  Jacob SAVIR  Hideo FUJIWARA  

     
    PAPER-Dependable Computing
    Page(s): 2626-2636

    In [1], the impact of BIST on the chip defect level after test was addressed, under the assumption that no measures are taken to ensure that the BIST circuitry is fault-free before launching the functional test. In this paper, we assume that a BIST pretest is first conducted: only chips whose BIST circuitry passes the pretest are kept, while the rest are discarded. The BIST pretest, however, is assumed to have only limited coverage of the BIST circuitry's own faults. This paper studies the product quality improvements induced by the BIST pretest and provides some insight into when it may be worthwhile to perform it.
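
    For background, defect-level analyses of this kind typically start from the classical Williams-Brown relation between defect level DL, process yield Y, and fault coverage T (standard material, not the paper's own model):

```latex
\[
\mathrm{DL} \;=\; 1 - Y^{\,1 - T}
\]
```

    Limited pretest coverage of the BIST circuitry's own faults acts as an imperfect T here, which is why some defective chips can still escape after the pretest.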

  • An RTSD System against Various Attacks for Low False Positive Rate Based on Patterns of Attacker's Behaviors

    Joong-seok SONG  Yong-jin KWON  

     
    PAPER-Application Information Security
    Page(s): 2637-2643

    Intrusion detection systems on the Internet must meet a certain level of performance requirements. One is to lower the rates of false positives and false negatives. Another is to provide a convenient user interface so that users can easily manage system security with the detection systems. However, public-domain scan detection systems show a high rate of false detection and have difficulty detecting various scanning techniques. In addition, since current scan detection systems are based on a command-line interface, their user interfaces are poor, making them difficult to apply to system security management. Hence, we first propose a set of new filter rules, which detect various scan attacks based on port scanning techniques. Secondly, we propose a set of ABP-Rules, derived from attackers' behavioral patterns, in order to minimize the false positive rate. With these methods, we implement a new real-time scan detection system that overcomes the limitations of current real-time scan detection systems. The implemented system also provides a GUI, developed with Tcl/Tk, for conveniently managing network security.
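
    A minimal sketch of the kind of filter rule involved, assuming a simple "many distinct ports from one source in a short window" criterion (the thresholds and the criterion itself are illustrative, not the paper's ABP-Rules):

```python
# Illustrative port-scan filter rule: flag a source that touches many
# distinct destination ports within a short time window.
from collections import defaultdict

WINDOW_SEC = 5.0
MAX_PORTS = 20

_history = defaultdict(list)      # source IP -> [(timestamp, dst port)]

def observe(src, ts, dst_port):
    """Record one packet; return an alert string or None."""
    log = [(t, p) for t, p in _history[src] if ts - t <= WINDOW_SEC]
    log.append((ts, dst_port))
    _history[src] = log
    if len({p for _, p in log}) > MAX_PORTS:
        return f"possible port scan from {src}"
    return None
```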

  • A Practical Biosignal-Based Human Interface Applicable to the Assistive Systems for People with Motor Impairment

    Ki-Hong KIM  Jae-Kwon YOO  Hong Kee KIM  Wookho SON  Soo-Young LEE  

     
    PAPER-Rehabilitation Engineering and Assistive Technology
    Page(s): 2644-2652

    An alternative human interface enabling people with severe motor disabilities to control an assistive system is presented. Since this interface relies on the biosignals originating from the contraction of facial muscles during particular movements, even individuals with paralyzed limbs can use it with ease. For real-world application, a dedicated hardware module employing a general-purpose DSP was implemented, and its validity was tested on an electrically powered wheelchair. Furthermore, to achieve stable operation, an additional attempt was made to minimize error rates by exploiting the entropy information inherent in the signals during the classification phase. In experiments with 11 subjects, it was found that most of them could control the target system at will; thus, the proposed interface can be considered a potential alternative for the interaction of the severely handicapped with electronic systems.
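
    The entropy-based rejection step can be illustrated in a few lines (a generic sketch, assuming a classifier that outputs class posteriors; the threshold is illustrative):

```python
# Entropy-based rejection during classification: act only when the
# class posterior is confident (low Shannon entropy).
import math

def classify_with_rejection(posterior, max_entropy=0.5):
    h = -sum(p * math.log(p) for p in posterior if p > 0)
    if h > max_entropy:
        return None                               # too uncertain: reject
    return max(range(len(posterior)), key=posterior.__getitem__)

print(classify_with_rejection([0.9, 0.05, 0.05]))  # confident -> class 0
print(classify_with_rejection([0.4, 0.3, 0.3]))    # ambiguous -> None
```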

  • Ellipse Fitting with Hyperaccuracy

    Kenichi KANATANI  

     
    PAPER-Image Recognition, Computer Vision
    Page(s): 2653-2660

    For fitting an ellipse to a point sequence, ML (maximum likelihood) has been regarded as having the highest accuracy. In this paper, we demonstrate the existence of a "hyperaccurate" method which outperforms ML. This is made possible by error analysis of ML followed by subtraction of high-order bias terms. Since ML nearly achieves the theoretical accuracy bound (the KCR lower bound), the resulting improvement is very small. Nevertheless, our analysis has theoretical significance, illuminating the relationship between ML and the KCR lower bound.
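
    For orientation, here is the plain algebraic (least-squares) conic fit that serves as the usual baseline; it is emphatically not the paper's hyperaccurate estimator, which bias-corrects ML:

```python
# Plain algebraic conic fit: solve A x^2 + B xy + C y^2 + D x + E y + F = 0
# in least squares with ||(A,...,F)|| = 1 via SVD. A baseline only.
import numpy as np

def fit_conic(x, y):
    Z = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    _, _, vt = np.linalg.svd(Z)
    return vt[-1]                    # (A, B, C, D, E, F), unit norm

t = np.linspace(0, 2 * np.pi, 60)
x, y = 3 * np.cos(t), 2 * np.sin(t)      # noise-free test ellipse
print(fit_conic(x, y))                   # ~ multiple of (1/9, 0, 1/4, 0, 0, -1)
```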

  • Estimating Motion Parameters Using a Flexible Weight Function

    Seok-Woo JANG  Gye-Young KIM  Hyung-Il CHOI  

     
    PAPER-Image Recognition, Computer Vision
    Page(s): 2661-2669

    In this paper, we propose a method to estimate affine motion parameters from consecutive images under the assumption that the motion in progress can be characterized by an affine model. The motion may be caused either by a moving camera or by a moving object. The proposed method first extracts motion vectors from a sequence of images and then processes them by adaptive robust estimation to obtain the affine parameters. Typically, robust estimation filters out outliers (velocity vectors that do not fit the model) by fitting the velocity vectors to a predefined model. To filter out potential outliers, our adaptive robust estimation defines a flexible weight function based on a sigmoid function. During the estimation process, we gradually tune the sigmoid function toward its hard limit as the errors between the input data and the estimation model decrease, so that we can effectively separate inliers from outliers using the finally tuned, hard-limit form of the weight function. Experimental results show that the suggested approach is very effective in estimating affine parameters.
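
    A minimal sketch of the idea, assuming per-pixel motion vectors are already available (the constants and the schedule for sharpening the sigmoid are illustrative, not the paper's):

```python
# Adaptive robust estimation sketch: iteratively reweighted least
# squares with a sigmoid weight that is sharpened toward a hard limit
# as the fit converges, separating inliers from outliers.
import numpy as np

def fit_affine_flow(pts, flow, iters=10, cutoff=1.0):
    """pts: (N,2) positions; flow: (N,2) motion vectors."""
    X = np.column_stack([pts, np.ones(len(pts))])     # [x, y, 1]
    w = np.ones(len(pts))
    slope = 1.0                                       # sigmoid steepness
    for _ in range(iters):
        A = np.linalg.lstsq(w[:, None] * X, w[:, None] * flow,
                            rcond=None)[0]            # weighted LS, 3x2
        r = np.linalg.norm(flow - X @ A, axis=1)      # residuals
        w = 1.0 / (1.0 + np.exp(slope * (r - cutoff)))
        slope *= 2.0                                  # tune toward hard limit
    return A
```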

  • Pruning-Based Unsupervised Segmentation for Korean

    In-Su KANG  Seung-Hoon NA  Jong-Hyeok LEE  

     
    PAPER-Natural Language Processing
    Page(s): 2670-2677

    Compound noun segmentation is a key component of Korean language processing. Supervised approaches require some form of human intervention, such as maintaining lexicons, manually segmenting corpora, or devising heuristic rules. Thus, they suffer from the unknown word problem and cannot distinguish domain-oriented or corpus-directed segmentation results from others. These problems can be overcome by unsupervised approaches that employ segmentation clues obtained purely from a raw corpus. However, most unsupervised approaches require the tuning of empirical parameters or the learning of a statistical dictionary. To develop a tuning-free, learning-free unsupervised segmentation algorithm, this study proposes a pruning-based unsupervised technique that eliminates unhelpful segmentation candidates. In addition, unlike previous unsupervised methods that rely on purely character-based segmentation clues, this study utilizes word-based segmentation clues. Experimental evaluations show that the pruning scheme is very effective for unsupervised segmentation of Korean compound nouns and that the use of word-based prior knowledge enables better segmentation accuracy. The study also shows that the proposed algorithm performs competitively with or better than other unsupervised methods.
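
    The pruning idea can be sketched generically: enumerate segmentation candidates and discard any containing a segment unsupported by corpus evidence. Here a simple set of corpus-attested words stands in for the paper's word-based clues (illustrative only, not the paper's pruning rules):

```python
# Pruning sketch: enumerate segmentations of a compound and keep only
# candidates whose every segment is attested in the raw corpus.
def segmentations(word):
    if not word:
        yield []
        return
    for i in range(1, len(word) + 1):
        for rest in segmentations(word[i:]):
            yield [word[:i]] + rest

def prune(word, attested):
    return [cand for cand in segmentations(word)
            if all(seg in attested for seg in cand)]

print(prune("abcd", {"ab", "cd", "abcd"}))   # [['ab', 'cd'], ['abcd']]
```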

  • Physical Register Sharing through Value Similarity Detection

    In Pyo HONG  Ha Young JEONG  Yong Surk LEE  

     
    LETTER-Computer Systems
    Page(s): 2678-2681

    Modern processors have large instruction windows to improve performance. They usually adopt register renaming, where every active instruction with a valid destination needs a physical register. As instruction windows get larger, however, bigger physical register files are required. To solve this problem, we propose a physical register sharing technique that shares a physical register among multiple instructions based on value similarity. As a result, we achieve a performance improvement without increasing the size of the physical register file. In addition, the proposed technique can also be used to reduce the timing, complexity, and area overheads of the physical register file.
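
    A toy software model of the sharing idea (hardware details such as freeing and reference counting are omitted; this is a sketch, not the proposed microarchitecture):

```python
# Toy model of value-based physical register sharing: a result value
# that already resides in some physical register is mapped to it
# rather than consuming a fresh register. (Freeing and reference
# counting, essential in real hardware, are omitted.)
class SharingRenameTable:
    def __init__(self):
        self.value_to_reg = {}        # produced value -> physical register
        self.next_reg = 0

    def allocate(self, value):
        if value in self.value_to_reg:        # similar value: share
            return self.value_to_reg[value]
        reg, self.next_reg = self.next_reg, self.next_reg + 1
        self.value_to_reg[value] = reg
        return reg

rt = SharingRenameTable()
print(rt.allocate(0), rt.allocate(42), rt.allocate(0))   # 0 1 0 -- shared
```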

  • Homogeneity Based Image Objective Quality Metric

    Kebin AN  Jun SUN  Weina DU  

     
    LETTER-Image Processing and Video Processing
    Page(s): 2682-2685

    A new fast and reliable objective image quality evaluation technique is presented in this paper. The proposed method takes image structure into account, using a low-complexity homogeneity measure based on high-pass operators to evaluate the intensity uniformity of a local region. We experimented with monochrome images under different types of distortion. Experimental results indicate that the proposed method provides better consistency with perceived image quality, making it suitable for real applications that control processed image quality.
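
    A sketch of a high-pass-based homogeneity measure in the spirit of the abstract (the Laplacian kernel and the normalization are illustrative choices, not the paper's operators):

```python
# High-pass-based homogeneity measure: the weaker the mean high-pass
# response over a region, the more homogeneous (uniform) it is.
import numpy as np
from scipy.ndimage import convolve

LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def homogeneity(region):
    hp = convolve(region.astype(float), LAPLACIAN, mode="nearest")
    return 1.0 / (1.0 + np.mean(np.abs(hp)))    # 1.0 = perfectly uniform

print(homogeneity(np.full((8, 8), 7.0)))        # 1.0 (flat region)
print(homogeneity(np.random.rand(8, 8)))        # < 1.0 (textured region)
```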

  • A Linear Color Correction Method for Compressed Images and Videos

    Kebin AN  Jun SUN  Lei ZHOU  

     
    LETTER-Image Processing and Video Processing
    Page(s): 2686-2689

    Color correction needs to be performed to improve the quality of image/video production. Typical methods realize color correction mainly in the spatial domain of the RGB color space. In this paper, a linear color correction method in the JPEG/MPEG-2 compressed domain is proposed. The correction is realized in the DCT domain of the YUV color space without full-frame decompression. Experimental results show that the visual quality of images/videos corrected in the compressed domain is identical to that of images/videos corrected in the uncompressed domain.
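
    Why a linear correction transfers to the DCT domain: the DCT is linear, so a pixel-domain map y' = a*y + b becomes a scaling of every coefficient plus an offset on the DC term alone. A small numpy check (using scipy's orthonormal DCT conventions, not JPEG's quantized layout):

```python
# Linear correction applied directly to DCT coefficients: scale all
# coefficients by a; the additive offset b touches only the DC term.
import numpy as np
from scipy.fft import dctn, idctn

def correct_dct_block(coeffs, a, b, n=8):
    out = a * coeffs
    out[0, 0] += b * n          # orthonormal 2-D DCT: DC = n * mean
    return out

block = np.random.rand(8, 8)
c = dctn(block, norm="ortho")
restored = idctn(correct_dct_block(c, 1.2, 0.1), norm="ortho")
assert np.allclose(restored, 1.2 * block + 0.1)   # matches pixel-domain map
```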

  • Constant Rate Control for Motion JPEG2000

    Jun HOU  Xiangzhong FANG  Haibin YIN  Jiliang LI  

     
    LETTER-Image Processing and Video Processing
    Page(s): 2690-2692

    This paper proposes a constant bit rate (CBR) control algorithm for Motion JPEG2000 (MJ2). In MJ2 coding, every frame can be coded at a similar target bitrate thanks to the accurate rate control feature. Moreover, frames of the same scene have similar rate-distortion (RD) characteristics. The proposed method estimates the initial cutoff threshold of the current frame from the previous frame's RD information. This iterative method reduces the computational cost significantly. As opposed to previous algorithms, it can be used at any compression ratio. Experiments show that its performance is comparable to normal JPEG2000 coding.
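
    The warm-start idea can be sketched as follows, where bytes_at is a hypothetical hook returning the coded size at a given cutoff threshold (the search itself is a generic bisection, not the paper's exact procedure):

```python
# Warm-started cutoff search. `bytes_at(t)` is a hypothetical hook
# returning the coded frame size at distortion-slope threshold t,
# assumed monotonically decreasing in t; t_prev is the previous
# frame's final threshold and must lie in [lo, hi].
def find_cutoff(bytes_at, target_bytes, t_prev, lo=0.0, hi=1.0, tol=1e-4):
    t = t_prev                          # warm start: frames of one scene
    while hi - lo > tol:                # have similar RD behaviour
        if bytes_at(t) > target_bytes:
            lo = t                      # over budget: raise the threshold
        else:
            hi = t                      # under budget: lower it
        t = (lo + hi) / 2.0
    return t
```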

  • A New Vertex Adjustment Method for Polygon-Based Shape Coding

    Byoung-Ju YUN  Jae-Soo CHO  Yun-Ho KO  

     
    LETTER-Image Processing and Video Processing
    Page(s): 2693-2695

    In this paper, we propose a new vertex adjustment method based on the ratio between the size of an object and that of its approximating polygon. In conventional polygonal approximation methods, the sizes of the object and the approximating polygon can be quite different, so many error pixels arise between them. The proposed method reduces the size of the error regions by adjusting the size of the polygon to that of the object. Simulation results show the outstanding performance of the proposed method.
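
    A minimal sketch of the size-ratio adjustment, assuming the polygon is scaled about its centroid so that its area matches the object's (the shoelace formula supplies the polygon area; the paper's exact adjustment rule may differ):

```python
# Size-ratio vertex adjustment sketch: scale the polygon about its
# centroid so the polygon's area matches the object's area.
def shoelace_area(poly):
    n = len(poly)
    s = sum(poly[i][0] * poly[(i + 1) % n][1]
            - poly[(i + 1) % n][0] * poly[i][1] for i in range(n))
    return abs(s) / 2.0

def match_object_size(poly, object_area):
    cx = sum(x for x, _ in poly) / len(poly)
    cy = sum(y for _, y in poly) / len(poly)
    k = (object_area / shoelace_area(poly)) ** 0.5   # area scales with k^2
    return [(cx + k * (x - cx), cy + k * (y - cy)) for x, y in poly]

square = [(0, 0), (2, 0), (2, 2), (0, 2)]
print(match_object_size(square, 16.0))   # scaled to a 4x4 square
```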