
Keyword Search Result

[Keyword] TE(21534hit)

14541-14560hit(21534hit)

  • Nonlinear System Control Using Compensatory Neuro-Fuzzy Networks

    Cheng-Jian LIN  Cheng-Hung CHEN  

     
    PAPER-Optimization and Control
    Vol: E86-A No:9  Page(s): 2309-2316

    In this paper, a Compensatory Neuro-Fuzzy Network (CNFN) for nonlinear system control is proposed. The compensatory fuzzy reasoning method uses adaptive fuzzy operations of a neural fuzzy network, which makes the fuzzy logic system more adaptive and effective. An on-line learning algorithm is proposed to automatically construct the CNFN: fuzzy rules are created and adapted as on-line learning proceeds, via simultaneous structure and parameter learning. The structure learning is based on a fuzzy similarity measure, and the parameter learning is based on the backpropagation algorithm. The advantages of the proposed learning algorithm are that it converges quickly and that the obtained fuzzy rules are more precise. The performance of the CNFN compares favorably with that of various existing models.
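
    As an illustration only (not the CNFN itself), a compensatory fuzzy operation is often written as a weighted blend of a t-norm and a t-conorm; the hedged Python sketch below assumes min/max as the two operators and a compensation degree gamma.

        # Minimal sketch of a compensatory fuzzy operation (illustrative only,
        # not the CNFN of the paper).
        def compensatory_and(memberships, gamma=0.5):
            """Blend a t-norm (min) and a t-conorm (max) with compensation degree gamma:
            gamma = 0 gives a pure AND, gamma = 1 gives a pure OR."""
            t_norm = min(memberships)
            t_conorm = max(memberships)
            return (t_norm ** (1.0 - gamma)) * (t_conorm ** gamma)

        # Example: firing strength of a rule whose antecedents match with degrees 0.8 and 0.4.
        print(compensatory_and([0.8, 0.4], gamma=0.3))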

  • Batch-Incremental Nearest Neighbor Search Algorithm and Its Performance Evaluation

    Yaokai FENG  Akifumi MAKINOUCHI  

     
    PAPER-Databases
    Vol: E86-D No:9  Page(s): 1856-1867

    In light of the increasing number of computer applications that rely heavily on multimedia data, the database community has focused on the management and retrieval of multidimensional data. Nearest Neighbor (NN) queries have been widely used to perform content-based retrieval (e.g., similarity search) in multimedia applications. The Incremental NN (INN) query is a kind of NN query that can be used when the number of NN objects to be retrieved is not known in advance. This paper points out the weaknesses of the existing INN search algorithms and proposes a new one, called the Batch-Incremental Nearest Neighbor search algorithm (B-INN search algorithm), which processes the INN query efficiently. The B-INN search algorithm differs from the existing INN search algorithms in that it does not employ the priority queue used by those algorithms, which is very CPU and memory intensive for large databases in high-dimensional spaces. Moreover, it incrementally reports b (b > 1) objects at a time (batch-incremental), whereas the existing INN search algorithms report the neighbors one by one. In order to implement the B-INN search, a new search (called k-d-NN search) with a new pruning strategy is proposed. Performance tests indicate that the B-INN search algorithm clearly outperforms the existing INN search algorithms in high-dimensional spaces.
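
    For contrast with the batch-incremental idea, the hedged sketch below shows the classical priority-queue style of incremental NN reporting over a flat list of points; real INN algorithms traverse a hierarchical index such as an R-tree, which is exactly the queue-heavy behaviour the paper avoids.

        import heapq

        def incremental_nn(points, query):
            """Hedged sketch: report neighbors one by one in increasing distance
            using a priority queue. Real INN algorithms push both index nodes and
            data objects of a hierarchical index (e.g., an R-tree) onto the queue."""
            heap = [(sum((a - b) ** 2 for a, b in zip(p, query)), p) for p in points]
            heapq.heapify(heap)
            while heap:
                dist2, p = heapq.heappop(heap)
                yield p, dist2

        # Usage: stop iterating whenever enough neighbors have been reported.
        for neighbor, d2 in incremental_nn([(0.0, 0.0), (1.0, 1.0), (0.2, 0.1)], (0.0, 0.0)):
            print(neighbor, d2)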

  • Sentence Extraction by Spreading Activation through Sentence Similarity

    Naoaki OKAZAKI  Yutaka MATSUO  Naohiro MATSUMURA  Mitsuru ISHIZUKA  

     
    PAPER
    Vol: E86-D No:9  Page(s): 1686-1694

    Although there has been a great deal of research on automatic summarization, most methods rely on statistical techniques and disregard the relationships between the extracted textual segments. We propose a novel method to extract a set of comprehensible sentences that centers on several key points while ensuring sentence connectivity. It builds a similarity network from the documents using a lexical dictionary and applies spreading activation to rank sentences. We show evaluation results for a multi-document summarization system based on this method, which participated in the Text Summarization Challenge (TSC) task, a summarization competition organized by the third NTCIR project.
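
    A hedged sketch of spreading activation over a sentence-similarity matrix is shown below; it uses a simple power-iteration-style update with a decay factor, which is an assumption for illustration rather than the authors' exact formulation.

        import numpy as np

        def spreading_activation(similarity, iterations=20, decay=0.85):
            """Hedged sketch: rank sentences by repeatedly spreading activation
            over a row-normalized sentence-similarity matrix."""
            n = similarity.shape[0]
            row_sums = similarity.sum(axis=1, keepdims=True)
            transition = similarity / np.where(row_sums == 0, 1, row_sums)
            activation = np.full(n, 1.0 / n)
            for _ in range(iterations):
                activation = (1 - decay) / n + decay * activation @ transition
            return activation  # higher activation -> more central sentence

        sim = np.array([[0.0, 0.6, 0.1],
                        [0.6, 0.0, 0.4],
                        [0.1, 0.4, 0.0]])
        print(spreading_activation(sim))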

  • Secure Distributed Configuration Management with Randomised Scheduling of System-Administration Tasks

    Frode EIKA SANDNES  

     
    PAPER-Algorithms and Applications
    Vol: E86-D No:9  Page(s): 1601-1610

    Distributed configuration management involves maintaining a set of distributed storage and processing resources in such a way that they serve a community of users fairly, promptly, reliably and securely. Security has recently received much attention due to general anxiety about hacking. Parallel computing systems such as clusters of workstations are no exception to this threat. This paper discusses experiments that measure the effect of employing randomisation in the scheduling of interdependent user and management tasks onto a distributed system such as a cluster of workstations. Two attributes are investigated, namely performance and security. Performance is usually the prime objective in task scheduling. In this work the scheduling problem is viewed as a multi-objective optimisation problem in which there is a subtle balance between efficient schedules and security. A schedule is secure if it is not vulnerable to malicious acts or inadvertent human errors. Further, the scheduling model should be hidden from unauthorised observers. The results of the study support the use of randomisation in the scheduling of tasks over an insecure network of processing nodes inhabited by malicious users.
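
    As a hedged illustration of the idea (not the paper's scheduler), randomisation can be injected into an ordinary list scheduler by picking the next ready task at random, so that repeated runs of the same task graph yield different, harder-to-predict schedules.

        import random

        def randomized_list_schedule(durations, deps, num_nodes, seed=None):
            """Hedged sketch: schedule interdependent tasks onto processing nodes in a
            randomly perturbed order. durations: {task: time}; deps: {task: set of prerequisites}."""
            rng = random.Random(seed)
            node_free = {n: 0.0 for n in range(num_nodes)}
            finish = {}
            remaining = set(durations)
            schedule = []
            while remaining:
                ready = [t for t in remaining if deps[t].issubset(finish)]
                task = rng.choice(ready)                     # randomisation hides the schedule pattern
                node = min(node_free, key=node_free.get)     # earliest-available node
                start = max(node_free[node],
                            max((finish[p] for p in deps[task]), default=0.0))
                finish[task] = start + durations[task]
                node_free[node] = finish[task]
                schedule.append((task, node, start))
                remaining.remove(task)
            return schedule

        print(randomized_list_schedule({"a": 2.0, "b": 1.0, "c": 3.0},
                                       {"a": set(), "b": {"a"}, "c": set()},
                                       num_nodes=2, seed=7))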

  • Parallel Algorithms for Higher-Dimensional Euclidean Distance Transforms with Applications

    Yuh-Rau WANG  Shi-Jinn HORNG  Yu-Hua LEE  Pei-Zong LEE  

     
    INVITED PAPER-Algorithms and Applications
    Vol: E86-D No:9  Page(s): 1586-1593

    Based on the dimensionality reduction technique and the solution for the proximate points problem, we achieve the optimality of the three-dimensional Euclidean distance transform (3D_EDT) computation. For an N × N × N binary image, our algorithms for both the 3D_EDT and its applications can be performed in O(log log N) time using CRCW processors, or in O(log N) time using EREW processors. To the best of our knowledge, all results described above are the best known. As for the n-dimensional Euclidean distance transform (nD_EDT) and its applications for a binary image of size N^n, all of them can be computed in O(n log log N) time using CRCW processors, or in O(n log N) time using EREW processors.
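
    A hedged sequential sketch of the dimensionality-reduction idea for the two-dimensional squared EDT is given below (each row is transformed first, then each column combines the row results); the parallel CRCW/EREW algorithms of the paper are far more involved and much faster.

        import numpy as np

        def squared_edt_2d(binary):
            """Hedged sketch of a separable squared EDT: a 1-D pass along rows,
            then a combining pass along columns. O(N^3) for an N x N image."""
            n, m = binary.shape
            INF = n * n + m * m
            # Pass 1: squared distance to the nearest feature pixel in the same row.
            row_d = np.full((n, m), INF)
            for i in range(n):
                feats = np.flatnonzero(binary[i])
                if feats.size:
                    for j in range(m):
                        row_d[i, j] = np.min((feats - j) ** 2)
            # Pass 2: combine the row results along each column.
            out = np.full((n, m), INF)
            for j in range(m):
                for i in range(n):
                    out[i, j] = min(row_d[k, j] + (k - i) ** 2 for k in range(n))
            return out

        img = np.zeros((4, 4), dtype=int)
        img[1, 2] = 1  # a single feature pixel
        print(squared_edt_2d(img))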

  • Topic Keyword Identification for Text Summarization Using Lexical Clustering

    Youngjoong KO  Kono KIM  Jungyun SEO  

     
    PAPER
    Vol: E86-D No:9  Page(s): 1695-1701

    Automatic text summarization has the goal of reducing the size of a document while preserving its content. Generally, producing a summary as an extract is achieved by including only the most topic-related sentences. DOCUSUM is our summarization system based on a new topic keyword identification method. The process of DOCUSUM is as follows. First, DOCUSUM converts the content words of a document into elements of a context vector space. It then constructs lexical clusters from the context vector space and identifies core clusters. Next, it selects topic keywords from the core clusters. Finally, it generates a summary of the document using the topic keywords. In experiments at various compression ratios (30% compression, 10% compression, and extraction of a fixed number of sentences: 4 or 8), DOCUSUM showed better performance than other methods.
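
    The hedged sketch below is a much-simplified stand-in for this pipeline: it scores sentences by their coverage of frequent content words, whereas DOCUSUM derives topic keywords from lexical clusters in a context vector space.

        from collections import Counter

        def summarize(sentences, top_keywords=5, num_sentences=2):
            """Hedged stand-in: pick frequent content words as 'topic keywords'
            and score sentences by keyword coverage (DOCUSUM instead identifies
            topic keywords from core lexical clusters)."""
            tokenized = [[w.strip(".,").lower() for w in s.split()] for s in sentences]
            counts = Counter(w for toks in tokenized for w in toks if len(w) > 3)
            keywords = {w for w, _ in counts.most_common(top_keywords)}
            scored = sorted(range(len(sentences)),
                            key=lambda i: -len(keywords & set(tokenized[i])))
            chosen = sorted(scored[:num_sentences])          # keep original sentence order
            return [sentences[i] for i in chosen]

        docs = ["The network routes packets across links.",
                "Routing tables describe network links.",
                "Lunch was served at noon."]
        print(summarize(docs))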

  • Node-to-Set Disjoint Paths Problem in Pancake Graphs

    Keiichi KANEKO  Yasuto SUZUKI  

     
    PAPER-Algorithms and Applications
    Vol: E86-D No:9  Page(s): 1628-1633

    In this paper, we give an algorithm for the node-to-set disjoint paths problem in pancake graphs, together with its evaluation results. The algorithm is of polynomial order of n for an n-pancake graph. It is based on recursion and is divided into two cases according to how the destination nodes are distributed among the classes into which all the nodes of a pancake graph are categorized. The sum of the lengths of the paths obtained and the time complexity of the algorithm are estimated, and the average performance is evaluated by computer simulation.
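
    For readers unfamiliar with the structure, the hedged sketch below only generates the neighbors of a node in the n-pancake graph (permutations connected by prefix reversals); the disjoint-path construction itself is not reproduced.

        from itertools import permutations

        def pancake_neighbors(perm):
            """Nodes of the n-pancake graph are permutations of 1..n; two nodes are
            adjacent iff one is obtained from the other by reversing a prefix."""
            return [tuple(reversed(perm[:k])) + perm[k:] for k in range(2, len(perm) + 1)]

        # The 3-pancake graph: 3! = 6 nodes, each of degree n - 1 = 2.
        for node in permutations((1, 2, 3)):
            print(node, "->", pancake_neighbors(node))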

  • Cost Analysis of Optimistic Recovery Model for Forked Checkpointing

    Jiman HONG  Sangsu KIM  Yookun CHO  

     
    PAPER-Networking and Architectures
    Vol: E86-D No:9  Page(s): 1534-1541

    The forked checkpointing scheme was proposed to achieve low checkpoint overhead. When a process wants to take a checkpoint under forked checkpointing, it creates a child process and continues its normal computation. Two recovery models can be used for forked checkpointing when the parent process fails before the child process establishes the checkpoint. One is the pessimistic recovery model, in which the recovery process rolls back to the previous checkpoint state. The other is the optimistic recovery model, in which a recovery process waits for the checkpoint to be established by the child process. In this paper, we present the recovery models for forked checkpointing by deriving the expected execution time of a process with and without checkpointing, and we show that the expected recovery time of the optimistic recovery model is smaller than that of the pessimistic recovery model.
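
    A hedged POSIX sketch of the forked checkpointing step itself: the parent forks, the child serializes the (copy-on-write) state and exits, and the parent continues computing; the recovery models analysed in the paper are not modelled here.

        import os, pickle

        def forked_checkpoint(state, path):
            """Hedged sketch: take a checkpoint in a forked child so that the
            parent can continue its normal computation immediately (POSIX only)."""
            pid = os.fork()
            if pid == 0:                    # child: owns a copy-on-write snapshot of the state
                with open(path, "wb") as f:
                    pickle.dump(state, f)
                os._exit(0)                 # child exits without running parent cleanup code
            return pid                      # parent: resume computation, reap the child later

        state = {"step": 42, "data": list(range(10))}
        child = forked_checkpoint(state, "/tmp/ckpt.pkl")
        # ... the parent keeps computing here ...
        os.waitpid(child, 0)                # reap the checkpointing child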

  • SVM-Based Multi-Document Summarization Integrating Sentence Extraction with Bunsetsu Elimination

    Tsutomu HIRAO  Kazuhiro TAKEUCHI  Hideki ISOZAKI  Yutaka SASAKI  Eisaku MAEDA  

     
    PAPER
    Vol: E86-D No:9  Page(s): 1702-1709

    In this paper, we propose a machine-learning-based method of multi-document summarization that integrates sentence extraction with bunsetsu elimination. We employ Support Vector Machines for both of the modules used. To evaluate the effect of bunsetsu elimination, we participated in the multi-document summarization task at TSC-2 with the following two approaches: (1) sentence extraction only, and (2) sentence extraction + bunsetsu elimination. The results of the subjective evaluation at TSC-2 show that both approaches are superior to the Lead-based method from the viewpoint of information coverage. In addition, we made extracts from the given abstracts to quantitatively examine the effectiveness of bunsetsu elimination. The experimental results showed that our bunsetsu elimination makes summaries more informative. Moreover, we found that extraction based on SVMs trained on short extracts is better than the Lead-based method, but that SVMs trained on long extracts are not.

  • An Ultra-High-Sensitivity HDTV Camcorder

    Junichi YAMAZAKI  Masayuki MIYAZAKI  Tsuneo IHARA  Itaru MIZUNO  Kazuo YOSHIKAWA  Shigehiro KANAYAMA  Nobuo MATSUI  Takayoshi HIRUMA  Masaharu NISHIMURA  

     
    PAPER
    Vol: E86-C No:9  Page(s): 1810-1815

    An ultra-high-sensitivity HDTV color camcorder (camera with VTR) has been developed featuring image intensifiers with GaAsP photocathodes, which provide very high quantum efficiency. To achieve superior performance and a compact camera body, we combined three 1-inch image intensifiers with a 2/3-inch taking lens and three 2/3-inch CCDs by means of a new optical system capable of enlarging and reducing images. The camcorder provides excellent color reproducibility even under low light level conditions (0.2 lx) at an iris setting of f/2, with a signal-to-noise ratio of 55 dB at pedestal level. Its sensitivity is about 400 times greater than that of current HDTV CCD camcorders, making it particularly well suited for capturing images of faint objects in space, aurora, etc., filming the nocturnal activities of animals in their natural settings, and reporting breaking news at night.

  • Corpus Based Method of Transforming Nominalized Phrases into Clauses for Text Mining Application

    Akira TERADA  Takenobu TOKUNAGA  

     
    PAPER
    Vol: E86-D No:9  Page(s): 1736-1744

    Nominalization is a linguistic phenomenon in which events usually described in terms of clauses are expressed in the form of noun phrases. Extracting event structures is an important task in text mining applications. To achieve this goal, clauses are parsed and the argument structures of the main verbs are extracted from the parsed results; this kind of preprocessing has been common in past research. In order to extract event structures from nominalized phrases as well, we need a technique for transforming nominalized phrases into clauses. In this paper, we propose a corpus-based method for this transformation. The proposed method first enumerates the possible predicate/argument structures for a nominalized phrase (noun phrase) and then ranks them by the frequency of each argument in the corpus. An algorithm based on this method was evaluated on a corpus of 24,626 aviation safety reports in English and achieved 78% transformation accuracy. The algorithm was also evaluated by applying it in a text mining application that extracts events and their cause-effect relations from texts, where it improved the application's performance.
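
    A hedged sketch of the ranking step only: candidate predicate/argument structures for a nominalized phrase are ordered by how often each argument filler co-occurs with the candidate verb in a corpus; the candidate generation and the corpus statistics below are invented placeholders.

        def rank_candidates(candidates, corpus_counts):
            """Hedged sketch: candidates is a list of (verb, {role: filler}) tuples;
            corpus_counts[(verb, role, filler)] is how often that argument was seen
            with the verb in the corpus. Higher total frequency ranks first."""
            def score(cand):
                verb, args = cand
                return sum(corpus_counts.get((verb, role, filler), 0)
                           for role, filler in args.items())
            return sorted(candidates, key=score, reverse=True)

        # "engine failure" -> candidate clauses such as "the engine failed" / "X failed the engine"
        candidates = [("fail", {"subject": "engine"}), ("fail", {"object": "engine"})]
        counts = {("fail", "subject", "engine"): 17, ("fail", "object", "engine"): 1}
        print(rank_candidates(candidates, counts))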

  • Efficient Loop Partitioning for Parallel Codes of Irregular Scientific Computations

    Minyi GUO  

     
    PAPER-Software Systems
    Vol: E86-D No:9  Page(s): 1825-1834

    In most cases of distributed-memory computation, node programs are executed on processors according to the owner-computes rule. However, the owner-computes rule is not well suited to irregular application codes. In such codes, the use of indirection in accessing the left-hand-side array makes it difficult to partition the loop iterations, and because of the use of indirection in accessing right-hand-side elements, total communication may be reduced by using heuristics other than the owner-computes rule. In this paper, we propose a communication-reducing computes rule for irregular loop partitioning, called the least-communication computes rule. Each loop iteration is assigned to the processor on which the communication cost of executing that iteration is minimal. Then, after all iterations have been partitioned among the processors, we give a global-to-local data transformation rule, indirection-array remapping, and communication optimization methods. The experimental results show that, in most cases, our approach achieves better performance than other loop partitioning rules.
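
    A hedged sketch of the least-communication idea: each iteration is assigned to the processor that already owns most of the array elements the iteration touches, instead of to the owner of the left-hand-side element; the cost model here is a simple placeholder.

        from collections import Counter

        def least_communication_partition(iterations, owner, num_procs):
            """Hedged sketch: iterations is a list of lists of array indices touched
            by each iteration; owner[idx] gives the processor owning that element.
            Each iteration goes to the processor that minimizes remote accesses."""
            assignment = []
            for accessed in iterations:
                votes = Counter(owner[idx] for idx in accessed)
                best = max(range(num_procs), key=lambda p: votes.get(p, 0))
                # remote accesses for this iteration = len(accessed) - votes[best]
                assignment.append(best)
            return assignment

        owner = {0: 0, 1: 0, 2: 1, 3: 1}            # block distribution of a 4-element array
        iters = [[0, 1, 2], [2, 3], [0, 3]]          # indirection-driven access patterns
        print(least_communication_partition(iters, owner, num_procs=2))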

  • Probabilistic Automaton-Based Fuzzy English-Text Retrieval

    Manabu OHTA  Atsuhiro TAKASU  Jun ADACHI  

     
    PAPER-Software Systems
    Vol: E86-D No:9  Page(s): 1835-1844

    Incorrect recognition by Optical Character Readers (OCR) is a serious problem when searching OCR-scanned documents in databases such as digital libraries. In order to reduce costs, this paper proposes fuzzy retrieval methods for English text that contains recognition errors, without correcting the errors manually. The proposed methods generate multiple search terms for each input query term based on probabilistic automata that reflect both error-occurrence probabilities and character-connection probabilities. Experimental results of test-set retrieval indicate that one of the proposed methods improves the recall rate from 95.96% to 98.15%, at the cost of a decrease in precision from 100.00% to 96.01%, with 20 expanded search terms.
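
    A hedged sketch of the expansion step: each query term is expanded into alternative spellings weighted by assumed single-character confusion probabilities; the paper's probabilistic automata additionally model character-connection probabilities, which are omitted here.

        def expand_term(term, confusion, max_terms=20):
            """Hedged sketch: generate alternative spellings of a query term using
            single-character OCR confusion probabilities and keep the most likely ones.
            confusion[c] is a dict {recognized_char: probability}."""
            variants = {term: 1.0}
            for i, ch in enumerate(term):
                snapshot = list(variants.items())
                for alt, p in confusion.get(ch, {}).items():
                    for v, vp in snapshot:
                        cand = v[:i] + alt + v[i + 1:]
                        score = vp * p
                        if score > variants.get(cand, 0.0):
                            variants[cand] = score
            ranked = sorted(variants.items(), key=lambda kv: -kv[1])
            return [t for t, _ in ranked[:max_terms]]

        confusion = {"l": {"1": 0.05, "i": 0.03}, "o": {"0": 0.04}}  # assumed confusion table
        print(expand_term("logic", confusion))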

  • Blind Estimation of Symbol Timing and Carrier Frequency Offset in Time-Varying Multipath Channels for OFDM Systems

    Tiejun LV  Qun WAN  

     
    PAPER-Wireless Communication Technology
    Vol: E86-B No:9  Page(s): 2665-2671

    In this paper, a novel algorithm is presented for blind estimation of the symbol timing and frequency offset in OFDM systems. A time-varying frequency-selective Rayleigh fading multipath channel is considered, characterized by its power delay profile and time-varying scattering function, which is realistic for real-world mobile environments. The estimators exploit the intrinsic structure of OFDM signals and rely on the second-order moments rather than the probability distribution function of the received signals. They are optimal in the minimum mean-square-error sense and can be implemented easily. In addition, we present an improved approach that not only preserves the merits of the previously proposed method, but also extends the estimation range of the frequency offset to the entire subcarrier spacing of the OFDM signal and makes the timing estimator independent of the frequency offset.

  • On 1-Inkdot Alternating Pushdown Automata with Sublogarithmic Space

    Jianliang XU  Yong CHEN  Tsunehiro YOSHINAGA  Katsushi INOUE  

     
    PAPER-Theory of Automata, Formal Language Theory
    Vol: E86-D No:9  Page(s): 1814-1824

    This paper introduces a 1-inkdot two-way alternating pushdown automaton which is a two-way alternating pushdown automaton (2apda) with the additional power of marking at most 1 tape-cell on the input (with an inkdot) once. We first investigate a relationship between the accepting powers of sublogarithmically space-bounded 2apda's with and without 1 inkdot, and show, for example, that sublogarithmically space-bounded 2apda's with 1 inkdot are more powerful than those which have no inkdots. We next investigate an alternation hierarchy for sublogarithmically space-bounded 1-inkdot 2apda's, and show that the alternation hierarchy on the first level for 1-inkdot 2apda's holds, and we also show that 1-inkdot two-way nondeterministic pushdown automata using sublogarithmic space are incomparable with 1-inkdot two-way alternating pushdown automata with only universal states using the same space.

  • A Performance Study of Task Allocation Algorithms in a Distributed Computing System (DCS)

    Biplab KUMER SARKER  Anil KUMAR TRIPATHI  Deo PRAKASH VIDYARTHI  Kuniaki UEHARA  

     
    PAPER-Algorithms and Applications
    Vol: E86-D No:9  Page(s): 1611-1619

    A Distributed Computing System (DCS) requires that tasks be properly partitioned into modules and that these modules be allocated to various processing nodes so that they can be executed in parallel by the individual nodes of the system. The scheduling of modules on particular processing nodes must be preceded by an appropriate allocation of the modules of the different tasks to the processing nodes; only then can the desired execution characteristics be obtained. A number of algorithms have been proposed for task allocation in a DCS, but most of the proposed solutions rely on simplifying assumptions: first, only a single task and its modules are considered; second, the status of the processing nodes, in terms of the modules of other tasks already allocated to them, is ignored; and third, the capacity and capability of the processing nodes are not taken into account. This work proposes algorithms for the realistic situation in which multiple tasks, with their modules, compete dynamically for execution on a DCS, taking its architectural capability into consideration. We propose two algorithms, based on the well-known A* search and genetic algorithms (GA), for the task allocation models. The paper explains the algorithms in detail with illustrative examples and presents a comparative performance study of our algorithms and the task allocation algorithms proposed in the literature. The results demonstrate that our GA-based task allocation algorithm achieves better performance than the other algorithms.
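
    A hedged sketch of the GA side only: a chromosome maps each module to a processing node, the fitness is a placeholder cost combining execution and inter-module communication, and standard selection, one-point crossover and mutation are applied; the paper's cost model and operators are richer.

        import random

        def ga_allocate(exec_cost, comm, num_procs, pop=30, gens=100, seed=1):
            """Hedged GA sketch for task allocation.
            exec_cost[m][p]: cost of running module m on processor p.
            comm[(a, b)]: communication volume between modules a and b
            (counted only when they are placed on different processors)."""
            rng = random.Random(seed)
            n = len(exec_cost)

            def cost(chrom):
                c = sum(exec_cost[m][chrom[m]] for m in range(n))
                c += sum(v for (a, b), v in comm.items() if chrom[a] != chrom[b])
                return c

            population = [[rng.randrange(num_procs) for _ in range(n)] for _ in range(pop)]
            for _ in range(gens):
                population.sort(key=cost)
                survivors = population[:pop // 2]
                children = []
                while len(survivors) + len(children) < pop:
                    a, b = rng.sample(survivors, 2)
                    cut = rng.randrange(1, n)
                    child = a[:cut] + b[cut:]                 # one-point crossover
                    if rng.random() < 0.1:                    # mutation
                        child[rng.randrange(n)] = rng.randrange(num_procs)
                    children.append(child)
                population = survivors + children
            return min(population, key=cost)

        exec_cost = [[2, 5], [4, 1], [3, 3]]                  # 3 modules, 2 processors
        comm = {(0, 1): 6, (1, 2): 2}
        print(ga_allocate(exec_cost, comm, num_procs=2))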

  • GSIC Receiver with Adaptive MMSE Detection for Dual-Rate DS-CDMA System

    Seung Hee HAN  Jae Hong LEE  

     
    LETTER-Wireless Communication Technology
    Vol: E86-B No:9  Page(s): 2809-2814

    In this letter, we present a groupwise successive interference cancellation (GSIC) receiver with adaptive minimum mean squared error (MMSE) detection and an extended GSIC (EGSIC) receiver with adaptive MMSE detection for a dual-rate DS-CDMA system. The receivers combine the GSIC and EGSIC structures with adaptive MMSE detection, which is introduced to make the initial bit detection more reliable. Furthermore, a multi-user detection scheme is introduced to mitigate the effect of the multiple access interference (MAI) between users within a group, which is usually ignored in conventional GSIC and EGSIC receivers. Specifically, parallel interference cancellation (PIC) is adopted as the multi-user detection scheme within a group. It is shown that the performance of the GSIC and EGSIC receivers is significantly improved by employing adaptive MMSE detection, and that it can be improved further by using PIC within a group.

  • EEG Cortical Potential Imaging of Brain Electrical Activity by means of Parametric Projection Filters

    Junichi HORI  Bin HE  

     
    PAPER-Biocybernetics, Neurocomputing
    Vol: E86-D No:9  Page(s): 1909-1920

    The objective of this study was to explore suitable spatial filters for inverse estimation of cortical potentials from the scalp electroencephalogram. The effect of incorporating the noise covariance into inverse procedures was examined by computer simulations. The parametric projection filter, which allows inverse estimation in the presence of information on the noise covariance, was applied to an inhomogeneous three-concentric-sphere model under various noise conditions in order to estimate the cortical potentials from the scalp potentials. The present simulation results suggest that, when the correlation between the signal and noise is low, incorporating information on the noise covariance allows better estimation of cortical potentials than inverse solutions without knowledge of the noise covariance. A method for determining the optimum regularization parameter, applicable to parametric inverse techniques, is also discussed.
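
    A hedged numerical sketch of the general idea of incorporating the noise covariance into an inverse operator, using the generic Tikhonov-type estimator H = A^T (A A^T + gamma R_n)^(-1); the parametric projection filter used in the paper has its own specific form and parameter-selection method.

        import numpy as np

        def covariance_weighted_inverse(A, R_noise, gamma):
            """Hedged sketch of a regularized inverse that uses the noise covariance:
            x_hat = A^T (A A^T + gamma * R_noise)^(-1) y. Generic estimator only,
            not the parametric projection filter of the paper."""
            G = A @ A.T + gamma * R_noise
            return A.T @ np.linalg.inv(G)

        # Toy forward model: 4 scalp sensors, 6 cortical sources, correlated noise.
        rng = np.random.default_rng(0)
        A = rng.standard_normal((4, 6))
        R_noise = np.eye(4) + 0.2 * np.ones((4, 4))
        H = covariance_weighted_inverse(A, R_noise, gamma=0.1)
        x_true = rng.standard_normal(6)
        y = A @ x_true + 0.05 * rng.standard_normal(4)
        print((H @ y).round(2))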

  • Reactive ECR-Sputter-Deposition of Ni-Zn Ferrite Thin-Films for Backlayer of PMR Media

    Hirofumi WADA  Setsuo YAMAMOTO  Hiroki KURISU  Mitsuru MATSUURA  

     
    PAPER
    Vol: E86-C No:9  Page(s): 1846-1850

    A reactive sputtering method using an Electron-Cyclotron-Resonance (ECR) microwave plasma was used to deposit Ni-Zn ferrite thin films for the soft magnetic backlayer of Co-containing spinel ferrite thin-film perpendicular magnetic recording (PMR) media. Ni-Zn spinel ferrite thin films with a preferential (100) orientation and a relatively low coercivity of 15 Oe were obtained at a high deposition rate of 14 nm/min and at a temperature below 200 degrees C. Although post-annealing in air at 200 degrees C was effective in decreasing the coercivity of the Ni-Zn ferrite thin films, the saturation magnetization and initial permeability decreased and the surface smoothness deteriorated at the same time. The Ni-Zn ferrite thin films prepared by ECR sputtering are promising as the backlayer of perpendicular magnetic recording media, but further improvement is required in terms of the soft magnetic properties, the grain size and the surface roughness.

  • A Multipurpose Image Watermarking Method for Copyright Notification and Protection

    Zhe-Ming LU  Hao-Tian WU  Dian-Guo XU  Sheng-He SUN  

     
    LETTER-Applications of Information Security Techniques
    Vol: E86-D No:9  Page(s): 1931-1933

    This paper presents an image watermarking method for two purposes: to notify the copyright owner with a visible watermark, and to protect the copyright with an invisible watermark. These two watermarks are embedded in different blocks with different methods. Simulation results show that the visible watermark is hard to remove and the invisible watermark is robust.
