
IEICE TRANSACTIONS on Information and Systems

  • Impact Factor: 0.72
  • Eigenfactor: 0.002
  • Article Influence: 0.1
  • CiteScore: 1.4


Volume E92-D No.2 (Publication Date: 2009/02/01)

    Special Section on Foundations of Computer Science
  • FOREWORD Open Access

    Hirotsugu KAKUGAWA  

     
    FOREWORD

      Page(s):
    107-107
  • Self-Stabilization in Dynamic Networks

    Toshimitsu MASUZAWA  

     
    INVITED PAPER

      Page(s):
    108-115

    A self-stabilizing protocol is a protocol that achieves its intended behavior regardless of the initial configuration (i.e., global state). Thus, a self-stabilizing protocol is adaptive to any number and any type of topology changes of networks: after the last topology change occurs, the protocol starts to converge to its intended behavior. This advantage makes self-stabilizing protocols extremely attractive for designing highly dependable distributed systems on dynamic networks. While conventional self-stabilizing protocols require that the networks remain static during convergence to the intended behaviors, some recent works undertook the challenge of realizing self-stabilization in dynamic networks with frequent topology changes. This paper introduces some of the challenges as a new direction of research in self-stabilization.
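
    The flavor of self-stabilization can be illustrated with Dijkstra's classic K-state token ring, a minimal Python sketch of which follows (this is the textbook static-network example, not one of the dynamic-network protocols surveyed in the paper; the ring size, K and step count are arbitrary choices):

      import random

      def privileged(states, i):
          """Machine 0 is privileged when it equals its left neighbor;
          every other machine is privileged when it differs from it."""
          n = len(states)
          return states[0] == states[n - 1] if i == 0 else states[i] != states[i - 1]

      def fire(states, i, K):
          """Move of a privileged machine in Dijkstra's K-state protocol."""
          states[i] = (states[0] + 1) % K if i == 0 else states[i - 1]

      def run(n=5, K=6, steps=200, seed=1):
          rng = random.Random(seed)
          states = [rng.randrange(K) for _ in range(n)]   # arbitrary initial configuration
          for _ in range(steps):
              priv = [i for i in range(n) if privileged(states, i)]
              fire(states, rng.choice(priv), K)           # a central daemon picks one privileged machine
          return sum(privileged(states, i) for i in range(n))

      print(run())   # prints 1: after convergence exactly one privilege circulates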

  • Successful Manipulation in Stable Marriage Model with Complete Preference Lists

    Hirotatsu KOBAYASHI  Tomomi MATSUI  

     
    PAPER

      Page(s):
    116-119

    This paper deals with a strategic issue in the stable marriage model with complete preference lists (i.e., a preference list of an agent is a permutation of all the members of the opposite sex). Given complete preference lists of n men over n women and a marriage µ, we consider the problem of finding preference lists of n women over n men such that the men-proposing deferred acceptance algorithm (Gale-Shapley algorithm) applied to the lists produces µ. We show a simple necessary and sufficient condition for the existence of such a set of preference lists of women over men. Our condition directly gives an O(n^2) time algorithm for finding such a set of preference lists, if one exists.
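
    For reference, the men-proposing deferred acceptance (Gale-Shapley) algorithm that the abstract builds on can be sketched in a few lines of Python (the preference lists below are illustrative only):

      def deferred_acceptance(men_prefs, women_prefs):
          """Men-proposing Gale-Shapley: dicts map each person to a preference
          list of the opposite sex, most preferred first. Returns {man: woman}."""
          rank = {w: {m: i for i, m in enumerate(p)} for w, p in women_prefs.items()}
          nxt = {m: 0 for m in men_prefs}      # next woman each man will propose to
          engaged = {}                         # woman -> man
          free = list(men_prefs)
          while free:
              m = free.pop()
              w = men_prefs[m][nxt[m]]
              nxt[m] += 1
              if w not in engaged:
                  engaged[w] = m
              elif rank[w][m] < rank[w][engaged[w]]:
                  free.append(engaged[w])      # w prefers m; her partner becomes free again
                  engaged[w] = m
              else:
                  free.append(m)               # w rejects m
          return {m: w for w, m in engaged.items()}

      men = {"m1": ["w1", "w2", "w3"], "m2": ["w2", "w1", "w3"], "m3": ["w1", "w3", "w2"]}
      women = {"w1": ["m2", "m1", "m3"], "w2": ["m1", "m2", "m3"], "w3": ["m1", "m2", "m3"]}
      print(deferred_acceptance(men, women))   # {'m1': 'w1', 'm2': 'w2', 'm3': 'w3'}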

  • A Polynomial Time Algorithm for Finding a Minimally Generalized Linear Interval Graph Pattern

    Hitoshi YAMASAKI  Takayoshi SHOUDAI  

     
    PAPER

      Page(s):
    120-129

    A graph is an interval graph if and only if each vertex in the graph can be associated with an interval on the real line such that any two vertices are adjacent in the graph exactly when the corresponding intervals have a nonempty intersection. A number of interesting applications for interval graphs have been found in the literature. In order to find structural features common to structural data which can be represented by intervals, this paper proposes new interval graph structured patterns, called linear interval graph patterns, and a polynomial time algorithm for finding a minimally generalized linear interval graph pattern explaining a given finite set of interval graphs.
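
    The defining property of interval graphs is easy to state in code; the following small Python sketch (with made-up intervals) builds the interval graph in which two vertices are adjacent exactly when their closed intervals intersect:

      def interval_graph(intervals):
          """intervals: dict {vertex: (left, right)}. Returns the adjacency sets
          of the interval graph: u ~ v iff their closed intervals intersect."""
          adj = {v: set() for v in intervals}
          names = list(intervals)
          for i, u in enumerate(names):
              for v in names[i + 1:]:
                  (a1, b1), (a2, b2) = intervals[u], intervals[v]
                  if max(a1, a2) <= min(b1, b2):   # nonempty intersection
                      adj[u].add(v)
                      adj[v].add(u)
          return adj

      print(interval_graph({"a": (0, 2), "b": (1, 4), "c": (3, 5), "d": (6, 7)}))
      # a-b and b-c are the only edges; d is isolated.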

  • Computational Complexities of University Interview Timetabling

    Naoyuki KAMIYAMA  Yuuki KIYONARI  Eiji MIYANO  Shuichi MIYAZAKI  Katsuhisa YAMANAKA  

     
    PAPER

      Page(s):
    130-140

    This paper introduces a new university timetabling problem, called interview timetabling. In this problem, some constant number, say three, of referees are assigned to each of 2n graduate students. Our task is to construct a presentation timetable of these 2n students using n timeslots and two rooms, so that two students evaluated by the same referee must be assigned to different timeslots. The optimization goal is to minimize the total number of movements of all referees between the two rooms. This problem arises from real-world interview timetabling at Kyoto University. We propose two restricted models of this problem and investigate their time complexities.

  • An Optimal Parallel Algorithm for Constructing a Spanning Tree on Circular Permutation Graphs

    Hirotoshi HONMA  Saki HONMA  Shigeru MASUYAMA  

     
    PAPER

      Page(s):
    141-148

    The spanning tree problem is to find a tree that connects all the vertices of a given graph G. This problem has many applications, such as electric power systems, computer network design and circuit analysis. Klein and Stein demonstrated that a spanning tree can be found in O(log n) time with O(n+m) processors on the CRCW PRAM. In general, it is known that more efficient parallel algorithms can be developed by restricting the classes of graphs considered. Circular permutation graphs properly contain the set of permutation graphs as a subclass and were first introduced by Rotem and Urrutia, who provided an O(n^2.376) time recognition algorithm. Circular permutation graphs and their models find several applications in VLSI layout. In this paper, we propose an optimal parallel algorithm for constructing a spanning tree on circular permutation graphs. It runs in O(log n) time with O(n/log n) processors on the EREW PRAM.

  • Approximation Preserving Reductions among Item Pricing Problems

    Ryoso HAMANE  Toshiya ITOH  Kouhei TOMITA  

     
    PAPER

      Page(s):
    149-157

    When a store sells items to customers, the store wishes to determine the prices of the items so as to maximize its profit. Intuitively, if the store sells the items at low (resp. high) prices, the customers buy more (resp. fewer) items, which provides less profit to the store, so it is hard for the store to decide the prices of items. Assume that the store has a set V of n items and there is a set E of m customers who wish to buy those items, and also assume that each item i ∈ V has a production cost d_i and each customer e_j ∈ E has a valuation v_j on the bundle e_j ⊆ V of items. When the store sells an item i ∈ V at the price r_i, the profit for the item i is p_i = r_i - d_i. The goal of the store is to decide the price of each item so as to maximize its total profit. We refer to this maximization problem as the item pricing problem. In most of the previous works, the item pricing problem was considered under the assumption that p_i ≥ 0 for each i ∈ V; however, Balcan et al. [In Proc. of WINE, LNCS 4858, 2007] introduced the notion of a "loss leader" and showed that the seller can obtain more total profit when p_i < 0 is allowed than when it is not. In this paper, we derive approximation preserving reductions among several item pricing problems and show that all of them have algorithms with good approximation ratios.
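
    The profit model above is easy to make concrete. The Python sketch below assumes, purely for illustration, the common single-minded buying rule (a customer buys its whole bundle exactly when the bundle's total price is at most its valuation); this rule is not spelled out in the abstract. Negative margins (loss leaders) are allowed.

      def total_profit(prices, costs, customers):
          """prices, costs: dicts item -> price r_i / production cost d_i.
          customers: list of (bundle, valuation) pairs with bundle a set of items.
          Assumed rule: a customer buys its bundle iff its total price <= valuation."""
          profit = 0.0
          for bundle, valuation in customers:
              if sum(prices[i] for i in bundle) <= valuation:
                  profit += sum(prices[i] - costs[i] for i in bundle)   # p_i = r_i - d_i
          return profit

      costs = {"milk": 1.0, "cereal": 1.0}
      customers = [({"milk"}, 1.5), ({"milk", "cereal"}, 4.0)]
      print(total_profit({"milk": 1.5, "cereal": 2.5}, costs, customers))   # 2.5, all margins >= 0
      print(total_profit({"milk": 0.5, "cereal": 3.5}, costs, customers))   # 1.5, milk priced below cost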

  • A Space-Saving Approximation Algorithm for Grammar-Based Compression

    Hiroshi SAKAMOTO  Shirou MARUYAMA  Takuya KIDA  Shinichi SHIMOZONO  

     
    PAPER

      Page(s):
    158-165

    A space-efficient approximation algorithm for the grammar-based compression problem, which asks, for a given string, to find a smallest context-free grammar deriving the string, is presented. For an input of length n and an optimum CFG size g, the algorithm consumes only O(g log g) space and O(n log* n) time to achieve an O((log* n) log n) approximation ratio to the optimum compression, where log* n is the maximum number of logarithms satisfying log log ⋯ log n > 1. This ratio is thus regarded as almost O(log n), which is the currently best approximation ratio. While g depends on the string, it is known that g = Ω(log n) and g = O(n/log_k n) for strings from a k-letter alphabet [12].
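
    The iterated logarithm log* n used in the bounds above grows extremely slowly; under the convention stated in the abstract it can be computed as follows (base-2 logarithms assumed here):

      import math

      def log_star(n, base=2.0):
          """Maximum number of times the logarithm can be applied to n while the
          result stays greater than 1 (other conventions may differ by one)."""
          count, x = 0, math.log(float(n), base)
          while x > 1.0:
              count += 1
              x = math.log(x, base)
          return count

      print(log_star(16), log_star(65536), log_star(10 ** 9))   # 2 3 4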

  • Path Maximum Query and Path Maximum Sum Query in a Tree

    Sung Kwon KIM  Jung-Sik CHO  Soo-Cheol KIM  

     
    PAPER

      Page(s):
    166-171

    Let T be a node-weighted tree with n nodes, and let π(u,v) denote the path between two nodes u and v in T. We address two problems: (i) Path Maximum Query: preprocess T so that, for a query pair of nodes u and v, the maximum weight on π(u,v) can be found quickly. (ii) Path Maximum Sum Query: preprocess T so that, for a query pair of nodes u and v, the maximum weight sum subpath of π(u,v) can be found quickly. For both problems we present solutions with O(1) query time and O(n log n) preprocessing time.
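
    A naive baseline for the first problem helps fix the definitions (this Python sketch answers a query in O(n) time, far from the paper's O(1)-query solution): root the tree, then walk the deeper endpoint up until the two endpoints meet, taking the maximum node weight along the way.

      def preprocess(n, edges, root=0):
          """Parent and depth arrays of a tree given as an edge list."""
          adj = [[] for _ in range(n)]
          for u, v in edges:
              adj[u].append(v)
              adj[v].append(u)
          parent, depth, seen, stack = [-1] * n, [0] * n, [False] * n, [root]
          seen[root] = True
          while stack:
              u = stack.pop()
              for v in adj[u]:
                  if not seen[v]:
                      seen[v], parent[v], depth[v] = True, u, depth[u] + 1
                      stack.append(v)
          return parent, depth

      def path_max(u, v, weight, parent, depth):
          """Maximum node weight on the path between u and v."""
          best = max(weight[u], weight[v])
          while u != v:
              if depth[u] < depth[v]:
                  u, v = v, u
              u = parent[u]                  # lift the deeper endpoint
              best = max(best, weight[u])
          return best

      parent, depth = preprocess(4, [(0, 1), (1, 2), (1, 3)])
      print(path_max(2, 3, [5, 1, 7, 4], parent, depth))   # 7, the maximum on path 2-1-3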

  • Learning of Elementary Formal Systems with Two Clauses Using Queries

    Hirotaka KATO  Satoshi MATSUMOTO  Tetsuhiro MIYAHARA  

     
    PAPER

      Page(s):
    172-180

    An elementary formal system, EFS for short, is a kind of logic program over strings and is regarded as a set of rules to generate a language. For an EFS Γ, the language L(Γ) denotes the set of all strings generated by Γ. We consider a new form of EFS, called a restricted two-clause EFS, and denote by rEFS the set of all restricted two-clause EFSs. We then study the learnability of rEFS in the exact learning model. The class rEFS contains the class of regular patterns, which is extensively studied in Learning Theory. Let Γ* be a target EFS in rEFS to be learned. In the exact learning model, an oracle for superset queries answers "yes" for an input EFS Γ in rEFS if L(Γ) is a superset of L(Γ*), and outputs a string in L(Γ*)-L(Γ) otherwise. An oracle for membership queries answers "yes" for an input string w if w is included in L(Γ*), and answers "no" otherwise. We show that any EFS in rEFS is exactly identifiable in polynomial time using membership and superset queries. Moreover, for other types of queries, we show that there exists no polynomial time learning algorithm for rEFS using those queries. This result indicates the hardness of learning the class rEFS in the exact learning model in general.

  • Polynomial Time Inductive Inference of TTSP Graph Languages from Positive Data

    Ryoji TAKAMI  Yusuke SUZUKI  Tomoyuki UCHIDA  Takayoshi SHOUDAI  

     
    PAPER

      Page(s):
    181-190

    Two-Terminal Series Parallel (TTSP, for short) graphs are used as data models in applications for electric networks and scheduling problems. We propose a TTSP term graph which is a TTSP graph having structured variables, that is, a graph pattern over a TTSP graph. Let TG_TTSP be the set of all TTSP term graphs whose variable labels are mutually distinct. For a TTSP term graph g in TG_TTSP, the TTSP graph language of g, denoted by L(g), is the set of all TTSP graphs obtained from g by substituting arbitrary TTSP graphs for all variables in g. Firstly, when a TTSP graph G and a TTSP term graph g are given as inputs, we present a polynomial time matching algorithm which decides whether or not L(g) contains G. The minimal language problem for the class L_TTSP = {L(g) | g ∈ TG_TTSP} is, given a set S of TTSP graphs, to find a TTSP term graph g in TG_TTSP such that L(g) is minimal among all TTSP graph languages which contain all TTSP graphs in S. Secondly, we give a polynomial time algorithm for solving the minimal language problem for L_TTSP. Finally, we show that L_TTSP is polynomial time inductively inferable from positive data.

  • Multi-Party Quantum Communication Complexity with Routed Messages

    Seiichiro TANI  Masaki NAKANISHI  Shigeru YAMASHITA  

     
    PAPER

      Page(s):
    191-199

    This paper describes a general quantum lower bounding technique for the communication complexity of a function that depends on the inputs given to two parties connected via paths, which may be shared with other parties, on a network of any topology. The technique can also be employed to obtain a lower bound on the quantum communication complexity of some functions that depend on the inputs distributed over all parties on the network. As a typical application, we apply our technique to the distinctness problem of deciding whether there is a pair of parties with identical inputs on a k-party ring; almost matching upper bounds are also given.

  • Formal Language Theoretic Approach to the Disclosure Tree Strategy in Trust Negotiation

    Yoshiaki TAKATA  Hiroyuki SEKI  

     
    PAPER

      Page(s):
    200-210

    Trust negotiation is an authorization technique based on digital credentials, in which both a user and a server gradually establish trust in each other by repeatedly exchanging their credentials. A trust negotiation strategy is a function that returns a set of credentials to disclose to the other party, depending on the policies and the set of already disclosed credentials. The disclosure tree strategy (DTS), proposed by Yu et al., is one of the strategies that satisfy desirable properties. DTS in a simple implementation requires exponential time and space; however, neither an efficient algorithm nor a lower bound on its complexity was known. In this paper, we investigate the computational complexity of DTS. We formulate subproblems of DTS as problems on derivation trees of a context-free grammar (CFG), and analyze the computational complexity of the subproblems using concepts from CFGs. As a result, we show that two subproblems of DTS, EVL and MSET, are NP-complete and NP-hard, respectively, while both are solvable in polynomial time if we modify EVL not to require non-redundancy and MSET not to answer any subset useless for leading the negotiation to success.

  • Extending a Role Graph for Role-Based Access Control

    Yoshiharu ASAKURA  Yukikazu NAKAMOTO  

     
    PAPER

      Page(s):
    211-219

    Role-based access control (RBAC) is widely used as an access control mechanism in various computer systems. Since an organization's lines of authority influence the authorized privileges of jobs, roles also form a hierarchical structure. A role graph is a model that represents role hierarchies and is suitable for the runtime phase of RBAC deployment. However, since a role graph cannot take various forms for given roles and cannot handle abstraction of roles well, it is not suitable for the design phase of RBAC deployment. Hence, an extended role graph, which can take a more flexible form than a role graph, is proposed. The extended role graph improves diversity and clarifies the abstraction of roles, making it suitable for the design phase. An equivalent transformation algorithm (ETA) for transforming an extended role graph into an equivalent role graph is also proposed. Using the ETA, system administrators can efficiently deploy RBAC by using an extended role graph in the design phase and a standard role graph in the runtime phase.

  • Constraint-Based Multi-Completion Procedures for Term Rewriting Systems

    Haruhiko SATO  Masahito KURIHARA  Sarah WINKLER  Aart MIDDELDORP  

     
    PAPER

      Page(s):
    220-234

    In equational theorem proving, convergent term rewriting systems play a crucial role. In order to compute convergent term rewriting systems, the standard completion procedure (KB) was proposed by Knuth and Bendix and has been improved in various ways. The multi-completion system MKB developed by Kurihara and Kondo accepts as input a set of reduction orders in addition to equations and efficiently simulates parallel processes, each of which executes the KB procedure with one of the given orderings. Wehrman and Stump also developed a new variant of the completion procedure, constraint-based completion, in which reduction orders need not be given, owing to the use of modern automated termination checkers. As a result, constraint-based procedures simulate the execution of parallel KB processes in a sequential way, but naive search algorithms sometimes cause serious inefficiency when the number of potential reduction orders is large. In this paper, we present a new procedure, called the constraint-based multi-completion procedure MKBcs, by augmenting constraint-based completion with the framework of multi-completion, which suppresses the combinatorial explosion by sharing inferences among the processes. The existing constraint-based system SLOTHROP, which basically employs best-first search, is more efficient when its built-in heuristics for process selection are appropriate, but when they are not, our system is more efficient. Therefore, both systems have their role to play.

  • Static Dependency Pair Method for Simply-Typed Term Rewriting and Related Techniques

    Keiichirou KUSAKARI  Masahiko SAKAI  

     
    PAPER

      Page(s):
    235-247

    The static dependency pair method, proposed by us, can effectively prove termination of simply-typed term rewriting systems (STRSs). Its theoretical basis is given by the notion of strong computability. The method analyzes a static recursive structure based on definition dependency; by solving suitable constraints generated by the analysis, we can prove termination. Since this method is not applicable to every system, we previously proposed a class, namely plain function-passing, as a restriction. In this paper, we first propose the class of safe function-passing, which relaxes the restriction of plain function-passing. To solve constraints, we often use the notion of reduction pairs, which are designed from a reduction order by the argument filtering method. Next, we improve the argument filtering method for STRSs. Unlike the existing method for STRSs, our argument filtering method does not destroy the type structure; hence, it can effectively apply reduction orders which make use of type information. To reduce constraints, the notion of usable rules is proposed. Finally, we enhance the effectiveness of reducing constraints by incorporating argument filtering into usable rules for STRSs.

  • Linear-Time Recognizable Classes of Tree Languages by Deterministic Linear Pushdown Tree Automata

    Akio FUJIYOSHI  

     
    PAPER

      Page(s):
    248-254

    In this paper, we study deterministic linear pushdown tree automata (deterministic L-PDTAs) and some variations. Since recognition of an input tree by a deterministic L-PDTA can be done in linear time, deterministic L-PDTAs are suitable for many kinds of applications. A strict hierarchy will be shown among the classes of tree languages defined by a variety of deterministic L-PDTAs. It will also be shown that deterministic L-PDTAs are weakly equivalent to nondeterministic L-PDTAs.

  • On the Non-existence of Rotation-Symmetric von Neumann Neighbor Number-Conserving Cellular Automata of Which the State Number is Less than Four

    Naonori TANIMOTO  Katsunobu IMAI  Chuzo IWAMOTO  Kenichi MORITA  

     
    LETTER

      Page(s):
    255-257

    A number-conserving cellular automaton (NCCA) is a cellular automaton in which all states of cells are represented by integers and the total of the states over a configuration is conserved throughout its computing process. In contrast to normal cellular automata, there are infinitely many assignments of states for NCCAs with a constant state number. For von Neumann neighbor (radius one) NCCAs with rotation-symmetry, a local function can be represented by a summation of four binary functions. In this paper, we show, by using this representation, that the minimum size of the state set of rotation-symmetric von Neumann neighbor NCCAs is 5.

  • Regular Section
  • A Message-Efficient Peer-to-Peer Search Protocol Based on Adaptive Index Dissemination

    Yu WU  Taisuke IZUMI  Fukuhito OOSHITA  Hirotsugu KAKUGAWA  Toshimitsu MASUZAWA  

     
    PAPER-Computation and Computational Models

      Page(s):
    258-268

    Resource search is a fundamental problem in large-scale and highly dynamic Peer-to-Peer (P2P) systems. Unstructured search approaches are widely used because of their flexibility and robustness; however, such approaches incur high communication cost. Index-dissemination-based search is an efficient kind of unstructured search approach. We investigate such approaches with the aim of minimizing the system communication cost. Based on a dynamic system model in which peers continuously leave and join, we solve two problems. One problem is how to efficiently disseminate and maintain a given number of indices. The other is to determine the optimal number of indices for each resource object of a given popularity. Finally, we propose an optimized index dissemination scheme which is fully decentralized and self-adaptive. A remarkable advantage is that the scheme incurs no additional communication cost to achieve the self-adaptive feature.

  • Test Compression for Robust Testable Path Delay Fault Testing Using Interleaving and Statistical Coding

    Kazuteru NAMBA  Hideo ITO  

     
    PAPER-Dependable Computing

      Page(s):
    269-282

    This paper proposes a method providing efficient test compression. The proposed method is for robust testable path delay fault testing with a scan design facilitating two-pattern testing. In the proposed method, test data are interleaved before test compression using statistical coding. This paper also presents a test architecture for two-pattern testing using the proposed method. The proposed method is experimentally evaluated from several viewpoints such as compression rates, test application time and area overhead. For robust testable path delay fault testing on 11 out of 20 ISCAS89 benchmark circuits, the proposed method provides better compression rates than existing methods such as Huffman coding, run-length coding, Golomb coding, frequency-directed run-length (FDR) coding and variable-length input Huffman coding (VIHC).

  • Visual Aerial Navigation through Adaptive Prediction and Hyper-Space Image Matching

    Muhammad Anwaar MANZAR  Tanweer Ahmad CHEEMA  Abdul JALIL  Ijaz Mansoor QURESHI  

     
    PAPER-Pattern Recognition

      Page(s):
    283-297

    Image matching is an important area of research in the fields of artificial intelligence, machine vision and visual navigation. This paper presents a new image matching scheme suitable for visual navigation. In this scheme, gray scale images are sliced and quantized to form sub-band binary images. The information in the binary images is then signaturized to form a vector space, and the signatures are sorted by significance. The sorted signatures are then normalized to transform the represented pictorial features into a rotation- and scale-invariant form. For image matching, the two vector spaces of the images are compared in the transformed domain. This comparison yields efficient results directly in the image spatial domain, avoiding the need for an inverse image transformation. Compared with conventional correlation, it avoids the extensive squared-error calculations over the entire image. In fact, it directly guides the solution to converge towards the estimate given by the adaptive prediction, for high-speed performance in an aerial video sequence. A four-dimensional solution population scheme is also presented, together with a matching confidence factor that helps terminate the iterations when the essential matching conditions have been achieved. The proposed scheme gives robust and fast results for normal, scaled and rotated templates. A speed comparison with older techniques shows the computational viability of the new technique and its much weaker dependence on image size. The method also shows immunity to 30 dB AWGN and impulsive noise.

  • Transcoding-after-Smoothing System for VBR MPEG Video Streaming

    I Gusti Bagus Baskara NUGRAHA  Hiroyoshi MORITA  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    298-309

    Delivering video streaming services over the Internet poses several challenges, two of which are the heterogeneity of network capacities and the variability of the video data rate. The capacity of network segments is constrained, while the rate of the video data to be transmitted is highly variable in order to achieve near-constant image quality. Therefore, to send variable bit rate (VBR) video data over a capacity-constrained network, its bit rate should be adjusted. In this paper, a system called Transcoding-after-Smoothing (TaS), which combines bit rate transcoding and a bit rate smoothing algorithm, is proposed to adjust the transmission bit rate of VBR MPEG video data. The system smoothes out the transmission rate of video data while, when necessary, also transcoding some video frames in order to keep the transmission bit rate below a certain threshold value. Two kinds of TaS methods are proposed. One method has no transcoding preference, while the other uses a frame type preference in which an intra-coded frame is the last one to be transcoded. These methods are implemented in our video server, where VBR video data are accessed by a client. Our experimental results show that the first TaS method yields a significant reduction in the number of transcoded frames compared with the second TaS method and conventional frame-level transcoding, whereas the second method performs better in minimizing the quality distortion.

  • A Fast Block Matching Algorithm Based on Motion Vector Correlation and Integral Projections

    Mohamed GHONEIM  Norimichi TSUMURA  Toshiya NAKAGUCHI  Takashi YAHAGI  Yoichi MIYAKE  

     
    PAPER-Image Processing and Video Processing

      Page(s):
    310-318

    The block-based motion estimation technique is adopted by various video coding standards to reduce the temporal redundancy in video sequences. The core of that technique is the search algorithm used to find the location of the best matched block. The full search algorithm is the most straightforward and is optimal, but it is computationally demanding; consequently, many fast and suboptimal search algorithms have been proposed. Reducing the number of locations being searched is the usual approach to decreasing the computational load of full search. In this paper, a hybridization of an adaptive search algorithm and the full search algorithm is proposed. The adaptive search algorithm benefits from the correlation among spatially and temporally adjacent blocks. At the same time, a feature-domain matching criterion is used to reduce the complexity of applying the conventional pixel-based criterion. It is shown that the proposed algorithm produces good quality performance and requires less computational time compared with popular block matching algorithms.
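
    As a generic illustration of matching with integral projections (not the authors' hybrid algorithm), the numpy sketch below reduces each candidate block to its row and column sums and uses the absolute difference of those projections as the matching cost instead of a full pixel-wise comparison:

      import numpy as np

      def projections(block):
          """Row and column integral projections of a block."""
          return block.sum(axis=1), block.sum(axis=0)

      def match_block(template, frame, search=8):
          """Displacement (dy, dx) within +/-search pixels whose candidate block
          has the closest row/column projections to the template; the template
          is assumed to correspond to offset (search, search) in the frame."""
          h, w = template.shape
          t_row, t_col = projections(template.astype(np.int64))
          best_cost, best_dv = None, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  y, x = search + dy, search + dx
                  cand = frame[y:y + h, x:x + w].astype(np.int64)
                  r, c = projections(cand)
                  cost = np.abs(r - t_row).sum() + np.abs(c - t_col).sum()
                  if best_cost is None or cost < best_cost:
                      best_cost, best_dv = cost, (dy, dx)
          return best_dv

      rng = np.random.default_rng(0)
      frame = rng.integers(0, 256, size=(48, 48), dtype=np.uint8)
      template = frame[11:43, 6:38]        # a 32x32 block displaced by (3, -2) from (8, 8)
      print(match_block(template, frame))  # (3, -2)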

  • A Neural Network Based Algorithm for Particle Pairing Problem of PIV Measurements

    Achyut SAPKOTA  Kazuo OHMI  

     
    PAPER-Image Recognition, Computer Vision

      Page(s):
    319-326

    Particle Image Velocimetry (PIV) is a widely used tool for the measurement of various kinematic properties of fluid flow. In this measurement technique, a pulsed laser light sheet is used to illuminate a flow field seeded with tracer particles, and at each instance of illumination the positions of the particles are recorded on digital CCD cameras. The resulting two camera frames can then be processed by various techniques to obtain the velocity vectors. One such technique involves tracking individual particles so as to identify the displacement of every particle present in the flow field. The displacement of each particle thus determined, divided by the known time interval, gives the velocity information. The accuracy as well as the efficiency of such measurement systems depends upon the reliability of the algorithms used to track those particles. In the present work, a cellular neural network based algorithm is proposed. Performance tests have been carried out using standard flow images, and the algorithm compares well with existing algorithms in terms of reliability, accuracy and processing time.

  • A New Similar Trajectory Search Algorithm Based on Spatio-Temporal Similarity Measure for Moving Objects in Road Networks

    Young-Chang KIM  Jae-Woo CHANG  

     
    LETTER-Database

      Page(s):
    327-331

    The use of historical trajectories of moving objects has greatly increased for various applications in road networks. For instance, similar patterns of moving-object trajectories are very useful for designing the transportation network of a new city. In this paper, we define a spatio-temporal similarity measure based on road network distance, rather than Euclidean distance. We also propose a new similar trajectory search algorithm based on this spatio-temporal measure, using an efficient pruning mechanism. Finally, we show the effectiveness of our algorithm in terms of both retrieval accuracy and retrieval efficiency.

  • Schedulability Analysis on Generalized Quantum-Based Fixed Priority Scheduling

    Moonju PARK  

     
    LETTER-Dependable Computing

      Page(s):
    332-335

    This letter analyzes quantum-based scheduling of real-time tasks when each task is allowed to have a different quantum size. It is shown that generalized quantum-based scheduling dominates preemption threshold scheduling in the sense that if tasks are schedulable by preemption threshold scheduling then the tasks must be schedulable by generalized quantum-based scheduling, but the converse does not hold. To determine the schedulability of tasks in quantum-based scheduling, a method to calculate the worst case response time is also presented.
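
    For context, the classical worst-case response-time recurrence for (non-quantum) fixed-priority preemptive scheduling, which the letter's generalized quantum-based analysis builds on, is R_i = C_i + Σ_{j ∈ hp(i)} ⌈R_i / T_j⌉ C_j, iterated to a fixed point; a minimal Python sketch with made-up task parameters follows.

      import math

      def response_time(tasks, i):
          """Worst-case response time of task i under fixed-priority preemptive
          scheduling; tasks is a list of (C, T) pairs ordered from highest to
          lowest priority, with deadline = period assumed. None if unschedulable."""
          C, T = tasks[i]
          R = C
          while True:
              R_next = C + sum(math.ceil(R / Tj) * Cj for Cj, Tj in tasks[:i])
              if R_next > T:
                  return None
              if R_next == R:
                  return R
              R = R_next

      tasks = [(1, 4), (2, 6), (3, 12)]   # (execution time C, period T)
      print([response_time(tasks, i) for i in range(len(tasks))])   # [1, 3, 10]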

  • Design for Delay Fault Testability of 2-Rail Logic Circuits

    Kentaroh KATOH  Kazuteru NAMBA  Hideo ITO  

     
    LETTER-Dependable Computing

      Page(s):
    336-341

    This paper presents a scan design for delay fault testability of 2-rail logic circuits. The flip flops used in the scan design are based on master-slave ones. The proposed scan design provides complete fault coverage in delay fault testing of 2-rail logic circuits. In two-pattern testing with the proposed scan design, initial vectors are set using the set-reset operation, and the scan-in operation for initial vectors is not required. Hence, the test application time is reduced to about half that of the enhanced scan design. Because the additional function is only the set-reset operation of the slave latch, the area overhead is small. The evaluation shows that the differences in the area overhead of the proposed scan design from those of the standard scan design and the enhanced scan design are 2.1 and -14.5 percent on average, respectively.

  • An Effective Method on Applying Feedback Error Learning Scheme to Functional Electrical Stimulation Controller

    Takashi WATANABE  Kenji KUROSAWA  Makoto YOSHIZAWA  

     
    LETTER-Rehabilitation Engineering and Assistive Technology

      Page(s):
    342-345

    A Feedback Error Learning (FEL) scheme was found to be applicable to joint angle control by Functional Electrical Stimulation (FES) in our previous study. However, the FEL-FES controller had problems in learning the inverse dynamics model (IDM) in some cases. In this paper, methods of applying FEL to FES control were examined through computer simulation in controlling 1-DOF movement of the wrist joint by stimulating two muscles, under several control conditions and with several subject models. The problems in applying FEL to an FES controller were suggested to lie in restricting the stimulation intensity to positive values between the minimum and maximum intensities, and in cases of very small IDM output values. Learning of the IDM was greatly improved by taking the IDM output range into account and setting a minimum ANN output value when calculating the ANN connection weight changes.

  • A Filter Method for Feature Selection for SELDI-TOF Mass Spectrum

    Trung-Nghia VU  Syng-Yup OHN  

     
    LETTER-Pattern Recognition

      Page(s):
    346-348

    We propose a new filter method for feature selection for SELDI-TOF mass spectrum datasets. In the method, a new relevance index is defined to represent the goodness of a feature by considering the distribution of samples based on sample counts. The relevance index can be used to obtain feature sets for classification. Our method can be applied to mass spectrum datasets of extremely high dimension and can process clinical datasets of practical size in acceptable computation time, since it is based on simple counting of samples. The new method was applied to three public mass spectrum datasets and showed results better than or comparable to those of conventional filter methods.

  • A Variable Break Prediction Method Using CART in a Japanese Text-to-Speech System

    Deok-Su NA  Myung-Jin BAE  

     
    LETTER-Speech and Hearing

      Page(s):
    349-352

    Break prediction is an important step in text-to-speech systems, as break indices (BIs) have a great influence on how correctly prosodic phrase boundaries are represented. However, accurate prediction is difficult since BIs are often chosen according to the meaning of a sentence or the reading style of the speaker. In Japanese, the prediction of an accentual phrase boundary (APB) and a major phrase boundary (MPB) is particularly difficult. Thus, this paper presents a method to compensate for the prediction errors of APBs and MPBs. First, we define a subtle BI, for which it is difficult to clearly decide between an APB and an MPB, as a variable break (VB), and an explicit BI as a fixed break (FB). The VB is chosen using a classification and regression tree, and multiple prosodic targets relating to pitch and duration are then generated. Finally, unit selection is conducted using the multiple prosodic targets. The experimental results show that the proposed method improves the naturalness of synthesized speech.
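
    As a rough illustration of CART-based break classification (the features and labels below are toy values, not the authors' feature set), scikit-learn's decision tree can play the role of the classification and regression tree:

      from sklearn.tree import DecisionTreeClassifier

      # Toy features per boundary: (phrase length in morae, POS id, distance from sentence start).
      X = [[3, 0, 1], [4, 1, 2], [9, 2, 5], [8, 2, 6],
           [2, 0, 1], [10, 1, 7], [7, 2, 4], [3, 1, 2]]
      y = [0, 0, 1, 1, 0, 1, 1, 0]   # 0 = accentual phrase boundary, 1 = major phrase boundary

      cart = DecisionTreeClassifier(max_depth=3, random_state=0)   # CART-style tree
      cart.fit(X, y)
      print(cart.predict([[9, 2, 6], [2, 0, 1]]))   # expected [1 0] on this toy data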

  • Chrominance Compensation for Multi-View Video Coding

    Min-Woo PARK  Jong-Tae PARK  Gwang-Hoon PARK  Doug-Young SUH  

     
    LETTER-Image Processing and Video Processing

      Page(s):
    353-356

    This letter introduces a cost-effective chrominance compensation scheme. The proposed method is applied to both the 'INTER 16×16' and 'SKIP' modes in anchor P-pictures only. Under the JVT common test conditions, simulation results show that the proposed method obtains average BD-PSNR gains of 0.14 dB and 0.13 dB for U and V, respectively, while maintaining almost the same BD-PSNR for Y. In the low bit-rate range, the average BD-PSNR gains for Y, U and V are 0.14 dB, 0.49 dB and 0.53 dB, respectively. The additional computational complexity is marginal because the number of anchor P-pictures is very small in comparison with the whole coded video sequence. Nevertheless, the proposed method can significantly improve the coding efficiency of the color components.

  • Category Constrained Learning Model for Scene Classification

    Yingjun TANG  De XU  Guanghua GU  Shuoyan LIU  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    357-360

    We present a novel model, named Category Constraint-Latent Dirichlet Allocation (CC-LDA), to learn and recognize natural scene categories. Previous work had to resort to an additional classifier after obtaining the image topic representation. Our model incorporates the category information into topic inference, so every category is represented by its own topic simplex and topic size, which is consistent with human cognitive habits. The significant feature of our model is that it can perform discrimination without an additional classifier, at the same time as obtaining the topic representation. We investigate the classification performance on various scene category tasks. The experiments demonstrate that our learning model achieves better performance with less training data.

  • Action Recognition Using Visual-Neuron Feature

    Ning LI  De XU  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    361-364

    This letter proposes a neurobiological approach to action recognition. In this approach, actions are represented by a visual-neuron feature (VNF) based on a quantitative model of object representation in the primate visual cortex. A supervised classification technique is then used to classify the actions. The proposed VNF is invariant to affine translation and scaling of moving objects while maintaining action specificity. Moreover, it is robust to the deformation of actors. Experiments on publicly available action datasets demonstrate that the proposed approach outperforms conventional action recognition models based on computer-vision features.

  • An Illumination Invariant Bimodal Method Employing Discriminant Features for Face Recognition

    JiYing WU  QiuQi RUAN  Gaoyun AN  

     
    LETTER-Image Recognition, Computer Vision

      Page(s):
    365-368

    A novel bimodal method for face recognition under low-level lighting conditions is proposed. It fuses an enhanced gray-level image and an illumination-invariant geometric image at the feature level. To further improve the recognition performance under large variations in attributes such as pose and expression, discriminant features are extracted from the source images using a wavelet-transform-based method. The features are adaptively fused to reconstruct the final face sample, and FLD is then used to generate a supervised discriminant space for the classification task. Experiments show that the bimodal method outperforms conventional methods under complex conditions.

  • Saliency-Guided Lighting

    Chang Ha LEE  Youngmin KIM  Amitabh VARSHNEY  

     
    LETTER-Computer Graphics

      Page(s):
    369-373

    The comprehensibility of large and complex 3D models can be greatly enhanced by guiding the viewer's attention to important regions. Lighting is crucial to our perception of shape, and careful use of lighting has been widely used in art, scientific illustration, and computer graphics to guide visual attention. In this paper, we explore how the saliency of 3D objects can be used to guide lighting so as to emphasize important regions and suppress less important ones.