
Keyword Search Result

[Keyword] OMP(3945hit)

2301-2320hit(3945hit)

  • Low Encoding Complexity Video Compression Based on Low-Density Parity Check Codes

    Haruhiko KANEKO  

     
    LETTER-Information Theory
    Vol: E89-A No:1, Page(s): 340-347

    Conventional video compression methods generally require a large amount of computation in the encoding process because they perform motion estimation. To reduce encoding complexity, this paper proposes a new video compression method based on low-density parity-check codes. The proposed method is suitable for resource-constrained devices such as mobile phones and satellite cameras.

  • Resource Adaptation Scheme for QoS Provisioning in Pervasive Computing Environments: A Welfare Economic Approach

    Wonjun LEE  Eunkyo KIM  Dongshin KIM  Choonhwa LEE  

     
    PAPER-Networks
    Vol: E89-D No:1, Page(s): 248-255

    Management of applications in the new world of pervasive computing requires new mechanisms to be developed for admission control, QoS negotiation, allocation, and scheduling. To solve such resource-allocation and QoS-provisioning problems within pervasive and ubiquitous computing environments, distribution and decomposition of the computation are important. In this paper we present a QoS-based welfare economic resource management model that models the actual price-formation process of an economy. We compare our economy-based approach with a mathematical approach we previously proposed. We use the constructs of application benefit functions and resource demand functions to represent the system configuration and to solve the resource allocation problems. Finally, empirical studies are conducted to evaluate the performance of our proposed pricing model and to compare it with other approaches, such as a priority-based scheme and a greedy method.
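
    A rough feel for the economic mechanism can be given by a toy price-adjustment loop. The sketch below is our own illustration, not the paper's model: each application is assumed to have a logarithmic benefit function, demands the resource amount that maximizes benefit minus cost at the current price, and the price follows excess demand until the demands fit the capacity.

```python
# Hedged toy sketch (our illustration, not the paper's welfare economic model):
# each application i has benefit b_i(x) = w_i * log(1 + x); at price p it demands
# the x maximizing b_i(x) - p*x, and the price follows excess demand until the
# total demand matches the resource capacity.
def allocate(weights, capacity, price=1.0, step=0.05, iters=2000):
    demand = []
    for _ in range(iters):
        demand = [max(0.0, w / price - 1.0) for w in weights]  # argmax of w*log(1+x) - p*x
        excess = sum(demand) - capacity
        if abs(excess) < 1e-6:
            break
        price = max(1e-6, price + step * excess)               # raise price when over-demanded
    return price, demand

price, alloc = allocate(weights=[1.0, 2.0, 4.0], capacity=10.0)
print("clearing price:", round(price, 4), "allocation:", [round(x, 3) for x in alloc])
```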

  • Non-Audible Murmur (NAM) Recognition

    Yoshitaka NAKAJIMA  Hideki KASHIOKA  Nick CAMPBELL  Kiyohiro SHIKANO  

     
    PAPER
    Vol: E89-D No:1, Page(s): 1-8

    We propose a new practical input interface for the recognition of Non-Audible Murmur (NAM), defined as articulated respiratory sound without vocal-fold vibration, transmitted through the soft tissues of the head. By applying the principle of a medical stethoscope, we developed a microphone attachment that adheres to the skin, found the ideal position for sampling flesh-conducted NAM sound vibration, and retrained an acoustic model with NAM samples. Then, using the Julius Japanese Dictation Toolkit, we tested the feasibility of using this method in place of an external microphone for analyzing air-conducted voice sound.

  • Audio Narrowcasting and Privacy for Multipresent Avatars on Workstations and Mobile Phones

    Owen Noel Newton FERNANDO  Kazuya ADACHI  Uresh DUMINDUWARDENA  Makoto KAWAGUCHI  Michael COHEN  

     
    PAPER
    Vol: E89-D No:1, Page(s): 73-87

    Our group is exploring interactive multi- and hypermedia, especially as applied to virtual and mixed reality multimodal groupware systems. We are researching user interfaces to control source→sink transmissions in synchronous groupware (such as teleconferences, chatspaces, and virtual concerts). We have developed two interfaces for privacy visualization of narrowcasting (selection) functions in collaborative virtual environments (CVEs): one for a workstation WIMP (windows/icon/menu/pointer) GUI (graphical user interface), and one for networked mobile devices, namely 2.5- and 3rd-generation mobile phones. The interfaces are integrated with other CVE clients, interoperating with a heterogeneous multimodal groupware suite that includes stereographic panoramic browsers, spatial audio backends, and speaker arrays. The narrowcasting operations comprise an idiom for selective attention, presence, and privacy: an infrastructure for rich conferencing capability.

  • A Hill-Shift Learning Algorithm of Hopfield Network for Bipartite Subgraph Problem

    Rong-Long WANG  Kozo OKAZAKI  

     
    LETTER-Neural Networks and Bioengineering
    Vol: E89-A No:1, Page(s): 354-358

    In this paper, we present a hill-shift learning method of the Hopfield neural network for the bipartite subgraph problem. The method uses the Hopfield neural network to obtain a near-maximum bipartite subgraph, and shifts the local minimum of the energy function by adjusting the balance between the two terms of the energy function, helping the network escape from the state of the near-maximum bipartite subgraph to the state of the maximum bipartite subgraph or a better one. A large number of instances are simulated to verify the proposed method, with the simulation results showing that the solution quality is superior to that of the best existing parallel algorithm.
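
    To make the escape-from-local-minima idea concrete, here is a minimal local-search sketch. It is not the authors' algorithm: the actual method reweights the two terms of the Hopfield energy function, whereas this toy simply restarts greedy flipping from a perturbed state whenever it gets stuck.

```python
# Hedged sketch (not the authors' hill-shift method): a simple local search for a
# large bipartite subgraph (equivalently, a large cut). Greedy flips imitate the
# Hopfield dynamics; a random perturbation stands in for shifting the energy
# balance to escape a local optimum.
import random

def bipartite_subgraph(n, edges, restarts=30, seed=1):
    rng = random.Random(seed)
    adj = [[] for _ in range(n)]
    for u, v in edges:
        adj[u].append(v); adj[v].append(u)

    def cut(side):                      # edges kept in the bipartite subgraph
        return sum(1 for u, v in edges if side[u] != side[v])

    def gain(side, v):                  # change in cut size if vertex v switches sides
        same = sum(1 for w in adj[v] if side[w] == side[v])
        return 2 * same - len(adj[v])

    side = [rng.randint(0, 1) for _ in range(n)]
    best, best_cut = list(side), cut(side)
    for _ in range(restarts):
        improved = True
        while improved:                 # greedy improvement until a local optimum
            improved = False
            for v in range(n):
                if gain(side, v) > 0:
                    side[v] ^= 1
                    improved = True
        if cut(side) > best_cut:
            best, best_cut = list(side), cut(side)
        side[rng.randrange(n)] ^= 1     # perturb, then search again
    return best, best_cut

print(bipartite_subgraph(5, [(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 0)]))
```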

  • Wearable Telepresence System Based on Multimodal Communication for Effective Teleoperation with a Humanoid

    Yong-Ho SEO  Hun-Young PARK  Taewoo HAN  Hyun Seung YANG  

     
    PAPER
    Vol: E89-D No:1, Page(s): 11-19

    This paper presents a new type of wearable teleoperation system that can be applied to the control of a humanoid robot. The proposed system has self-contained computing hardware with a stereo head-mounted display, a microphone, a set of headphones, and a wireless LAN. It also has a mechanism that tracks arm and head motion by using several types of sensors that detect the motion data of an operator, along with a simple force reflection mechanism that uses vibration motors at appropriate joints. For remote tasks, we use intelligent self-sensory feedback and autonomous behavior, such as automatic grasping and obstacle avoidance in a slave robot, and we feed the information back to an operator through a multimodal communication channel. Through this teleoperation system, we successfully demonstrate several teleoperative tasks, including object manipulation and mobile platform control of a humanoid robot.

  • Adaptive Plastic-Landmine Visualizing Radar System: Effects of Aperture Synthesis and Feature-Vector Dimension Reduction

    Takahiro HARA  Akira HIROSE  

     
    PAPER-Imaging
    Vol: E88-C No:12, Page(s): 2282-2288

    We propose an adaptive plastic-landmine visualizing radar system employing a complex-valued self-organizing map (CSOM) that deals with a feature vector based on the variance of spatial- and frequency-domain inner products (V-CSOM), in combination with aperture synthesis. The dimension of the new feature vector is greatly reduced in comparison with that of our previous texture feature-vector CSOM (T-CSOM). In experiments, we first examine the effect of aperture synthesis on the complex-amplitude texture in the space and frequency domains. We also compare the calculation cost and the visualization performance of the V- and T-CSOMs. Then we discuss the merits and drawbacks of the two types of CSOMs with/without aperture synthesis in the adaptive plastic-landmine visualization task. The V-CSOM with aperture synthesis is found to be promising for realizing a useful plastic-landmine detection system.
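
    As a rough illustration of the classification machinery (not the authors' V-CSOM or its radar features), the sketch below trains a tiny complex-valued self-organizing map on synthetic low-dimensional complex feature vectors and assigns each vector to its best-matching unit.

```python
# Hedged sketch: a toy complex-valued self-organizing map (CSOM). The data are
# synthetic complex feature vectors standing in for the paper's variance-based
# features; everything here is our own illustration.
import numpy as np

def train_csom(X, n_units=4, epochs=50, lr0=0.5, sigma0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    W = X[rng.choice(len(X), n_units)].copy()                     # complex weights on a 1-D map
    pos = np.arange(n_units)
    for t in range(epochs):
        lr = lr0 * (1 - t / epochs)
        sigma = sigma0 * (1 - t / epochs) + 0.5
        for x in X[rng.permutation(len(X))]:
            bmu = int(np.argmin(np.linalg.norm(W - x, axis=1)))   # best-matching unit
            h = np.exp(-((pos - bmu) ** 2) / (2 * sigma ** 2))    # neighborhood function
            W += lr * h[:, None] * (x - W)                        # complex-valued update
    return W

rng = np.random.default_rng(1)
a = rng.standard_normal((40, 3)) + 1j * rng.standard_normal((40, 3))            # class 1
b = 3 + 3j + rng.standard_normal((40, 3)) + 1j * rng.standard_normal((40, 3))   # class 2
X = np.concatenate([a, b])
W = train_csom(X)
labels = np.argmin(np.linalg.norm(X[:, None, :] - W[None], axis=2), axis=1)
print("units used by class 1:", set(labels[:40]), " class 2:", set(labels[40:]))
```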

  • Circuit Performance Prediction Considering Core Utilization with Interconnect Length Distribution Model

    Hidenari NAKASHIMA  Junpei INOUE  Kenichi OKADA  Kazuya MASU  

     
    PAPER-Prediction and Analysis
    Vol: E88-A No:12, Page(s): 3358-3366

    The Interconnect Length Distribution (ILD) represents the correlation between the number of interconnects and their length. The ILD can predict power consumption, clock frequency, chip size, etc. High core utilization and small circuit area have been reported to improve chip performance. We propose an ILD model to predict the correlation between core utilization and chip performance. The proposed model predicts the influence of interconnect length and interconnect density on circuit performance. As core utilization increases, the performance of small, simple circuits improves. In large, complex circuits, decreasing the wire coupling capacitance is more important than decreasing the total interconnect length for improving chip performance. The proposed ILD model expresses the actual ILD more accurately than conventional models.

  • Autonomous Semantic Grid: Principles of Autonomous Decentralized Systems for Grid Computing

    Muhammad Omair SHAFIQ  Hafiz Farooq AHMAD  Hiroki SUGURI  Arshad ALI  

     
    PAPER
    Vol: E88-D No:12, Page(s): 2640-2650

    Grid computing is an open, heterogeneous and highly dynamic environment based on the principles of service-oriented computing. It focuses on a basic infrastructure for coordinated resource sharing among virtual organizations to achieve high performance and availability. However, use of the existing Grid computing environment is quite complex and requires a great deal of human intervention. To avoid this intervention, enhancements are required to bring autonomy and semantics to the existing Grid infrastructure. Semantics would act as the glue for autonomy in the process of efficient resource discovery and utilization. Several ontologies and ontology languages have been proposed in this regard, which not only have shortcomings but also impose a degree of overhead on the Grid environment. On the other hand, agents are autonomous problem-solving entities that can negotiate semantically for interoperation with each other in dynamic environments. Inspired by the concept of Autonomous Decentralized Systems, we propose that the above-mentioned goals can be achieved by integrating FIPA Multi Agent Systems with the Grid Service Architecture, thereby laying the foundation for an Autonomous Semantic Grid. The Autonomous Semantic Grid system architecture aims to provide an improved infrastructure by bringing autonomy, semantic interoperability, and decentralization to Grid computing for emerging applications. This paper then presents implementation details of the first milestone toward realizing the Autonomous Semantic Grid: a middleware, namely the AgentWeb Gateway, for integration of Multi Agent Systems and the Grid Service Architecture. Evaluation of the system has also been performed over a number of application scenarios.

  • Autonomous Decentralized Control in Ubiquitous Computing

    Akira YAMAGUCHI  Masayoshi OHASHI  Hitomi MURAKAMI  

     
    INVITED PAPER
    Vol: E88-B No:12, Page(s): 4421-4426

    Ubiquitous computing (ubicomp) is a computing paradigm which utilizes human-centric systems and applications. With the widespread use of information appliances, robots and sensors, the ubicomp paradigm is expected to become a reality in the near future. Because close interaction between a person and the computing environment is required for ubicomp, autonomous decentralized control will play an important role. In this paper, we discuss autonomous decentralized control in ubicomp from the viewpoint of typical ubicomp applications, smart environments and context-awareness.

  • Exact Minimization of FPRMs for Incompletely Specified Functions by Using MTBDDs

    Debatosh DEBNATH  Tsutomu SASAO  

     
    PAPER-Logic Synthesis
    Vol: E88-A No:12, Page(s): 3332-3341

    Fixed-polarity Reed-Muller expressions (FPRMs) exhibit several useful properties that make them suitable for many practical applications. This paper presents an exact minimization algorithm for FPRMs for incompletely specified functions. For an n-variable function with α unspecified minterms there are 2^(n+α) distinct FPRMs, and a minimum FPRM is one with the fewest product terms. To find a minimum FPRM the algorithm must determine an assignment of the incompletely specified minterms. This is accomplished by using the concept of integer-valued functions in conjunction with an extended truth vector and a weight vector. The vectors help formulate the problem as an assignment of the variables of integer-valued functions, which are then efficiently manipulated by using multi-terminal binary decision diagrams to find an assignment of the unspecified minterms. The effectiveness of the algorithm is demonstrated through experimental results for code converters, adders, and randomly generated functions.
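
    For intuition, the brute-force sketch below (our illustration, not the MTBDD-based algorithm) enumerates all 2^n polarities of a small completely specified function and counts the product terms of each fixed-polarity Reed-Muller expression; the paper's contribution is to additionally assign the unspecified minterms optimally and to perform the search efficiently with MTBDDs.

```python
# Hedged brute-force illustration (completely specified case only; not the
# paper's MTBDD algorithm): count FPRM product terms for every polarity.
def rm_spectrum(tv):
    """Reed-Muller (ANF) coefficients of a truth vector of length 2^n."""
    c = list(tv)
    n = len(c).bit_length() - 1
    for i in range(n):                      # fast binary Moebius transform
        step = 1 << i
        for j in range(len(c)):
            if j & step:
                c[j] ^= c[j ^ step]
    return c

def fprm_terms(tv, polarity, n):
    """Product terms when variable i appears complemented iff bit i of polarity is 1."""
    permuted = [tv[idx ^ polarity] for idx in range(1 << n)]  # complementing inputs permutes the truth vector
    return sum(rm_spectrum(permuted))

tv, n = [0, 1, 1, 0, 1, 0, 0, 1], 3         # 3-variable parity x0 xor x1 xor x2
counts = {pol: fprm_terms(tv, pol, n) for pol in range(1 << n)}
best = min(counts, key=counts.get)
print(counts, "-> minimum at polarity", format(best, "03b"))
```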

  • A Design Algorithm for Sequential Circuits Using LUT Rings

    Hiroki NAKAHARA  Tsutomu SASAO  Munehiro MATSUURA  

     
    PAPER-Logic Synthesis
    Vol: E88-A No:12, Page(s): 3342-3350

    This paper presents a design method for sequential circuits using a Look-Up Table (LUT) ring. The method consists of two steps: the first step partitions the outputs into groups; the second step realizes them by LUT cascades and allocates the cells of the cascades into the memory. The system automatically finds a fast implementation by maximally utilizing the available memory. With the presented algorithm, we can easily design sequential circuits satisfying given specifications. The paper also compares the LUT ring with a logic simulator for realizing sequential circuits: the LUT ring is 25 to 237 times faster than a logic simulator that uses the same amount of memory.

  • Analysis of Scattering Problem by an Imperfection of Finite Extent in a Plane Surface

    Masaji TOMITA  Tomio SAKASHITA  Yoshio KARASAWA  

     
    PAPER-EM Analysis
    Vol: E88-C No:12, Page(s): 2177-2191

    In this paper, a new method based on the mode-matching method in the least-squares sense is presented for analyzing the two-dimensional scattering problem of a TE plane wave incident on an infinite plane surface with an arbitrary imperfection of finite extent. The semi-infinite upper and lower regions of that surface are a vacuum and a perfect conductor, respectively; the paper therefore addresses the Dirichlet boundary value problem. In this method, the approximate scattered wave is represented by an integral transform with a band-limited plane-wave spectrum. The boundary values of those scattered waves are described by the abscissa z alone, and their Fourier spectra are obtained by applying the ordinary Fourier transform. New approximate functions are then formed by the inverse Fourier transform of those band-limited spectra. Consequently, Fredholm integral equations of the second kind for the spectra of the approximate scattered-wave functions are derived by matching those new functions to the exact boundary value in the least-squares sense. It is then shown analytically and numerically that the sequence of boundary values of the approximate wave functions converges, in the least-squares sense, to the exact boundary value, namely the boundary value of the exact scattered wave, provided the profile of the imperfection is described by at least a continuous and piecewise-smooth function. Moreover, it is shown that this sequence converges uniformly to the exact boundary value on any finite region of the boundary, and that the sequence of approximate wave functions converges uniformly in the wider sense to the exact scattered field in any subdomain of the upper vacuum region, whenever the uniqueness of the solution of the Helmholtz equation holds for the profile of the imperfection parts of the boundary.
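
    For readers unfamiliar with the approach, a generic band-limited plane-wave (angular-spectrum) representation of the kind referred to above can be written as follows; the notation (spectrum A_N, band limit k_N, surface profile f, incident field ψ_inc) is ours, not the paper's.

```latex
% Generic band-limited plane-wave representation of the approximate scattered
% field above a perfectly conducting surface z = f(x); notation is ours.
\[
  \psi_s^{(N)}(x,z) = \int_{-k_N}^{k_N} A_N(k)\,
      e^{\,i k x + i \sqrt{k_0^{2}-k^{2}}\,z}\, dk ,
  \qquad
  \psi_{\mathrm{inc}}\bigl(x,f(x)\bigr) + \psi_s^{(N)}\bigl(x,f(x)\bigr) \approx 0
  \ \text{(Dirichlet condition, matched in the least-squares sense)} .
\]
```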

  • A Low Power-Consuming Embedded System Design by Reducing Memory Access Frequencies

    Ching-Wen CHEN  Chih-Hung CHANG  Chang-Jung KU  

     
    PAPER-Computer Systems
    Vol: E88-D No:12, Page(s): 2748-2756

    When an embedded system is designed, system performance and power consumption have to be considered carefully. In this paper, we focus on reducing the number of memory accesses in embedded systems to improve performance and save power. We exploit the locality of running programs to reduce the number of memory accesses in order to save power and maximize the performance of an embedded system. We use shorter code words to encode the instructions that are executed most frequently and then pack consecutive code words into a pseudo instruction. Once the decompression engine fetches one pseudo instruction, it can extract multiple instructions. Therefore, the number of memory accesses can be reduced efficiently because of spatial locality. However, the number of most frequently executed instructions that can be encoded differs with the program size of different applications; that is, the number of memory accesses increases when fewer encoded instructions fit in a pseudo instruction. This degrades system performance and increases power consumption. To solve this problem, we also propose the use of multiple reference tables. Multiple reference tables allow the most frequently executed instructions to have shorter encoded code words, thereby improving the performance and power of an embedded system. Our simulation results show that our method reduces the memory access frequency by about 60% when a reference table with 256 instructions is used. In addition, when two reference tables that contain 256 instructions each are used, the memory access ratio is 10.69% less than that resulting from one reference table with 512 instructions.
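
    The sketch below gives a simplified, hypothetical version of the idea (the code widths, table size, and packing rule are illustrative, not the paper's format): the most frequent instruction words get short 8-bit codes from a reference table, four codes are packed into one 32-bit pseudo instruction, and each pseudo instruction costs a single memory fetch.

```python
# Hedged toy model of table-based instruction compression; the code widths,
# packing rule, and fetch accounting are our assumptions, not the paper's format.
from collections import Counter

WORD_BITS, CODE_BITS = 32, 8
CODES_PER_FETCH = WORD_BITS // CODE_BITS           # 4 coded instructions per fetch

def build_table(trace, size=256):
    return [w for w, _ in Counter(trace).most_common(size)]

def memory_fetches(trace, table):
    index = set(table)
    fetches, pending = 0, 0                         # pending = coded instructions not yet packed
    for w in trace:
        if w in index:
            pending += 1
            if pending == CODES_PER_FETCH:          # full pseudo instruction -> one fetch
                fetches, pending = fetches + 1, 0
        else:                                       # uncompressed word: flush partial pack, fetch it alone
            fetches += (1 if pending else 0) + 1
            pending = 0
    return fetches + (1 if pending else 0)

trace = [0x8FBF0010, 0x27BD0020, 0x03E00008, 0x00000000] * 250 + [0xDEADBEEF] * 10
table = build_table(trace)
print("fetches without compression:", len(trace))
print("fetches with a 256-entry reference table:", memory_fetches(trace, table))
```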

  • Trace-Driven Performance Simulation Modeling for Fast Evaluation of Multimedia Processor by Simulation Reuse

    Ho Young KIM  Tag Gon KIM  

     
    PAPER-Simulation and Verification
    Vol: E88-A No:12, Page(s): 3306-3314

    A method for fast yet accurate performance evaluation of processor architectures is highly desirable in modern processor design. This paper proposes such a method, which can measure the cycle counts and power consumption of pipelined processors. The method first develops a trace-driven performance simulation model and then employs simulation reuse when simulating the model. Trace-driven performance modeling provides accuracy: the performance simulation uses the same execution traces as those constructed in simulation for functional verification. Fast performance simulation is achieved by evaluating the performance of each instruction in the traces without evaluating the instruction itself. Simulation reuse provides further speedup by eliminating the evaluation at the current state when it is identical to that at a previous state. The reuse approach is based on the property that application programs, especially multimedia applications, generally contain many iterative loops. A performance simulator for a pipeline architecture based on the proposed method has been developed, achieving greater speedup than other approaches to performance evaluation.
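
    The reuse idea can be pictured as memoization over recurring pipeline states, as in this toy sketch (our model, not the paper's simulator): the cycle cost of an instruction is assumed to depend only on a small window of preceding instructions, so identical windows in a loop-heavy trace are evaluated once and reused.

```python
# Hedged toy model of simulation reuse: memoize the cycle cost of identical
# instruction windows.  The pipeline model and latencies are invented for
# illustration only.
from functools import lru_cache

LATENCY = {"alu": 1, "load": 2, "mul": 3}

@lru_cache(maxsize=None)
def window_cycles(window):
    """Cycles for the last instruction of `window`, with a toy load-use stall."""
    op = window[-1]
    stall = 1 if len(window) > 1 and window[-2] == "load" and op == "alu" else 0
    return LATENCY[op] + stall

def simulate(trace, depth=2):
    total = 0
    for i in range(len(trace)):
        window = tuple(trace[max(0, i - depth + 1): i + 1])
        total += window_cycles(window)              # cached after the first occurrence
    return total, window_cycles.cache_info()

trace = ["load", "alu", "mul", "alu"] * 1000        # loop-heavy, multimedia-style trace
cycles, cache = simulate(trace)
print("cycles:", cycles, "| distinct states evaluated:", cache.misses, "| reused:", cache.hits)
```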

  • Classification of Sequential Circuits Based on τk Notation and Its Applications

    Chia Yee OOI  Thomas CLOUQUEUR  Hideo FUJIWARA  

     
    PAPER-VLSI Systems
    Vol: E88-D No:12, Page(s): 2738-2747

    This paper introduces τk notation for assessing the test generation complexity of classes of sequential circuits. Using τk notation, we reconsider and restate the time complexity of test generation for existing classes of acyclic sequential circuits. We also introduce a new DFT method called the feedback shift register (FSR) scan design technique, which extends the scan design technique; for a given sequential circuit, the corresponding FSR scan-designed circuit always has area overhead and test application time equal to or lower than those of the corresponding scan-designed circuit. Furthermore, we identify some new classes of sequential circuits, containing some cyclic sequential circuits, that are τ-equivalent and τ2-bounded. These classes are the l-length-bounded testable circuits, l-length-bounded validity-identifiable circuits, t-time-bounded testable circuits, and t-time-bounded validity-identifiable circuits. In addition, we provide two examples of circuits belonging to these classes, namely counter-cycle finite state machine realizations and state-shiftable finite state machine realizations. Instead of using a DFT method, a given sequential circuit described at the finite state machine (FSM) level can be synthesized, using another test methodology called synthesis for testability (SFT), into a circuit that belongs to one of the easily testable classes of cyclic sequential circuits.

  • A Graph Based Soft Module Handling in Floorplan

    Hiroaki ITOGA  Chikaaki KODAMA  Kunihiro FUJIYOSHI  

     
    PAPER-Floorplan and Placement
    Vol: E88-A No:12, Page(s): 3390-3397

    In VLSI layout design, a floorplan is often obtained to define a rough arrangement of modules in the early design stage. In that stage, the aspect ratio of each soft module is also determined. The aspect ratio can be changed within a designated range while keeping the area of each module constant. In this paper, in order to determine the aspect ratios, we propose a graph-based one-dimensional compaction method that determines the aspect ratios quickly under the constraint that the topology of the floorplan must not be changed. The proposed method is divided into two steps: (1) selection of a minimal set of soft modules whose aspect ratios are adjusted, and (2) decision on the aspect ratios. Step (1) is formulated as the minimal cut problem in graph theory; we solve it by transforming it into the shortest path problem. Step (2) consists of two operations: one determines the increment limit in height or width of each soft module, and the other determines the aspect ratio of each soft module by the Newton-Raphson method. Experimental comparisons show the effectiveness of the proposed method.
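
    Step (2) can be illustrated with a toy use of the Newton-Raphson method (our configuration, not the paper's formulation): pick the width of one soft module of fixed area so that the heights of two adjacent columns match inside a fixed chip width.

```python
# Hedged toy example: Newton-Raphson chooses the width w of soft module C
# (fixed area) placed beside a stack of modules A and B, so that both columns
# reach the same height within a chip of width W.  The data are illustrative.
def newton(f, df, x0, tol=1e-9, iters=50):
    x = x0
    for _ in range(iters):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

area_a, area_b, area_c, W = 2.0, 3.0, 4.0, 4.0
f  = lambda w: area_c / w - (area_a + area_b) / (W - w)         # height mismatch between columns
df = lambda w: -area_c / w**2 - (area_a + area_b) / (W - w)**2  # its derivative in w
w = newton(f, df, x0=W / 2)
h = area_c / w
print(f"width of C: {w:.4f}, common height: {h:.4f}, aspect ratio of C: {w / h:.4f}")
```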

  • Double Depth First Search Based Parametric Analysis for Parametric Time-Interval Automata

    Tadaaki TANIMOTO  Akio NAKATA  Hideaki HASHIMOTO  Teruo HIGASHINO  

     
    PAPER
    Vol: E88-A No:11, Page(s): 3007-3021

    In this paper, we propose a parametric model checking algorithm for a subclass of Timed Automata called Parametric Time-Interval Automata (PTIA). In a PTIA, we can specify upper and lower bounds on the execution time (time interval) of each transition using parameter variables. The proposed algorithm takes two inputs: a model described as a PTIA, and a property described as a PTIA accepting either all invalid infinite/finite runs (called a never claim) or the valid finite runs of the model. In the proposed algorithm, we first determinize and complement the given property PTIA if it accepts valid finite runs. Second, we accelerate the given model, that is, we regard all actions that do not appear in the given property PTIA as invisible actions and eliminate them from the model while preserving the set of visible traces and their timings. Third, we construct a parallel composition of the model and the property PTIAs, which accepts all invalid runs of the model. Finally, we apply an extension of Double Depth First Search (DDFS), which is used in the automata-theoretic approach to Linear-time Temporal Logic (LTL) model checking, to derive the weakest parameter condition under which the given model never executes the invalid runs specified by the given property.
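
    The underlying double depth-first search is the classical nested DFS from automata-theoretic LTL model checking; the paper extends it to gather parameter constraints. A plain, non-parametric sketch on a toy graph:

```python
# Hedged sketch: classical double (nested) DFS for finding an accepting cycle in
# a Buechi automaton -- the non-parametric core that the paper extends.  The
# graph, initial state, and accepting set below are toy data.
def double_dfs(succ, init, accepting):
    visited, flagged = set(), set()

    def inner(seed, v):                    # second DFS: is `seed` reachable again?
        for w in succ.get(v, []):
            if w == seed:
                return True
            if w not in flagged:
                flagged.add(w)
                if inner(seed, w):
                    return True
        return False

    def outer(v):                          # first DFS, accepting states handled in postorder
        visited.add(v)
        for w in succ.get(v, []):
            if w not in visited and outer(w):
                return True
        return v in accepting and inner(v, v)

    return outer(init)                     # True => an accepted (invalid) run exists

succ = {"s0": ["s1"], "s1": ["s2"], "s2": ["s1", "s3"], "s3": []}
print(double_dfs(succ, "s0", accepting={"s1"}))   # True: accepting cycle s1 -> s2 -> s1
```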

  • A New Iris Recognition Method Using Independent Component Analysis

    Seung-In NOH  Kwanghyuk BAE  Kang Ryoung PARK  Jaihie KIM  

     
    PAPER-Image Recognition, Computer Vision
    Vol: E88-D No:11, Page(s): 2573-2581

    In a conventional method based on quadrature 2D Gabor wavelets for extracting iris features, iris recognition is performed with a 256-byte iris code, computed by applying the Gabor wavelets to a given area of the iris. However, there is code redundancy because the iris code is generated by basis functions that do not consider the characteristics of the iris texture, so the size of the iris code is increased unnecessarily. In this paper we propose a new feature extraction algorithm based on independent component analysis (ICA) for a compact iris code. We implemented ICA to generate optimal basis functions that can represent iris signals efficiently. In practice, the coefficients of the ICA expansions are used as feature vectors. The iris feature vectors are then encoded into the iris code for storing and comparing individuals' iris patterns. Additionally, we introduce a method to refine the ICA basis functions to improve the recognition performance. Experimental results show that our proposed method achieves an equal error rate similar to that of a conventional method based on the Gabor wavelets, while its iris code is five times smaller.
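
    The feature-extraction step can be imitated with off-the-shelf ICA, as in the sketch below (our illustration: the patch size, number of basis functions, and binarization rule are assumptions, and the data are synthetic rather than iris images).

```python
# Hedged sketch of ICA-based feature extraction: learn basis functions with
# scikit-learn's FastICA, take expansion coefficients as the feature vector,
# and binarize their signs into a compact code compared by Hamming distance.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
sources = rng.laplace(size=(2000, 16))               # non-Gaussian latent components
mixing = rng.standard_normal((16, 64))
patches = sources @ mixing                           # stand-in for 8x8 normalized iris patches

ica = FastICA(n_components=16, random_state=0, max_iter=1000)
ica.fit(patches)                                     # rows of ica.components_ ~ learned basis functions

def iris_code(patch_rows):
    coeffs = ica.transform(patch_rows)               # ICA expansion coefficients (feature vectors)
    return (coeffs > 0).astype(np.uint8).ravel()     # one bit per coefficient -> compact code

def hamming(a, b):
    return np.count_nonzero(a != b) / a.size

enrolled = iris_code(patches[:32])
probe = iris_code(patches[:32] + 0.05 * rng.standard_normal((32, 64)))
print("normalized Hamming distance:", hamming(enrolled, probe))
```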

  • Recursive Channel Estimation Based on Finite Parameter Model Using Reduced-Complexity Maximum Likelihood Equalizer for OFDM over Doubly-Selective Channels

    Kok Ann Donny TEO  Shuichi OHNO  Takao HINAMOTO  

     
    PAPER
    Vol: E88-A No:11, Page(s): 3076-3084

    To take intercarrier interference (ICI) attributed to time variations of the channel into consideration, the time- and frequency-selective (doubly-selective) channel is parameterized by a finite parameter model. By capitalizing on the finite parameter model to approximate the doubly-selective channel, a Kalman filter is developed for channel estimation. The ICI suppressing, reduced-complexity Viterbi-type Maximum Likelihood (RML) equalizer is incorporated into the Kalman filter for recursive channel tracking and equalization to improve the system performance. An enhancement in the channel tracking ability is validated by theoretical analysis, and a significant improvement in BER performance using the channel estimates obtained by the recursive channel estimation method is verified by Monte-Carlo simulations.
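
    A generic version of the recursive estimator is sketched below (our simplification, not the paper's model): the basis-expansion coefficients of the channel are tracked by a Kalman filter under an assumed first-order autoregressive state model and scalar pilot observations.

```python
# Hedged sketch: Kalman tracking of basis-expansion-model (BEM) channel
# coefficients.  The AR(1) state model, noise levels, and regressors are our
# assumptions for illustration, not the paper's parameterization.
import numpy as np

def kalman_track(y, S, a=0.999, q=1e-4, r=1e-2):
    n = S.shape[1]
    h = np.zeros(n)                                  # estimated BEM coefficients
    P = np.eye(n)                                    # estimate covariance
    F, Q = a * np.eye(n), q * np.eye(n)
    estimates = []
    for k in range(len(y)):
        h, P = F @ h, F @ P @ F.T + Q                # predict
        s = S[k]
        K = P @ s / (s @ P @ s + r)                  # gain for the scalar observation y_k = s_k^T h_k + v_k
        h = h + K * (y[k] - s @ h)                   # correct with the innovation
        P = P - np.outer(K, s) @ P
        estimates.append(h.copy())
    return np.array(estimates)

rng = np.random.default_rng(0)
n, T = 3, 400
true_h = 1.0 + np.cumsum(0.01 * rng.standard_normal((T, n)), axis=0)   # slowly varying coefficients
S = rng.standard_normal((T, n))                                        # known pilot/basis regressors
y = np.einsum("ij,ij->i", S, true_h) + 0.1 * rng.standard_normal(T)
est = kalman_track(y, S)
print("final estimate:", np.round(est[-1], 3), "true:", np.round(true_h[-1], 3))
```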
