Keyword Search Result

[Keyword] SI (16314 hits)

Results 13521-13540 of 16314 hits

  • Module Selection Using Manufacturing Information

    Hiroyuki TOMIYAMA  Hiroto YASUURA  

    PAPER-High-level Synthesis
    Vol: E81-A No:12  Page(s): 2576-2584

    Since manufacturing processes inherently fluctuate, LSI chips produced from the same design have different propagation delays. However, the delay differences caused by process fluctuation have rarely been considered in existing high-level synthesis systems. This paper presents a new approach to module selection in high-level synthesis that exploits the differences in functional unit delays. First, a module library model that captures the probabilistic nature of functional unit delays is presented. Then, we formulate a module selection problem and propose an algorithm that minimizes the cost per faultless chip. Experimental results demonstrate that the proposed algorithm finds optimal module selections that would not have been explored without manufacturing information.
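
    A minimal sketch of the selection criterion described above: if each candidate module's delay is modeled probabilistically, the cost per faultless chip is module cost divided by yield. The library, the Gaussian delay model, and all numbers below are invented for illustration; the paper's library model and algorithm differ in detail.

        import itertools, math

        # Hypothetical module library: per operation, candidate modules as
        # (area_cost, mean_delay_ns, delay_stddev_ns). All values invented.
        LIBRARY = {
            "add": [(10, 4.0, 0.3), (18, 2.5, 0.4)],
            "mul": [(40, 8.0, 0.6), (65, 5.0, 0.8)],
        }
        CLOCK_NS = 10.0

        def yield_prob(selection):
            """P(every selected module meets the clock), assuming independent
            Gaussian delays -- a simplification of a probabilistic library model."""
            p = 1.0
            for _cost, mu, sigma in selection:
                p *= 0.5 * (1.0 + math.erf((CLOCK_NS - mu) / (sigma * math.sqrt(2))))
            return p

        best = None
        for sel in itertools.product(*LIBRARY.values()):
            cost = sum(c for c, _, _ in sel)
            y = yield_prob(sel)
            score = cost / y if y > 0 else float("inf")  # cost per faultless chip
            if best is None or score < best[0]:
                best = (score, sel)
        print(best)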

  • Excitation of Magnetostatic Surface Wave by Coplanar Waveguide Transducers

    Yoshiaki ANDO  Ning GUAN  Ken'ichiro YASHIRO  Sumio OHKAWA  

    PAPER-Electromagnetic Theory
    Vol: E81-C No:12  Page(s): 1942-1947

    The excitation of magnetostatic surface waves by coplanar waveguide transducers is analyzed using the integral kernel expansion method. The Fourier integral for the current density is derived in terms of an unknown normal component of the magnetic flux density on the slot region of a coplanar waveguide. The integral kernel is expanded into a series of Legendre polynomials, and applying Galerkin's method to the unknown field then reduces the Fourier integral to a system of linear equations for the unknown coefficients. In this process, the edge conditions, which show nonreciprocal characteristics depending on frequency, must be taken into account. The present method shows excellent agreement with experiments.
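
    As a generic illustration of the kernel-expansion step (not the magnetostatic kernel itself, which is omitted here), the sketch below expands the unknown of an integral equation in Legendre polynomials and applies Galerkin projection to obtain a linear system. The kernel and excitation are toy placeholders.

        import numpy as np
        from numpy.polynomial.legendre import legval, leggauss

        # Generic Galerkin sketch: solve  int K(x, x') u(x') dx' = f(x)  on [-1, 1]
        # by expanding u in Legendre polynomials and projecting onto the same basis.
        N = 8
        nodes, weights = leggauss(32)                 # Gauss-Legendre quadrature
        K = lambda x, xp: np.exp(-np.abs(x - xp))     # placeholder kernel
        f = lambda x: np.ones_like(x)                 # placeholder excitation

        def P(n, x):                                  # n-th Legendre polynomial
            return legval(x, [0] * n + [1])

        Kmat = K(nodes[:, None], nodes[None, :])
        A = np.zeros((N, N))
        b = np.zeros(N)
        for m in range(N):
            b[m] = np.sum(weights * P(m, nodes) * f(nodes))
            for n in range(N):
                A[m, n] = weights @ (P(m, nodes)[:, None] * Kmat * P(n, nodes)[None, :]) @ weights
        u_coeffs = np.linalg.solve(A, b)   # coefficients of u in the Legendre basis
        print(u_coeffs)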

  • Patterned Versus Conventional Object-Oriented Analysis Methods: A Group Project Experiment

    Shuichiro YAMAMOTO  Hiroaki KUROKI  

    PAPER-Experiment
    Vol: E81-D No:12  Page(s): 1458-1465

    Object-oriented analysis methods can be grouped into data-driven and behavior-driven approaches. With data-driven approaches, object models are developed based on a list of objects and their inter-relationships, which describe a static view of the real world. With behavior-driven approaches, a system usage scenario is analyzed before the object models are developed. Although qualitative comparisons of these two types of methods have been made, no statistical study has evaluated them through controlled experiments. This paper proposes the patterned object-oriented method, POOM, which is a behavior-driven approach, and compares it to OMT, a data-driven approach, using small-team experiments. The effectiveness of POOM is shown in terms of productivity and homogeneity.

  • Restructuring Logic Representations with Simple Disjunctive Decompositions

    Hiroshi SAWADA  Shigeru YAMASHITA  Akira NAGOYA  

    PAPER-Logic Synthesis
    Vol: E81-A No:12  Page(s): 2538-2544

    Simple disjunctive decomposition is a special case of logic function decomposition in which the variables are divided into two disjoint sets and only one new variable is introduced. It offers an optimal structure for a single-output function. This paper presents two techniques that enable us to apply simple disjunctive decompositions with little overhead. First, we propose a method to find simple disjunctive decomposition forms efficiently by limiting the decomposition types searched for to two: decompositions where the bound set is a set of symmetric variables, and decompositions where the output function is a 2-input function. Second, we propose an algorithm that constructs a new logic representation for a simple disjunctive decomposition simply by assigning constant values to variables in the original representation. The algorithm lets us apply the decomposition while keeping the good structure of the original representation. We performed experiments on decomposing functions and confirmed the efficiency of our method. We also performed experiments on restructuring fanout-free cones of multi-level logic circuits and obtained better results than when they are not restructured.
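
    For background, a simple disjunctive decomposition f(X) = h(g(X1), X2) exists for a bound set X1 exactly when the decomposition chart has column multiplicity at most two (Ashenhurst's condition). The brute-force truth-table check below illustrates that standard test; it is not the paper's efficient algorithm, and the example function is invented.

        from itertools import product

        def has_simple_disjunctive_decomp(f, n, bound):
            """Ashenhurst check: column multiplicity <= 2 for the bound set."""
            free = [i for i in range(n) if i not in bound]
            columns = set()
            for b in product((0, 1), repeat=len(bound)):
                col = []
                for r in product((0, 1), repeat=len(free)):
                    x = [0] * n
                    for i, v in zip(bound, b): x[i] = v
                    for i, v in zip(free, r):  x[i] = v
                    col.append(f(x))
                columns.add(tuple(col))
            return len(columns) <= 2

        # Example: f = (x0 XOR x1) AND x2 decomposes with bound set {x0, x1}.
        f = lambda x: (x[0] ^ x[1]) & x[2]
        print(has_simple_disjunctive_decomp(f, 3, (0, 1)))   # True
        print(has_simple_disjunctive_decomp(f, 3, (0, 2)))   # False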

  • FDTD Implementation of Surface Impedance Boundary Condition for Dispersive Layer Backed by Perfect Conductor

    Yasuhiro NISHIOKA  Osamu MAESHIMA  Toru UNO  Saburo ADACHI  

    LETTER
    Vol: E81-C No:12  Page(s): 1902-1904

    In this paper, the surface impedance boundary condition (SIBC) for a dispersive lossy medium backed by a perfect conductor is implemented in the computation of electromagnetic (EM) scattering using the finite-difference time-domain (FDTD) method. The dispersion of the surface impedance is incorporated into the FDTD update equations using the piecewise linear recursive convolution (PLRC) approach. The validity of the proposed method is confirmed numerically.
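
    The key idea behind recursive convolution is that a convolution with an exponential kernel can be updated in O(1) per time step instead of re-summing the whole field history. The sketch below shows only the plain recursive-convolution accumulator for a single-pole kernel; PLRC adds a piecewise-linear interpolation of the field within each step, omitted here, and all constants are illustrative.

        import numpy as np

        # Kernel chi(t) = A * exp(-t / tau); psi_n = A*dt*E_n + decay * psi_(n-1)
        A, tau, dt = 1.0, 0.5e-9, 1e-11
        steps = 200
        decay = np.exp(-dt / tau)

        E = np.sin(2 * np.pi * 5e9 * dt * np.arange(steps))  # sample field history
        psi, conv = 0.0, np.zeros(steps)
        for n in range(steps):
            psi = A * dt * E[n] + decay * psi   # recursive update, no stored history
            conv[n] = psi

        # Sanity check against the direct O(N^2) convolution.
        direct = np.array([sum(A * dt * E[m] * decay ** (n - m) for m in range(n + 1))
                           for n in range(steps)])
        print(np.max(np.abs(conv - direct)))    # ~1e-15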

  • Performance Evaluation of Media Synchronization in PHS with the H.223 Annex Multiplexing Protocol

    Masami KATO  Yoshihito KAWAI  Shuji TASAKA  

    PAPER-QoS Control and Traffic Control
    Vol: E81-B No:12  Page(s): 2423-2431

    This paper studies the application of a media synchronization mechanism to the interleaved transmission of video and audio specified by the H.223 Annex in PHS. The media synchronization problem caused by network delay jitter in interleaved transmission has not been discussed in the Annex or in any related standard. The slide control scheme, which the authors have previously proposed, is applied to live media. We also propose a QOS control scheme that controls both the quality of media synchronization and the transmission delay. Through simulation, we confirm the effectiveness of the slide control scheme and the QOS control scheme in interleaved transmission.
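
    As a rough illustration of jitter-driven playout adjustment (a generic sketch, not the authors' slide control scheme), the loop below slides the playout schedule back whenever a media unit arrives after its deadline, trading extra delay for synchronization. All timings and the step size are invented.

        # Generic playout-deadline adjustment sketch (hypothetical values).
        def playout_deadline(base, media_time, slide):
            return base + media_time + slide

        slide, SLIDE_STEP, late = 0.0, 0.010, 0
        for arrival, media_time in [(0.105, 0.10), (0.230, 0.20), (0.345, 0.30)]:
            deadline = playout_deadline(0.0, media_time, slide)
            if arrival > deadline:
                slide += SLIDE_STEP   # slide the whole playout schedule back
                late += 1
        print(slide, late)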

  • FD-TD Analysis of Coaxial Probes Inserted into Rectangular Waveguides

    Atsushi SANADA  Minoru SANAGI  Shigeji NOGI  Kuniyoshi YAMANE  

    PAPER
    Vol: E81-C No:12  Page(s): 1821-1830

    A full-wave FD-TD analysis has been carried out for coaxial probes inserted into waveguides. Both single and symmetrically placed paired coaxial probe structures are discussed, and we reveal the relation between the equivalent circuit parameters and the structural parameters of the coaxial probes, including cases of large diameter and extension length, which is useful for practical waveguide circuit design. The equivalent circuit parameters calculated from the scattering parameters agree well with the corresponding measured data. From the calculated field in a waveguide, the field concentration at sharp edges of the probe tip or base, which ought to be taken into account in high-power application design, is also discussed. In addition, the amplitudes of higher-order modes excited in waveguides by single or paired coaxial probes have been calculated in order to estimate the range beyond which the higher-order modes decay sufficiently. This estimation is necessary for simple and easy probe design using circuit theory.
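
    A small sketch of the standard first step in extracting equivalent-circuit parameters from computed scattering parameters: convert the reflection coefficient into an input impedance, whose real and imaginary parts are then fitted to the circuit model. The S11 samples below are placeholders, not values from the paper.

        import numpy as np

        Z0 = 50.0                                   # reference impedance, ohms
        s11 = np.array([0.30 + 0.45j, 0.25 + 0.50j, 0.20 + 0.55j])  # toy samples

        z_in = Z0 * (1 + s11) / (1 - s11)           # standard S11 -> Z conversion
        R, X = z_in.real, z_in.imag                 # resistance / reactance estimates
        print(R, X)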

  • A Support Tool for Specifying Requirements Using Structures of Documents

    Tomofumi UETAKE  Morio NAGATA  

    PAPER-Application
    Vol: E81-D No:12  Page(s): 1429-1438

    The software requirements specification process consists of three steps: requirements capture and analysis, requirements definition and specification, and requirements validation. At the beginning of the second step, which this paper focuses on, there are several types of massive documents generated in the first step. Since the developers and the clients/users of the new software system may not share common knowledge of the field the system deals with, it is difficult for the developers to produce a correct requirements specification from these documents, and there has been little research on solving this problem. The authors have developed a support tool that helps produce a correct requirements specification by arranging and restructuring those documents into clearly understandable forms. In the second step, the developers must specify the functions of the new system and their constraints from those documents. Analyzing the developers' real activities in order to design the support tool, the authors model this step as the following four activities. To specify the functions of the new system, the developers must collect the sentences, scattered throughout those documents, that may suggest functions. To define the details of each function, the developers must gather the paragraphs that describe the function. To verify the correctness of each function, the developers must survey all related documents. To perform the above activities successfully, the developers must manage the various versions of those documents correctly. Corresponding to these four activities, the authors propose effective ways to support the developers by arranging those documents. This paper presents algorithms based on this model that use the structures of the documents and keywords that may suggest functions or constraints. To examine the feasibility of the proposal, the authors implemented a prototype tool, which extracts the complete information scattered throughout those documents. The effectiveness of the proposal is demonstrated by experiments.
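
    The first activity (collecting scattered sentences that may suggest functions) can be pictured as a keyword scan over the captured documents. This is a hypothetical miniature, not the authors' algorithm; the keyword list and the documents are invented.

        import re

        # Toy keyword scan: collect sentences that may suggest functions.
        FUNCTION_KEYWORDS = re.compile(r"\b(shall|must|should|provides?|supports?)\b", re.I)

        def collect_candidate_sentences(documents):
            hits = []
            for doc_id, text in documents.items():
                for sentence in re.split(r"(?<=[.!?])\s+", text):
                    if FUNCTION_KEYWORDS.search(sentence):
                        hits.append((doc_id, sentence.strip()))
            return hits

        docs = {"minutes-03": "The system shall log every login. Lunch was good.",
                "memo-11": "It should support CSV export."}
        print(collect_candidate_sentences(docs))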

  • A New Image Coding Technique with Low Entropy Using a Flexible Zerotree

    Sanghyun JOO  Hisakazu KIKUCHI  Shigenobu SASAKI  Jaeho SHIN  

    PAPER-Source Encoding
    Vol: E81-B No:12  Page(s): 2528-2535

    A zerotree image-coding scheme is introduced that effectively exploits the inter-scale self-similarities found in the octave decomposition of a wavelet transform. A zerotree is useful for efficiently coding wavelet coefficients; its efficiency was proven by Shapiro's EZW. In the EZW coder, wavelet coefficients are symbolized and then entropy-coded for further compression. In this paper, we analyze the symbols produced by the EZW coder and discuss the entropy per symbol. We modify the symbol-stream generation procedure to produce lower entropy. First, we modify the fixed parent-child relation used in the EZW coder to raise the probability that a significant parent has significant children. This modified relation is then flexibly adapted again, based on the observation that a significant coefficient is likely to have other significant coefficients in its neighborhood. The three relations are compared in terms of the number of symbols they produce.
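
    For reference, the fixed parent-child relation in EZW-style zerotrees links a coefficient at one subband level to the 2x2 block at the same spatial location one level finer; the paper's point is to modify and then flexibly adapt this relation. A minimal index-arithmetic sketch:

        # Standard quadtree parent/child relation used in EZW-style zerotrees.
        def children(x, y, level_w, level_h):
            """Children of coefficient (x, y) one subband level finer."""
            return [(2 * x + dx, 2 * y + dy) for dx in (0, 1) for dy in (0, 1)
                    if 2 * x + dx < 2 * level_w and 2 * y + dy < 2 * level_h]

        def parent(x, y):
            return (x // 2, y // 2)

        print(children(3, 5, 8, 8))   # [(6, 10), (6, 11), (7, 10), (7, 11)]
        print(parent(6, 10))          # (3, 5)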

  • An Optimization Algorithm for High Performance ASIP Design with Considering the RAM and ROM Sizes

    Nguyen Ngoc BINH  Masaharu IMAI  Yoshinori TAKEUCHI  

    PAPER-Co-design
    Vol: E81-A No:12  Page(s): 2612-2620

    In designing ASIPs (Application Specific Integrated Processors), most previous work has focused on optimizing the CPU core and has paid little attention to optimizing the RAM and ROM sizes at the same time. This paper overcomes this limitation and proposes an optimization algorithm that finds the best ratio between the CPU core, RAM, and ROM of an ASIP chip so as to achieve the highest performance while satisfying design constraints on the chip area. The partitioning problem is formalized as a combinatorial optimization problem that partitions the operations into hardware and software so that the performance of the designed ASIP is maximized under a given chip area constraint, where the chip area includes the hardware cost of the register file for a given application program with an associated input data set. The optimization problem is parameterized so that it can be applied with different technologies for synthesizing CPU cores, RAMs, or ROMs. The experimental results show that the proposed algorithm is both effective and efficient.
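
    Stripped of the RAM/ROM and register-file terms, the core of such a hardware/software partitioning step resembles a 0/1 knapsack: choose operations to implement in hardware so as to maximize cycles saved under an area budget. The dynamic-programming sketch below is a deliberately simplified stand-in for the paper's algorithm, with invented numbers.

        # Toy knapsack view of HW/SW partitioning under an area budget.
        def partition(ops, area_budget):
            # ops: list of (name, hw_area, cycles_saved_in_hw)
            best = [[0] * (area_budget + 1) for _ in range(len(ops) + 1)]
            for i, (_, area, gain) in enumerate(ops, 1):
                for a in range(area_budget + 1):
                    best[i][a] = best[i - 1][a]
                    if area <= a:
                        best[i][a] = max(best[i][a], best[i - 1][a - area] + gain)
            chosen, a = [], area_budget       # backtrack the chosen hardware set
            for i in range(len(ops), 0, -1):
                if best[i][a] != best[i - 1][a]:
                    chosen.append(ops[i - 1][0]); a -= ops[i - 1][1]
            return best[-1][area_budget], chosen

        ops = [("mac", 30, 120), ("shift", 5, 15), ("div", 50, 90)]
        print(partition(ops, 60))   # -> (135, ['shift', 'mac'])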

  • Register-Transfer Level Testability Analysis and Its Application to Design for Testability

    Mizuki TAKAHASHI  Ryoji SAKURAI  Hiroaki NODA  Takashi KAMBE  

    PAPER-Test
    Vol: E81-A No:12  Page(s): 2646-2654

    In this paper, we propose a new register transfer level (RT level) testability analysis method. Controllability and observability measures are defined for signal vectors based on the number of values they can take. The control part and the datapath part are automatically identified in the given RT level model, and distinct analysis methods are applied to each. We also describe a DFT point selection method based on our testability measures. In an experiment on a signal processing circuit with a gate count of 7690, including 578 FFs, almost the same fault coverage is achieved with fewer scan FFs than with a conventional method based on gate level testability analysis.
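
    One way to picture a value-count-based measure: normalize the number of values a w-bit signal can actually take against the 2^w possibilities. The formula below is an illustrative guess at the flavor of such measures, not the paper's definition.

        import math

        # Illustrative measure: a w-bit signal that can take k of its 2^w
        # possible values gets controllability log2(k) / w, in [0, 1].
        def controllability(num_reachable_values, width):
            if width == 0 or num_reachable_values < 1:
                return 0.0
            return math.log2(num_reachable_values) / width

        # e.g. an 8-bit counter that only ever reaches values 0..99:
        print(controllability(100, 8))   # ~0.83 : fairly controllable
        print(controllability(1, 8))     # 0.0   : constant, uncontrollable
        print(controllability(256, 8))   # 1.0   : fully controllable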

  • A Unified Lossless Coding Scheme for Gray-Level, Bi-Level and Compound Images

    Shigeo KATO  Muling GUO  

    PAPER-Source Encoding
    Vol: E81-B No:12  Page(s): 2536-2543

    A unified source coding method is highly desirable for systems that deal with images ranging from 1 bit/pel bi-level documents to SHD (Super High Definition) images with 12 bit/pel for each color component, and progressive coding that allows images to be reconstructed with increasing pixel accuracy or spatial resolution is essential for many applications, including the World Wide Web, medical image archives, digital libraries, pre-press, and quick-look applications. In this paper, we propose a unified continuous-tone and bi-level image coding method with pyramidal, progressive transmission. The hierarchical structure is constructed by interlaced subsampling, and each level of the hierarchy is encoded by DPCM combined with a reduced Markov model. Simulation results show that the proposed method is slightly inferior to JBIG for bi-level image coding but achieves better lossless compression ratios for gray-level image coding than CREW, which exploits a wavelet transform to construct its hierarchical structure.
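
    Interlaced subsampling can be pictured as splitting the image into 2x2 phase grids: transmitting one phase first gives a quarter-resolution preview that the later phases progressively refine. A minimal sketch of that idea (the paper's actual hierarchy construction may differ):

        import numpy as np

        # Phase (0, 0) alone is a quarter-resolution image; the remaining
        # phases refine it progressively.
        def interlace_phases(img):
            return [img[dy::2, dx::2] for (dy, dx) in [(0, 0), (1, 1), (0, 1), (1, 0)]]

        img = np.arange(16).reshape(4, 4)
        for phase in interlace_phases(img):
            print(phase)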

  • A New Routing Method Considering Neighboring-Wire Capacitance Constraints

    Takumi WATANABE  Kimihiro YAMAKOSHI  Hitoshi KITAZAWA  

    PAPER-VLSI Design Technology and CAD
    Vol: E81-A No:12  Page(s): 2679-2687

    This paper presents a new routing method that takes neighboring-wire capacitance (inter-layer and intra-layer) constraints into account. Intermediate routing (IR) assigns each H/V wire segment to the detailed routing (DR) grid using global routing (GR) results, considering the neighboring-wire constraints (NWC) for critical nets. In DR, the IR results for constrained nets and their neighboring wires are preserved, and violations that occurred in IR are corrected. A simple method for setting NWC that satisfy the initial wire capacitance given in a set-wire-load (SWL) file is also presented. The routing method enables more accurate delay evaluation by considering inter-wire capacitance before DR, and avoids long and costly turnaround in deep-submicron layout design. Experimental results using MCNC benchmark test data show that the errors between the maximum delay from IR and that from DR for each net were less than 5% for long (long-delay) nets.
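
    For intuition about why neighboring-wire spacing matters, here is a back-of-the-envelope wire-capacitance model with area, fringe, and same-layer coupling terms. The coefficients are illustrative only and are not taken from the paper or any specific process.

        # Toy capacitance model: total C = area + fringe + coupling terms.
        def wire_cap_ff(length_um, width_um, spacing_um,
                        c_area=0.03, c_fringe=0.05, c_couple=0.08):
            area = c_area * width_um * length_um
            fringe = 2 * c_fringe * length_um
            couple = 2 * c_couple * length_um / spacing_um   # two neighbors
            return area + fringe + couple

        print(wire_cap_ff(100, 0.5, 0.5))   # tight spacing: coupling dominates
        print(wire_cap_ff(100, 0.5, 2.0))   # relaxed spacing cuts coupling 4x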

  • Ultrasonic Closing Click of the Prosthetic Cardiac Valve

    Jun HASEGAWA  Kenji KOBAYASHI  Hiroshi MATSUMOTO  

    LETTER-Bio-Cybernetics and Neurocomputing
    Vol: E81-D No:12  Page(s): 1517-1521

    Mechanical prosthetic cardiac valves generate not only the widely recognized audible closing clicks but also ultrasonic closing clicks, as we have previously reported. A personal-computer-based measurement and analysis system with a bandwidth of 625 kHz has been developed to clarify the characteristics of these ultrasonic closing clicks. Fifty cases in total were assessed clinically, including cases with tilting-disk valves, bileaflet valves, and flat-disk valves. The ultrasonic closing clicks are damped vibrations lasting about two milliseconds, and their frequency range was confirmed to extend from 8 kHz to 625 kHz, while that of the audible click reaches only up to 8 kHz. Although the sensitivity of the sensor decreased by approximately 30 dB at 625 kHz, effective power of the ultrasonic closing click was confirmed at this frequency. Moreover, it was shown that, surprisingly, the signal power at 625 kHz was still at the same level as that at around 100 kHz. These wide-bandwidth signal components exist independently of the type of mechanical valve, but the spectral pattern shows some dependence on the valve type.
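
    The kind of spectral analysis implied above can be sketched with a synthetic damped click sampled fast enough to cover a 625 kHz bandwidth. Everything below (sampling rate, decay constant, click frequency) is invented for illustration; the synthetic signal merely stands in for a recorded closing click.

        import numpy as np

        fs = 1_250_000                        # sampling rate, Hz (Nyquist = 625 kHz)
        t = np.arange(0, 0.002, 1 / fs)       # ~2 ms damped vibration
        click = np.exp(-t / 4e-4) * np.sin(2 * np.pi * 1e5 * t)

        spectrum = np.abs(np.fft.rfft(click)) ** 2
        freqs = np.fft.rfftfreq(len(click), 1 / fs)
        print(freqs[np.argmax(spectrum)])     # dominant component near 100 kHz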

  • The Underlying Ontology of a DSS Generator for Transportation Demand Forecasting

    Cristina FIERBINTEANU  Toshio OKAMOTO  Naotugu NOZUE  

    PAPER-Theory and Methodology
    Vol: E81-D No:12  Page(s): 1330-1338

    We introduce an ontology for transportation systems demand forecasting and its implementation in a decision support system (DSS) generator. The term ontology, as we use it here, means a collection of building blocks necessary and sufficient to construct the skeleton of a specific DSS; that is, a task ontology. The ontology is specified in constraint logic, which also ensures good support for modularity.

  • An Efficient Method for Finding an Optimal Bi-Decomposition

    Shigeru YAMASHITA  Hiroshi SAWADA  Akira NAGOYA  

    PAPER-Logic Synthesis
    Vol: E81-A No:12  Page(s): 2529-2537

    This paper presents a new efficient method for finding an "optimal" bi-decomposition form of a logic function. A bi-decomposition form of a logic function is the form f(X) = α(g1(X1), g2(X2)). We call a bi-decomposition form optimal when the total number of variables in X1 and X2 is the smallest among all bi-decomposition forms of f. This notion of optimality is especially adequate for the synthesis of LUT (Look-Up Table) networks, where the number of function inputs is important for the implementation. In our method, we consider only two bi-decomposition forms, (g1 AND g2) and (g1 XOR g2); all the other types of bi-decomposition forms can easily be found from these two. Our method efficiently finds one of the existing optimal bi-decomposition forms based on a branch-and-bound algorithm. Moreover, our method can also decompose incompletely specified functions. Experimental results show that we can construct better networks by using optimal bi-decompositions than by using conventional decompositions.
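
    To make the (g1 AND g2) case concrete: for a fixed partition (X1, X2), f is AND bi-decomposable exactly when f equals the conjunction of its existential projections onto X1 and X2. The exhaustive truth-table check below illustrates the test that a branch-and-bound search over partitions would invoke; it is a sketch, not the paper's algorithm, and the example function is invented.

        from itertools import product

        def and_bidecomposable(f, n, part1):
            part2 = [i for i in range(n) if i not in part1]
            g1, g2 = set(), set()                  # existential projections of f
            for x in product((0, 1), repeat=n):
                if f(x):
                    g1.add(tuple(x[i] for i in part1))
                    g2.add(tuple(x[i] for i in part2))
            for x in product((0, 1), repeat=n):    # does f == g1 AND g2 ?
                val = (tuple(x[i] for i in part1) in g1) and \
                      (tuple(x[i] for i in part2) in g2)
                if val != bool(f(x)):
                    return False
            return True

        f = lambda x: (x[0] | x[1]) & x[2]          # (x0 OR x1) AND x2
        print(and_bidecomposable(f, 3, [0, 1]))     # True
        print(and_bidecomposable(f, 3, [0, 2]))     # False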

  • A Model for Recording Software Design Decisions and Design Rationale

    Seiichi KOMIYA  

    PAPER-Theory and Methodology
    Vol: E81-D No:12  Page(s): 1350-1363

    To improve software quality and productivity, the author aims at realizing a software development environment in which software is developed by exploiting the merits of group work. Since networking is necessary for collaborative software development, he has developed a distributed software development environment for this purpose. In this environment, discussions about software design are held through a communication network, and the contents of the discussions are recorded as software design decisions and decision rationale. One feature of this environment is that the contents of discussions can be recorded online in real time and reused without reconstructing the recorded information. This paper clarifies the essential conditions for realizing this environment and proposes an information structure model for recording the contents of discussions that actualizes the above-mentioned feature. The effectiveness of the proposed model is demonstrated through an example of its application to software design discussions.

  • A Meta-Model of Work Structure of Software Project and a Framework for Software Project Management Systems

    Seiichi KOMIYA  Atsuo HAZEYAMA  

    PAPER-System
    Vol: E81-D No:12  Page(s): 1415-1428

    Development of large-scale software is usually conducted as a project that unites a work force. In addition, no matter what kind of life cycle model is employed, a development plan is required for a software development project in order for the united work force to function effectively. For the project to be successful, it is also necessary to set management objectives based on this plan and to confirm that they are achieved. This method is considered effective, but actually drawing up a software development plan and tracking the achievement of the management objectives at each step is not easy, because it is difficult to predict the work amounts and risks that a software project involves. It is therefore necessary to develop a system that supports software project management, so that the project manager can oversee the entire project with a reduced work load. This paper proposes a meta-model of the work structure of software development projects, built on an object-oriented database with constraints, as well as a framework for software project management systems based on this meta-model. The effectiveness of the meta-model and framework in software project management is demonstrated through an example system that analyzes repercussions on the progress of a software development project.

  • A Test Methodology for Core-Based System LSIs

    Makoto SUGIHARA  Hiroshi DATE  Hiroto YASUURA  

    PAPER-Test
    Vol: E81-A No:12  Page(s): 2640-2645

    In this paper, we propose a test methodology for core-based system LSIs that aims to decrease their testing time. In our method, every core is supplied with several sets of test vectors, each of which guarantees sufficient fault coverage. Each set consists of two parts: one based on built-in self-test (BIST) and the other based on external testing. For every core, the sets are designed with different ratios of BIST to external testing. We can then minimize the testing time of a core-based system LSI by selecting one of the given sets of test vectors for each core. The main contributions of this paper are summarized as follows. (i) BIST is efficiently combined with external testing to relax the limitation imposed by the external primary inputs and outputs. (ii) External testing for one of the cores and BIST for the others are performed in parallel to reduce the total testing time. (iii) The testing-time minimization problem for core-based system LSIs is formulated as a combinatorial optimization problem that selects the optimal set of test vectors for each core from the given sets.
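
    A toy model of contribution (iii): suppose each core offers several (external, BIST) time pairs, external tests share the chip pins and run one at a time, and a core's BIST can overlap the other cores' external slots. The sketch enumerates the choices and schedules external slots longest-BIST-first; the scheduling model and all numbers are invented simplifications of the paper's formulation.

        from itertools import product

        def makespan(choice):
            # schedule external slots longest-BIST-first (tail-first heuristic)
            order = sorted(choice, key=lambda eb: -eb[1])
            t, finish = 0, 0
            for ext, bist in order:
                t += ext                          # external slot ends at t
                finish = max(finish, t + bist)    # its BIST trails in parallel
            return finish

        cores = [[(4, 10), (8, 3)],    # core A: BIST-heavy or external-heavy set
                 [(6, 6), (10, 1)],    # core B
                 [(5, 8), (9, 2)]]     # core C
        best = min(product(*cores), key=makespan)
        print(best, makespan(best))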

  • Database Guided Realistic Grasping Posture Generation Using Inverse Kinematics

    Yahya AYDIN  Masayuki NAKAJIMA  

    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E81-D No:11  Page(s): 1272-1280

    This paper addresses the important issue of estimating realistic grasping postures, and presents a methodology and algorithm to automate the generation of hand and body postures during the grasp of arbitrarily shaped objects. Predefined body postures stored in a database are generalized to a specific grasp using inverse kinematics. The reachable space is represented discretely by dividing it into small subvolumes, which makes it possible to construct the database. The paper also addresses some common problems of articulated-figure animation. A new approach to body positioning with kinematic constraints on both hands is described, and an efficient and accurate treatment of joint constraints is presented. The results obtained are quite satisfactory, and some of them are shown in the paper. The proposed algorithms can find application in the motion of virtual actors, in all kinds of animation systems involving human motion, in robotics, and in other fields such as medicine, for instance to move the artificial limbs of handicapped people in a natural way.
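
    A minimal flavor of the database-plus-IK idea: quantize the reachable space into subvolumes that key stored postures, then refine the retrieved posture with analytic inverse kinematics (here, a planar two-link solver). All cell sizes, postures, and link lengths below are invented for illustration.

        import math

        CELL = 0.25
        posture_db = {(2, 1, 0): "side-grasp", (0, 3, 1): "top-grasp"}  # toy entries

        def lookup(target):
            """Retrieve the stored posture for the subvolume containing target."""
            key = tuple(int(c // CELL) for c in target)
            return posture_db.get(key, "default-grasp")

        def two_link_ik(x, y, l1=0.30, l2=0.25):
            """Planar shoulder/elbow angles reaching (x, y), if reachable."""
            d2 = x * x + y * y
            c2 = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
            if abs(c2) > 1:
                return None                     # outside the reachable space
            elbow = math.acos(c2)
            shoulder = math.atan2(y, x) - math.atan2(l2 * math.sin(elbow),
                                                     l1 + l2 * math.cos(elbow))
            return shoulder, elbow

        print(lookup((0.55, 0.30, 0.10)), two_link_ik(0.40, 0.20))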
