
Keyword Search Result

[Keyword] OMP (3945 hits)

3681-3700 hits (of 3945)

  • On Computing Connecting Orbits: General Algorithm and Applications to the Sine–Gordon and Hodgkin–Huxley Equations

    Eusebius J. DOEDEL  Mark J. FRIEDMAN  John GUCKENHEIMER  

     
    PAPER-Chaos and Related Topics

      Vol:
    E77-A No:11
      Page(s):
    1801-1805

    A systematic method for locating and computing branches of connecting orbits developed by the authors is outlined. The method is applied to the sine–Gordon and Hodgkin–Huxley equations.

  • Properties of Circuits in a W-Graph

    Hua-An ZHAO  Wataru MAYEDA  

     
    PAPER-Graphs, Networks and Matroids

      Vol:
    E77-A No:10
      Page(s):
    1692-1699

    A W-graph is a partially known graph which contains wild-components. A wild-component is an incompletely defined connected subgraph having p vertices and p-1 unspecified edges. The only information known about a wild-component is its vertex set and the fact that between any two of its vertices there is one and only one path. In this paper, we discuss the properties of circuits in a W-graph (called W-circuits). Although a W-graph has unspecified edges, we can obtain some important properties of W-circuits. We show that the W-ring sum of W-circuits is also a W-circuit in the same W-graph. The following (1) and (2) are proved: (1) a W-circuit Ci of a W-graph can be transformed into either a circuit or an edge-disjoint union of circuits, denoted by Ci*, of a graph derived from the W-graph; (2) if W-circuits C1, C2, …, Cn are linearly independent, then C1*, C2*, …, Cn* obtained in (1) are also linearly independent.

  • Recent Development of Testing System for Arcing Contacts

    Hideaki SONE  Tasuku TAKAGI  

     
    INVITED PAPER

      Vol:
    E77-C No:10
      Page(s):
    1545-1552

    The reliability of an electric contact can be defined by two parameters, contact resistance and wear, and for contacts operated under arcing conditions these parameters are governed by the arc discharge. Measurement of the relationship between these parameters and the arc phenomena is therefore necessary to improve contact performance. The parameters and problems of arcing electric contacts are reviewed, and a new concept for electric contact testing systems is proposed. Measurement with such an advanced system should consist of concurrent parallel measurement, quantitative measurement of degradation, systematic measurement, and analysis of arc discharge phenomena. Some examples of advanced measurement systems and new data obtained with such systems are described. Systematic results on the relationships between condition and performance parameters were obtained by measurements under systematically set conditions, such as opening speed or contact material. A measurement method for the metallic-phase arc duration was developed by the authors, and the role of the metallic-phase arc in the contact performance parameters was identified from interpretation of the obtained data. The real-time surface profile measurement of an operating contact and the optical transient spectrum analyser for arc light radiated from breaking contacts are also described.

  • Contact Characteristics of New Self-Lubricating Composite Materials

    Yoshitada WATANABE  

     
    PAPER-Sliding Contacts

      Vol:
    E77-C No:10
      Page(s):
    1662-1667

    Composite materials of solid lubricants, such as graphite, MoS2, WS2, etc., and metals are used as sliding electrical contacts. However, few reports have so far been presented on the detailed characteristics of such composite materials. This report shows that the contact resistance and coefficient of friction of a sliding contact of a Cu-Nb-system composite material against Cu were higher than those of a Cu-Sn-system composite material against Cu. It was further found that composite materials of the Cu-Sn system were superior to those of the Cu-Nb system, with both contact resistance and coefficient of friction lowered. At the same time, it was found that composite materials of Cu-Sn alloy base containing exclusively WS2 performed better than those containing both WS2 and MoS2. It is therefore suggested that, for practical applications, samples suited to the service conditions should be selected from the Cu-Sn-system composite materials that contain exclusively WS2.

  • Estimation of 3-D Motion from Optical Flow with Unbiased Objective Function

    Norio TAGAWA  Takashi TORIU  Toshio ENDOH  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition

      Vol:
    E77-D No:10
      Page(s):
    1148-1161

    This paper describes a noise-resistant algorithm for estimating 3-D rigid motion from optical flow. We first discuss the problem of constructing the objective function to be minimized. If a Gaussian distribution is assumed for the noise, it is well known that least-squares minimization becomes maximum likelihood estimation. However, the use of this objective function makes the minimization procedure expensive because the program has to go through all the points in the image at each iteration. We therefore introduce an objective function that provides unbiased estimators. Using this function reduces computational costs. Furthermore, since good approximations can be obtained analytically for the function, using them as an initial guess we can apply an iterative minimization method to the function, which is expected to be stable. The effectiveness of this method is demonstrated by computer simulation.

  • A Preconstrained Compaction Method Applied to Direct Design-Rule Conversion of CMOS Layouts

    Hiroshi MIYASHITA  

     
    PAPER-Computer Aided Design (CAD)

      Vol:
    E77-A No:10
      Page(s):
    1684-1691

    This paper describes a preconstrained compaction method and its application to the direct design-rule conversion of CMOS layouts. This approach can convert already designed physical patterns into compacted layouts that satisfy user-specified design rules. Furthermore, preconstrained compaction can eliminate unnecessarily extended diffusion areas and polysilicon wires which tend to be created with conventional longest path based compactions. Preconstrained compaction can be constructed by combining a longest path algorithm with forward and backward slack processes and a preconstraint generation process. This contrasts with previously proposed approaches based on longest path algorithms followed by iterative improvement processes, which include applications of linear programming. The layout styles in those approaches are usually limited to a model where fixed-shaped rectilinear blocks are moved so as to minimize the total length of rectilinear interconnections among the blocks. However, preconstrained compaction can be applied to reshaping polygonal patterns such as diffusion and channel areas. Thus, this compaction method makes it possible to reuse CMOS leaf and macro cell layouts even if design rules change. The proposed preconstrained compaction approach has been applied to direct design-rule conversion from 0.8-µm to 0.5-µm rules of CMOS layouts containing from several to 10,195 transistors. Experimental results demonstrate that a 10.6% reduction in diffusion areas can be achieved without unnecessary extensions of polysilicon wires with a 39% increase in processing times compared with conventional approaches.
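
    A small sketch of the classic longest-path step on which conventional compaction (and this method's starting point) relies, assuming a cycle-free constraint graph in which an edge (a, b, d) requires x[b] >= x[a] + d; this is only an illustration of that shared step, not the paper's preconstraint generation or slack processes:

    ```python
    # Minimum-coordinate (leftmost) placement from a constraint DAG via a longest-path pass.
    from collections import defaultdict, deque

    def longest_path_positions(n, constraints):
        """constraints: iterable of (a, b, d) meaning x[b] >= x[a] + d, for nodes 0..n-1."""
        adj = defaultdict(list)
        indeg = [0] * n
        for a, b, d in constraints:
            adj[a].append((b, d))
            indeg[b] += 1
        x = [0] * n
        queue = deque(i for i in range(n) if indeg[i] == 0)
        while queue:                      # Kahn-style topological sweep
            a = queue.popleft()
            for b, d in adj[a]:
                x[b] = max(x[b], x[a] + d)
                indeg[b] -= 1
                if indeg[b] == 0:
                    queue.append(b)
        return x

    # Example: 0->1 needs 3 units, 1->2 needs 2, 0->2 needs 4  =>  x = [0, 3, 5]
    print(longest_path_positions(3, [(0, 1, 3), (1, 2, 2), (0, 2, 4)]))
    ```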

  • A Parallel Method for the Prefix Convex Hulls Problem

    Wei CHEN  Koji NAKANO  Toshimitsu MASUZAWA  Nobuki TOKURA  

     
    PAPER-Algorithms, Data Structures and Computational Complexity

      Vol:
    E77-A No:10
      Page(s):
    1675-1683

    Given a sorted set S of n points in the plane, the prefix convex hulls problem of S is to compute the convex hull for every prefix set of S. We present a parallel algorithm for this problem. Our algorithm runs in O(log n) time using n/log n processors in the CREW PRAM computational model. The algorithm is shown to be time and cost optimal. One of the techniques we adopt to achieve these optimal bounds is the use of a new parallel data structure, the Array-Tree.
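
    As a sequential point of reference for the problem itself (not the CREW PRAM algorithm of the paper), the hull of every prefix of a lexicographically sorted, duplicate-free point list can be produced by maintaining the lower and upper chains incrementally; a minimal Python sketch:

    ```python
    def cross(o, a, b):
        """Twice the signed area of triangle (o, a, b); > 0 for a counter-clockwise turn."""
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def prefix_hulls(points):
        """Convex hull of every prefix of a lexicographically sorted, duplicate-free point list."""
        lower, upper, hulls = [], [], []
        for p in points:
            while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
                lower.pop()
            lower.append(p)
            while len(upper) >= 2 and cross(upper[-2], upper[-1], p) >= 0:
                upper.pop()
            upper.append(p)
            if len(lower) == 1:
                hulls.append([p])
            else:
                # counter-clockwise hull: bottom chain left-to-right, top chain right-to-left
                hulls.append(lower[:-1] + upper[::-1][:-1])
        return hulls

    # Example: the last prefix hull of the unit square is the full square, CCW.
    print(prefix_hulls([(0, 0), (0, 1), (1, 0), (1, 1)])[-1])
    ```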

  • High-Density, High-Pin-Count Flexible SMD Connector for High-Speed Data Bus

    Shinichi SASAKI  Tohru KISHIMOTO  

     
    PAPER-Components

      Vol:
    E77-C No:10
      Page(s):
    1694-1701

    This paper describes a high-density, high-pin-count flexible SMD connector used for high-speed data buses between MCMs or daughter boards. The connector consists of a flexible film cable interconnection with accurately controlled characteristic impedance, and a contact housing composed of double-line contacts and SMD-type leads. It has 98 contacts at a pitch of 0.4 mm, and its mounting area is 6 mm wide and 23 mm long. The flexible cable has a double-sided, triple-parallel micro-stripline structure with an insertion force of less than 2.9 kgf and a characteristic impedance of 48 to 50 Ω. Insertion loss is -0.5 dB at 600 MHz, and crosstalk noise is less than 110 mV at a 250 ps rise time. This connector can be used for high-speed data transmission of up to 300 ps rise time.

  • Mapping QR Decomposition on Parallel Computers: A Study Case for Radar Applications

    Antonio d'ACIERNO  Michele CECCARELLI  Alfonso FARINA  Alfredo PETROSINO  Luca TIMMONERI  

     
    PAPER-Electronic and Radio Applications

      Vol:
    E77-B No:10
      Page(s):
    1264-1271

    Sidelobe cancellation in radar systems is a highly computationally demanding problem. It can be efficiently tackled by resorting to the QR decomposition mapped onto a systolic array processor. The paper reports several mapping strategies using massively parallel computers available on the market. MIMD as well as SIMD machines have been used, specifically the MEIKO Computing Surface, nCUBE2, Connection Machine CM-200, and MasPar MP-1. The achieved data throughput values have been measured for a number of operational situations of practical interest.
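
    For background only, the QR decomposition is commonly built from Givens rotations, whose local, regular update pattern is what maps naturally onto systolic arrays; the plain sequential sketch below is an illustration of that building block, not the parallel mappings studied in the paper:

    ```python
    import numpy as np

    def givens_qr(A):
        """QR factorization A = Q R by Givens rotations (sequential reference version)."""
        R = np.array(A, dtype=float)
        m, n = R.shape
        Q = np.eye(m)
        for j in range(n):
            for i in range(m - 1, j, -1):
                a, b = R[i - 1, j], R[i, j]
                r = np.hypot(a, b)
                if r == 0.0:
                    continue                          # entry already zero, nothing to rotate
                c, s = a / r, b / r
                G = np.array([[c, s], [-s, c]])       # rotation acting on rows i-1 and i
                R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
                Q[:, i - 1:i + 1] = Q[:, i - 1:i + 1] @ G.T
        return Q, R

    A = np.random.rand(5, 3)
    Q, R = givens_qr(A)
    print(np.allclose(Q @ R, A), np.allclose(np.tril(R, -1), 0))   # True True
    ```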

  • A Polynomial Time Learning Algorithm for Recognizable Series

    Hiroyuki OHNISHI  Hiroyuki SEKI  Tadao KASAMI  

     
    PAPER-Automata, Languages and Theory of Computing

      Vol:
    E77-D No:10
      Page(s):
    1077-1085

    A recognizable series is a model of a sequential machine. A recognizable series S is represented by a triple (λ, µ, γ), called a linear representation of S, where λ is a row vector of dimension n specifying the initial state, γ is a column vector of dimension n specifying the output at a state, and µ is a morphism from input words to n×n matrices specifying the state transition. The output for an input word w is defined as λ(µw)γ, called the coefficient of w in S and written (S, w). We present an algorithm which constructs a reduced linear representation of an unknown recognizable series S, with coefficients in a commutative field, using coefficient queries and equivalence queries. The answer to a coefficient query with a word w is the coefficient (S, w) of w in S. When one asks an equivalence query with a linear representation (λ, µ, γ), if (λ, µ, γ) is a linear representation of S, yes is returned; otherwise a word c such that λ(µc)γ ≠ (S, c), together with the coefficient (S, c), is returned. Such a word c is called a counterexample for the query. For each execution step of the algorithm, the execution time consumed from the initial step to the current step is O(mN^4 M), where N is the dimension of a reduced linear representation of S, M is the maximum time consumed by a single fundamental operation (addition, subtraction, multiplication or division), and m is the maximum length of the counterexamples returned for equivalence queries up to that step.
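
    To make the notation concrete, the coefficient (S, w) = λ(µw)γ under a given linear representation is a single left-to-right vector-matrix product; a minimal sketch assuming NumPy and a dictionary mu mapping each input symbol to an n×n matrix (the names and the toy example are illustrative, not from the paper):

    ```python
    import numpy as np

    def coefficient(lam, mu, gamma, word):
        """Evaluate (S, w) = lam * mu(w[0]) * ... * mu(w[-1]) * gamma."""
        v = np.asarray(lam, dtype=float)                   # row vector of dimension n
        for symbol in word:
            v = v @ np.asarray(mu[symbol], dtype=float)    # state transition for one symbol
        return float(v @ np.asarray(gamma, dtype=float))   # output column vector

    # Toy two-state representation whose coefficient counts the occurrences of 'a' in a word.
    mu = {'a': [[1, 1], [0, 1]], 'b': [[1, 0], [0, 1]]}
    print(coefficient([1, 0], mu, [0, 1], "abaa"))         # -> 3.0
    ```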

  • Logic Synthesis and Optimization Algorithm of Multiple-Valued Logic Functions

    Ali Massound HAIDAR  Mititada MORISUE  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E77-D No:10
      Page(s):
    1106-1117

    This paper presents a novel logic synthesis method for optimizing ternary logic functions of any given number of input variables. A new optimization algorithm to synthesize and minimize an arbitrary ternary logic function of n input variables always leads the function to an optimal or nearly optimal solution, where [n(n-1)/2]+1 searches are necessary to achieve the optimal solution. The complexity of this algorithm is therefore greatly reduced from O(3^n) to O(n^2). The advantages of this synthesis and optimization algorithm are: (1) a very easy logic synthesis method; (2) algorithm complexity of O(n^2); (3) an optimal solution obtainable in a very short time; (4) the method can solve the interconnection problems (interconnection delay) of VLSI and ULSI processors, where very fast and parallel operations can be achieved. A transformation method between the operational and polynomial domains of ternary logic functions of n input variables is also discussed. This transformation method is effective and simple. Circuits for the GF(3) operators, addition and multiplication mod 3, are proposed, composed of Josephson junction devices. The simulation results of these circuits and examples show the following advantages: very good performance, very low power consumption, and ultra-high-speed switching operation.
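
    For reference, the GF(3) operators mentioned above, addition and multiplication mod 3, are fully specified by two 3×3 tables; a minimal software sketch (the Josephson-junction circuit realization is outside its scope):

    ```python
    # GF(3) = {0, 1, 2} with addition and multiplication taken modulo 3.
    def gf3_add(a, b):
        return (a + b) % 3

    def gf3_mul(a, b):
        return (a * b) % 3

    add_table = [[gf3_add(a, b) for b in range(3)] for a in range(3)]
    mul_table = [[gf3_mul(a, b) for b in range(3)] for a in range(3)]
    print(add_table)   # [[0, 1, 2], [1, 2, 0], [2, 0, 1]]
    print(mul_table)   # [[0, 0, 0], [0, 1, 2], [0, 2, 1]]
    ```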

  • Data Compression and Interpolation of Multi-View Image Set

    Toshiaki FUJII  Hiroshi HARASHIMA  

     
    PAPER

      Vol:
    E77-D No:9
      Page(s):
    987-995

    This paper is concerned with the data compression and interpolation of a multi-view image set. We propose a novel disparity compensation scheme based on geometric relationships. We first investigate the geometric relationship between a point in the object space and its projections onto the view images. We then propose a disparity compensation scheme which utilizes the geometric constraints between view images. This scheme compresses the multi-view image set into a structure of triangular patches and the texture data on the surface of the patches. The scheme not only compresses the multi-view image set but also synthesizes views from any viewpoint in the viewing zone. It is also fast and compatible with 2-D interframe coding. Finally, we report an experiment in which two multi-view image sets were used as original images and the amount of data was reduced to 1/19 and 1/20 with SNRs of 34 dB and 20 dB, respectively.

  • A Method for Solving Configuration Problem in Scene Reconstruction Based on Coplanarity

    Seiichiro DAN  Toshiyasu NAKAO  Tadahiro KITAHASHI  

     
    PAPER

      Vol:
    E77-D No:9
      Page(s):
    958-965

    We can understand and recover a scene even from a picture or a line drawing. A number of methods have been developed for solving this problem, but they have scarcely addressed scenes containing multiple objects, although they can recognize the three-dimensional shape of each object. In this paper, addressing this problem, we describe a method for determining the configuration of multiple objects. The method employs a coplanarity assumption and an occlusion constraint. The coplanarity assumption generates candidate configurations of the multiple objects, and the occlusion constraint prunes impossible configurations. By combining this method with a shape-recovery method for individual objects, we have implemented a system that acquires three-dimensional information about a scene containing multiple objects from a monocular image.

  • A Fault Model for Multiple-Valued PLA's and Its Equivalences

    Yasunori NAGATA  Masao MUKAIDONO  

     
    PAPER-Computer Aided Design (CAD)

      Vol:
    E77-A No:9
      Page(s):
    1527-1534

    In this paper, a fault model for multiple-valued programmable logic arrays (MV-PLAs) is proposed, and the equivalences of faults in MV-PLAs are discussed. In the assumed multiple-valued NOR/TSUM PLA model, it is shown that multiple-valued stuck-at faults, multiple-valued bridging faults, multiple-valued threshold shift faults and some other faults in a literal generator circuit are equivalent or subequivalent to a multiple crosspoint fault in the NOR plane or a multiple fault of weights in the TSUM plane. These results imply that a multiple-valued test vector set which detects all multiple crosspoint faults and all multiple faults of weights also detects the above equivalent or subequivalent faults in an MV-PLA.

  • Implication Problems for Specialization Constraints on Databases Supporting Complex Objects

    Minoru ITO  Michio NAKANISHI  

     
    PAPER-Algorithms, Data Structures and Computational Complexity

      Vol:
    E77-A No:9
      Page(s):
    1510-1519

    For a complex object model, a form of range restriction, called a specialization constraint (SC), has been proposed; it is associated not only with the properties themselves but also with property value paths. The domain and range of an SC, however, were limited to single classes. In this paper, SCs are generalized to have sets of classes as their domains and ranges. Let Σ be a set of SCs, where each SC in Σ has a set of classes as its domain and a non-empty set of classes as its range. It is proved that an SC is a logical consequence of Σ if and only if it is a finite logical consequence of Σ. Then a sound and complete axiomatization for SCs is presented. Finally, a polynomial-time algorithm is given which decides whether or not an SC is a logical consequence of Σ.

  • Exhaustive Computation to Derive the Lower Bound for Sorting 13 Items

    Shusaku SAWATO  Takumi KASAI  Shigeki IWATA  

     
    PAPER-Algorithm and Computational Complexity

      Vol:
    E77-D No:9
      Page(s):
    1027-1031

    We have made an exhaustive computation to establish that 33 comparisons never suffice to sort 13 items. The computation was carried out within 10 days on a workstation. Since merge insertion sort [Ford, et al., A tournament problem, Amer. Math. Monthly, vol. 66, (1959)] uses 34 comparisons to sort 13 items, our result guarantees the optimality of that sorting procedure for 13 items as far as the number of comparisons is concerned. The problem had been open for nearly three decades, since Mark Wells discovered in 1965 that 30 comparisons are required to sort 12 items.
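
    For context, the 34-comparison figure is the standard worst-case count of merge insertion (Ford-Johnson) sort, the sum over k = 1..n of ceil(log2(3k/4)), while the information-theoretic lower bound ceil(log2(13!)) is only 33; the paper closes this remaining gap. A quick check:

    ```python
    import math

    def merge_insertion_comparisons(n):
        """Worst-case comparison count of Ford-Johnson (merge insertion) sort."""
        return sum(math.ceil(math.log2(3 * k / 4)) for k in range(1, n + 1))

    info_bound = math.ceil(math.log2(math.factorial(13)))     # information-theoretic bound
    print(merge_insertion_comparisons(13), info_bound)        # 34 33
    ```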

  • Passive Depth Acquisition for 3D Image Displays

    Kiyohide SATOH  Yuichi OHTA  

     
    INVITED PAPER

      Vol:
    E77-D No:9
      Page(s):
    949-957

    In this paper, we first discuss a framework for a 3D image display system that combines passive sensing and active display technologies. Passive sensing makes it possible to capture real scenes under natural conditions. Active display presents arbitrary views with proper motion parallax following the observer's motion. The requirements that 3D image displays place on passive sensing technology are discussed in comparison with those of robot vision. Then, a new stereo algorithm, called SEA (Stereo by Eye Array), which satisfies these requirements is described in detail. The SEA uses nine images captured by a 3×3 camera array. It has the following features for depth estimation: 1) pixel-based correspondence search yields a dense, high-spatial-resolution depth map; 2) correspondence ambiguity for linear edges parallel to a particular baseline is eliminated by using multiple baselines with different orientations; 3) occlusion can be easily detected, and an occlusion-free depth map with sharp object boundaries is generated. The feasibility of the SEA is demonstrated by experiments using real image data.

  • Computer Error Analysis of Rainfall Rates Measured by a C-Band Dual-Polarization Radar

    Yuji OHSAKI  

     
    PAPER-Antennas and Propagation

      Vol:
    E77-B No:9
      Page(s):
    1162-1170

    Radar signals fluctuate because of the incoherent scattering of raindrops. Dual-polarization radar estimates rainfall rates from differential reflectivity (ZDR) and horizontal reflectivity (ZH). Here, ZDR and ZH are extracted from fluctuating radar signals by averaging. Therefore, instrumentally measured ZDR and ZH always have errors, so that estimated rainfall rates also have errors. This paper evaluates rainfall rate errors caused by signal fluctuation. Computer simulation based on a physical raindrop model is used to investigate the standard deviation of rainfall rate. The simulation considers acquisition time, and uses both simultaneous and alternate sampling of horizontal and vertical polarizations for square law and logarithmic estimators at various rainfall rates and elevation angles. When measuring rainfall rates that range from 1.0 to 10.0mm/h with the alternate sampling method, using a logarithmic estimator at a relatively large elevation angle, the estimated rainfall rates have significant errors. The simultaneous sampling method is effective in reducing these errors.

  • Some Two-Person Game is Complete for AC^k Under Many-One NC^1 Reducibility

    Shigeki IWATA  

     
    PAPER-Automata, Languages and Theory of Computing

      Vol:
    E77-D No:9
      Page(s):
    1022-1026

    AC^k is the class of problems solvable by an alternating Turing machine in space O(log n) and alternation depth O(log^k n) [S. A. Cook, A taxonomy of problems with fast parallel algorithms, Inform. Contr., vol. 64]. We consider a game played by two persons: the players alternately move a marker along an edge of a given digraph, and the first player who cannot move loses the game. It is shown that the problem of determining whether the first player can win the game on a digraph with n nodes in exactly log^k n moves is complete for AC^k under NC^1 reducibility.
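
    The winning condition of the game itself (leaving aside the exact-move-count variant and the completeness proof) is a simple recursion: the player to move wins iff some move leads to a position from which the opponent loses. A minimal sketch on a finite acyclic digraph, with adjacency given as a dict of lists:

    ```python
    from functools import lru_cache

    def first_player_wins(adj, start):
        """Normal play on a finite acyclic digraph: the first player unable to move loses."""
        @lru_cache(maxsize=None)
        def wins(v):
            # v is winning for the player to move iff some successor is losing for the opponent.
            return any(not wins(w) for w in adj.get(v, ()))
        return wins(start)

    # Example: on the path 0 -> 1 -> 2, position 0 is losing for the player to move
    # (two moves remain), while position 1 is winning (one move remains).
    adj = {0: [1], 1: [2], 2: []}
    print(first_player_wins(adj, 0), first_player_wins(adj, 1))   # False True
    ```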

  • Error Performance of Overlapping Multi-Pulse Pulse Position Modulation (OMPPM) and Trellis Coded OMPPM in Optical Direct-Detection Channel

    Tomoaki OHTSUKI  Iwao SASASE  Shinsaku MORI  

     
    PAPER-Optical Communication

      Vol:
    E77-B No:9
      Page(s):
    1133-1143

    We analyze the error performance of overlapping multi-pulse pulse position modulation (OMPPM) in an optical direct-detection channel in the presence of noise. Moreover, we analyze the error performance of trellis-coded OMPPM with a small overlap index N=2 in an optical direct-detection channel, which achieves significant coding gains over uncoded PPM, uncoded MPPM and trellis-coded overlapping PPM (OPPM) with the same pulsewidth. First, we analyze the symbol error probability of OMPPM in both the quantum-limited case and the quantum-and-background-noise case, using a distance defined as the number of nonoverlapped pulsed chips between symbols. Second, using this distance, we partition the OMPPM signals and apply the four-state and eight-state codes described by Ungerboeck to OMPPM. It is shown that trellis coding of OMPPM is effective in the optical direct-detection channel: the eight-state trellis-coded (4,2,2) OMPPM can achieve gains of 3.92 dB and 3.23 dB over uncoded binary PPM in the quantum-limited case and in the quantum-and-background-noise case with one noise photon per slot time, respectively.
