Keyword Search Results

[Keyword] edge (512 hits)

Showing results 401-420 of 512

  • Fuzzy Rule-Based Edge Detection Using Multiscale Edge Images

    Kaoru ARAKAWA  

     
    PAPER
    Vol: E83-A No:2, Page(s): 291-300

    Fuzzy rule-based edge detection using multiscale edge images is proposed. In this method, the edge image is obtained by fuzzy approximate reasoning from multiscale edge images, which are themselves obtained by derivative operators with various window sizes. The benefit of using multiscale edge images for edge detection is well known, but how to design the rules that decide edges from multiscale edge images has not yet been clarified. In this paper, the rules are represented in a fuzzy style, since edges are usually defined ambiguously, and the fuzzy rules are designed optimally by a training method. Here, the fuzzy approximate reasoning is expressed as a nonlinear function of the multiscale edge image data, and the nonlinear function is optimized so that the mean square error of the edge detection is minimized. Computer simulations verify its high performance on actual images.
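
    As a rough illustration of the approach (a minimal sketch, not the paper's design: the scales, the Gaussian/Sobel operators, and the least-squares blend standing in for the trained fuzzy reasoning function are all assumptions, and train_pairs is a hypothetical list of (image, ground-truth edge map) pairs):

      import numpy as np
      from scipy.ndimage import gaussian_filter, sobel

      SCALES = (1.0, 2.0, 4.0)   # assumed stand-ins for the window sizes

      def multiscale_edges(img, scales=SCALES):
          """Stack of normalized gradient magnitudes at several scales."""
          feats = []
          for s in scales:
              g = gaussian_filter(img.astype(float), s)
              mag = np.hypot(sobel(g, axis=0), sobel(g, axis=1))
              feats.append(mag / (mag.max() + 1e-12))
          return np.stack(feats, axis=-1)            # H x W x n_scales

      def train_combiner(train_pairs, scales=SCALES):
          """Fit per-scale weights minimizing mean-square edge error."""
          X = np.vstack([multiscale_edges(im, scales).reshape(-1, len(scales))
                         for im, _ in train_pairs])
          y = np.concatenate([t.reshape(-1).astype(float)
                              for _, t in train_pairs])
          w, *_ = np.linalg.lstsq(X, y, rcond=None)  # least-squares training
          return w

      def detect(img, w, scales=SCALES):
          return multiscale_edges(img, scales) @ w   # soft edge map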

  • A Note on the Edge Guard Problem for Spiral Polygons

    Xuehou TAN  

     
    LETTER-Theory/Models of Computation
    Vol: E83-D No:2, Page(s): 283-284

    Two different examples were given by Aggarwal and Viswanathan, respectively, to establish the necessity of (n + 2)/5 edge guards for spiral polygons. However, the former example is incorrect. To show why it is wrong, we give an alternate proof of the sufficiency of (n + 2)/5 edge guards for spiral polygons. Our proof is simpler than the sufficiency proof given by Viswanathan.

  • Design Aspects of Discovery Systems

    Osamu MARUYAMA  Satoru MIYANO  

     
    INVITED PAPER
    Vol: E83-D No:1, Page(s): 61-70

    This paper reviews design aspects of computational discovery systems through the analysis of several successful discovery systems. We first review the concept of a viewscope/view on data, which provides an interpretation of raw data in a specific domain. We then relate this concept to the KDD process described by Fayyad et al. (1996) and to the developer's role in computational discovery due to Langley (1998). We emphasize that the integration of human experts and discovery systems is a crucial problem in designing discovery systems, and we argue, based on the analysis of discovery systems, that the concept of viewscope/view offers a way to approach this problem.

  • Diffraction Pattern Functions of a Wedge Composed of Metal and Lossless Dielectric Illuminated by an E-Polarized Plane Wave

    Huen-Tae HA  Jung-Woong RA  Se-Yun KIM  

     
    LETTER-Antennas and Propagation
    Vol: E83-B No:1, Page(s): 103-106

    Diffraction pattern functions for E-polarized scattering by a wedge composed of a perfectly conducting metal and a lossless dielectric with arbitrary permittivity are analyzed by applying an improved physical optics approximation and its correction. The correction terms are expressed as a complete expansion in Neumann series, whose coefficients are calculated numerically to satisfy the null-field condition in the complementary region.

  • Recent Progress of High-Temperature Superconductor Josephson Junction Technology for Digital Circuit Applications

    Jiro YOSHIDA  

     
    INVITED PAPER-Digital Applications
    Vol: E83-C No:1, Page(s): 49-59

    Recent progress in high-temperature superconductor Josephson junction technology is reviewed in light of future applications to digital circuits. Among the various types of Josephson junctions developed so far, ramp-edge-type junctions with a barrier layer composed of oxide materials near the metal-insulator transition seem to offer a unique opportunity to fulfill all the requirements for digital circuit applications, by virtue of their small junction dimensions, overdamped properties, and relatively high IcRn products at temperatures of around 30-40 K. Recently developed interface-engineered junctions can be classified as junctions of this type. These junctions also raise an interesting problem in physics concerning the possibility of resonant tunneling of Cooper pairs via localized states in the barrier. From the viewpoint of practical applications, reducing the spread of the junction parameters is still a serious challenge for the present fabrication technology. Although interface-engineered junctions seem the most promising in this regard at present, a 1σ spread of around 8% in the present fabrication technology is far from satisfactory for the fabrication of large-scale integrated circuits. A detailed understanding of the barrier formation mechanism in the interface-engineered junction is indispensable not only for advancing this particular fabrication technology but also for improving other junction technologies utilizing ramp-edge structures.

  • Utilizing Repair Cases of Home Electrical Appliances

    Satoshi HORI  Hiromitsu SUGIMATSU  Soshi FURUKAWA  Hirokazu TAKI  

     
    PAPER-Artificial Intelligence and Cognitive Science
    Vol: E82-D No:12, Page(s): 1612-1617

    We have developed a diagnostic case-based reasoning (CBR) system, Doctor, which infers possible defects in a home electrical appliance and lists the necessary service parts. CBR is well suited to building a diagnostic system for field service because it imitates how experienced service technicians reason and is able to learn defect trends and novel repair cases from a service report database. In order to apply a CBR system to this real-world problem, our system has the following new features: (1) its CBR mechanism utilizes not only repair cases but also diagnostic rules elicited from human experts, so that accurate diagnosis can be achieved; (2) its casebase maintenance mechanism updates the casebase and adapts it to the changing real world.
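
    A minimal sketch of how a case-plus-rules diagnostic loop can look (all names, the Jaccard similarity, and the rule interface are assumptions, not the Doctor system's actual design):

      from dataclasses import dataclass, field

      @dataclass
      class RepairCase:
          symptoms: frozenset                        # e.g. {"no_power"}
          defect: str                                # diagnosed defect
          parts: list = field(default_factory=list)  # service parts used

      def similarity(query, case):
          """Jaccard overlap between observed and stored symptoms."""
          union = query | case.symptoms
          return len(query & case.symptoms) / len(union) if union else 0.0

      def diagnose(query, casebase, rules, top_k=3):
          """Rank cases by similarity; keep those the expert rules allow."""
          ranked = sorted(casebase, key=lambda c: similarity(query, c),
                          reverse=True)
          return [c for c in ranked if all(r(query, c) for r in rules)][:top_k]

      def add_case(casebase, case):
          """Casebase maintenance: learn a novel case from a report."""
          if case.symptoms not in {c.symptoms for c in casebase}:
              casebase.append(case)

      cb = [RepairCase(frozenset({"no_power", "fuse_blown"}), "shorted motor")]
      print(diagnose({"no_power"}, cb, rules=[]))    # -> the matching case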

  • An Edge-Preserving Image Coding System with Vector Quantization

    Chou-Chen WANG  Chin-Hsing CHEN  Chaur-Heh HSIEH  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E82-D No:12, Page(s): 1572-1581

    Image coding with vector quantization (VQ) suffers from several defects, including edge degradation and high encoding complexity. This paper presents an edge-preserving coding system based on VQ to overcome these defects. A signal processing unit first classifies image blocks into a low-activity or a high-activity class. A high-activity block is then decomposed into a smoothing factor, a bit-plane, and a smoother (lower-variance) block. These outputs can be encoded more efficiently by VQ with lower distortion. A set of visual patterns is used to encode the bit-planes by binary vector quantization. We also develop a modified search-order coding to further reduce the redundancy of the quantization indexes. Simulation results show that the proposed algorithm achieves much better perceptual quality with a higher compression ratio and significantly lower computational complexity than direct VQ.
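
    A minimal sketch of this kind of block decomposition (the exact definitions of the smoothing factor and the residual block are assumptions; the split is only applied to high-activity blocks):

      import numpy as np

      def decompose(block):
          """High-activity block -> (factor, bit-plane, smooth block)."""
          plane = block > block.mean()               # binary edge pattern
          d = (block[plane].mean() - block[~plane].mean()) / 2.0
          smooth = block - np.where(plane, d, -d)    # pull both sides together
          return d, plane, smooth

      def reconstruct(d, plane, smooth):
          return smooth + np.where(plane, d, -d)

      blk = np.array([[10., 10., 200.], [12., 9., 210.], [11., 205., 215.]])
      d, plane, smooth = decompose(blk)
      assert np.allclose(reconstruct(d, plane, smooth), blk)
      assert smooth.var() < blk.var()                # smoother, VQ-friendlier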

  • Simple Expression of Antenna Coupling by Using a Product of Radiation Patterns

    Hiroaki MIYASHITA  Isamu CHIBA  Shuji URASAKI  Shoichiro FUKAO  

     
    PAPER-Antennas and Propagation
    Vol: E82-B No:11, Page(s): 1867-1873

    Simple approximate formulas for the mutual impedance and admittance are obtained by using a product of the radiation patterns of the antennas. The formulas derive from a stationary expression of the reaction integral between two antennas in which far-field approximations are employed. The theory deals with antennas in free space as well as in the presence of a wedge. Two applications are given for microstrip antennas, with experimental verification.

  • A Simple Proof of a Minimum Cut Algorithm and Its Applications

    Hiroshi NAGAMOCHI  Toshimasa ISHII  Toshihide IBARAKI  

     
    PAPER-Algorithms and Data Structures
    Vol: E82-A No:10, Page(s): 2231-2236

    For the correctness of the minimum cut algorithm proposed in [H. Nagamochi and T. Ibaraki, Computing edge-connectivity of multigraphs and capacitated graphs, SIAM J. Discrete Mathematics, 5, 1992, pp. 54-66], several simple proofs have been presented so far. This paper gives yet another simple proof. As a byproduct, it provides an O(m log n) time algorithm that outputs a maximum flow between the pair of vertices s and t selected by the algorithm, where n and m are the numbers of vertices and edges, respectively. This algorithm can be used to speed up the algorithm that computes the DAG representing all minimum cuts separating vertices s and t in a graph G, and the algorithm that computes the cactus Γ(G) representing all minimum cuts in G.
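
    For orientation, a compact sketch of the maximum-adjacency-ordering minimum-cut computation underlying the cited algorithm (a simple O(n^2)-per-phase weighted variant, not the O(m log n) implementation or the maximum-flow byproduct discussed above):

      def min_cut(n, w):
          """n vertices 0..n-1; w: symmetric dict-of-dicts of edge weights."""
          adj = {u: dict(w.get(u, {})) for u in range(n)}
          active, best = list(range(n)), float("inf")
          while len(active) > 1:
              # maximum adjacency ordering: repeatedly pick the vertex
              # most strongly connected to those already ordered
              conn = {v: adj[active[0]].get(v, 0.0) for v in active}
              order, rest = [active[0]], set(active[1:])
              while rest:
                  u = max(rest, key=conn.__getitem__)
                  rest.remove(u)
                  order.append(u)
                  for v, wt in adj[u].items():
                      if v in rest:
                          conn[v] += wt
              s, t = order[-2], order[-1]
              best = min(best, conn[t])      # cut isolating t at this phase
              for v, wt in adj[t].items():   # contract t into s
                  if v != s:
                      adj[s][v] = adj[s].get(v, 0.0) + wt
                      adj[v][s] = adj[v].get(s, 0.0) + wt
                  adj[v].pop(t, None)
              del adj[t]
              active.remove(t)
          return best

      # triangle with weights 3,1,1: the lightest cut isolates vertex 2
      assert min_cut(3, {0: {1: 3.0, 2: 1.0}, 1: {0: 3.0, 2: 1.0},
                         2: {0: 1.0, 1: 1.0}}) == 2.0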

  • Vision Chip for Very Fast Detection of Motion Vectors: Design and Implementation

    Zheng LI  Kiyoharu AIZAWA  

     
    PAPER-Imaging Circuits and Algorithms
    Vol: E82-C No:9, Page(s): 1739-1748

    This paper gives a detailed presentation of a "vision chip" for very fast detection of motion vectors. The chip's design consists of a parallel pixel array and column-parallel block-matching processors. Each pixel of the pixel array contains a photo detector, an edge detector, and 4 bits of memory. In the detection of motion vectors, the gray-level image is first binarized by the edge detector, and the binary edge data is then used in the block-matching processors. The block matching takes place locally at each pixel and globally in each column. The chip can create a dense motion field, in which a vector is assigned to each pixel, by overlapping 2 × 2 target blocks. A prototype with 16 × 16 pixels and four block-matching processors has been designed and implemented. Preliminary results obtained with the prototype are shown.
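
    A software sketch of the binarize-then-match idea (the threshold, block size, and search range are illustrative assumptions, not the chip's parameters):

      import numpy as np

      def binarize_edges(img, thresh=16):
          """1-bit edge map from horizontal/vertical intensity jumps."""
          img = img.astype(int)
          gx = np.abs(np.diff(img, axis=1, prepend=img[:, :1]))
          gy = np.abs(np.diff(img, axis=0, prepend=img[:1, :]))
          return (gx + gy) > thresh

      def motion_vector(prev_bits, cur_bits, y, x, block=2, search=2):
          """Best (dy, dx) for the block at (y, x) by Hamming distance."""
          tgt = cur_bits[y:y+block, x:x+block]
          best, best_v = block * block + 1, (0, 0)
          for dy in range(-search, search + 1):
              for dx in range(-search, search + 1):
                  yy, xx = y + dy, x + dx
                  if (0 <= yy and 0 <= xx and yy + block <= prev_bits.shape[0]
                          and xx + block <= prev_bits.shape[1]):
                      cost = np.count_nonzero(
                          prev_bits[yy:yy+block, xx:xx+block] ^ tgt)
                      if cost < best:
                          best, best_v = cost, (dy, dx)
          return best_v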

  • Image Edge Sharpening with Phase Correction

    Hiroshi KONDO  Lifeng ZHANG  

     
    PAPER-Image Processing, Computer Graphics and Pattern Recognition
    Vol: E82-D No:8, Page(s): 1200-1209

    An image edge sharpening technique with phase correction for digital images is presented. In this paper, the point spread functions of a typical standard single-focal-length lens and a zoom lens are investigated with several different apertures, and from this investigation the Fourier phase figure pattern of the point spread function is identified. The technique presented here combines a traditional one (a Laplacian operator) with phase-only synthesis using the corrected Fourier phase. The Fourier phase of the original non-blurred image is estimated recursively and utilized in the phase-only synthesis, which is powerful for image edge sharpening. A human visual property is also introduced as a weight function in order to maintain the natural smoothness of the gray levels in the resulting processed image. Simulation examples show that the proposed technique is superior to the traditional one.
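
    A minimal sketch of generic phase-only synthesis, the building block named above (the paper's recursive phase estimation and visual weighting are not reproduced; alpha is an assumed blending weight):

      import numpy as np

      def phase_only(img):
          """Keep only the Fourier phase; the result emphasizes edges."""
          F = np.fft.fft2(img.astype(float))
          unit = F / (np.abs(F) + 1e-12)       # unit-magnitude spectrum
          return np.real(np.fft.ifft2(unit))

      def sharpen(img, alpha=0.5):
          """Blend a rescaled phase-only edge image into the original."""
          img = img.astype(float)
          e = phase_only(img)
          e = e / (np.abs(e).max() + 1e-12) * np.abs(img).max()
          return img + alpha * e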

  • A Query Processing Method for Amalgamated Knowledge Bases

    Lifeng HE  Yuyan CHAO  Tsuyoshi NAKAMURA  Hirohisa SEKI  Hidenori ITOH  

     
    PAPER-Databases
    Vol: E82-D No:8, Page(s): 1180-1189

    We propose a query processing method for amalgamated knowledge bases. Our method extends the magic sets technique to query processing in amalgamated knowledge bases, augmented with the capability of handling amalgamated atoms. By rewriting the rules in a given amalgamated knowledge base, our method offers the advantages associated with both top-down and bottom-up evaluation. We discuss how to handle amalgamated atoms, how to check whether an amalgamated atom is satisfiable in a fact set, and how to extend a fact set by inserting an amalgamated atom. We also give the transformation procedures for amalgamated knowledge bases and show the correctness of our method.

  • Specific Features of the QUIK Mediator System

    Bojiang LIU  Kazumasa YOKOTA  Nobutaka OGATA  

     
    PAPER-Distributed and Heterogeneous Databases
    Vol: E82-D No:1, Page(s): 180-188

    For advanced data-oriented applications in distributed environments, effective information is frequently obtained by integrating or merging various autonomous information sources. Many problems arise: how to search information sources, how to resolve their heterogeneity, how to merge or integrate target sources, how to represent information sources with a common protocol, and how to process queries. We have proposed a new language, QUIK, as an extension of a deductive object-oriented database (DOOD) language, QUIXOTE, and extend typical mediator systems. In this paper, we discuss various features of QUIK: programming capabilities that integrate an exchange model and mediator specifications, merging subsumption relations for maintaining consistency, searching for alternative information sources by hypothesis generation, and identifying objects.

  • Statistical Zero-Knowledge Protocols to Prove Modular Polynomial Relations

    Eiichiro FUJISAKI  Tatsuaki OKAMOTO  

     
    PAPER
    Vol: E82-A No:1, Page(s): 81-92

    This paper proposes a bit-commitment scheme, BC(·), and an efficient statistical zero-knowledge (in short, SZK) protocol in which, for any given multi-variable polynomial, f(X1,…,Xt), and any given modulus, n, a prover, P, gives (I1,…,It) to a verifier, V, and can convince V that P knows (x1,…,xt) which satisfies f(x1,…,xt) ≡ 0 (mod n) and Ii = BC(xi) (i = 1,…,t). The proposed protocol is O(|n|) times more efficient than the corresponding previous ones. The (knowledge) soundness of our protocol holds under a computational assumption, the intractability of a modified RSA problem (see Def. 3.2), while the (statistical) zero-knowledgeness of the protocol needs no computational assumption. The protocol can be employed to construct various practical cryptographic protocols, such as fair exchange, untraceable electronic cash and verifiable secret sharing protocols.
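
    For illustration, a commitment of the general shape used in such protocols, BC(x; r) = g^x h^r mod n (an assumption about the scheme's form; parameter generation and the SZK proof itself are omitted, and the toy numbers are insecure):

      import secrets

      def commit(x, n, g, h, r_bits=256):
          """c = g^x * h^r mod n with a random blinding exponent r."""
          r = secrets.randbits(r_bits)
          return pow(g, x, n) * pow(h, r, n) % n, r

      def verify_opening(c, x, r, n, g, h):
          return c == pow(g, x, n) * pow(h, r, n) % n

      n, g, h = 2773, 17, 391                 # toy, insecure parameters
      c, r = commit(42, n, g, h, r_bits=16)
      assert verify_opening(c, 42, r, n, g, h)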

  • Multi-Signature Schemes Secure against Active Insider Attacks

    Kazuo OHTA  Tatsuaki OKAMOTO  

     
    PAPER
    Vol: E82-A No:1, Page(s): 21-31

    This paper proposes the first provably secure multi-signature schemes under the random oracle model. The security of our schemes can be proven in the sense of concrete security in Ref. [13]. The proposed schemes are efficient if the random oracle is replaced by practical hash functions. The essential techniques in our proof of security are the optimal reduction from breaking the corresponding identification scheme to breaking signatures (ID Reduction Technique), and the hierarchical heavy row lemmas used in the concrete reduction from solving the primitive problem to breaking the identification scheme.

  • Practical and Proven Zero-Knowledge Constant Round Variants of GQ and Schnorr

    Yvo G. DESMEDT  Kaoru KUROSAWA  

     
    PAPER
    Vol: E82-A No:1, Page(s): 69-76

    In 1992, Burmester studied how to adapt the Guillou-Quisquater identification scheme into a proven zero-knowledge proof without significantly increasing the communication complexity and computational overhead, and he proposed an almost-constant-round version of Guillou-Quisquater. Di Crescenzo and Persiano presented a 4-move constant-round zero-knowledge interactive proof of membership for the corresponding language. A straightforward adaptation of the ideas of Bellare-Micali-Ostrovsky also gives a constant-round protocol. However, these protocols significantly increase the communication and computational complexity of the scheme. In this paper we present constant-round variants of the protocols of Guillou-Quisquater and Schnorr with the same (order-wise) communication and computational complexity as the original schemes. Note that in our schemes the probability that a dishonest prover will fool an honest verifier can be exponentially small, while it can only be one over a superpolynomial in Burmester's scheme. Our protocols are perfect zero-knowledge under no cryptographic assumptions.
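
    For background, a sketch of one round of the underlying Schnorr identification protocol that such variants build on (toy, insecure parameters):

      import secrets

      # q and p prime, q | p - 1, g of order q modulo p
      p, q, g = 2267, 103, 354
      x = 47                       # prover's secret
      y = pow(g, x, p)             # prover's public key

      def schnorr_round():
          k = secrets.randbelow(q)             # prover: random nonce
          a = pow(g, k, p)                     # prover -> verifier: commitment
          e = secrets.randbelow(q)             # verifier -> prover: challenge
          s = (k + e * x) % q                  # prover -> verifier: response
          return pow(g, s, p) == a * pow(y, e, p) % p   # verifier's check

      assert schnorr_round()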

  • Data Analysis by Positive Decision Trees

    Kazuhisa MAKINO  Takashi SUDA  Hirotaka ONO  Toshihide IBARAKI  

     
    PAPER-Theoretical Aspects
    Vol: E82-D No:1, Page(s): 76-88

    Decision trees are used as a convenient means to explain given positive and negative examples, a form of data mining and knowledge discovery. Standard methods such as ID3 may produce non-monotonic decision trees, in the sense that data with larger values in all attributes are sometimes classified into a class with a smaller output value. (In the case of binary data, this is equivalent to saying that the discriminant Boolean function represented by the decision tree is not positive.) A motivation of this study comes from the observation that real-world data are often positive, and in such cases it is natural to build decision trees that represent positive (i.e., monotone) discriminant functions. For this, we propose how to modify existing procedures such as ID3 so that the resulting decision tree represents a positive discriminant function. In this procedure, we add some new data to recover the positivity of the data, which the original data had but which was lost in the process of decomposing data sets by methods such as ID3. To compare the performance of our method with existing methods, we test (1) positive data randomly generated from a hidden positive Boolean function after adding dummy attributes, and (2) breast cancer data as an example of real-world data. The experimental results on (1) show that, although positive decision trees are relatively larger than those built without the positivity assumption, they exhibit higher accuracy and tend to choose the correct attributes, on which the hidden positive Boolean function is defined. For the breast cancer data set we observe a similar tendency; i.e., positive decision trees are larger but give higher accuracy.
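
    A minimal sketch of what recovering positivity can look like on binary data (promoting violating negatives is a simple stand-in, an assumption rather than the paper's exact data-addition procedure):

      def dominates(a, b):
          """Componentwise a >= b for binary attribute vectors."""
          return all(ai >= bi for ai, bi in zip(a, b))

      def violations(data):
          """Monotonicity violations: a negative dominating a positive."""
          return [(a, b) for a, ya in data for b, yb in data
                  if ya == 0 and yb == 1 and dominates(a, b)]

      def repair(data):
          """Restore positivity by promoting dominating negatives."""
          return [(a, 1) if ya == 0 and any(yb == 1 and dominates(a, b)
                                            for b, yb in data)
                  else (a, ya)
                  for a, ya in data]

      data = [((1, 1), 0), ((0, 1), 1), ((0, 0), 0)]
      assert violations(data)          # (1,1)->0 dominates (0,1)->1
      assert not violations(repair(data))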

  • A Program Normalization to Improve Flexibility of Knowledge-Based Program Understander

    Haruki UENO  

     
    PAPER-Theory and Methodology
    Vol: E81-D No:12, Page(s): 1323-1329

    This paper discusses the experimental evaluation of the knowledge-based program understander ALPUS and, based on that evaluation, methods of program normalization to improve the flexibility of the system's performance. ALPUS comprehends students' buggy Pascal programs using four kinds of programming knowledge, detects logical bugs, infers users' intentions, and gives advice for fixing bugs. By combining the pattern-matching technique with the HPG-based formalism of programming knowledge, in addition to program normalization, high comprehension performance has been achieved for relatively complex programs such as Quicksort programs. The experimental evaluation showed that program normalization would resolve some 55% of the student programs the system failed to comprehend. Program normalization has contributed both to decreasing the number of knowledge patterns and to increasing flexibility. This paper proposes a five-step normalization procedure that works well in an experimental setting.

  • Radar Cross Section of an Open-Ended Rectangular Cylinder with an Iris Inside the Cavity

    Shinichiro OHNUKI  Takashi HINATA  

     
    PAPER
    Vol: E81-C No:12, Page(s): 1875-1880

    This paper presents an analysis of electromagnetic scattering from an open-ended rectangular cylinder for plane wave incidence. The internal region is separated into two areas by additional plates to investigate the cavity resonance in detail. The numerical technique applied is the point matching method, taking account of the edge condition. As numerical examples, the radar cross section is presented for the E-polarized and H-polarized cases. The physical meaning of the computational results is discussed with a view to the contribution of the iris.

  • Association Rule Filter for Data Mining in Call Tracking Data

    Kazunori MATSUMOTO  Kazuo HASHIMOTO  

     
    PAPER-Network Design, Operation, and Management
    Vol: E81-B No:12, Page(s): 2481-2486

    Call tracking data contains a calling address, called address, service type, and other attributes useful for predicting a customer's calling activity, and it is becoming a target of data mining for telecommunication carriers. Conventional data-mining programs control the number of association rules found with two types of thresholds (minimum confidence and minimum support); however, they often generate too many association rules because of the wide variety of patterns found in call tracking data. This paper proposes a new method to reduce the number of generated rules. The proposed method tests each generated rule based on the Akaike Information Criterion (AIC) without using the conventional thresholds. Experiments with artificial call tracking data show the high performance of the proposed method.
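
    One plausible reading of an AIC-based rule test (an assumption, not necessarily the paper's exact statistic): keep a rule A -> B only if modeling P(B|A) and P(B|not A) separately has lower AIC than a single shared P(B).

      import math

      def _ll(k, n):
          """Bernoulli log-likelihood of k successes in n trials at the MLE."""
          if n == 0 or k == 0 or k == n:
              return 0.0
          p = k / n
          return k * math.log(p) + (n - k) * math.log(1 - p)

      def aic_keeps_rule(n11, n10, n01, n00):
          """Counts: n11 = |A and B|, n10 = |A and not B|, etc."""
          n = n11 + n10 + n01 + n00
          aic_dep = -2 * (_ll(n11, n11 + n10) + _ll(n01, n01 + n00)) + 2 * 2
          aic_ind = -2 * _ll(n11 + n01, n) + 2 * 1
          return aic_dep < aic_ind

      assert aic_keeps_rule(80, 20, 20, 80)        # strong association
      assert not aic_keeps_rule(25, 25, 25, 25)    # independent attributes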
