Keyword Search Result

[Keyword] Al (20498 hits)

18601-18620 hits (20498 hits)

  • A Single-Layer Power Divider for a Slotted Waveguide Array Using π-Junctions with an Inductive Wall

    Tsukasa TAKAHASHI  Jiro HIROKAWA  Makoto ANDO  Naohisa GOTO  

     
    PAPER-Antennas and Propagation

      Vol:
    E79-B No:1
      Page(s):
    57-62

    The authors propose a waveguide π-junction with an inductive wall. Galerkin's method of moments is applied to analyze it, and small reflection and the desired power division ratio are realized. Good agreement between the calculated and measured results verifies the design of a unit π-junction. The characteristics of a π-junction with a wall are almost the same as those of a conventional π-junction with a post. An important advantage of the new π-junction with a wall is that it can be manufactured in the die-casting process of the waveguide, whereas the post of the conventional junction must be attached in an additional process. A 16-way power divider consisting of 8 π-junctions is designed at 11.85 GHz and its characteristics are predicted.

  • New ElGamal Type Threshold Digital Signature Scheme

    Choonsik PARK  Kaoru KUROSAWA  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    86-93

    In a (k,n) threshold digital signature scheme, k out of n signers must cooperate to issue a signature. In this paper, we show an efficient (k,n) threshold ElGamal type digital signature scheme with no trusted center. We first present a variant of the ElGamal type digital signature scheme which requires only a linear combination of two shared secrets when applied to the (k,n)-threshold scenario. More precisely, it is a variant of the Digital Signature Standard (DSS), which was recommended by the U.S. National Institute of Standards and Technology (NIST). We consider it meaningful to develop an efficient (k,n)-threshold digital signature scheme for DSS. The proposed (k,n)-threshold digital signature scheme is proved to be as secure as the proposed variant of DSS against chosen-message attack.
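    For background on the (k,n)-threshold setting, the sketch below is a minimal Python illustration of Shamir secret sharing, where reconstruction is exactly a linear combination of the shares with publicly computable coefficients. It is generic textbook material, not the paper's threshold DSS protocol; the prime modulus and example parameters are arbitrary choices.

```python
# Minimal Shamir (k, n) secret sharing over a prime field.
# Illustrative background only -- not the threshold DSS variant of the paper.
import random

P = 2**127 - 1  # an arbitrary large prime for the example

def share(secret, k, n):
    """Split `secret` into n shares; any k of them reconstruct it."""
    coeffs = [secret] + [random.randrange(P) for _ in range(k - 1)]
    def f(x):
        return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
    return [(x, f(x)) for x in range(1, n + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0: the secret is a linear combination
    of the shares, with publicly computable coefficients."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % P
                den = den * (xi - xj) % P
        secret = (secret + yi * num * pow(den, -1, P)) % P
    return secret

if __name__ == "__main__":
    shares = share(123456789, k=3, n=5)
    assert reconstruct(shares[:3]) == 123456789
```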

  • Proposal of an Automatic Signature Scheme Using a Compiler

    Keisuke USUDA  Masahiro MAMBO  Tomohiko UYEMATSU  Eiji OKAMOTO  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    94-101

    Computer viruses, hackers, intrusions, and other computer crimes have recently become a serious security problem in information systems. Digital signatures are useful to defend against these threats, especially against computer viruses. This is because a modification of a file can be detected by checking the consistency of the original file with its accompanying digital signature. But an executable program might have been infected with a virus before the signature was created. In this case, the infection cannot be detected by signature verification and the origin of the infection cannot be specified either. In this paper, we propose a signature scheme in which one can sign right after the creation of an executable program. That is, when a user compiles a source program, the compiler automatically creates both the executable program and its signature. Thus viruses cannot infect executable programs without detection. Moreover, we can specify the creator of contaminated executable programs. In our signature scheme, a signature is created from a set of secret integers stored in a compiler, which is calculated from a compiler-maker's secret key. Each compiler is possessed by only one user and is used only when a secret value is fed into it. In this way a signature of an executable program and the compiler owner are linked to each other. Despite these measures, an executable program could run abnormally because of an infection in the preprocessing step, e.g., an infection of library files or included files. An infection of these files is detected by ordinary digital signatures. The proposed signature scheme, together with ordinary digital signatures against infection in the preprocessing step, enables us to specify the origin of the infection. The name of the signature creator is not necessary for detecting an infection, so an owner's public value does not have to be looked up in our scheme; only a public value of the compiler-maker is required for signature verification. Furthermore, no one can use a compiler owned by another user to create a proper signature.
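    As a rough, hedged sketch of the sign-at-compile-time idea only (it is not the paper's construction, which derives the signing values from a compiler-maker's secret key), the Python fragment below invokes a compiler and immediately authenticates the produced binary; HMAC stands in for the signature primitive, and the compiler command, file names, and key are placeholders.

```python
# Hedged sketch: compile a source file and immediately authenticate the binary.
# HMAC is used here only as a stand-in for the paper's signature scheme.
import hashlib, hmac, subprocess

COMPILER_KEY = b"placeholder-secret"   # hypothetical per-compiler secret

def compile_and_sign(source, output):
    # Invoke the compiler; "cc" and the flags are illustrative.
    subprocess.run(["cc", source, "-o", output], check=True)
    digest = hashlib.sha256(open(output, "rb").read()).digest()
    tag = hmac.new(COMPILER_KEY, digest, hashlib.sha256).hexdigest()
    with open(output + ".sig", "w") as f:
        f.write(tag)
    return tag

def verify(output):
    digest = hashlib.sha256(open(output, "rb").read()).digest()
    expected = hmac.new(COMPILER_KEY, digest, hashlib.sha256).hexdigest()
    return expected == open(output + ".sig").read()
```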

  • The Security of an RDES Cryptosystem against Linear Cryptanalysis

    Yasushi NAKAO  Toshinobu KANEKO  Kenji KOYAMA  Routo TERADA  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    12-19

    The RDES cryptosystem is an n-round DES in which a probabilistic swapping is added onto the right half of the input in each round. It is more effective than a simple increase of DES rounds as a countermeasure against differential attack. In this paper, we show that RDES is also effective against linear cryptanalysis. We applied Matsui's search algorithm to find the best linear expressions for RDES-1 and RDES-2. The results are as follows: (a) The 16-round RDES-1 is approximately as strong as a 22-round DES, and the 16-round RDES-2 is approximately as strong as a 29-round DES. (b) Linear cryptanalysis of a 16-round RDES-1 and a 16-round RDES-2 requires more than 2^64 known plaintexts.

  • A New Version of FEAL, Stronger against Differential Cryptanalysis*

    Routo TERADA  Paulo G. PINHEIRO  Kenji KOYAMA  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    28-34

    We create a new version of the FEAL-N(X) cryptographic function, called FEAL-N(X)S, by introducing a dynamic swapping function. FEAL-N(X)S is stronger against Differential Cryptanalysis in the sense that any characteristic for FEAL-N(X) is less effective when applied to FEAL-N(X)S. Furthermore, the only iterative characteristics that may attack the same number of rounds for the two versions are the symmetric ones, which have an average probability bounded above by 2^-4 per round, i.e., FEAL-N(X)S is at least as strong as DES with respect to this type of characteristic. We also show that, in general, the probability of an iterative characteristic for FEAL-N(X) that is still valid for FEAL-N(X)S is decreased by 1/2 per round. Some of the best characteristics are shown. Experimental results show that the running time required by FEAL-N(X)S is around 10% greater than that of FEAL-N(X) in software, but this price is small compared to the strength gained against Differential Cryptanalysis.

  • The Best Linear Expression Search of FEAL

    Shiho MORIAI  Kazumaro AOKI  Kazuo OHTA  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    2-11

    It is important to find the best linear expression to estimate the vulnerability of cryptosystems to Linear Cryptanalysis. This paper shows the results of the search for the best linear expressions of FEAL-N (N ≤ 32) and discusses the security of FEAL against Linear Cryptanalysis. We improve Matsui's search algorithm, which determines the best linear expressions, and apply it to FEAL. The improved search algorithm finds all the best linear expressions of FEAL-N (N ≤ 32) much faster than the original; the required time is decreased from over three months to about two and a half days. We find the best linear expressions of FEAL-7, FEAL-15, and FEAL-31 with deviations of 1.15×2^-8, 1.48×2^-20, and 1.99×2^-41, respectively. These linear expressions have higher deviations than those derived from Biham's 4-round iterative linear approximations. Using these data we calculated the number of known plaintexts required to attack FEAL-8, FEAL-16, and FEAL-32. It is proved that FEAL-32 is secure against Linear Cryptanalysis.
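    For orientation only, Matsui's rule of thumb puts the data complexity of linear cryptanalysis at roughly |deviation|^-2 known plaintexts, up to a small constant that depends on the desired success rate. The hedged Python check below applies that rule to the deviations quoted above; it is not the paper's exact calculation.

```python
# Rough data-complexity estimate for linear cryptanalysis:
# about |deviation|**-2 known plaintexts (constant factor omitted).
import math

deviations = {
    "FEAL-7":  1.15 * 2**-8,
    "FEAL-15": 1.48 * 2**-20,
    "FEAL-31": 1.99 * 2**-41,
}

for cipher, dev in deviations.items():
    n = dev**-2
    print(f"{cipher}: ~2^{math.log2(n):.1f} known plaintexts")
```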

  • On the Complexity of the Discrete Logarithm for a General Finite Group

    Tatsuaki OKAMOTO  Kouichi SAKURAI  Hiroki SHIZUYA  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    61-65

    GDL is the language whose membership problem is polynomial-time Turing equivalent to the discrete logarithm problem for a general finite group G. This paper gives a characterization of GDL from the viewpoint of computational complexity theory. It is shown that GDL ∈ NP ∩ co-AM, assuming that G is in NP ∩ co-NP and that the group law operation of G can be executed in polynomial time in the element size. Furthermore, as a natural probabilistic extension, the complexity of GDL is investigated under the assumption that the group law operation is executed in expected polynomial time in the element size. In this case, it is shown that GDL ∈ MA ∩ co-AM if G ∈ MA ∩ co-MA. As a consequence, we show that GDL is not NP-complete unless the polynomial-time hierarchy collapses to the second level.

  • Optimization of Time-Memory Trade-Off Cryptanalysis and Its Application to DES, FEAL-32, and Skipjack

    Koji KUSUDA  Tsutomu MATSUMOTO  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    35-48

    In 1980, Hellman presented "time-memory trade-off cryptanalysis" for block ciphers, which requires precomputation equivalent to the time complexity of exhaustive search, but can drastically reduce both the time complexity of exhaustive search on intercepted ciphertexts and the space complexity of table lookup. This paper extends his cryptanalysis and optimizes the relation among the breaking cost, time, and success probability. The power of the optimized cryptanalytic method is demonstrated by the following estimates as of January 1995. For breaking DES in one hour with a success probability of 50% or more, the estimated cost of a simple machine and a highly parallel machine is about 0.26 million dollars and 0.06 million dollars, respectively. It will take about six and two years, respectively, until the cost of each machine for breaking FEAL-32 under the same conditions decreases to 1 million dollars. Moreover, it will take about 22.5 and 19 years, respectively, until the corresponding costs for breaking Skipjack similarly decrease to 1 million dollars, although the time complexity of the precomputation is huge in the case of the former. The cost-time product for this precomputation will decrease to 20 million dollar-years in about 30 years.
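    As a hedged reminder of the underlying textbook relations (Hellman, 1980): with a key space of size N, chains of length t, and m start points per table, roughly t tables give memory M ≈ m·t and online time T ≈ t², with coverage m·t² ≈ N and hence T·M² ≈ N². The Python snippet below only evaluates these relations; it does not reproduce the paper's cost optimization, and the parameter choices are arbitrary examples.

```python
# Textbook Hellman time-memory trade-off relations (no cost model).
def hellman_tradeoff(N, t):
    """Given key-space size N and chain length t (with m * t**2 ~= N)."""
    m = N // (t * t)          # start points per table
    tables = t                # roughly t tables for good coverage
    memory = m * tables       # stored (start, end) pairs ~ m*t
    time = t * tables         # online work ~ t**2 chain steps
    return memory, time

if __name__ == "__main__":
    N = 2**56                 # e.g. the DES key space
    for t in (2**16, 2**19, 2**22):
        M, T = hellman_tradeoff(N, t)
        print(f"t=2^{t.bit_length()-1}: memory~2^{M.bit_length()-1}, time~2^{T.bit_length()-1}")
```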

  • Fuzzy Clustering Networks: Design Criteria for Approximation and Prediction

    John MITCHELL  Shigeo ABE  

     
    PAPER-Artificial Intelligence and Cognitive Science

      Vol:
    E79-D No:1
      Page(s):
    63-71

    Previous papers presented the building of hierarchical networks made up of components using fuzzy rules. It was demonstrated that this approach could be used to construct networks to solve classification problems, and that in many cases these networks were computationally less expensive and performed at least as well as existing approaches based on feedforward neural networks. It has also been demonstrated how this approach can be extended to real-valued problems, such as function approximation and time series prediction. This paper investigates the problem of choosing the best network for real-valued approximation problems. First, the nature of the network parameters, how they are interrelated, and how they affect the performance of the system are clarified. Then we address the problem of choosing the best values of these parameters. We present two model selection tools in this regard, the first using a simple statistical model of the network, and the second using structural information about the network components. The resulting network selection methods are demonstrated and their performance is tested on several benchmark and applied problems. The conclusions look at future research issues for further improving the performance of the clustering network.

  • Differential-Linear Cryptanalysis of FEAL-8

    Kazumaro AOKI  Kazuo OHTA  

     
    PAPER

      Vol:
    E79-A No:1
      Page(s):
    20-27

    In CRYPTO '94, Langford and Hellman attacked DES reduced to 8 rounds in the chosen-plaintext scenario by their "differential-linear cryptanalysis," which is a combination of differential cryptanalysis and linear cryptanalysis. In this paper, a historical review of differential-linear cryptanalysis, our formalization of differential-linear cryptanalysis, and the application of differential-linear cryptanalysis to FEAL-8 are presented. As a result, although the previous best method (differential cryptanalysis) required 128 chosen plaintexts, only 12 chosen plaintexts are sufficient, in computer experiments, to attack FEAL-8.

  • Optimal Structure-from-Motion Algorithm for Optical Flow

    Naoya OHTA  Kenichi KANATANI  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1559-1566

    This paper presents a new method for solving the structure-from-motion problem for optical flow. The fact that the structure-from-motion problem can be simplified by using the linearization technique is well known. However, it has been pointed out that the linearization technique reduces the accuracy of the computation. In this paper, we overcome this disadvantage by correcting the linearized solution in a statistically optimal way. Computer simulation experiments show that our method yields an unbiased estimator of the motion parameters which almost attains the theoretical bound on accuracy. Our method also enables us to evaluate the reliability of the reconstructed structure in the form of the covariance matrix. Real-image experiments are conducted to demonstrate the effectiveness of our method.

  • Throughput Analysis of Spread-Slotted ALOHA in LEO Satellite Communication Systems with Nonuniform Traffic Distribution

    Abbas JAMALIPOUR  Masaaki KATAYAMA  Takaya YAMAZATO  Akira OGAWA  

     
    PAPER-Satellite Communication

      Vol:
    E78-B No:12
      Page(s):
    1657-1665

    An analytical framework to study the nonuniformity in the geographical distribution of the traffic load in low earth orbit satellite communication systems is presented. The model is then used to evaluate the throughput performance of the system with a direct-sequence spread-slotted ALOHA packet multiple-access technique. As a result, it is shown that nonuniformity in the traffic makes the characteristics of the system significantly different from the results for the uniform-traffic case, and that the performance experienced by each user varies according to its location. Moreover, the interference arriving from users of adjacent satellites is shown to be one of the main factors that limit the performance of the system.
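    As a much-simplified, hedged illustration of how spreading changes the slotted-ALOHA picture, the Python sketch below assumes Poisson packet arrivals per slot and a hard capture model in which all packets in a slot succeed whenever at most K of them overlap, with K standing in for the multiple-access-interference limit of the spreading code. The paper's analysis of nonuniform LEO traffic is considerably more detailed; this is only a toy baseline.

```python
# Simplified throughput of spread slotted ALOHA under a hard capture model:
# all packets in a slot succeed if at most K overlap (K is an assumed MAI limit).
import math

def throughput(G, K):
    """Expected successful packets per slot for Poisson offered load G."""
    return sum(n * math.exp(-G) * G**n / math.factorial(n) for n in range(1, K + 1))

if __name__ == "__main__":
    for K in (1, 5, 10):
        peak = max(throughput(G / 10, K) for G in range(1, 300))
        print(f"K={K}: peak throughput ~ {peak:.2f} packets/slot")
```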

  • Principal Component Analysis for Remotely Sensed Data Classified by Kohonen's Feature Mapping Preprocessor and Multi-Layered Neural Network Classifier

    Hiroshi MURAI  Sigeru OMATU  Shunichiro OE  

     
    PAPER

      Vol:
    E78-B No:12
      Page(s):
    1604-1610

    There have been many developments in neural network research, and the ability of a multi-layered network to classify multi-spectral image data has been studied. We can classify non-Gaussian distributed data using a neural network trained by the back-propagation method (BPM) because it is independent of noise conditions. The BPM is a supervised classifier, so we can obtain high classification accuracy with the method as long as we can choose a good training data set. However, multi-spectral data contain many kinds of category information in a pixel because of the pixel resolution of the sensor. The data may be separated into many clusters even if they belong to the same class. Therefore, it is difficult to choose a good training data set which captures the characteristics of the class. Up to now, researchers have chosen the training data set by random sampling from the input data. To overcome this problem, a hybrid pattern classification system using the BPM and Kohonen's feature mapping (KFM) has been proposed recently. The system chooses the training data set from the result of a rough classification using the KFM. However, how the remotely sensed data are influenced by the KFM has not been demonstrated quantitatively. In this paper, we propose a new approach that uses the competitive weight vectors as the training data set, because we consider that a competitive unit represents a small cluster of the input patterns. The approach makes choosing the training data set easier than the usual one, because the KFM can automatically self-organize a topological relation among the target image patterns on a competitive plane. We demonstrate the representativeness of the competitive units by principal component analysis (PCA). We also show that the approach improves the classification accuracy by applying it to the classification of real remotely sensed data.
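    A minimal, hedged sketch of the core idea of using competitive (Kohonen) weight vectors as a compact training set: a tiny self-organizing map is trained on pixel feature vectors, and its learned weight vectors can then be handed to any supervised classifier in place of randomly sampled pixels. The map size, learning rate, neighborhood width, and synthetic data are arbitrary example choices, and this is not the authors' full system.

```python
# Minimal Kohonen feature map; its weight vectors can serve as a compact,
# self-organized training set for a supervised classifier.
import numpy as np

def train_som(X, grid=(5, 5), iters=2000, lr0=0.5, sigma0=2.0, seed=0):
    rng = np.random.default_rng(seed)
    h, w = grid
    units = np.array([(i, j) for i in range(h) for j in range(w)], float)
    W = rng.uniform(X.min(), X.max(), size=(h * w, X.shape[1]))
    for t in range(iters):
        x = X[rng.integers(len(X))]
        bmu = np.argmin(((W - x) ** 2).sum(axis=1))        # best matching unit
        d2 = ((units - units[bmu]) ** 2).sum(axis=1)       # grid distance^2
        lr = lr0 * (1 - t / iters)
        sigma = sigma0 * (1 - t / iters) + 0.5
        h_neigh = np.exp(-d2 / (2 * sigma ** 2))
        W += lr * h_neigh[:, None] * (x - W)               # pull neighbours toward x
    return W  # each row is a prototype ~ a small cluster of input pixels

if __name__ == "__main__":
    X = np.random.default_rng(1).normal(size=(1000, 4))    # stand-in for band values
    prototypes = train_som(X)
    print(prototypes.shape)   # (25, 4) prototype "training pixels"
```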

  • High-Resolution Determination of Transit Time of Ultrasound in a Thin Layer in Pulse-Echo Method

    Tomohisa KIMURA  Hiroshi KANAI  Noriyoshi CHUBACHI  

     
    PAPER

      Vol:
    E78-A No:12
      Page(s):
    1677-1682

    In this paper we propose a new method for removing the characteristic of the piezoelectric transducer from the received signal in the pulse-echo method, so that the time resolution in the determination of the transit time of ultrasound in a thin layer is increased. The total characteristic of the pulse-echo system is described by a cascade of distributed-constant systems for the ultrasonic transducer, the matching layer, and the acoustic medium. The input impedance is estimated from the inverse matrix of the cascade system and the voltage signal at the electrical port. From the inverse Fourier transform of the input impedance, the transit time in a thin-layer object is accurately determined with high time resolution. The principle of the method is confirmed by simulation experiments.
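    The Python sketch below is only a simplified stand-in for the impedance-based approach above: it estimates the interval between two echoes by cross-correlating the received signal with a reference pulse, which is the kind of conventional, lower-resolution baseline the paper improves upon. The sampling rate, pulse shape, and echo spacing are arbitrary example values.

```python
# Conventional cross-correlation estimate of echo transit time (a simplified
# baseline; the paper removes the transducer response for higher resolution).
import numpy as np

fs = 100e6                         # assumed sampling rate [Hz]
t = np.arange(0, 4e-6, 1 / fs)

def pulse(t0):
    """Gaussian-windowed 5 MHz tone burst centred at t0 (example shape)."""
    return np.exp(-((t - t0) / 0.1e-6) ** 2) * np.sin(2 * np.pi * 5e6 * (t - t0))

rx = pulse(1.0e-6) + 0.6 * pulse(1.4e-6)    # front- and back-surface echoes
ref = pulse(0.5e-6)                         # reference echo shape

xc = np.abs(np.correlate(rx, ref, mode="full"))
lags = (np.arange(len(xc)) - (len(ref) - 1)) / fs

i1 = int(np.argmax(xc))                     # strongest echo
mask = np.abs(lags - lags[i1]) > 0.2e-6     # suppress its neighbourhood
i2 = int(np.argmax(np.where(mask, xc, 0)))  # second echo
transit = abs(lags[i2] - lags[i1])
print(f"estimated transit time: {transit * 1e6:.2f} us")   # ~0.40 us expected
```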

  • Quantitative Evaluation of TMJ Sound by Frequency Analysis

    Hiroshi SHIGA  Yoshinori KOBAYASHI  

     
    LETTER

      Vol:
    E78-A No:12
      Page(s):
    1683-1688

    In order to evaluate TMJ sounds quantitatively, TMJ sounds in a normal subject group, CMD patient group A with palpable sounds unknown to the patients, CMD patient group B with palpable sounds known to the patients, and CMD patient group C with audible sounds were detected by a contact microphone, and frequency analysis of the power spectra was performed. The power spectra of the TMJ sounds of the normal subject group and patient group A showed patterns with frequency values below 100 Hz, whereas the power spectra of patient groups B and C showed distinctively different patterns with peaks of frequency components exceeding 100 Hz. As regards the cumulative frequency value, the patterns for each group clearly differed from those of the other groups; in particular, the 80% cumulative frequency value showed the greatest difference. From these results, it is suggested that the 80% cumulative frequency value can be used as an effective indicator for the quantitative evaluation of TMJ sound.
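    As a hedged illustration of the indicator used above, the Python snippet below computes an "80% cumulative frequency": the frequency below which 80% of the spectral power of a recorded sound lies. The signal and sampling rate are synthetic example values, not clinical data.

```python
# Frequency below which a given fraction of spectral power lies
# (the "80% cumulative frequency value" used as an indicator above).
import numpy as np

def cumulative_frequency(signal, fs, fraction=0.8):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2          # power spectrum
    freqs = np.fft.rfftfreq(len(signal), d=1 / fs)
    cum = np.cumsum(spectrum) / spectrum.sum()
    return freqs[np.searchsorted(cum, fraction)]

if __name__ == "__main__":
    fs = 2000.0                                          # example sampling rate [Hz]
    t = np.arange(0, 1.0, 1 / fs)
    sound = np.sin(2 * np.pi * 60 * t) + 0.3 * np.sin(2 * np.pi * 250 * t)
    print(f"80% cumulative frequency: {cumulative_frequency(sound, fs):.0f} Hz")
```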

  • Edge Extraction Method Based on Separability of Image Features

    Kazuhiro FUKUI  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1533-1538

    This paper proposes a robust method for detecting step and ramp edges. In this method, an edge is defined not as a point where there is a large change in intensity, but as a region boundary based on the separability of image features, which can be calculated by linear discriminant analysis. Based on this definition of an edge, its intensity can be obtained from the separability, which depends only on the shape of the edge. This characteristic enables easy selection of the optimum threshold value for the extraction of an edge, and the method can be applied to color and texture edge extraction. Experimental results demonstrate that the proposed method is robust to noise and dulled edges and, in addition, allows easy selection of the optimum threshold value.
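    To make the separability criterion concrete, here is a hedged two-class sketch in the spirit of linear discriminant analysis: for the pixel intensities of the two regions on either side of a candidate boundary, the between-class variance divided by the total variance gives a value in [0, 1] that peaks when the boundary coincides with an edge. The window contents below are example values, not the paper's exact formulation.

```python
# Two-region separability: between-class variance / total variance (0..1).
import numpy as np

def separability(region1, region2):
    x = np.concatenate([region1, region2])
    n1, n2 = len(region1), len(region2)
    m, m1, m2 = x.mean(), region1.mean(), region2.mean()
    between = (n1 * (m1 - m) ** 2 + n2 * (m2 - m) ** 2) / len(x)
    total = x.var()
    return between / total if total > 0 else 0.0

if __name__ == "__main__":
    left = np.array([10., 12, 11, 13, 12])       # darker side of a candidate edge
    right = np.array([50., 49, 52, 51, 50])      # brighter side
    print(f"separability across the edge: {separability(left, right):.3f}")
```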

  • An Efficient Clustering Algorithm for Region Merging

    Takio KURITA  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1546-1551

    This paper proposes an efficient clustering algorithm for region merging. To speed up the search for the best pair of regions to be merged into one region, the dissimilarity values of all possible pairs of regions are stored in a heap. The best pair can then be found as the element at the root node of the binary tree corresponding to the heap. Since only adjacent pairs of regions can be merged in image segmentation, these constraints on the neighboring relations are represented by sorted linked lists. We can then reduce the computation for updating the dissimilarity values and neighboring relations that are affected by merging the best pair. The proposed algorithm is applied to the segmentation of a monochrome image and of range images.
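    A hedged Python sketch of the bookkeeping described above: candidate pairs of adjacent regions are kept in a heap keyed by dissimilarity, stale entries are discarded lazily after merges, and an adjacency structure is updated as regions are absorbed. The dissimilarity measure (absolute difference of region means) and the lazy-deletion details are simplifying choices for the example, not necessarily the paper's.

```python
# Heap-driven merging of the most similar pair of *adjacent* regions.
import heapq

def merge_regions(means, adjacency, n_final):
    """Greedily merge the most similar adjacent regions until n_final remain.
    means: {region_id: mean value}; adjacency: {region_id: set of neighbour ids}."""
    alive = set(means)
    heap = [(abs(means[a] - means[b]), a, b)
            for a in means for b in adjacency[a] if a < b]
    heapq.heapify(heap)
    while len(alive) > n_final and heap:
        d, a, b = heapq.heappop(heap)
        if (a not in alive or b not in alive or b not in adjacency[a]
                or d != abs(means[a] - means[b])):
            continue                                  # stale entry: lazy deletion
        means[a] = (means[a] + means[b]) / 2          # simplistic merged statistic
        alive.discard(b)
        adjacency[a].discard(b)
        for c in adjacency.pop(b):                    # b's neighbours now touch a
            adjacency[c].discard(b)
            if c != a:
                adjacency[a].add(c)
                adjacency[c].add(a)
        for c in adjacency[a]:                        # refresh a's pair entries
            heapq.heappush(heap, (abs(means[a] - means[c]), min(a, c), max(a, c)))
    return {r: means[r] for r in alive}

if __name__ == "__main__":
    means = {0: 10.0, 1: 11.0, 2: 50.0, 3: 52.0}
    adjacency = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
    print(merge_regions(means, adjacency, n_final=2))  # two regions remain
```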

  • Extraction of Three-Dimensional Multiple Skeletons and Digital Medial Skeleton

    Masato MASUYA  Junta DOI  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1567-1572

    We consider that multiple skeletons are inherent in an ordinary three-dimensional object. A thinning method is developed to extract multiple skeletons using 3×3×3 templates for boundary deletion based on the hit-or-miss transformation and 2×2×2 templates for checking one-voxel thickness. We prepared twelve sets of deleting templates consisting of 194 templates in total and 72 one-voxel-thickness checking templates. One repetitive iteration using one sequential application of the template sets extracts one skeleton. Some of the skeletons thus obtained are identical; however, multiple independent skeletons are extracted by this method. These skeletons fulfill the three well-recognized conditions for a skeleton. We extracted three skeletons from a cube, two from a space shuttle model, and four from the L-shaped figure of Tsao and Fu. The digital medial skeleton, which is not otherwise extracted, is obtained by comparing the multiple skeletons with the digital medial-axis-like figure. One of our skeletons for the cube agreed with the ideal medial axis. The locations of the center of gravity of the multiple skeletons are compared with that of the original shape to evaluate how uniform or unbiased the extracted skeletons are. For the L-shaped figure, one of our skeletons is found to be the most desirable from the medial and uniformity points of view.

  • Efficient Algorithms for Real-Time Octree Motion

    Yoshifumi KITAMURA  Andrew SMITH  Fumio KISHINO  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1573-1580

    This paper presents efficient algorithms for updating moving octrees with real-time performance. The first algorithm works for octrees undergoing both translation and rotation; it works efficiently by compacting source octrees into a smaller set of cubes (not necessarily standard octree cubes) as a precomputation step, and by using a fast, exact cube/cube intersection test between source octree cubes and target octree cubes. A parallel version of the algorithm is also described. Finally, the paper presents an efficient algorithm for the more limited case of octree translation only. Experimental results are given to show the efficiency of the algorithms in comparison to competing algorithms. In addition to being fast, the presented algorithms are also space efficient in that they can produce target octrees in the linear octree representation.
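    For the translation-only case, the cube/cube intersection test mentioned above reduces to an axis-aligned interval-overlap check, sketched below in Python as a hedged illustration; rotated source cubes would require a more general test (e.g., separating axes), and nothing here reflects the paper's compaction or parallelization steps.

```python
# Axis-aligned cube overlap test -- sufficient for the translation-only case;
# rotated source cubes would need a separating-axis test instead.
from dataclasses import dataclass

@dataclass
class Cube:
    x: float; y: float; z: float   # minimum corner
    size: float

def cubes_intersect(a: Cube, b: Cube) -> bool:
    """True if the two axis-aligned cubes overlap (touching counts as overlap)."""
    return (a.x <= b.x + b.size and b.x <= a.x + a.size and
            a.y <= b.y + b.size and b.y <= a.y + a.size and
            a.z <= b.z + b.size and b.z <= a.z + a.size)

if __name__ == "__main__":
    print(cubes_intersect(Cube(0, 0, 0, 4), Cube(3, 3, 3, 4)))   # True
    print(cubes_intersect(Cube(0, 0, 0, 4), Cube(5, 0, 0, 4)))   # False
```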

  • An Integration Algorithm for Stereo, Motion and Color in Real-Time Applications

    Hiroshi ARAKAWA  Minoru ETOH  

     
    PAPER

      Vol:
    E78-D No:12
      Page(s):
    1615-1620

    This paper describes a statistical integration algorithm for color, motion, and stereo disparity, and introduces a real-time stereo system that can tell us where and what objects are moving. In the integration algorithm, motion estimation and depth estimation are performed simultaneously by a clustering process based on motion, stereo disparity, color, and pixel position. As a result of the clustering, an image is decomposed into region fragments. Each fragment is characterized by distribution parameters of spatiotemporal intensity gradients, stereo disparity, color, and pixel positions. Motion vectors and stereo disparities for each fragment are obtained from those distribution parameters. The real-time stereo system tracks the objects through these distribution parameters over frames. The implementation and experiments show that the proposed algorithm can be utilized in real-time applications such as surveillance and human-computer interaction.
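    As a hedged, simplified stand-in for the statistical clustering described above, the Python sketch below groups pixels by a plain k-means over concatenated features (position, color, disparity, motion); the paper's method additionally models intensity-gradient distributions and derives per-fragment motion and disparity, which this toy example does not attempt. The feature scaling, k, and synthetic data are arbitrary example choices.

```python
# Toy clustering of pixels on concatenated (position, color, disparity, motion)
# features -- a simplified stand-in for the paper's statistical integration.
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 500
    pos = rng.uniform(0, 1, (n, 2))                 # normalized pixel position
    color = rng.uniform(0, 1, (n, 3))               # RGB
    disparity = rng.uniform(0, 1, (n, 1))           # stereo disparity (scaled)
    motion = rng.uniform(-1, 1, (n, 2))             # flow vector (scaled)
    X = np.hstack([pos, color, disparity, motion])
    labels, _ = kmeans(X, k=4)
    print(np.bincount(labels))                      # fragment sizes
```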

18601-18620 hits (20498 hits)