
Keyword Search Result

[Keyword] Al (20,498 hits)

Results 10941-10960 of 20,498

  • Effect of Delay of Feedback Force on Perception of Elastic Force: A Psychophysical Approach

    Hitoshi OHNISHI  Kaname MOCHIZUKI  

     
    PAPER-Network

      Vol:
    E90-B No:1
      Page(s):
    12-20

    The performance of a force feedback system is degraded by delay arising from the time required for data transmission and processing. We used a psychophysical method to measure how much a user's subjective impression of elasticity deviates from the original physical elasticity when the feedback force is delayed. The results show that users' point of subjective equality (PSE) for their subjective impression of elasticity decreased as the delay of the feedback force increased. We propose a model that estimates the PSE of elasticity from physically measurable variables. Another experiment was conducted to examine the model's prediction, and the results supported it.

  • Non-optimistic Secure Circuit Evaluation Based on ElGamal Encryption and Its Applications

    Koji CHIDA  Go YAMAMOTO  Koutarou SUZUKI  Shigenori UCHIYAMA  Noburou TANIGUCHI  Osamu SHIONOIRI  Atsushi KANAI  

     
    PAPER-Protocols

      Vol:
    E90-A No:1
      Page(s):
    128-138

    We propose a protocol for implementing secure circuit evaluation (SCE) based on the threshold homomorphic ElGamal encryption scheme and present the implementation results of the protocol. To the best of the authors' knowledge, the proposed protocol is more efficient in terms of computational complexity than previously reported protocols. We also introduce applications using SCE and estimate their practicality based on the implementation results.
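
    For orientation only (not part of the paper): the multiplicative homomorphism of ElGamal, E(a)·E(b) = E(a·b), is the property such evaluation protocols build on. A minimal Python sketch over a toy prime; real parameter sizes and the threshold key sharing used in the paper are omitted.

```python
import random

# Toy ElGamal over a small prime group (illustration only; real use needs
# large parameters and a threshold key-sharing layer, which are omitted here).
p = 467          # small prime, for demonstration
g = 2            # group element used as the base
x = random.randrange(1, p - 1)   # private key
h = pow(g, x, p)                 # public key

def enc(m):
    """Encrypt message m (an element of Z_p^*)."""
    r = random.randrange(1, p - 1)
    return (pow(g, r, p), (m * pow(h, r, p)) % p)

def dec(c):
    """Decrypt ciphertext c = (c1, c2) as c2 / c1^x mod p."""
    c1, c2 = c
    return (c2 * pow(c1, p - 1 - x, p)) % p

def mul(ca, cb):
    """Homomorphic multiplication: Enc(a) * Enc(b) decrypts to a*b."""
    return ((ca[0] * cb[0]) % p, (ca[1] * cb[1]) % p)

a, b = 5, 7
assert dec(mul(enc(a), enc(b))) == (a * b) % p
```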

  • Universally Composable Hierarchical Hybrid Authenticated Key Exchange

    Haruki OTA  Kazuki YONEYAMA  Shinsaku KIYOMOTO  Toshiaki TANAKA  Kazuo OHTA  

     
    PAPER-Protocols

      Vol:
    E90-A No:1
      Page(s):
    139-151

    Password-based authenticated key exchange protocols are convenient and practical, since users employ human-memorable passwords that are simpler to remember than cryptographic secret keys or public/private keys. Abdalla, Fouque, and Pointcheval proposed a password-based authenticated key exchange protocol in a 3-party model (GPAKE), in which clients trying to establish a secret do not share a password between themselves but only with a trusted server. On the other hand, Canetti presented a general framework, called the universally composable (UC) framework, for representing cryptographic protocols and analyzing their security. In this framework, the security of protocols is maintained under a general protocol composition operation called universal composition. Canetti also proved a UC composition theorem, which states that the definition of UC-security achieves the goal of concurrent general composition. A server must manage all the clients' passwords when 3-party password-based authenticated key exchange protocols are deployed in large-scale networks. To resolve this problem, we propose a hierarchical hybrid authenticated key exchange protocol (H2AKE). In H2AKE, forwarding servers are located between each client and a distribution server, and the distribution server sends the client an authentication key via the forwarding servers. Public/private keys are used between servers, while passwords are used between clients and forwarding servers. Thus, the password-management load on the distribution server can be distributed to the forwarding servers. In this paper, we define the hierarchical hybrid authenticated key exchange functionality. H2AKE is a general form of the hierarchical (hybrid) authenticated key exchange protocol that includes the 3-party model, and its construction can be flexibly changed according to the situation. We also prove that H2AKE is secure in the UC framework with the security-preserving composition property.

  • Logic Synthesis Method for Dual-Rail RSFQ Digital Circuits Using Root-Shared Binary Decision Diagrams

    Koji OBATA  Kazuyoshi TAKAGI  Naofumi TAKAGI  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E90-A No:1
      Page(s):
    257-266

    We propose a new method of logic synthesis for dual-rail RSFQ (rapid single-flux-quantum) digital circuits. RSFQ circuit technology is one of the strongest candidates for next-generation digital circuit technology. For representing logic functions, we use a root-shared binary decision diagram (RSBDD), a directed acyclic graph constructed from binary decision diagrams. In the method, we first construct an RSBDD from the given logic functions and then reduce the number of nodes in the constructed RSBDD by variable re-ordering. Finally, we synthesize a dual-rail RSFQ circuit from the reduced RSBDD. We have implemented the method and synthesized benchmark circuits; on average, the resulting dual-rail circuits consist of about 27% fewer logic elements than those synthesized by a Transduction-based method.
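
    As background only (generic shared BDDs, not the paper's RSBDD construction or the dual-rail RSFQ mapping): the sketch below builds several Boolean functions over one unique-table so that identical subgraphs are stored once, which is the node-sharing idea a multi-rooted diagram exploits.

```python
class SharedBDD:
    """Multi-rooted BDD with a shared unique-table (generic sketch)."""

    def __init__(self, n_vars):
        self.n = n_vars
        self.unique = {}                      # (var, low, high) -> node id
        self.nodes = {0: None, 1: None}       # terminal nodes 0 and 1

    def mk(self, var, low, high):
        """Return the node (var, low, high), reusing it if it already exists."""
        if low == high:
            return low
        key = (var, low, high)
        if key not in self.unique:
            node_id = len(self.nodes)
            self.nodes[node_id] = key
            self.unique[key] = node_id
        return self.unique[key]

    def from_table(self, tt, var=0):
        """Build the BDD of a function given by its truth table `tt`
        (tuple of 0/1 of length 2**(n - var)) via Shannon expansion."""
        if len(tt) == 1:
            return tt[0]
        half = len(tt) // 2
        low = self.from_table(tt[:half], var + 1)    # cofactor var = 0
        high = self.from_table(tt[half:], var + 1)   # cofactor var = 1
        return self.mk(var, low, high)

mgr = SharedBDD(3)
# Two functions over (x0, x1, x2): f = x0 AND x1, g = (x0 AND x1) OR x2.
f_root = mgr.from_table((0, 0, 0, 0, 0, 0, 1, 1))
g_root = mgr.from_table((0, 1, 0, 1, 0, 1, 1, 1))
print("roots:", f_root, g_root, "shared internal nodes:", len(mgr.unique))
```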

  • A Genetic Algorithm with Conditional Crossover and Mutation Operators and Its Application to Combinatorial Optimization Problems

    Rong-Long WANG  Shinichi FUKUTA  Jia-Hai WANG  Kozo OKAZAKI  

     
    PAPER-Neural Networks and Bioengineering

      Vol:
    E90-A No:1
      Page(s):
    287-294

    In this paper, we present a modified genetic algorithm for solving combinatorial optimization problems. The modified genetic algorithm, in which crossover and mutation are performed conditionally instead of probabilistically, has higher global and local search ability and is more easily applied to a problem than conventional genetic algorithms. Three optimization problems are used to test the performance of the modified genetic algorithm. Experimental studies show that the modified genetic algorithm produces better results than the conventional one and other methods.
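
    The paper's exact conditions are not reproduced here; the following Python sketch only illustrates the general idea of triggering crossover and mutation by conditions (chosen arbitrarily for this toy one-max problem) instead of by fixed probabilities.

```python
import random

def fitness(bits):
    """Toy one-max objective: number of 1-bits."""
    return sum(bits)

def conditional_ga(n_bits=30, pop_size=40, generations=200):
    """GA sketch: crossover only when the parents differ, mutation only
    when the child fails to improve on its better parent. These
    conditions are illustrative, not those of the paper."""
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        for _ in range(pop_size):
            p1, p2 = random.sample(pop, 2)
            # Conditional crossover: skipped if the parents are identical.
            if p1 != p2:
                cut = random.randrange(1, n_bits)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # Conditional mutation: applied only if no improvement was made.
            if fitness(child) <= max(fitness(p1), fitness(p2)):
                i = random.randrange(n_bits)
                child[i] ^= 1
            new_pop.append(child)
        # Elitist selection over the combined old and new populations.
        pop = sorted(pop + new_pop, key=fitness, reverse=True)[:pop_size]
    return max(pop, key=fitness)

best = conditional_ga()
print(fitness(best), "of 30 bits set")
```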

  • Tunable Vertical Comb for Driving Micromirror Realized by Bending Device Wafer

    Minoru SASAKI  Masahiro ISHIMORI  JongHyeong SONG  Kazuhiro HANE  

     
    LETTER-Micro/Nano Photonic Devices

      Vol:
    E90-C No:1
      Page(s):
    147-148

    An electrostatically driven micromirror is described. The vertical comb, a three-dimensional microstructure, is realized by bending the device wafer carrying the microstructures. By resetting the bending angle, the vertical gap between the moving and stationary combs can be tuned. The characteristics of the vertical comb drive actuator can thus be tuned, confirming its performance.

  • Retrieval of Images Captured by Car Cameras Using Its Front and Side Views and GPS Data

    Toshihiko YAMASAKI  Takayuki ISHIKAWA  Kiyoharu AIZAWA  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    217-223

    Recently, cars have been equipped with many sensors for driving safety. We have been working on storing driving-scene video together with such sensor data and detecting changes in street scenery. The detection results can be used for building a historical database of town scenery, automatically updating landmarks on maps, and so forth. In order to compare images to detect changes, retrieval of images taken at nearly identical locations is required as the first step. Since Global Positioning System (GPS) data essentially contain some noise, we cannot rely only on GPS data for our image retrieval. Therefore, we have developed an image retrieval algorithm employing edge-histogram-based image features in conjunction with a hierarchical search. By using edge histograms projected onto the vertical and horizontal axes, the retrieval has been made robust to image variation due to weather change, clouds, obstacles, and so on. In addition, the matching cost has been kept small by limiting the matching candidates through the hierarchical search. Experimental results have demonstrated that the mean retrieval accuracy has been improved from 65% to 76% for the front-view images and from 34% to 53% for the side-view images.
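
    A rough Python/NumPy sketch of the kind of feature described (edge magnitudes projected onto the horizontal and vertical axes and compared between a query and candidate images); the hierarchical search and the actual matching thresholds are not reproduced.

```python
import numpy as np

def edge_projections(gray, bins=32):
    """Project edge magnitudes of a grayscale image (2-D array) onto the
    horizontal and vertical axes; return the two normalized profiles
    concatenated as one feature vector."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    col_profile = mag.sum(axis=0)   # projection onto the horizontal axis
    row_profile = mag.sum(axis=1)   # projection onto the vertical axis

    def resample(v):
        # Average into a fixed number of bins so that images of different
        # sizes become comparable.
        idx = np.linspace(0, len(v), bins + 1).astype(int)
        h = np.array([v[a:b].mean() if b > a else 0.0
                      for a, b in zip(idx[:-1], idx[1:])])
        return h / (h.sum() + 1e-9)

    return np.concatenate([resample(col_profile), resample(row_profile)])

def retrieve(query, candidates):
    """Return candidate indices sorted by L1 distance of the features."""
    q = edge_projections(query)
    d = [np.abs(q - edge_projections(c)).sum() for c in candidates]
    return np.argsort(d)

# Usage with random stand-in images:
imgs = [np.random.rand(120, 160) for _ in range(5)]
print(retrieve(imgs[0], imgs))   # index 0 should rank first
```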

  • A MATLAB-Based Code Generator for Parallel Sparse Matrix Computations Utilizing PSBLAS

    Taiji SASAOKA  Hideyuki KAWABATA  Toshiaki KITAMURA  

     
    PAPER-Parallel Programming

      Vol:
    E90-D No:1
      Page(s):
    2-12

    Parallel programs for distributed-memory machines are not easy to create and maintain, especially when they involve sparse matrix computations. In this paper, we propose a program translation system for generating parallel sparse matrix computation codes utilizing PSBLAS. The purpose of the system is to offer the user a convenient way to construct parallel sparse code based on PSBLAS. The system is built on the idea of bridging the gap between easy-to-read program representations and highly tuned parallel executables based on existing parallel sparse matrix computation libraries. The system accepts a MATLAB program with annotations and generates subroutines for an SPMD-style parallel program which runs on distributed-memory machines. Experimental results on parallel machines show that the prototype of our system can generate fairly efficient PSBLAS codes for simple applications such as CG and Bi-CGSTAB programs.
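
    For context only: the sort of kernel such a generator targets is an iterative sparse solver. The serial Python/SciPy CG below is illustrative; in the generated code the matrix-vector products and dot products become distributed PSBLAS calls.

```python
import numpy as np
from scipy.sparse import diags

def cg(A, b, tol=1e-8, max_iter=1000):
    """Plain conjugate gradient for a symmetric positive definite sparse A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p                      # sparse matrix-vector product
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Usage: 1-D Laplacian (tridiagonal, SPD) test problem.
n = 100
A = diags([-1, 2, -1], [-1, 0, 1], shape=(n, n), format="csr")
b = np.ones(n)
x = cg(A, b)
print(np.linalg.norm(A @ x - b))        # should be ~1e-8 or smaller
```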

  • A Study on Higher Order Differential Attack of KASUMI

    Nobuyuki SUGIO  Hiroshi AONO  Sadayuki HONGO  Toshinobu KANEKO  

     
    PAPER-Symmetric Cryptography

      Vol:
    E90-A No:1
      Page(s):
    14-21

    This paper proposes novel calculuses for the linearizing attack that can be applied to the higher order differential attack. The higher order differential attack is a powerful and versatile attack on block ciphers. It can be roughly summarized as follows: (1) derive an attack equation to estimate the key by using the higher order differential properties of the target cipher; (2) determine the key by solving the attack equation. The linearizing attack is an effective method of solving attack equations: it linearizes an attack equation and determines the key by solving a system of linearized equations using approaches such as the Gauss-Jordan method. We enhance the derivation algorithm of the coefficient matrix for the linearizing attack to reduce its computational cost (fast calculus 1). Furthermore, we eliminate most of the unknown variables in the linearized equations by making the corresponding coefficient column vectors zero (fast calculus 2). We apply these algorithms to an attack on the five-round variant of KASUMI and show that the attack complexity is equivalent to 2^28.9 chosen plaintexts and 2^31.2 KASUMI encryptions.
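
    For orientation: the last step of a linearizing attack is solving a system of linear equations, typically over GF(2). The following is a generic Gauss-Jordan solver over GF(2), not the paper's fast calculuses or the KASUMI attack equations.

```python
def gauss_jordan_gf2(A, b):
    """Solve A x = b over GF(2) by Gauss-Jordan elimination.
    A is a list of rows (lists of 0/1), b a list of 0/1.
    Returns one solution (free variables set to 0), or None if inconsistent."""
    rows, cols = len(A), len(A[0])
    M = [A[i][:] + [b[i]] for i in range(rows)]      # augmented matrix
    pivot_cols = []
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c]), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c]:
                M[i] = [x ^ y for x, y in zip(M[i], M[r])]   # row XOR
        pivot_cols.append(c)
        r += 1
        if r == rows:
            break
    # Inconsistent if a zero row is left with a nonzero right-hand side.
    for i in range(r, rows):
        if M[i][-1] and not any(M[i][:-1]):
            return None
    x = [0] * cols
    for i, c in enumerate(pivot_cols):
        x[c] = M[i][-1]
    return x

A = [[1, 1, 0], [0, 1, 1], [1, 0, 1]]
b = [1, 0, 1]
print(gauss_jordan_gf2(A, b))   # e.g. [1, 0, 0]
```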

  • Random Switching Logic: A New Countermeasure against DPA and Second-Order DPA at the Logic Level

    Daisuke SUZUKI  Minoru SAEKI  Tetsuya ICHIKAWA  

     
    PAPER-Side Channel Attacks

      Vol:
    E90-A No:1
      Page(s):
    160-168

    This paper proposes a new countermeasure, Random Switching Logic (RSL), against DPA (Differential Power Analysis) and second-order DPA at the logic level. RSL makes signal transitions uniform at each gate and suppresses the propagation of glitches so that the power consumption is independent of predictable data. Furthermore, we implement basic logic circuits on an FPGA (Field Programmable Gate Array) using RSL and evaluate their effectiveness. As a result, we confirm that circuits secure against DPA and second-order DPA can be constructed.

  • Scalable FPGA/ASIC Implementation Architecture for Parallel Table-Lookup-Coding Using Multi-Ported Content Addressable Memory

    Takeshi KUMAKI  Yutaka KONO  Masakatsu ISHIZAKI  Tetsushi KOIDE  Hans Jurgen MATTAUSCH  

     
    PAPER-Image Processing and Video Processing

      Vol:
    E90-D No:1
      Page(s):
    346-354

    This paper presents a scalable FPGA/ASIC implementation architecture for high-speed parallel table-lookup coding using multi-ported content addressable memory, aiming at facilitating effective table-lookup-coding solutions. The multi-ported CAM adopts Flexible Multi-ported Content Addressable Memory (FMCAM) technology, an effective parallel processing architecture previously reported in [1]. To achieve a high-speed parallel table-lookup-coding solution, FMCAM is improved by additional schemes for a single search mode and a counting-value setting mode, so that it permits fast parallel table-lookup-coding operations. Evaluation results for Huffman encoding within the JPEG application show that a synthesized semi-custom ASIC implementation of the proposed architecture can already reduce the required number of clock cycles by 93% in comparison to a conventional DSP. Furthermore, the performance per unit area, measured in MOPS/mm², can be improved by a factor of 3.8 in comparison to parallel-operated DSPs. Consequently, the proposed architecture is well suited to FPGA/ASIC implementation and is a promising solution for small-area integrated realization of real-time table-lookup-coding applications.
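
    As background (not the FMCAM design itself): table-lookup coding replaces bit-by-bit code construction with a direct symbol-to-codeword lookup, which is the operation the CAM parallelizes. A plain Python sketch with a made-up, prefix-free code table:

```python
# Minimal table-lookup Huffman encoder (illustration only).
# The code table below is made up; JPEG uses standard tables of
# (codeword, length) pairs indexed by the symbol to be coded.
CODE_TABLE = {
    0: ("00", 2),
    1: ("010", 3),
    2: ("011", 3),
    3: ("100", 3),
    4: ("101", 3),
    5: ("1100", 4),
    6: ("1101", 4),
    7: ("1110", 4),
    8: ("11110", 5),
    9: ("11111", 5),
}

def encode(symbols):
    """Encode a symbol sequence by direct table lookup, one lookup per
    symbol; a CAM-based design performs many such lookups in parallel."""
    bits = []
    total_len = 0
    for s in symbols:
        codeword, length = CODE_TABLE[s]
        bits.append(codeword)
        total_len += length
    return "".join(bits), total_len

bitstream, nbits = encode([0, 3, 9, 1, 0])
print(bitstream, nbits)
```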

  • A Modified Generalized Hough Transform for Image Search

    Preeyakorn TIPWAI  Suthep MADARASMI  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    165-172

    We present the use of a Modified Generalized Hough Transform (MGHT) and deformable contours for image data retrieval, where a given contour, gray-scale, or color template image can be detected in the target image, irrespective of its position, size, rotation, and smooth deformation. Potential template positions are found in the target image using our novel modified Generalized Hough Transform method, which takes measurements from the template features by extending a line from each edge contour point in its gradient direction to the other end of the object. The gradient difference is used to create a relationship with the orientation and length of this line segment. Potential matching positions in the target image are then searched for by likewise extending a line from each target edge point along the normal to the other end and looking up the measurement data from the template image. Positions with high votes become candidate positions. Each candidate position is used to find a match by allowing the template to undergo a contour transformation. The deformed template contour is matched with the target by measuring the similarity in contour tangent direction and the smoothness of the matching vector. The deformation parameters are then updated via a Bayesian algorithm to find the best match. To avoid getting stuck in a local minimum, a novel coarse-and-fine model for contour matching is included. Results are presented for real images of several kinds, including bin picking and fingerprint identification.
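
    For reference, a sketch of the classic Generalized Hough Transform on which the paper builds (not the modified, deformation-aware version): an R-table is built from the template's edge points indexed by gradient orientation, and each target edge point then votes for candidate reference-point positions.

```python
import math
from collections import defaultdict

def build_r_table(template_edges, reference, n_bins=36):
    """template_edges: list of (x, y, gradient_angle_in_radians).
    Store, per quantized gradient orientation, the displacement from the
    edge point to a chosen reference point."""
    table = defaultdict(list)
    for x, y, theta in template_edges:
        b = int((theta % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins
        table[b].append((reference[0] - x, reference[1] - y))
    return table

def vote(target_edges, r_table, n_bins=36):
    """Each target edge point votes for possible reference-point locations;
    accumulator peaks are candidate template positions."""
    acc = defaultdict(int)
    for x, y, theta in target_edges:
        b = int((theta % (2 * math.pi)) / (2 * math.pi) * n_bins) % n_bins
        for dx, dy in r_table.get(b, []):
            acc[(x + dx, y + dy)] += 1
    return max(acc, key=acc.get) if acc else None

# Tiny synthetic example: the same "shape" shifted by (10, 5).
template = [(0, 0, 0.0), (4, 0, math.pi / 2),
            (4, 4, math.pi), (0, 4, 3 * math.pi / 2)]
r_table = build_r_table(template, reference=(2, 2))
target = [(x + 10, y + 5, t) for x, y, t in template]
print(vote(target, r_table))   # expected (12, 7)
```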

  • Low-Power Partial Distortion Sorting Fast Motion Estimation Algorithms and VLSI Implementations

    Yang SONG  Zhenyu LIU  Takeshi IKENAGA  Satoshi GOTO  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    108-117

    This paper presents two hardware-friendly, low-power-oriented fast motion estimation (ME) algorithms and their VLSI implementations. The basic idea of the proposed partial distortion sorting (PDS) algorithm is to disable the search points which have larger partial distortions during the ME process and keep only those search points with smaller ones. To further reduce the computation overhead, a simplified local PDS (LPDS) algorithm is also presented. Experiments show that the PDS and LPDS algorithms can provide almost the same image quality as full search with only 36.7% of the computational complexity. The two proposed algorithms can be integrated into different FSBMA architectures to save power consumption. In this paper, the 1-D inter ME architecture [12] is used as a detailed example. Under the worst-case working conditions (1.62 V, 125°C) and a 166 MHz clock frequency, the PDS algorithm reduces power consumption by 33.3% with 4.05 K gates of extra hardware cost, and the LPDS algorithm reduces power consumption by 37.8% with 1.73 K gates of overhead.
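
    A schematic Python/NumPy sketch of the pruning idea (not the paper's VLSI datapath): accumulate each candidate's SAD in partial steps and, after every step, keep only the candidates whose partial distortion is among the smallest; the step size and keep ratio below are arbitrary.

```python
import numpy as np

def pds_motion_search(cur_blk, ref, blk_pos, search_range=8,
                      rows_per_step=4, keep_ratio=0.5):
    """Block matching with partial-distortion pruning.
    cur_blk: 16x16 block of the current frame; ref: reference frame;
    blk_pos: (row, col) of the block's position in ref. After each group
    of rows, candidates whose accumulated SAD is not among the smallest
    `keep_ratio` fraction are discarded."""
    N = cur_blk.shape[0]
    r0, c0 = blk_pos
    cands = []
    for dr in range(-search_range, search_range + 1):
        for dc in range(-search_range, search_range + 1):
            r, c = r0 + dr, c0 + dc
            if 0 <= r and 0 <= c and r + N <= ref.shape[0] and c + N <= ref.shape[1]:
                cands.append([(dr, dc), 0.0])        # [motion vector, partial SAD]
    for row in range(0, N, rows_per_step):
        for cand in cands:
            (dr, dc), psad = cand
            r, c = r0 + dr, c0 + dc
            diff = cur_blk[row:row + rows_per_step] - \
                   ref[r + row:r + row + rows_per_step, c:c + N]
            cand[1] = psad + np.abs(diff).sum()
        cands.sort(key=lambda cand: cand[1])         # partial distortion sorting
        cands = cands[:max(1, int(len(cands) * keep_ratio))]
    return cands[0][0], cands[0][1]                  # best motion vector and its SAD

# Usage with a random frame; the block is an exact copy, so MV = (0, 0).
ref = np.random.randint(0, 256, (64, 64)).astype(float)
cur_blk = ref[20:36, 24:40].copy()
print(pds_motion_search(cur_blk, ref, (20, 24)))
```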

  • Voltage Island Generation in Cell Based Dual-Vdd Design

    Yici CAI  Bin LIU  Qiang ZHOU  Xianlong HONG  

     
    PAPER-VLSI Design Technology and CAD

      Vol:
    E90-A No:1
      Page(s):
    267-273

    The voltage island style has been widely accepted as an effective way to design low-power, high-performance chips. This paper proposes an automated voltage island generation flow for standard-cell-based designs. Two important objectives in voltage island design are addressed in this flow: 1) reducing power dissipation under given performance constraints; 2) reducing implementation overheads, mainly layout overheads caused by clustering cells to form islands. The first objective is handled with timing- and power-driven net weighting and timing analysis during voltage assignment. For the second objective, we propose layout-aware voltage assignment, i.e., voltage assignment during placement. We iteratively perform the following two adjustments: adjusting the voltage assignment to facilitate voltage island generation, and adjusting cell locations to cluster cells into voltage islands. These iterations lead to a flow featuring tightly integrated voltage assignment and cell placement. Experimental results have demonstrated the advantages of our approach.

  • HDR Image Compression by Local Adaptation for Scene and Display Using Retinal Model

    Lijie WANG  Takahiko HORIUCHI  Hiroaki KOTERA  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    173-181

    The adaptation process of the retina helps the human visual system see high dynamic range scenes in the real world. This paper presents a simple static local adaptation method for high dynamic range image compression based on a retinal model. The proposed model aims at recreating the same sensation for the real scene and for the range-compressed image on the display device, each viewed after steady-state local adaptation is reached. Our new model takes display adaptation into account in relation to scene adaptation based on the retinal model. In computing the local adaptation, the use of a nonlinear edge-preserving bilateral filter yields better tonal rendition, preserving local contrast and details while avoiding the banding artifacts normally seen in local methods. Finally, we demonstrate the effectiveness of the proposed model by estimating the color difference between the recreated image and the target visual image obtained by a trial-and-error method.
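
    A loose Python/NumPy sketch of the general recipe (an edge-preserving filter supplies the local adaptation level, which feeds a Naka-Rushton-style compression); the paper's retinal model, display-adaptation term, and parameters are not reproduced.

```python
import numpy as np

def bilateral(img, radius=3, sigma_s=2.0, sigma_r=0.4):
    """Plain (slow) bilateral filter on a 2-D float array: a spatial
    Gaussian weighted by an intensity Gaussian, so strong edges survive."""
    H, W = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))
    for i in range(H):
        for j in range(W):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            w = spatial * np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))
            out[i, j] = (w * patch).sum() / w.sum()
    return out

def compress_hdr(lum):
    """Compress an HDR luminance map with a Naka-Rushton-style response
    R = L / (L + A), where the adaptation level A comes from a bilateral
    filter of log-luminance (so it follows local, not global, brightness)."""
    log_l = np.log10(lum + 1e-6)
    adapt = 10.0 ** bilateral(log_l)          # local adaptation luminance
    return lum / (lum + adapt)                # response in [0, 1)

# Usage on a synthetic HDR ramp spanning several orders of magnitude:
hdr = np.logspace(-2, 4, 64 * 64).reshape(64, 64)
ldr = compress_hdr(hdr)
print(ldr.min(), ldr.max())
```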

  • Sketch-Based Evaluation of Image Segmentation Methods

    David GAVILAN  Hiroki TAKAHASHI  Suguru SAITO  Masayuki NAKAJIMA  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    156-164

    A method for evaluating image segmentation methods is proposed in this paper. The method is based on a perception model in which the act of drawing is used to represent visual mental percepts. Each segmented image is represented by a minimal set of features, and the segmentation method is tested against a set of sketches that represent a subset of the original image database, using the Mahalanobis distance function. The covariance matrix is set using a collection of sketches drawn by different users, and the drawings are shown to be consistent across users. This evaluation method can be used to solve the problem of parameter selection in image segmentation, as well as to show the strengths and limitations of different segmentation algorithms. Several well-known color segmentation algorithms are analyzed with the proposed method and the nature of each one is discussed. The evaluation method is also compared with heuristic functions that serve the same purpose, showing the importance of using users' pictorial knowledge.
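
    A compact NumPy sketch of the comparison step only (the paper's segment features are replaced by arbitrary fixed-length vectors): fit the covariance from user sketches, then score candidate segmentations by Mahalanobis distance.

```python
import numpy as np

def mahalanobis(x, mean, cov_inv):
    """Mahalanobis distance of feature vector x from a distribution
    described by its mean and inverse covariance."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

# Stand-in data: feature vectors (e.g. per-region color/shape statistics)
# extracted from user sketches of a scene, and from two candidate
# segmentations to be ranked. Dimensions and values are made up.
rng = np.random.default_rng(0)
sketch_feats = rng.normal(loc=0.0, scale=1.0, size=(20, 6))
seg_a = rng.normal(loc=0.2, scale=1.0, size=6)   # close to the sketches
seg_b = rng.normal(loc=3.0, scale=1.0, size=6)   # far from the sketches

mean = sketch_feats.mean(axis=0)
cov = np.cov(sketch_feats, rowvar=False)
cov_inv = np.linalg.inv(cov + 1e-6 * np.eye(6))  # regularize for stability

print("segmentation A:", mahalanobis(seg_a, mean, cov_inv))
print("segmentation B:", mahalanobis(seg_b, mean, cov_inv))  # expect larger
```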

  • Viewpoint Vector Rendering for Efficient Elemental Image Generation

    Kyoung Shin PARK  Sung-Wook MIN  Yongjoo CHO  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    233-241

    This paper presents a fast elemental image generation algorithm, called Viewpoint Vector Rendering (VVR), for the computer-generated integral imaging system. VVR produces a set of elemental images in real time by assembling segmented areas of the directional scenes taken from a range of viewpoints. The algorithm is less affected by system factors such as the number of elemental lenses and the number of polygons. It also supports all display modes of the integral imaging system: real, virtual, and focused. This paper first describes the characteristics of the integral imaging system. It then discusses the design, implementation, and performance evaluation of the VVR algorithm, which can be easily adapted to render the integral images of complex 3D objects.
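
    A toy NumPy sketch of the assembly idea under a strong simplification (one pixel per lens per viewpoint, so assembling reduces to a light-field transpose); lens geometry, the three display modes, and the actual rendering are outside its scope.

```python
import numpy as np

def assemble_elemental_images(views):
    """views: array of shape (U, V, H, W) holding H x W directional images
    rendered from a U x V grid of viewpoints. Each lens (i, j) receives a
    U x V elemental image whose pixel (u, v) comes from directional image
    (u, v) at position (i, j); all elemental images are tiled into one frame."""
    U, V, H, W = views.shape
    elem = views.transpose(2, 3, 0, 1)            # (H, W, U, V): per-lens images
    return elem.transpose(0, 2, 1, 3).reshape(H * U, W * V)

# Usage with random stand-in directional renderings:
views = np.random.rand(4, 4, 32, 32)   # 4x4 viewpoints, 32x32 pixels each
frame = assemble_elemental_images(views)
print(frame.shape)                     # (128, 128): 32x32 elemental images of 4x4 pixels
```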

  • Parallel Adaptive Estimation of Hip Range of Motion for Total Hip Replacement Surgery

    Yasuhiro KAWASAKI  Fumihiko INO  Yoshinobu SATO  Shinichi TAMURA  Kenichi HAGIHARA  

     
    PAPER-Parallel Image Processing

      Vol:
    E90-D No:1
      Page(s):
    30-39

    This paper presents the design and implementation of a hip range of motion (ROM) estimation method that is capable of fine-grained estimation during total hip replacement (THR) surgery. Our method is based on two acceleration strategies: (1) adaptive mesh refinement (AMR) for complexity reduction and (2) parallelization for further acceleration. On the assumption that the hip ROM is a single closed region, the AMR strategy reduces the complexity for N×N×N stance configurations from O(N^3) to O(N^D), where 2≤D≤3 and D is a data-dependent value that can be approximated by 2 in most cases. The parallelization strategy employs the master-worker paradigm with multiple task queues, reducing synchronization between processors while balancing the load. The experimental results indicate that the implementation on a cluster of 64 PCs completes estimation of 360×360×180 stance configurations in 20 seconds, playing a key role in selecting and aligning the optimal combination of artificial joint components during THR surgery.
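
    A minimal Python sketch of the master-worker pattern used for the parallelization, using multiprocessing on one machine rather than a PC cluster and a made-up per-stance feasibility test in place of the real collision check.

```python
import multiprocessing as mp

def reachable(stance):
    """Stand-in for the real per-stance feasibility test (e.g. a collision
    check between implant models); here, a simple geometric condition."""
    flexion, abduction = stance
    return flexion ** 2 + abduction ** 2 <= 90.0 ** 2

def worker(task_q, result_q):
    """Workers repeatedly pull a chunk of stance configurations from the
    task queue and report which of them lie inside the range of motion."""
    while True:
        chunk = task_q.get()
        if chunk is None:            # sentinel: no more work
            break
        result_q.put([s for s in chunk if reachable(s)])

if __name__ == "__main__":
    stances = [(f, a) for f in range(-120, 121, 2) for a in range(-60, 61, 2)]
    chunks = [stances[i:i + 500] for i in range(0, len(stances), 500)]

    task_q, result_q = mp.Queue(), mp.Queue()
    procs = [mp.Process(target=worker, args=(task_q, result_q)) for _ in range(4)]
    for p in procs:
        p.start()
    for c in chunks:
        task_q.put(c)
    for _ in procs:
        task_q.put(None)             # one sentinel per worker

    inside = []
    for _ in chunks:
        inside.extend(result_q.get())
    for p in procs:
        p.join()
    print(len(inside), "of", len(stances), "stance configurations are reachable")
```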

  • Reliable Parallel File System with Parity Cache Table Support

    Sheng-Kai HUNG  Yarsun HSU  

     
    PAPER-Parallel Processing System

      Vol:
    E90-D No:1
      Page(s):
    22-29

    Providing data availability in a high-performance computing environment is very important, especially in this data-intensive world. Most clusters are either equipped with RAID (Redundant Array of Independent Disks) devices or use redundant nodes to protect data from loss. However, neither of these can really solve the reliability problem incurred in a striped file system. Striping provides an efficient way to increase I/O throughput in both the distributed and parallel paradigms, but it also reduces the overall reliability of a disk system N-fold, where N is the number of independent disks in the system. The Parallel Virtual File System (PVFS) is an open-source parallel file system which has been widely used in the Linux environment. Its striping structure is good for performance but provides no fault tolerance. We implement the Reliable Parallel File System (RPFS), which is based on PVFS but adds reliability support. Our quantitative analysis shows that the MTTF (Mean Time To Failure) of our RPFS is better than that of PVFS. In addition, we propose a parity cache table (PCT) to alleviate the penalty of parity updating. The evaluation of our RPFS shows that its read performance is almost the same as that of PVFS (2% to 13% degradation). As to the write performance, a 28% to 45% improvement can be achieved depending on the behavior of the operations.
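
    To make the parity idea concrete (generic RAID-style striping, not the RPFS code): data is striped over N data servers, an XOR parity block protects each stripe, and a small cache defers parity writes so that repeated updates to the same stripe are coalesced.

```python
from functools import reduce

class StripedStore:
    """Toy striped store with XOR parity and a parity cache (generic
    RAID-4-style sketch; not the RPFS implementation)."""

    def __init__(self, n_data_disks, block_size=4):
        self.n = n_data_disks
        self.bs = block_size
        self.data = [dict() for _ in range(n_data_disks)]  # disk -> {stripe: block}
        self.parity = {}                                   # stripe -> parity block
        self.parity_cache = {}                             # deferred parity updates

    def _xor(self, blocks):
        return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

    def write(self, block_no, payload: bytes):
        stripe, disk = divmod(block_no, self.n)
        self.data[disk][stripe] = payload.ljust(self.bs, b"\0")[:self.bs]
        blocks = [self.data[d].get(stripe, b"\0" * self.bs) for d in range(self.n)]
        # Keep the fresh parity in the cache instead of writing it at once:
        # several writes to one stripe then cost only one parity flush.
        self.parity_cache[stripe] = self._xor(blocks)

    def flush_parity(self):
        self.parity.update(self.parity_cache)
        self.parity_cache.clear()

    def recover(self, failed_disk, stripe):
        """Rebuild a lost block from the surviving blocks and the parity."""
        others = [self.data[d].get(stripe, b"\0" * self.bs)
                  for d in range(self.n) if d != failed_disk]
        return self._xor(others + [self.parity[stripe]])

store = StripedStore(n_data_disks=3)
for i, chunk in enumerate([b"AAAA", b"BBBB", b"CCCC", b"DDDD"]):
    store.write(i, chunk)
store.flush_parity()
print(store.recover(failed_disk=1, stripe=0))   # expect b'BBBB'
```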

  • A New Coding Technique for Digital Holographic Video Using Multi-View Prediction

    Young-Ho SEO  Hyun-Jun CHOI  Jin-Woo BAE  Hoon-Jong KANG  Seung-Hyun LEE  Ji-Sang YOO  Dong-Wook KIM  

     
    PAPER

      Vol:
    E90-D No:1
      Page(s):
    118-125

    In this paper, we propose an efficient coding method for digital holograms (fringe patterns) acquired with a CCD camera or generated by computer, using multi-view prediction and MPEG video compression techniques. The method processes each R, G, or B color component separately. The basic processing unit is a partial image segmented to a size of M×N; each partial image retains information about the whole object. The method generates an assembled image for each column of the segmented and frequency-transformed partial images, which is the basis of the coding process. That is, MPEG motion estimation and compensation is applied between the original partial images and the images reconstructed from the assembled images using the disparities found during generation of the assembled images. The compressed result therefore consists of, for each column, the disparity of each partial image used to form the assembled image, the assembled image itself, and the motion vectors and compensated image for each partial image. Experimental results with the implemented algorithm show that the proposed method achieves NC (Normalized Correlation) values about 4% higher than the previous method at the same compression ratios, which convinces us that it has better compression efficiency. Consequently, the proposed method is expected to be used effectively in application areas requiring the transmission or storage of digital hologram data in digital format.
