
Keyword Search Result

[Keyword] NCO (318 hits)

Results 141-160 of 318

  • Efficient, High-Quality, GPU-Based Visualization of Voxelized Surface Data with Fine and Complicated Structures

    Sven FORSTMANN, Jun OHYA

    PAPER-Computer Graphics, Vol: E93-D No:11, Page(s): 3088-3099

    This paper proposes a GPU-based method that visualizes voxelized surface data with fine and complicated features, achieves high rendering quality at interactive frame rates, and has low memory consumption. The surface data is compressed using run-length encoding (RLE) at each level of detail (LOD). The rendering loop is then executed on the GPU for the viewpoint position at each time instant. The scene is raycast in planes, where each plane is perpendicular to the horizontal plane of the world coordinate system and passes through the viewpoint. For each plane, one ray is cast to rasterize all RLE elements intersecting that plane, starting from the viewpoint and ranging up to the maximum view distance. This rasterization projects each RLE element passing the occlusion test onto the screen at an LOD that decreases with the element's distance from the viewpoint. Finally, smoothing of voxels in screen space and full-screen anti-aliasing are performed. To provide lighting calculations without storing normal vectors inside the RLE data structure, our algorithm recovers them from the rendered scene's depth buffer. When the viewpoint changes, the same process is re-executed for the new viewpoint. Experiments using different scenes show that the proposed algorithm is faster than an equivalent CPU implementation and other related methods, and further demonstrate that it is memory efficient and achieves high-quality results.
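
    The per-LOD RLE compression step is easy to picture in a few lines. Below is a minimal Python sketch of run-length encoding one voxel row at several levels of detail; the 1-D data layout, the naive downsampling, and all names are illustrative assumptions, not the authors' implementation.

        # Minimal sketch of per-LOD run-length encoding of a voxel row.
        from itertools import groupby

        def rle_encode(row):
            """Encode one voxel row as (value, run_length) pairs."""
            return [(v, len(list(g))) for v, g in groupby(row)]

        def downsample(row, factor=2):
            """Halve the resolution for the next coarser LOD by keeping
            every `factor`-th voxel (a real system would filter)."""
            return row[::factor]

        def build_lods(row, levels=3):
            """Run-length encode the row at each level of detail."""
            lods = []
            for _ in range(levels):
                lods.append(rle_encode(row))
                row = downsample(row)
            return lods

        # Example: 0 = empty space, 1/2 = surface material IDs.
        row = [0, 0, 1, 1, 1, 2, 2, 0, 0, 0, 0, 1]
        for level, runs in enumerate(build_lods(row)):
            print(f"LOD {level}: {runs}")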

  • A High-Throughput Binary Arithmetic Coding Architecture for H.264/AVC CABAC

    Yizhong LIU, Tian SONG, Takashi SHIMAMOTO

    PAPER-VLSI Design Technology and CAD, Vol: E93-A No:9, Page(s): 1594-1604

    In this paper, we propose a high-throughput binary arithmetic coding architecture for CABAC (context adaptive binary arithmetic coding), one of the entropy coding tools used in the H.264/AVC main and high profiles. The full set of CABAC encoding functions, including binarization, context model selection, arithmetic encoding and bit generation, is implemented in this proposal. Binarization and context model selection are implemented in a proposed binarizer, in which a FIFO packs the binarization results and outputs 4 bins per clock cycle. Arithmetic encoding and bit generation are implemented in a four-stage pipeline capable of encoding 4 bins per clock. To improve processing speed, the context-variable accesses and updates for the 4 bins are parallelized and the pipeline path is balanced. To handle the outstanding-bits issue, a bit packing and generation strategy for 4-bin parallel processing is proposed. After implementation in Verilog HDL and synthesis with Synopsys Design Compiler using 90 nm libraries, the design runs at a clock frequency of 250 MHz and occupies about 58 K standard cells, 3.2 Kbits of register files and 27.6 Kbits of ROM. It achieves a throughput of 1000 Mbins per second, sufficient for HDTV applications.
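
    The core of any CABAC-style encoder is binary arithmetic coding by interval subdivision. The sketch below is a textbook integer binary arithmetic coder with a static bin probability, including the "outstanding bits" case the abstract mentions; real CABAC adapts the probability per context model and replaces the multiplication with table lookups, so this is intuition for the operation the 4-bins/clock pipeline parallelizes, not the proposed architecture.

        class BinaryArithmeticEncoder:
            """Textbook integer binary arithmetic coder with a static
            probability of a '1' bin (CABAC adapts this per context)."""

            def __init__(self, p1=0.5, bits=16):
                self.low, self.high = 0, (1 << bits) - 1
                self.half = 1 << (bits - 1)
                self.quarter = 1 << (bits - 2)
                self.p1 = p1          # static probability that a bin equals 1
                self.pending = 0      # "outstanding bits" awaiting resolution
                self.out = []

            def _emit(self, bit):
                self.out.append(bit)
                self.out.extend([bit ^ 1] * self.pending)
                self.pending = 0

            def encode(self, bin_):
                span = self.high - self.low + 1
                split = self.low + int(span * (1 - self.p1)) - 1
                if bin_:
                    self.low = split + 1
                else:
                    self.high = split
                while True:           # renormalize and emit resolved bits
                    if self.high < self.half:
                        self._emit(0)
                    elif self.low >= self.half:
                        self._emit(1)
                        self.low -= self.half
                        self.high -= self.half
                    elif self.low >= self.quarter and self.high < 3 * self.quarter:
                        self.pending += 1     # interval straddles the midpoint
                        self.low -= self.quarter
                        self.high -= self.quarter
                    else:
                        break
                    self.low *= 2
                    self.high = self.high * 2 + 1

            def finish(self):
                self.pending += 1
                self._emit(0 if self.low < self.quarter else 1)
                return self.out

        enc = BinaryArithmeticEncoder(p1=0.8)
        for b in [1, 1, 0, 1, 1, 1, 0, 1]:
            enc.encode(b)
        print(enc.finish())           # compressed bitstream as a list of bits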

  • Solving Open Job-Shop Scheduling Problems by SAT Encoding

    Miyuki KOSHIMURA, Hidetomo NABESHIMA, Hiroshi FUJITA, Ryuzo HASEGAWA

    LETTER-Artificial Intelligence, Data Mining, Vol: E93-D No:8, Page(s): 2316-2318

    This paper attempts to solve open Job-Shop Scheduling Problems (JSSPs) by translating them into Boolean satisfiability (SAT) problems. The encoding method is essentially the same as the one proposed by Crawford and Baker. The open problems are ABZ8, ABZ9, YN1, YN2, YN3, and YN4. We prove that the best known upper bounds, 678 for ABZ9 and 884 for YN1, are indeed optimal. We also improve the upper bound of YN2 and the lower bounds of ABZ8, YN2, YN3 and YN4.
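
    To make the SAT translation concrete, here is a small illustrative time-indexed encoding of a toy job-shop instance, emitting DIMACS-style clauses (a list of integers per clause, negative meaning negated). The paper follows Crawford and Baker's encoding, which differs in detail; everything below is an assumed simplification for intuition.

        # Encode "makespan <= horizon" for a toy instance: x[(op, t)] means
        # "operation op starts at time t".
        from itertools import combinations

        ops = [(0, 0, 2), (0, 1, 3),   # (job, machine, duration)
               (1, 1, 2), (1, 0, 2)]   # job 0: m0 then m1; job 1: m1 then m0
        horizon = 7

        var = {}
        def x(o, t):                   # 1-based DIMACS variable numbering
            return var.setdefault((o, t), len(var) + 1)

        clauses = []
        for o, (_, _, d) in enumerate(ops):
            starts = [x(o, t) for t in range(horizon - d + 1)]
            clauses.append(starts)                 # at least one start time
            clauses += [[-a, -b] for a, b in combinations(starts, 2)]  # at most one

        # Precedence inside each job: the next operation cannot start
        # before the previous one ends.
        for o in range(len(ops) - 1):
            if ops[o][0] == ops[o + 1][0]:
                d, d2 = ops[o][2], ops[o + 1][2]
                for t in range(horizon - d + 1):
                    for u in range(horizon - d2 + 1):
                        if u < t + d:
                            clauses.append([-x(o, t), -x(o + 1, u)])

        # No overlap between operations of different jobs on one machine.
        for (a, (ja, ma, da)), (b, (jb, mb, db)) in combinations(list(enumerate(ops)), 2):
            if ma == mb and ja != jb:
                for t in range(horizon - da + 1):
                    for u in range(horizon - db + 1):
                        if t < u + db and u < t + da:   # intervals overlap
                            clauses.append([-x(a, t), -x(b, u)])

        print(len(var), "variables,", len(clauses), "clauses")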

  • An Asynchronous FPGA Based on LEDR/4-Phase-Dual-Rail Hybrid Architecture

    Shota ISHIHARA, Yoshiya KOMATSU, Masanori HARIYAMA, Michitaka KAMEYAMA

    PAPER-Electronic Circuits, Vol: E93-C No:8, Page(s): 1338-1348

    This paper presents an asynchronous FPGA that combines 4-phase dual-rail encoding and LEDR (level-encoded dual-rail) encoding. 4-phase dual-rail encoding is employed to achieve small area and low power in the function units, while LEDR encoding is employed to achieve high throughput and low power in data transfers over the programmable interconnection resources. Area-efficient protocol converters and their control circuits are also proposed at the transistor level. The proposed FPGA is designed using the e-Shuttle 65 nm CMOS process. Compared to a 4-phase-dual-rail-based FPGA, its throughput is increased by 69% with almost the same transistor count; compared to an LEDR-based FPGA, its transistor count is reduced by 47% with almost the same throughput. In terms of power consumption, the proposed FPGA achieves the lowest power of the three, and it also consumes less power than a synchronous FPGA when the workload is below 35%.
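
    The two encodings can be modelled at the wire level in a few lines. The sketch below shows 4-phase dual-rail (return-to-zero spacer between tokens) versus LEDR (alternating parity, exactly one wire toggle per token, no spacer); handshake and timing logic are omitted, and the model is an illustration rather than the paper's circuits.

        def four_phase_stream(bits):
            """4-phase dual-rail: a (t, f) wire pair with an all-zero
            spacer between tokens; (1, 0) encodes 1, (0, 1) encodes 0."""
            out = []
            for b in bits:
                out.append((0, 0))                  # return-to-zero spacer
                out.append((1, 0) if b else (0, 1))
            return out

        def ledr_stream(bits):
            """LEDR: (value, repeat) wire pair whose parity alternates
            each token, so exactly one wire toggles per token and no
            spacer is needed -- hence the higher link throughput."""
            return [(b, b ^ (phase & 1)) for phase, b in enumerate(bits)]

        bits = [1, 0, 0, 1]
        print("4-phase:", four_phase_stream(bits))  # two wire events per bit
        print("LEDR:   ", ledr_stream(bits))        # one wire toggle per bit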

  • Extended Selective Encoding of Scan Slices for Reducing Test Data and Test Power

    Jun LIU, Yinhe HAN, Xiaowei LI

    PAPER-Information Network, Vol: E93-D No:8, Page(s): 2223-2232

    Test data volume and test power are two major concerns when testing modern large circuits. Recently, selective encoding of scan slices was proposed to compress test data. Unlike many other compression techniques, which encode all the bits, this technique encodes only the target symbols, by specifying a single bit index and copying group data. In this paper, we propose an extended selective encoding with two new techniques that optimize the original method: a flexible grouping strategy, and an X-bit exploitation and filling strategy. The flexible grouping strategy decreases the number of groups that need to be encoded and improves the compression ratio. The X-bit exploitation and filling strategy exploits the large number of don't-care bits to reduce test power with no loss of compression ratio. Experimental results show that the proposed technique needs less test data storage and, compared to selective encoding, reduces average weighted switching activity by 25.6% and peak weighted switching activity by 9.68% during scan shift.
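
    A hypothetical sketch of the selective-encoding idea follows: within one group of scan-slice bits, fill don't-cares with the majority value and transmit only the indices of the minority ("target") bits; filling X bits with the majority value is also what lowers switching activity. Group size and codeword format here are assumptions for illustration, not the exact scheme of the paper.

        def encode_group(group):
            """group: string over {'0', '1', 'X'};
            returns (fill_value, indices of target bits)."""
            zeros, ones = group.count('0'), group.count('1')
            fill = '0' if zeros >= ones else '1'     # majority / filler value
            targets = [i for i, b in enumerate(group) if b not in ('X', fill)]
            return fill, targets

        def decode_group(fill, targets, size):
            bits = [fill] * size                     # copy group data ...
            for i in targets:                        # ... then patch targets
                bits[i] = '1' if fill == '0' else '0'
            return ''.join(bits)

        slice_ = "0XX0X10X"            # one scan slice, X = don't-care
        fill, targets = encode_group(slice_)
        print(fill, targets)                         # '0', [5]
        print(decode_group(fill, targets, len(slice_)))  # "00000100"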

  • BioEncoding: A Reliable Tokenless Cancelable Biometrics Scheme for Protecting IrisCodes

    Osama OUDA, Norimichi TSUMURA, Toshiya NAKAGUCHI

    PAPER-Information Network, Vol: E93-D No:7, Page(s): 1878-1888

    Despite their usability advantages over traditional authentication systems, biometrics-based authentication systems suffer from inherent privacy-violation and non-revocability issues. To address these issues, the concept of cancelable biometrics was introduced as a means of generating multiple, revocable, and noninvertible identities from true biometric templates. Apart from BioHashing, a two-factor cancelable biometrics technique based on mixing a set of tokenized user-specific random numbers with biometric features, cancelable biometrics techniques usually cannot preserve the recognition accuracy achieved with the unprotected biometric system. However, because the employed token can be lost, shared, or stolen, BioHashing suffers from the same issues associated with token-based authentication systems. In this paper, a reliable tokenless cancelable biometrics scheme for protecting IrisCodes, referred to as BioEncoding, is presented. Unlike BioHashing, BioEncoding can be used as a one-factor authentication scheme that relies solely on IrisCodes. A unique noninvertible compact bit-string, referred to as a BioCode, is randomly derived from the true IrisCode; the derived BioCode, rather than the true IrisCode, can then be used to verify the user's identity without degrading the recognition accuracy obtained with the original IrisCodes. Additionally, BioEncoding satisfies all the requirements of the cancelable biometrics construct. The performance of BioEncoding is compared with that of BioHashing in the stolen-token scenario, and the experimental results show the superiority of the proposed method over BioHashing-based techniques.
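
    For intuition about how a noninvertible BioCode might be derived, the sketch below randomly groups IrisCode bits and XOR-folds each group into one output bit: XOR folding is many-to-one, hence noninvertible, and changing the seed revokes the code. This is a hypothetical construction for illustration only; the actual BioEncoding mapping is the one defined in the paper.

        import random
        from functools import reduce

        def derive_biocode(iriscode, code_len, seed):
            """XOR-fold randomly grouped IrisCode bits into a BioCode."""
            rng = random.Random(seed)    # the seed acts as the revocable part
            idx = list(range(len(iriscode)))
            rng.shuffle(idx)
            group = len(iriscode) // code_len
            return [reduce(lambda a, b: a ^ b,
                           (iriscode[idx[g * group + k]] for k in range(group)))
                    for g in range(code_len)]

        rng0 = random.Random(0)
        iris = [rng0.randrange(2) for _ in range(2048)]   # stand-in IrisCode
        old = derive_biocode(iris, 256, seed=1)
        new = derive_biocode(iris, 256, seed=2)   # revocation: reissue with a new seed
        print(sum(a != b for a, b in zip(old, new)), "of 256 bits differ")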

  • An Ultra Low Power and Variation Tolerant GEN2 RFID Tag Front-End with Novel Clock-Free Decoder

    Sung-Jin KIM, Minchang CHO, SeongHwan CHO

    PAPER, Vol: E93-C No:6, Page(s): 785-795

    In this paper, an ultra-low-power analog front-end for EPCglobal Class 1 Generation 2 RFID tags is presented. The proposed tag removes the need for the high-frequency clock and counters used in conventional tags, which are their most power-hungry blocks. The proposed clock-free decoder employs an analog integrator with an adaptive current source that provides a uniform decoding margin regardless of the data rate, and a link-frequency extractor based on a relaxation oscillator that generates the frequency used for backscattering. A dual supply voltage scheme is also employed to increase the power efficiency of the tag, and a self-calibration circuit is proposed to improve the circuit's tolerance to environmental variations. The proposed RFID analog front-end is designed and simulated in 0.25 µm CMOS; simulations show that its power consumption is reduced by an order of magnitude compared to conventional RFID tags, without losing immunity to environmental variations.

  • Inconsistency Resolution Method for RBAC Based Interoperation

    Chao HUANG, Jianling SUN, Xinyu WANG, Di WU

    PAPER, Vol: E93-D No:5, Page(s): 1070-1079

    In this paper, we propose an inconsistency resolution method based on a new concept, insecure backtracking role mapping. By analyzing the role graph, we prove that the root cause of security inconsistency in distributed interoperation is the existence of insecure backtracking role mappings. We propose a novel and efficient algorithm to detect the inconsistency by finding all of the insecure backtracking role mappings; the detection algorithm not only reports the existence of inconsistency but also generates the inconsistency information needed for resolution. We then reduce the inconsistency resolution problem to the well-known Minimum-Cut problem and, based on the results generated by the detection algorithm, propose a resolution algorithm that guarantees the security of distributed interoperation. We demonstrate the effectiveness of our approach through simulated tests and a case study.
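
    The Minimum-Cut reduction can be illustrated directly: treat cross-domain role mappings as capacitated edges and compute a minimum s-t cut between a low role and a high role of the same domain, yielding a cheapest set of mappings whose removal breaks every backtracking path. The toy graph, the costs, and the use of networkx below are assumptions for illustration, not the paper's detection or resolution algorithm.

        import networkx as nx

        G = nx.DiGraph()
        # (from_role, to_role, cost of removing this mapping) -- toy data
        mappings = [("A.low", "B.user", 1), ("B.user", "B.admin", 3),
                    ("B.admin", "A.high", 1), ("B.user", "A.high", 2)]
        for u, v, c in mappings:
            G.add_edge(u, v, capacity=c)

        # A minimum s-t cut gives a cheapest set of mappings whose removal
        # disconnects every path from A.low back up to A.high.
        cut_value, (reach, non_reach) = nx.minimum_cut(G, "A.low", "A.high")
        cut_edges = [(u, v) for u in reach for v in G[u] if v in non_reach]
        print("remove mappings:", cut_edges, "total cost:", cut_value)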

  • imCast: Studio-Quality Digital Media Platform Exploiting Broadband IP Networks

    Jinyong JO, JongWon KIM

    PAPER-Educational Technology, Vol: E93-D No:5, Page(s): 1214-1224

    The recent growth in available network bandwidth is enabling the widespread use of broadband applications such as uncompressed HD-SDI (high-definition serial digital interface) over IP. These cutting-edge applications are also driving the development of a media-oriented infrastructure for networked collaboration. This paper introduces imCast, a high-quality digital media platform handling uncompressed HD-SDI over IP, and discusses its internal architecture in depth. imCast provides cost-effective hardware-based approaches for high-quality media acquisition and presentation, flexible software-based approaches for presentation, and economical network transmission. Experimental results taken over best-effort IP networks demonstrate the functional feasibility and performance of imCast.

  • Security Proof of Quantum Key Distribution

    Kiyoshi TAMAKI, Toyohiro TSURUMARU

    INVITED PAPER, Vol: E93-A No:5, Page(s): 880-888

    Quantum key distribution (QKD) is a way to securely expand the secret key to be used in one-time pad encryption, and it is attracting great interest not only from theorists but also from experimentalists and engineers aiming at actual implementations. In this paper, we review the theoretical aspects of QKD, focusing especially on its security proofs, and briefly mention possible problems and future directions.

  • Changes to Quantum Cryptography

    Yasuyuki SAKAI, Hidema TANAKA

    INVITED PAPER, Vol: E93-A No:5, Page(s): 872-879

    Quantum cryptography has become a subject of widespread interest. In particular, quantum key distribution, which provides secure key agreement by using quantum systems, is believed to be its most important application. Quantum key distribution has the potential to achieve an "unconditionally" secure infrastructure. At present, we also have many cryptographic tools based on "modern cryptography" that are used to guarantee secure communication over open networks such as the Internet; unfortunately, their ultimate efficacy is in doubt. Quantum key distribution systems are believed to be close to practical and commercial use. In this paper, we discuss what we should do to apply quantum cryptography to our communications, and how quantum key distribution can be combined with or used to replace cryptographic tools based on modern cryptography.

  • Facial Image Recognition Based on a Statistical Uncorrelated Near Class Discriminant Approach

    Sheng LI, Xiao-Yuan JING, Lu-Sha BIAN, Shi-Qiang GAO, Qian LIU, Yong-Fang YAO

    LETTER-Image Recognition, Computer Vision, Vol: E93-D No:4, Page(s): 934-937

    In this letter, a statistical uncorrelated near class discriminant (SUNCD) approach is proposed for face recognition. The optimal discriminant vector obtained by this approach can differentiate one class from its near classes, i.e., its nearest-neighbor classes, by constructing specific between-class and within-class scatter matrices and using the Fisher criterion. In this manner, SUNCD acquires all discriminant vectors class by class. Furthermore, SUNCD makes every discriminant vector satisfy locally statistical uncorrelated constraints by using the corresponding class and part of its nearest neighboring classes. Experiments on the public AR face database demonstrate that the proposed approach outperforms several representative discriminant methods.
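
    The Fisher-criterion ingredients the letter builds on are easy to sketch: between-class and within-class scatter matrices computed over a class together with a few near classes, followed by the leading eigenvector of Sw^-1 Sb. The class selection and the statistical-uncorrelation constraints of SUNCD are omitted; the code below is a simplified assumption, not the proposed method.

        import numpy as np

        def scatter_matrices(X, y, classes):
            """Between- and within-class scatter over the given classes only."""
            mask = np.isin(y, classes)
            X, y = X[mask], y[mask]
            mean_all = X.mean(axis=0)
            d = X.shape[1]
            Sb, Sw = np.zeros((d, d)), np.zeros((d, d))
            for c in classes:
                Xc = X[y == c]
                mc = Xc.mean(axis=0)
                Sb += len(Xc) * np.outer(mc - mean_all, mc - mean_all)
                Sw += (Xc - mc).T @ (Xc - mc)
            return Sb, Sw

        # Toy data: 3 classes, 20 samples each, 5 features, shifted means.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 5)) + np.repeat(np.arange(3), 20)[:, None]
        y = np.repeat(np.arange(3), 20)

        Sb, Sw = scatter_matrices(X, y, classes=[0, 1])  # a class + one near class
        vals, vecs = np.linalg.eig(np.linalg.inv(Sw) @ Sb)
        w = vecs[:, np.argmax(vals.real)].real   # Fisher discriminant vector
        print(w / np.linalg.norm(w))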

  • Noncoherent Maximum Likelihood Detection for Differential Spatial Multiplexing MIMO Systems

    Ziyan JIA, Katsunobu YOSHII, Shiro HANDA, Fumihito SASAMORI, Shinjiro OSHITA

    PAPER-Wireless Communication Technologies, Vol: E93-B No:2, Page(s): 361-368

    In this paper, we propose a novel noncoherent maximum likelihood detection (NMLD) method for differential spatial multiplexing (SM) multiple-input multiple-output (MIMO) systems. Unlike the conventional maximum likelihood detection (MLD) method, which needs knowledge of the channel state information (CSI) at the receiver, NMLD needs no CSI at either the transmitter or the receiver. After repartitioning the observation block of multiple-symbol differential detection (MSDD) and applying a decision-feedback process, the decision metric of NMLD is derived by reforming that of MSDD. Since the maximum Doppler frequency and the noise power appear in the derived decision metric, both must be estimated at the receiver. A fast calculation algorithm (FCA) is applied to reduce the computational complexity of NMLD. The feasibility of the proposed NMLD is demonstrated by computer simulations in both slow and fast fading environments. Simulation results show that the proposed NMLD has good bit error rate (BER) performance, approaching that of conventional coherent MLD as the reference symbol interval is extended, and that the BER performance is not sensitive to estimation errors in the maximum Doppler frequency and noise power.

  • Efficient Almost Secure 1-Round Message Transmission Schemes for 3t+1 Channels

    Toshinori ARAKI, Wakaha OGATA

    PAPER-Secure Protocol, Vol: E93-A No:1, Page(s): 126-135

    In this model, a sender S wants to send a message to a receiver R secretly and reliably in r rounds. They share no information such as keys, but there are n independent communication channels between S and R, and an adversary A can observe and/or substitute the data passing through some of the channels (but not all). In this paper, we propose almost secure (1-round, 3t+1-channel) message transmission schemes (MTSs), where t is the number of channels A can observe and/or forge, with the following two properties: (1) the running time of the message decryption algorithm is polynomial in n, and (2) the communication cost is smaller than that of previous MTSs when the message is sufficiently large.
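
    The channel model rewards a concrete example. The toy sketch below shares the message as the constant term of a random degree-t polynomial over GF(p), sends one share per channel, and recovers the message as the unique degree-t polynomial agreeing with at least 2t+1 of the 3t+1 shares. Its brute-force decoder is exponential in n, which is precisely what property (1) above improves on; the scheme shown is a generic illustration, not the proposed MTS.

        from itertools import combinations
        import random

        P, T = 101, 1          # prime field; adversary corrupts <= T channels
        N = 3 * T + 1          # number of channels

        def lagrange_eval(points, x):
            """Evaluate the unique degree-(len(points)-1) polynomial
            through `points` at x, over GF(P)."""
            total = 0
            for xi, yi in points:
                num = den = 1
                for xj, _ in points:
                    if xj != xi:
                        num = num * (x - xj) % P
                        den = den * (xi - xj) % P
                total = (total + yi * num * pow(den, -1, P)) % P
            return total

        def poly_at(coeffs, x):
            return sum(c * pow(x, k, P) for k, c in enumerate(coeffs)) % P

        msg = 42
        coeffs = [msg] + [random.randrange(P) for _ in range(T)]
        shares = {i: poly_at(coeffs, i) for i in range(1, N + 1)}  # one per channel
        shares[2] = (shares[2] + 7) % P       # adversary forges channel 2

        # Receiver: some size-(T+1) subset defines a degree-T polynomial
        # agreeing with >= 2T+1 of the N shares; its value at 0 is msg.
        for subset in combinations(shares.items(), T + 1):
            agree = sum(lagrange_eval(subset, x) == y for x, y in shares.items())
            if agree >= 2 * T + 1:
                print("recovered:", lagrange_eval(subset, 0))
                break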

  • Recent Advances in Millimeter-Wave NRD-Guide Circuits

    Tsukasa YONEYAMA

    INVITED PAPER, Vol: E92-C No:9, Page(s): 1106-1110

    Though millimeter-wave applications have attracted much attention in recent years, they have not yet been put to practical use, mainly because of the large transmission loss peculiar to such short wavelengths. A promising way to overcome this drawback is the technology of millimeter-wave NRD-guide circuits, in which not only the NRD-guide itself but also Gunn diodes and Schottky diodes play important roles in high-bit-rate millimeter-wave applications. A variety of practical millimeter-wave wireless systems have been proposed and fabricated; their performance and applications are discussed in detail.

  • Unconditionally Secure Group Signatures

    Takenobu SEITO, Yuki HARA, Junji SHIKATA, Tsutomu MATSUMOTO

    PAPER-Cryptography and Information Security, Vol: E92-A No:8, Page(s): 2067-2085

    A group signature scheme, introduced by Chaum and van Heyst, allows a group member to sign messages anonymously on behalf of the group. In the case of a dispute, however, the identity of the signer of a group signature can be revealed, but only by a privileged entity called the group manager. Group signature schemes have so far been studied mainly in the computational security setting. The main contribution of this paper is to study group signature schemes with unconditional security. More specifically, we introduce strong security notions for unconditionally secure group signatures (USGS for short), based on the notions for computationally secure group signatures proposed by Bellare, Micciancio and Warinschi. We also provide a generic method to construct USGS that is provably secure under our security definition. More precisely, we construct USGS by combining an encryption scheme with a signature scheme, and show that the construction is unconditionally secure if the underlying encryption and signature schemes are unconditionally secure. Finally, we provide an instantiation of a one-time secure group signature scheme based on the generic construction.

  • High-Speed EA-DFB Laser for 40-G and 100-Gbps

    Shigeki MAKINO, Kazunori SHINODA, Takeshi KITATANI, Hiroaki HAYASHI, Takashi SHIOTA, Shigehisa TANAKA, Masahiro AOKI, Noriko SASADA, Kazuhiko NAOE

    INVITED PAPER, Vol: E92-C No:7, Page(s): 937-941

    We have developed high-speed electroabsorption modulator integrated distributed feedback (EA/DFB) lasers. Transmission performance over 10 km was investigated under 25-Gbps and 43-Gbps modulation, and the feasibility of wide-temperature-range operation was also examined. Uncooled EA/DFB lasers can contribute to the realization of low-power-consumption, small-footprint and cost-effective transceiver modules. In this study, we used temperature-tolerant InGaAlAs materials in the EA modulator. Wide-temperature-range 12-km transmission with a dynamic extinction ratio of over 9.6 dB was demonstrated under 25-Gbps modulation, and 43-Gbps 10-km transmission was also demonstrated. The laser achieved a clear, open eye diagram with a dynamic extinction ratio of over 7 dB from 25 to 85°C, and the modulated output power was more than +2.9 dBm even at 85°C. These devices are suitable for next-generation high-speed network systems such as 40-Gbps and 100-Gbps Ethernet.

  • An Application of Vector Coding with IBI Cancelling Demodulator and Code Elimination to Delay Spread MIMO Channels

    Zhao LI, Hiroshi FURUKAWA

    PAPER-Wireless Communication Technologies, Vol: E92-B No:6, Page(s): 2153-2159

    Vector coding (VC) is a vector modulation scheme that partitions a SISO (single-input single-output) channel into orthogonal subchannels by singular value decomposition (SVD). Because the orthogonal transmissions enabled by VC cannot cope with the inter-block interference (IBI) that is inevitable in delay-spread channels, this paper proposes an IBI-cancelling demodulator that removes IBI by an iterative technique. We also show that code elimination, in which insignificant eigencodes with the lowest eigenvalues are intentionally removed from the transmission vectors, greatly reduces the bit error rate (BER). VC with the IBI-cancelling demodulator and code elimination is compared with the original VC in delay-spread SISO and MIMO (multi-input multi-output) channels, with emphasis on the MIMO case. Simulation results show that, at a given BER, the enhanced MIMO-VC improves the effective transmission rate compared with the natural extension of VC to delay-spread MIMO channels.
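
    The SVD partitioning and code elimination are straightforward to demonstrate numerically. The sketch below precodes symbols on the right singular vectors of a toy channel matrix, equalizes per subchannel, and drops eigencodes whose singular values fall below a threshold; the dimensions, the i.i.d. Gaussian channel, the noiseless link and the 50% threshold are all assumptions for illustration, not the paper's setup.

        import numpy as np

        rng = np.random.default_rng(1)
        H = rng.normal(size=(4, 4))           # toy block channel matrix
        U, s, Vh = np.linalg.svd(H)

        keep = s > 0.5 * s.max()              # code elimination: drop weak eigencodes
        print("singular values:", np.round(s, 2), "kept:", keep.sum())

        symbols = rng.choice([-1.0, 1.0], size=keep.sum())  # BPSK on kept codes
        tx = Vh[keep].T @ symbols             # precode with right singular vectors
        rx = H @ tx                           # noiseless channel, for clarity
        est = (U[:, keep].T @ rx) / s[keep]   # per-subchannel equalization
        print("recovered:", np.sign(est))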

  • Fast Packet Classification Using Multi-Dimensional Encoding

    Chi Jia HUANG, Chien CHEN

    PAPER-Internet, Vol: E92-B No:6, Page(s): 2044-2053

    Internet routers must classify incoming packets into flows quickly in order to support features such as Internet security, virtual private networks and quality of service (QoS). Packet classification uses information in the packet header together with a predefined rule table in the router. Classification on multiple fields is in general a difficult problem, and researchers have proposed various algorithms. This study proposes a multi-dimensional encoding method in which parameters such as the source IP address, destination IP address, source port, destination port and protocol type are placed in a multi-dimensional space. Like the previously best-known algorithm, bitmap intersection, multi-dimensional encoding is based on the multi-dimensional range lookup approach, in which rules are divided into several multi-dimensional collision-free rule sets. These sets are then used to form a new coding vector that replaces the bit vector of the bitmap intersection algorithm. The average memory requirement of this encoding is Θ(LN log N) per dimension, where L denotes the number of collision-free rule sets and N the number of rules. Multi-dimensional encoding requires much less memory in practice than the bitmap intersection algorithm, while its computation is just as simple. The low memory requirement not only decreases the cost of the packet classification engine but also increases classification performance, since memory access is the performance bottleneck when a packet classification engine is implemented on a network processor.
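
    For reference, the baseline bitmap intersection scheme the encoding improves on can be sketched in a few lines: each dimension maps its elementary intervals to a bit vector of matching rules, and classification ANDs one vector per dimension. The paper's contribution, not shown here, replaces these N-bit vectors with shorter codes over collision-free rule sets. The 2-D rule table below is a made-up toy.

        rules = [   # (src_range, dst_range); bit position = rule index
            ((0, 63), (0, 255)),      # rule 0
            ((0, 255), (0, 127)),     # rule 1
            ((64, 255), (128, 255)),  # rule 2
        ]

        def dimension_table(dim):
            """Map each elementary interval of one dimension to the bit
            vector of rules that fully cover it."""
            points = sorted({r[dim][0] for r in rules} |
                            {r[dim][1] + 1 for r in rules})
            table = []
            for lo, hi in zip(points, points[1:]):
                vec = 0
                for i, r in enumerate(rules):
                    if r[dim][0] <= lo and hi - 1 <= r[dim][1]:
                        vec |= 1 << i
                table.append(((lo, hi - 1), vec))
            return table

        tables = [dimension_table(0), dimension_table(1)]

        def classify(src, dst):
            vec = ~0
            for value, table in zip((src, dst), tables):
                vec &= next(v for (lo, hi), v in table if lo <= value <= hi)
            return (vec & -vec).bit_length() - 1 if vec else None  # best rule

        print(classify(10, 200))    # -> 0 (rule 0 is the lowest-index match)
        print(classify(200, 50))    # -> 1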

  • Design of a High-Throughput CABAC Encoder

    Chia-Cheng LO, Ying-Jhong ZENG, Ming-Der SHIEH

    PAPER-Image Processing and Video Processing, Vol: E92-D No:4, Page(s): 681-688

    Context-based adaptive binary arithmetic coding (CABAC) is one of the algorithmic improvements the H.264/AVC standard provides to enhance the compression ratio of video sequences. Compared with context-based adaptive variable-length coding (CAVLC), CABAC obtains a better compression ratio at the price of higher computational complexity. In particular, the inherent data dependencies and the various types of syntax elements in CABAC cause a dramatic increase in complexity when two bins of binarized syntax elements are handled at a time. By analyzing the distribution of binarized bins in different video sequences, this work shows how to effectively improve the encoding rate with limited hardware overhead by allowing only a certain type of syntax element to be processed two bins at a time. Together with the proposed context memory management scheme and range renovation method, experimental results show that an encoding rate of up to 410 Mbins/s can be obtained with a limited increase in hardware. Compared with related works that do not support multi-symbol encoding, our design achieves nearly twice their throughput with less than 25% hardware overhead.
