
Keyword Search Result

[Keyword] CTI (8214 hits)

1901-1920 hits (8214 hits)

  • Independent Spanning Trees of 2-Chordal Rings

    Yukihiro HAMADA  

     
    PAPER-Graphs and Networks

      Vol:
    E99-A No:1
      Page(s):
    355-362

    Two spanning trees T1, T2 of a graph G = (V,E) are independent if they are rooted at the same vertex, say r, and for each vertex v ∈ V, the path from r to v in T1 and the path from r to v in T2 have no common vertices and no common edges except for r and v. In general, spanning trees T1, T2, …, Tk of a graph G = (V,E) are independent if they are pairwise independent. A graph G = (V,E) is called a 2-chordal ring, denoted CR(N,d1,d2), if V = {0,1,…,N-1} and E = {(u,v) | [v-u]N = 1 or [v-u]N = d1 or [v-u]N = d2}, where 2 ≤ d1 < d2 ≤ N/2. CR(N,d1,N/2) is 5-connected if N ≥ 8 is even and d1 ≠ N/2-1. We give an algorithm that constructs 5 independent spanning trees of CR(N,d1,N/2), where N ≥ 8 is even and 2 ≤ d1 ≤ ⌈N/4⌉.
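    Since the abstract gives the full edge-set definition of CR(N,d1,d2), a minimal sketch that builds this graph directly from the definition is shown below; it is not the paper's spanning-tree construction, and reading [v-u]N as the circular difference modulo N is an assumption about the notation.

```python
# Minimal sketch: build the undirected edge set of a 2-chordal ring CR(N, d1, d2)
# from the definition quoted above. Assumes [v-u]N means the circular difference
# modulo N, i.e. vertex u is adjacent to u±1, u±d1 and u±d2 (mod N).
def chordal_ring_edges(N, d1, d2):
    assert 2 <= d1 < d2 <= N // 2, "definition requires 2 <= d1 < d2 <= N/2"
    edges = set()
    for u in range(N):
        for step in (1, d1, d2):
            v = (u + step) % N
            edges.add((min(u, v), max(u, v)))   # store each undirected edge once
    return edges

# Example: CR(8, 2, 4). Every vertex has degree 5, consistent with the
# 5-connectivity statement for CR(N, d1, N/2) with even N >= 8.
print(sorted(chordal_ring_edges(8, 2, 4)))
```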

  • A 12×16-Element Double-Layer Corporate-Feed Waveguide Slot Array Antenna

    Satoshi ITO  Miao ZHANG  Jiro HIROKAWA  Makoto ANDO  

     
    PAPER-Antennas and Propagation

      Vol:
    E99-B No:1
      Page(s):
    40-47

    A 12×16-element corporate-feed slot array is presented. The corporate-feed circuit for the 12×16-element array consists of cross-junctions and asymmetric T-junctions, whereas the conventional circuit is limited to arrays of 2^m×2^n slots by its use of symmetric T-junctions. Simulations of the 12×16-element array show a 7.6% bandwidth for reflection below -14dB. A 31.7-dBi gain with an antenna efficiency of 82.6% is obtained at the design frequency of 61.5GHz. The 12×16-element array is fabricated by diffusion bonding of laminated thin metal plates. Measurements indicate a 31.1-dBi gain with 71.9% antenna efficiency at 61.5GHz.
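    As a quick plausibility check on the reported figures, the simulated and measured (gain, efficiency) pairs should imply the same ideal directivity of the fixed physical aperture; the sketch below assumes the usual convention that antenna efficiency is realized gain divided by the ideal aperture directivity 4πA/λ².

```python
import math

# Plausibility check (assumption: "antenna efficiency" = realized gain divided
# by the ideal aperture directivity 4*pi*A/lambda^2 of the fixed aperture).
# The simulated and measured (gain, efficiency) pairs should then point to the
# same ideal aperture directivity, since the aperture does not change.
def aperture_directivity_dBi(gain_dBi, efficiency):
    gain_linear = 10 ** (gain_dBi / 10)
    return 10 * math.log10(gain_linear / efficiency)

print(aperture_directivity_dBi(31.7, 0.826))   # simulation:  about 32.5 dBi
print(aperture_directivity_dBi(31.1, 0.719))   # measurement: about 32.5 dBi
```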

  • An Analytical Model of AC-DC Charge Pump Voltage Multipliers

    Toru TANZAWA  

     
    PAPER-Integrated Electronics

      Vol:
    E99-C No:1
      Page(s):
    108-118

    This paper proposes an analytical, closed-form AC-DC voltage multiplier model and investigates the dependency of the output current and input power on circuit and device parameters. The model uses no fitting parameters and includes a frequency term applicable to multipliers built with either diodes or metal-oxide-semiconductor field-effect transistors (MOSFETs). The analysis enables circuit designers to estimate circuit parameters, such as the number of stages and the capacitance per stage, and device parameters, such as the saturation current (for diodes) or the transconductance (for MOSFETs). Comparisons of the proposed model with SPICE simulation results as well as with other models are provided for validation. In addition, design optimization and the impact of the AC power source impedance on the output power are investigated.

  • On Recursive Representation of Optimum Projection Matrix

    Norisato SUGA  Toshihiro FURUKAWA  

     
    LETTER-Digital Signal Processing

      Vol:
    E99-A No:1
      Page(s):
    412-416

    In this letter, we show a recursive representation of the optimum projection matrix. Recursive representations of the orthogonal projection and the oblique projection have been derived in previous work. These projections are optimum when the noise consists only of white noise or only of structured noise. In some practical applications, however, the desired signal is degraded by both white noise and structured noise. For this situation, the optimum projection matrix was given by Behrens, but a recursive representation of it has not been reported. Therefore, in this letter, we propose a recursive representation of this projection matrix.
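    For reference, the classical rank-one recursion for the orthogonal projection matrix, one of the two known cases mentioned above, can be sketched as follows; the letter's own result, the analogous recursion for Behrens' optimum projection under combined white and structured noise, is not reproduced here.

```python
import numpy as np

# Classical rank-one update of the orthogonal projection matrix: given the
# projector P onto span(H), return the projector onto span([H, h]) when a new
# column h is appended. (This is the known orthogonal-projection recursion the
# letter refers to, not the recursion it derives for Behrens' projection.)
def update_orthogonal_projection(P, h):
    r = h - P @ h                 # component of h outside the current subspace
    norm2 = r @ r
    if norm2 < 1e-12:             # h already lies in span(H): projector unchanged
        return P
    return P + np.outer(r, r) / norm2

# Usage: starting from the zero matrix and adding the columns of H one by one
# reproduces the orthogonal projector onto the column space of H.
```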

  • Approximately-Zero Correlation Zone Sequence Set

    Sayuri FUKUI  Masanori HAMAMURA  

     
    PAPER

      Vol:
    E99-A No:1
      Page(s):
    159-166

    An algorithm that finds a set of real-valued approximately-zero correlation zone (AZCZ) sequences is proposed on the basis of the concept of feedback-controlled direct-sequence code-division multiple access (FC/DS-CDMA). Ordinary algorithms can construct low correlation zone (LCZ) and zero correlation zone (ZCZ) sequence sets, but the choices of the number of sequences, the sequence length, and the LCZ or ZCZ length are limited. It is shown that the proposed algorithm finds AZCZ sequence sets numerically under arbitrary conditions. The properties of the AZCZ sequence sets are evaluated in terms of their autocorrelation and cross-correlation functions. It is shown that the periodic autocorrelation and cross-correlation functions take small values within the designated AZCZ. It is also shown that the proposed algorithm can construct approximately-perfect sequences, whose autocorrelation functions are approximately ideal, and new sequence sets that have multiple AZCZs.
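    The zone property referred to here can be checked numerically; the sketch below is only a generic evaluation of periodic correlations over a candidate zone, not the FC/DS-CDMA-based construction algorithm itself.

```python
import numpy as np

# Generic check of a correlation zone (not the proposed construction algorithm):
# compute periodic correlations of real-valued sequences and report the largest
# magnitude over shifts inside a candidate zone of length Z. For an AZCZ set,
# these maxima should be approximately zero for every sequence pair.
def periodic_correlation(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    return np.array([np.dot(a, np.roll(b, -tau)) for tau in range(len(a))])

def max_in_zone(a, b, Z, autocorrelation=False):
    r = periodic_correlation(a, b)
    shifts = range(1, Z + 1) if autocorrelation else range(0, Z + 1)
    return max(abs(r[tau]) for tau in shifts)   # shift 0 excluded for autocorrelation
```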

  • A Speech Enhancement Algorithm Based on Blind Signal Cancelation in Diffuse Noise Environments

    Jaesik HWANG  Jaepil SEO  Ji-Won CHO  Hyung-Min PARK  

     
    LETTER-Speech and Hearing

      Vol:
    E99-A No:1
      Page(s):
    407-411

    This letter describes a speech enhancement algorithm for stereo signals corrupted by diffuse noise. It estimates the noise signal and a beamformed target signal based on blind target-signal cancelation derived from sparsity minimization. Enhanced target speech is obtained by Wiener filtering using both signals. Experimental results demonstrate the effectiveness of the proposed method.
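    Only the final Wiener-filtering step lends itself to a short illustration; the sketch below assumes per-frequency power estimates of the beamformed target and the noise are already available (the letter's blind cancelation stage that produces them is not reproduced).

```python
import numpy as np

# Generic Wiener gain in the time-frequency domain (illustration of the final
# enhancement step only; the blind target-cancelation stage that estimates the
# target and noise signals is not reproduced here).
def wiener_gain(target_power, noise_power, gain_floor=1e-3):
    gain = target_power / (target_power + noise_power + 1e-12)
    return np.maximum(gain, gain_floor)   # floor limits musical-noise artifacts

# enhanced_spectrum = wiener_gain(target_power_est, noise_power_est) * noisy_spectrum
```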

  • A Collision Attack on a Double-Block-Length Compression Function Instantiated with 8-/9-Round AES-256

    Jiageng CHEN  Shoichi HIROSE  Hidenori KUWAKADO  Atsuko MIYAJI  

     
    PAPER

      Vol:
    E99-A No:1
      Page(s):
    14-21

    This paper presents the first non-trivial collision attack on the double-block-length compression function presented at FSE 2006 instantiated with round-reduced AES-256: f0(h0||h1,M)||f1(h0||h1,M) such that f0(h0||h1,M) = E_{h1||M}(h0)⊕h0, f1(h0||h1,M) = E_{h1||M}(h0⊕c)⊕h0⊕c, where || represents concatenation, E is AES-256 and c is a 16-byte non-zero constant. The proposed attack is a free-start collision attack using the rebound attack proposed by Mendel et al. The success of the proposed attack largely depends on the configuration of the constant c: the number of its non-zero bytes and their positions. For the instantiation with AES-256 reduced from 14 rounds to 8 rounds, it is effective if the constant c has at most four non-zero bytes at some specific positions, and the time complexity is 2^64 or 2^96. For the instantiation with AES-256 reduced to 9 rounds, it is effective if the constant c has four non-zero bytes at some specific positions, and the time complexity is 2^120. The space complexity is negligible in both cases.
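    The attacked compression function itself is fully specified above, so a short sketch of it is given below; it assumes the pycryptodome package and full 14-round AES-256 (the 8-/9-round reduced variants targeted by the attack, and the rebound attack itself, are not modeled).

```python
# Sketch of the double-block-length compression function under attack, using
# full AES-256 from pycryptodome (pip install pycryptodome). The paper attacks
# 8-/9-round reduced AES-256; round reduction is not modeled here.
from Crypto.Cipher import AES

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def compress(h0, h1, M, c):
    """h0, h1, M are 16-byte blocks; c is a 16-byte non-zero constant."""
    E = AES.new(h1 + M, AES.MODE_ECB)      # 256-bit key = h1 || M
    f0 = xor(E.encrypt(h0), h0)            # E_{h1||M}(h0) xor h0
    t = xor(h0, c)
    f1 = xor(E.encrypt(t), t)              # E_{h1||M}(h0 xor c) xor (h0 xor c)
    return f0 + f1                         # 256-bit output f0 || f1
```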

  • LSA-X: Exploiting Productivity Factors in Linear Size Adaptation for Analogy-Based Software Effort Estimation

    Passakorn PHANNACHITTA  Akito MONDEN  Jacky KEUNG  Kenichi MATSUMOTO  

     
    PAPER-Software Engineering

      Publicized:
    2015/10/15
      Vol:
    E99-D No:1
      Page(s):
    151-162

    Analogy-based software effort estimation has gained a considerable amount of attention in current research and practice. Its excellent estimation accuracy relies on its solution adaptation stage, where an effort estimate is produced from similar past projects. This study proposes a solution adaptation technique named LSA-X that introduces an approach to exploiting the potential of productivity factors, i.e., project variables with a high correlation with software productivity, in the solution adaptation stage. The LSA-X technique tailors the exploitation of the productivity factors with a procedure based on the Linear Size Adaptation (LSA) technique. The results, based on 19 datasets, show that in circumstances where a dataset exhibits a high correlation coefficient between productivity and a related factor (r ≥ 0.30), the proposed LSA-X technique statistically outperformed (95% confidence) the 8 other commonly used techniques compared in this study. In other circumstances, our results suggest using any linear adaptation technique based on software size to compensate for the limitations of the LSA-X technique.
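    For context, plain Linear Size Adaptation (LSA), the baseline that LSA-X extends, can be sketched as follows; the productivity-factor tailoring that distinguishes LSA-X is not reproduced, and the numbers in the example are hypothetical.

```python
# Sketch of plain Linear Size Adaptation (LSA), the baseline LSA-X builds on:
# each analogue's effort is rescaled by the ratio of the target project's size
# to the analogue's size, and the adapted values are averaged. The LSA-X
# productivity-factor adjustment is not reproduced here.
def lsa_estimate(target_size, analogues):
    """analogues: list of (effort, size) pairs of the k most similar projects."""
    adapted = [effort * (target_size / size) for effort, size in analogues]
    return sum(adapted) / len(adapted)

# Hypothetical example: a target of size 120 with two analogues.
print(lsa_estimate(120, [(300, 100), (450, 150)]))   # -> 360.0
```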

  • A Proposal of Access Point Selection Method Based on Cooperative Movement of Both Access Points and Users

    Ryo HAMAMOTO  Tutomu MURASE  Chisa TAKANO  Hiroyasu OBATA  Kenji ISHIDA  

     
    PAPER-Wireless System

      Publicized:
    2015/09/15
      Vol:
    E98-D No:12
      Page(s):
    2048-2059

    In recent years, wireless Local Area Networks (wireless LANs) based on the IEEE 802.11 standard have spread rapidly, and connecting to the Internet using wireless LANs has become more common. In addition, public wireless LAN service areas, such as train stations, hotels, and airports, are increasing, and tethering technology has enabled smartphones to act as access points (APs). Consequently, there can be multiple APs in the same area, and users must select one of them. Various studies have proposed and evaluated AP selection methods; however, existing methods do not consider AP mobility. In this paper, we propose an AP selection method based on the cooperative movement of both APs and users. Moreover, we demonstrate that the proposed method dramatically improves throughput compared to an existing method.

  • ECC-Based Bit-Write Reduction Code Generation for Non-Volatile Memory

    Masashi TAWADA  Shinji KIMURA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER-High-Level Synthesis and System-Level Design

      Vol:
    E98-A No:12
      Page(s):
    2494-2504

    Non-volatile memory has many advantages, such as high density and low leakage power, but it consumes more write energy than SRAM, so reducing write energy is essential in non-volatile memory design. In this paper, we propose write-reduction codes based on error-correcting codes and reduce the write energy of non-volatile memory by decreasing the number of written bits. When data are written into memory cells, they are not written directly but are first encoded into a codeword. In our write-reduction codes, every data word corresponds to an information vector in an error-correcting code, and an information vector corresponds not to a single codeword but to a set of write-reduction codewords. Given the data to be written and the current memory bits, we can deterministically select the particular write-reduction codeword corresponding to the data, where the maximum number of flipped bits is theoretically minimized; the number of bits written into the memory cells is then also minimized. Experimental results demonstrate an average 51% reduction in written bits and an average 33% reduction in energy compared to non-encoded memory.
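    The selection step described above can be illustrated with a toy example; the sketch below assumes the set of write-reduction codewords for a data word is already given (the paper's ECC-based construction of these sets is not reproduced).

```python
# Toy illustration of the codeword-selection step: among the write-reduction
# codewords that all encode the same information vector, choose the one whose
# Hamming distance to the bits currently stored is smallest, so the fewest
# bits are flipped. The ECC-based construction of the candidate set itself is
# not reproduced here.
def hamming(a, b):
    return bin(a ^ b).count("1")

def select_codeword(current_bits, candidate_codewords):
    return min(candidate_codewords, key=lambda w: hamming(w, current_bits))

# Hypothetical 8-bit example: four codewords representing the same data word.
current = 0b10110010
candidates = [0b11110000, 0b10110111, 0b00110010, 0b10001101]
best = select_codeword(current, candidates)
print(bin(best), hamming(best, current))   # picks 0b00110010, flipping 1 bit
```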

  • High Performance VLSI Architecture of H.265/HEVC Intra Prediction for 8K UHDTV Video Decoder

    Jianbin ZHOU  Dajiang ZHOU  Shihao WANG  Takeshi YOSHIMURA  Satoshi GOTO  

     
    PAPER-High-Level Synthesis and System-Level Design

      Vol:
    E98-A No:12
      Page(s):
    2519-2527

    8K Ultra High Definition Television (UHDTV) requires extremely high throughput for H.265-based video decoding. In H.265, intra coding significantly enhances video compression efficiency at the expense of increased computational complexity compared with H.264. For the intra prediction of real-time 8K UHDTV H.265 decoding, this joint complexity and throughput issue is even more difficult to solve. Therefore, based on a divide-and-conquer strategy, we propose a new VLSI architecture with two techniques to achieve 8K UHDTV H.265 intra prediction decoding. The first technique is the LUT-based Reference Sample Fetching Scheme (LUT-RSFS), which reduces the worst-case number of reference samples from 99 to 13, thereby reducing the circuit area and enhancing performance. The second is Hybrid Block Reordering and Data Forwarding (HBRDF), which minimizes idle time and eliminates the dependencies between TUs by creating 3 data forwarding paths, achieving a hardware utilization of 94%. Our design is synthesized using Synopsys Design Compiler in a 40nm process technology. It achieves an operating frequency of 260MHz, with a gate count of 217.8K for the 8-bit design and 251.1K for the 10-bit design. The proposed VLSI architecture can support 4320p@120fps H.265 intra decoding (8-bit or 10-bit), with all 35 intra prediction modes and prediction unit sizes ranging from 4×4 to 64×64.

  • Circularity of the Fractional Fourier Transform and Spectrum Kurtosis for LFM Signal Detection in Gaussian Noise Model

    Guang Kuo LU  Man Lin XIAO  Ping WEI  Hong Shu LIAO  

     
    LETTER-Digital Signal Processing

      Vol:
    E98-A No:12
      Page(s):
    2709-2712

    This letter investigates the circularity of fractional Fourier transform (FRFT) coefficients containing noise only, and proves, via the discrete FRFT, that all coefficients arising from white Gaussian noise are circular. In order to use the spectrum kurtosis (SK) as a Gaussianity test to check whether linear frequency modulation (LFM) signals are present in a set of FRFT points, the effect of the noncircularity of Gaussian variables on the SK of FRFT coefficients is studied. The SK of the αth-order FRFT coefficients for LFM signals embedded in white Gaussian noise is also derived in this letter. Finally, a signal detection algorithm based on the FRFT and SK is proposed. The effectiveness and robustness of this algorithm are evaluated via simulations at low SNR and with weak signal components.
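    A common form of the spectrum-kurtosis statistic used as a Gaussianity test can be sketched as follows; the exact estimator and detection threshold of the letter are not reproduced, and the FRFT is assumed to have been applied already.

```python
import numpy as np

# Common spectrum-kurtosis estimator applied to a matrix of transform
# coefficients (rows = independent realizations, columns = FRFT bins). For
# circular complex Gaussian noise the statistic is near 0, so bins carrying an
# LFM component stand out. The letter's exact estimator/threshold may differ.
def spectrum_kurtosis(X):
    p2 = np.mean(np.abs(X) ** 2, axis=0)        # 2nd-order moment per bin
    p4 = np.mean(np.abs(X) ** 4, axis=0)        # 4th-order moment per bin
    return p4 / (p2 ** 2 + 1e-30) - 2.0

# Sanity check with circular complex Gaussian noise: values scatter around 0.
rng = np.random.default_rng(0)
noise = (rng.standard_normal((500, 64)) + 1j * rng.standard_normal((500, 64))) / np.sqrt(2)
print(np.round(np.mean(spectrum_kurtosis(noise)), 2))
```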

  • High Efficiency CU Depth Prediction Algorithm for High Resolution Applications of HEVC

    Xiantao JIANG  Tian SONG  Wen SHI  Takashi SHIMAMOTO  Lisheng WANG  

     
    PAPER-High-Level Synthesis and System-Level Design

      Vol:
    E98-A No:12
      Page(s):
    2528-2536

    The purpose of this work is to reduce redundant coding processes while trading off encoding complexity against coding efficiency in HEVC, especially for high-resolution applications. To this end, a CU depth prediction algorithm is proposed for the motion estimation process of HEVC. First, an efficient CTU depth prediction algorithm is proposed to reduce redundant depths. Then, a CU size termination and skip algorithm is proposed based on the depth of neighboring blocks and motion consistency. Finally, the overall algorithm, which has excellent complexity-reduction performance for high-resolution applications, is proposed. Moreover, the proposed method achieves steady performance and can significantly reduce the encoding time under different environment configurations and quantization parameters. The simulation results demonstrate that, in the RA case, the average time saving is about 56% with only 0.79% BD-bitrate loss for high-resolution sequences, and this performance is better than the previous state-of-the-art work.

  • Top-Down Visual Attention Estimation Using Spatially Localized Activation Based on Linear Separability of Visual Features

    Takatsugu HIRAYAMA  Toshiya OHIRA  Kenji MASE  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2015/09/10
      Vol:
    E98-D No:12
      Page(s):
    2308-2316

    Intelligent information systems captivate people's attention. Examples of such systems include driving support vehicles capable of sensing driver state and communication robots capable of interacting with humans. Modeling how people search for visual information is indispensable for designing these kinds of systems. In this paper, we focus on human visual attention, which is closely related to visual search behavior. We propose a computational model to estimate human visual attention during a visual target search task. Existing models estimate visual attention using the ratio between a representative value of a visual feature of the target stimulus and that of the distractors or background. These models, however, often cannot achieve good performance on difficult search tasks that require a sequential spotlighting process. For such tasks, the linear separability effect of a visual feature distribution should be considered. Hence, we introduce this effect into spatially localized activation. Concretely, our top-down model estimates target-specific visual attention using Fisher's variance ratio between the visual feature distribution of a local region in the field of view and that of the target stimulus. We confirm the effectiveness of our computational model through a visual search experiment.
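    The Fisher variance ratio mentioned above, in its generic one-dimensional form, can be sketched as follows; how the paper localizes regions and combines multiple features is not reproduced.

```python
import numpy as np

# Generic one-dimensional Fisher variance (separability) ratio between two
# samples of a visual feature, e.g. values from a local region of the view and
# values of the target stimulus. How the model localizes regions and combines
# features is not reproduced here.
def fisher_ratio(region_values, target_values):
    r = np.asarray(region_values, float)
    t = np.asarray(target_values, float)
    between = (r.mean() - t.mean()) ** 2        # squared mean difference
    within = r.var() + t.var()                  # combined within-class variance
    return between / (within + 1e-12)

# A large ratio indicates that the target's feature values are linearly well
# separated from those of the local region.
```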

  • The Fault-Tolerant Hamiltonian Problems of Crossed Cubes with Path Faults

    Hon-Chan CHEN  Tzu-Liang KUNG  Yun-Hao ZOU  Hsin-Wei MAO  

     
    PAPER-Switching System

      Publicized:
    2015/09/15
      Vol:
    E98-D No:12
      Page(s):
    2116-2122

    In this paper, we investigate the fault-tolerant Hamiltonian problems of crossed cubes with a faulty path. More precisely, let P denote any path in an n-dimensional crossed cube CQn for n ≥ 5, and let V(P) be the vertex set of P. We show that CQn - V(P) is Hamiltonian if |V(P)| ≤ n and is Hamiltonian connected if |V(P)| ≤ n-1. Compared with the previous results showing that the crossed cube is (n-2)-fault-tolerant Hamiltonian and (n-3)-fault-tolerant Hamiltonian connected for arbitrary faults, the contribution of this paper indicates that the crossed cube can tolerate more faulty vertices if these vertices happen to form some specific types of structures.

  • Parameterization of High-Dimensional Perfect Sequences over a Composition Algebra over R

    Takao MAEDA  Yodai WATANABE  Takafumi HAYASHI  

     
    PAPER-Sequence

      Vol:
    E98-A No:12
      Page(s):
    2439-2445

    To analyze the structure of a set of high-dimensional perfect sequences over a composition algebra over R, we developed the theory of Fourier transforms of the set of such sequences. We define the discrete cosine transform and the discrete sine transform, and we show that there exists a relationship between these transforms and a convolution of sequences. By applying this property to a set of perfect sequences, we obtain a parameterization theorem. Using this theorem, we show the equivalence between the left perfectness and right perfectness of sequences. For sequences of real numbers, we obtain the parameterization without restrictions on the parameters.

  • A Fast Settling All Digital PLL Using Temperature Compensated Oscillator Tuning Word Estimation Algorithm

    Keisuke OKUNO  Shintaro IZUMI  Kana MASAKI  Hiroshi KAWAGUCHI  Masahiko YOSHIMOTO  

     
    PAPER-Circuit Design

      Vol:
    E98-A No:12
      Page(s):
    2592-2599

    This report describes an all-digital phase-locked loop (ADPLL) using a temperature-compensated settling-time reduction technique. The novelty of this work is autonomous oscillation control word estimation without a look-up table or memory circuits. The proposed ADPLL employs a multi-phase digitally controlled oscillator (DCO). In the proposed estimation method, the optimum oscillator tuning word (OTW) is estimated from the DCO frequency characteristic in the setup phase of the ADPLL. The proposed ADPLL, which occupies 0.27×0.36 mm², is fabricated in a 65 nm CMOS process. The temperature compensation PLL controller (TCPC) is implemented using an FPGA. Although the proposed method has a 20% area overhead, measurement results show that the settling time is reduced by 47%. The average settling time at 25°C is 3µs. The average energy reduction is at least 42% from 0°C to 100°C.

  • Disavowable Public Key Encryption with Non-Interactive Opening

    Ai ISHIDA  Keita EMURA  Goichiro HANAOKA  Yusuke SAKAI  Keisuke TANAKA  

     
    PAPER-Cryptography and Information Security

      Vol:
    E98-A No:12
      Page(s):
    2446-2455

    The primitive called public key encryption with non-interactive opening (PKENO) is a class of public key encryption (PKE) with additional functionality: the receiver of a ciphertext can prove, in a publicly verifiable manner, that the ciphertext is an encryption of a specified message. In some situations, a receiver needs to claim that a ciphertext does NOT decrypt to a specified message. If he/she proves this fact by using PKENO straightforwardly, the real message of the ciphertext is revealed, and the verifier checks that it differs from the specified message. However, this naive solution is problematic in terms of privacy. Inspired by this problem, we propose the notion of disavowable public key encryption with non-interactive opening (disavowable PKENO), where, with respect to a ciphertext and a message, the receiver of the ciphertext can issue a proof that the plaintext of the ciphertext is NOT that message. We also give a concrete construction. Specifically, a disavowal proof in our scheme consists of 61 group elements. The proposed disavowable PKENO scheme is provably secure in the standard model under the decisional linear assumption and the strong unforgeability of the underlying one-time signature scheme.

  • Lines of Comments as a Noteworthy Metric for Analyzing Fault-Proneness in Methods

    Hirohisa AMAN  Sousuke AMASAKI  Takashi SASAKI  Minoru KAWAHARA  

     
    PAPER-Software Engineering

      Publicized:
    2015/09/04
      Vol:
    E98-D No:12
      Page(s):
    2218-2228

    This paper focuses on the power of comments to predict fault-prone programs. In general, comments along with executable statements enhance the understandability of programs. However, comments may also be used to mask a lack of readability in the program; therefore, well-written comments are referred to as “deodorant to mask code smells” in the field of code refactoring. This paper conducts an empirical analysis to examine whether Lines of Comments (LCM) written inside a method's body is a noteworthy metric for analyzing fault-proneness in Java methods. The empirical results show the following two findings: (1) more-commented methods (methods having more comments than the amount estimated from their size and complexity) are about 1.6-2.8 times more likely to be faulty than the others, and (2) LCM can be a useful factor in fault-prone method prediction models along with the method size and the method complexity.
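    One simple way to operationalize "more-commented methods" as defined above is sketched below; the paper's actual estimation procedure is not reproduced, and the data in the example are hypothetical.

```python
import numpy as np

# Sketch: regress lines of comments (LCM) on method size (LOC) and complexity,
# then flag methods whose actual LCM exceeds the regression estimate. This is
# one simple reading of "more comments than the amount estimated by size and
# complexity"; the paper's exact procedure is not reproduced.
def flag_more_commented(loc, complexity, lcm):
    loc = np.asarray(loc, float)
    complexity = np.asarray(complexity, float)
    lcm = np.asarray(lcm, float)
    X = np.column_stack([np.ones_like(loc), loc, complexity])
    coef, *_ = np.linalg.lstsq(X, lcm, rcond=None)
    return lcm > X @ coef                        # True = more-commented method

# Hypothetical data for five methods: LOC, cyclomatic complexity, LCM.
print(flag_more_commented([10, 25, 40, 60, 80], [1, 3, 4, 7, 9], [2, 3, 12, 6, 10]))
```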

  • A Study of Physical Design Guidelines in ThruChip Inductive Coupling Channel

    Li-Chung HSU  Junichiro KADOMOTO  So HASEGAWA  Atsutake KOSUGE  Yasuhiro TAKE  Tadahiro KURODA  

     
    PAPER-Physical Level Design

      Vol:
    E98-A No:12
      Page(s):
    2584-2591

    ThruChip interface (TCI) is an emerging wireless interface for three-dimensional (3-D) integrated circuit (IC) technology. However, the TCI physical design guidelines remain unclear. In this paper, a ThruChip test chip is designed and fabricated to explore such design guidelines. Three inductive-coupling interface physical design scenarios, baseline, power mesh, and dummy metal fill, are deployed in the test chip. In the baseline scenario, the test chip measurement results show that thinning the chip or enlarging the coil dimensions can further reduce TCI power. The power mesh scenario shows that the eddy current on the power mesh can dramatically reduce the magnetic pulse signal and thus possibly cause the TCI to fail. A power mesh splitting method is proposed to effectively suppress the eddy current impact while minimizing the impact on the power mesh structure. The simulation results show that the proposed method can recover 77% of the coupling coefficient loss while introducing only an additional 0.5% IR drop. In the dummy metal fill case, dummy metal fill enclosed within the TCI coils has no impact on TCI transmission and can thus be ignored.
