
Keyword Search Result

[Keyword] (42807 hits)

Results 6861-6880 of 42807

  • Up-Stream Dispatching of Power by Density of Power Packet

    Shinya NAWATA  Ryo TAKAHASHI  Takashi HIKIHARA  

     
    LETTER-Systems and Control
    Vol: E99-A No:12  Page(s): 2581-2584

    A power packet is a unit of electric power transferred as a pulse with an information tag. This letter discusses up-stream dispatching of the power required at loads to sources through density modulation of power packets. Power is adjusted at a proposed router, which dispatches power packets according to their tags. The scheme is analyzed by the averaging method and verified numerically.

  • Threshold of Overflow Probability Using Smooth Max-Entropy in Lossless Fixed-to-Variable Length Source Coding for General Sources

    Shota SAITO  Toshiyasu MATSUSHIMA  

     
    LETTER-Source Coding and Data Compression
    Vol: E99-A No:12  Page(s): 2286-2290

    We treat lossless fixed-to-variable length source coding for general sources in the finite block length setting. We evaluate the threshold of the overflow probability for prefix and non-prefix codes in terms of the smooth max-entropy, and clarify the difference between the thresholds of prefix and non-prefix codes at finite block length. Further, we discuss our results in the asymptotic block length setting.
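
    As background (standard one-shot source coding notation, not taken from the paper, whose definition for general sources may differ in detail), a common form of the smooth max-entropy with smoothing parameter δ is

        H_{\max}^{\delta}(X) = \min_{A \subseteq \mathcal{X} :\ \Pr[X \in A] \ge 1 - \delta} \log |A|,

    i.e., the logarithm of the size of the smallest set that captures all but δ of the probability mass; the overflow-probability threshold is then expressed in terms of this quantity.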

  • Is Caching a Key to Energy Reduction of NDN Networks?

    Junji TAKEMASA  Yuki KOIZUMI  Toru HASEGAWA  

     
    PAPER
    Vol: E99-B No:12  Page(s): 2489-2497

    Energy efficiency is an important requirement for forthcoming NDN (Named Data Networking) networks, and the caching inherent to NDN is a main driver of energy reduction in such networks. This paper addresses the research question “Does caching really reduce the energy consumption of the entire network?” To answer it, we precisely estimate how caching reduces the energy consumption of forthcoming commercial NDN networks by carefully considering the configurations of NDN routers. The estimation reveals that the energy reduction due to caching depends on the energy proportionality of NDN routers.

  • Energy Efficient Information Retrieval for Content Centric Networks in Disaster Environment

    Yusaku HAYAMIZU  Tomohiko YAGYU  Miki YAMAMOTO  

     
    PAPER
    Vol: E99-B No:12  Page(s): 2509-2519

    Communication infrastructures hit by a disaster, e.g., an earthquake, will be partitioned because of significant damage to network components such as base stations. The communication model of the Internet is based on location-oriented IDs, i.e., IP addresses, and depends on the DNS (Domain Name System) for name resolution, so such damage severely degrades reachability to information. To achieve robust information retrieval in disaster situations, we apply CCN/NDN (Content-Centric Networking/Named-Data Networking) to networks fragmented by a disaster. However, the existing retransmission control in CCN is not suitable for fragmented networks with intermittent links because of its timer-based end-to-end behavior, and the intermittent links also disturb cache behavior. To resolve these technical issues, we propose a new packet forwarding scheme with a dynamic routing protocol that resolves the retransmission control problem, together with a cache control scheme suitable for fragmented networks. Our simulation results reveal that the proposed caching scheme can stably store popular contents in the cache storages of routers and improve the cache hit ratio, and that the proposed packet forwarding method significantly improves traffic load, energy consumption, and content retrieval delay in fragmented networks.

  • Joint Optimization of Peak-to-Average Power Ratio and Spectral Leakage in NC-OFDM

    Peng WEI  Lilin DAN  Yue XIAO  Shaoqian LI  

     
    PAPER-Wireless Communication Technologies
    Publicized: 2016/06/21  Vol: E99-B No:12  Page(s): 2592-2599

    High peak-to-average power ratio (PAPR) and spectral leakage are two main problems of orthogonal frequency division multiplexing (OFDM) systems. To alleviate them, this paper proposes a joint model that efficiently suppresses both PAPR and spectral leakage by combining serial peak cancellation (SPC) and time-domain N-continuous OFDM (TD-NC-OFDM) in an iterative way. Furthermore, we give an analytical expression of the proposed joint model to analyze the mutual effects between SPC and TD-NC-OFDM. Simulation results confirm that the joint optimization model achieves notable PAPR reduction and sidelobe suppression with low implementation cost.
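
    For readers who want to see the metric itself in isolation, the following minimal sketch (our illustration only, not the authors' SPC/TD-NC-OFDM method; the function name and oversampling factor are arbitrary) computes the PAPR of a single OFDM symbol from its frequency-domain subcarrier values:

        import numpy as np

        def papr_db(subcarriers, oversample=4):
            """PAPR (in dB) of one OFDM symbol; oversampling approximates the analog peak."""
            n = len(subcarriers)
            padded = np.zeros(n * oversample, dtype=complex)
            padded[:n // 2] = subcarriers[:n // 2]          # zero-pad the middle of the spectrum
            padded[-(n - n // 2):] = subcarriers[n // 2:]
            x = np.fft.ifft(padded)                         # oversampled time-domain samples
            power = np.abs(x) ** 2
            return 10 * np.log10(power.max() / power.mean())

        # Example: 64 random QPSK subcarriers
        # qpsk = (np.random.choice([-1, 1], 64) + 1j * np.random.choice([-1, 1], 64)) / np.sqrt(2)
        # print(papr_db(qpsk))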

  • The Improvement of the Processes of a Class of Graph-Cut-Based Image Segmentation Algorithms

    Shengxiao NIU  Gengsheng CHEN  

     
    PAPER-Fundamentals of Information Systems
    Publicized: 2016/09/14  Vol: E99-D No:12  Page(s): 3053-3059

    In this paper, an analysis of the basic process of a class of interactive-graph-cut-based image segmentation algorithms indicates that it is unnecessary to construct n-links for all adjacent pixel nodes of an image before calculating the maximum flow and the minimum cuts; for many pixel nodes it is not necessary to construct n-links at all. Based on this, we propose a new algorithm that dynamically constructs only the necessary n-links, namely those connecting the pixel nodes explored by the maximum flow algorithm. These n-links are constructed dynamically and without redundancy during the maximum flow computation. The Berkeley segmentation dataset benchmark is used to show that this method reduces the average running time of segmentation while producing correct segmentation results. The improvement can be applied to any segmentation algorithm based on graph cuts.

  • A New Algorithm for Reducing Components of a Gaussian Mixture Model

    Naoya YOKOYAMA  Daiki AZUMA  Shuji TSUKIYAMA  Masahiro FUKUI  

     
    PAPER
    Vol: E99-A No:12  Page(s): 2425-2434

    In statistical methods such as statistical static timing analysis, a Gaussian mixture model (GMM) is a useful tool for representing a non-Gaussian distribution and handling correlation easily. In order to repeat various statistical operations, such as summation and maximum, on GMMs efficiently, the number of components should be restricted to around two. In this paper, we propose a method for reducing the number of components of a given GMM to two (2-GMM). Moreover, since the distribution of each component is often represented by a linear combination of explanatory variables, we propose a method to compute the covariance between each explanatory variable and the obtained 2-GMM, that is, the sensitivity of the 2-GMM to each explanatory variable. We show experimental results to evaluate the performance of the proposed methods. The proposed methods minimize the normalized integral square error of the probability density function of the 2-GMM at the cost of some accuracy in the sensitivities of the 2-GMM.
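
    As a rough illustration of component reduction (a generic one-dimensional moment-matching merge, written by us; it is not the ISE-minimizing procedure proposed in the paper and it ignores explanatory variables), two weighted Gaussian components can be collapsed while preserving the first two moments:

        def merge_two(w1, mu1, var1, w2, mu2, var2):
            """Moment-matched merge of two weighted 1-D Gaussian components."""
            w = w1 + w2
            mu = (w1 * mu1 + w2 * mu2) / w
            # second moment of the pair minus the squared merged mean
            var = (w1 * (var1 + mu1 ** 2) + w2 * (var2 + mu2 ** 2)) / w - mu ** 2
            return w, mu, var

        def reduce_to_2gmm(components):
            """components: list of (weight, mean, variance); greedily merge the closest means."""
            comps = sorted(components, key=lambda c: c[1])
            while len(comps) > 2:
                i = min(range(len(comps) - 1),
                        key=lambda k: comps[k + 1][1] - comps[k][1])   # closest adjacent pair
                comps[i:i + 2] = [merge_two(*comps[i], *comps[i + 1])]
                comps.sort(key=lambda c: c[1])
            return comps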

  • New Non-Asymptotic Bounds on Numbers of Codewords for the Fixed-Length Lossy Compression

    Tetsunao MATSUTA  Tomohiko UYEMATSU  

     
    PAPER-Source Coding and Data Compression
    Vol: E99-A No:12  Page(s): 2116-2129

    In this paper, we deal with fixed-length lossy compression, where a fixed-length sequence emitted from the information source is encoded into a codeword and the source sequence is reproduced from the codeword with a certain distortion. We give lower and upper bounds on the minimum number of codewords such that the probability of exceeding a given distortion level is less than a given probability. These bounds are characterized using the α-mutual information of order infinity. Further, for i.i.d. binary sources, we provide numerical examples of tight upper bounds that are computable in time polynomial in the block length.
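
    To make the quantity being bounded concrete (our notation, not necessarily the paper's), the minimum number of codewords at block length n, distortion level D, and excess-distortion probability ε can be written as

        M^{*}(n, D, \varepsilon) = \min\{ M : \exists\, f : \mathcal{X}^{n} \to \{1,\dots,M\},\ g : \{1,\dots,M\} \to \hat{\mathcal{X}}^{n} \ \text{s.t.}\ \Pr[\, d_{n}(X^{n}, g(f(X^{n}))) > D \,] \le \varepsilon \},

    and the paper's lower and upper bounds on this quantity are stated via the α-mutual information of order infinity.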

  • Two-Level Popularity-Oriented Cache Replacement Policy for Video Delivery over CCN

    Haipeng LI  Hidenori NAKAZATO  

     
    PAPER
    Vol: E99-B No:12  Page(s): 2532-2540

    We introduce a novel cache replacement policy to improve the overall network performance of video delivery over content-centric networking (CCN). For the CCN structure, we argue that: 1) in video multiplexing scenarios, general cache strategies that ignore the intrinsic linear-time characteristic of video requests cannot make good use of cache resources, and 2) it is inadequate to simply extend existing conclusions on file-level popularity to the chunk-by-chunk popularity widely used in CCN. Unlike previous works in this field, the proposed policy, named the two-level popularity-oriented time-to-hold cache replacement policy (TLP-TTH), is designed on the basis of the following principles. First, the cache replacement strategy is customized for video delivery by carefully considering the essentially auto-correlated request pattern of video chunks within a video file. Furthermore, popularity in video delivery is subdivided into two levels, namely chunk-level access probability and file-level popularity, in order to utilize cache resources efficiently. We evaluated the proposed policy in both a hierarchical topology and a hybrid topology based on a real network, and also took viewer departures into consideration. The results validate that, for video delivery over CCN, the TLP-TTH policy improves network performance in several respects. In particular, the proposed policy not only increases the cache hit ratio at the edge of the network but also markedly improves cache utilization at intermediate routers. Further, when video popularity varies, the cache hit ratio of the TLP-TTH policy responds sensitively so that efficient cache utilization is maintained.

  • Asymptotic Behavior of Error Probability in Continuous-Time Gaussian Channels with Feedback

    Shunsuke IHARA  

     
    PAPER-Shannon Theory
    Vol: E99-A No:12  Page(s): 2107-2115

    We investigate the coding scheme and error probability of information transmission over continuous-time additive Gaussian noise channels with feedback. As is known, the error probability can be substantially reduced by using feedback; namely, under the average power constraint, the error probability may decrease more rapidly than the exponential of any order. Recently, Gallager and Nakiboğlu proposed, for discrete-time additive white Gaussian noise channels, a feedback coding scheme such that the resulting error probability P_e(N) at time N decreases with an exponential order α_N that increases linearly with N. So far, the multiple-exponential decay of the error probability has been studied mostly for white Gaussian channels. In this paper, we treat continuous-time Gaussian channels whose Gaussian noise processes are not necessarily white or stationary. The aim is to prove a stronger result on the multiple-exponential decay of the error probability. More precisely, for any positive constant α, there exists a feedback coding scheme such that the resulting error probability P_e(T) at time T decreases more rapidly than the exponential of order αT as T→∞.

  • Performance Improvement of Error-Resilient 3D DWT Video Transmission Using Invertible Codes

    Kotoku OMURA  Shoichiro YAMASAKI  Tomoko K. MATSUSHIMA  Hirokazu TANAKA  Miki HASEYAMA  

     
    PAPER-Video Coding
    Vol: E99-A No:12  Page(s): 2256-2265

    Many studies have applied the three-dimensional discrete wavelet transform (3D DWT) to video coding. It is known that corruption of the lowest-frequency sub-band (LL) coefficients of the 3D DWT severely degrades the visual quality of video. Recently, we proposed an error-resilient 3D DWT video coding method (the conventional method) that employs dispersive grouping and error concealment (EC). The EC scheme of the conventional method replaces the lost LL coefficients. In this paper, we propose a new 3D DWT video transmission method that further enhances error resilience. The proposed method adopts an error correction scheme using invertible codes to protect the LL coefficients; we use half-rate Reed-Solomon (RS) codes as the invertible codes. Additionally, to improve performance through interleaving, we adopt a new configuration scheme at the RS encoding stage. Computer simulations compare the performance of the proposed method with that of other EC methods and demonstrate its advantage.

  • Average Coding Rate of a Multi-Shot Tunstall Code with an Arbitrary Parsing Tree Sequence

    Mitsuharu ARIMURA  

     
    LETTER-Source Coding and Data Compression
    Vol: E99-A No:12  Page(s): 2281-2285

    The average coding rate of a multi-shot Tunstall code, a variation of variable-to-fixed length (VF) lossless source codes, for stationary memoryless sources is investigated. A multi-shot VF code parses a given source sequence into variable-length blocks and encodes them into fixed-length codewords. If the parsing count is fixed, the overall multi-shot VF code can be treated as a one-shot VF code. For this setting of the Tunstall code, the compression performance is evaluated using two criteria. The first is the average coding rate, defined as the codeword length divided by the average block length. The second is the expectation of the pointwise coding rate. It is proved that both of these coding rates converge to the entropy of the stationary memoryless source under the assumption that the geometric mean of the leaf counts of the multi-shot Tunstall parsing trees goes to infinity.
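
    For reference, the classical (one-shot) Tunstall dictionary construction for a memoryless source can be sketched as below; this is only our illustration of the underlying parsing-tree construction (the function name and interface are ours), not the letter's multi-shot analysis:

        import heapq

        def tunstall_dictionary(probs, max_words):
            """probs: dict mapping a source symbol to its probability.
            max_words: dictionary (codebook) size, at least len(probs).
            Returns the parsing dictionary as (word, probability) pairs."""
            # Leaves of the parsing tree, kept in a heap ordered by probability.
            heap = [(-p, (s,)) for s, p in probs.items()]
            heapq.heapify(heap)
            # Expanding a leaf replaces it by len(probs) children,
            # so the leaf count grows by len(probs) - 1 per expansion.
            while len(heap) + len(probs) - 1 <= max_words:
                neg_p, word = heapq.heappop(heap)      # most probable leaf
                for s, p in probs.items():
                    heapq.heappush(heap, (neg_p * p, word + (s,)))
            return sorted((word, -neg_p) for neg_p, word in heap)

        # Example: binary source with P('0') = 0.7, P('1') = 0.3 and an 8-word dictionary.
        # print(tunstall_dictionary({'0': 0.7, '1': 0.3}, 8))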

  • Range Limiter Using Connection Bounding Box for SA-Based Placement of Mixed-Grained Reconfigurable Architecture

    Takashi KISHIMOTO  Wataru TAKAHASHI  Kazutoshi WAKABAYASHI  Hiroyuki OCHI  

     
    PAPER
    Vol: E99-A No:12  Page(s): 2328-2334

    In this paper, we propose a novel placement algorithm for mixed-grained reconfigurable architectures (MGRAs). An MGRA consists of coarse-grained and fine-grained clusters, in order to implement combined digital systems of high-speed data paths with multi-bit operands and random logic circuits for state machines and bit-wise operations. To accelerate simulated-annealing-based FPGA placement, a range limiter has been proposed that controls the distance between two blocks to be interchanged; however, it is not applicable to MGRAs because of their heterogeneous structure. The proposed range limiter, which uses a connection bounding box, effectively keeps the range-limiter size such that moves across fine-grained blocks in non-adjacent clusters are encouraged. Experimental results show that the proposed method achieves a 47.8% cost reduction in the best case compared with conventional methods.

  • A Highly-Adaptable and Small-Sized In-Field Power Analyzer for Low-Power IoT Devices

    Ryosuke KITAYAMA  Takashi TAKENAKA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER
    Vol: E99-A No:12  Page(s): 2348-2362

    Power analysis of IoT devices is strongly required to protect them from attacks by malicious attackers. It is also very important to reduce the power consumption of IoT devices themselves. In this paper, we propose a highly adaptable and small-sized in-field power analyzer for low-power IoT devices. The proposed power analyzer has the following advantages: (A) it realizes signal-averaging noise reduction with synchronization signal lines and thus can reduce noise over a wide frequency range; (B) it partitions a long-term power analysis process into several analysis segments and measures the voltages and currents of each segment using a small amount of data memory, and by combining these segments we obtain long-term analysis results; (C) it has two amplifiers that amplify current signals adaptively depending on their magnitude, so the maximum readable current can be increased while keeping the minimum readable current small enough. Since (A), (B), and (C) require neither complicated mechanisms nor complicated circuits, the proposed power analyzer is implemented on just a 2.5cm×3.3cm board, the smallest among existing power analyzers for IoT devices. We measured the power and energy consumption of the AES encryption process on an IoT device and demonstrated that the proposed power analyzer has measurement errors of at most 1.17% compared to a high-precision oscilloscope.

  • A Bit-Write-Reducing and Error-Correcting Code Generation Method by Clustering ECC Codewords for Non-Volatile Memories

    Tatsuro KOJO  Masashi TAWADA  Masao YANAGISAWA  Nozomu TOGAWA  

     
    PAPER
    Vol: E99-A No:12  Page(s): 2398-2411

    Non-volatile memories are attracting attention as a promising alternative in memory design, but data stored in them may still be corrupted by crosstalk and radiation. The data can be restored by using error-correcting codes, which require extra bits to correct bit errors. Furthermore, non-volatile memories consume ten to a hundred times more energy than normal memories when writing bits, so when they are configured with error-correcting codes it is essential to reduce the number of written bits. In this paper, we propose a method to generate a bit-write-reducing code with error-correcting ability. We first pick an error-correcting code that can correct t-bit errors. We cluster its codewords and generate a cluster graph satisfying the S-bit flip conditions. We then assign a data value to each cluster; in other words, we generate a one-to-many mapping from each data value to the codewords in its cluster. We prove that, if the cluster graph is a complete graph, any data value in a memory cell can be rewritten into any other data value by flipping at most S bits while keeping the t-bit error-correcting ability. We further propose an efficient method to cluster error-correcting codewords. Experimental results show that the bit-write-reducing and error-correcting codes generated by the proposed method efficiently reduce energy consumption. To the best of our knowledge, this is the first theoretically near-optimal bit-write-reducing code with error-correcting ability.
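
    The selection step implied by this one-to-many mapping can be illustrated with a tiny sketch (ours; it shows only the write-time choice, not the paper's clustering construction): when a data value is written, the encoder picks, from that value's cluster, the codeword closest in Hamming distance to the word currently stored in the cell.

        def codeword_with_fewest_flips(current_word, cluster):
            """current_word: int holding the bits currently stored in the cell.
            cluster: iterable of integer codewords that all decode to the same data value.
            Returns the codeword whose write flips the fewest bits."""
            return min(cluster, key=lambda c: bin(c ^ current_word).count("1"))

        # Example: with 0b10110010 stored, the cluster {0b10110110, 0b01001101}
        # would be written as 0b10110110 (1 bit flip instead of 8).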

  • Low Complexity Reed-Solomon Decoder Design with Pipelined Recursive Euclidean Algorithm

    Kazuhito ITO  

     
    PAPER
    Vol: E99-A No:12  Page(s): 2453-2462

    A Reed-Solomon (RS) decoder is designed based on a pipelined recursive Euclidean algorithm for solving the key equation. While the Euclidean algorithm uses fewer Galois multipliers than the modified Euclidean (ME) and reformulated inversionless Berlekamp-Massey (RiBM) algorithms, it requires division between two Galois field elements. By implementing the division with a multi-cycle Galois inverter and a serial Galois multiplier, the proposed key equation solver architecture achieves lower complexity than conventional ME- and RiBM-based architectures. The proposed RS(255,239) decoder reduces hardware complexity by 25.9% with a 6.5% increase in decoding latency.
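
    For context, the key equation solved here has the standard textbook form (general RS-decoding background, not a detail specific to the proposed architecture): with S(x) the syndrome polynomial, Λ(x) the error locator, and Ω(x) the error evaluator of a t-error-correcting code,

        \Lambda(x)\, S(x) \equiv \Omega(x) \pmod{x^{2t}}, \qquad \deg \Omega(x) < t,

    and the extended Euclidean algorithm applied to x^{2t} and S(x) is iterated until the remainder degree drops below t, at which point the remainder and the accompanying multiplier polynomial give Ω(x) and Λ(x).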

  • A Highly Efficient Switched-Capacitor Voltage Boost Converter with Nano-Watt MPPT Controller for Low-Voltage Energy Harvesting

    Toshihiro OZAKI  Tetsuya HIROSE  Takahiro NAGAI  Keishi TSUBAKI  Nobutaka KUROKI  Masahiro NUMA  

     
    PAPER
    Vol: E99-A No:12  Page(s): 2491-2499

    This paper presents a fully integrated voltage boost converter consisting of a charge pump (CP) and a maximum power point tracking (MPPT) controller for ultra-low-power energy harvesting. The converter is based on a conventional CP circuit and can deliver a wide range of load currents by using nMOS and pMOS driver circuits for highly efficient charge transfer. The proposed MPPT controller dissipates only nanowatts of power to extract the maximum power regardless of the harvester's power generation conditions and the load current. Measurement results demonstrate that the circuit converts a 0.49-V input to a 1.46-V output with 73% power conversion efficiency at an output power of 348µW, and that it can operate at an extremely low input voltage of 0.21V.

  • Hardware-Efficient Local Extrema Detection for Scale-Space Extrema Detection in SIFT Algorithm

    Kazuhito ITO  Hiroki HAYASHI  

     
    LETTER
    Vol: E99-A No:12  Page(s): 2507-2510

    In this paper, a hardware-efficient local extrema detection (LED) method for scale-space extrema detection in the SIFT algorithm is proposed. By reformulating how intermediate results are reused when taking the local maximum and minimum, the number of operations needed for LED is reduced without degrading the detection accuracy. When implemented in an FPGA, the proposed method requires 25% to 35% fewer logic resources than the conventional method, with a slight increase in latency.
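
    As a software reference for what the hardware evaluates (our sketch of the standard 26-neighbour test, not the letter's reformulated reuse scheme), a sample of a difference-of-Gaussians (DoG) stack is a scale-space extremum when it is strictly larger or strictly smaller than every neighbour in its 3x3x3 neighbourhood:

        import numpy as np

        def is_scale_space_extremum(dog, s, y, x):
            """dog: 3-D array (scale, row, col) of DoG responses from one octave.
            Assumes (s, y, x) is not on the border of the array."""
            centre = dog[s, y, x]
            cube = dog[s - 1:s + 2, y - 1:y + 2, x - 1:x + 2].ravel()
            neighbours = np.delete(cube, 13)    # index 13 is the centre sample itself
            return centre > neighbours.max() or centre < neighbours.min()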

  • A Deep Neural Network Based Quasi-Linear Kernel for Support Vector Machines

    Weite LI  Bo ZHOU  Benhui CHEN  Jinglu HU  

     
    PAPER-Neural Networks and Bioengineering
    Vol: E99-A No:12  Page(s): 2558-2565

    This paper proposes a deep quasi-linear kernel for support vector machines (SVMs). The deep quasi-linear kernel can be constructed from a pre-trained deep neural network. To realize this, a multilayer gated bilinear classifier is first designed to mimic the functionality of the pre-trained deep neural network, with the gate control signals generated by the deep neural network. Then, the deep quasi-linear kernel is derived by applying an SVM formulation to the multilayer gated bilinear classifier. In this way, the parameters of the multilayer gated bilinear classifier, which are a duplicated but independent set of the pre-trained deep neural network's parameters, can be further optimized implicitly through the SVM optimization. Experimental results on different data sets show that SVMs with the proposed deep quasi-linear kernel can take advantage of pre-trained deep neural networks and outperform SVMs with RBF kernels.

  • Signal Power Estimation Based on Orthogonal Projection and Oblique Projection

    Norisato SUGA  Toshihiro FURUKAWA  

     
    LETTER-Digital Signal Processing
    Vol: E99-A No:12  Page(s): 2571-2575

    In this letter, we present a new signal power estimation method based on subspace projection. This work mainly contributes to the SINR estimation problem, in which signal power estimation is performed implicitly or explicitly. The difference between our method and conventional methods for this problem is the exploitation of the subspace structure of the signals that compose the observed signal. As tools for the subspace operations, we apply orthogonal projection and oblique projection, which can extract the desired parameters. In the proposed scheme, the statistics of the observed signal projected by these projections are used to estimate the parameters.
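
    For reference, if the observed signal is modeled as lying in the span of a desired-signal matrix H plus an interference term spanned by S (our generic notation; the letter's signal model may differ in detail), the two projectors take the standard forms

        P_{H} = H (H^{\mathsf{H}} H)^{-1} H^{\mathsf{H}}, \qquad
        E_{HS} = H (H^{\mathsf{H}} P_{S}^{\perp} H)^{-1} H^{\mathsf{H}} P_{S}^{\perp}, \qquad
        P_{S}^{\perp} = I - S (S^{\mathsf{H}} S)^{-1} S^{\mathsf{H}},

    where P_H is the orthogonal projector onto range(H) and E_HS projects onto range(H) along range(S), so the interference component is removed before the power statistic is formed.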
