
Keyword Search Results

[Keyword] compression (349 hits)

Showing results 1-20 of 349 hits

  • Input Data Format for Sparse Matrix in Quantum Annealing Emulator

    Sohei SHIMOMAI  Kei UEDA  Shinji KIMURA  

     
    PAPER-Algorithms and Data Structures

    Publicized: 2023/09/25
    Vol: E107-A No:3
    Page(s): 557-565

    Recently, Quantum Annealing (QA) has attracted attention as an efficient algorithm for combinatorial optimization problems. In QA, the input data size becomes large, and reducing it is important for accelerating hardware emulation because the usable memory size and bandwidth are limited. This paper proposes a compression method for the input sparse matrices of a QA emulator. The proposed method exploits the sparseness of the coefficient matrix and the reappearance of the same values. An independent value table is introduced, and data are compressed by searching for and registering pairs of two consecutive data in the value table. The proposed method is applied to the Traveling Salesman Problem (TSP) with 32, 64, and 96 cities and to the Nurse Scheduling Problem (NSP). The proposed method reduces the amount of data to 1/40 for the 96-city TSP and makes it possible to handle the 96-city TSP on the hardware emulator. When applied to NSP, the effectiveness of the proposed method is confirmed by compression ratios ranging from 1/4 to 1/11.8. The data reduction also benefits simulation/emulation performance when the compressed data are used directly: the 96-city TSP runs 1.9 times faster than with the CSR-based method.
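
    The pair-table idea summarized above can be illustrated loosely in Python. This is a minimal sketch under assumed details: the table layout, matching rule, and token format are simplifications for illustration, not the paper's actual emulator input format.

```python
# Illustrative pair-table compression for a stream of matrix values.
# The token format and table layout are simplified assumptions.

def compress_pairs(values):
    table = {}          # (v1, v2) -> table index
    tokens = []         # output stream: ("ref", idx) or ("raw", v1, v2)
    i = 0
    while i + 1 < len(values):
        pair = (values[i], values[i + 1])
        if pair in table:
            tokens.append(("ref", table[pair]))      # reuse a registered pair
        else:
            table[pair] = len(table)                 # register the new pair
            tokens.append(("raw", pair[0], pair[1]))
        i += 2
    if i < len(values):                              # odd trailing value
        tokens.append(("raw", values[i], None))
    return tokens, table

def decompress_pairs(tokens, table):
    inverse = {idx: pair for pair, idx in table.items()}
    out = []
    for tok in tokens:
        if tok[0] == "ref":
            out.extend(inverse[tok[1]])
        else:
            out.extend(v for v in tok[1:] if v is not None)
    return out
```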

  • An Efficient Bayes Coding Algorithm for Changing Context Tree Model

    Koshi SHIMADA  Shota SAITO  Toshiyasu MATSUSHIMA  

     
    PAPER-Source Coding and Data Compression

    Publicized: 2023/08/24
    Vol: E107-A No:3
    Page(s): 448-457

    The context tree model has the property that the occurrence probability of a symbol is determined by a finite past sequence, and it is a broad class of sources that includes i.i.d. and Markov sources. This paper proposes a non-stationary source based on context tree models that change from interval to interval. The Bayes code for this source requires weighting the posterior probabilities of the context tree models and the change points, so its computational complexity usually grows to exponential order. The challenge is therefore how to reduce the computational complexity. In this paper, we propose a special class of prior probability distributions over context tree models and change points, and we develop an efficient Bayes coding algorithm by combining two existing Bayes coding algorithms. The algorithm minimizes the Bayes risk function for the proposed source, and its computational complexity is of polynomial order. We investigate the behavior and performance of the proposed algorithm through experiments.
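
    For context, a Bayes code assigns its coding probability by mixing over a model class. The LaTeX note below shows only that generic mixture form; the notation, and the remark that the paper's mixture additionally runs over change points, are illustrative assumptions rather than the paper's exact formulation.

```latex
% Generic Bayes (mixture) coding probability over a model class \mathcal{M}
% and the resulting ideal codeword length. In the paper's setting the mixture
% additionally runs over change points, which is why a naive computation
% grows exponentially with the sequence length.
P_{\mathrm{B}}(x^n) \;=\; \sum_{m \in \mathcal{M}} P(m)\, P(x^n \mid m),
\qquad
\ell_{\mathrm{B}}(x^n) \;=\; -\log_2 P_{\mathrm{B}}(x^n).
```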

  • Properties of k-Bit Delay Decodable Codes

    Kengo HASHIMOTO  Ken-ichi IWATA  

     
    PAPER-Source Coding and Data Compression

    Publicized: 2023/09/07
    Vol: E107-A No:3
    Page(s): 417-447

    The class of k-bit delay decodable codes, i.e., source codes allowing a decoding delay of at most k bits for k ≥ 0, can attain a shorter average codeword length than Huffman codes. This paper discusses general properties of the class of k-bit delay decodable codes with a finite number of code tables and proves two theorems that enable us to limit the scope of codes to be considered when discussing optimal k-bit delay decodable codes.

  • A Fundamental Limit of Variable-Length Compression with Worst-Case Criteria in Terms of Side Information

    Sho HIGUCHI  Yuta SAKAI  

     
    PAPER-Source Coding and Data Compression

    Publicized: 2023/07/03
    Vol: E107-A No:3
    Page(s): 384-392

    In this study, we consider data compression with side information available at both the encoder and the decoder. The information source is assigned a variable-length code that does not have to satisfy the prefix-free constraint. We define several classes of codes whose codeword lengths and error probabilities satisfy worst-case criteria in terms of the side information. As a main result, we investigate the exact first-order asymptotics, with second-order bounds scaled as Θ(√n) as the blocklength n increases, under the regime of non-vanishing error probabilities. To obtain this result, we also derive one-shot bounds by employing the cutoff operation.

  • Pipelined ADPCM Compression for HDR Synthesis on an FPGA

    Masahiro NISHIMURA  Taito MANABE  Yuichiro SHIBATA  

     
    PAPER-VLSI Design Technology and CAD

    Publicized: 2023/08/31
    Vol: E107-A No:3
    Page(s): 531-539

    This paper presents an FPGA implementation of real-time high dynamic range (HDR) synthesis, which expresses a wide dynamic range by combining multiple images with different exposures using image pyramids. We have implemented a pipeline that performs streaming processing on images without using external memory. However, implementation for high-resolution images has been difficult because of the large memory usage of the line buffers. We therefore propose an image compression algorithm based on adaptive differential pulse code modulation (ADPCM). Compression modules based on this algorithm can be easily integrated into the pipeline. When the image resolution is 4K and the pyramid depth is 7, introducing the compression modules halves the memory usage from 168.48% to 84.32%, resulting in better quality.
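
    As a reference for what an ADPCM-style codec does, here is a minimal Python sketch of a generic ADPCM encoder/decoder with a simple adaptive step size. The predictor, 4-bit code width, and adaptation rule are illustrative assumptions, not the line-buffer compression modules described in the paper.

```python
# Minimal ADPCM-style codec: predict the previous reconstructed sample,
# quantize the prediction error to a 4-bit signed code, and adapt the step.

def _adapt(step, code):
    if abs(code) >= 6:
        return step * 2           # large errors -> widen the quantizer
    if abs(code) <= 1:
        return max(1, step // 2)  # small errors -> narrow the quantizer
    return step

def adpcm_encode(samples, step=4):
    pred, codes = 0, []
    for s in samples:
        code = max(-8, min(7, round((s - pred) / step)))
        codes.append(code)
        pred += code * step       # track the decoder's reconstruction
        step = _adapt(step, code)
    return codes

def adpcm_decode(codes, step=4):
    pred, out = 0, []
    for code in codes:
        pred += code * step
        out.append(pred)
        step = _adapt(step, code)
    return out

print(adpcm_decode(adpcm_encode([0, 5, 12, 20, 18, 10])))
```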

  • Content-Adaptive Optimization Framework for Universal Deep Image Compression

    Koki TSUBOTA  Kiyoharu AIZAWA  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2023/10/24
    Vol: E107-D No:2
    Page(s): 201-211

    While deep image compression performs better than traditional codecs such as JPEG on natural images, it faces a challenge as a learning-based approach: compression performance drops drastically for out-of-domain images. To investigate this problem, we introduce a novel task that we call universal deep image compression, which involves compressing images in arbitrary domains, such as natural images, line drawings, and comics. Furthermore, we propose a content-adaptive optimization framework to tackle this task. The framework adapts a pre-trained compression model to each target image at test time to address the domain gap between pre-training and testing. For each input image, we insert adapters into the decoder of the model and optimize the latent representation extracted by the encoder and the adapter parameters in terms of rate-distortion, with the adapter parameters transmitted per image. To evaluate the proposed universal deep image compression, we constructed a benchmark dataset containing uncompressed images from four domains: natural images, line drawings, comics, and vector arts. We compare the proposed method with non-adaptive and existing adaptive compression methods, and the results show that our method outperforms them. Our code and dataset are publicly available at https://github.com/kktsubota/universal-dic.
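
    The per-image adaptation described in the abstract is a rate-distortion optimization performed at test time. The LaTeX note below states that objective under assumed notation (y for the latent, θ_a for the adapter parameters, λ for the trade-off weight); it is a sketch of the general form, not the authors' exact formulation.

```latex
% Per-image rate-distortion objective optimized at test time
% (notation assumed for illustration): the latent y and the adapter
% parameters \theta_a are tuned for the single input image x, with the
% decoder g_{\theta_a} reconstructing the image.
\min_{y,\;\theta_a} \; R(y) \;+\; \lambda \, D\!\bigl(x,\; g_{\theta_a}(y)\bigr).
```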

  • Research on Lightweight Acoustic Scene Perception Method Based on Drunkard Methodology

    Wenkai LIU  Lin ZHANG  Menglong WU  Xichang CAI  Hongxia DONG  

     
    PAPER-Artificial Intelligence, Data Mining

    Publicized: 2023/10/23
    Vol: E107-D No:1
    Page(s): 83-92

    The goal of Acoustic Scene Classification (ASC) is to simulate human analysis of the surrounding environment and make accurate decisions promptly. Extracting useful information from audio signals in real-world scenarios is challenging and can lead to suboptimal performance in acoustic scene classification, especially in environments with relatively homogeneous backgrounds. To address this problem, we model the sobering-up process of “drunkards” in real life and the guiding behavior of normal people, and construct a high-precision, lightweight model implementation methodology called the “drunkard methodology”. The core idea has three parts: (1) designing a special feature transformation module, based on the different mechanisms of information perception between drunkards and ordinary people, to simulate the process of gradually sobering up and the accompanying changes in feature perception ability; (2) studying a lightweight “drunken” model that matches the perception processing of the normal model; the model uses a multi-scale class-residual block structure and obtains finer feature representations by fusing information extracted at different scales; (3) introducing a guiding and fusion module from the conventional model into the “drunken” model to speed up the sobering-up process and achieve iterative optimization and accuracy improvement. Evaluation results on the official dataset of DCASE2022 Task 1 demonstrate that our baseline system achieves 40.4% accuracy and 2.284 loss with 442.67K parameters and 19.40M MACs (multiply-accumulate operations). After adopting the “drunkard” mechanism, the accuracy improves to 45.2% and the loss is reduced by 0.634, with 551.89K parameters and 23.6M MACs.

  • Adaptive Lossy Data Compression Extended Architecture for Memory Bandwidth Conservation in SpMV

    Siyi HU  Makiko ITO  Takahide YOSHIKAWA  Yuan HE  Hiroshi NAKAMURA  Masaaki KONDO  

     
    PAPER

    Publicized: 2023/07/20
    Vol: E106-D No:12
    Page(s): 2015-2025

    Widely adopted by machine learning and graph processing applications nowadays, sparse matrix-vector multiplication (SpMV) is a very popular algorithm in linear algebra. This is especially the case for fully-connected MLP layers, which dominate many SpMV computations and play a substantial role in diverse services. As a consequence, a large fraction of data center cycles is spent on SpMV kernels. Meanwhile, despite having efficient storage formats for sparsity (such as CSR or CSC), SpMV kernels still suffer from limited memory bandwidth during data transfer because of the memory hierarchy of modern computing systems. In more detail, we find that both the integer and floating-point data used in SpMV kernels are handled plainly, without any pre-processing. We therefore believe that bandwidth conservation techniques, such as data compression, can dramatically help SpMV kernels when data are transferred between the main memory and the Last Level Cache (LLC). Furthermore, we observe that the convergence conditions of some typical scientific computation benchmarks (based on SpMV kernels) are not degraded when lower-precision floating-point data are adopted. Based on these findings, we propose a simple yet effective data compression scheme that can preferably be extended to general-purpose computing architectures or HPC systems. When it is adopted, a best-case speedup of 1.92x is achieved. In addition, evaluations with both the CG kernel and the PageRank algorithm indicate that our proposal introduces negligible overhead on both the convergence speed and the accuracy of the final results.
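
    For readers unfamiliar with CSR, the Python sketch below shows the textbook SpMV kernel over a CSR-stored matrix, i.e., the uncompressed baseline that a bandwidth-conserving scheme like the one above sits in front of. It is a generic reference kernel, not the authors' extended architecture.

```python
# Textbook SpMV over a CSR-stored sparse matrix: y = A @ x.
# CSR keeps three arrays: non-zero values, their column indices,
# and row pointers delimiting each row's slice of the first two.

def spmv_csr(values, col_idx, row_ptr, x):
    n_rows = len(row_ptr) - 1
    y = [0.0] * n_rows
    for i in range(n_rows):
        acc = 0.0
        for k in range(row_ptr[i], row_ptr[i + 1]):
            acc += values[k] * x[col_idx[k]]
        y[i] = acc
    return y

# Example: the 3x3 matrix [[2,0,0],[0,0,3],[1,0,4]] times x = [1,2,3].
values  = [2.0, 3.0, 1.0, 4.0]
col_idx = [0, 2, 0, 2]
row_ptr = [0, 1, 2, 4]
print(spmv_csr(values, col_idx, row_ptr, [1.0, 2.0, 3.0]))  # [2.0, 9.0, 13.0]
```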

  • Neural Network-Based Post-Processing Filter on V-PCC Attribute Frames

    Keiichiro TAKADA  Yasuaki TOKUMO  Tomohiro IKAI  Takeshi CHUJOH  

     
    LETTER

    Publicized: 2023/07/13
    Vol: E106-D No:10
    Page(s): 1673-1676

    Video-based point cloud compression (V-PCC) utilizes video compression technology to efficiently encode dense point clouds, providing state-of-the-art compression performance with a relatively small computational burden. V-PCC converts 3-dimensional point cloud data into three types of 2-dimensional frames, i.e., occupancy, geometry, and attribute frames, and encodes them via video compression. On the other hand, the quality of these frames may be degraded by video compression. This paper proposes an adaptive neural network-based post-processing filter on attribute frames to alleviate this degradation. Furthermore, a novel training method using occupancy frames is studied. The experimental results show average BD-rate gains of 3.0%, 29.3%, and 22.2% for Y, U, and V, respectively.

  • Quality Enhancement of Conventional Compression with a Learned Side Bitstream

    Takahiro NARUKO  Hiroaki AKUTSU  Koki TSUBOTA  Kiyoharu AIZAWA  

     
    LETTER-Image Processing and Video Processing

    Publicized: 2023/04/25
    Vol: E106-D No:8
    Page(s): 1296-1299

    We propose a Quality Enhancement via Side bitstream Network (QESN) technique for lossy image compression. The proposed QESN utilizes the network architecture of deep image compression to produce a bitstream that enhances the quality of conventional compression. We also present a loss function that directly optimizes the Bjontegaard delta bit rate (BD-BR) by using a differentiable model of the rate-distortion curve. Experimental results show that QESN improves the rate by 16.7% in terms of BD-BR compared to Better Portable Graphics.

  • Unified 6G Waveform Design Based on DFT-s-OFDM Enhancements

    Juan LIU  Xiaolin HOU  Wenjia LIU  Lan CHEN  Yoshihisa KISHIYAMA  Takahiro ASAI  

     
    PAPER-Wireless Communication Technologies

    Publicized: 2022/12/05
    Vol: E106-B No:6
    Page(s): 528-537

    To achieve the extremely high data rate and extreme coverage extension required of 6G wireless communication, new spectrum in the sub-THz band (100-300 GHz) and non-terrestrial networks (NTN) are two macro trends of 6G candidate technologies, respectively. However, the non-linearity of power amplifiers (PAs) is a critical challenge for both sub-THz and NTN. Therefore, waveform design with high power efficiency (PE), or equivalently low peak-to-average power ratio (PAPR), has become one of the most significant 6G research topics. Meanwhile, high spectral efficiency (SE) and low out-of-band emission (OOBE) remain important key performance indicators (KPIs) for 6G waveform design. The single-carrier waveform discrete Fourier transform spread orthogonal frequency division multiplexing (DFT-s-OFDM) has attracted much research interest owing to its high PE, and it is supported in 5G New Radio (NR) when uplink coverage is limited. DFT-s-OFDM can thus be regarded as a candidate waveform for 6G. Many enhancement schemes based on DFT-s-OFDM have been proposed, including null cyclic prefix (NCP)/unique word (UW), frequency-domain spectral shaping (FDSS), and time-domain compression and expansion (TD-CE). However, there is no unified framework compatible with all of these enhancement schemes. This paper first provides a general description of the 6G candidate waveforms based on DFT-s-OFDM enhancements. Second, more flexible TD-CE supporting methods for a unified non-orthogonal waveform (uNOW) are proposed and discussed. Third, a unified waveform framework based on the DFT-s-OFDM structure is proposed. By designing the pre-processing and post-processing modules before and after the DFT in the unified waveform framework, the three technical methods (NCP/UW, FDSS, and TD-CE) can be integrated to improve the three KPIs of DFT-s-OFDM simultaneously with high flexibility. The implementation complexity of the 6G candidate waveforms is then analyzed and compared. The performance of the different DFT-s-OFDM enhancement schemes is investigated by link-level simulation, which reveals that uNOW achieves the best PAPR performance among all the 6G candidate waveforms. When PA back-off is considered, uNOW achieves a 124% throughput gain compared to traditional DFT-s-OFDM.
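
    As background, the baseline DFT-s-OFDM transmit chain that all of these enhancements build on can be sketched in a few lines of Python/NumPy: data symbols are DFT-spread, mapped onto a subset of subcarriers, and passed through an IFFT. The sizes and the localized subcarrier mapping are illustrative assumptions; none of the NCP/UW, FDSS, or TD-CE enhancements are included.

```python
import numpy as np

# Baseline DFT-s-OFDM modulator: M data symbols are spread by an M-point DFT,
# mapped onto M contiguous subcarriers of an N-point IFFT, then converted to
# the time domain. M, N, and the localized mapping are illustrative choices.

def dft_s_ofdm_modulate(symbols, n_fft=64):
    m = len(symbols)
    spread = np.fft.fft(symbols) / np.sqrt(m)        # DFT spreading
    grid = np.zeros(n_fft, dtype=complex)
    grid[:m] = spread                                # localized subcarrier mapping
    return np.fft.ifft(grid) * np.sqrt(n_fft)        # OFDM modulation (IFFT)

# Example with 16 random QPSK symbols.
rng = np.random.default_rng(0)
qpsk = (rng.choice([-1, 1], 16) + 1j * rng.choice([-1, 1], 16)) / np.sqrt(2)
tx = dft_s_ofdm_modulate(qpsk)
papr_db = 10 * np.log10(np.max(np.abs(tx) ** 2) / np.mean(np.abs(tx) ** 2))
print(f"PAPR of this block: {papr_db:.2f} dB")
```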

  • A Compression Router for Low-Latency Network-on-Chip

    Naoya NIWA  Yoshiya SHIKAMA  Hideharu AMANO  Michihiro KOIBUCHI  

     
    PAPER-Computer System

    Publicized: 2022/11/08
    Vol: E106-D No:2
    Page(s): 170-180

    Networks-on-Chip (NoCs) are important components of scalable many-core processors. Because the performance of parallel applications is usually sensitive to NoC latency, reducing it is a primary requirement. In this study, a compression router that hides the (de)compression delay is proposed. The compression router (de)compresses the contents of an incoming packet before switch arbitration is completed, thus shortening the packet length without a latency penalty and reducing the network injection-and-ejection latency. Evaluation results show that the compression router improves parallel application performance (conjugate gradients (CG), fast Fourier transform (FT), integer sort (IS), and traveling salesman problem (TSP)) by up to 33% and effective network throughput by up to 63%, with a compression ratio of 1.8 on the NoC. The cost is an increase in router area of 0.22 mm² and in energy consumption of 1.6 times compared with a conventional virtual-channel router. Another finding is that off-loading the decompressor onto the network interface decreases the compression-router area by 57% at the expense of a moderate increase in communication latency.

  • Boosting the Performance of Interconnection Networks by Selective Data Compression

    Naoya NIWA  Hideharu AMANO  Michihiro KOIBUCHI  

     
    PAPER

    Publicized: 2022/07/12
    Vol: E105-D No:12
    Page(s): 2057-2065

    This study presents a selective data-compression interconnection network to boost performance. Data compression virtually increases the effective network bandwidth. One drawback of data compression is the long latency of the (de)compression operation at a compute node. In terms of communication latency, we explore the trade-off between the compression latency overhead and the injection latency reduced by shortening packets with compression algorithms. As a result, we propose selectively applying compression to packets: compression is performed on long packets, and it is also applied when network congestion is detected at the source compute node. Through cycle-accurate network simulation, the selective compression method using these compression algorithms improves network throughput by up to 39% with a moderate increase in the communication latency of short packets.
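
    The selection rule described above (compress long packets, and compress any packet when the source node sees congestion) is simple enough to state as a short Python sketch. The flit threshold and the congestion signal are assumptions for illustration, not values from the paper.

```python
# Illustrative selective-compression decision at a source compute node:
# compress a packet only if it is long, or if local congestion is detected.

def should_compress(packet_len_flits, congestion_detected, long_packet_flits=8):
    """Compress long packets, or any packet when congestion is detected."""
    return packet_len_flits >= long_packet_flits or congestion_detected

print(should_compress(packet_len_flits=2,  congestion_detected=False))  # False
print(should_compress(packet_len_flits=12, congestion_detected=False))  # True
print(should_compress(packet_len_flits=2,  congestion_detected=True))   # True
```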

  • NCDSearch: Sliding Window-Based Code Clone Search Using Lempel-Ziv Jaccard Distance

    Takashi ISHIO  Naoto MAEDA  Kensuke SHIBUYA  Kenho IWAMOTO  Katsuro INOUE  

     
    PAPER-Software Engineering

    Publicized: 2022/02/08
    Vol: E105-D No:5
    Page(s): 973-981

    Software developers may write a number of similar source code fragments that include the same mistake in a software product. To remove such faulty code fragments, developers inspect code clones when they find a bug in their code. While various code clone detection methods have been proposed to identify clones of either code blocks or functions, those tools do not always fit the code inspection task, because a faulty code fragment may be much smaller than a code block, e.g., a single line of code. To enable developers to search for clones of such a small faulty code fragment in a large-scale software product, we propose a method using the Lempel-Ziv Jaccard Distance, an approximation of the Normalized Compression Distance. We conducted an experiment using an existing research dataset and a user survey in a company. The results show that our method efficiently reports cloned faulty code fragments and that its performance is acceptable to software developers.
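
    For orientation, the Normalized Compression Distance that the Lempel-Ziv Jaccard Distance approximates can be computed directly with any off-the-shelf compressor. The sketch below uses zlib and is only a reference formula; it is not the sliding-window LZJD search implemented in NCDSearch.

```python
import zlib

# Normalized Compression Distance between two byte strings, using zlib as the
# compressor C: NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)).
# LZJD, used by NCDSearch, is a faster approximation of this idea.

def c(data: bytes) -> int:
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    cx, cy, cxy = c(x), c(y), c(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"if (buffer == NULL) return -1;"
b = b"if (buf == NULL) return -1;"
print(ncd(a, b))   # small value -> likely clones of the same fragment
```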

  • Lempel-Ziv Factorization in Linear-Time O(1)-Workspace for Constant Alphabets

    Weijun LIU  

     
    PAPER-Fundamentals of Information Systems

    Publicized: 2021/08/30
    Vol: E104-D No:12
    Page(s): 2145-2153

    Computing the Lempel-Ziv factorization (LZ77) of a string is one of the most important problems in computer science. It is widely used in many applications such as data compression, text indexing, and pattern discovery, and it lies at the heart of many file compressors such as gzip and 7zip. In this paper, we present a linear-time algorithm called Xone for computing the LZ77 factorization, which has the same space requirement as BGone, the linear-time LZ77 factorization algorithm with the previously best space requirement. Xone greatly improves on the efficiency of BGone. Experiments show that the two versions of Xone, XoneT and XoneSA, are about 27% and 31% faster than BGoneT and BGoneSA, respectively.
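
    A minimal reference implementation of the LZ77 factorization being computed here fits in a few lines of Python. This naive version runs in quadratic time and linear space; Xone and BGone compute the same factorization in linear time and O(1) workspace, which this sketch does not attempt.

```python
# Naive LZ77 factorization: each factor is either a fresh character or the
# longest match starting at an earlier position, reported as (position, length).
# Quadratic time; it only illustrates what the linear-time algorithms compute.

def lz77_factorize(s):
    factors, i = [], 0
    while i < len(s):
        best_len, best_pos = 0, -1
        for j in range(i):                           # candidate earlier start
            k = 0
            while i + k < len(s) and s[j + k] == s[i + k]:
                k += 1
            if k > best_len:
                best_len, best_pos = k, j
        if best_len == 0:
            factors.append(("char", s[i]))           # literal factor
            i += 1
        else:
            factors.append(("copy", best_pos, best_len))
            i += best_len
    return factors

print(lz77_factorize("abababb"))
# [('char', 'a'), ('char', 'b'), ('copy', 0, 4), ('copy', 1, 1)]
```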

  • Spatial Compression of Sensing Information for Exploiting the Vacant Frequency Resource Using Radio Sensors

    Kenichiro YAMAMOTO  Osamu TAKYU  Keiichiro SHIRAI  Yasushi FUWA  

     
    PAPER

    Publicized: 2021/03/30
    Vol: E104-B No:10
    Page(s): 1217-1226

    Recently, broadband wireless communication has been significantly enhanced; thus, frequency spectrum scarcity has become an extremely serious problem. Spatial frequency reuse based on spectrum databases has attracted significant attention. The spectrum database collects wireless environment information, such as the received signal strength indicator (RSSI), estimates the propagation coefficients for propagation loss and the shadowing effect, and finds vacant areas where the secondary system can use the frequency spectrum without harmful interference to the primary system. Wireless sensor networks are required to collect RSSI values from a radio environment monitor. However, a large number of RSSI values must be gathered because numerous sensors are spread over the environment. In this study, a data compression technique based on spatial features, such as buildings and houses, is proposed. Through computer simulation and experimental evaluation, we confirm that the proposed compression method successfully reduces the size of the RSSI data and restores the original RSSI values in the recovery process.

  • Compression Scan Strategy For Fast Refresh Rate on SXGA OLEDoS Microdisplay

    Aiying GUO  Feng RAN  Jianhua ZHANG  

     
    PAPER-Electronic Circuits

    Publicized: 2021/02/26
    Vol: E104-C No:9
    Page(s): 455-462

    To increase the refresh rate of the high-resolution (1280×1024) OLED-on-silicon (OLEDoS) microdisplay, this paper presents a compression scan strategy that reduces scan-time redundancy. The strategy first compresses the low-bit gray-level scan series into one unit; the scan unit is then embedded into the high-bit gray-level series to generate a new scan sequence. Furthermore, a microdisplay platform is designed to verify the performance of the scan strategy. Experiments show that this scan strategy can support a 144 Hz refresh rate, which is clearly faster than the traditional scan strategy.

  • Extended-Domain Golomb Code and Symmetry of Relative Redundancy

    Ryosuke SUGIURA  Yutaka KAMAMOTO  Takehiro MORIYA  

     
    PAPER-Coding Theory

    Publicized: 2021/02/08
    Vol: E104-A No:8
    Page(s): 1033-1042

    This paper presents the extended-domain Golomb (XDG) code, an extension of the Golomb code for sparse geometric sources as well as a generalization of the extended-domain Golomb-Rice (XDGR) code, based on the idea of almost instantaneous fixed-to-variable length (AIFV) codes. Showing that XDGR encoding can be interpreted as an extended usage of the code proposed in previous works, this paper discusses the following two facts: the proposed XDG code can be constructed as an AIFV code relating to the Golomb code in the same way that the XDGR code relates to the Rice code; and the XDG and Golomb codes are symmetric in the sense of relative redundancy. The proposed XDG code can be used to efficiently and losslessly compress geometric sources that are too sparse for the conventional Golomb and Rice codes. According to the symmetry, its relative redundancy is guaranteed to be as low as that of the Golomb code compressing non-sparse geometric sources. Owing to this fact, the parameter of the proposed XDG code, which is more finely tunable than that of the conventional XDGR code, can be optimized for given inputs using conventional techniques. Therefore, it is expected to be more useful for many coding applications that deal with geometric sources at low bit rates.
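
    For reference, the conventional Golomb-Rice code mentioned above (the Golomb code with a power-of-two parameter) encodes a non-negative integer as a unary quotient followed by k remainder bits. The Python sketch below shows only this conventional baseline, not the proposed XDG code, and assumes k ≥ 1.

```python
# Conventional Golomb-Rice encoding/decoding with parameter M = 2**k:
# n is split into q = n // M (sent in unary as q ones and a terminating zero)
# and r = n % M (sent as k plain binary bits). Assumes k >= 1.

def rice_encode(n, k):
    q, r = n >> k, n & ((1 << k) - 1)
    return "1" * q + "0" + format(r, f"0{k}b")

def rice_decode(bits, k):
    q = bits.index("0")                       # unary part: count leading ones
    r = int(bits[q + 1:q + 1 + k], 2)
    return (q << k) | r

for n in (0, 3, 9):
    code = rice_encode(n, k=2)
    print(n, code, rice_decode(code, k=2))    # round-trips each value
```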

  • Efficient Hardware Accelerator for Compressed Sparse Deep Neural Network

    Hao XIAO  Kaikai ZHAO  Guangzhu LIU  

     
    LETTER-Computer System

    Publicized: 2021/02/19
    Vol: E104-D No:5
    Page(s): 772-775

    This work presents a DNN accelerator architecture specifically designed for efficient inference on compressed and sparse DNN models. Leveraging data sparsity, a runtime processing scheme is proposed that handles the encoded weights and activations directly in the compressed domain without decompression. Furthermore, a new dataflow is proposed to facilitate the reuse of input activations across the fully-connected (FC) layers. The proposed design is implemented and verified on a Xilinx Virtex-7 FPGA. Experimental results show that it is 1.99× and 1.95× faster, and 20.38× and 3.04× more energy efficient, than CPU and mGPU platforms, respectively, when running AlexNet.

  • GAN-Based Image Compression Using Mutual Information for Optimizing Subjective Image Similarity

    Shinobu KUDO  Shota ORIHASHI  Ryuichi TANIDA  Seishi TAKAMURA  Hideaki KIMATA  

     
    PAPER-Image Processing and Video Processing

    Publicized: 2020/12/02
    Vol: E104-D No:3
    Page(s): 450-460

    Recently, image compression systems based on convolutional neural networks, which use flexible nonlinear analysis and synthesis transformations, have been developed to improve the restoration accuracy of decoded images. Although these methods, which are optimized with objective metrics such as peak signal-to-noise ratio and multi-scale structural similarity, attain high objective scores, such metrics may not reflect human visual characteristics and can thus degrade subjective image quality. A method using the generative adversarial network (GAN) framework has been reported as one approach to improving subjective image quality. It optimizes the distribution of restored images to be close to that of natural images, thereby suppressing visual artifacts such as blurring, ringing, and blocking. However, since methods of this type are optimized to make the restored image look subjectively natural, components that are not correlated with the original image are mixed into the restored image during decoding. Thus, even though the appearance looks natural, subjective similarity may be degraded. In this paper, we investigate why conventional GAN-based compression techniques degrade subjective similarity and tackle this problem by rethinking how to handle image generation in the GAN framework between image sources with different probability distributions. The paper describes a method to maximize the mutual information between the coding features and the restored images. Experimental results show that the proposed mutual information amount is clearly correlated with subjective similarity and that the method makes it possible to develop image compression systems with high subjective similarity.
