
Keyword Search Result

[Keyword] video (613 hits)

1-20 hits (613 hits)

  • 2D Human Skeleton Action Recognition Based on Depth Estimation Open Access

    Lei WANG  Shanmin YANG  Jianwei ZHANG  Song GU  

     
    PAPER-Image Recognition, Computer Vision

      Publicized:
    2024/02/27
      Vol:
    E107-D No:7
      Page(s):
    869-877

    Human action recognition (HAR) exhibits limited accuracy in video surveillance due to the 2D information captured with monocular cameras. To address the problem, a depth estimation-based human skeleton action recognition method (SARDE) is proposed in this study, with the aim of transforming 2D human action data into 3D format to uncover the action clues hidden in the 2D data. SARDE comprises two tasks, i.e., human skeleton action recognition and monocular depth estimation. The two tasks are integrated in a multi-task manner through end-to-end training to fully exploit the correlation between action recognition and depth estimation, sharing parameters so that depth features are learned effectively for human action recognition. In this study, graph-structured networks with inception blocks and skip connections are investigated for depth estimation. The experimental results verify the effectiveness and superiority of the proposed method in skeleton action recognition, showing that it reaches state-of-the-art performance on the datasets.
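
    As an illustration of the multi-task idea described above, the following is a minimal PyTorch sketch of a shared backbone feeding an action-recognition head and a depth-estimation head; the layer sizes, loss weight, and class count are illustrative assumptions, not the SARDE architecture.

```python
# Minimal multi-task sketch: a shared encoder feeds an action-recognition head
# and a monocular depth-estimation head; joint training lets depth cues inform
# the action classifier. Layer sizes and the loss weight are illustrative only.
import torch
import torch.nn as nn

class MultiTaskActionDepth(nn.Module):
    def __init__(self, num_actions=60):
        super().__init__()
        self.backbone = nn.Sequential(            # shared 2D feature extractor
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.action_head = nn.Sequential(          # skeleton action classifier
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_actions)
        )
        self.depth_head = nn.Sequential(            # per-pixel depth regressor
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 1), nn.Upsample(scale_factor=4, mode="bilinear"),
        )

    def forward(self, frames):
        feat = self.backbone(frames)
        return self.action_head(feat), self.depth_head(feat)

model = MultiTaskActionDepth()
frames = torch.randn(2, 3, 224, 224)               # dummy video frames
action_logits, depth = model(frames)
action_loss = nn.functional.cross_entropy(action_logits, torch.tensor([0, 1]))
depth_loss = nn.functional.l1_loss(depth, torch.rand_like(depth))
loss = action_loss + 0.5 * depth_loss               # joint multi-task objective
loss.backward()
```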

  • A VVC Dependent Quantization Optimization Based on the Parallel Viterbi Algorithm and Its FPGA Implementation Open Access

    Qinghua SHENG  Yu CHENG  Xiaofang HUANG  Changcai LAI  Xiaofeng HUANG  Haibin YIN  

     
    PAPER-Computer System

      Publicized:
    2024/03/04
      Vol:
    E107-D No:7
      Page(s):
    797-806

    Dependent Quantization (DQ) is a new quantization tool introduced in the Versatile Video Coding (VVC) standard. While it provides better rate-distortion calculation accuracy, it also increases the computational complexity and hardware cost compared to the widely used scalar quantization. To address this issue, this paper proposes a parallel dependent-quantization hardware architecture implemented in the Verilog HDL. The architecture preprocesses the coefficients with a scalar quantizer and a high-frequency filter, and then further segments and processes the coefficients in parallel using the Viterbi algorithm. Additionally, the weight bit width of the rate-distortion calculation is reduced to shorten the quantization cycle and lower the computational complexity. Finally, the final quantization of the TU is determined by sequentially scanning and judging the rate-distortion costs. Experimental results show that the proposed algorithm reduces the quantization cycle by an average of 56.96% compared to VVC’s reference platform VTM, with a Bjøntegaard delta bit rate (BDBR) loss of 1.03% and 1.05% under the Low-delay P and Random Access configurations, respectively. Verification on the AMD FPGA development platform demonstrates that the hardware implementation meets the quantization requirements for 1080P@60Hz video hardware encoding.
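
    The following is a simplified, illustrative Viterbi sketch of trellis-based dependent quantization: each coefficient picks among a few candidate levels, a small state machine (driven by level parity) decides which reconstruction offset applies, and the path with minimum distortion-plus-rate cost survives. The state table, rate proxy, and candidate set are simplifications for illustration, not the VVC specification or the paper's parallel hardware design.

```python
# Illustrative trellis quantization via Viterbi: states constrain which
# reconstruction offset applies, transitions follow level parity, and the
# cheapest distortion + lambda*rate path is kept. Tables are simplified
# stand-ins, not the normative VVC dependent-quantization design.
import math

NEXT_STATE = {0: (0, 2), 1: (2, 0), 2: (1, 3), 3: (3, 1)}  # (even, odd) parity
OFFSET = {0: 0.0, 1: 0.0, 2: 0.5, 3: 0.5}   # state-dependent reconstruction shift

def rate_bits(level):
    return 1.0 + 2.0 * math.log2(1 + abs(level))      # crude rate proxy

def viterbi_quantize(coeffs, step, lam):
    paths = {0: (0.0, [])}                              # state -> (cost, levels)
    for c in coeffs:
        new_paths = {}
        for state, (cost, levels) in paths.items():
            base = c / step - OFFSET[state]
            for level in {math.floor(base), math.ceil(base), 0}:
                recon = (level + OFFSET[state]) * step
                cand = cost + (c - recon) ** 2 + lam * rate_bits(level)
                nxt = NEXT_STATE[state][abs(level) & 1]
                if nxt not in new_paths or cand < new_paths[nxt][0]:
                    new_paths[nxt] = (cand, levels + [level])
        paths = new_paths
    return min(paths.values())[1]                        # best surviving path

print(viterbi_quantize([13.2, -7.9, 0.4, 25.0], step=4.0, lam=3.0))
```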

  • Real-Time Video Matting Based on RVM and Mobile ViT Open Access

    Chengyu WU  Jiangshan QIN  Xiangyang LI  Ao ZHAN  Zhengqiang WANG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2024/01/29
      Vol:
    E107-D No:6
      Page(s):
    792-796

    Real-time matting is a challenging research topic in deep learning. Conventional CNN (Convolutional Neural Network) approaches tend to misjudge foreground and background semantics and produce blurry matting edges, which results from a CNN’s limited attention to global context due to its restricted receptive field. We propose a real-time matting approach called RMViT (Real-time matting with Vision Transformer), built on a Transformer structure with attention and content-aware guidance, to solve the issues above. Semantic accuracy improves substantially owing to the establishment of global context and long-range pixel information. The experiments show our approach achieves more than a 30% reduction in error metrics compared with existing real-time matting approaches.

  • A Case Study on Recommender Systems in Online Conferences: Behavioral Analysis through A/B Testing Open Access

    Ayano OKOSO  Keisuke OTAKI  Yoshinao ISHII  Satoshi KOIDE  

     
    PAPER

      Publicized:
    2024/01/16
      Vol:
    E107-D No:5
      Page(s):
    650-658

    Owing to the COVID-19 pandemic, many academic conferences are now being held online. Our study focuses on online video conferences, where participants can watch pre-recorded videos embedded in a conference website. In online video conferences, participants must efficiently find videos that match their interests among many candidates. Unless participants actively explore the conference site, there are few opportunities to encounter videos that they had not planned to watch but that may interest them. To alleviate these problems, introducing a recommender system seems promising. In this paper, we implemented typical recommender systems for an online video conference with 4,000 participants and analyzed users’ behavior through A/B testing. Our results showed that users receiving recommendations based on collaborative filtering had a higher continuous video-viewing rate and spent longer on the website than those without recommendations. In addition, these users were exposed to a broader range of videos and tended to view more videos from categories that are usually less likely to be viewed together. Furthermore, the impact of the recommender system was most significant among users who spent less time on the site.
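
    For readers unfamiliar with the recommendation technique involved, a minimal item-based collaborative-filtering sketch is given below; the toy interaction matrix and scoring are illustrative, not the system deployed in the study.

```python
# Minimal item-based collaborative filtering: cosine similarity between video
# columns of a user-video interaction matrix ranks unseen videos per user.
import numpy as np

def recommend(interactions, user, k=3):
    # interactions: users x videos binary watch matrix
    norms = np.linalg.norm(interactions, axis=0) + 1e-9
    sim = (interactions.T @ interactions) / np.outer(norms, norms)  # video-video
    scores = interactions[user] @ sim            # aggregate similarity to history
    scores[interactions[user] > 0] = -np.inf     # hide already-watched videos
    return np.argsort(scores)[::-1][:k]

watch = np.array([[1, 1, 0, 0, 1],
                  [0, 1, 1, 0, 0],
                  [1, 0, 0, 1, 1]], dtype=float)  # 3 participants, 5 videos
print(recommend(watch, user=1))                    # top unseen videos for user 1
```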

  • Traffic Reduction for Speculative Video Transmission in Cloud Gaming Systems Open Access

    Takumasa ISHIOKA  Tatsuya FUKUI  Toshihito FUJIWARA  Satoshi NARIKAWA  Takuya FUJIHASHI  Shunsuke SARUWATARI  Takashi WATANABE  

     
    PAPER-Network

      Vol:
    E107-B No:5
      Page(s):
    408-418

    Cloud gaming systems allow users to play games that require high-performance computational capability on their mobile devices at any location. However, playing games through cloud gaming systems increases the Round-Trip Time (RTT) due to increased network delay. To provide cloud users with a local-like gaming experience, we must minimize RTTs, which include network delays. Speculative video transmission pre-generates and encodes video frames corresponding to all possible user inputs and sends them to the user before the user’s input. Speculative video transmission mitigates the network delay, whereas a naive implementation significantly increases the video traffic. This paper proposes tile-wise delta detection for traffic reduction in speculative video transmission. More specifically, the proposed method determines a reference video frame from the generated video frames and divides the reference video frame into multiple tiles. We calculate the similarity between each tile of the reference video frame and the other video frames based on a hash function. Based on the calculated similarity, we determine redundant tiles and do not transmit them, reducing traffic volume with minimal processing time and without a high-compression-ratio video compression technique. Evaluations using commercial games showed that the proposed method reduced traffic volume by 40-50% while keeping the SSIM index around 0.98 in certain genres, compared with the speculative video transmission method. Furthermore, to evaluate the feasibility of the proposed method, we investigated the effectiveness of network delay reduction with existing computational capability and the requirements for the future. As a result, we found that the proposed scheme may mitigate network delay by one to two frames, even with existing computational capability under limited conditions.
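
    A minimal sketch of the tile-wise delta detection idea, assuming fixed-size tiles and an MD5 hash as the similarity test; the actual similarity calculation and tile size in the paper may differ.

```python
# Tile-wise delta detection sketch: split the reference frame and a candidate
# frame into tiles, hash each tile, and mark a tile redundant (not transmitted)
# when its hash matches the reference. Tile size and hash choice are assumptions.
import hashlib
import numpy as np

def tile_hashes(frame, tile=64):
    h, w = frame.shape[:2]
    return {(y, x): hashlib.md5(frame[y:y + tile, x:x + tile].tobytes()).hexdigest()
            for y in range(0, h, tile) for x in range(0, w, tile)}

def redundant_tiles(reference, candidate, tile=64):
    ref, cand = tile_hashes(reference, tile), tile_hashes(candidate, tile)
    return [pos for pos, digest in cand.items() if ref.get(pos) == digest]

ref = np.zeros((256, 256, 3), dtype=np.uint8)
cand = ref.copy()
cand[0:64, 64:128] = 255                       # only one tile actually differs
skip = redundant_tiles(ref, cand)
print(f"{len(skip)} of 16 tiles are identical and need not be sent")
```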

  • Dance-Conditioned Artistic Music Generation by Creative-GAN Open Access

    Jiang HUANG  Xianglin HUANG  Lifang YANG  Zhulin TAO  

     
    PAPER-Multimedia Environment Technology

      Publicized:
    2023/08/23
      Vol:
    E107-A No:5
      Page(s):
    836-844

    We present a novel adversarial, end-to-end framework based on Creative-GAN to generate artistic music conditioned on dance videos. Our proposed framework takes visual and motion-posture data as input and adopts a quantized vector as the audio representation to generate complex music corresponding to the input. However, the standard GAN algorithm merely imitates and reproduces works that humans have already created, rather than generating something new and creative. Therefore, we newly introduce Creative-GAN, which extends the original GAN framework to two discriminators: one determines whether the output is real music, and the other classifies the music style. The paper shows that our proposed Creative-GAN can generate novel and interesting music that is not found in the training dataset. To evaluate our model, a comprehensive evaluation scheme is introduced covering both subjective and objective evaluation. Compared with advanced methods, our approach performs better in measures of music rhythm, generation diversity, dance-music correlation, and overall quality of the generated music.
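
    A minimal PyTorch sketch of the two-discriminator idea: one critic scores authenticity while a second classifies style, and the generator is trained to look real yet defy any single known style. Network sizes, the loss form, and the weighting are illustrative assumptions.

```python
# Two-discriminator GAN sketch in the spirit of Creative-GAN: D_real judges
# real vs. generated audio features, while D_style classifies the music style.
# The generator is rewarded for fooling D_real while confusing D_style, pushing
# it away from simply reproducing existing styles. Sizes/weights are illustrative.
import torch
import torch.nn as nn

latent, audio_dim, n_styles = 64, 128, 5
G = nn.Sequential(nn.Linear(latent, 256), nn.ReLU(), nn.Linear(256, audio_dim))
D_real = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU(), nn.Linear(128, 1))
D_style = nn.Sequential(nn.Linear(audio_dim, 128), nn.ReLU(), nn.Linear(128, n_styles))

z = torch.randn(8, latent)
fake = G(z)
adv = nn.functional.binary_cross_entropy_with_logits(
    D_real(fake), torch.ones(8, 1))              # look "real" to the critic
style_logits = D_style(fake)
uniform = torch.full_like(style_logits, 1.0 / n_styles)
novelty = nn.functional.kl_div(style_logits.log_softmax(-1), uniform,
                               reduction="batchmean")  # avoid any known style
g_loss = adv + 0.5 * novelty
g_loss.backward()
```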

  • VTD-FCENet: A Real-Time HD Video Text Detection with Scale-Aware Fourier Contour Embedding Open Access

    Wocheng XIAO  Lingyu LIANG  Jianyong CHEN  Tao WANG  

     
    LETTER-Image Recognition, Computer Vision

      Publicized:
    2023/12/07
      Vol:
    E107-D No:4
      Page(s):
    574-578

    Video text detection (VTD) aims to localize text instances in videos, which has wide applications in downstream tasks. To deal with the variance across scenes and text instances, existing VTD methods typically integrate multiple models and feature-fusion strategies. A VTD method consisting of sophisticated components can improve detection accuracy, but may be too slow for real-time applications. This paper aims to achieve real-time VTD with an adaptive, lightweight, end-to-end framework. Different from previous methods that represent text in the spatial domain, we model text instances in the Fourier domain. Specifically, we propose a scale-aware Fourier Contour Embedding method, which not only models arbitrarily shaped text contours in videos as compact signatures, but also adaptively selects proper scales for backbone features during training. We then construct VTD-FCENet to achieve real-time VTD, encoding the temporal correlations of adjacent frames with scale-aware FCE in a lightweight and adaptive manner. Quantitative evaluations were conducted on the ICDAR2013 Video, Minetto and YVT benchmark datasets, and the results show that our VTD-FCENet not only obtains state-of-the-art or competitive detection accuracy, but also allows real-time text detection on HD videos.
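
    A minimal NumPy sketch of Fourier contour embedding, assuming a closed contour sampled as (x, y) points: the low-frequency FFT coefficients form a compact signature from which a smooth approximation of the contour can be reconstructed. The number of retained coefficients is an illustrative choice.

```python
# Fourier contour embedding sketch: a closed contour of (x, y) points becomes
# a short vector of low-frequency Fourier coefficients; the inverse transform
# of that compact signature reconstructs a smooth approximation of the shape.
import numpy as np

def fourier_signature(points, k=5):
    z = points[:, 0] + 1j * points[:, 1]         # encode (x, y) as complex signal
    coeffs = np.fft.fft(z) / len(z)
    return np.r_[coeffs[:k + 1], coeffs[-k:]]    # lowest positive/negative freqs

def reconstruct(signature, n=100):
    k = (len(signature) - 1) // 2
    full = np.zeros(n, dtype=complex)
    full[:k + 1], full[-k:] = signature[:k + 1], signature[k + 1:]
    z = np.fft.ifft(full) * n
    return np.stack([z.real, z.imag], axis=1)

theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
contour = np.stack([50 + 30 * np.cos(theta), 50 + 10 * np.sin(theta)], axis=1)
sig = fourier_signature(contour, k=5)            # 11 complex numbers describe it
approx = reconstruct(sig)
print(sig.shape, approx.shape)                   # (11,) (100, 2)
```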

  • Practical Application of an e-Learning Support System Incorporating a Fill-in-the-Blank Question-Type Concept Map Open Access

    Takumi HASEGAWA  Tessai HAYAMA  

     
    PAPER

      Publicized:
    2024/01/15
      Vol:
    E107-D No:4
      Page(s):
    477-485

    E-learning, which can be used anywhere and at any time, is very convenient and has been introduced to improve learning efficiency. However, securing a high completion rate has been a major challenge. The learning style that e-learning requires, namely introspective, deliberate, and logical study, has proven incompatible with many learners, resulting in low completion rates. We therefore developed an e-learning system that incorporates a fill-in-the-blank question-type concept map to deepen learners' understanding of the learning contents while they watch learning videos. The developed system promotes reflective and logical active learning by having learners answer blank labels on concept maps using the video content and the labels associated with the blanks. In a laboratory experiment comparing the developed system with a conventional video-based learning system, we confirmed that it encouraged learners to perform more system operations for rechecking the learning content and to better understand the learning contents while watching the learning video. As the next step, a field experiment is needed to investigate the usefulness and effectiveness of the developed system in actual environments and thereby establish its practicality. In this study, we introduced the developed system into two classes of a university course and investigated the level of understanding of the learning contents, the system operations, and the usefulness of the developed system, comparing them with those in the laboratory experiment. The results showed that, as in the laboratory experiment, the developed system supported understanding of the learning content and each of its functions was rated as useful. On the other hand, the students in the field experiment rated the usefulness of the developed system lower than those in the laboratory experiment; their system operations during learning suggest that fewer students in the field experiment attempted to thoroughly understand the learning contents than in the laboratory experiment.

  • Quality and Transferred Data Based Video Bitrate Control Method for Web-Conferencing Open Access

    Masahiro YOKOTA  Kazuhisa YAMAGISHI  

     
    PAPER-Multimedia Systems for Communications

      Publicized:
    2023/10/13
      Vol:
    E107-B No:1
      Page(s):
    272-285

    In this paper, a quality- and transferred-data-based video bitrate control method for web-conferencing services is proposed, aiming to reduce transferred data by suppressing excessive quality. In web-conferencing services, the video bitrate is generally controlled in accordance with the network conditions (e.g., jitter and packet loss rate) to improve users’ quality. However, with such control, the bitrate becomes excessively high when the network conditions are sufficiently good (e.g., high throughput and low jitter), which increases the transferred data volume. The increased volume of transferred data leads to increased operational costs, such as network costs for service providers. To solve this problem, we developed a method to control the video bitrate of each user so as to achieve the required quality determined by the service provider. The method was implemented in an actual web-conferencing system and evaluated under various conditions. It was shown that the bitrate could be controlled in accordance with the required quality, reducing the transferred data volume.
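
    A toy sketch of quality-targeted bitrate control, assuming a placeholder bitrate-to-quality mapping and a simple proportional update; the actual quality model and control logic of the proposed method are not reproduced here.

```python
# Toy quality-targeted bitrate control: estimate quality from the current
# bitrate, then nudge the bitrate up or down until the required quality is met,
# instead of always using the highest rate the network sustains.
def estimate_quality(bitrate_kbps):
    # Placeholder saturating mapping from bitrate to a 1-5 quality score.
    return 1.0 + 4.0 * bitrate_kbps / (bitrate_kbps + 800.0)

def control_bitrate(bitrate_kbps, required_quality, network_cap_kbps, gain=2000.0):
    error = required_quality - estimate_quality(bitrate_kbps)
    bitrate_kbps += gain * error                  # simple proportional update
    return max(100.0, min(bitrate_kbps, network_cap_kbps))

rate = 2500.0                                     # start high (excessive quality)
for _ in range(10):
    rate = control_bitrate(rate, required_quality=3.8, network_cap_kbps=4000.0)
print(round(rate), round(estimate_quality(rate), 2))   # settles near the target
```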

  • Social Relation Atmosphere Recognition with Relevant Visual Concepts

    Ying JI  Yu WANG  Kensaku MORI  Jien KATO  

     
    PAPER

      Publicized:
    2023/06/02
      Vol:
    E106-D No:10
      Page(s):
    1638-1649

    Social relationships (e.g., couples, opponents) are a foundational part of society. Social relation atmosphere describes the overall interaction environment between social relationships. Discovering the social relation atmosphere can help machines better comprehend human behaviors and improve the performance of social intelligent applications. Most existing research mainly focuses on investigating social relationships while ignoring the social relation atmosphere. Due to the complexity of the expressions in video data and the uncertainty of the social relation atmosphere, it is difficult even to define and evaluate. In this paper, we innovatively analyze the social relation atmosphere in video data. We introduce a Relevant Visual Concept (RVC) from the social relationship recognition task to facilitate social relation atmosphere recognition, because social relationships contain useful information about human interactions and surrounding environments, which are crucial clues for social relation atmosphere recognition. Our approach consists of two main steps: (1) we first generate a group of visual concepts that preserve the inherent social relationship information by utilizing a 3D explanation module; (2) the extracted relevant visual concepts are used to supplement social relation atmosphere recognition. In addition, we present a new dataset based on the existing Video Social Relation Dataset. Each video is annotated with four kinds of social relation atmosphere attributes and one social relationship. We evaluate the proposed method on our dataset. Experiments with various 3D ConvNets and fusion methods demonstrate that the proposed method can effectively improve recognition accuracy compared to end-to-end ConvNets. The visualization results also indicate that essential information in social relationships can be discovered and used to enhance social relation atmosphere recognition.

  • Neural Network-Based Post-Processing Filter on V-PCC Attribute Frames

    Keiichiro TAKADA  Yasuaki TOKUMO  Tomohiro IKAI  Takeshi CHUJOH  

     
    LETTER

      Publicized:
    2023/07/13
      Vol:
    E106-D No:10
      Page(s):
    1673-1676

    Video-based point cloud compression (V-PCC) utilizes video compression technology to efficiently encode dense point clouds, providing state-of-the-art compression performance with a relatively small computational burden. V-PCC converts 3-dimensional point cloud data into three types of 2-dimensional frames, i.e., occupancy, geometry, and attribute frames, and encodes them via video compression. On the other hand, the quality of these frames may be degraded due to video compression. This paper proposes an adaptive neural network-based post-processing filter on attribute frames to alleviate the degradation problem. Furthermore, a novel training method using occupancy frames is studied. The experimental results show average BD-rate gains of 3.0%, 29.3%, and 22.2% for Y, U, and V, respectively.
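
    A minimal PyTorch sketch of an occupancy-aware post-processing filter on attribute frames: a small CNN predicts a residual correction and applies it only to occupied pixels. The architecture is an illustrative stand-in, not the paper's network or training method.

```python
# Sketch of a post-processing filter for decoded V-PCC attribute frames: a small
# CNN takes the decoded attribute frame together with the occupancy frame and
# predicts a residual correction, so unoccupied pixels are left untouched.
# The layer layout is an illustrative stand-in, not the paper's network.
import torch
import torch.nn as nn

class AttributeFilter(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(4, 32, 3, padding=1), nn.ReLU(),   # 3 attr + 1 occupancy
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1),
        )

    def forward(self, attribute, occupancy):
        residual = self.net(torch.cat([attribute, occupancy], dim=1))
        return attribute + residual * occupancy          # refine occupied pixels only

attr = torch.rand(1, 3, 128, 128)                         # decoded attribute frame
occ = (torch.rand(1, 1, 128, 128) > 0.5).float()          # decoded occupancy frame
restored = AttributeFilter()(attr, occ)
print(restored.shape)                                     # torch.Size([1, 3, 128, 128])
```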

  • Decentralized Incentive Scheme for Peer-to-Peer Video Streaming using Solana Blockchain

    Yunqi MA  Satoshi FUJITA  

     
    PAPER-Information Network

      Publicized:
    2023/07/13
      Vol:
    E106-D No:10
      Page(s):
    1686-1693

    Peer-to-peer (P2P) technology has gained popularity as a way to enhance system performance. Nodes in a P2P network work together by providing network resources to one another. In this study, we examine the use of P2P technology for video streaming and develop a distributed incentive mechanism to prevent free-riding. Our proposed solution combines WebTorrent and the Solana blockchain and can be accessed through a web browser. To incentivize uploads, some of the received video chunks are encrypted using AES. Smart contracts on the blockchain are used for third-party verification of uploads and for managing access to the video content. Experimental results on a test network showed that our system can encrypt and decrypt chunks in about 1/40th the time it takes using WebRTC, without affecting the quality of video streaming. Smart contracts were also found to quickly verify uploads in about 860 milliseconds. The paper also explores how to effectively reward virtual points for uploads.
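
    A minimal sketch of the chunk-encryption step, using AES-GCM from the Python cryptography package; key distribution and the Solana smart-contract verification are outside this sketch.

```python
# Minimal AES chunk encryption/decryption sketch: a received video chunk is
# encrypted so the peer must obtain the key (e.g., after its upload is verified
# on-chain) before the chunk becomes playable. Key distribution is not shown.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=128)
aes = AESGCM(key)

def encrypt_chunk(chunk, chunk_id):
    nonce = os.urandom(12)                                  # unique per chunk
    return nonce, aes.encrypt(nonce, chunk, str(chunk_id).encode())

def decrypt_chunk(nonce, sealed, chunk_id):
    return aes.decrypt(nonce, sealed, str(chunk_id).encode())

chunk = os.urandom(16 * 1024)                               # dummy 16 KiB chunk
nonce, sealed = encrypt_chunk(chunk, chunk_id=42)
assert decrypt_chunk(nonce, sealed, chunk_id=42) == chunk
print(len(sealed) - len(chunk), "bytes of authentication overhead")
```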

  • Quantitative Estimation of Video Forgery with Anomaly Analysis of Optical Flow

    Wan Yeon LEE  Yun-Seok CHOI  Tong Min KIM  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2023/05/19
      Vol:
    E106-D No:10
      Page(s):
    1757-1760

    We propose a quantitative measurement technique for video forgery that eliminates the burden of deciding the subtle boundary between normal and tampered patterns. We also propose an automatic adjustment scheme for spatial and temporal target zones, which maximizes the abnormality measurement of forged videos. Evaluation shows that the proposed scheme provides clear detection capability against both inter-frame and intra-frame forgeries.
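
    A minimal sketch of optical-flow anomaly analysis using OpenCV's Farnebäck flow, where a per-frame z-score of the mean flow magnitude stands in for the paper's abnormality measurement.

```python
# Optical-flow anomaly sketch: compute dense Farneback flow between consecutive
# frames, track the mean flow magnitude per frame, and flag transitions whose
# magnitude deviates strongly from the sequence statistics (possible tampering).
import cv2
import numpy as np

def flow_magnitudes(gray_frames):
    mags = []
    for prev, curr in zip(gray_frames, gray_frames[1:]):
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        mags.append(np.linalg.norm(flow, axis=2).mean())
    return np.array(mags)

def anomaly_scores(mags):
    return np.abs(mags - mags.mean()) / (mags.std() + 1e-9)   # per-frame z-score

frames = [np.random.randint(0, 255, (120, 160), np.uint8) for _ in range(10)]
scores = anomaly_scores(flow_magnitudes(frames))
print("suspect frame transitions:", np.where(scores > 2.0)[0])
```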

  • Reconfigurable Pedestrian Detection System Using Deep Learning for Video Surveillance

    M.K. JEEVARAJAN  P. NIRMAL KUMAR  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2023/06/09
      Vol:
    E106-D No:9
      Page(s):
    1610-1614

    We present a reconfigurable deep-learning pedestrian detection system for surveillance that detects people, even with shadows, under different lighting and heavily occluded conditions. This work proposes a region-based CNN combined with CMOS and thermal cameras to obtain human features even under poor lighting conditions. The main advantage of a reconfigurable system over processor-based systems is its high performance and parallelism when processing large amounts of data such as video frames. We discuss the details of the hardware implementation of the proposed real-time pedestrian detection algorithm on a Zynq FPGA. Simulation results show that the proposed integrated approach of the R-CNN architecture with both cameras provides better performance in terms of accuracy, precision, and F1-score. The Zynq FPGA implementation was compared with other works, showing that the proposed architecture offers a good trade-off among quality, accuracy, speed, and resource utilization.

  • An Efficient Reference Image Sharing Method for the Image-Division Parallel Video Encoding Architecture

    Ken NAKAMURA  Yuya OMORI  Daisuke KOBAYASHI  Koyo NITTA  Kimikazu SANO  Masayuki SATO  Hiroe IWASAKI  Hiroaki KOBAYASHI  

     
    PAPER

      Publicized:
    2022/11/29
      Vol:
    E106-C No:6
      Page(s):
    312-320

    This paper proposes an efficient reference image sharing method for the image-division parallel video encoding architecture. The method efficiently reduces the amount of data transfer by using pre-transfer with area prediction and on-demand transfer with a transfer management table. Experimental results show that data transfer can be reduced to 19.8-35.3% of the conventional method on average without major degradation of coding performance. This makes it possible to reduce the required bandwidth of the inter-chip transfer interface by reducing the amount of data transferred.
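
    A toy sketch of the two mechanisms named above, pre-transfer of a predicted area and on-demand transfer guarded by a transfer management table, so that no tile crosses the inter-chip link twice; the data structures are illustrative assumptions.

```python
# Toy reference-sharing sketch: tiles inside a predicted search area are
# pre-transferred; any other tile is fetched on demand, and a transfer table
# remembers what is already present so nothing is transferred twice.
class ReferenceShare:
    def __init__(self):
        self.table = set()          # transfer management table: tiles already held
        self.transfers = 0

    def _fetch(self, tile):
        if tile not in self.table:
            self.transfers += 1     # would move tile data across the chip link
            self.table.add(tile)

    def pretransfer(self, predicted_area):
        for tile in predicted_area:
            self._fetch(tile)

    def read(self, tile):
        self._fetch(tile)           # on-demand path for mispredicted tiles
        return tile

share = ReferenceShare()
share.pretransfer(predicted_area=[(0, 0), (0, 1), (1, 0), (1, 1)])
for tile in [(0, 0), (1, 1), (2, 2)]:           # motion search touches these tiles
    share.read(tile)
print("tiles transferred:", share.transfers)    # 5, not 7
```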

  • Ultra-Low-Latency 8K-Video-Transmission System Utilizing Whitebox Transponder with Disaggregation Configuration

    Yasuhiro MOCHIDA  Daisuke SHIRAI  Koichi TAKASUGI  

     
    PAPER

      Publicized:
    2022/12/16
      Vol:
    E106-C No:6
      Page(s):
    321-330

    The demand for low-latency transmission of large-capacity video, such as 4K and 8K, is increasing for various applications such as live-broadcast program production, sports viewing, and medical care. In the broadcast industry, low-latency video transmission is required in remote production, an emerging workflow for outside broadcasting. Ideal remote production requires long-distance transmission of uncompressed 8K60p video signals, ultra-low latency of less than 16.7 ms, and PTP synchronization over the network; however, no existing video-transmission system fully satisfies these requirements. We focused on optical transport technologies capable of long-distance and large-capacity communication, which were previously used only in telecommunication-carrier networks. To fully utilize optical transport technologies, we propose the first video-transmission system architecture capable of sending and receiving uncompressed 8K video directly over large-capacity optical paths. A transmission timing control for seamless protection switching is also proposed to improve tolerance to network impairment. As a means of implementation, we focused on the whitebox transponder, an emerging type of optical transponder with a disaggregation configuration. The disaggregation configuration enables flexible configuration changes, additional implementations, and cost reduction by separating the various functions of optical transponders and controlling them with a standardized interface. We implemented the ultra-low-latency video-transmission system utilizing the whitebox transponder Galileo. We developed a hardware plug-in unit for video transmission (VideoPIU) and software to control the VideoPIU. In video-transmission experiments over 120 km of optical fiber, we confirmed that the system can stably transmit uncompressed 8K60p video with 1.3 ms latency while maintaining the highly accurate PTP synchronization over the optical network that is required for ideal remote production. In addition, an application to immersive sports viewing is also presented. Consequently, excellent potential to support unprecedented applications is demonstrated.

  • A Shallow SNN Model for Embedding Neuromorphic Devices in a Camera for Scalable Video Surveillance Systems

    Kazuhisa FUJIMOTO  Masanori TAKADA  

     
    PAPER-Biocybernetics, Neurocomputing

      Publicized:
    2023/03/13
      Vol:
    E106-D No:6
      Page(s):
    1175-1182

    Neuromorphic computing with a spiking neural network (SNN) is expected to provide a complement or alternative to deep learning in the future. The challenge is to develop optimal SNN models, algorithms, and engineering technologies for real use cases. As a potential use case for neuromorphic computing, we have investigated person monitoring and worker support with a video surveillance system, given its status as a proven deep neural network (DNN) use case. In the future, to increase the number of cameras in such a system, we will need a scalable approach that embeds only a few neuromorphic devices in a camera. Specifically, this will require a shallow SNN model that can be implemented in a few neuromorphic devices while providing recognition accuracy comparable to a DNN with the same configuration. A shallow SNN was built by converting ResNet, a proven DNN for image recognition, and a new configuration of the shallow SNN model was developed to improve its accuracy. The proposed shallow SNN model was evaluated on a few neuromorphic devices, achieving a recognition accuracy of more than 80% with about 1/130 the energy consumption of a GPU running a DNN with the same configuration as the SNN.

  • Metadata-Based Quality-Estimation Model for Tile-Based Omnidirectional Video Streaming Open Access

    Yuichiro URATA  Masanori KOIKE  Kazuhisa YAMAGISHI  Noritsugu EGI  

     
    PAPER-Multimedia Systems for Communications

      Publicized:
    2022/11/15
      Vol:
    E106-B No:5
      Page(s):
    478-488

    In this paper, a metadata-based quality-estimation model is proposed for tile-based omnidirectional video streaming services, aiming to realize quality monitoring during service provision. In tile-based omnidirectional video (ODV) streaming services, the ODV is divided into tiles, and high-quality and low-quality tiles are distributed in accordance with the user's viewing direction. When the user changes the viewing direction, the user temporarily watches video composed of the low-quality tiles. In addition, the longer the time (delay time) until the high-quality tiles for the new viewing direction are downloaded, the longer the viewing time of video with the low-quality tiles; thus, the delay time affects quality. Therefore, the video quality of the low-quality tiles and the delay time significantly impact overall quality, and these factors need to be considered in the quality-estimation model. We develop quality-estimation models by extending conventional quality-estimation models for 2D adaptive streaming. Based on the results of subjective quality-evaluation experiments, we also show that the quality-estimation model using the bitrate, resolution, and frame rate of the high- and low-quality tiles together with the delay time has sufficient estimation accuracy.

  • Effective Language Representations for Danmaku Comment Classification in Nicovideo

    Hiroyoshi NAGAO  Koshiro TAMURA  Marie KATSURAI  

     
    PAPER

      Publicized:
    2023/01/16
      Vol:
    E106-D No:5
      Page(s):
    838-846

    Danmaku commenting has become popular for co-viewing on video-sharing platforms such as Nicovideo. However, many irrelevant comments usually contaminate the quality of the information provided by videos. Such an information-pollution problem can be solved by a comment classifier trained with an abstention option, which detects comments whose video categories are unclear. To improve the performance of this classification task, this paper presents Nicovideo-specific language representations. Specifically, we used sentences from Nicopedia, a Japanese online encyclopedia of entities that possibly appear in Nicovideo contents, to pre-train a bidirectional encoder representations from Transformers (BERT) model. The resulting model, named Nicopedia BERT, is then fine-tuned so that it can determine whether a given comment falls into any of the predefined categories. The experiments conducted on Nicovideo comment data demonstrated the effectiveness of Nicopedia BERT compared with existing BERT models pre-trained using Wikipedia or tweets. We also evaluated the performance of each model on an additional sentiment classification task, and the obtained results implied the applicability of Nicopedia BERT as a feature extractor for other social media text.
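
    A minimal Hugging Face sketch of classification with an abstention option: the classifier abstains when the top softmax probability falls below a threshold. The checkpoint path is a hypothetical placeholder, not a published Nicopedia BERT identifier.

```python
# Classification-with-abstention sketch: score a danmaku comment with a
# fine-tuned BERT classifier and abstain when the top softmax probability is
# below a threshold. "path/to/nicopedia-bert" is a placeholder checkpoint path.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_DIR = "path/to/nicopedia-bert"          # hypothetical fine-tuned checkpoint
tokenizer = AutoTokenizer.from_pretrained(MODEL_DIR)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_DIR)

def classify_or_abstain(comment, threshold=0.7):
    inputs = tokenizer(comment, return_tensors="pt", truncation=True)
    with torch.no_grad():
        probs = model(**inputs).logits.softmax(dim=-1)[0]
    confidence, label = probs.max(dim=-1)
    if confidence.item() < threshold:
        return None                            # abstain: video category unclear
    return model.config.id2label[label.item()]

print(classify_or_abstain("8888888888"))       # typical applause-style comment
```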

  • Wider Depth Dynamic Range Using Occupancy Map Correction for Immersive Video Coding

    Sung-Gyun LIM  Dong-Ha KIM  Kwan-Jung OH  Gwangsoon LEE  Jun Young JEONG  Jae-Gon KIM  

     
    LETTER-Image Processing and Video Processing

      Publicized:
    2023/02/10
      Vol:
    E106-D No:5
      Page(s):
    1102-1105

    The MPEG Immersive Video (MIV) standard for immersive video coding provides users with an immersive sense of 6 degrees of freedom (6DoF) of view position and orientation by efficiently compressing multiview video acquired from different positions in a limited 3D space. In the MIV reference software called Test Model for Immersive Video (TMIV), the number of pixels to be compressed and transmitted is reduced by removing inter-view redundancy. Therefore, the occupancy information that indicates whether each pixel is valid or invalid must also be transmitted to the decoder for viewport rendering. The occupancy information is embedded in a geometry atlas and transmitted to the decoder side. At this time, to prevent occupancy errors that may occur during the compression of the geometry atlas, a guard band is set in the depth dynamic range. Reducing this guard band can improve the rendering quality by allowing a wider dynamic range for depth representation. Therefore, in this paper, based on the analysis of occupancy error of the current TMIV, two methods of occupancy error correction which allow depth dynamic range extension in the case of computer-generated (CG) sequences are presented. The experimental results show that the proposed method gives an average 2.2% BD-rate bit saving for CG compared to the existing TMIV.
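
    A toy NumPy sketch of embedding occupancy in the depth (geometry) dynamic range: codewords below a guard band mean "unoccupied", and shrinking the guard band widens the range left for depth at the risk of occupancy errors after lossy coding. The threshold values are illustrative only.

```python
# Illustrative occupancy-in-geometry sketch: depth codewords below the guard
# band mean "unoccupied"; everything above is a valid depth. A smaller guard
# band leaves a wider range for depth, at the risk of occupancy errors after
# lossy coding. The specific threshold values here are illustrative only.
import numpy as np

def embed(depth, occupied, guard=64, max_code=1023):
    # Map valid depth into [guard, max_code]; unoccupied pixels get code 0.
    code = guard + np.round(depth * (max_code - guard)).astype(int)
    return np.where(occupied, code, 0)

def extract(code, guard=64, max_code=1023):
    occupied = code >= guard // 2            # mid-guard threshold tolerates noise
    depth = np.clip((code - guard) / (max_code - guard), 0.0, 1.0)
    return occupied, np.where(occupied, depth, 0.0)

depth = np.random.rand(4, 4)                  # normalized depth in [0, 1]
occ = np.random.rand(4, 4) > 0.3
noisy = embed(depth, occ) + np.random.randint(-8, 9, (4, 4))   # coding noise
rec_occ, rec_depth = extract(noisy)
print("occupancy errors:", int((rec_occ != occ).sum()))
```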

1-20 hits (613 hits)