
Author Search Result

[Author] Lei SUN (9 hits)

  • Two Classes of 1-Resilient Prime-Variable Rotation Symmetric Boolean Functions

    Lei SUN  Fang-Wei FU  Xuan GUANG  

     
    LETTER-Cryptography and Information Security

    Vol: E100-A No:3  Page(s): 902-907

    Recent research has shown that the class of rotation symmetric Boolean functions is useful in cryptography. In this paper, for an odd prime p, two sufficient conditions for p-variable rotation symmetric Boolean functions to be 1-resilient are obtained, and several concrete constructions satisfying these conditions are presented. This is the first time that resilient rotation symmetric Boolean functions have been constructed systematically. In particular, we construct a class of 2-resilient rotation symmetric Boolean functions for p=2m+1 with m ≥ 4. Moreover, several classes of first-order correlation immune rotation symmetric Boolean functions are also obtained.
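
    The notions in this abstract can be made concrete with a short check: the rough Python sketch below (not from the paper) verifies whether a given Boolean function is rotation symmetric and m-resilient, using the Walsh-transform characterization of resiliency. The toy function at the end is only an illustration, not one of the paper's constructions.

```python
from itertools import product

def walsh(truth_table, n):
    """Walsh transform W_f(w) = sum_x (-1)^(f(x) XOR w.x) over x in GF(2)^n."""
    W = {}
    for w in product((0, 1), repeat=n):
        s = 0
        for i, x in enumerate(product((0, 1), repeat=n)):
            dot = sum(wi & xi for wi, xi in zip(w, x)) & 1
            s += (-1) ** (truth_table[i] ^ dot)
        W[w] = s
    return W

def is_rotation_symmetric(f, n):
    """f is invariant under a cyclic rotation of its input variables."""
    return all(f(x) == f(x[1:] + x[:1]) for x in product((0, 1), repeat=n))

def is_m_resilient(f, n, m):
    """m-resilient <=> W_f(w) = 0 for every w of Hamming weight <= m."""
    tt = [f(x) for x in product((0, 1), repeat=n)]
    W = walsh(tt, n)
    return all(W[w] == 0 for w in W if sum(w) <= m)

# Toy example (not a construction from the paper): for p = 5 variables,
# f(x) = x1 + x2 + x3 + x4 + x5 (mod 2) is rotation symmetric and 1-resilient.
f = lambda x: sum(x) & 1
print(is_rotation_symmetric(f, 5), is_m_resilient(f, 5, 1))
```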

  • Fast Mode and Depth Decision for HEVC Intra Prediction Based on Edge Detection and Partition Reconfiguration

    Gaoxing CHEN  Lei SUN  Zhenyu LIU  Takeshi IKENAGA  

     
    PAPER

    Vol: E97-A No:11  Page(s): 2130-2138

    High efficiency video coding (HEVC) is a video compression standard that outperforms its predecessor H.264/AVC by doubling the compression efficiency. To enhance intra prediction accuracy, HEVC uses 35 intra prediction modes in prediction units (PUs) with partition sizes ranging from 4 × 4 to 64 × 64. However, this multitude of prediction modes dramatically increases the encoding complexity. This paper proposes a fast mode- and depth-decision algorithm based on edge detection and partition reconfiguration to alleviate the large computational complexity of intra prediction with trivial degradation in accuracy. For the mode decision, we propose pixel gradient statistics (PGS) and mode refinement (MR). PGS uses pixel gradient information to assist in selecting the prediction mode after the rough mode decision (RMD). MR uses neighboring mode information to select the best PU mode (BPM). For the depth decision, we propose a partition reconfiguration algorithm that replaces the original partitioning order with a more reasonable structure, using the smoothness of the coding unit as the criterion for deciding the prediction depth. Smoothness detection is based on the PGS result. Experimental results show that the proposed method saves about 41.50% of the original processing time with little loss in coding gain (the BD bit rate increases by 0.66% and the BD-PSNR decreases by 0.060 dB).
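
    As a rough illustration of the pixel-gradient idea (not the paper's exact PGS algorithm), the sketch below uses Sobel gradients over a PU to vote for a few candidate angular modes, which an encoder could then hand to RMD/RDO; the angle-to-mode quantization and the candidate count are assumptions.

```python
import numpy as np

def gradient_statistics(pu):
    """Sobel gradients over a prediction unit (PU): per-pixel magnitude and
    direction.  Illustrative only -- not the paper's exact PGS."""
    gx = np.zeros_like(pu, dtype=float)
    gy = np.zeros_like(pu, dtype=float)
    # plain Sobel, ignoring the 1-pixel border
    gx[1:-1, 1:-1] = (pu[1:-1, 2:] - pu[1:-1, :-2]) * 2 + \
                     (pu[:-2, 2:] - pu[:-2, :-2]) + (pu[2:, 2:] - pu[2:, :-2])
    gy[1:-1, 1:-1] = (pu[2:, 1:-1] - pu[:-2, 1:-1]) * 2 + \
                     (pu[2:, :-2] - pu[:-2, :-2]) + (pu[2:, 2:] - pu[:-2, 2:])
    return np.hypot(gx, gy), np.arctan2(gy, gx)

def candidate_angular_modes(pu, num_candidates=3):
    """Vote each pixel's edge direction into a coarse bin over the 33 HEVC
    angular modes (2..34) and keep the strongest bins as candidates.  The
    angle-to-mode mapping is a crude uniform quantization for illustration."""
    mag, ang = gradient_statistics(pu.astype(float))
    edge = (ang + np.pi / 2) % np.pi                      # edge dir, [0, pi)
    bins = 2 + np.floor(edge / np.pi * 33).astype(int)    # pseudo-modes 2..34
    hist = np.zeros(35)
    np.add.at(hist, bins.ravel(), mag.ravel())            # magnitude-weighted votes
    ranked = np.argsort(hist)[::-1]
    return [int(m) for m in ranked[:num_candidates]] + [0, 1]  # always try planar/DC

block = np.random.randint(0, 256, (8, 8))
print(candidate_angular_modes(block))
```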

  • Low-Complexity Coarse-Level Mode-Mapping Based H.264/AVC to H.264/SVC Spatial Transcoding for Video Conferencing

    Lei SUN  Jie LENG  Jia SU  Yiqing HUANG  Hiroomi MOTOHASHI  Takeshi IKENAGA  

     
    PAPER-Video Processing

    Vol: E95-D No:5  Page(s): 1313-1323

    Scalable Video Coding (SVC) was standardized as an extension of H.264/AVC with the intention of providing flexible adaptation to heterogeneous networks and different end-user requirements, which offers great scalability in multi-point applications such as video conferencing. However, because of the existence of H.264/AVC-based systems, transcoding between AVC and SVC becomes necessary. Most existing works focus on temporal transcoding, quality transcoding or SVC-to-AVC spatial transcoding, while the straightforward re-encoding method incurs a high computational cost. This paper proposes a low-complexity AVC-to-SVC spatial transcoder based on coarse-level mode mapping for video conferencing scenes. First, to omit unnecessary motion estimation (ME) for layers with reduced resolution, an ME skipping scheme based on the AVC mode distribution is proposed together with an adaptive search range. Then a probability-profile based scheme is proposed for further mode skipping. After that, three coarse-level mode-mapping methods are presented for fast mode decision, and the adaptive usage of the three methods is discussed. Finally, motion vector (MV) refinement is introduced to further reduce lower-layer encoding time. For the top layer, direct encapsulation is proposed to preserve better quality, and another scheme involving inter-layer predictions is also provided for bandwidth-critical applications. Simulation results show that the proposed transcoder achieves up to 92.6% time reduction without significant coding efficiency loss compared to the re-encoding method.
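
    The sketch below illustrates the flavor of coarse-level mode mapping and the adaptive search range, assuming four co-located AVC macroblocks map to one macroblock of the reduced-resolution SVC layer; the candidate tables and the range formula are illustrative assumptions, not the ones derived in the paper.

```python
# Illustrative coarse-level mode mapping for AVC-to-SVC spatial transcoding.
# Mode names and candidate sets are assumptions, not the paper's tables.
COARSE_MAP = {
    "SKIP":   ["SKIP", "P16x16"],
    "P16x16": ["P16x16", "SKIP"],
    "P16x8":  ["P16x16", "P16x8"],
    "P8x16":  ["P16x16", "P8x16"],
    "P8x8":   ["P8x8", "P16x16"],
    "INTRA":  ["INTRA", "P16x16"],
}

def candidate_modes(avc_modes):
    """Union of candidates suggested by the four co-located AVC MBs,
    ordered by how often each candidate is suggested."""
    votes = {}
    for m in avc_modes:
        for cand in COARSE_MAP.get(m, ["P16x16"]):
            votes[cand] = votes.get(cand, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

def search_range(avc_mvs, base=16):
    """Adaptive ME search range: shrink the window when the co-located AVC
    motion vectors are small and consistent (quarter-pel units assumed)."""
    if not avc_mvs:
        return base
    spread = max(max(abs(x), abs(y)) for x, y in avc_mvs)
    return min(base, max(4, spread // 2 + 2))

print(candidate_modes(["P16x16", "SKIP", "P16x16", "P8x8"]))
print(search_range([(6, -2), (4, 0), (5, -1), (7, -3)]))
```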

  • Low-Complexity Hybrid-Domain H.264/SVC to H.264/AVC Spatial Transcoding with Drift Compensation for Videoconferencing

    Lei SUN  Zhenyu LIU  Takeshi IKENAGA  

     
    PAPER-Image Processing

    Vol: E96-A No:11  Page(s): 2142-2153

    As an extension of H.264/AVC, Scalable Video Coding (SVC) provides the ability to adapt to heterogeneous networks and end-user requirements, which offers great scalability in multi-point applications such as videoconferencing. However, transcoding between SVC and AVC becomes necessary due to the existence of legacy AVC-based systems. The straightforward full re-encoding method incurs a high computational cost, and fast SVC-to-AVC spatial transcoding techniques have not yet been thoroughly investigated. This paper proposes a low-complexity hybrid-domain SVC-to-AVC spatial transcoder with drift compensation, which provides even better coding efficiency than the full re-encoding method. The macroblocks (MBs) of the input SVC bitstream are divided into two types, each suited to pixel- or transform-domain processing respectively. In the pixel-domain transcoding, a fast re-encoding method is proposed based on mode mapping and motion vector (MV) refinement. In the transform-domain transcoding, the quantized transform coefficients, together with other motion data, are reused directly to avoid re-quantization loss. The drift caused by the proposed transcoder is removed by compensation techniques for I frames and P frames respectively. Simulation results show that the proposed transcoder achieves an average time reduction of 96.4% compared with the full re-encoding method and outperforms the reference methods in coding efficiency.
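
    The hybrid-domain routing can be pictured as below: each macroblock is sent either to a transform-domain path that copies quantized coefficients and motion data, or to a pixel-domain fast re-encoding path. The classification criteria shown are illustrative assumptions rather than the paper's exact rules.

```python
# Minimal sketch of the hybrid-domain idea.  Classification rules, mode names
# and data layout are assumptions for illustration only.
AVC_COMPATIBLE_MODES = {"SKIP", "P16x16", "P16x8", "P8x16", "P8x8", "I16x16", "I4x4"}

def classify_mb(mb):
    """Return 'transform' when the SVC data can be carried over directly;
    otherwise fall back to pixel-domain re-encoding."""
    if mb["uses_interlayer_prediction"]:       # AVC has no inter-layer tools
        return "pixel"
    if mb["mode"] not in AVC_COMPATIBLE_MODES:
        return "pixel"
    return "transform"

def pixel_domain_reencode(mb):
    # placeholder for the fast re-encoding path (decode, map mode, refine MV)
    return {"mode": mb["mode"], "reencoded": True}

def transcode_mb(mb):
    if classify_mb(mb) == "transform":
        # reuse quantized coefficients + MVs (no re-quantization loss); drift
        # is handled separately by the I/P-frame compensation step
        return {"coeff": mb["coeff"], "mv": mb["mv"], "mode": mb["mode"]}
    return pixel_domain_reencode(mb)

example = {"uses_interlayer_prediction": False, "mode": "P16x16",
           "coeff": [0] * 256, "mv": (3, -1)}
print(transcode_mb(example)["mode"])
```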

  • Content Based Coarse to Fine Adaptive Interpolation Filter for High Resolution Video Coding

    Jia SU  Yiqing HUANG  Lei SUN  Shinichi SAKAIDA  Takeshi IKENAGA  

     
    PAPER-Image

    Vol: E94-A No:10  Page(s): 2013-2021

    With the increasing demand for high video quality and large image sizes, the adaptive interpolation filter (AIF) addresses these issues and copes with time-varying effects, yielding higher coding efficiency than the recent H.264 standard. However, most current AIF algorithms operate at either the frame level or the macroblock (MB) level, which is not flexible enough for different video contents in a real codec system, and most of them suffer from severe time consumption. This paper proposes a content-based coarse-to-fine AIF algorithm, which adapts to video content by adding different filters and conditions from coarse to fine. The overall algorithm consists of three schemes: frame-level interpolation skipping based on frequency analysis, region-level interpolation based on motion vector modeling, and macroblock-level interpolation based on edge detection. Experiments show that AIF is more effective on high-frequency frames; therefore, a condition is set to skip low-frequency frames when generating AIF coefficients. Moreover, region-level interpolation is designed by utilizing the motion vector information of previous frames, and Laplacian-of-Gaussian based macroblock-level interpolation is proposed to drive the interpolation process from coarse to fine. Six 720p and six 1080p video sequences covering most typical video types were tested to evaluate the proposed algorithm. The experimental results show that, compared with the Key Technology Areas (KTA) Enhanced AIF algorithm, the proposed algorithm reduces total encoding time by about 41% for 720p and 25% for 1080p sequences on average, while obtaining a BD-PSNR gain of up to 0.004 dB and a BD bit-rate reduction of 3.122%.
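
    The macroblock-level step can be illustrated with a small sketch: a Laplacian-of-Gaussian response decides whether a 16×16 macroblock is edge-rich enough to go through the adaptive filter path. The 5×5 LoG kernel and the threshold are generic choices, not the paper's tuned parameters.

```python
import numpy as np

# Common 5x5 Laplacian-of-Gaussian approximation (coefficients sum to zero).
LOG_5x5 = np.array([[ 0,  0, -1,  0,  0],
                    [ 0, -1, -2, -1,  0],
                    [-1, -2, 16, -2, -1],
                    [ 0, -1, -2, -1,  0],
                    [ 0,  0, -1,  0,  0]], dtype=float)

def log_response(mb):
    """Mean absolute LoG response over the valid interior of a macroblock."""
    h, w = mb.shape
    total = 0.0
    for i in range(2, h - 2):
        for j in range(2, w - 2):
            total += abs(np.sum(mb[i-2:i+3, j-2:j+3] * LOG_5x5))
    return total / ((h - 4) * (w - 4))

def use_adaptive_filter(mb, threshold=40.0):
    """Edge-rich MBs take the adaptive interpolation path; smooth MBs keep
    the standard H.264 6-tap filter.  Threshold is an illustrative value."""
    return log_response(mb.astype(float)) > threshold

mb = np.random.randint(0, 256, (16, 16))
print(use_adaptive_filter(mb))
```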

  • A Mode Mapping and Optimized MV Conjunction Based H.264/SVC to H.264/AVC Transcoder with Medium-Grain Quality Scalability for Videoconferencing

    Lei SUN  Zhenyu LIU  Takeshi IKENAGA  

     
    PAPER

    Vol: E97-A No:2  Page(s): 501-509

    Scalable Video Coding (SVC) is an extension of H.264/AVC that aims to provide the ability to adapt to heterogeneous networks and requirements. It offers great flexibility for bitstream adaptation in multi-point applications such as videoconferencing. However, transcoding between SVC and AVC is necessary due to the existence of legacy AVC-based systems. The straightforward re-encoding method incurs a high computational cost, and delay-sensitive applications like videoconferencing require a much faster transcoding scheme. This paper proposes a three-stage fast SVC-to-AVC transcoder with medium-grain quality scalability (MGS) for videoconferencing applications. A hierarchical-P structured SVC bitstream is transcoded into an IPPP structured AVC bitstream with multiple reference frames. In the first stage, mode decision is accelerated by the proposed SVC-to-AVC mode mapping scheme. In the second stage, INTER motion estimation is accelerated by an optimized motion vector (MV) conjunction method that predicts the MV with a reduced search range. In the last stage, Hadamard-based all-zero-block (AZB) detection is utilized for early termination. Simulation results show that the proposed transcoder achieves coding efficiency very close to the optimal result while saving 89.6% of the computational time on average.
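
    A minimal sketch of the MV-conjunction idea is given below: motion vectors along the hierarchical-P prediction chain are accumulated to predict the MV toward the new reference picture, and only a small window around that predictor is searched. The data layout and refinement range are assumptions for illustration.

```python
def conjoined_mv(frame_idx, target_ref, mv_chain):
    """mv_chain[i] = (ref_frame, (mvx, mvy)) for frame i.  Follow references
    from frame_idx back to target_ref, summing the MVs along the chain."""
    mvx, mvy = 0, 0
    cur = frame_idx
    while cur != target_ref:
        ref, (dx, dy) = mv_chain[cur]
        mvx, mvy = mvx + dx, mvy + dy
        cur = ref
    return mvx, mvy

def refine(predictor, search=2):
    """Candidate positions in a small window around the conjoined predictor
    (the best match would be picked by SAD/RD cost in a real encoder)."""
    px, py = predictor
    return [(px + dx, py + dy) for dx in range(-search, search + 1)
                               for dy in range(-search, search + 1)]

# Frame 4 references frame 2, frame 2 references frame 0 (hierarchical-P);
# predict frame 4's MV w.r.t. frame 0 for the IPPP output bitstream.
mv_chain = {4: (2, (3, -1)), 2: (0, (2, 2))}
predictor = conjoined_mv(4, 0, mv_chain)
print(predictor, len(refine(predictor)))   # (5, 1) and a 25-point window
```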

  • Architecture, Implementation, and Experiments of Programmable Network Using OpenFlow Open Access

    Hideyuki SHIMONISHI  Shuji ISHII  Lei SUN  Yoshihiko KANAUMI  

     
    INVITED PAPER

    Vol: E94-B No:10  Page(s): 2715-2722

    We propose a flexible and scalable architecture for an OpenFlow network controller platform. The OpenFlow technology was proposed as a means for researchers, network service creators, and others to easily design, test, and virtually deploy their innovative ideas in a large network infrastructure, thereby accelerating research activities on Future Internet architectures. The technology enables the independent evolution of the network control plane and the data plane. Rather than placing programmability within each network node, the separated OpenFlow controller provides network control through pluggable software. Our proposed network controller architecture enables researchers to use their own software to control their own virtual networks. Flexibility and scalability were achieved by designing the network controller as a modularized and distributed system on a cluster of servers. Testing showed that a group of servers can efficiently cooperate to serve as a scalable OpenFlow controller. Testing on the nationwide JGN2plus network demonstrated that high-definition video can be delivered through OpenFlow-based point-to-point and point-to-multipoint paths.
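
    Purely as a generic illustration of the modular, distributed controller pattern described here (not the paper's implementation and not a real OpenFlow controller API), the sketch below partitions switches across a cluster of controller servers and dispatches events to pluggable control modules; all names are hypothetical.

```python
import hashlib

SERVERS = ["ctrl-a", "ctrl-b", "ctrl-c"]   # hypothetical controller cluster

def owner_server(datapath_id: int) -> str:
    """Deterministically assign each switch (datapath ID) to one server."""
    digest = hashlib.sha1(str(datapath_id).encode()).hexdigest()
    return SERVERS[int(digest, 16) % len(SERVERS)]

class ControllerCore:
    """Dispatches switch events to whichever control modules are plugged in."""
    def __init__(self):
        self.modules = []

    def register(self, module):
        self.modules.append(module)

    def on_packet_in(self, datapath_id, packet):
        for m in self.modules:
            m.handle_packet_in(datapath_id, packet)

class LearningSwitchModule:
    """Example pluggable module: a trivial per-switch MAC learning table."""
    def __init__(self):
        self.table = {}

    def handle_packet_in(self, datapath_id, packet):
        self.table.setdefault(datapath_id, {})[packet["src"]] = packet["in_port"]

core = ControllerCore()
core.register(LearningSwitchModule())
core.on_packet_in(0x1, {"src": "00:11:22:33:44:55", "in_port": 3})
print(owner_server(0x1))
```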

  • On the Nonlinearity and Affine Equivalence Classes of C-F Functions

    Lei SUN  Fangwei FU  Xuan GUANG  

     
    LETTER-Cryptography and Information Security

    Vol: E99-A No:6  Page(s): 1251-1254

    Since 2008, three different classes of Boolean functions with optimal algebraic immunity have been proposed by Carlet and Feng [2], Wang et al. [8] and Chen et al. [3]. We call them C-F functions, W-P-K-X functions and C-T-Q functions for short. In this paper, we propose three affine-equivalent classes of Boolean functions that contain the C-F functions, the W-P-K-X functions and the C-T-Q functions as subclasses, respectively. Based on the affine equivalence relation, we construct more classes of Boolean functions with optimal algebraic immunity. Moreover, we deduce a new lower bound on the nonlinearity of C-F functions, which is better than all previously known bounds.
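
    Since the letter's main quantitative result is a nonlinearity bound, the sketch below shows how the nonlinearity of a concrete Boolean function is computed from its truth table via a fast Walsh-Hadamard transform; the example function is an arbitrary bent function, not a C-F function.

```python
def nonlinearity(truth_table):
    """nl(f) = 2^(n-1) - max_w |W_f(w)| / 2, with W_f obtained by an in-place
    fast Walsh-Hadamard transform of the (-1)^f(x) sequence."""
    n_points = len(truth_table)                  # must be 2^n
    w = [(-1) ** b for b in truth_table]
    step = 1
    while step < n_points:
        for start in range(0, n_points, 2 * step):
            for i in range(start, start + step):
                a, b = w[i], w[i + step]
                w[i], w[i + step] = a + b, a - b
        step *= 2
    return n_points // 2 - max(abs(v) for v in w) // 2

# 4-variable example: f(x) = x0*x1 XOR x2*x3 is bent, so its nonlinearity
# is 2^(n-1) - 2^(n/2 - 1) = 6.
tt = [((x & 1) & ((x >> 1) & 1)) ^ (((x >> 2) & 1) & ((x >> 3) & 1))
      for x in range(16)]
print(nonlinearity(tt))   # 6
```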

  • A Drift-Constrained Frequency-Domain Ultra-Low-Delay H.264/SVC to H.264/AVC Transcoder with Medium-Grain Quality Scalability for Videoconferencing

    Lei SUN  Zhenyu LIU  Takeshi IKENAGA  

     
    PAPER

    Vol: E96-A No:6  Page(s): 1253-1263

    Scalable Video Coding (SVC) is an extension of H.264/AVC that aims to provide the ability to adapt to heterogeneous networks and requirements. It offers great flexibility for bitstream adaptation in multi-point applications such as videoconferencing. However, transcoding between SVC and AVC is necessary due to the existence of legacy AVC-based systems. The straightforward re-encoding method incurs a high computational cost, and delay-sensitive applications like videoconferencing require a much faster transcoding scheme. This paper proposes an ultra-low-delay SVC-to-AVC MGS (medium-grain quality scalability) transcoder for videoconferencing applications. Transcoding is performed entirely in the frequency domain with partial decoding/encoding in order to achieve a significant speed-up. Three fast frequency-domain transcoding methods are proposed for macroblocks with different coding modes in non-KEY pictures. KEY pictures are transcoded by reusing the base-layer motion data, and error propagation is constrained between KEY pictures. Simulation results show that the proposed transcoder achieves an average speed-up of 38.5 times over the re-encoding method, while introducing only 0.71 dB of BD-PSNR coding quality loss for videoconferencing sequences.
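
    The frequency-domain processing can be illustrated by the requantization step below, which maps quantized SVC coefficients to AVC coefficients without an inverse transform. The scalar quantizer used here is a simplification of the real H.264 quantization (which uses per-frequency scaling and QP%6 tables) and is not the paper's exact method.

```python
def qstep(qp):
    """H.264 quantization step size roughly doubles every 6 QP values."""
    return 2 ** (qp / 6.0)

def requantize_block(levels, qp_in, qp_out):
    """Dequantize with the SVC QP and requantize with the AVC QP, entirely
    in the transform domain (no inverse transform / reconstruction)."""
    out = []
    for level in levels:
        coeff = level * qstep(qp_in)              # approximate reconstruction
        out.append(int(round(coeff / qstep(qp_out))))
    return out

# If qp_out == qp_in the levels pass through unchanged, which is the
# zero-loss path such a transcoder would prefer whenever possible.
levels = [12, -3, 0, 1, 0, 0, -1, 0]
print(requantize_block(levels, qp_in=28, qp_out=32))
```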