
Keyword Search Result

[Keyword] real-time video (7 hits)

1-7 hits
  • Tile-Image Merging and Delivering for Virtual Camera Services on Tiled-Display for Real-Time Remote Collaboration

    Giseok CHOE  Jongho NANG  

     
    PAPER-Image Processing and Video Processing

    Vol: E93-D No:7  Page(s): 1944-1956

    The tiled-display system has been used as a Computer Supported Cooperative Work (CSCW) environment, in which multiple local (and/or remote) participants cooperate using shared applications whose outputs are displayed on a large-scale, high-resolution tiled display controlled by a cluster of PCs, one PC per display. In order to make the collaboration effective, each remote participant should be aware of all CSCW activities on the tiled-display system in real time. This paper presents a mechanism for capturing all activities on the tiled-display system and delivering them to remote participants in real time. In the proposed mechanism, the screen images of all PCs are periodically captured and delivered to the Merging Server, which maintains separate buffers to store the captured images from the PCs. The mechanism selects one tile image from each buffer, merges the images to make a screenshot of the whole tiled display, clips a Region of Interest (ROI), compresses it, and streams it to remote participants in real time. A technical challenge in the proposed mechanism is how to select a set of tile images, one from each buffer, for merging, so that tile images displayed at the same time on the tiled display are properly merged together. This paper presents three selection algorithms: a sequential selection algorithm, a capturing-time-based algorithm, and a capturing-time and visual-consistency-based algorithm. It also proposes a mechanism for providing several virtual cameras on the tiled-display system to remote participants by concurrently clipping several different ROIs from the same merged tiled-display images and delivering them after compression with the video encoders requested by the remote participants. By interactively changing and resizing his/her own ROI, a remote participant can effectively monitor the activities on the tiled display. Experiments on a tiled-display system with 32 tiles show that the proposed merging algorithm can build a tiled-display image stream synchronously, and the ROI-based clipping and delivering mechanism can provide individual views of the tiled-display system to multiple remote participants in real time.
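
    As a rough sketch of the capturing-time-based selection described above, one tile can be picked from each PC's buffer so that all selected tiles were captured as close as possible to a common reference time; the class and function names (CapturedTile, select_tiles) and the buffer layout are illustrative assumptions, not the paper's implementation:

      from dataclasses import dataclass
      from typing import List

      @dataclass
      class CapturedTile:
          timestamp: float  # capture time reported by the tile PC (seconds)
          image: bytes      # encoded screen capture of one tile

      def select_tiles(buffers: List[List[CapturedTile]]) -> List[CapturedTile]:
          """Pick one tile per buffer so all selected tiles were captured as
          close as possible to a common reference time. Each buffer is assumed
          to be ordered oldest capture first."""
          if any(not buf for buf in buffers):
              raise ValueError("every tile buffer must hold at least one capture")
          # Reference time: the newest among each buffer's oldest capture, so that
          # every buffer can supply a tile at or near this instant.
          reference = max(buf[0].timestamp for buf in buffers)
          return [min(buf, key=lambda t: abs(t.timestamp - reference))
                  for buf in buffers]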

  • Overlay Real-Time Video Multicast System

    Ho Jong KANG  Hyung Rai OH  Hwangjun SONG  

     
    PAPER-Network

    Vol: E93-B No:4  Page(s): 879-888

    In this paper, we present an effective overlay real-time video multicast system over the Internet. The proposed system effectively integrates overlay multicast technology and video compression technology. Overlay multicast tree and target bit rate are determined to satisfy the given average delay constraint, and H.263+ rate control is implemented to enhance the human visual perceptual quality over the multicast tree. Finally, experimental results are provided to show the performance of the proposed overlay video multicast system over the Internet.
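
    As a loose illustration of how a target bit rate might be chosen to satisfy an average delay constraint over an overlay multicast tree, here is a Python sketch; the delay model and all names (e.g. max_bitrate_under_delay) are simplifying assumptions, not the paper's formulation:

      def max_bitrate_under_delay(path_delays_ms, frame_rate_fps, bottleneck_bps,
                                  candidate_bitrates_bps, delay_limit_ms):
          """Pick the highest candidate target bit rate whose *average* delay
          over all overlay receivers stays within delay_limit_ms.

          Simplified delay model (an assumption): per-receiver delay =
          overlay path propagation delay + transmission time of one frame
          on the bottleneck link. Returns None if no candidate is feasible."""
          best = None
          for rate in sorted(candidate_bitrates_bps):
              frame_bits = rate / frame_rate_fps
              tx_ms = frame_bits / bottleneck_bps * 1000.0
              avg_delay = sum(d + tx_ms for d in path_delays_ms) / len(path_delays_ms)
              if avg_delay <= delay_limit_ms:
                  best = rate
          return best

      # Example: three receivers, 30-fps video, 10 Mb/s bottleneck, 150 ms budget.
      print(max_bitrate_under_delay([40.0, 65.0, 90.0], 30.0, 10e6,
                                    [250e3, 500e3, 1e6, 2e6], 150.0))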

  • Design and Implementation of a Real-Time Video-Based Rendering System Using a Network Camera Array

    Yuichi TAGUCHI  Keita TAKAHASHI  Takeshi NAEMURA  

     
    PAPER-Image Processing and Video Processing

    Vol: E92-D No:7  Page(s): 1442-1452

    We present a real-time video-based rendering system using a network camera array. Our system consists of 64 commodity network cameras that are connected to a single PC through a gigabit Ethernet. To render a high-quality novel view, our system estimates a view-dependent per-pixel depth map in real time by using a layered representation. The rendering algorithm is fully implemented on the GPU, which allows our system to efficiently perform capturing and rendering processes as a pipeline by using the CPU and GPU independently. Using QVGA input video resolution, our system renders a free-viewpoint video at up to 30 frames per second, depending on the output video resolution and the number of depth layers. Experimental results show high-quality images synthesized from various scenes.
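
    A minimal sketch of the layered, view-dependent depth selection idea, in the style of a plane-sweep photo-consistency test; producing the per-layer warped camera images is outside the sketch, and the function name per_pixel_depth_layer is an illustrative assumption rather than the paper's GPU implementation:

      import numpy as np

      def per_pixel_depth_layer(warped):
          """Pick, for every pixel, the depth layer on which the cameras agree most.

          warped: array of shape (L, C, H, W, 3) holding, for each of L candidate
          depth layers, the C camera images warped onto that layer.
          Returns an (H, W) array of winning layer indices."""
          warped = np.asarray(warped, dtype=np.float64)
          # Photo-consistency cost: per-pixel colour variance across cameras,
          # summed over RGB. Lower variance means the cameras agree, so the
          # candidate depth is likely correct.
          cost = warped.var(axis=1).sum(axis=-1)   # shape (L, H, W)
          return cost.argmin(axis=0)               # shape (H, W)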

  • Practical, Real-Time, and Robust Watermarking on the Spatial Domain for High-Definition Video Contents

    Kyung-Su KIM  Hae-Yeoun LEE  Dong-Hyuck IM  Heung-Kyu LEE  

     
    PAPER-Watermarking

    Vol: E91-D No:5  Page(s): 1359-1368

    Commercial markets employ digital rights management (DRM) systems to protect valuable high-definition (HD) quality videos. A DRM system uses watermarking to provide copyright protection and ownership authentication of multimedia contents. We propose a real-time video watermarking scheme for HD video in the uncompressed domain. In particular, our approach takes a practical perspective to satisfy the perceptual-quality, real-time-processing, and robustness requirements. We simplify and optimize the human visual system mask for real-time performance and also apply a dithering technique for invisibility. Extensive experiments are performed to prove that the proposed scheme satisfies the invisibility, real-time-processing, and robustness requirements against video processing attacks. We concentrate on video processing attacks that commonly occur when HD-quality videos are prepared for display on portable devices. These attacks include not only scaling and low bit-rate encoding, but also malicious attacks such as format conversion and frame-rate change.
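
    A minimal sketch in the spirit of spatial-domain watermarking with a perceptual mask and a blind correlation detector; the simple gradient-based mask merely stands in for the paper's simplified HVS mask and dithering, and all names and parameters are illustrative assumptions:

      import numpy as np

      def embed_watermark(luma, key, strength=2.0):
          """Additively embed a keyed pseudo-random pattern into a luminance frame,
          scaled by a crude local-activity mask (a stand-in for an HVS mask).

          luma: (H, W) uint8 luminance plane of one video frame.
          key:  integer seed shared by embedder and detector."""
          rng = np.random.default_rng(key)
          pattern = rng.choice([-1.0, 1.0], size=luma.shape)   # keyed bipolar pattern
          # Local activity: gradient magnitude; textured areas tolerate stronger marks.
          gy, gx = np.gradient(luma.astype(np.float64))
          mask = 1.0 + np.hypot(gx, gy) / 255.0
          marked = luma + strength * mask * pattern
          return np.clip(np.rint(marked), 0, 255).astype(np.uint8)

      def detect_watermark(luma, key):
          """Blind correlation detector for the pattern embedded above; a clearly
          positive score suggests the watermark is present."""
          rng = np.random.default_rng(key)
          pattern = rng.choice([-1.0, 1.0], size=luma.shape)
          centred = luma.astype(np.float64) - luma.mean()
          return float((centred * pattern).mean())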

  • A VGA 30-fps Realtime Optical-Flow Processor Core for Moving Picture Recognition

    Yuichiro MURACHI  Yuki FUKUYAMA  Ryo YAMAMOTO  Junichi MIYAKOSHI  Hiroshi KAWAGUCHI  Hajime ISHIHARA  Masayuki MIYAMA  Yoshio MATSUDA  Masahiko YOSHIMOTO  

     
    PAPER

    Vol: E91-C No:4  Page(s): 457-464

    This paper describes an optical-flow processor core for real-time video recognition. The processor is based on the Pyramidal Lucas and Kanade (PLK) algorithm. It features a smaller chip area, higher pixel rate, and higher accuracy than conventional optical-flow processors. Introducing a search-range limitation and a Kalman filter into the original PLK algorithm improves the optical-flow accuracy and reduces the processor hardware cost. Furthermore, window interleaving and window overlap methods reduce the necessary clock frequency of the processor by 70%, allowing low-power operation. We first verified the PLK algorithm and architecture with a prototype FPGA implementation. Then, we designed a VLSI processor that can handle a VGA 30-fps image sequence at a clock frequency of 332 MHz. The core size and power consumption are estimated at 3.50 × 3.00 mm2 and 600 mW, respectively, in a 90-nm process technology.
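
    For orientation, the following sketch shows the basic Lucas-Kanade least-squares step that the pyramidal (PLK) algorithm applies coarse-to-fine at each level; it is a software illustration only, not the processor's architecture, and the window handling is simplified (the window is assumed to lie fully inside the frame):

      import numpy as np

      def lk_flow_for_window(prev, curr, x, y, half=7):
          """Estimate the optical-flow vector (u, v) for one window centred at
          (x, y). prev and curr are float grayscale frames of the same size."""
          win = (slice(y - half, y + half + 1), slice(x - half, x + half + 1))
          iy, ix = np.gradient(prev)             # spatial gradients (rows, cols)
          it = curr - prev                       # temporal gradient
          Ix, Iy, It = ix[win].ravel(), iy[win].ravel(), it[win].ravel()
          A = np.stack([Ix, Iy], axis=1)
          # Solve A [u v]^T = -It in the least-squares sense.
          flow, *_ = np.linalg.lstsq(A, -It, rcond=None)
          return flow                            # displacement in pixels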

  • A Buffer Occupancy-Based Adaptive Flow Control and Recovery Scheme for Real-Time Stored MPEG Video Transport over Internet

    Yeali S. SUN  Fu-Ming TSOU  Meng Chang CHEN  

     
    PAPER-Media Management

    Vol: E81-B No:11  Page(s): 1974-1987

    As the current Internet becomes popular for information access, demand for real-time display and playback of continuous media is ever increasing. The applications include real-time audio/video clips embedded in the WWW, electronic commerce, and video-on-demand. In this paper, we present a new control protocol, R3CP, for real-time applications that transmit stored MPEG video streams over a lossy, best-effort network environment like the Internet. Several control mechanisms are used: a) packet framing based on the meta data; b) an adaptive queue-length-based rate control scheme; c) data preloading; and d) look-ahead pre-retransmission for lost packet recovery. Unlike many adaptive rate control schemes proposed in the past, the proposed flow control ensures continuous, periodic playback of video frames by keeping the receiver buffer queue length at a target value, minimizing the probability that the player finds an empty buffer. Contrary to the widespread belief that "retransmission of lost packets is unnecessary for real-time applications," we show that combining look-ahead pre-retransmission control with proper data preloading and an adaptive rate control scheme effectively improves real-time playback performance. The performance of the proposed protocol is studied via simulation using actual video traces and actual delay traces collected from the Internet. The simulation results show that R3CP can significantly improve frame playback performance, especially for transmission paths with poor packet delivery conditions.
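
    A minimal sketch of queue-length-based rate adaptation in the spirit of the scheme above: the sender nudges its rate so the receiver's playout buffer converges to a target occupancy (the gain, bounds, and names are illustrative assumptions, not R3CP's actual control law):

      def adapt_send_rate(current_rate_bps, queue_len_frames, target_frames,
                          gain=0.1, min_rate_bps=64e3, max_rate_bps=4e6):
          """Proportional adjustment of the sending rate based on how far the
          receiver buffer occupancy is from its target."""
          error = target_frames - queue_len_frames          # positive: buffer too low
          new_rate = current_rate_bps * (1.0 + gain * error / max(target_frames, 1))
          return min(max(new_rate, min_rate_bps), max_rate_bps)

      # Example: the buffer holds 6 frames against a target of 10, so the sender speeds up.
      print(adapt_send_rate(1_000_000, queue_len_frames=6, target_frames=10))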

  • Effective Algorithms for Multicast Video Transport to Meet Various QoS Requirements

    Kentarou FUKUDA  Naoki WAKAMIYA  Masayuki MURATA  Hideo MIYAHARA  

     
    PAPER-Multicasting

    Vol: E81-B No:8  Page(s): 1599-1607

    In this paper, we propose flow aggregation algorithms for multicast video transport. Because of the heterogeneity of network/client environments and users' preferences regarding perceived video quality, various QoS requirements must be guaranteed simultaneously even for a single video source in a multicast connection. It is easy but ineffective to provide many video streams, one per user request. Our flow aggregation algorithm merges clients' similar QoS requirements into a single QoS requirement, which decreases the number of video streams the video server must prepare. The total required bandwidth can then be reduced by sharing the same video stream among a number of clients. Our flow aggregation algorithm has two variants, suitable for sender-initiated and receiver-initiated multicast connections, respectively. The proposed algorithms are evaluated and compared through simulation. We show that server-initiated flow aggregation (an ideal case in our approach) is the most effective, but receiver-initiated flow aggregation also offers a reasonably effective mechanism.
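
    A minimal sketch of the aggregation idea: sort the clients' requested rates and greedily merge requests whose spread stays within a tolerance into one stream, served at the group's highest rate so no member is under-served (the grouping rule, tolerance, and names are illustrative assumptions, not the paper's exact algorithm):

      def aggregate_rate_requests(requested_kbps, tolerance_kbps=128):
          """Greedy flow aggregation sketch.
          Returns a list of (stream_rate_kbps, member_requests) tuples."""
          streams = []
          group = []
          for rate in sorted(requested_kbps):
              if group and rate - group[0] > tolerance_kbps:
                  streams.append((group[-1], list(group)))   # serve the group at its max
                  group = []
              group.append(rate)
          if group:
              streams.append((group[-1], list(group)))
          return streams

      # Example: seven client requests collapse into three streams.
      print(aggregate_rate_requests([300, 350, 420, 800, 820, 1500, 1600]))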