
Keyword Search Results

[Keyword] real-time image processing (4 hits)

Showing 1-4 of 4
  • Real-Time Image Processing Based on Service Function Chaining Using CPU-FPGA Architecture

    Yuta UKON  Koji YAMAZAKI  Koyo NITTA  

     
    PAPER-Network System

    Publicized: 2019/08/05
    Vol: E103-B No:1
    Page(s): 11-19

    Advanced information-processing services based on cloud computing are in great demand. However, users want to be able to customize cloud services for their own purposes. To provide image-processing services that can be optimized for each user's purpose, we propose a technique for chaining image-processing functions in a CPU-field-programmable gate array (FPGA) coupled server architecture. One of the most important requirements for combining multiple image-processing functions on a network is low latency in the server nodes. However, a large delay occurs in the conventional CPU-FPGA architecture due to the overheads of packet reordering, which ensures the correctness of image processing, and of application-level data transfer between the CPU and FPGA. This paper presents a CPU-FPGA server architecture with a real-time packet-reordering circuit for low-latency image processing. To confirm the efficiency of our idea, we evaluated the latency of histogram of oriented gradients (HOG) feature calculation as an offloaded image-processing function. The results show that the latency is about 26 times lower than that of the conventional CPU-FPGA architecture. Moreover, the throughput decreased by less than 3.7% under the worst-case condition, in which 90% of the packets are randomly swapped at a 40-Gbps input rate. Finally, we demonstrated that a real-time video-monitoring service can be provided by combining image-processing functions using our architecture.
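The central low-latency mechanism in this entry is the in-FPGA packet-reordering circuit. Below is a minimal software sketch of the reordering idea only, assuming packets carry a gap-free sequence number; the (seq, payload) format and buffering policy are illustrative assumptions, not the paper's hardware design.

```python
# Software sketch of in-order delivery by sequence number (the paper realizes
# this as a real-time reordering circuit inside the FPGA). The (seq, payload)
# tuple format is an assumption for illustration, not the paper's packet format.
import heapq

def reorder_stream(packets):
    """Yield payloads in sequence order from an out-of-order packet stream.

    Assumes sequence numbers start at 0 with no gaps (loss is not handled).
    """
    next_seq = 0
    pending = []  # min-heap of (seq, payload) waiting for earlier packets
    for seq, payload in packets:
        heapq.heappush(pending, (seq, payload))
        # Release every buffered packet whose turn has come.
        while pending and pending[0][0] == next_seq:
            _, ready = heapq.heappop(pending)
            next_seq += 1
            yield ready

# Example: packets arrive swapped but payloads come out in order p0..p4.
arrived = [(0, "p0"), (2, "p2"), (1, "p1"), (4, "p4"), (3, "p3")]
print(list(reorder_stream(arrived)))  # ['p0', 'p1', 'p2', 'p3', 'p4']
```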

  • Boundary-Active-Only Adaptive Power-Reduction Scheme for Region-Growing Video-Segmentation

    Takashi MORIMOTO  Hidekazu ADACHI  Osamu KIRIYAMA  Tetsushi KOIDE  Hans Jurgen MATTAUSCH  

     
    LETTER-Image Processing and Video Processing

    Vol: E89-D No:3
    Page(s): 1299-1302

    This letter presents a boundary-active-only (BAO) power-reduction technique for cell-network-based region-growing video segmentation. The key idea is adaptive, situation-dependent power switching of each network cell: only cells at the boundary of the currently growing regions are activated, while all other cells are kept in a low-power stand-by mode. The effectiveness of the proposed technique is experimentally confirmed with CMOS test chips containing small-scale cell networks of up to 4133 cells, where on average only 1.7% of the cells remain active when the proposed approach is applied. About 85% power reduction is thus achievable without sacrificing real-time processing.
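The power saving comes from letting only boundary cells do any work. The following is a minimal software sketch of that frontier-only region-growing idea, assuming a 4-neighbourhood and a simple intensity threshold; it illustrates the algorithmic behaviour, not the chip's cell-network circuitry.

```python
# Frontier-only region growing: per step, only the cells on the current region
# boundary are "active"; interior cells are never revisited, mirroring how the
# BAO scheme keeps them in stand-by. Threshold and 4-neighbourhood are assumptions.
import numpy as np

def grow_region(image, seed, tol=10):
    """Grow a region from `seed` (row, col): a 4-neighbour joins the region
    if its intensity differs from the seed intensity by at most `tol`."""
    h, w = image.shape
    in_region = np.zeros((h, w), dtype=bool)
    in_region[seed] = True
    seed_val = int(image[seed])
    frontier = [seed]                  # only these "cells" do work each step
    while frontier:
        new_frontier = []
        for r, c in frontier:
            for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                nr, nc = r + dr, c + dc
                if (0 <= nr < h and 0 <= nc < w and not in_region[nr, nc]
                        and abs(int(image[nr, nc]) - seed_val) <= tol):
                    in_region[nr, nc] = True
                    new_frontier.append((nr, nc))
        frontier = new_frontier        # interior cells stay dormant
    return in_region

img = np.zeros((8, 8), dtype=np.uint8)
img[2:6, 2:6] = 200                    # bright square on a dark background
print(grow_region(img, (3, 3)).sum())  # 16: only the square is grown
```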

  • High Speed 3D Reconstruction by Spatio-Temporal Division of Video Image Processing

    Yoshinari KAMEDA  Takeo TAODA  Michihiko MINOH  

     
    PAPER

    Vol: E83-D No:7
    Page(s): 1422-1428

    A high-speed 3D shape reconstruction method using multiple video cameras and multiple computers on a LAN is presented. The video cameras are set up to surround the real 3D space in which people are present. The reconstructed 3D space is displayed in voxel format, and users can view it from any viewpoint with a VR viewer. We implemented a prototype system that performs the 3D reconstruction at 10.55 fps with a delay of 313 ms.
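The abstract does not spell out the reconstruction algorithm; a common way to obtain such a voxel-format result is volume intersection (shape from silhouette), sketched below with toy orthographic views along the coordinate axes. This is only an illustrative stand-in: the actual system uses calibrated cameras surrounding the scene and divides the work spatio-temporally across computers on the LAN, which the sketch does not model.

```python
# Sketch of volume intersection (shape from silhouette) producing a voxel model.
# "Cameras" here are orthographic views along the coordinate axes for brevity;
# the real system uses calibrated perspective cameras and LAN-distributed
# processing, neither of which is modelled here.
import numpy as np

N = 16                                  # voxel grid resolution
true_shape = np.zeros((N, N, N), dtype=bool)
true_shape[4:12, 6:10, 5:11] = True     # toy box-shaped object

# Binary silhouettes seen along the z, y and x axes.
sil_z = true_shape.any(axis=2)          # indexed by (x, y)
sil_y = true_shape.any(axis=1)          # indexed by (x, z)
sil_x = true_shape.any(axis=0)          # indexed by (y, z)

# A voxel is kept only if its projection lies inside every silhouette.
x, y, z = np.indices((N, N, N))
hull = sil_z[x, y] & sil_y[x, z] & sil_x[y, z]

print(hull.sum(), true_shape.sum())     # 192 192 (exact here: the object is a box)
```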

  • An Integration Algorithm for Stereo, Motion and Color in Real-Time Applications

    Hiroshi ARAKAWA  Minoru ETOH  

     
    PAPER

    Vol: E78-D No:12
    Page(s): 1615-1620

    This paper describes a statistical integration algorithm for color, motion, and stereo disparity, and introduces a real-time stereo system that can tell where and what objects are moving. In the integration algorithm, motion estimation and depth estimation are performed simultaneously by a clustering process based on motion, stereo disparity, color, and pixel position. As a result of the clustering, an image is decomposed into region fragments. Each fragment is characterized by distribution parameters of spatiotemporal intensity gradients, stereo disparity, color, and pixel position. Motion vectors and stereo disparities for each fragment are obtained from these distribution parameters, and the real-time stereo system tracks the objects over frames using them. The implementation and experiments show that the proposed algorithm can be used in real-time applications such as surveillance and human-computer interaction.
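A minimal sketch of the feature-space clustering idea: each pixel is described by position, color, motion, and disparity, and pixels are grouped into region fragments by clustering that joint vector. Plain k-means on synthetic data is used here as a simplified stand-in for the paper's statistical clustering with distribution parameters; the feature scaling weights are assumptions.

```python
# Sketch of clustering pixels on a joint (position, color, motion, disparity)
# feature vector to obtain region fragments. Plain k-means on synthetic data
# stands in for the paper's statistical clustering; the feature scaling
# weights below are assumptions.
import numpy as np

rng = np.random.default_rng(0)
H, W = 32, 32
ys, xs = np.mgrid[0:H, 0:W]

# Toy per-pixel observations: a bright, moving, near object fills the left
# half of the frame; the right half is a dark, static, far background.
moving = xs < W // 2
color = np.where(moving, 200.0, 50.0) + rng.normal(0, 5, (H, W))
motion = np.where(moving, 3.0, 0.0) + rng.normal(0, 0.2, (H, W))
disparity = np.where(moving, 20.0, 2.0) + rng.normal(0, 0.5, (H, W))

# One feature vector per pixel; the divisors roughly balance the units.
feats = np.stack([xs / W, ys / H, color / 255, motion / 5, disparity / 30],
                 axis=-1).reshape(-1, 5)

def kmeans(X, k, iters=20):
    """Very small k-means: returns a cluster label for every row of X."""
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers) ** 2).sum(-1), axis=1)
        centers = np.stack([X[labels == j].mean(0) for j in range(k)])
    return labels

fragments = kmeans(feats, k=2).reshape(H, W)
# The two halves of the frame end up in different fragments.
print(np.unique(fragments[:, :W // 2]), np.unique(fragments[:, W // 2:]))
```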