
Keyword Search Result

[Keyword] style transfer (3 hits)

Results 1-3 of 3
  • FSAMT: Face Shape Adaptive Makeup Transfer (Open Access)

    Haoran LUO  Tengfei SHAO  Shenglei LI  Reiko HISHIYAMA  

     
    PAPER-Image Recognition, Computer Vision

    Publicized: 2024/04/02
    Vol: E107-D No:8
    Page(s): 1059-1069

    Makeup transfer is the process of applying the makeup style of one picture (the reference) to another (the source), allowing a character's makeup style to be modified. To meet diverse individual makeup needs, a makeup transfer framework should accurately handle makeup of varying intensity, from subtle to bold, and adapt intelligently to the source face. This paper introduces a “3-level” adaptive makeup transfer framework that addresses facial makeup through two sub-tasks: 1. makeup adaptation, which uses feature descriptors and eyelid-curve algorithms to classify 135 organ-level face shapes; 2. makeup transfer, which learns the reference picture through three branches (color, highlight, pattern) and applies it to the source picture. The proposed framework, termed “Face Shape Adaptive Makeup Transfer” (FSAMT), demonstrates superior makeup transfer output quality, as confirmed by experimental results.
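
    The two sub-tasks above can be illustrated with a minimal sketch of the transfer step. Everything below (module names, feature shapes, and the residual fusion of the three branch outputs) is an assumption for illustration only; the abstract does not specify FSAMT's actual architecture.

```python
# Hypothetical sketch of a three-branch transfer step (color /
# highlight / pattern) as described in the abstract. All names,
# shapes, and the fusion-by-addition choice are assumptions.
import torch
import torch.nn as nn

class ThreeBranchTransfer(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        # One lightweight branch per makeup component (hypothetical).
        self.color = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.highlight = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.pattern = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, source_feat, ref_feat):
        # Extract each makeup component from the reference features and
        # apply it to the source features by residual addition.
        makeup = self.color(ref_feat) + self.highlight(ref_feat) + self.pattern(ref_feat)
        return source_feat + makeup

src = torch.randn(1, 64, 32, 32)  # source-image features
ref = torch.randn(1, 64, 32, 32)  # reference-image features
print(ThreeBranchTransfer()(src, ref).shape)  # torch.Size([1, 64, 32, 32])
```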

  • Multi-Style Shape Matching GAN for Text Images (Open Access)

    Honghui YUAN  Keiji YANAI  

     
    PAPER

    Publicized: 2023/12/27
    Vol: E107-D No:4
    Page(s): 505-514

    Deep learning techniques can transform the style of an image and produce diverse stylized results. In the field of text style transformation, many previous studies have attempted to generate stylized text using deep learning networks. However, to achieve multiple style transformations for text images, these methods either require training multiple networks or cannot be guided by style images. In this study, we therefore focus on multi-style transformation of text images in which style images guide the generation of results. We propose a multiple-style transformation network for text style transfer, which we refer to as the Multi-Style Shape Matching GAN (Multi-Style SMGAN). The proposed method generates multiple styles of text images with a single model trained only once, and allows users to control the text style according to style images. Conditions are introduced into the network so that all styles can be distinguished effectively and the generation of each styled text can be controlled by these conditions, and the network is optimized so that the conditional information is transmitted effectively throughout it. The proposed method was evaluated experimentally on a large number of text images, and the results show that the trained model can generate multiple styles of text in real time according to the style image. In addition, the results of a user study indicate that the proposed method produces higher-quality results than existing methods.
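
    The single-model, condition-controlled generation described above can be sketched as a conditional generator in which a style index selects the output style. The embedding-and-concatenation scheme and all names below are assumptions for illustration, not the paper's actual Multi-Style SMGAN.

```python
# Hypothetical sketch of condition-controlled generation: one network,
# a style index chooses the output style. Names and the conditioning
# scheme are assumptions; the abstract gives no implementation details.
import torch
import torch.nn as nn

class ConditionalGenerator(nn.Module):
    def __init__(self, num_styles: int, channels: int = 64):
        super().__init__()
        self.style_embed = nn.Embedding(num_styles, channels)
        self.body = nn.Sequential(
            nn.Conv2d(channels * 2, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, text_feat, style_id):
        b, c, h, w = text_feat.shape
        # Broadcast the style embedding over the spatial grid and
        # concatenate it with the text-image features.
        cond = self.style_embed(style_id).view(b, c, 1, 1).expand(b, c, h, w)
        return self.body(torch.cat([text_feat, cond], dim=1))

gen = ConditionalGenerator(num_styles=4)
text = torch.randn(2, 64, 32, 32)   # features of two text images
styles = torch.tensor([0, 3])       # a different style per sample
print(gen(text, styles).shape)      # torch.Size([2, 64, 32, 32])
```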

  • Domain Adaptive Cross-Modal Image Retrieval via Modality and Domain Translations

    Rintaro YANAGI  Ren TOGO  Takahiro OGAWA  Miki HASEYAMA  

     
    PAPER

    Publicized: 2020/11/30
    Vol: E104-A No:6
    Page(s): 866-875

    Various cross-modal retrieval methods that can retrieve images related to a query sentence without text annotations have been proposed. Although these methods achieve a high level of retrieval performance, they were developed for a single-domain retrieval setting; when the candidate images come from various domains, their retrieval performance may degrade. To deal with this problem, we propose a new domain adaptive cross-modal retrieval method. By translating the modality of the query and the domains of the query and candidate images, our method can accurately retrieve the desired images in a cross-domain retrieval setting. Experimental results on clipart and painting datasets show that the proposed method achieves better retrieval performance than conventional and state-of-the-art methods.
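
    The retrieval step described above (ranking translated candidates against a translated query) can be sketched as cosine-similarity ranking in a shared embedding space. The translation components and all names below are assumptions standing in for modules the abstract names but does not specify.

```python
# Minimal retrieval sketch under assumptions: a shared embedding space
# and cosine-similarity ranking. The modality/domain translation steps
# are represented only by pre-translated embeddings.
import torch
import torch.nn.functional as F

def retrieve(query_emb, candidate_embs, top_k: int = 5):
    """Rank candidate images by cosine similarity to the query."""
    q = F.normalize(query_emb, dim=-1)
    c = F.normalize(candidate_embs, dim=-1)
    scores = c @ q                          # one score per candidate
    return torch.topk(scores, k=min(top_k, c.shape[0]))

# Hypothetical pipeline: the query sentence has been translated into
# the image modality, and the candidates into the query's domain.
query_emb = torch.randn(128)                # modality-translated query
candidates = torch.randn(1000, 128)         # domain-translated candidates
scores, indices = retrieve(query_emb, candidates)
print(indices.tolist())                     # ids of the top-5 images
```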