Visual question answering (VQA) is the task of answering a visual question, i.e., a pair consisting of a question and an image. Some visual questions are ambiguous while others are clear, and it may be desirable to adjust the ambiguity of a question depending on the situation. However, no prior work has addressed this issue. We propose a novel task: rephrasing visual questions while controlling their ambiguity. We define the ambiguity of a visual question as the entropy of the answer distribution predicted by a VQA model. Given an image and a source question, the proposed model rephrases the question so that the rephrased question has the ambiguity (entropy) specified by the user. We propose two learning strategies for training the model on the VQA v2 dataset, which contains no ambiguity annotations. We demonstrate that our approach can control the ambiguity of the rephrased questions, and we report an interesting observation: it is harder to increase ambiguity than to reduce it.
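The ambiguity measure used here is the Shannon entropy of the answer distribution that a VQA model predicts for a given image-question pair. As a minimal illustrative sketch (not the authors' implementation; the function and the example distributions below are hypothetical), the score could be computed as follows in Python:

import numpy as np

def answer_entropy(answer_probs, eps=1e-12):
    # Shannon entropy (in nats) of a VQA model's predicted answer distribution.
    # High entropy means the model is uncertain about the answer, which the
    # paper uses as a proxy for the ambiguity of the visual question.
    p = np.asarray(answer_probs, dtype=np.float64)
    p = p / p.sum()  # ensure a valid probability distribution
    return float(-np.sum(p * np.log(p + eps)))

# Hypothetical distributions over four candidate answers:
clear_question = [0.94, 0.03, 0.02, 0.01]   # one dominant answer -> low entropy (~0.29)
vague_question = [0.30, 0.28, 0.22, 0.20]   # answers spread out  -> high entropy (~1.37)
print(answer_entropy(clear_question))
print(answer_entropy(vague_question))

A rephrasing model trained for this task would then be asked to produce a question whose predicted answer distribution has an entropy close to a user-specified target value.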
Kento TERAO
Hiroshima University
Toru TAMAKI
Hiroshima University
Bisser RAYTCHEV
Hiroshima University
Kazufumi KANEDA
Hiroshima University
Shin'ichi SATOH
National Institute of Informatics
Kento TERAO, Toru TAMAKI, Bisser RAYTCHEV, Kazufumi KANEDA, Shin'ichi SATOH, "Rephrasing Visual Questions by Specifying the Entropy of the Answer Distribution" in IEICE TRANSACTIONS on Information,
vol. E103-D, no. 11, pp. 2362-2370, November 2020, doi: 10.1587/transinf.2020EDP7089.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.2020EDP7089/_p
@ARTICLE{e103-d_11_2362,
author={Kento TERAO and Toru TAMAKI and Bisser RAYTCHEV and Kazufumi KANEDA and Shin'ichi SATOH},
journal={IEICE TRANSACTIONS on Information},
title={Rephrasing Visual Questions by Specifying the Entropy of the Answer Distribution},
year={2020},
volume={E103-D},
number={11},
pages={2362-2370},
keywords={},
doi={10.1587/transinf.2020EDP7089},
ISSN={1745-1361},
month={November},}
TY - JOUR
TI - Rephrasing Visual Questions by Specifying the Entropy of the Answer Distribution
T2 - IEICE TRANSACTIONS on Information
SP - 2362
EP - 2370
AU - Kento TERAO
AU - Toru TAMAKI
AU - Bisser RAYTCHEV
AU - Kazufumi KANEDA
AU - Shin'ichi SATOH
PY - 2020
DO - 10.1587/transinf.2020EDP7089
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E103-D
IS - 11
JA - IEICE TRANSACTIONS on Information
Y1 - November 2020
ER -