In this paper, a novel framework for extracting visual feature-based keyword relationships from an image database is proposed. Based on the observation that a set of relevant keywords tends to share common visual features, the keyword relationships in a target image database are extracted in the following two steps. First, the relationship between each keyword and its corresponding visual features is modeled by a classifier. This step enables detection of the visual features related to each keyword. In the second step, the keyword relationships are extracted from the obtained results. Specifically, in order to measure the relevance between two keywords, the proposed method removes the visual features related to one keyword from the training images and monitors the performance of the classifier obtained for the other keyword. This measurement is the key difference from conventional methods, which focus only on keyword co-occurrences or visual similarities. Results of experiments conducted using an image database showed the effectiveness of the proposed method.
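The two-step procedure described in the abstract can be sketched as follows. This is a minimal illustration under stated assumptions, not the paper's implementation: it uses a plain logistic-regression classifier (the abstract does not specify the classifier), models "removing visual features related to one keyword" as zeroing the feature dimensions with the largest classifier weights for that keyword, and runs on synthetic data; all names and parameters are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, lr=0.1, epochs=200):
    """Logistic-regression stand-in for the per-keyword classifier (step 1)."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == (y == 1)))

def keyword_relevance(X, y_a, y_b, top_k=2):
    """Step 2 sketch: drop in keyword B's classifier accuracy after
    suppressing the visual features most strongly tied to keyword A."""
    w_a, _ = train_logreg(X, y_a)                    # step 1: model keyword A
    w_b, b_b = train_logreg(X, y_b)                  # step 1: model keyword B
    base = accuracy(w_b, b_b, X, y_b)
    related = np.argsort(-np.abs(w_a))[:top_k]       # features salient for A
    X_removed = X.copy()
    X_removed[:, related] = 0.0                      # "remove" A's features
    w_r, b_r = train_logreg(X_removed, y_b)
    # Larger performance drop => stronger relationship between A and B.
    return base - accuracy(w_r, b_r, X_removed, y_b)

# Toy data: keywords A and B depend on the same two feature dimensions,
# while keyword C depends on a disjoint one, so removing A's features
# should degrade B's classifier far more than C's.
X = rng.normal(size=(200, 4))
y_a = (X[:, 0] + X[:, 1] > 0.0).astype(float)
y_b = (X[:, 0] + X[:, 1] > 0.2).astype(float)
y_c = (X[:, 3] > 0.0).astype(float)

rel_ab = keyword_relevance(X, y_a, y_b)
rel_ac = keyword_relevance(X, y_a, y_c)
```

On the toy data, `rel_ab` should come out clearly larger than `rel_ac`, mirroring the abstract's intuition that relevant keywords share visual features while unrelated ones do not.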
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Marie KATSURAI, Takahiro OGAWA, Miki HASEYAMA, "A Novel Framework for Extracting Visual Feature-Based Keyword Relationships from an Image Database" in IEICE TRANSACTIONS on Fundamentals, vol. E95-A, no. 5, pp. 927-937, May 2012, doi: 10.1587/transfun.E95.A.927.
URL: https://global.ieice.org/en_transactions/fundamentals/10.1587/transfun.E95.A.927/_p
@ARTICLE{e95-a_5_927,
author={Marie KATSURAI and Takahiro OGAWA and Miki HASEYAMA},
journal={IEICE TRANSACTIONS on Fundamentals},
title={A Novel Framework for Extracting Visual Feature-Based Keyword Relationships from an Image Database},
year={2012},
volume={E95-A},
number={5},
pages={927-937},
abstract={In this paper, a novel framework for extracting visual feature-based keyword relationships from an image database is proposed. From the characteristic that a set of relevant keywords tends to have common visual features, the keyword relationships in a target image database are extracted by using the following two steps. First, the relationship between each keyword and its corresponding visual features is modeled by using a classifier. This step enables detection of visual features related to each keyword. In the second step, the keyword relationships are extracted from the obtained results. Specifically, in order to measure the relevance between two keywords, the proposed method removes visual features related to one keyword from training images and monitors the performance of the classifier obtained for the other keyword. This measurement is the biggest difference from other conventional methods that focus on only keyword co-occurrences or visual similarities. Results of experiments conducted using an image database showed the effectiveness of the proposed method.},
doi={10.1587/transfun.E95.A.927},
ISSN={1745-1337},
month={May},
}
TY - JOUR
TI - A Novel Framework for Extracting Visual Feature-Based Keyword Relationships from an Image Database
T2 - IEICE TRANSACTIONS on Fundamentals
SP - 927
EP - 937
AU - Marie KATSURAI
AU - Takahiro OGAWA
AU - Miki HASEYAMA
PY - 2012
DO - 10.1587/transfun.E95.A.927
JO - IEICE TRANSACTIONS on Fundamentals
SN - 1745-1337
VL - E95-A
IS - 5
JA - IEICE TRANSACTIONS on Fundamentals
Y1 - May 2012
AB - In this paper, a novel framework for extracting visual feature-based keyword relationships from an image database is proposed. From the characteristic that a set of relevant keywords tends to have common visual features, the keyword relationships in a target image database are extracted by using the following two steps. First, the relationship between each keyword and its corresponding visual features is modeled by using a classifier. This step enables detection of visual features related to each keyword. In the second step, the keyword relationships are extracted from the obtained results. Specifically, in order to measure the relevance between two keywords, the proposed method removes visual features related to one keyword from training images and monitors the performance of the classifier obtained for the other keyword. This measurement is the biggest difference from other conventional methods that focus on only keyword co-occurrences or visual similarities. Results of experiments conducted using an image database showed the effectiveness of the proposed method.
ER -