We extend the Nonparametric Discriminant Analysis (NDA) algorithm to a semi-supervised dimensionality reduction technique called Semi-supervised Nonparametric Discriminant Analysis (SNDA). SNDA preserves the inherent advantage of NDA, namely that it relaxes the Gaussian assumption required by traditional LDA-based methods. SNDA combines the discriminating power of NDA with the locality-preserving power of manifold learning. Specifically, the labeled data points are used to maximize the separability between different classes, while both the labeled and unlabeled data points are used to build a graph that incorporates the neighborhood information of the data set. Experiments on synthetic as well as real datasets demonstrate the effectiveness of the proposed approach.
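The approach described in the abstract can be sketched as follows. This is an illustrative approximation under stated assumptions, not the authors' exact formulation: the helper names (`knn_graph`, `snda_sketch`) and the mixing parameter `alpha` are introduced here for illustration. Labeled points supply a nonparametric between-class scatter (each labeled point paired with its nearest neighbor from a different class, avoiding any Gaussian assumption), while a k-nearest-neighbor graph Laplacian over all points, labeled and unlabeled, preserves locality; the projection directions come from a generalized eigenproblem.

```python
import numpy as np

def knn_graph(X, k=3):
    """Symmetric 0/1 k-nearest-neighbor adjacency over all points."""
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.zeros((n, n))
    for i in range(n):
        idx = np.argsort(d2[i])[1:k + 1]  # skip the point itself
        W[i, idx] = 1.0
    return np.maximum(W, W.T)  # symmetrize

def snda_sketch(X, y, labeled, k=3, alpha=0.5, dim=1):
    """SNDA-style sketch (assumed formulation, for illustration only).

    X: (n, d) data matrix; y: labels (only entries indexed by `labeled`
    are trusted); `labeled`: indices of labeled points.
    """
    n, d = X.shape
    # Nonparametric between-class scatter: for each labeled point, use its
    # nearest neighbor from a different class (no Gaussian assumption).
    Sb = np.zeros((d, d))
    for i in labeled:
        other = [j for j in labeled if y[j] != y[i]]
        j = min(other, key=lambda j: np.linalg.norm(X[i] - X[j]))
        diff = (X[i] - X[j])[:, None]
        Sb += diff @ diff.T
    # Graph Laplacian over labeled + unlabeled points preserves locality.
    W = knn_graph(X, k)
    L = np.diag(W.sum(1)) - W
    # Denominator term: locality penalty plus a small ridge for stability.
    St = alpha * (X.T @ L @ X) + (1 - alpha) * np.eye(d)
    # Generalized eigenproblem Sb v = lambda St v; keep top `dim` directions.
    vals, vecs = np.linalg.eig(np.linalg.solve(St, Sb))
    order = np.argsort(-vals.real)
    return vecs.real[:, order[:dim]]
```

For example, projecting two well-separated clusters (with some points left unlabeled) onto `dim=1` recovers the direction along which the classes differ, while the Laplacian term discourages directions that tear apart local neighborhoods.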
The copyright of the original papers published on this site belongs to IEICE. Unauthorized use of the original or translated papers is prohibited. See IEICE Provisions on Copyright for details.
Xianglei XING, Sidan DU, Hua JIANG, "Semi-Supervised Nonparametric Discriminant Analysis" in IEICE TRANSACTIONS on Information,
vol. E96-D, no. 2, pp. 375-378, February 2013, doi: 10.1587/transinf.E96.D.375.
URL: https://global.ieice.org/en_transactions/information/10.1587/transinf.E96.D.375/_p
@ARTICLE{e96-d_2_375,
author={Xianglei XING and Sidan DU and Hua JIANG},
journal={IEICE TRANSACTIONS on Information},
title={Semi-Supervised Nonparametric Discriminant Analysis},
year={2013},
volume={E96-D},
number={2},
pages={375--378},
abstract={We extend the Nonparametric Discriminant Analysis (NDA) algorithm to a semi-supervised dimensionality reduction technique called Semi-supervised Nonparametric Discriminant Analysis (SNDA). SNDA preserves the inherent advantage of NDA, namely that it relaxes the Gaussian assumption required by traditional LDA-based methods. SNDA combines the discriminating power of NDA with the locality-preserving power of manifold learning. Specifically, the labeled data points are used to maximize the separability between different classes, while both the labeled and unlabeled data points are used to build a graph that incorporates the neighborhood information of the data set. Experiments on synthetic as well as real datasets demonstrate the effectiveness of the proposed approach.},
keywords={},
doi={10.1587/transinf.E96.D.375},
ISSN={1745-1361},
month={February}
}
TY - JOUR
TI - Semi-Supervised Nonparametric Discriminant Analysis
T2 - IEICE TRANSACTIONS on Information
SP - 375
EP - 378
AU - Xianglei XING
AU - Sidan DU
AU - Hua JIANG
PY - 2013
DO - 10.1587/transinf.E96.D.375
JO - IEICE TRANSACTIONS on Information
SN - 1745-1361
VL - E96-D
IS - 2
JA - IEICE TRANSACTIONS on Information
Y1 - 2013/02//
AB - We extend the Nonparametric Discriminant Analysis (NDA) algorithm to a semi-supervised dimensionality reduction technique called Semi-supervised Nonparametric Discriminant Analysis (SNDA). SNDA preserves the inherent advantage of NDA, namely that it relaxes the Gaussian assumption required by traditional LDA-based methods. SNDA combines the discriminating power of NDA with the locality-preserving power of manifold learning. Specifically, the labeled data points are used to maximize the separability between different classes, while both the labeled and unlabeled data points are used to build a graph that incorporates the neighborhood information of the data set. Experiments on synthetic as well as real datasets demonstrate the effectiveness of the proposed approach.
ER -