[YRC17] Fully convolutional network with superpixel parsing for fashion Web image segmentation

Refereed international conference: International Conference on Multimedia Modeling (MMM2017), January 2017, Vol. 10132, pp. 139-151, Series LNCS, Reykjavik, Iceland (DOI: 10.1007/978-3-319-51811-4_12)

Keywords: Fully convolutional network, deep learning, superpixel, fashion

Abstract: In this paper we introduce a new method for extracting deformable clothing items from still images by extending the output of a Fully Convolutional Neural Network (FCN) to infer context from local units (superpixels). To achieve this we optimize, over the space of all possible pixel labelings, an energy function that combines the large-scale structure of the image with the local low-level visual descriptions of superpixels. To assess our method we compare it to the unmodified FCN network used as a baseline, as well as to the well-known Paper Doll and Co-parsing methods for fashion images.
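The combination described in the abstract can be illustrated with a minimal toy sketch. This is an assumption, not the paper's actual formulation or solver: unary costs are taken as FCN class scores averaged over each superpixel, the pairwise term is a Potts-style penalty weighted by a hypothetical visual-similarity score between adjacent superpixels, and the energy is minimized greedily with iterated conditional modes.

```python
def segment(unary, adjacency, similarity, lam=1.0, iters=10):
    """Toy superpixel labeling by greedy energy minimization (ICM).

    unary[s][c]      -- cost of assigning class c to superpixel s
                        (e.g. negative log of the averaged FCN probability)
    adjacency[s]     -- list of superpixels adjacent to s
    similarity[(s,t)] -- visual similarity weight for the adjacent pair (s < t)
    lam              -- trade-off between unary and pairwise terms
    Returns one class label per superpixel.
    """
    n = len(unary)
    # Initialize each superpixel with its cheapest unary label (FCN alone).
    labels = [min(range(len(unary[s])), key=lambda c: unary[s][c])
              for s in range(n)]
    for _ in range(iters):
        changed = False
        for s in range(n):
            def cost(c):
                # Potts penalty: pay the similarity weight for every
                # neighbor whose current label disagrees with c.
                pair = sum(similarity.get((min(s, t), max(s, t)), 0.0)
                           for t in adjacency[s] if labels[t] != c)
                return unary[s][c] + lam * pair
            best = min(range(len(unary[s])), key=cost)
            if best != labels[s]:
                labels[s] = best
                changed = True
        if not changed:
            break
    return labels
```

For example, a superpixel whose FCN scores weakly favor one class can be flipped by a strongly-labeled, visually similar neighbor: with `unary = [[0.1, 2.0], [0.2, 2.0], [0.9, 0.8]]`, `adjacency = [[1], [0, 2], [1]]`, and `similarity = {(0, 1): 1.0, (1, 2): 1.0}`, the third superpixel is pulled to class 0 by its neighbor.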

BibTeX

@inproceedings{YRC17,
  title="{Fully convolutional network with superpixel parsing for fashion Web image segmentation}",
  author="L. Yang and H. Rodrigues and M. Crucianu and M. Ferecatu",
  booktitle="{International Conference on Multimedia Modeling (MMM2017)}",
  year=2017,
  month="January",
  series="LNCS",
  volume=10132,
  pages="139-151",
  address="Reykjavik, Iceland",
  doi="10.1007/978-3-319-51811-4_12",
}