The ability to recognise the shape of a deformable object (e.g. a human or animal), regardless of the object's pose, is an important requirement for modern shape retrieval methods. One approach to this problem is to transform each deforming model into a canonical form, reducing the problem to a rigid shape retrieval task. Many different methods for automatically computing a canonical form from a 3D mesh have been proposed, and methods using such approaches along with rigid retrieval systems have performed well on shape retrieval benchmarks. However, most of these methods have not been evaluated on the same dataset, or used for retrieval with the same rigid retrieval system, so their relative performance is unclear. We propose a new benchmark to provide a meaningful comparison of existing and new canonical form methods for non-rigid shape retrieval.

To participate in this track, please register by emailing


David Pickup - Cardiff University, UK.
Xianfang Sun - Cardiff University, UK.
Paul L. Rosin - Cardiff University, UK.
Ralph R. Martin - Cardiff University, UK.
Zhiquan Cheng - Avatar Science (Hunan) Company, China.


Our track uses a new dataset, made by combining a selection of models from two existing databases: the SHREC'11 non-rigid dataset [2], and the SHREC'14 non-rigid humans dataset [1]. Some of the models contain holes, such as at the eyes and mouth, and some contain self-intersecting triangles. The meshes are in the .obj file format. The data is split into a training set and a testing set. We provide a classification file (.cla) for the training data, and will provide a classification file for the test data once the track has been completed. The format of a .cla file is explained here.
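For reference, a minimal sketch of reading a .cla classification file is shown below. It assumes the Princeton Shape Benchmark layout (a header line, counts of categories and models, then per-category blocks of name, parent, model count, and model IDs); the format description linked above is authoritative, and this parser is only an illustration.

```python
def parse_cla(path):
    """Parse a .cla classification file (assumed PSB-style layout).

    Returns a dict mapping category name -> list of model IDs.
    """
    with open(path) as f:
        tokens = f.read().split()
    it = iter(tokens)
    header, version = next(it), next(it)          # e.g. "PSB" "1"
    num_categories = int(next(it))
    num_models = int(next(it))
    categories = {}
    for _ in range(num_categories):
        name = next(it)
        parent = next(it)                          # "0" if no parent category
        count = int(next(it))
        categories[name] = [int(next(it)) for _ in range(count)]
    return categories
```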

The complete dataset is available to download here.

Please note that the data we provide may be used for research purposes only.
If you use our data, please cite us and the other sources of the data:

@inproceedings{pickup2014shrec,
  author = {Pickup, D. and Sun, X. and Rosin, P. L. and Martin, R. R. and Cheng, Z. and Lian, Z. and Aono, M. and Ben Hamza, A. and Bronstein, A. and Bronstein, M. and Bu, S. and Castellani, U. and Cheng, S. and Garro, V. and Giachetti, A. and Godil, A. and Han, J. and Johan, H. and Lai, L. and Li, B. and Li, C. and Li, H. and Litman, R. and Liu, X. and Liu, Z. and Lu, Y. and Tatsuma, A. and Ye, J.},
  title = {S{H}{R}{E}{C}'14 track: Shape Retrieval of Non-Rigid 3D Human Models},
  booktitle = {Proceedings of the 7th Eurographics workshop on 3D Object Retrieval},
  series = {EG 3DOR'14},
  year = {2014},
  numpages = {10},
  publisher = {Eurographics Association}
}

@inproceedings{lian2011shrec,
  author = {Lian, Z. and Godil, A. and Bustos, B. and Daoudi, M. and Hermans, J. and Kawamura, S. and Kurita, Y. and Lavou{\'e}, G. and Nguyen, H. V. and Ohbuchi, R. and Ohkita, Y. and Ohishi, Y. and Porikli, F. and Reuter, M. and Sipiran, I. and Smeets, D. and Suetens, P. and Tabia, H. and Vandermeulen, D.},
  title = {S{H}{R}{E}{C}'11 track: shape retrieval on non-rigid 3D watertight meshes},
  booktitle = {Proceedings of the 4th Eurographics conference on 3D Object Retrieval},
  series = {EG 3DOR'11},
  year = {2011},
  pages = {79--88},
  numpages = {10},
  publisher = {Eurographics Association}
}


There are two tasks participants may take part in:

  1. Submit a canonical form for each mesh within the dataset, along with a description of the method used.
  2. Perform shape retrieval for each set of canonical forms submitted for Task 1, along with a description of the method used. The retrieval task is formally defined as:
    Return a list of all models, ordered by decreasing shape similarity to a query model.
    Each model in the dataset should be used as a separate query model.

Our track will have two rounds of submissions. For the first round of submissions, participants will submit their entries for Task 1 (Deadline 29th January 2015). All the entries of Task 1 will be sent out to all the participants of Task 2 (2nd February 2015), and the results of this task will be collected in the second round of submissions (Deadline 14th February 2015).

Participants may submit results for just one of the tasks, or for both.

Task 1 Submission Format

All canonical forms should be meshes in the .obj format. The filename of each canonical form should be the same as that of the original mesh. Canonical forms should be submitted for both the training and testing sets.

Task 2 Submission Format

Participants should submit one file for the retrieval result of each set of canonical forms. The file should be named surname_method_setID.txt, where surname should be replaced with the first author's surname, method with the name of the retrieval method, and setID with the ID of the corresponding set of canonical forms. The IDs will be provided by us along with the canonical form data.

The models in each dataset are numbered from 0 to N-1, where N is the number of models in the dataset. Line i of the results file should contain the retrieval results of using model i as the query, so line 0 should contain the results for model 0, etc. Each line of retrieval results should list all the remaining N-1 models in the dataset, ordered by their similarity to the query model, from most similar to least similar. The models should be separated by a space.
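The file layout above can be produced mechanically from a pairwise dissimilarity matrix. The sketch below assumes such a matrix D is already available (D[i][j] is the dissimilarity between models i and j, computed by whichever rigid retrieval method is applied to the canonical forms); it simply ranks and writes the lines in the required format.

```python
def write_results(D, path):
    """Write a Task 2 results file from an N x N dissimilarity matrix D.

    Line i lists all models except i, most similar (smallest D[i][j]) first,
    separated by spaces.
    """
    n = len(D)
    with open(path, "w") as f:
        for i in range(n):
            ranked = sorted((j for j in range(n) if j != i),
                            key=lambda j: D[i][j])
            f.write(" ".join(str(j) for j in ranked) + "\n")
```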

An example result for performing Task 2 is available here.

Retrieval Evaluation

We will evaluate the retrieval results using the nearest neighbour, first tier, second tier, and discounted cumulative gain measures.
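As an illustration only, the four measures can be computed per query roughly as follows, using their usual Princeton Shape Benchmark definitions (with K relevant models, first tier looks at the top K results, second tier at the top 2K, and DCG is normalised by its ideal value). The organisers' evaluation code linked below is authoritative; this sketch may differ from it in details.

```python
import math

def evaluate_query(ranked, labels, query):
    """Compute (NN, first tier, second tier, normalised DCG) for one query.

    ranked: retrieved model IDs, most similar first, query excluded.
    labels: dict mapping model ID -> class label.
    """
    rel = [1 if labels[m] == labels[query] else 0 for m in ranked]
    k = sum(rel)                              # number of relevant models
    nn = rel[0]                               # 1 if best match is relevant
    ft = sum(rel[:k]) / k                     # recall within top K
    st = sum(rel[:2 * k]) / k                 # recall within top 2K
    # DCG: gain 1 at rank 1, gain 1/log2(rank) thereafter, normalised
    # by the DCG of an ideal ranking with all K relevant models first.
    dcg = rel[0] + sum(r / math.log2(i + 1)
                       for i, r in enumerate(rel[1:], start=1))
    ideal = 1 + sum(1 / math.log2(i + 1) for i in range(1, k))
    return nn, ft, st, dcg / ideal
```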

The evaluation code can be downloaded here.


[1] D. Pickup, X. Sun, P. L. Rosin, R. R. Martin, Z. Cheng, Z. Lian, M. Aono, A. Ben Hamza, A. Bronstein, M. Bronstein, S. Bu, U. Castellani, S. Cheng, V. Garro, A. Giachetti, A. Godil, J. Han, H. Johan, L. Lai, B. Li, C. Li, H. Li, R. Litman, X. Liu, Z. Liu, Y. Lu, A. Tatsuma, and J. Ye. SHREC'14 track: Shape retrieval of non-rigid 3D human models. In Proceedings of the 7th Eurographics workshop on 3D Object Retrieval, EG 3DOR'14. Eurographics Association, 2014.

[2] Z. Lian, A. Godil, B. Bustos, M. Daoudi, J. Hermans, S. Kawamura, Y. Kurita, G. Lavoué, H. V. Nguyen, R. Ohbuchi, Y. Ohkita, Y. Ohishi, F. Porikli, M. Reuter, I. Sipiran, D. Smeets, P. Suetens, H. Tabia, and D. Vandermeulen. SHREC'11 track: shape retrieval on non-rigid 3D watertight meshes. In Proceedings of the 4th Eurographics conference on 3D Object Retrieval, EG 3DOR'11, pages 79-88. Eurographics Association, 2011.