The Track

The problem of finding objects similar or relevant to a given query input is a fundamental task in multimedia databases. An exact search is generally meaningless in this context, because two objects in a dataset are identical only when they are digital copies. Two objects obtained from the same source (e.g., by 3D scanning the same real-world object twice) will result in different but similar digitized models. In addition to retrieval, algorithms for similarity search can be used to implement multimedia mining tasks such as clustering and classification. It is therefore relevant to study effective methods for representing and searching multimedia objects.

Among similarity search problems, one of particular interest is partial retrieval on 3D models. In this task, the query input is a partial 3D view, and the goal is to find the corresponding part in a complete 3D model or 3D scene. The partial retrieval task is known to be hard, as shown by a previous SHREC track on this problem [1].

The aim of this SHREC track is to compare the matching performance of local 3D shape descriptors for partial retrieval. We base our evaluation on the recently proposed ShapeBench [2], a benchmarking methodology specifically aimed at comparing the effectiveness of local descriptors. This methodology allows us to implement a large-scale, fully reproducible benchmark for 3D local descriptors.

Challenges

For the evaluation of the local 3D shape descriptors, we will use ShapeBench with combinations of filters from the following list:

Clutter

Pile randomly chosen objects on top of the input scene. A physics simulator ensures that the geometry behaves in a realistic manner.

Independent variable: the area of clutter objects intersecting the support volume, divided by the area intersecting the support volume that belongs to the object being recognized.
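The clutter ratio above can be approximated from per-triangle areas. The sketch below is illustrative only and not ShapeBench's implementation: it approximates "intersecting the support volume" by keeping triangles whose centroid falls inside a spherical support region, and all function names are our own.

```python
import numpy as np

def triangle_areas(vertices, faces):
    """Area of each triangle, given vertex positions and face indices."""
    a, b, c = (vertices[faces[:, i]] for i in range(3))
    return 0.5 * np.linalg.norm(np.cross(b - a, c - a), axis=1)

def area_in_support(vertices, faces, centre, radius):
    """Approximate surface area inside a spherical support volume by
    keeping triangles whose centroid lies within the sphere."""
    centroids = vertices[faces].mean(axis=1)
    inside = np.linalg.norm(centroids - centre, axis=1) <= radius
    return triangle_areas(vertices, faces)[inside].sum()

def clutter_ratio(clutter_mesh, object_mesh, centre, radius):
    """Independent variable for the clutter filter: clutter area inside the
    support volume, divided by the area of the recognised object inside it."""
    clutter_area = area_in_support(*clutter_mesh, centre, radius)
    object_area = area_in_support(*object_mesh, centre, radius)
    return clutter_area / object_area
```

A value of 0 thus means no clutter inside the support volume, while values above 1 mean the clutter surface dominates the object's own surface there.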

Occlusion

Choose a random viewing direction from which the scene is observed, and remove all geometry not visible from that point of view.

Independent variable: the area of the remaining mesh intersecting the support volume divided by the area of the unmodified mesh intersecting the support volume.

Alternate triangulation

Keep the surface largely the same, but displace the triangle vertices to simulate the same object being captured multiple times.

Independent variable: distance to the closest corresponding vertex in the modified mesh.
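This independent variable can be computed as a nearest-neighbour search between the two vertex sets. The brute-force sketch below is an assumption about how such a distance could be measured (a KD-tree would scale better); it is not taken from ShapeBench's code.

```python
import numpy as np

def nearest_vertex_distances(original, remeshed):
    """For each vertex of the original mesh, the distance to the closest
    vertex of the re-triangulated mesh. Both inputs are (N, 3) arrays."""
    # Pairwise differences between the two vertex sets: shape (N, M, 3).
    diff = original[:, None, :] - remeshed[None, :, :]
    # Euclidean distance to every remeshed vertex, then take the minimum.
    return np.linalg.norm(diff, axis=2).min(axis=1)
```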

Gaussian noise

Simulate various sources of noise introduced in the capturing process.

Independent variable: standard deviation of noise function.

We may consider using additional filters for the benchmark, depending on their availability.

Evaluation

We will compute the Descriptor Distance Index (DDI) metric used by ShapeBench to measure the effect of each applied filter, or combination of filters, on a local 3D shape descriptor.

We define three levels of increasing difficulty. Level 1 applies single filters to the scene object only, Level 2 applies combinations of different filters, potentially to both objects, and Level 3 aims to simulate a combination of challenges encountered in real-world environments. When occlusion is applied to both the model and scene objects, the angle between the viewpoints is controlled to simulate varying degrees of overlap.

Level   | Filters applied on model         | Filters applied on scene
--------|----------------------------------|------------------------------------------
Level 1 | (none)                           | Occlusion
        | (none)                           | Clutter
        | (none)                           | Gaussian noise
Level 2 | (none)                           | Occlusion + Gaussian noise
        | Occlusion                        | Occlusion
        | Occlusion + Fixed Gaussian noise | Occlusion + Fixed Gaussian noise
        | Occlusion                        | Occlusion + Clutter
        | Occlusion + Fixed Gaussian noise | Occlusion + Clutter + Fixed Gaussian noise
Level 3 | (none)                           | Occlusion + Variant of clutter with fewer clutter objects + Fixed Gaussian noise + Alternate triangulation

Procedure

People interested in participating in this track must register by sending an email to the organisers using the "Contact us" button found on this page. Registration helps us keep track of participants.

Please download ShapeBench from its GitHub repository, install it locally, and check that you are using the configuration file for SHREC 2025. The repository contains a script that assists in running the benchmark, and documentation is provided on how new methods can be integrated.

ShapeBench produces JSON files as output. Please submit the output files corresponding to your run by sending an email to Bart Iver van Blokland using the Contact Us button on this page, together with a brief description of the method/local descriptor used for obtaining the results. Other queries, such as technical support for installing, compiling, and running the benchmark can also be directed there. Assistance with running the benchmark on the full dataset can also be arranged.

GRSI Replicability Stamp

The benchmark has been developed so that all results it produces are fully replicable: a results file can be supplied to the benchmark, which then automatically replicates the results contained within, provided the submitted methods are implemented deterministically. We intend to hold a vote among all participants, and if it is unanimously decided that all source code should be made available, we will apply for the stamp.

References

  • [1] Ivan Sipiran, Rafael Meruane, Benjamin Bustos, Tobias Schreck, Henry Johan, Bo Li, and Yijuan Lu. SHREC'13 Track: Large-scale partial shape retrieval using simulated range images. In Proc. 6th Eurographics Workshop on 3D Object Retrieval (3DOR'13), pages 81-88. Eurographics Association, 2013.
  • [2] Bart Iver van Blokland. ShapeBench: A new approach to benchmarking local 3D shape descriptors. Computers & Graphics, 124:104052, 2024.
  • [3] Matt Deitke, Dustin Schwenk, Jordi Salvador, Luca Weihs, Oscar Michel, Eli VanderBilt, Ludwig Schmidt, Kiana Ehsani, Aniruddha Kembhavi, and Ali Farhadi. Objaverse: A universe of annotated 3D objects. In Proc. IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), pages 13142-13153, 2023.