The DISA Lab runs several demonstrations of visual similarity search in large image collections. In each application, visual descriptors (features) have been extracted from every image, and these descriptors are organized by our similarity indexes, so the system can quickly answer queries like “give me the most similar images to this example”. The demos differ in the image collection, the type of visual descriptors, and the indexing structure.
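The core idea of descriptor-based similarity search can be sketched in a few lines. This is a minimal illustration, not the lab's actual indexing structure: it assumes each image has already been reduced to a fixed-length descriptor vector (here random data stands in for real CNN or MPEG-7 features) and uses brute-force Euclidean nearest-neighbour search, which real systems replace with a similarity index to scale to millions of images.

```python
import numpy as np

# Hypothetical stand-in for extracted image descriptors: one
# fixed-length feature vector per image (random data for illustration).
rng = np.random.default_rng(0)
descriptors = rng.standard_normal((1000, 128)).astype(np.float32)

def most_similar(query, descriptors, k=5):
    """Return indices of the k descriptors closest to `query`
    by Euclidean distance (brute-force nearest-neighbour search)."""
    dists = np.linalg.norm(descriptors - query, axis=1)
    return np.argsort(dists)[:k]

# "Give me the most similar images to this example":
query = descriptors[42]
hits = most_similar(query, descriptors, k=5)
# The query image is its own nearest neighbour (distance zero).
assert hits[0] == 42
```

At the scale of the demos below (tens of millions of images), this linear scan would be far too slow; the point of the lab's similarity indexes is to answer the same query without comparing against every descriptor.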
Visual search on 20 million images from the Profiset collection using descriptors from deep convolutional neural networks… read more
Visual search on 100 million images from the CoPhIR dataset using five MPEG-7 visual descriptors… read more