Ciprian-Mihai Ceausescu
Adv. Artif. Intell. Mach. Learn., XX (XX):-
1. Ciprian-Mihai Ceausescu: University of Bucharest, Romania
DOI: 10.54364/AAIML.2026.62301
Article History: Received on: 27-Jan-26, Accepted on: 02-May-26, Published on: 09-May-26
Corresponding Author: Ciprian-Mihai Ceausescu
Email: ciprian-mihai.ceausescu@drd.unibuc.ro
Citation: Ciprian-Mihai Ceausescu. Content-Based Image Retrieval in Histopathology and Gastrointestinal Endoscopy: A Comparative Study of Deep Models. Advances in Artificial Intelligence and Machine Learning. 2026. (Ahead of Print) https://dx.doi.org/10.54364/AAIML.2026.62301
In the medical domain, reaching the right diagnosis rarely depends on a single piece of information. Specialists must combine several indicators to provide an answer for a specific patient. Content-based image retrieval is becoming a relevant task in medical image analysis: it allows doctors to search large medical image databases for visually similar cases, retrieve them, and compare them directly with the current patient's record. In this work, we propose a comparison protocol for several pretrained backbone models on three medical imaging datasets: lung histopathology, colon histopathology, and wireless capsule endoscopy videos. The models we consider include classic convolutional networks, newer transformer-based architectures, self-supervised approaches, multimodal encoders, and even some models originally built for segmentation tasks. We propose an evaluation framework across diverse medical domains, from microscope slides of tissue to wide-angle endoscopic views of the gastrointestinal tract. Image processing and embedding retrieval are performed using the same pipeline, giving a clearer picture of how well different pretrained representations transfer between these domains. From a methodological point of view, our study addresses a common gap in the literature: a consistent comparison of backbones for medical retrieval on histopathology and gastrointestinal datasets. From a practical point of view, the results should help specialists decide which models tend to work best for medical image search tasks. Our findings provide valuable insights for building retrieval-based tools to support diagnosis, train new doctors, or browse and explore large clinical datasets. Overall, the benchmark can serve as a starting point for future work on medical image retrieval and for adapting large pretrained models to different clinical imaging domains.
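The abstract does not specify the retrieval mechanics, but the described pipeline (embed images with a pretrained backbone, then rank database entries by similarity to a query embedding) can be sketched minimally. The snippet below is an illustrative assumption, not the paper's actual implementation: it uses randomly generated vectors as stand-ins for backbone embeddings and cosine similarity as the ranking metric.

```python
import numpy as np

def l2_normalize(x, axis=-1, eps=1e-12):
    # Normalize vectors so cosine similarity reduces to a dot product.
    return x / (np.linalg.norm(x, axis=axis, keepdims=True) + eps)

def build_index(embeddings):
    # In practice, embeddings would come from a pretrained backbone
    # (CNN, transformer, self-supervised or multimodal encoder).
    return l2_normalize(np.asarray(embeddings, dtype=np.float32))

def retrieve(index, query, k=5):
    # Rank all database images by cosine similarity to the query.
    q = l2_normalize(np.asarray(query, dtype=np.float32))
    sims = index @ q
    top = np.argsort(-sims)[:k]
    return top, sims[top]

# Toy example: 100 database "images" with 512-dim embeddings.
rng = np.random.default_rng(0)
db = rng.normal(size=(100, 512))
query = db[42] + 0.01 * rng.normal(size=512)  # near-duplicate of image 42
ranks, scores = retrieve(build_index(db), query, k=3)
# ranks[0] is 42: the near-duplicate is retrieved first.
```

A real system would replace the random vectors with features extracted once per database image, and could swap the brute-force dot product for an approximate nearest-neighbor index when the database grows large.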