Title: Content-Based Gastric Image Retrieval Using Fusion of Deep Learning Features with Dimensionality Reduction
Publisher: Springer
Abstract
The rapid expansion of medical imaging repositories in hospitals has introduced significant challenges in managing and retrieving relevant data, which may contribute to diagnostic errors. Content-based medical image retrieval (CBMIR) offers a solution to these challenges by enabling efficient querying of vast datasets. This research introduces an efficient method, ResNetFuse, which leverages pre-trained deep convolutional neural networks (DCNNs), ResNet-18 and ResNet-50, for feature extraction. In ResNetFuse, the features from both networks are fused via concatenation, resulting in substantial improvements in retrieval performance. However, this fusion increases the dimensionality of the features, which in turn increases both storage requirements and retrieval time. To address this high-dimensionality issue, t-distributed stochastic neighbour embedding (t-SNE) is applied. The proposed ResNetFuse + t-SNE method is rigorously evaluated on the KVASIR benchmark dataset. Experimental results demonstrate that ResNetFuse + t-SNE surpasses state-of-the-art techniques across performance metrics, achieving a mean average precision (mAP) of 96.15% for the retrieval of 10 images. Additionally, the method achieves an 87.5% reduction in feature dimensionality compared to ResNetFuse alone, enabling more compact and efficient image indexing without sacrificing retrieval accuracy. These findings underscore the efficacy of ResNetFuse + t-SNE in improving retrieval performance while reducing computational complexity, making it particularly suitable for resource-constrained environments. © The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2025.
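The pipeline described in the abstract (extract features with two pre-trained ResNets, fuse by concatenation, reduce with t-SNE, then rank by distance) can be sketched as follows. This is a minimal illustration, not the authors' implementation: random arrays stand in for the real ResNet-18 (512-D) and ResNet-50 (2048-D) pooled features, the 2-D t-SNE target and all parameter values are illustrative choices, and the distance-based ranking is a generic nearest-neighbour retrieval step assumed from the CBMIR setting.

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# Stand-ins for pre-trained DCNN descriptors (assumed dimensions:
# ResNet-18 global-pooled output = 512-D, ResNet-50 = 2048-D per image).
n_images = 20
feats_r18 = rng.standard_normal((n_images, 512))
feats_r50 = rng.standard_normal((n_images, 2048))

# ResNetFuse: fuse the two feature sets by concatenation -> 2560-D per image.
fused = np.concatenate([feats_r18, feats_r50], axis=1)

# Dimensionality reduction with t-SNE. A 2-D target is used here purely
# for illustration; the paper reports an 87.5% reduction relative to the
# fused descriptor. Perplexity must be smaller than the sample count.
embedded = TSNE(n_components=2, perplexity=5, random_state=0).fit_transform(fused)

# Retrieval sketch: rank database images by Euclidean distance to a query
# embedding (the query image itself ranks first at distance zero).
query = embedded[0]
dists = np.linalg.norm(embedded - query, axis=1)
ranked = np.argsort(dists)

print(fused.shape, embedded.shape, ranked[0])
```

Note that t-SNE has no `transform` method for unseen queries; in practice the query must be embedded jointly with the database, which is one reason such reduced indexes are typically rebuilt offline.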
