Title:
Content based medical image retrieval using deep learning and handcrafted features in dimensionality reduction framework


Publisher

Elsevier Ltd

Abstract

Content-based medical image retrieval (CBMIR) is an approach for extracting relevant medical images from large databases based on their visual attributes rather than textual metadata. The method analyzes the visual qualities of medical images, including texture, shape, intensity, and spatial relationships, to detect similarities and patterns. This study addresses two major challenges in CBMIR: effective image representation and dimensionality reduction. The semantic gap between human interpretation and machine-generated features is tackled using handcrafted techniques and deep convolutional neural networks (DCNNs) with transfer learning for feature extraction. Additionally, four dimensionality reduction methods, Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Uniform Manifold Approximation and Projection (UMAP), and t-distributed Stochastic Neighbor Embedding (t-SNE), are evaluated to optimize performance in terms of accuracy, speed, scalability, and memory usage. The CBMIR system is assessed on four datasets, Medical MNIST, KVASIR, PH2, and MESSIDOR, using Precision, Recall, and F1-score as metrics. Results show that the proposed method, HOG + t-SNE, maintains consistent performance with a mean average precision (mAP) of 99.85% on Medical MNIST relative to the full-dimension feature-based technique, while the DCNNs combined with the dimensionality reduction methods are evaluated on KVASIR, MESSIDOR, and PH2, where the proposed GoogleNet + t-SNE achieves mAP values of 95.32%, 92.33%, and 91.34%, respectively. © 2025
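The abstract's pipeline, handcrafted feature extraction followed by dimensionality reduction and similarity-based retrieval, can be sketched in outline. The snippet below is a minimal illustration, not the paper's implementation: it uses a deliberately simplified HOG-style orientation histogram rather than full block-normalized HOG, substitutes PCA for t-SNE in the reduction step (standard t-SNE provides no out-of-sample transform for unseen queries), and operates on synthetic arrays in place of medical images.

```python
import numpy as np

def hog_like_features(img, bins=9):
    """Simplified HOG-style descriptor (illustrative only): a single
    histogram of gradient orientations weighted by gradient magnitude."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.arctan2(gy, gx) % np.pi              # unsigned orientation
    hist, _ = np.histogram(ang, bins=bins, range=(0, np.pi), weights=mag)
    return hist / (np.linalg.norm(hist) + 1e-8)   # L2-normalize

def pca_fit(X, k):
    """PCA via SVD: returns the feature mean and top-k principal axes."""
    mu = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mu, full_matrices=False)
    return mu, Vt[:k]

def retrieve(query_feat, db_feats, mu, axes, top=3):
    """Project query and database into the reduced space and
    rank database entries by Euclidean distance to the query."""
    q = (query_feat - mu) @ axes.T
    D = (db_feats - mu) @ axes.T
    dists = np.linalg.norm(D - q, axis=1)
    return np.argsort(dists)[:top]

# Synthetic stand-in for a medical image collection.
rng = np.random.default_rng(0)
images = rng.random((20, 32, 32))
feats = np.array([hog_like_features(im) for im in images])
mu, axes = pca_fit(feats, k=4)
ranked = retrieve(feats[0], feats, mu, axes)      # query with image 0
```

Querying with an image already in the database should rank that image first, since its distance to itself in the reduced space is zero; that self-match is a quick sanity check for any retrieval pipeline of this shape.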
