Institutional Repository

Browsing by Author "Ruchilekha"

Now showing 1 - 8 of 8
    Conference Paper
    A Comparative Study of Feature Extraction Techniques and Similarity Measures for Image Retrieval
    (Institute of Electrical and Electronics Engineers Inc., 2022) Mona Singh; Suneel Kumar; Ruchilekha; Manoj Kumar Singh
    With the growing popularity of massive image databases in several applications, it is critical to develop an autonomous and efficient retrieval system that can search for relevant images across an entire database. The method of obtaining relevant images from large image libraries by extracting their content features is known as content-based image retrieval (CBIR). In this paper, a comparative study of traditional feature extraction methods, such as Color moments, Gabor wavelets, the Discrete wavelet transform (DWT), Local binary patterns (LBP), the Gray level co-occurrence matrix (GLCM), and the Histogram of Oriented Gradients (HOG), is performed to present an efficient and more accurate CBIR system. The experiment is demonstrated on two benchmark datasets, namely Wang (color images) and Medical MNIST (grayscale images), with different visual characteristics. To retrieve images relevant to a query image, three distinct distance metrics, Cosine, City block, and Euclidean, are used to measure the similarity between the query image and the database images. The experiment is evaluated using two performance metrics, precision and recall, to compare the efficacy of the various approaches. We achieve the best results on the Wang dataset with an average precision of 65.65% and an average recall of 6.57 on a scale of 10 using Color moment features with the Euclidean distance metric, and on the Medical MNIST dataset with an average precision of 99.89% and an average recall of 9.99 on a scale of 10 using HOG features with the City block distance metric. © 2022 IEEE.
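The distance-based ranking step this abstract describes (compare a query feature vector against database feature vectors under a chosen metric, return the closest matches) can be sketched as follows. This is a minimal illustration, not the authors' code; the arrays and function name are hypothetical.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rank_images(query_feat, db_feats, metric="euclidean", k=10):
    """Rank database images by distance of their feature vectors to the query.

    metric can be any cdist-supported name, e.g. "cosine", "cityblock",
    "euclidean" -- the three metrics compared in the paper.
    """
    dists = cdist(query_feat[None, :], db_feats, metric=metric)[0]
    return np.argsort(dists)[:k]  # indices of the k nearest database images

# toy example: the query vector itself is stored at index 0
db = np.vstack([np.ones(4), np.zeros(4), np.full(4, 0.5)])
top = rank_images(np.ones(4), db, metric="cityblock", k=2)
```

Here `top` places index 0 first, since its City-block distance to the query is zero; precision and recall are then computed over such top-k result lists.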
    Article
    A deep learning approach for subject-dependent & subject-independent emotion recognition using brain signals with dimensional emotion model
    (Elsevier Ltd, 2023) Ruchilekha; Manoj Kumar Singh; Mona Singh
    This paper aims to design a deep learning based approach combined with machine learning classifiers for two different perspectives. In the first perspective, performance is evaluated when training and testing are performed on the same subject, called the subject-dependent evaluation criterion. In the second perspective, performance is evaluated when training and testing are performed on different subjects, called the subject-independent evaluation criterion. For each perspective, three label cases are constructed from valence, arousal, and dominance for recognizing human emotions: i) binary/2-class, ii) quad/4-class, and iii) octal/8-class classification. The experiment is performed on two publicly available datasets, DEAP and DREAMER. For emotion recognition, the brain signals are first processed and features are then extracted using our proposed deep convolutional neural network (DCNN) architecture. These extracted features are used for emotion recognition with the following classifiers: Naive Bayes (NB), decision tree (DT), k-Nearest Neighbors (KNN), Support Vector Machine (SVM), AdaBoost (AB), Random Forest (RF), Neural Networks (NN), Long Short-Term Memory (LSTM), and Bidirectional LSTM (BiLSTM). The experimental results give more robust classification for subject-independent emotion recognition than for subject-dependent emotion recognition, with DCNN + NN for binary and DCNN + SVM for quad and octal classification. Moreover, the experimental results show that arousal and dominance play an important role in emotion recognition, in contrast to the valence and arousal reported in the literature. © 2023 Elsevier Ltd
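The subject-dependent versus subject-independent distinction in this abstract comes down to how the train/test split treats subject identity: subject-independent evaluation must hold out entire subjects. A minimal sketch under assumed toy data (features, labels, and subject IDs are all synthetic):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.svm import SVC

# hypothetical per-trial features, binary emotion labels, and subject IDs
rng = np.random.default_rng(0)
X = rng.standard_normal((60, 16))      # 60 trials, 16 features each
y = rng.integers(0, 2, 60)             # binary label (e.g. high/low arousal)
subjects = np.repeat(np.arange(6), 10)  # 6 subjects, 10 trials apiece

# subject-independent evaluation: each fold tests on one unseen subject
accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
    clf = SVC().fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))
```

Subject-dependent evaluation would instead split within each subject's own trials, so the same subject appears in both training and test sets.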
    Article
    DFFnet: delay feature fusion network for efficient content-based image retrieval
    (Springer Science and Business Media Deutschland GmbH, 2025) Suneel Kumar; Ruchilekha; Manoj Kumar Singh; Manoj Kumar Mishra
    Due to the advancement of affordable imaging devices, a huge number of images are generated for different applications. An efficient method for retrieving the images corresponding to a query image from a huge repository is still awaited; thus, content-based image retrieval (CBIR) systems have been developed. One of the issues that directly threatens the effectiveness of a CBIR system is the semantic gap. In this paper, we introduce a Delay Feature Fusion Network (DFFnet) within the framework of the SqueezeNet architecture. Our proposed model fuses a past layer's features with the current layer's features by applying a transposed convolution followed by depth concatenation. This integration preserves crucial information that may otherwise be lost during the forward pass. After extracting image features, we apply t-SNE (t-Distributed Stochastic Neighbor Embedding). This technique projects the high-dimensional image features into a lower-dimensional space, enabling compact image indexing and potentially improving the overall performance of the CBIR system. Notably, we observed that our proposed method is only minimally affected as the number of retrieved images increases. By leveraging DFFnet and employing t-SNE, our approach aims to enhance image indexing and achieve improved performance on image retrieval tasks. The performance of DFFnet with and without t-SNE is evaluated on benchmark datasets: Corel, Kadid, and ImageNet. Our proposed DFFnet with t-SNE gives a significant improvement in the performance metrics precision, recall, and F1-score in comparison to other state-of-the-art methods. © The Author(s), under exclusive licence to Springer-Verlag London Ltd., part of Springer Nature 2025.
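The t-SNE indexing step mentioned above (project high-dimensional CNN features into a compact low-dimensional space) can be sketched with scikit-learn; the feature matrix here is synthetic stand-in data, not DFFnet output.

```python
import numpy as np
from sklearn.manifold import TSNE

# hypothetical high-dimensional image features (e.g. from a CNN backbone)
rng = np.random.default_rng(0)
features = rng.standard_normal((50, 128))  # 50 images, 128-D descriptors

# project to a 2-D space for compact indexing
tsne = TSNE(n_components=2, perplexity=10, init="random", random_state=0)
embedded = tsne.fit_transform(features)
```

One design consideration: t-SNE has no `transform` for unseen points, so in a real CBIR pipeline the query's features must be embedded together with (or mapped into) the indexed set.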
    Conference Paper
    Diagnosing Covid-19 using AI based Medical Image Analysis
    (Association for Computing Machinery, 2022) Varad Srivastava; Ruchilekha
    The COVID-19 pandemic is currently one of the most significant problems being dealt with around the world. It mainly affects the lungs of the infected person, which can result in further serious threats. To help avoid this life-threatening condition, we have used chest radiological images for COVID-19 detection. This infectious disease is communicable and is spreading rapidly throughout the world; hence, fast and accurate detection of COVID-19 is mandatory so that proper treatment can be given in time. In this paper, the proposed work aims to develop a web application, namely CovSADs (Covid-19 Smart A.I. Diagnosis System), using a deep learning approach for faster and more efficient detection of COVID-19. This web application uses X-ray and CT scan images for evaluation. We have developed the DeepCovX and DeepCovCT models by incorporating a Transfer Learning (TL) approach for COVID-19 detection via chest X-ray and CT scan images, respectively. Further, we have used Grad-CAM in the case of X-rays to ensure our model is looking at relevant information to make decisions, and image segmentation is used in the case of CT scans to extract and localize the region of interest (ROI) from the binary image. Our proposed models achieve accuracies of 95.89% and 98.01% for X-ray and CT scan images, respectively. We obtained a specificity of 99.57%, sensitivity of 100%, and AUC of 0.998 for X-rays, and a specificity of 98.80%, sensitivity of 97.06%, and AUC of 0.9875 for CT scan images. The F1-score is 0.98 for both COVID-19 and Non-COVID-19 in the case of CT scan images. Both quantitative and qualitative results are promising for COVID-19 detection and extraction of infected lung regions. The primary objective of the web application is to assist radiologists not only in mass screening but also in planning the treatment process. © 2022 ACM.
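The ROI-from-binary-image step described in this abstract (localize the segmented lung region in a binary mask) can be sketched with plain NumPy; the mask and function name below are hypothetical, not the paper's implementation.

```python
import numpy as np

def roi_bbox(mask):
    """Bounding box (top, bottom, left, right) of the nonzero region
    in a binary segmentation mask."""
    ys, xs = np.nonzero(mask)
    return ys.min(), ys.max(), xs.min(), xs.max()

# hypothetical 8x8 binary mask with a segmented region
mask = np.zeros((8, 8), dtype=np.uint8)
mask[2:5, 3:6] = 1
box = roi_bbox(mask)
roi = mask[box[0]:box[1] + 1, box[2]:box[3] + 1]  # cropped ROI
```

The cropped `roi` (or the corresponding crop of the original CT slice) is what would then be passed on for classification.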
    Conference Paper
    Emotion Recognition Using Phase-Locking-Value Based Functional Brain Connections Within-Hemisphere and Cross-Hemisphere
    (Springer Science and Business Media Deutschland GmbH, 2024) Ruchilekha; Varad Srivastava; Manoj Kumar Singh
    Research in cognitive neuroscience has found emotion-induced, distinct cognitive variances between the left and right hemispheres of the brain. In this work, we follow up on this idea by using the Phase-Locking Value (PLV) to investigate EEG-based hemispherical brain connections for the emotion recognition task. PLV features are extracted for two scenarios, within-hemisphere and cross-hemisphere, and are then selected using maximum relevance-minimum redundancy (mRMR) and chi-square test mechanisms. Using machine learning (ML) classifiers, we evaluate the results for the dimensional model of emotions through binary classification on the valence, arousal, and dominance scales across four frequency bands (theta, alpha, beta, and gamma). We achieved the highest accuracies for the gamma band when assessed with mRMR feature selection. The KNN classifier is the most effective of the ML classifiers at this task, achieving best accuracies of 79.4%, 79.6%, and 79.1% with cross-hemisphere PLVs for valence, arousal, and dominance, respectively. Additionally, we find that cross-hemispherical connections are better predictors for emotion recognition than within-hemispherical ones, albeit only slightly. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
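The PLV feature this abstract builds on measures how consistently the instantaneous phase difference between two signals stays fixed over time. A minimal sketch using the Hilbert analytic phase (the signals below are synthetic sinusoids, not EEG; this is an illustration of the standard PLV formula, not the authors' code):

```python
import numpy as np
from scipy.signal import hilbert

def plv(x, y):
    """Phase-locking value between two 1-D signals:
    |mean(exp(i * (phase_x - phase_y)))|, in [0, 1]."""
    dphi = np.angle(hilbert(x)) - np.angle(hilbert(y))
    return np.abs(np.mean(np.exp(1j * dphi)))

t = np.linspace(0, 1, 512, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)        # 10 Hz oscillation
b = np.sin(2 * np.pi * 10 * t + 0.7)  # same frequency, constant phase lag
```

Two signals with a constant phase offset give a PLV near 1; computing this for every within-hemisphere or cross-hemisphere channel pair, per frequency band, yields the feature sets the paper compares.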
    Conference Paper
    Empirical study of emotional state recognition using multimodal fusion framework
    (Association for Computing Machinery, 2023) Ruchilekha; Manoj Kumar Singh
    Recent trends in emotion recognition are no longer limited to text, audio, and visual modalities; instead, physiological signals are attracting the attention of many researchers in the challenging field of human-computer interaction. In this paper, an empirical analysis is conducted on static patterns of physiological signals over time for human emotional states using a machine learning approach. This study also investigates which brain lobe is most responsive to emotions. Subjective ratings are transformed to a 3-dimensional VAD space and then grouped into five discrete emotion labels using three clustering techniques: k-means, k-medoids, and fuzzy c-means. Various features are extracted from EEG and other peripheral signals separately and are validated against the labels obtained from these clustering mechanisms using traditional classification algorithms. The aim of this study is to evaluate the performance of EEG signals and peripheral signals individually and to analyze the results when these modalities are fused together. The results obtained with multimodal fusion achieve the highest accuracy of 91.90%, with an AUC score of 0.98, for the ensemble subspace-KNN classifier when validated using the clustered labels of fuzzy c-means via the city-block metric. The study is conducted on the DEAP dataset using 32 subjects and 15 named emotions only. It is also observed that the temporal regions of the brain are most correlated with emotions; hence, these temporal EEG channels can be utilized for human emotion recognition towards human-computer interaction. © 2023 ACM.
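The labeling step described above (cluster continuous VAD-space ratings into five discrete emotion labels) can be sketched with k-means, one of the three clustering techniques the paper uses; the ratings below are synthetic stand-ins for DEAP's 1-9 self-assessment scales.

```python
import numpy as np
from sklearn.cluster import KMeans

# hypothetical subjective ratings in 3-D valence-arousal-dominance (VAD) space
rng = np.random.default_rng(42)
vad = rng.uniform(1, 9, size=(200, 3))  # DEAP-style ratings range 1-9

# group the continuous ratings into five discrete emotion labels
labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(vad)
```

The resulting cluster labels then serve as classification targets for the features extracted from the EEG and peripheral signals (k-medoids and fuzzy c-means would be substituted analogously).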
    Conference Paper
    GenEmo-Net: Generalizable Emotion Recognition Using Brain Functional Connections Based Neural Network
    (Springer Science and Business Media Deutschland GmbH, 2024) Varad Srivastava; Ruchilekha; Manoj Kumar Singh
    The aim of this research is to construct a generalizable and biologically interpretable emotion recognition model utilizing complex electroencephalogram (EEG) signals for realizing the emotional state of the human brain. In this paper, the spatial-temporal information of EEG signals is used to extract a brain-connectivity-based feature, the phase-locking value (PLV), which incorporates the phase information between a pair of signals. These functional features are then fed as input to our proposed model (GenEmo-Net), which comprises a Graph Convolutional Neural Network (GCNN) and a Long Short-Term Memory network (LSTM). The model dynamically learns an adjacency matrix that resembles the functional connections in the brain, which is combined with the temporal features learnt by the LSTM. To validate the generalization ability of our model, the experimental setup combines three emotion databases, namely DEAP, DREAMER, and AMIGOS, which increases variability and reduces bias across subjects and trials. We evaluated the performance of our proposed model on the combined dataset, achieving classification accuracies of 70.98 ± 0.73, 65.47 ± 0.56, and 70.09 ± 0.37 for discrimination of valence, arousal, and dominance, respectively. Notably, our generalized model gives more robust results on emotion recognition tasks when compared to other methods. In addition, the biological interpretation of GenEmo-Net is examined via the final adjacency matrix, learnt at the end of training, for the VAD processing units. The above results demonstrate the efficacy of GenEmo-Net for recognizing human emotions and also highlight substantial variations in the spatial and temporal brain characteristics across distinct emotional states. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
    Conference Paper
    Semantic Segmentation Based Image Signature Generation for CBIR
    (Springer Science and Business Media Deutschland GmbH, 2024) Suneel Kumar; Mona Singh; Ruchilekha; Manoj Kumar Singh
    Content-Based Image Retrieval (CBIR) leveraging semantic segmentation integrates semantic understanding with image retrieval, enabling users to search for images based on specific objects or regions within them. This paper presents a methodology for constructing image signatures, a pivotal element in enhancing image representation within a CBIR system. The efficiency and effectiveness of a CBIR system hinge significantly on the quality of the image signature, which serves as a compact and informative representation of the raw image data. Our proposed methodology begins by emphasizing clear object or region boundaries through pixel-level semantic segmentation masks. A pretrained semantic segmentation model, such as DeepLab v3+, is employed to generate pixel-wise object class predictions, yielding the necessary segmentation masks. Subsequently, each image is segmented into meaningful regions based on these masks, and relevant features are extracted from each segmented region using pre-trained Deep Convolutional Neural Network (DCNN) models: AlexNet, VGG16, and ResNet-18. During the retrieval phase, when a user queries the system with an image, the query image is segmented using the pre-trained semantic segmentation model, and features are extracted from its segmented regions. These query features are used to search the database for the most similar regions or images. Similarity scores, calculated using the Euclidean distance, rank the database entries by their similarity to the query, allowing efficient retrieval of the top-k most similar regions or images. We found that, for some classes, semantic-segmentation-based retrieval gives better performance than whole-image-based retrieval. © The Author(s), under exclusive license to Springer Nature Switzerland AG 2024.
An Initiative by BHU – Central Library
Powered by DSpace