Institutional Repository

Browsing by Author "Shravya Singh"

Now showing 1–4 of 4
  • Article
    Application of deep learning models for accurate classification of fluid collections in acute necrotizing pancreatitis on computed tomography: a multicenter study
    (Springer, 2025) Pankaj Kumar Gupta; Ruby Siddiqui; Shravya Singh; Nikita Pradhan; Jimil Shah; Jayanta Samanta; Vaneet Jearth; Anupam Kumar Singh; Harshal Surendra Mandavdhare; Vishal Sharma; Amar Mukund; Chhagan Lal Birda; Ishan Kumar; N. Suresh Kumar; Yashwant Patidar; Ashish Agarwal; Taruna Yadav; Binit Sureka; Anurag Kumar Tiwari; Ashish Verma; Ashish Sravanth Kumar; Saroj Kant Sinha; Usha K. Dutta
    Purpose: To apply CT-based deep learning (DL) models for accurate solid debris-based classification of pancreatic fluid collections (PFC) in acute pancreatitis (AP). Material and methods: This retrospective study comprised four tertiary care hospitals. Consecutive patients with AP and PFCs who had computed tomography (CT) prior to drainage were screened. Those who had magnetic resonance imaging (MRI) or endoscopic ultrasound (EUS) within 20 days of CT were considered for inclusion. Axial CT images were used for model training. Images were labelled as having ≤ 30% or > 30% solid debris based on MRI or EUS. Single-center data were used for model training and validation; data from the other three centers comprised the held-out external test cohort. We experimented with ResNet-50, Vision Transformer (ViT), and MedViT architectures. Results: Overall, we recruited 152 patients (129 training/validation and 23 testing). There were 1334, 334, and 512 images in the training, validation, and test cohorts, respectively. In the overall training and validation cohorts, the ViT and MedViT models had high diagnostic performance (sensitivity 92.4–98.7%, specificity 89.7–98.4%, and AUC 0.908–0.980). The sensitivity (85.3–98.6%), specificity (69.4–99.4%), and AUC (0.779–0.984) of all the models were high in all subgroups of the training and validation cohorts. In the overall external test cohort, MedViT had the best diagnostic performance (sensitivity 75.2%, specificity 75.3%, and AUC 0.753). MedViT had sensitivity, specificity, and AUC of 75.2%, 74.3%, and 0.748 in walled-off necrosis (WON) and 79%, 74.2%, 75.3%, and 0.767 for collections > 5 cm. Conclusion: DL models have moderate diagnostic performance for solid debris-based classification of WON and of collections greater than 5 cm on CT. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2024.
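The abstract above reports each model's sensitivity, specificity, and AUC for the binary solid-debris classification (≤ 30% vs > 30%). As a minimal, hedged sketch of how these three metrics are computed from predicted probabilities (the function name and the 0.5 decision threshold are assumptions for illustration, not taken from the paper):

```python
def classification_metrics(y_true, y_score, threshold=0.5):
    """Return (sensitivity, specificity, auc) for binary labels and scores.

    y_true: 0/1 ground-truth labels (1 = >30% solid debris, assumed convention)
    y_score: predicted probabilities for the positive class
    """
    y_pred = [1 if s >= threshold else 0 for s in y_score]
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    sensitivity = tp / (tp + fn)  # recall on the positive (>30% debris) class
    specificity = tn / (tn + fp)  # recall on the negative (<=30% debris) class
    # AUC via the Mann-Whitney U statistic: the probability that a random
    # positive case is scored above a random negative case (ties count 0.5)
    pos = [s for t, s in zip(y_true, y_score) if t == 1]
    neg = [s for t, s in zip(y_true, y_score) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    auc = wins / (len(pos) * len(neg))
    return sensitivity, specificity, auc
```

Note that, unlike sensitivity and specificity, the AUC is threshold-free, which is why the abstract can report all three as complementary summaries of the same model.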
  • Article
    Application of deep learning models for accurate classification of fluid collections in acute necrotizing pancreatitis on computed tomography: a multicenter study
    (Springer, 2024) Pankaj Gupta; Ruby Siddiqui; Shravya Singh; Nikita Pradhan; Jimil Shah; Jayanta Samanta; Vaneet Jearth; Anupam Singh; Harshal Mandavdhare; Vishal Sharma; Amar Mukund; Chhagan Lal Birda; Ishan Kumar; Niraj Kumar; Yashwant Patidar; Ashish Agarwal; Taruna Yadav; Binit Sureka; Anurag Tiwari; Ashish Verma; Ashish Kumar; Saroj K. Sinha; Usha Dutta
    [No abstract available]
  • Erratum
    Correction to: Application of deep learning models for accurate classification of fluid collections in acute necrotizing pancreatitis on computed tomography: a multicenter study (Abdominal Radiology, (2024), 50, 5, (2258-2267), 10.1007/s00261-024-04607-y)
    (Springer, 2025) Pankaj Kumar Gupta; Ruby Siddiqui; Shravya Singh; Nikita Pradhan; Jimil Shah; Jayanta Samanta; Vaneet Jearth; Anupam Kumar Singh; Harshal Surendra Mandavdhare; Vishal Sharma; Amar Mukund; Chhagan Lal Birda; Ishan Kumar; N. Suresh Kumar; Yashwant Patidar; Ashish Agarwal; Taruna Yadav; Binit Sureka; Anurag Kumar Tiwari; Ashish Verma; Ashish Sravanth Kumar; Saroj Kant Sinha; Usha K. Dutta
    The original version of this article unfortunately contained a mistake: the "Abstract" and "Keywords" sections were missing from the published version. This has now been corrected; the added abstract is identical to the one reproduced in the first record above.
    Keywords: acute necrotizing pancreatitis; computed tomography; deep learning
    The original article has been corrected. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
  • Article
    Deep learning-based segmentation of gallbladder cancer on abdominal computed tomography scans: a multicenter study
    (Springer, 2025) Pankaj Kumar Gupta; Niharika Dutta; Ajay Tomar; Shravya Singh; Sonam Choudhary; Nandita Mehta; Vansha Mehta; Rishabh Sheth; Divyashree Srivastava; Salai Thanihai; Palki Singla; Gaurav Prakash; Thakur Deen Yadav; Lileswar Kaman; Santhosh Irrinki; Harjeet Singh; Niket Shah; Amit Kumar J. Choudhari; Shraddha Patkar; Mahesh Goel; Rajanikant R. Yadav; Archana Gupta; Ishan Kumar; Kajal Seth; Usha K. Dutta; Chetan P. Arora
    Objectives: To train and validate segmentation models for automated segmentation of gallbladder cancer (GBC) lesions from contrast-enhanced CT images. Materials and methods: This retrospective study comprised consecutive patients with pathologically proven, treatment-naïve GBC who underwent a contrast-enhanced CT scan at four tertiary care referral hospitals. The training and validation cohort comprised CT scans of 317 patients (center 1). The internal test cohort was a temporally independent cohort (n = 29) from center 1 (internal test 1). The external test cohort comprised CT scans from three centers (n = 85). We trained state-of-the-art 2D and 3D image segmentation models (SAM Adapter, MedSAM, 3D TransUNet, SAM-Med3D, and 3D-nnU-Net) for automated segmentation of GBC. The models' performance for GBC segmentation on the test datasets was assessed via the Dice score and intersection over union (IoU), using manual segmentation as the reference standard. Results: The 2D models performed better than the 3D models. Overall, MedSAM achieved the highest Dice and IoU scores on both the internal [mean Dice (SD) 0.776 (0.106) and mean IoU (SD) 0.653 (0.133)] and external [mean Dice (SD) 0.763 (0.098) and mean IoU (SD) 0.637 (0.116)] test sets. Among the 3D models, TransUNet showed the best segmentation performance, with mean Dice (SD) and IoU (SD) of 0.479 (0.268) and 0.356 (0.235) in the internal test set and 0.409 (0.339) and 0.317 (0.283) in the external test set. Segmentation performance was not associated with GBC morphology, and there was only a weak correlation between Dice/IoU and GBC lesion size for any of the segmentation models. Conclusion: We trained 2D and 3D GBC segmentation models on a large dataset and validated them on external datasets. MedSAM, a 2D prompt-based foundational model, achieved the best segmentation performance. © The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.
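The segmentation study above scores models with the Dice coefficient and intersection over union (IoU) against manual masks. As a minimal sketch of the standard definitions on binary masks (this is a generic illustration, not the authors' evaluation code):

```python
def dice_and_iou(pred, truth):
    """Dice and IoU for two binary masks given as flat 0/1 integer sequences."""
    inter = sum(p & t for p, t in zip(pred, truth))   # overlapping foreground
    p_sum, t_sum = sum(pred), sum(truth)              # foreground pixel counts
    union = p_sum + t_sum - inter                     # combined foreground
    # Convention: two empty masks count as a perfect match
    dice = 2 * inter / (p_sum + t_sum) if (p_sum + t_sum) else 1.0
    iou = inter / union if union else 1.0
    return dice, iou
```

The two metrics are monotonically related (Dice = 2·IoU / (1 + IoU)), so a model that ranks best by Dice, as MedSAM does here, also ranks best by IoU.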
An Initiative by BHU – Central Library
Powered by DSpace