Browsing by Author "B. Samuel Naik"
Now showing 1 - 3 of 3
Article: Efficacy of botanicals against red pumpkin beetle and their impact on pollinator diversity in pumpkin (Cucurbita pepo L.) cultivation (Indian Academy of Horticultural Sciences, 2024)
Authors: S.R. Umesh; B. Samuel Naik; V.C. Karthik; K.B. Chethan Kumar; Veershetty; Basavaraj N. Hadimani; Anil Kumar Vyas; R. Gangaraj
The study aimed to assess insect pollinator diversity in pumpkin (Cucurbita pepo L.) and to evaluate the bio-efficacy of botanicals for pest management while considering their impact on pollinators. Seven pollinator species were identified, with Hymenopterans, particularly honey bees, dominating. Apis dorsata (67.40%) was the most common pollinator, followed by Apis florea (14.28%); other pollinators included species from the Halictidae, Sphecidae, Syrphidae, and Pieridae families. Among the botanicals tested, neem seed kernel extract (5%) and neem leaf extract (10%) effectively managed the red pumpkin beetle (Raphidopalpa foveicollis) with minimal harm to pollinators, and pollinator activity increased slightly three to five days after application. The study highlights the effectiveness of neem-based botanicals in reducing pest populations while conserving pollinators, underscoring the value of eco-friendly pest control in promoting sustainable pumpkin farming and improving both yield and crop quality. © 2024, Indian Academy of Horticultural Sciences. All rights reserved.

Article: Meta-transformer: leveraging metaheuristic algorithms for agricultural commodity price forecasting (Springer Nature, 2025)
Authors: G. H. Harish Nayak; Md Wasi Alam; B. Samuel Naik; B. S. Varshini; Gutta Sai Avinash; Rajeev Ranjan Kumar; Mrinmoy Ray; Kamalesh Narain Singh
Predicting agricultural commodity prices is inherently complex due to factors such as perishability, seasonality, and market volatility. To address these challenges, this study proposes a novel framework that combines Transformer models with Metaheuristic Algorithms (MHAs), including the Grey Wolf Optimizer (GWO), Whale Optimization Algorithm (WOA), and Particle Swarm Optimization (PSO), to enhance agricultural price forecasting accuracy. While Transformer architectures are known for their powerful time series modeling capabilities, their performance is highly sensitive to hyperparameter selection, especially in contexts with limited or noisy data. The novelty of this research lies in the automated and adaptive tuning of these hyperparameters using MHAs, enabling improved generalization, faster convergence, and enhanced predictive accuracy. By integrating MHAs, known for their fast convergence and global search efficiency, the proposed models (Transformer-PSO, Transformer-GWO, and Transformer-WOA) offer enhanced training efficiency and improved forecasting accuracy. This hybrid modeling approach is applied to predict weekly potato prices in key Northern Indian markets. Results demonstrate that the Transformer-GWO and Transformer-WOA models outperform conventional models such as GARCH by 70–90% on standard evaluation metrics such as Root Mean Squared Error (RMSE) and Mean Absolute Error (MAE). By bridging state-of-the-art deep learning architectures with robust optimization strategies, this study contributes a scalable and interpretable solution for agricultural price forecasting. The findings have significant implications for policymakers, market regulators, and farmers by supporting timely interventions, improving market transparency, and enabling data-driven decision-making. © The Author(s) 2025.

Article: Transformer-based deep learning architecture for time series forecasting (Elsevier B.V., 2024)
Authors: G.H. Harish Nayak; Md Wasi Alam; G. Avinash; Rajeev Ranjan Kumar; Mrinmoy Ray; Samir Barman; K.N. Singh; B. Samuel Naik; Nurnabi Meherul Alam; Prasenjit Pal; Santosha Rathod; Jaiprakash Bisen
Time series forecasting is challenging because the data are often non-stationary, nonlinear, and chaotic. Traditional deep learning models such as RNNs, LSTMs, and GRUs process data sequentially and are therefore inefficient for long sequences. To overcome these limitations, the authors propose a transformer-based deep learning architecture that uses an attention mechanism for parallel processing, improving prediction accuracy and efficiency. The paper also presents user-friendly code implementing the proposed architecture. © 2024
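The Meta-transformer abstract describes tuning transformer hyperparameters with metaheuristics such as PSO. As a rough illustration of that idea (a hypothetical sketch, not the authors' published code), the snippet below runs a particle swarm over two assumed hyperparameters, learning rate and number of attention heads, against a toy stand-in for validation loss:

```python
import random

def validation_loss(lr, heads):
    # Toy proxy for a real validation run: pretend the best setting
    # is lr = 0.001 with 4 attention heads. (Assumed, for illustration.)
    return (lr - 0.001) ** 2 + (heads - 4) ** 2

def pso(n_particles=10, n_iters=50, seed=0):
    rng = random.Random(seed)
    # Particle positions: (learning rate, head count treated as continuous)
    pos = [[rng.uniform(1e-4, 1e-2), rng.uniform(1, 8)] for _ in range(n_particles)]
    vel = [[0.0, 0.0] for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                      # per-particle best positions
    pbest_val = [validation_loss(*p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]     # global best so far
    w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, and social coefficients
    for _ in range(n_iters):
        for i in range(n_particles):
            for d in range(2):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = validation_loss(*pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the actual framework each loss evaluation would train or validate a transformer with those hyperparameters; the swarm update itself is unchanged.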
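The third abstract centers on the attention mechanism that lets a transformer process all time steps in parallel rather than sequentially. A minimal self-contained sketch of scaled dot-product attention (an assumed illustration, not the paper's published code):

```python
import math

def softmax(xs):
    # Numerically stable softmax over a list of scores.
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(Q, K, V):
    """Q, K, V: lists of d-dimensional vectors, one per time step.

    Every query attends to every key at once, which is what allows
    parallel processing of the whole sequence.
    """
    d = len(K[0])
    out = []
    for q in Q:
        # Similarity of this query to every key, scaled by sqrt(d).
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in K]
        weights = softmax(scores)
        # Output is the attention-weighted average of the value vectors.
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out
```

With Q = K = V, each time step ends up weighting itself most heavily, which is the self-attention pattern the architecture builds on.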
