Browsing by Author "Bablu Kumar"
Now showing 1 - 8 of 8
Article: A multivariate transformer-based monitor-analyze-plan-execute (MAPE) autoscaling framework for dynamic resource allocation in cloud environment (Springer, 2025)
Bablu Kumar; Anshul Verma; Pradeepika Verma
The rapid advancement of cloud technology has heightened the demand for real-time data processing systems that provide accuracy, flexibility, and scalability. Autoscaling manages cloud resources automatically in real time, employing either reactive or proactive approaches. Reactive autoscaling adjusts resources based on predefined thresholds but can be inefficient under fluctuating workloads. In contrast, proactive autoscaling predicts future workloads, enabling preemptive resource adjustments to optimize performance. This study proposes an autoscaling approach based on the monitor-analyze-plan-execute (MAPE) framework, which emphasizes proactive strategies by integrating feature selection techniques with a multivariate transformer (MV-Transformer) approach. The MV-Transformer excels at capturing long-term dependencies and complex interactions among multiple variables while using less memory. The framework enhances resource provisioning, as evidenced by the lowest under-provisioning value of 0.2892240 and time-under-provisioning duration of 10.6676060, indicating superior performance. Additionally, the MAPE autoscaling framework achieves an elastic speedup of 2.9818, compared to 1.3200 for Bi-LSTM, 1.0230 for LSTM, and 1.0000 for the reactive baseline without autoscaling. Evaluating elastic speedup and resource provisioning metrics against both the reactive baseline and other proactive autoscaling approaches shows that the proactive MV-Transformer delivers significant improvements in resource management. For real-world implementation, Docker Desktop and Kubernetes were used to dynamically scale VMs based on workload, orchestrated by the MAPE autoscaling framework. This approach also helps in handling highly dynamic workloads and improves overall efficiency in cloud computing, particularly in scaling and de-scaling. Our implementation code is available at the following GitHub link: https://github.com/BABLU-KUMAR/MV-Transformer-based-MAPE-Autoscaling-Framework/tree/main.
© The Author(s), under exclusive licence to Springer-Verlag GmbH Austria, part of Springer Nature 2025.

Review: An Extensive Investigation on Lyapunov Optimization-based Task Offloading Techniques in Multi-access Edge Computing (Springer, 2025)
Vandna Rani Verma; Pushkar; Bablu Kumar; Anshul Verma; Vishnu Sharma; Pankaj Kumar Tripathi
Technological advancements have heightened the demand for real-time applications with minimal energy consumption on resource-constrained devices, which often face storage, computational power, and battery life limitations. Multi-access edge computing mitigates these challenges by offloading data and computational tasks to nearby edge servers, improving task execution efficiency. Despite progress in task-offloading techniques, real-time processing and energy consumption issues remain. Lyapunov optimization offers a promising approach to these challenges by optimizing task allocation and resource management in dynamic environments. This paper provides a comprehensive review of task offloading techniques that use Lyapunov optimization, focusing on energy consumption and latency. It examines these techniques through classification, theoretical frameworks, and mathematical analyses, while also detailing Lyapunov optimization algorithms, workflows, advantages, and metrics. The paper includes an in-depth comparative analysis of Lyapunov-based algorithms in the context of task offloading, highlighting their benefits and challenges. Finally, it identifies emerging research opportunities and suggests future directions based on recent advancements in the field.
© The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd. 2025.

Article: Evaluation of real-time PCR as an alternative for potency testing of Brucella abortus vaccines (Taylor and Francis Ltd., 2021)
Tania Gupta; Mayank Rawat; Bablu Kumar; Rajat Varshney; Soumendu Chakravarti; Salauddin Qureshi
The aim of the research was to evaluate real-time PCR (qPCR) as an alternative method for the quantitative detection of Brucella abortus strain 544 (S544) in the spleens of mice for potency testing of the live B. abortus strain 19 (S19) vaccine. IS711 and eryC gene-based qPCR assays were optimized for calculating copy number. The copy number was further correlated with the live Brucella count in the spleen obtained by the standard plate count (SPC) method. Mice were immunized with S19 and challenged with S544 on the 30th day post-immunization. Spleens were collected on the 15th, 21st, and 30th days post-challenge (DPC) for estimation of S19 and S544 load via SPC as well as qPCR. A noteworthy difference was observed between the immunized and unimmunized groups by both methods at all time points. The maximum correlation between the SPC and qPCR methods was observed at the 15th DPC in both the immunized and unimmunized groups. Repeated experiments at the 15th DPC gave a consistent significant difference between the immunized and unimmunized groups by both methods. Thus, the novel, risk-free qPCR method can be used for indirect, culture-free potency evaluation of the S19 vaccine, precluding the cultivation of zoonotic Brucella organisms from spleen samples.
© 2020 Taylor & Francis Group, LLC.

Article: Optimal Cloudlet Selection in Edge Computing for Resource Allocation (Springer, 2023)
Bablu Kumar; Mohini Singh; Anshul Verma; Pradeepika Verma
Mobile and Edge Computing devices have limited resources to perform computationally intensive jobs, and hence there is a need for task offloading. In Mobile Cloud Computing, cloud servers are placed far from the user devices; as a consequence, many challenges arise, such as security, limited bandwidth, network latency, and storage. In Edge Cloud Computing, by contrast, edge servers are placed near the user devices; however, Edge computing inherits many of the issues of Cloud computing because the huge number of devices also generates a significant load on edge servers. Resource optimization approaches help in achieving optimal Cloudlet selection at the edge servers. When users access edge resources such as CPU, memory, and hard disk, load balancing helps distribute tasks among edge servers and achieve efficient results. The user devices communicate either within a Cloudlet or between Cloudlets using resource sharing, in which one of the main issues is optimal Cloudlet selection. This paper presents an optimal Cloudlet selection algorithm in which, first, an index value for each resource is calculated using parameters such as weight, cluster of Cloudlets, availability, and total resource usage. Thereafter, the resource level and the available resources at this level are calculated for each Cloudlet. Finally, an algorithm is proposed to help find the optimal Cloudlet for the cloud broker. The proposed approach is implemented in CloudSim. The simulation results have shown the efficiency of the proposed approach.
© 2023, The Author(s), under exclusive licence to Springer Nature Singapore Pte Ltd.

Article: Optimizing resource allocation in cloud-native applications through proactive autoscaling with the InformerAutoScale model (Springer, 2025)
Bablu Kumar; Anshul Verma; Pradeepika Verma; Akram Bennour
Cloud-native applications are designed to utilize cloud computing resources efficiently. These applications automatically scale resources by managing containerized copies of files and creating containers, which are handled through pods in Kubernetes.
However, they face challenges due to the dynamic workload associated with automatic scaling and de-scaling in cloud environments. This makes it difficult to obtain accurate monitoring information, particularly with reactive autoscaling. This research presents a proactive autoscaling approach through the proposed InformerAutoScale model, which predicts resource requirements for long sequences in cloud-native applications to enable accurate pod scaling and de-scaling. Experimental results demonstrate that the InformerAutoScale approach effectively reduces resource waste and manages issues such as under- and over-provisioning. The real-world implementation was carried out using Docker Desktop and Kubernetes, with pods allocated and scaled based on application requests. Proactive autoscaling achieved a 90.66% improvement in scaling efficiency compared to reactive methods.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.

Article: Optimizing resource allocation using proactive scaling with predictive models and custom resources (Elsevier Ltd, 2024)
Bablu Kumar; Anshul Verma; Pradeepika Verma
Kubernetes-based containerized applications heavily rely on distributing network workloads among cluster applications, primarily because of frequent resource requests and the limited set of pods and containers. Kubernetes scaling manages many containerized services using either reactive or proactive autoscaling. Reactive autoscaling cannot foresee future workload and thus cannot compete with proactive autoscaling. In addition, reactive autoscaling has several quality-of-service issues, such as high latency, an inability to manage frequent workload fluctuations, and insufficient service resource usage. To address these issues, the custom resource utilizes a predictive artificial intelligence scaling method with Autoregressive Integrated Moving Average (ARIMA), Long Short-Term Memory (LSTM), Bidirectional Long Short-Term Memory (Bi-LSTM), and Transformer models. This custom resource is integrated with the operator reconciliation process and the model control system, utilizing coroutines to asynchronously manage workloads and allocate resources as pods within clusters. To evaluate effectiveness, the NASA-HTTP dataset is utilized to scale containerized application resources and to assess under-provisioning and over-provisioning accuracy in resource management. According to the model control system, the Transformer predictive model ranks among the top min-heap models, exhibiting the lowest performance metric values compared to the other predictive models: its mean squared error is 77.3363, its root mean squared error is 8.7941, and its mean absolute error is 6.5930. Finally, facilitated through Docker-based resource management integrated with Kubernetes, this service significantly manages workload and enhances resource utilization, efficiency, and performance using the proposed model. Incorporating explainable artificial intelligence into the system improves the clarity and comprehensibility of the predictive models' decision-making, especially in model control and operator reconciliation.
© 2024 Elsevier Ltd

Article: Statistical Analysis and Performance Evaluation of a Routing Protocol of Opportunistic Networks Using Design of Experiments Methodology (Springer, 2025)
Mohini Singh; Bablu Kumar; Anshul Verma; Pradeepika Verma
In this research work, a comprehensive statistical analysis is performed on the performance of the Encounter Count and Interaction Time-based (ECIT) routing protocol of Opportunistic Networks (OppNETs), using the Design of Experiments (DoE) methodology.
Full Factorial, Plackett-Burman, and Taguchi designs are utilized to assess the impact of four control factors (time-to-live, node density, range, and message generation interval) on four performance metrics (delivery probability, delay, overhead, and buffer time) using the MINITAB tool. The probability plots support the finding that the performance metrics for overhead and delivery probability follow a uniform distribution. Analysis of Variance (ANOVA) identifies time-to-live and range as statistically significant factors with substantial contributions to performance variations. The regression models provide accurate predictions of the performance metrics, significantly reducing the computational resources required for OppNET simulations. These insights offer valuable guidance for optimizing routing protocols in OppNETs, ultimately enhancing their efficiency and reliability. The findings show that the range factor significantly impacts most performance metrics, such as delivery probability, delay, and buffer time, across all DoE methods. Notably, the regression analysis shows that delivery probability exhibits the better-generalizing model, with R-squared and adjusted R-squared values indicating a high model fit, particularly for the Full Factorial (94.18%, 92.07%), Plackett-Burman (95.76%, 93.34%), and Taguchi (95.5%, 89.5%) methods.
© The Author(s), under exclusive licence to Springer Science+Business Media, LLC, part of Springer Nature 2025.

Conference Paper: Structuring and Text Summarization of Indian Legal Documents (Springer Science and Business Media Deutschland GmbH, 2025)
Pawan Kumar; Bablu Kumar; Pradeepika Verma; Anshul Verma
Reading Indian legal texts is often exhausting. Indian case documents are usually less organized and contain more errors than those from other countries. This study aims to help people quickly understand large legal documents. We created a new dataset of 10,000 judgments from the Supreme Court of India, along with their handwritten summaries. The dataset is cleaned to fix legal abbreviations and punctuation errors and to ensure proper sentence structure. Each judgment is annotated with attributes such as case ID, date of judgment, names of the plaintiff and defendants, the judge who delivered the final verdict, cited acts, citations, the main judgment, and its corresponding headnote. In the results section, we provide statistical analyses of the judgments and their headnotes, offering useful insights for future research. Beyond legal document summarization, potential applications of this dataset include information retrieval, citation analysis, and predicting decisions made by specific judges.
© The Author(s), under exclusive license to Springer Nature Switzerland AG 2025.
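Several of the entries above revolve around MAPE-style proactive autoscaling, in which a predicted future workload (rather than a crossed threshold) drives the pod or VM scaling decision. As a rough illustration only, and not the authors' implementation, the following sketch shows the shape of one monitor-analyze-plan-execute step; the moving-average "forecaster", the capacity numbers, and all function names are invented for this example (the papers use far stronger predictors such as the MV-Transformer and Informer models):

```python
# Minimal sketch of one monitor-analyze-plan-execute (MAPE) autoscaling step.
# All names and numbers are illustrative, not taken from the papers above.

def predict_next_load(history):
    """Toy 'proactive' forecaster: moving average of the last 3 utilization samples."""
    window = history[-3:]
    return sum(window) / len(window)

def plan_replicas(predicted_load, capacity_per_pod, min_pods=1, max_pods=10):
    """Plan phase: choose enough pods to serve the predicted load, within bounds."""
    needed = -(-int(predicted_load) // capacity_per_pod)  # ceiling division
    return max(min_pods, min(max_pods, needed))

def mape_step(history, capacity_per_pod):
    predicted = predict_next_load(history)   # Monitor + Analyze
    return plan_replicas(predicted, capacity_per_pod)  # Plan (Execute would call the cluster API)

# Example: request-rate history in req/s, each pod assumed to serve 100 req/s.
replicas = mape_step([220, 260, 300], capacity_per_pod=100)  # → 3 pods
```

In a real deployment, the Execute phase would apply `replicas` through the cluster's scaling API, and the Monitor phase would feed fresh utilization samples back into `history`, closing the loop.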
