Title:
Transformer-based deep learning architecture for time series forecasting

dc.contributor.author: G.H. Harish Nayak
dc.contributor.author: Md Wasi Alam
dc.contributor.author: G. Avinash
dc.contributor.author: Rajeev Ranjan Kumar
dc.contributor.author: Mrinmoy Ray
dc.contributor.author: Samir Barman
dc.contributor.author: K.N. Singh
dc.contributor.author: B. Samuel Naik
dc.contributor.author: Nurnabi Meherul Alam
dc.contributor.author: Prasenjit Pal
dc.contributor.author: Santosha Rathod
dc.contributor.author: Jaiprakash Bisen
dc.date.accessioned: 2026-02-09T04:26:19Z
dc.date.issued: 2024
dc.description.abstract: Time series forecasting is challenging because the data are often non-stationary, nonlinear, and chaotic. Traditional deep learning models such as RNNs, LSTMs, and GRUs process data sequentially and are therefore inefficient for long sequences. To overcome these limitations, we propose a transformer-based deep learning architecture that uses an attention mechanism for parallel processing, improving both prediction accuracy and efficiency. This paper presents user-friendly code implementing the proposed architecture. © 2024
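The attention mechanism named in the abstract can be illustrated with a minimal NumPy sketch of scaled dot-product self-attention, the core operation that lets a transformer weigh all time steps in parallel rather than sequentially. This is a generic illustration, not the paper's released code; the function name and toy dimensions are our own.

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                 # pairwise query-key similarity
    scores -= scores.max(axis=-1, keepdims=True)    # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the key axis
    return weights @ V, weights                     # weighted sum of values

# Toy series: 8 time steps, each embedded in 4 dimensions
rng = np.random.default_rng(0)
seq_len, d_model = 8, 4
X = rng.normal(size=(seq_len, d_model))

# Self-attention: queries, keys, and values all come from the same sequence,
# so every time step attends to every other one in a single matrix product.
out, w = scaled_dot_product_attention(X, X, X)
```

Because the whole sequence is processed in one matrix product, there is no step-by-step recurrence, which is the parallelism advantage over RNN-family models that the abstract refers to.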
dc.identifier.doi: 10.1016/j.simpa.2024.100716
dc.identifier.issn: 2665-9638
dc.identifier.uri: https://doi.org/10.1016/j.simpa.2024.100716
dc.identifier.uri: https://dl.bhu.ac.in/bhuir/handle/123456789/47085
dc.publisher: Elsevier B.V.
dc.subject: Deep learning
dc.subject: Time series forecasting
dc.subject: Transformer
dc.title: Transformer-based deep learning architecture for time series forecasting
dc.type: Publication
dspace.entity.type: Article
