Institutional Repository

Browsing by Author "Samir Barman"

Now showing 1 - 1 of 1
Article
    Transformer-based deep learning architecture for time series forecasting
    (Elsevier B.V., 2024) G.H. Harish Nayak; Md Wasi Alam; G. Avinash; Rajeev Ranjan Kumar; Mrinmoy Ray; Samir Barman; K.N. Singh; B. Samuel Naik; Nurnabi Meherul Alam; Prasenjit Pal; Santosha Rathod; Jaiprakash Bisen
    Time series forecasting faces challenges due to the non-stationarity, nonlinearity, and chaotic nature of the data. Traditional deep learning models such as RNNs, LSTMs, and GRUs process data sequentially and are therefore inefficient for long sequences. To overcome these limitations, we propose a transformer-based deep learning architecture that uses an attention mechanism for parallel processing, improving prediction accuracy and efficiency. This paper presents user-friendly code implementing the proposed architecture. © 2024
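The abstract's central claim is that attention lets a model relate every time step to every other step in one matrix operation, instead of stepping through the sequence as an RNN does. The following NumPy sketch of scaled dot-product self-attention illustrates that idea only; it is not the authors' published code, and all names in it are hypothetical.

```python
import numpy as np

def scaled_dot_product_attention(q, k, v):
    """Scaled dot-product attention: softmax(Q K^T / sqrt(d)) V.

    q, k, v: arrays of shape (seq_len, d_model). Each output step is a
    weighted mix of all input steps, so the whole window is processed
    in parallel rather than step by step as in an RNN/LSTM/GRU.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                 # (seq_len, seq_len)
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (seq_len, d_model)

# Toy time-series window: 8 steps, each embedded in 4 dimensions.
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4))
out = scaled_dot_product_attention(x, x, x)  # self-attention
print(out.shape)  # (8, 4)
```

In a full forecasting transformer this computation is wrapped in multi-head projections, positional encodings, and feed-forward layers; the paper's accompanying code provides the complete architecture.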
An Initiative by BHU – Central Library
Powered by DSpace