Seminar: Catching the Catches: Unlocking Stability in Reservoir Computing

Date
March 21, 2025
Time
12:00 PM EDT - 1:00 PM EDT
Location
KHE 225
Open To
Physics students, faculty members, adjuncts, post-docs, staff

Student: Eric Conenna

Supervisor: Dr. Sean Cornelius

Abstract

We are integrating AI into more aspects of daily life because it excels at predicting things we normally cannot, such as the future (or at least, the future of time-evolving systems). One method used to make these predictions is reservoir computing (RC), a neural-network-based computational framework used to create “digital twins” of real-world complex systems. RC has shown great success in predicting dynamical systems, but it comes with a catch: to learn a system, key information about it must already be known. Compounding the problem, RCs suffer from the same issues as other high-dimensional models, where training on large datasets can result in model instability. We hypothesize that by altering how the model is trained, incentivizing it to be accurate on multiple predictions at once, we can increase stability without sacrificing prediction accuracy. Focusing on a recently introduced variant of RC called “Next Generation Reservoir Computing” (NGRC), we will introduce self-modeling into its training through multi-task learning, promoting accurate results for two separate predictions at the same time: the model will predict both future and past states, helping to ensure consistency and stability. By learning to predict multiple outputs simultaneously, the model can regulate itself more effectively, resulting in a more accurate and stable digital twin. This will provide a faster-learning, more stable, and less computationally intensive way to create digital twins than we currently have, improving the models used to predict important real-world dynamical and chaotic systems.
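
For readers unfamiliar with the approach, the sketch below illustrates the multi-task training idea in NumPy, under common NGRC conventions: polynomial features are built from time-delayed states, and a single linear readout is fit by ridge regression jointly on forward (next-state) and backward (past-state) targets. The feature construction, delay depth k, ridge strength, and function names here are illustrative assumptions, not the speaker's actual implementation.

    import numpy as np

    def ngrc_features(X, k=2):
        """NGRC-style feature map: constant term, k time-delayed states,
        and their pairwise quadratic products.  X has shape (T, d)."""
        T, d = X.shape
        # Align k delayed copies so row i holds [x_t, x_{t-1}, ..., x_{t-k+1}]
        # with current time t = i + k - 1.
        lin = np.hstack([X[k - 1 - j : T - j] for j in range(k)])
        # Quadratic monomials (upper triangle avoids duplicate products)
        r, c = np.triu_indices(lin.shape[1])
        quad = lin[:, r] * lin[:, c]
        return np.hstack([np.ones((lin.shape[0], 1)), lin, quad])

    def fit_multitask(X, k=2, ridge=1e-6):
        """Fit one shared linear readout to predict BOTH the next state
        x_{t+1} (forward task) and the past state x_{t-k} (backward task)."""
        T = X.shape[0]
        Phi = ngrc_features(X, k)[1:-1]   # feature rows for t = k .. T-2
        Y_fwd = X[k + 1 :]                # forward targets x_{t+1}
        Y_bwd = X[: T - 1 - k]            # backward targets x_{t-k}
        Y = np.hstack([Y_fwd, Y_bwd])     # joint multi-task target matrix
        # Closed-form ridge regression: W = (Phi^T Phi + r I)^{-1} Phi^T Y
        A = Phi.T @ Phi + ridge * np.eye(Phi.shape[1])
        return np.linalg.solve(A, Phi.T @ Y)

    # Illustrative usage on a simple periodic trajectory
    t = np.linspace(0, 20, 2000)
    X = np.column_stack([np.sin(t), np.cos(t)])
    W = fit_multitask(X, k=2)             # first d columns: forward readout

The coupling between the two tasks comes from sharing one feature matrix: the backward-prediction columns of the target cannot be fit without weights that also remain consistent with the system's past, which is one plausible way a multi-task readout could act as the stabilizing self-model the abstract describes.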