AI systems are learning to predict time series data, that is, measurements of the same variables taken at successive points in time, used to spot trends. This could lead to new ways to forecast ATM withdrawals, predict medical outcomes, and track other important changes over time.
A study posted to the arXiv preprint server shows that large language models (LLMs) can forecast time series data without any fine-tuning. In other words, an LLM can predict future values in a time series dataset without ever having been trained on that specific dataset.
The study’s authors, Nate Gruver of New York University and colleagues at NYU and Carnegie Mellon University, prompted GPT-3, an LLM developed by OpenAI, to predict the next value in a time series much as it predicts the next word in a sentence.
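The key step in treating forecasting as next-word prediction is rendering the numbers as text the model can complete. The sketch below is a simplified illustration of that idea, not the authors’ exact pipeline: each value is written in fixed-point form with its digits space-separated (so GPT-style tokenizers see one token per digit) and timesteps are comma-separated; the function names are hypothetical, and the paper additionally rescales values before encoding.

```python
def encode_series(values, decimals=1):
    """Render a numeric series as a digit string an LLM can complete.

    Each value becomes fixed-point digits with the decimal point dropped;
    digits are space-separated, timesteps comma-separated.
    """
    chunks = []
    for v in values:
        digits = f"{v:.{decimals}f}".replace(".", "")
        chunks.append(" ".join(digits))
    return " , ".join(chunks)

def decode_series(text, decimals=1):
    """Invert encode_series: parse digit groups back into floats."""
    values = []
    for chunk in text.split(","):
        digits = chunk.replace(" ", "")
        values.append(int(digits) / 10 ** decimals)
    return values

# A history like [12.3, 45.6, 78.9] becomes the prompt "1 2 3 , 4 5 6 , 7 8 9";
# the LLM's continuation is then decoded back into numbers as the forecast.
prompt = encode_series([12.3, 45.6, 78.9])
```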
They found that GPT-3 achieved higher likelihoods than dedicated time series models on a variety of forecasting tasks, including predicting ATM withdrawals and forecasting medical outcomes.
LLMs can also forecast multiple time series simultaneously: because they are trained on massive datasets of text and code, they can learn relationships between different variables. In addition, they can forecast over longer horizons than traditional time series models, since they are able to capture long-term patterns in the data.
The sources for this piece include an article in ZDNet.