
Bloomberg launches BloombergGPT

Bloomberg has created its own large-scale generative artificial intelligence (AI) model, BloombergGPT.

This model is a large language model (LLM) designed to “know” everything the entire company “knows.” BloombergGPT is trained on a wide range of financial data, including what Bloomberg describes as the largest domain-specific dataset yet constructed: more than 363 billion tokens drawn from Bloomberg’s financial data, news, filings, press releases, web-scraped financial documents, and social media.

The company also trained BloombergGPT on another 345 billion tokens from general-purpose datasets, including hundreds of English-language news sources (excluding articles written by Bloomberg journalists, to maintain factuality and reduce bias) and The Pile, which contains everything from YouTube captions to Project Gutenberg and a complete copy of Wikipedia.

BloombergGPT is intended to improve performance on existing financial NLP tasks such as sentiment analysis, named entity recognition, news classification, and question answering. It can also translate natural language requests into the Bloomberg Query Language, a task closely tied to Bloomberg’s own needs, and it can suggest Bloomberg-style headlines for news stories.
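To make those task types concrete, here is a minimal, purely hypothetical sketch in Python of how such requests might be phrased to an instruction-following financial LLM. BloombergGPT is not publicly available, so the query_llm function below is only a placeholder, and the prompt wording and Bloomberg Query Language example are illustrative assumptions rather than Bloomberg’s actual interface.

```python
# Illustrative only: BloombergGPT has no public API, so query_llm is a
# hypothetical stand-in for whatever interface Bloomberg exposes internally.
# The prompts show the shape of the tasks described in the article, not
# Bloomberg's actual prompt formats.

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a call to a financial LLM."""
    return "<model response would appear here>"


headline = "Apple beats quarterly revenue estimates on strong iPhone sales"

# Sentiment analysis over a financial headline.
sentiment = query_llm(
    "Classify the sentiment of this headline for AAPL investors "
    f"(positive, negative, or neutral): {headline}"
)

# Named entity recognition: extract companies and tickers.
entities = query_llm(
    f"List the companies and stock tickers mentioned here: {headline}"
)

# A natural-language request that a model like BloombergGPT could translate
# into the Bloomberg Query Language (the actual BQL output is omitted,
# since its syntax is Bloomberg-specific).
bql_request = query_llm(
    "Translate into a Bloomberg query: get the last closing price "
    "of the five largest US banks by market cap"
)

for label, result in [("sentiment", sentiment),
                      ("entities", entities),
                      ("query", bql_request)]:
    print(label, "->", result)
```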

Bloomberg describes the new model as a first step in developing and applying this technology for the financial industry, and says it will unlock new opportunities for marshalling the vast quantities of data available on the Bloomberg Terminal to better serve the firm’s customers.

BloombergGPT was trained on a corpus of more than 700 billion tokens, a larger training set than the roughly 500 billion tokens used for OpenAI’s GPT-3 in 2020. The company’s research paper detailing the development of BloombergGPT was written by Bloomberg’s Shijie Wu, Ozan İrsoy, Steven Lu, Vadim Dabravolski, Mark Dredze, Sebastian Gehrmann, Prabhanjan Kambadur, David Rosenberg, and Gideon Mann.

The sources for this piece include an article in NiemanLab.
