Research Highlights: LLMs Can Process a Lot More Text Than We Thought

A team of researchers at AI21 Labs, the company behind the generative text AI platforms Human or Not, Wordtune, and Jurassic 2, has identified a new method to overcome a challenge that most Large Language Models (LLMs) grapple with: a limit on how much text they can process before doing so becomes too expensive and impractical.

The findings emerged from a study in which the researchers showed that two simple changes to the attention mechanism enable LLMs to tap into their inherent ability to read multiple pieces of text simultaneously, thereby bypassing the limit altogether. Through extensive testing, the team demonstrated that these models have a built-in capacity for “parallel reading,” which makes processing many texts more efficient and accurate.

Imagine you own a hotel and would like to categorize its reviews according to parameters such as cleanliness, check-in, and amenities. Previously, an LLM would eventually run into problems scanning all the reviews in their entirety and sorting them into multiple categories. But by letting the LLM read the texts in parallel, each in its own segment, the model can categorize existing and future reviews more efficiently and accurately.
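The article does not spell out what the two attention-mechanism changes are. Below is a minimal, illustrative Python (NumPy) sketch of one plausible reading: (1) a block-wise attention mask in which each document attends only to itself while a shared task prompt attends to everything, and (2) reused position ids so every document appears to start at position zero. The function names, the word-count tokenization, and the hotel-review framing are hypothetical stand-ins, not details taken from the study.

```python
import numpy as np

def parallel_window_mask(window_lengths, task_length):
    """Build an attention mask for 'parallel reading' (illustrative assumption).

    Token i may attend to token j where mask[i, j] is True. Each window is a
    causal block that sees only itself; the trailing task tokens see all windows.
    """
    total = sum(window_lengths) + task_length
    mask = np.zeros((total, total), dtype=bool)

    start = 0
    for length in window_lengths:  # one causal block per document
        mask[start:start + length, start:start + length] = np.tril(
            np.ones((length, length), dtype=bool))
        start += length

    # task tokens (e.g. the categorization question) attend to every window
    task_start = start
    mask[task_start:, :task_start] = True
    mask[task_start:, task_start:] = np.tril(
        np.ones((task_length, task_length), dtype=bool))
    return mask

def parallel_position_ids(window_lengths, task_length):
    """Second assumed change: reuse position ids across windows, so each
    document looks like it starts at position 0 and the combined input never
    exceeds the model's trained context length."""
    ids = []
    for length in window_lengths:
        ids.extend(range(length))  # every window restarts at position 0
    longest = max(window_lengths)
    ids.extend(range(longest, longest + task_length))  # task tokens follow the longest window
    return np.array(ids)

if __name__ == "__main__":
    # Hypothetical hotel-review scenario: one review per window, plus a short
    # categorization prompt appended as the shared task suffix.
    reviews = ["Spotless room and lobby.",
               "Check-in took almost an hour.",
               "Great pool, tiny gym."]
    window_lengths = [len(r.split()) for r in reviews]  # word counts stand in for token counts
    task_length = 6  # e.g. "Label each review: cleanliness, check-in, amenities"

    mask = parallel_window_mask(window_lengths, task_length)
    positions = parallel_position_ids(window_lengths, task_length)
    print(mask.astype(int))  # block-diagonal windows; task rows see everything
    print(positions)         # position ids repeat across windows
```

Because each review occupies its own window, adding more reviews grows the attention cost roughly linearly with the number of windows rather than quadratically with the total text length, which is what makes this kind of parallel reading cheaper than one giant prompt.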
