
ChatGPT can read 4,096 tokens, LongNet a billion. This could enable Transformer models that process large parts of the web at once.

The sequence length of transformer models plays an important role in training and especially in deployment: Larger sequence lengths allow a larger context window in which, for example, a language model can process and generate more text, or a vision transformer can capture more information in an image.

A major problem with scaling sequence length is that, in the standard Transformer architecture, the required compute grows quadratically with sequence length, so the cost quickly explodes.
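To put the quadratic blow-up in perspective, here is a rough back-of-the-envelope sketch in Python (purely illustrative; the constant of 64 keys per token for the "linear" case is an arbitrary assumption, not a figure from the paper):

```python
def full_attention_pairs(n_tokens: int) -> int:
    """Number of query-key score entries in standard self-attention (N x N)."""
    return n_tokens * n_tokens


def linear_attention_pairs(n_tokens: int, keys_per_token: int = 64) -> int:
    """Hypothetical linearly scaling scheme: a constant number of scores per token."""
    return n_tokens * keys_per_token


for n in (4_096, 32_000, 1_000_000_000):
    print(f"N={n:>13,}  quadratic={full_attention_pairs(n):.3e}  linear≈{linear_attention_pairs(n):.3e}")
```

At a billion tokens, the quadratic term reaches roughly 10^18 score entries, which is why full attention is out of reach at that scale.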

LongNet processes 250,000 times more tokens than ChatGPT

However, larger sequence lengths can be achieved through various optimizations: OpenAI's ChatGPT has a context window of 4,096 tokens, which is about 3,000 words, but there are GPT-3.5-turbo variants with about 8,000 tokens, and the largest GPT-4 model handles about 32,000 tokens. With Claude, Anthropic offers a commercially available model with a context window of about 100,000 tokens.


With LongNet, Microsoft is now demonstrating a method that scales linearly and, according to the team, can reach a billion tokens, 250,000 times the context window of ChatGPT. That's about 750 million words or 2 million pages.

The team achieves this leap through an adapted attention mechanism they call "dilated attention". Here, the attention allocation decreases exponentially as the distance between tokens grows, so that the network looks at relationships between nearby tokens as closely as a standard attention mechanism, but applies coarser attention patterns to tokens that are farther apart.
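The paper's exact construction splits the sequence into segments and mixes several dilation rates; the snippet below is only a minimal, hypothetical sketch of the underlying idea, a causal attention mask that stays dense for nearby tokens and becomes exponentially sparser with distance (function name and parameters are illustrative, not taken from LongNet's code):

```python
import numpy as np


def dilated_causal_mask(n: int, window: int = 4) -> np.ndarray:
    """Toy sparsity pattern in the spirit of dilated attention (not the paper's
    exact algorithm): each token attends densely to its last `window` positions
    and, beyond that, only with a stride that doubles as the distance grows."""
    mask = np.zeros((n, n), dtype=bool)
    for i in range(n):
        for j in range(i + 1):              # causal: token i only sees positions <= i
            dist = i - j
            if dist < window:
                mask[i, j] = True           # fine-grained attention to nearby tokens
            else:
                band = int(np.log2(dist // window + 1))  # distance band 1, 2, 3, ...
                stride = 2 ** band                       # stride 2, 4, 8, ...
                mask[i, j] = (dist % stride == 0)        # coarser the farther away
    return mask


# Each row keeps roughly window * log(n) active keys instead of n,
# which is what prevents the cost from growing quadratically.
print(dilated_causal_mask(16, window=2).astype(int))
```

The design choice is the same trade-off the team describes: full resolution where token relationships matter most, and progressively coarser sampling where they matter less.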

LongNet to enable processing of web-sized datasets

In a test, the team uses LongNet to train a language model with sequence lengths of up to 32,000 tokens and compares it to classical Transformer-based approaches. According to the team, LongNet follows the known scaling laws of classical Transformer models; for example, the model's perplexity decreases as it gets larger.
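Perplexity, the standard language modeling metric reported in such tests, is derived directly from the model's average next-token loss; a quick, purely illustrative sketch of the relationship (the numbers are not results from the LongNet paper):

```python
import math

# Perplexity is the exponential of the average next-token cross-entropy loss,
# so a lower loss means the model is less "surprised" by the text.
def perplexity(mean_cross_entropy_nats: float) -> float:
    return math.exp(mean_cross_entropy_nats)


print(perplexity(3.2))  # loss of 3.2 nats/token -> perplexity ≈ 24.5
print(perplexity(2.9))  # a better (e.g. larger) model, lower loss -> perplexity ≈ 18.2
```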

In the future, LongNet could enable the processing of web-sized datasets, the team said. The large context window also provides a large memory and receptive field for models, which is relevant to their interaction with people or the world. A larger context window also contains more complex causality and reasoning paths that models could exploit in training data, which could lead to better generalizing models. LongNet also makes it possible to explore the limits of in-context learning, the team said, "which has the potential to be a paradigm shift for many-shot learning, as an extremely long context may help the models alleviate catastrophic forgetting."

LongNet is just a proof of concept for now

Whether LongNet can actually deliver on these promises is unclear; the paper lacks comparisons with modern language models such as GPT-4 32k, and truly meaningful metrics such as accuracy or human evaluations. In this respect, LongNet is initially a feasibility study; whether such gigantic sequence lengths bring real advantages must now be shown in follow-up work.


In the future, the team plans to use LongNet for other applications, such as multimodal large language models or genomic data modeling.

Summary
  • Microsoft's LongNet can handle up to a billion tokens, compared to ChatGPT's 4,096 tokens. This could allow Transformer models, for example, to process large portions of the Internet simultaneously.
  • LongNet achieves this through a customized attention mechanism called dilated attention, where the attention between two tokens decreases exponentially as the distance between them grows.
  • While there are promising use cases, such as processing web-scale datasets and exploring the limits of in-context learning, LongNet is currently only a proof of concept. More research is needed to confirm the real benefits.