Coding Theory
In this final chapter, we will see how entropy connects to compression and what that tells us about large language models.
Entropy is the unbeatable lower bound for lossless compression, if we treat letters independently.
Code intro
Say we have a long DNA string built from the letters A, C, G, T. We want to store it using as little disk space as possible. Here's a simple plan: we assign a binary code name to each letter and encode the string using that code.
For example, we can use A → 00, C → 01, G → 10, T → 11. The string gets stored using just 2 bits per letter. Done!
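Here's a minimal sketch of this fixed-length scheme in Python (the example string is made up):

```python
# Fixed-length code: every letter gets exactly 2 bits.
CODE = {"A": "00", "C": "01", "G": "10", "T": "11"}
DECODE = {bits: letter for letter, bits in CODE.items()}

def encode(dna: str) -> str:
    return "".join(CODE[letter] for letter in dna)

def decode(bits: str) -> str:
    # Fixed-length codes are trivial to decode: read 2 bits at a time.
    return "".join(DECODE[bits[i:i + 2]] for i in range(0, len(bits), 2))

message = "ACGTAACG"                 # hypothetical example string
encoded = encode(message)            # '0001101100000110'
assert decode(encoded) == message
print(len(encoded) / len(message))   # 2.0 bits per letter
```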
Can we do better? Sometimes, yes! Here's a riddle: In the following widget, try to build a code that encodes the following string with just 1.75 bits per letter on average.
While playing, notice that whenever you use a short code name for a letter, e.g. 0, no other letter can get a code name starting with that same code name 0. The reason is that otherwise, the resulting code sometimes wouldn't be decodable. E.g., if you use the code names 0 and 00, the message "00" has two meanings.
[Widget: binary code builder (binary tree, 4 layers)]
SPOILER
Here's the solution: A → 0, C → 10, G → 110, T → 111. Since the letter frequencies in the text are 1/2 for A, 1/4 for C, and 1/8 each for G and T, this encoding only uses
$$\frac12 \cdot 1 + \frac14 \cdot 2 + \frac18 \cdot 3 + \frac18 \cdot 3 = 1.75$$
bits per letter.
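Here's a small sketch of this variable-length code in Python. Which letter gets which frequency is my assumption; only the shape of the solution (frequencies 1/2, 1/4, 1/8, 1/8) is forced by the 1.75-bit answer.

```python
FREQ = {"A": 1/2, "C": 1/4, "G": 1/8, "T": 1/8}
CODE = {"A": "0", "C": "10", "G": "110", "T": "111"}  # prefix-free: no code name is a prefix of another

def encode(dna: str) -> str:
    return "".join(CODE[letter] for letter in dna)

def decode(bits: str) -> str:
    inverse = {code: letter for letter, code in CODE.items()}
    decoded, current = [], ""
    for bit in bits:
        current += bit
        if current in inverse:       # prefix-freeness makes this step unambiguous
            decoded.append(inverse[current])
            current = ""
    return "".join(decoded)

avg_bits = sum(FREQ[letter] * len(code) for letter, code in CODE.items())
print(avg_bits)                       # 1.75
assert decode(encode("ACGTA")) == "ACGTA"
```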
OK, using shorter codes for more frequent letters makes sense, but what's the best way to do this? Coding theory has the answer. Roughly speaking, it tells us to try to give a letter with frequency $p$ a code name of length $\log_2(1/p)$.
For example, looking at the computation above, you can rewrite the left-hand side as:
$$\frac12 \cdot \log_2 2 + \frac14 \cdot \log_2 4 + \frac18 \cdot \log_2 8 + \frac18 \cdot \log_2 8.$$
Every letter with frequency $p$ got a code name of length exactly $\log_2(1/p)$! Notice that for general strings of length $n$, the length of codes satisfying the rule is this:
$$n \cdot \sum_i p_i \log_2 \frac{1}{p_i} = n \cdot H(p).$$
That is, if you manage to construct a code with code names of length $\log_2(1/p_i)$, you spend $H(p)$ bits per letter on average.
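Here's a quick sanity check of this rule in Python, using the frequencies from the riddle above:

```python
import math

# The log2(1/p) rule: a letter with frequency p gets a code name of length log2(1/p).
p = {"A": 1/2, "C": 1/4, "G": 1/8, "T": 1/8}                      # frequencies from the riddle
length = {letter: math.log2(1 / q) for letter, q in p.items()}    # ideal (possibly fractional) lengths

avg_bits_per_letter = sum(p[letter] * length[letter] for letter in p)
entropy = sum(q * math.log2(1 / q) for q in p.values())

print(avg_bits_per_letter, entropy)   # both 1.75: the rule achieves the entropy
```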
If we implement our $\log_2(1/p)$-rule-of-thumb with the most direct algorithm possible, we get what's known as Shannon's code. This code sorts the letter frequencies down from the largest one, and assigns to each letter of frequency $p$ a code of length $\lceil \log_2(1/p) \rceil$ (we round up because we can't have a code name with 1.5 bits). Here it is for the English alphabet:
You can notice that the number of bits per letter (also called the rate) is a bit north of the entropy of the underlying distribution. This is because of the rounding $\lceil \cdot \rceil$. For example, if there is a letter with frequency $35\%$, we would like to give it a code name of length $\log_2(1/0.35) \approx 1.5$. Too bad that code names have to have an integer number of bits, so Shannon's code assigns it a code name of length $2$. As a general rule, Shannon's code can use up to 1 bit per letter more than the entropy.
Unfortunately, there are situations where Shannon's code really is almost 1 bit worse than the entropy. Take a string that uses just two characters, say A and B, but the frequency of B is close to zero. Then, the entropy is close to zero, but Shannon's code can't do better than 1 bit per letter.
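Here's a small sketch of Shannon's length assignment in Python. The two distributions are made-up examples, the second one being the near-worst case just described:

```python
import math

def shannon_rate(freqs):
    """Bits per letter when a letter of frequency p gets a ceil(log2(1/p))-bit code name."""
    return sum(p * math.ceil(math.log2(1 / p)) for p in freqs)

def entropy(freqs):
    return sum(p * math.log2(1 / p) for p in freqs)

# A hypothetical distribution where the rounding costs a bit...
p1 = [0.4, 0.3, 0.2, 0.1]
print(entropy(p1), shannon_rate(p1))    # ~1.85 vs 2.4 bits per letter

# ...and the near-worst case: two letters, one of them very rare.
p2 = [0.99, 0.01]
print(entropy(p2), shannon_rate(p2))    # ~0.08 vs ~1.06 bits per letter
```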
The source coding theorem
The source coding theorem is the most important theorem in coding theory. It basically says that on one hand, there are codes that get very close to the entropy bound. On the other hand, you can't beat that bound.
Here's the setup of the theorem more precisely. We model texts like this: there is a source that has a distribution $p$ over letters. This source keeps sampling letters independently and sends them to us. This goes on for a long time, say $n$ steps; our goal is to save some binary string to memory so that later on, we can recover from that string which letters the source sent.
This setup, with the source sampling independent letters, models that we only want to think about the problem of giving code names to letters based on their frequencies. We are not trying to use correlations like "'q' is usually followed by 'u'" to get better compression.
The source coding theorem says that, first, there's a way to store the emitted string using close to $H(p)$ bits per letter. We have come close to that bound above: Shannon's code is only 1 bit worse than the entropy. But with more complicated codes, we can even get rid of this 1-bit slack (at the expense that the code is no longer as simple as mapping single letters to code names).
The second part of the theorem says that we can't beat the rate $H(p)$. Details for that are in the expand box.
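Here's a tiny simulation of this setup, as a sketch: the source emits $n$ i.i.d. letters from $p$, we encode them with the prefix code from the riddle (which happens to be exactly matched to this $p$), and the rate approaches $H(p)$.

```python
import math, random

p = {"A": 1/2, "C": 1/4, "G": 1/8, "T": 1/8}
code = {"A": "0", "C": "10", "G": "110", "T": "111"}

n = 100_000
letters = random.choices(list(p), weights=list(p.values()), k=n)
total_bits = sum(len(code[letter]) for letter in letters)

entropy = sum(q * math.log2(1 / q) for q in p.values())
print(total_bits / n, "bits per letter vs entropy", entropy)   # ~1.75 vs 1.75
```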
Prediction = Compression
The fact that good codes can achieve a rate close to the entropy gives us a lot of intuition!
Let's go through the terms in our dictionary and see how we can interpret them using the coding-theory language.
We defined the surprisal $\log_2(1/p)$ as a way of measuring how surprised we are by an outcome that happens with probability $p$. But equivalently, it's also how many bits we need to store that outcome in memory, using the optimal code.
We introduced entropy as the "average surprisal" of a distribution. But equivalently, it's the rate of how many bits on average we need to store outcomes sampled from a distribution $p$: we can interpret the formula $H(p) = \sum_i p_i \log_2(1/p_i)$ as "the $i$-th symbol comes with frequency $p_i$, and whenever it comes, we need $\log_2(1/p_i)$ bits to store it".
The cross-entropy is very similar; it's how many bits we need when the data comes from $p$ but we store it using the code that's optimized for a different distribution $q$. Finally, KL divergence (relative entropy) is how much worse our mismatched code is compared to the optimal one.
Here it is again, more concisely in a table:
| Concept | Formula | Bayesian detective | Compression |
|---|---|---|---|
| Surprisal | $\log_2(1/p)$ | Surprise of an outcome | Bits to store a letter |
| Entropy | $H(p) = \sum_i p_i \log_2(1/p_i)$ | Expected surprisal | Average bits per letter |
| Cross-entropy | $H(p, q) = \sum_i p_i \log_2(1/q_i)$ | Expected surprisal of model $q$ | Average bits per letter for a code optimized for $q$ |
| KL divergence | $D(p \,\Vert\, q) = \sum_i p_i \log_2(p_i/q_i)$ | Additional surprise from using $q$ instead of $p$ | Additional storage from using $q$ instead of $p$ |
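Here are the four quantities as a few lines of Python; the distributions $p$ and $q$ are made-up examples:

```python
import math

def surprisal(p):                     # bits to store one outcome of probability p
    return math.log2(1 / p)

def entropy(p):                       # average bits per letter with the optimal code
    return sum(pi * surprisal(pi) for pi in p)

def cross_entropy(p, q):              # data from p, code optimized for q
    return sum(pi * surprisal(qi) for pi, qi in zip(p, q))

def kl_divergence(p, q):              # extra bits paid for using the mismatched code
    return cross_entropy(p, q) - entropy(p)

p = [0.5, 0.25, 0.125, 0.125]         # hypothetical true distribution
q = [0.25, 0.25, 0.25, 0.25]          # hypothetical model / code distribution
print(entropy(p), cross_entropy(p, q), kl_divergence(p, q))   # 1.75, 2.0, 0.25
```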
LLM compression
So far, all our discussion has been about encoding each letter independently of its context. The real fun begins when we want to compress a given text as much as possible, period. Now we definitely want to use facts like "'q' is usually followed by 'u'" as much as possible.
I want to explain a hypothetical compression algorithm based on an LLM.
First, a technicality we have to go through quickly. Although we discussed LLMs as predicting the next letter, that's a simplification. Before training any LLM, we split all texts into tokens, and the input and output of LLMs consist of tokens, not letters. There is no fixed definition of what a token is, but for example GPT-2 works with around 50,000 different tokens. That's enough to cover all frequent syllables, medium-length words, common names, and special symbols.
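If you want to see tokens for yourself, here's a quick sketch, assuming the Hugging Face transformers library is installed (it downloads the GPT-2 tokenizer files on first use):

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")
print(tokenizer.vocab_size)                                 # 50257 tokens

tokens = tokenizer.tokenize("Entropy is the unbeatable lower bound.")
print(tokens)                                               # the text split into sub-word tokens
```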
LLMs can be used to compress text. Here's how: we will assume that both the compression and the decompression algorithm have access to the same LLM, say GPT-2. The compression algorithm runs GPT-2 on the input text; for each position $i$ in the text, GPT-2 generates a probability distribution over the $i$-th token, based on the tokens $1, \dots, i-1$. We can use a code like Shannon's code to turn this distribution into a code assignment. For example, if GPT-2 predicts that some token has a 50% probability of coming next, Shannon's code would assign it a code name of length $\log_2(1/0.5) = 1$. We encode the actual next token using this code.
That is, our compression algorithm uses a different code for each token, each code corresponding to the distribution over that token predicted by GPT-2. The decoding algorithm works analogously: for each token, it uses GPT-2 and Shannon's code to reconstruct the code used to read the next token's code name.
If we use Shannon's code, the overall length of the encoded text is $\sum_i \lceil \log_2(1/q_i) \rceil$, where $q_i$ is the probability that GPT-2 predicted for the token that actually came at position $i$. In the next widget, you can see how it works.
[Widget: GPT-2 compression visualization]
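Here's a sketch of the bookkeeping in Python. The function next_token_probs is a hypothetical stand-in for running GPT-2 on the prefix; the sketch only computes how long the encoding would be, it doesn't produce the actual bit string.

```python
import math

def next_token_probs(prefix_tokens):
    """Hypothetical stand-in for running GPT-2 on the prefix: returns a dict
    mapping every possible next token to its predicted probability."""
    raise NotImplementedError

def compressed_length_bits(tokens):
    # Encode the i-th token with a Shannon code built from the model's
    # prediction given tokens 1..i-1; its code name has ceil(log2(1/q)) bits,
    # where q is the probability the model gave to the token that actually came.
    total = 0
    for i, token in enumerate(tokens):
        q = next_token_probs(tokens[:i])[token]
        total += math.ceil(math.log2(1 / q))
    return total
```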
Our compression algorithm isn't very practical: it's very slow. More importantly, both parties also have to have GPT-2 stored on their computer, so we are really getting compression only if the text is much larger than the size of GPT-2, which is measured in gigabytes.
But here's the punchline. Using more complicated codes, we can get rid of the annoying rounding $\lceil \cdot \rceil$ and encode the input text with length very close to
$$\sum_i \log_2 \frac{1}{q_i}.$$
This is exactly the cross-entropy loss used to train neural nets, as we discussed earlier. We are now getting an equivalent definition of what neural nets are being trained for! LLMs are trained to compress the internet as much as possible!
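One practical note: training frameworks usually report the cross-entropy loss in nats (natural logarithm), while code lengths are in bits, so the two differ by a factor of $\ln 2$. A small sketch with made-up numbers:

```python
import math

# The training objective and the compressed length are the same quantity in
# different units: loss in nats per token vs code length in bits.
def compressed_bits(avg_loss_nats_per_token, num_tokens):
    bits_per_token = avg_loss_nats_per_token / math.log(2)   # nats -> bits
    return bits_per_token * num_tokens

# Hypothetical numbers: a model with loss 3.0 nats/token on a 1000-token text.
print(compressed_bits(3.0, 1000))    # ~4328 bits, i.e. ~4.33 bits per token
```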
I think this is a powerful intuition that can help us understand where the knowledge / intelligence of LLMs comes from. If you want to compress text efficiently, you have to learn the patterns in it, so that you can recreate the same text just by storing a few hints about what it was about. To be good at compression, you have to learn that 'q' is usually followed by 'u', which topics tend to come with which vocabulary, and what kinds of talking points a virtue-ethicist might use to critique consequentialist ethics.
This is a complementary intuition to our earlier one that being good at the text-prediction game requires knowledge. Indeed, prediction and compression are two sides of the same coin!
What's next?
You are done, congrats! If you liked the mini-course, continue to the Graveyard, which contains bonus chapters that I dropped from the main flow. Check out the resources, too. I will be grateful if you leave feedback here.