Juggling with randomness
Adapted from lectures 10 and 11 of Ryan O’Donnell’s Fall 2017 graduate complexity theory class at CMU.
In Buying randomness with quantifiers, we saw that in some cases, we can trade randomness for quantifiers: the punchline was that $\mathsf{BPP} \subseteq \Sigma_2 \cap \Pi_2$. In this post, we will juggle the same tricks around to say something about interactive proofs.
An interlude on quantifier notation
Since both tricks we’ve seen allowed us to “play with the quantifiers” in the definitions of complexity classes in specific ways, it would be useful to make this more systematic by encapsulating the definitions of many complexity classes using quantifiers only.
For example, $L \in \mathsf{NP}$ if and only if there is a polynomial-time machine $M$ such that:¹

- if $x \in L$, then $\exists y,\ M(x, y) = 1$;
- if $x \notin L$, then $\forall y,\ M(x, y) = 0$.
Because the quantifiers are doing all the work in these definitions, we will denote each class by the quantifiers it puts in front of $M$, with the convention that the “$x \notin L$” condition uses the dual quantifiers ($\exists$ and $\forall$ swap, $\exists^+$ stays put) applied to the rejecting machine. So $\mathsf{NP} = \exists$, and for example $\mathsf{BPP} = \exists^+$, where $\exists^+ y$ means “for at least $3/4$ of the strings $y$”:

- if $x \in L$, then $\exists^+ y,\ M(x, y) = 1$;
- if $x \notin L$, then $\exists^+ y,\ M(x, y) = 0$.
As such, the reader is encouraged to verify that $\mathsf{coNP} = \forall$, $\Sigma_2 = \exists\forall$, $\Pi_2 = \forall\exists$, $\mathsf{MA} = \exists\,\exists^+$, and $\mathsf{AM} = \exists^+\exists$.
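To make this concrete, here is a tiny brute-force model of the three quantifiers (my own sketch, not from the lectures; I hard-code the $3/4$ threshold for $\exists^+$, but any constant above $1/2$ gives the same classes up to boosting):

```python
import itertools

def strings(n):
    """All bit strings of length n (so only tiny n are feasible)."""
    return [''.join(bits) for bits in itertools.product('01', repeat=n)]

def exists(n, pred):
    """∃y: pred holds for at least one string y of length n."""
    return any(pred(y) for y in strings(n))

def forall(n, pred):
    """∀y: pred holds for every string y of length n."""
    return all(pred(y) for y in strings(n))

def most(n, pred):
    """∃⁺y: pred holds for at least 3/4 of the strings y of length n."""
    return sum(1 for y in strings(n) if pred(y)) >= 3 / 4 * 2 ** n

# Example: ∃⁺y over 8-bit strings, "does y contain a 1?" (255/256 do).
print(most(8, lambda y: '1' in y))   # True
```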
The tricks in our hands
Let’s recall the tools at our disposal: three easy ones, and the two we just proved.
Weaken:
This one is purely logical: we can always turn a $\forall$ into an $\exists^+$, and an $\exists^+$ into an $\exists$: what holds for all strings holds for most of them, and what holds for most strings holds for at least one.
Specialize:
Another purely logical one: we can swap an $\exists\forall$ into a $\forall\exists$: if some $y$ works against every $z$, then for every $z$ there is a $y$ that works (namely, that same $y$).

In fact, this principle also extends well to $\exists^+$:²

- you can swap an $\exists\,\exists^+$ into an $\exists^+\exists$, because “if a song is liked by most people, then most people have a song that they like”;
- you can swap an $\exists^+\forall$ into a $\forall\,\exists^+$, because “if most people like all songs, then all songs are liked by most people”.
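Both swaps hold for any relation whatsoever, so they are easy to sanity-check by brute force (a toy script of mine, with a random “likes” relation):

```python
import random

random.seed(1)
people, songs = range(30), range(40)
likes = {(p, s): random.random() < 0.9 for p in people for s in songs}

def most(xs, pred):
    """∃⁺x in xs: pred(x), with the 3/4 threshold."""
    xs = list(xs)
    return sum(map(pred, xs)) >= 3 / 4 * len(xs)

# ∃ song ∃⁺ person (likes it)  ⟹  ∃⁺ person ∃ song (they like)
if any(most(people, lambda p: likes[p, s]) for s in songs):
    assert most(people, lambda p: any(likes[p, s] for s in songs))

# ∃⁺ person ∀ song (likes it)  ⟹  ∀ song ∃⁺ person (likes it)
if most(people, lambda p: all(likes[p, s] for s in songs)):
    assert all(most(people, lambda p: likes[p, s]) for s in songs)
```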
Combine:
If we have the same quantifier twice, then we can reduce it to a single quantifier. For example, suppose we were to define a dumb class $\exists\exists$:

- if $x \in L$, then $\exists y\ \exists z,\ M(x, y, z) = 1$;
- if $x \notin L$, then $\forall y\ \forall z,\ M(x, y, z) = 0$.

Then we could just combine strings $y$ and $z$ into a single string $w = (y, z)$ that a machine $M'$ splits back apart:

- if $x \in L$, then $\exists w,\ M'(x, w) = 1$;
- if $x \notin L$, then $\forall w,\ M'(x, w) = 0$.
And this is now functionally equivalent to the definition of $\mathsf{NP}$: in other words, $\exists\exists = \exists$.
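In the brute-force model from earlier, “combine” is just the observation that two nested searches are one search over the concatenated string (again my own toy check):

```python
import itertools

def strings(n):
    return [''.join(bits) for bits in itertools.product('01', repeat=n)]

# An arbitrary predicate on a pair of 4-bit strings.
pred = lambda y, z: y.count('1') + z.count('1') == 5

# ∃y ∃z, pred(y, z) ...
nested = any(any(pred(y, z) for z in strings(4)) for y in strings(4))
# ... is the same as ∃w, pred(w[:4], w[4:]) over 8-bit strings w.
combined = any(pred(w[:4], w[4:]) for w in strings(8))
assert nested == combined
```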
Union bound:
The key of the proof of $\mathsf{BPP} \subseteq \Sigma_2$ was the following: once the error of an $\exists^+ z$ has been boosted well below $2^{-|y|}$, we can swap a $\forall y\ \exists^+ z$ into an $\exists^+ z\ \forall y$.
In words: “if there’s more songs than people, and everyone likes all but a handful of songs, then most songs are liked by literally everyone”.
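In symbols (a quick sketch, writing $\delta$ for the boosted error of the inner $\exists^+$): if every $y$ has $\Pr_z[\neg \varphi(y, z)] \le \delta$, then

$$\Pr_z\big[\exists y,\ \neg \varphi(y, z)\big] \;\le\; \sum_y \Pr_z\big[\neg \varphi(y, z)\big] \;\le\; 2^{|y|}\,\delta,$$

so as soon as $\delta \le 2^{-|y|}/4$, at least $3/4$ of the $z$’s work for every $y$ simultaneously.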
Squint:
In the proof that $\mathsf{BPP} \subseteq \Sigma_2$, the final step was to “squint” at the predicate, replacing $\varphi(y)$ by the blurrier $\Phi(\vec z, y) = \bigvee_i \varphi(y \oplus z_i)$ for a small number of offsets $\vec z = (z_1, \ldots, z_k)$. Once the error of the $\exists^+$ is boosted:

- $\exists^+ y,\ \varphi(y)$ implies $\exists \vec z\ \forall y,\ \Phi(\vec z, y)$;
- $\exists^+ y,\ \neg\varphi(y)$ implies $\forall \vec z\ \exists^+ y,\ \neg\Phi(\vec z, y)$.

So⁴ we can transform an $\exists^+$ into an $\exists\forall$.
In words: “an almost full jar will typically look full if you squint, but an almost empty jar will always look mostly empty no matter how you squint”.
Also, by taking the “$\wedge$” of the shifted copies instead of the “$\vee$”, we can symmetrically transform an $\exists^+$ into a $\forall\exists$.
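The jar metaphor is easy to check numerically (a toy experiment of mine on $10$-bit strings with $k = 10$ offsets: a $99\%$-full jar gets fully covered by random shifts, while a $1\%$-full jar can never cover more than $k \cdot 1\% = 10\%$ of the space):

```python
import random

m, k = 10, 10                     # 10-bit strings, 10 offsets
universe = list(range(2 ** m))    # identify {0,1}^m with integers, ⊕ = XOR

def squinted_fraction(S, offsets):
    """Fraction of y landing in some shifted copy of S."""
    return sum(any(y ^ z in S for z in offsets) for y in universe) / 2 ** m

random.seed(0)
full_jar  = set(random.sample(universe, int(0.99 * 2 ** m)))
empty_jar = set(random.sample(universe, int(0.01 * 2 ** m)))
offsets   = [random.randrange(2 ** m) for _ in range(k)]

print(squinted_fraction(full_jar, offsets))   # 1.0: looks completely full
print(squinted_fraction(empty_jar, offsets))  # ≤ 0.1: still looks mostly empty
```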
A slurry of corollaries
We’ve done all the hard thinking work, now we get to play around with quantifiers!
An alternative characterization of $\mathsf{BPP}$

The proof that $\mathsf{BPP} \subseteq \Sigma_2$ gives a bit more than advertised. As it turns out, the squinted predicate has one-sided error in a strong sense: when $x \in L$, some $\vec z$ makes $\Phi(\vec z, y)$ hold for every single $y$, while when $x \notin L$, every $\vec z$ leaves $\Phi(\vec z, y)$ false for most $y$’s.
One-sided error for $\mathsf{MA}$ and $\mathsf{AM}$
Both $\mathsf{MA}$ and $\mathsf{AM}$ describe a short conversation between Merlin, an all-powerful but untrusted prover, and Arthur, a polynomial-time verifier who can flip coins. The two classes vary only in who goes first:

- In $\mathsf{MA}$, Merlin first sends a purported “proof” that $x \in L$, then Arthur uses randomness to “verify” the proof.
- In $\mathsf{AM}$, Arthur first uses randomness to create an $\mathsf{NP}$-type “challenge” for Merlin (e.g. a SAT instance), then Merlin tries to find a “solution” to the challenge.
The same squinting trick that gives “absolute guarantees” for $\mathsf{BPP}$³ shows that both classes can be assumed to have one-sided error: whenever $x \in L$, Merlin can make Arthur accept with probability $1$.

Formally, for $\mathsf{MA}$ (writing $A(x, m, r)$ for Arthur’s verdict on proof $m$ with coin flips $r$): if $x \in L$, then $\exists (m, \vec z)\ \forall r,\ \bigvee_i A(x, m, r \oplus z_i)$, while if $x \notin L$, then $\forall (m, \vec z)\ \exists^+ r,\ \neg \bigvee_i A(x, m, r \oplus z_i)$. In effect, we asked Merlin to attach to his proof the offsets $z_1, \ldots, z_k$ that make the squinted jar look full.

Similarly, for $\mathsf{AM}$, Arthur can send the offsets as his challenge, and Merlin must answer with a response that wins at every offset simultaneously. This time, we used the “$\wedge$” version of the squint, which turns an $\exists^+$ into a $\forall\exists$.
On a surface level, $\mathsf{MA} \subseteq \mathsf{AM}$ just says that Merlin would rather speak second, but only half of this is purely logical:

- For completeness, this is true because $\exists m\ \exists^+ r$ logically implies $\exists^+ r\ \exists m$: the only thing that changes between $\mathsf{MA}$ and $\mathsf{AM}$ is that Merlin now gets to see Arthur’s coin flips before sending his message, so that can only help him make Arthur accept.
- For soundness, on the other hand (and for the same reasons), $\forall m\ \exists^+ r$ does not directly imply $\exists^+ r\ \forall m$: even if no message of Merlin can fool Arthur on average when Merlin goes first, once the order is flipped, Merlin might be able to do better by adaptively choosing which message to send based on Arthur’s message. But if the probability Arthur is fooled is small compared to the number of possible messages for Merlin, then most of the time Arthur cannot be fooled by any message.
In math, this gives $\mathsf{MA} = \exists\,\exists^+ \subseteq \exists^+\exists = \mathsf{AM}$: specialize handles the “$x \in L$” condition, and the union bound (after boosting Arthur’s error below $2^{-|m|}/4$) handles the “$x \notin L$” condition.
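Spelled out on both conditions (a sketch in the notation above, with Arthur’s error already boosted):

$$\begin{aligned} x \in L &\implies \exists m\ \exists^+ r,\ A(x, m, r) \implies \exists^+ r\ \exists m,\ A(x, m, r) && \text{(specialize)} \\ x \notin L &\implies \forall m\ \exists^+ r,\ \neg A(x, m, r) \implies \exists^+ r\ \forall m,\ \neg A(x, m, r) && \text{(union bound)} \end{aligned}$$

and the two right-hand sides are exactly the conditions defining $\mathsf{AM}$.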
Overall, we can summarize what we know about these classes in this diagram (classes that are generally believed to be equal are shaded together):

Constant-round protocols are all in $\mathsf{AM}$
One might think that by adding more rounds of interaction between Arthur and Merlin, we can keep extending the class of languages. But this is not the case! The reason is the intuition that we got from the previous section: it’s always better for Arthur to go before Merlin, so we can transform the protocol so that Arthur sends all of his messages first, then Merlin sends all of his messages.
For example, consider the class $\mathsf{MAM}$, where Merlin speaks, then Arthur, then Merlin again: once Arthur’s error is boosted,⁵ the union bound lets Merlin’s opening message slide past Arthur’s.
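In the notation above, the whole collapse is a one-line juggle (a sketch; the swap needs Arthur’s error boosted below $2^{-|m_1|}/4$):

$$\mathsf{MAM} \;=\; \exists\ \exists^+\ \exists \;\subseteq\; \exists^+\ \exists\ \exists \;=\; \exists^+ \exists \;=\; \mathsf{AM},$$

where the inclusion is specialize plus the union bound, and the last step is combine, merging Merlin’s two messages. Repeating this inductively flattens any constant number of rounds down to $\mathsf{AM}$.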
What if $\mathsf{coNP} \subseteq \mathsf{AM}$?

In complexity we often think of $\mathsf{AM}$ as a barely-randomized version of $\mathsf{NP}$, so we do not expect it to be powerful enough to contain $\mathsf{coNP}$.

One important result is to show that if $\mathsf{coNP} \subseteq \mathsf{AM}$, then the polynomial hierarchy collapses to $\mathsf{AM}$.

If we had an $\mathsf{AM}$ protocol for $\mathsf{coNP}$ statements, then in any $\Sigma_2$ statement $\exists y\ \forall z$, we could replace the “$\forall z$” part by an $\exists^+\exists$, and juggle the leading $\exists$ inwards to conclude that $\Sigma_2 \subseteq \mathsf{AM}$.

But we know that $\mathsf{AM} \subseteq \Pi_2$ (weaken the one-sided error form above), so $\Sigma_2 \subseteq \Pi_2$ and the whole polynomial hierarchy comes crashing down to $\mathsf{AM}$.
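Here is the full juggle (a sketch in the notation above, assuming the $\mathsf{AM}$ protocol’s error is boosted below $2^{-|y|}/4$):

$$\Sigma_2 \;=\; \exists\ \forall \;\subseteq\; \exists\ (\exists^+ \exists) \;\subseteq\; \exists^+\ \exists\ \exists \;=\; \exists^+ \exists \;=\; \mathsf{AM} \;\subseteq\; \Pi_2,$$

where the first step plugs in the hypothesized protocol, the second is specialize (for “$x \in L$”) plus the union bound (for “$x \notin L$”), and the third is combine.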
Appendix: I lied a bit
This appendix is boring, please don’t read it.
I claimed earlier that we can always replace the $3/4$ threshold in the definition of $\exists^+$ by something as large as $1 - 2^{-\mathrm{poly}(n)}$ (“boosting”). This is not as innocent as it looks:
- For the “union bound” in $\mathsf{MA} \subseteq \mathsf{AM}$ to work, we need to boost the probability of success. This involves repeating the computation several times in parallel and taking the majority of the answers.
- For the “squinting” trick in the one-sided error results, we need to first boost the probability of success, then run this boosted computation at several offsets and take the OR.
This means that we should be able to take majorities or ORs in the underlying computation model. For a plain polynomial-time machine this is obviously fine, but here the computation sitting under the boosted $\exists^+$ may itself contain quantifiers (as in $\mathsf{MA}$ or $\mathsf{AM}$), and then it is not so clear.
However, it does turn out to be okay, and ultimately this comes down to the fact that you can push monotone operations like majorities and ORs through the $\exists$ and $\forall$ quantifiers.⁶ For instance, to boost the $\exists^+$ of $\mathsf{AM} = \exists^+\exists$, we repeat Arthur’s challenge $k$ times in parallel:

- if $x \in L$, then $\exists^+ (r_1, \ldots, r_k)$: for most $i$, $\exists m,\ A(x, r_i, m) = 1$;
- if $x \notin L$, then $\exists^+ (r_1, \ldots, r_k)$: for most $i$, $\forall m,\ A(x, r_i, m) = 0$.

Which is equivalent to:

- if $x \in L$, then $\exists^+ \vec r\ \exists (m_1, \ldots, m_k)$: for most $i$, $A(x, r_i, m_i) = 1$;
- if $x \notin L$, then $\exists^+ \vec r\ \forall (m_1, \ldots, m_k)$: for most $i$, $A(x, r_i, m_i) = 0$.

And the latter has a legitimate $\exists^+\exists$ shape again, just with a slightly more complicated inner test.
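The middle equivalence, “a majority of $\exists$’s is an $\exists$ of majorities”, is another one that is easy to sanity-check by brute force (my own toy script):

```python
import itertools, random

random.seed(3)
k, n = 5, 3                               # 5 parallel runs, 3-bit witnesses
ys = [''.join(b) for b in itertools.product('01', repeat=n)]
# One arbitrary predicate per run i (standing in for A(x, r_i, ·)).
phi = [{y: random.random() < 0.4 for y in ys} for _ in range(k)]

# "for most i: ∃y, phi_i(y)" ...
maj_of_exists = sum(any(phi[i][y] for y in ys) for i in range(k)) > k / 2
# ... equals "∃(y_1, ..., y_k): for most i, phi_i(y_i)".
exists_of_maj = any(
    sum(phi[i][w[i]] for i in range(k)) > k / 2
    for w in itertools.product(ys, repeat=k)
)
assert maj_of_exists == exists_of_maj
```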
1. All strings are understood to have length polynomial in $|x|$. ↩
2. In fact, as already mentioned in a footnote in Buying randomness with quantifiers, you can always move $\exists$ to the inside (because you can just choose not to specialize) and $\forall$ to the outside (because if it always works, it cannot meaningfully specialize): whatever (monotone) quantifier $Q$ you might come up with, $\exists\, Q \Rightarrow Q\, \exists$ and $Q\, \forall \Rightarrow \forall\, Q$. But we’re getting carried away! ↩
3. Actually, this is only the case if the underlying computation model allows us to boost the probability of success; see more details in the appendix. ↩
4. Actually, this is only the case if the underlying computation model allows us to boost the probability of success and compute ORs of polynomially many runs; see more details in the appendix. ↩
5. Actually, even the claim that constant-round protocols can be boosted is not obvious. The protocol-based definition only promises an overall probability of success, not that Arthur “succeeds with high probability at every round”. You can (very carefully) prove it by parallel repetition, though. ↩
6. In this note, we won’t need to boost the probability for a computation that itself includes an $\exists^+$, so pushing through $\exists$ and $\forall$ suffices. Ironically, for $\exists^+$, it’s less easy, but you can indeed do it… as long as the probability of that $\exists^+$ is high enough. So as far as I can tell, if you want to boost the probability of some $\exists^+$, you need to first boost the $\exists^+$’s right of it. ↩