Reverse entropic inequalities
Usual entropic inequalities
Fundamentally, shared information is visible in the whole $(X_1, \dots, X_n)$ but not in the individual coordinates $X_i$:
- entropy is subadditive:
  $$H(X_1, \dots, X_n) \;\le\; H(X_1) + \dots + H(X_n)$$
  (shared information is overcounted on the right);
- entropy deficit (the gap $\log|\mathcal{X}| - H(X)$ to the maximum possible entropy) is superadditive:
  $$\log|\mathcal{X}_1| + \dots + \log|\mathcal{X}_n| - H(X_1, \dots, X_n) \;\ge\; \sum_{i=1}^{n} \bigl(\log|\mathcal{X}_i| - H(X_i)\bigr)$$
  (shared information is only counted on the left).
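As a quick numerical sanity check (a sketch, not from the note: the joint distribution and the helper `H` are arbitrary illustrative choices), both inequalities for a correlated pair of bits:

```python
# Numerical check of subadditivity and deficit superadditivity for a
# correlated pair of bits; the joint distribution is an arbitrary example.
import numpy as np

def H(p):
    """Shannon entropy in bits of a probability array."""
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution of (X1, X2) on {0,1}^2 with positive correlation.
joint = np.array([[0.40, 0.10],
                  [0.10, 0.40]])

H_joint = H(joint)            # H(X1, X2) ≈ 1.72 bits
H_X1 = H(joint.sum(axis=1))   # H(X1) = 1 bit
H_X2 = H(joint.sum(axis=0))   # H(X2) = 1 bit

# Subadditivity: the joint entropy never exceeds the sum of the marginals.
assert H_joint <= H_X1 + H_X2 + 1e-12

# Deficit superadditivity (relative to the 2-bit maximum); this is the
# same inequality rearranged.
assert 2 - H_joint >= (1 - H_X1) + (1 - H_X2) - 1e-12

print(H_joint, H_X1 + H_X2)   # ≈ 1.72 <= 2.0
```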
More generally, Shearer's lemma makes the same observation not just for individual coordinates, but for any groupings $S_1, \dots, S_m \subseteq [n]$ such that every coordinate is covered at least $k$ times:
$$k \cdot H(X_1, \dots, X_n) \;\le\; H(X_{S_1}) + \dots + H(X_{S_m}).$$
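A small check of Shearer's lemma (a sketch; the parity distribution and the cover below are illustrative choices, not from the note): $X$ uniform over the even-parity strings in $\{0,1\}^3$, with cover $\{1,2\}, \{2,3\}, \{1,3\}$ in which every coordinate appears $k = 2$ times.

```python
# Sanity check of Shearer's lemma on a toy distribution.
import itertools
import math
from collections import Counter

def H(prob):
    """Shannon entropy (bits) of a dict mapping outcomes to probabilities."""
    return -sum(p * math.log2(p) for p in prob.values() if p > 0)

def marginal(joint, coords):
    """Marginal distribution of the coordinates in `coords`."""
    out = Counter()
    for x, p in joint.items():
        out[tuple(x[j] for j in coords)] += p
    return out

# X uniform over the even-parity strings of {0,1}^3.
support = [x for x in itertools.product((0, 1), repeat=3) if sum(x) % 2 == 0]
joint = {x: 1 / len(support) for x in support}

cover = [(0, 1), (1, 2), (0, 2)]   # each coordinate covered k = 2 times
k = 2

lhs = k * H(joint)                              # 2 * 2 = 4 bits
rhs = sum(H(marginal(joint, S)) for S in cover) # 2 + 2 + 2 = 6 bits
print(lhs, rhs)
assert lhs <= rhs + 1e-12
```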
Using independence to reverse them
When the $X_i$ are independent, no information is shared, so both inequalities become equalities: $H(X_1, \dots, X_n) = H(X_1) + \dots + H(X_n)$, and likewise for the deficits.
And we even get inequalities for Information divergence and mutual information that can otherwise go both ways: for a product reference measure $Q = Q_1 \times \dots \times Q_n$,
$$D(P \,\|\, Q) \;\ge\; D(P_1 \,\|\, Q_1) + \dots + D(P_n \,\|\, Q_n),$$
and for independent $Y_1, \dots, Y_n$,
$$I(X; Y_1, \dots, Y_n) \;\ge\; I(X; Y_1) + \dots + I(X; Y_n).$$
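A sketch of the reversed inequalities under independence, assuming the divergence inequality as stated above (the distributions `P`, `q1`, `q2` are arbitrary illustrative choices):

```python
# Exact additivity of entropy for a product distribution, and superadditivity
# of KL divergence against a product reference measure Q.
import numpy as np

def H(p):
    p = np.asarray(p, dtype=float).ravel()
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

def KL(p, q):
    p, q = np.asarray(p, float).ravel(), np.asarray(q, float).ravel()
    mask = p > 0
    return np.sum(p[mask] * np.log2(p[mask] / q[mask]))

# 1) Independent bits: entropy is exactly additive.
p1, p2 = np.array([0.3, 0.7]), np.array([0.6, 0.4])
prod = np.outer(p1, p2)
assert abs(H(prod) - (H(p1) + H(p2))) < 1e-12

# 2) Correlated P versus a *product* reference Q = q1 x q2:
#    D(P || Q) >= D(P1 || q1) + D(P2 || q2).
P = np.array([[0.40, 0.10],
              [0.10, 0.40]])
q1, q2 = np.array([0.5, 0.5]), np.array([0.2, 0.8])
Q = np.outer(q1, q2)

lhs = KL(P, Q)
rhs = KL(P.sum(axis=1), q1) + KL(P.sum(axis=0), q2)
print(lhs, rhs)
assert lhs >= rhs - 1e-9

# The gap is exactly the subadditivity gap H(P1) + H(P2) - H(P).
assert abs((lhs - rhs) - (H(P.sum(axis=1)) + H(P.sum(axis=0)) - H(P))) < 1e-9
```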
Reversing them without independence
When the variables are not independent, we cannot in general hope to reverse these inequalities: if, say, $X_1 = X_2$ is a uniform bit, then $H(X_1, X_2) = 1$ while $H(X_1) + H(X_2) = 2$.
Conditional deficits
However, we can use the chain rule (together with the fact that conditioning only reduces entropy) to show that the conditional deficits need to be large:
$$\sum_{i=1}^{n} \bigl(1 - H(X_i \mid X_{-i})\bigr) \;\ge\; n - H(X_1, \dots, X_n),$$
which is exactly the edge-isoperimetric inequality when $X$ is uniform over a set $A \subseteq \{0,1\}^n$: the right-hand side becomes $\log_2(2^n/|A|)$, and the $i$-th conditional deficit $1 - H(X_i \mid X_{-i})$ is exactly the influence of coordinate $i$, the density of edges leaving $A$ in direction $i$.
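A sketch checking both the inequality and the deficit-equals-influence identity on a small explicit set (the set `A` below is an arbitrary illustrative choice):

```python
# For X uniform on A ⊆ {0,1}^n, the conditional deficits 1 - H(X_i | X_{-i})
# equal the coordinate influences and sum to at least n - H(X) = log2(2^n/|A|).
import math
from collections import Counter

def H(prob):
    return -sum(p * math.log2(p) for p in prob.values() if p > 0)

def marginal(joint, coords):
    out = Counter()
    for x, p in joint.items():
        out[tuple(x[j] for j in coords)] += p
    return out

n = 4
A = [(0,0,0,0), (0,0,1,1), (0,1,0,1), (1,1,1,1), (1,0,0,1), (1,1,0,0)]
A_set = set(A)
X = {x: 1 / len(A) for x in A}

def cond_entropy(i):
    # H(X_i | X_{-i}) = H(X) - H(X_{-i})
    rest = [j for j in range(n) if j != i]
    return H(X) - H(marginal(X, rest))

def influence(i):
    # Fraction of points of A whose neighbour in direction i lies outside A.
    flip = lambda x: tuple(b ^ (j == i) for j, b in enumerate(x))
    return sum(flip(x) not in A_set for x in A) / len(A)

deficits = [1 - cond_entropy(i) for i in range(n)]
for i in range(n):
    assert abs(deficits[i] - influence(i)) < 1e-9   # deficit = influence

total = sum(deficits)
print(total, math.log2(2**n / len(A)))              # total >= log2(2^n/|A|)
assert total >= math.log2(2**n / len(A)) - 1e-9
```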
If the density $\alpha = |A|/2^n$ is bounded away from $0$ and $1$, averaging only forces some conditional deficit to be of order $1/n$, but the Kahn–Kalai–Linial theorem shows that at least one of the conditional deficits is significantly bigger than that: of order $(\log n)/n$.
Combining the chain rule and Kahn–Kalai–Linial, we get a lower bound on the largest conditional deficit which (perhaps surprisingly) is tight for every value of the deficit $n - H(X)$.
TODO: describe the examples achieving tightness for each value of the deficit, and turn this into an actual #figure.
Log-concave distributions
TODO: For distributions, “log-concavity” is a soft notion of independence which implies entropic inequalities, see Log-concave polynomials and distributions (not yet covered in that note).
- in particular, uniform over the $1$-inputs of a function