$\require{mathtools} \newcommand{\nc}{\newcommand} % %%% GENERIC MATH %%% % % Environments \newcommand{\al}[1]{\begin{align}#1\end{align}} % need this for \tag{} to work \renewcommand{\r}{\mathrm} \renewcommand{\t}{\textrm} % % Delimiters % (I needed to create my own because the MathJax version of \DeclarePairedDelimiter doesn't have \mathopen{} and that messes up the spacing) % .. one-part \newcommand{\p}[1]{\mathopen{}\left( #1 \right)} \renewcommand{\P}[1]{^{\p{#1}}} \renewcommand{\b}[1]{\mathopen{}\left[ #1 \right]} \newcommand{\set}[1]{\mathopen{}\left\{ #1 \right\}} \newcommand{\abs}[1]{\mathopen{}\left\lvert #1 \right\rvert} \newcommand{\floor}[1]{\mathopen{}\left\lfloor #1 \right\rfloor} \newcommand{\ceil}[1]{\mathopen{}\left\lceil #1 \right\rceil} \newcommand{\inner}[1]{\mathopen{}\left\langle #1 \right\rangle} \newcommand{\norm}[1]{\mathopen{}\left\lVert #1 \strut \right\rVert} \newcommand{\frob}[1]{\norm{#1}_\mathrm{F}} \newcommand{\mix}[1]{\mathopen{}\left\lfloor #1 \right\rceil} %% .. two-part \newcommand{\inco}[2]{#1 \mathop{}\middle|\mathop{} #2} \newcommand{\co}[2]{ {\left.\inco{#1}{#2}\right.}} \newcommand{\cond}{\co} % deprecated \newcommand{\pco}[2]{\p{\inco{#1}{#2}}} \newcommand{\bco}[2]{\b{\inco{#1}{#2}}} \newcommand{\setco}[2]{\set{\inco{#1}{#2}}} \newcommand{\at}[2]{ {\left.#1\strut\right|_{#2}}} \newcommand{\pat}[2]{\p{\at{#1}{#2}}} \newcommand{\bat}[2]{\b{\at{#1}{#2}}} \newcommand{\para}[2]{#1\strut \mathop{}\middle\|\mathop{} #2} \newcommand{\ppa}[2]{\p{\para{#1}{#2}}} \newcommand{\pff}[2]{\p{\ff{#1}{#2}}} \newcommand{\bff}[2]{\b{\ff{#1}{#2}}} \newcommand{\bffco}[4]{\bff{\cond{#1}{#2}}{\cond{#3}{#4}}} \newcommand{\sm}[1]{\p{\begin{smallmatrix}#1\end{smallmatrix}}} % % Greek \newcommand{\eps}{\epsilon} \newcommand{\veps}{\varepsilon} \newcommand{\vpi}{\varpi} % the following cause issues with real LaTeX tho :/ maybe consider naming it \fhi instead? \let\fi\phi % because it looks like an f \let\phi\varphi % because it looks like a p \renewcommand{\th}{\theta} \newcommand{\Th}{\Theta} \newcommand{\om}{\omega} \newcommand{\Om}{\Omega} % % Miscellaneous \newcommand{\LHS}{\mathrm{LHS}} \newcommand{\RHS}{\mathrm{RHS}} \DeclareMathOperator{\cst}{const} % .. operators \DeclareMathOperator{\poly}{poly} \DeclareMathOperator{\polylog}{polylog} \DeclareMathOperator{\quasipoly}{quasipoly} \DeclareMathOperator{\negl}{negl} \DeclareMathOperator*{\argmin}{arg\thinspace min} \DeclareMathOperator*{\argmax}{arg\thinspace max} % .. functions \DeclareMathOperator{\id}{id} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\err}{err} \DeclareMathOperator{\ReLU}{ReLU} % .. analysis \let\d\undefined \newcommand{\d}{\operatorname{d}\mathopen{}} \newcommand{\dd}[1]{\operatorname{d}^{#1}\mathopen{}} \newcommand{\df}[2]{ {\f{\d #1}{\d #2}}} \newcommand{\ds}[2]{ {\sl{\d #1}{\d #2}}} \newcommand{\ddf}[3]{ {\f{\dd{#1} #2}{\p{\d #3}^{#1}}}} \newcommand{\dds}[3]{ {\sl{\dd{#1} #2}{\p{\d #3}^{#1}}}} \renewcommand{\part}{\partial} \newcommand{\partf}[2]{\f{\part #1}{\part #2}} \newcommand{\parts}[2]{\sl{\part #1}{\part #2}} \newcommand{\grad}[1]{\mathop{\nabla\!_{#1}}} % .. sets \newcommand{\es}{\emptyset} \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} \newcommand{\zo}{\set{0,1}} \newcommand{\pmo}{\set{\pm 1}} \newcommand{\zpmo}{\set{0,\pm 1}} % .... 
set operations \newcommand{\sse}{\subseteq} \newcommand{\out}{\not\in} \newcommand{\minus}{\setminus} \newcommand{\inc}[1]{\union \set{#1}} % "including" \newcommand{\exc}[1]{\setminus \set{#1}} % "except" % .. over and under \renewcommand{\ss}[1]{_{\substack{#1}}} \newcommand{\OB}{\overbrace} \newcommand{\ob}[2]{\OB{#1}^\t{#2}} \newcommand{\UB}{\underbrace} \newcommand{\ub}[2]{\UB{#1}_\t{#2}} \newcommand{\ol}{\overline} \newcommand{\tld}{\widetilde} % deprecated \renewcommand{\~}{\widetilde} \newcommand{\HAT}{\widehat} % deprecated \renewcommand{\^}{\widehat} \newcommand{\rt}[1]{ {\sqrt{#1}}} \newcommand{\for}[2]{_{#1=1}^{#2}} \newcommand{\sfor}{\sum\for} \newcommand{\pfor}{\prod\for} % .... two-part \newcommand{\f}{\frac} \renewcommand{\sl}[2]{#1 /\mathopen{}#2} \newcommand{\ff}[2]{\mathchoice{\begin{smallmatrix}\displaystyle\vphantom{\p{#1}}#1\\[-0.05em]\hline\\[-0.05em]\hline\displaystyle\vphantom{\p{#2}}#2\end{smallmatrix}}{\begin{smallmatrix}\vphantom{\p{#1}}#1\\[-0.1em]\hline\\[-0.1em]\hline\vphantom{\p{#2}}#2\end{smallmatrix}}{\begin{smallmatrix}\vphantom{\p{#1}}#1\\[-0.1em]\hline\\[-0.1em]\hline\vphantom{\p{#2}}#2\end{smallmatrix}}{\begin{smallmatrix}\vphantom{\p{#1}}#1\\[-0.1em]\hline\\[-0.1em]\hline\vphantom{\p{#2}}#2\end{smallmatrix}}} % .. arrows \newcommand{\from}{\leftarrow} \DeclareMathOperator*{\<}{\!\;\longleftarrow\;\!} \let\>\undefined \DeclareMathOperator*{\>}{\!\;\longrightarrow\;\!} \let\-\undefined \DeclareMathOperator*{\-}{\!\;\longleftrightarrow\;\!} \newcommand{\so}{\implies} % .. operators and relations \renewcommand{\*}{\cdot} \newcommand{\x}{\times} \newcommand{\sr}{\stackrel} \newcommand{\ce}{\coloneqq} \newcommand{\ec}{\eqqcolon} \newcommand{\ap}{\approx} \newcommand{\ls}{\lesssim} \newcommand{\gs}{\gtrsim} % .. punctuation and spacing \renewcommand{\.}[1]{#1\dots#1} \newcommand{\ts}{\thinspace} \newcommand{\q}{\quad} \newcommand{\qq}{\qquad} % % Levels of closeness \newcommand{\scirc}[1]{\sr{\circ}{#1}} \newcommand{\sdot}[1]{\sr{.}{#1}} \newcommand{\slog}[1]{\sr{\log}{#1}} \newcommand{\createClosenessLevels}[7]{ \newcommand{#2}{\mathrel{(#1)}} \newcommand{#3}{\mathrel{#1}} \newcommand{#4}{\mathrel{#1\!\!#1}} \newcommand{#5}{\mathrel{#1\!\!#1\!\!#1}} \newcommand{#6}{\mathrel{(\sdot{#1})}} \newcommand{#7}{\mathrel{(\slog{#1})}} } \let\lt\undefined \let\gt\undefined % .. vanilla versions (is it within a constant?) \newcommand{\ez}{\scirc=} \newcommand{\eq}{\simeq} \newcommand{\eqq}{\mathrel{\eq\!\!\eq}} \newcommand{\eqqq}{\mathrel{\eq\!\!\eq\!\!\eq}} \newcommand{\lez}{\scirc\le} \renewcommand{\lq}{\preceq} \newcommand{\lqq}{\mathrel{\lq\!\!\lq}} \newcommand{\lqqq}{\mathrel{\lq\!\!\lq\!\!\lq}} \newcommand{\gez}{\scirc\ge} \newcommand{\gq}{\succeq} \newcommand{\gqq}{\mathrel{\gq\!\!\gq}} \newcommand{\gqqq}{\mathrel{\gq\!\!\gq\!\!\gq}} \newcommand{\lz}{\scirc<} \newcommand{\lt}{\prec} \newcommand{\ltt}{\mathrel{\lt\!\!\lt}} \newcommand{\lttt}{\mathrel{\lt\!\!\lt\!\!\lt}} \newcommand{\gz}{\scirc>} \newcommand{\gt}{\succ} \newcommand{\gtt}{\mathrel{\gt\!\!\gt}} \newcommand{\gttt}{\mathrel{\gt\!\!\gt\!\!\gt}} % .. dotted versions (is it equal in the limit?) 
\newcommand{\ed}{\sdot=} \newcommand{\eqd}{\sdot\eq} \newcommand{\eqqd}{\sdot\eqq} \newcommand{\eqqqd}{\sdot\eqqq} \newcommand{\led}{\sdot\le} \newcommand{\lqd}{\sdot\lq} \newcommand{\lqqd}{\sdot\lqq} \newcommand{\lqqqd}{\sdot\lqqq} \newcommand{\ged}{\sdot\ge} \newcommand{\gqd}{\sdot\gq} \newcommand{\gqqd}{\sdot\gqq} \newcommand{\gqqqd}{\sdot\gqqq} \newcommand{\ld}{\sdot<} \newcommand{\ltd}{\sdot\lt} \newcommand{\lttd}{\sdot\ltt} \newcommand{\ltttd}{\sdot\lttt} \newcommand{\gd}{\sdot>} \newcommand{\gtd}{\sdot\gt} \newcommand{\gttd}{\sdot\gtt} \newcommand{\gtttd}{\sdot\gttt} % .. log versions (is it equal up to log?) \newcommand{\elog}{\slog=} \newcommand{\eqlog}{\slog\eq} \newcommand{\eqqlog}{\slog\eqq} \newcommand{\eqqqlog}{\slog\eqqq} \newcommand{\lelog}{\slog\le} \newcommand{\lqlog}{\slog\lq} \newcommand{\lqqlog}{\slog\lqq} \newcommand{\lqqqlog}{\slog\lqqq} \newcommand{\gelog}{\slog\ge} \newcommand{\gqlog}{\slog\gq} \newcommand{\gqqlog}{\slog\gqq} \newcommand{\gqqqlog}{\slog\gqqq} \newcommand{\llog}{\slog<} \newcommand{\ltlog}{\slog\lt} \newcommand{\lttlog}{\slog\ltt} \newcommand{\ltttlog}{\slog\lttt} \newcommand{\glog}{\slog>} \newcommand{\gtlog}{\slog\gt} \newcommand{\gttlog}{\slog\gtt} \newcommand{\gtttlog}{\slog\gttt} % % %%% SPECIALIZED MATH %%% % % Logic and bit operations \newcommand{\fa}{\forall} \newcommand{\ex}{\exists} \renewcommand{\and}{\wedge} \newcommand{\AND}{\bigwedge} \renewcommand{\or}{\vee} \newcommand{\OR}{\bigvee} \newcommand{\xor}{\oplus} \newcommand{\XOR}{\bigoplus} \newcommand{\union}{\cup} \newcommand{\inter}{\cap} \newcommand{\UNION}{\bigcup} \newcommand{\INTER}{\bigcap} \newcommand{\comp}{\overline} \newcommand{\true}{\r{true}} \newcommand{\false}{\r{false}} \newcommand{\tf}{\set{\true,\false}} \DeclareMathOperator{\One}{\mathbb{1}} \DeclareMathOperator{\1}{\mathbb{1}} % use \mathbbm instead if using real LaTeX \DeclareMathOperator{\LSB}{LSB} % % Linear algebra \newcommand{\spn}{\mathrm{span}} % do NOT use \span because it causes misery with amsmath \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\proj}{proj} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\Img}{Im} \newcommand{\transp}{\mathsf{T}} \newcommand{\T}{^\transp} % .. named tensors \newcommand{\namedtensorstrut}{\vphantom{fg}} % milder than \mathstrut \newcommand{\name}[1]{\mathsf{\namedtensorstrut #1}} \newcommand{\nbin}[2]{\mathbin{\underset{\substack{#1}}{\namedtensorstrut #2}}} \newcommand{\ndot}[1]{\nbin{#1}{\odot}} \newcommand{\ncat}[1]{\nbin{#1}{\oplus}} \newcommand{\nsum}[1]{\sum\limits_{\substack{#1}}} \newcommand{\nfun}[2]{\mathop{\underset{\substack{#1}}{\namedtensorstrut\mathrm{#2}}}} \newcommand{\ndef}[2]{\newcommand{#1}{\name{#2}}} \newcommand{\nt}[1]{^{\transp(#1)}} % % Probability \newcommand{\tri}{\triangle} \newcommand{\Normal}{\mathcal{N}} % .. operators \DeclareMathOperator{\supp}{supp} \let\Pr\undefined \DeclareMathOperator*{\Pr}{Pr} \DeclareMathOperator*{\G}{\mathbb{G}} \DeclareMathOperator*{\Odds}{Od} \DeclareMathOperator*{\E}{E} \DeclareMathOperator*{\Var}{Var} \DeclareMathOperator*{\Cov}{Cov} \DeclareMathOperator*{\K}{K} \DeclareMathOperator*{\corr}{corr} \DeclareMathOperator*{\median}{median} \DeclareMathOperator*{\maj}{maj} % ... information theory \let\H\undefined \DeclareMathOperator*{\H}{H} \DeclareMathOperator*{\I}{I} \DeclareMathOperator*{\D}{D} \DeclareMathOperator*{\KL}{KL} % .. 
other divergences \newcommand{\dTV}{d_{\mathrm{TV}}} \newcommand{\dHel}{d_{\mathrm{Hel}}} \newcommand{\dJS}{d_{\mathrm{JS}}} % %%% SPECIALIZED COMPUTER SCIENCE %%% % % Complexity classes % .. classical \newcommand{\Poly}{\mathsf{P}} \newcommand{\NP}{\mathsf{NP}} \newcommand{\PH}{\mathsf{PH}} \newcommand{\PSPACE}{\mathsf{PSPACE}} \renewcommand{\L}{\mathsf{L}} % .. probabilistic \newcommand{\formost}{\mathsf{Я}} \newcommand{\RP}{\mathsf{RP}} \newcommand{\BPP}{\mathsf{BPP}} \newcommand{\MA}{\mathsf{MA}} \newcommand{\AM}{\mathsf{AM}} \newcommand{\IP}{\mathsf{IP}} \newcommand{\RL}{\mathsf{RL}} % .. circuits \newcommand{\NC}{\mathsf{NC}} \newcommand{\AC}{\mathsf{AC}} \newcommand{\ACC}{\mathsf{ACC}} \newcommand{\ThrC}{\mathsf{TC}} \newcommand{\Ppoly}{\mathsf{P}/\poly} \newcommand{\Lpoly}{\mathsf{L}/\poly} % .. resources \newcommand{\TIME}{\mathsf{TIME}} \newcommand{\SPACE}{\mathsf{SPACE}} \newcommand{\TISP}{\mathsf{TISP}} \newcommand{\SIZE}{\mathsf{SIZE}} % .. keywords \newcommand{\coclass}{\mathsf{co}} \newcommand{\Prom}{\mathsf{Promise}} % % Boolean analysis \newcommand{\harpoon}{\!\upharpoonright\!} \newcommand{\rr}[2]{#1\harpoon_{#2}} \newcommand{\Fou}[1]{\widehat{#1}} \DeclareMathOperator{\Ind}{\mathrm{Ind}} \DeclareMathOperator{\Inf}{\mathrm{Inf}} \newcommand{\Der}[1]{\operatorname{D}_{#1}\mathopen{}} \newcommand{\Exp}[1]{\operatorname{E}_{#1}\mathopen{}} \DeclareMathOperator{\Stab}{\mathrm{Stab}} \DeclareMathOperator{\Tau}{T} \DeclareMathOperator{\sens}{\mathrm{s}} \DeclareMathOperator{\bsens}{\mathrm{bs}} \DeclareMathOperator{\fbsens}{\mathrm{fbs}} \DeclareMathOperator{\Cert}{\mathrm{C}} \DeclareMathOperator{\DT}{\mathrm{DT}} \DeclareMathOperator{\CDT}{\mathrm{CDT}} % canonical \DeclareMathOperator{\ECDT}{\mathrm{ECDT}} \DeclareMathOperator{\CDTv}{\mathrm{CDT_{vars}}} \DeclareMathOperator{\ECDTv}{\mathrm{ECDT_{vars}}} \DeclareMathOperator{\CDTt}{\mathrm{CDT_{terms}}} \DeclareMathOperator{\ECDTt}{\mathrm{ECDT_{terms}}} \DeclareMathOperator{\CDTw}{\mathrm{CDT_{weighted}}} \DeclareMathOperator{\ECDTw}{\mathrm{ECDT_{weighted}}} \DeclareMathOperator{\AvgDT}{\mathrm{AvgDT}} \DeclareMathOperator{\PDT}{\mathrm{PDT}} % partial decision tree \DeclareMathOperator{\DTsize}{\mathrm{DT_{size}}} \DeclareMathOperator{\W}{\mathbf{W}} % .. functions (small caps sadly doesn't work) \DeclareMathOperator{\Par}{\mathrm{Par}} \DeclareMathOperator{\Maj}{\mathrm{Maj}} \DeclareMathOperator{\HW}{\mathrm{HW}} \DeclareMathOperator{\Thr}{\mathrm{Thr}} \DeclareMathOperator{\Tribes}{\mathrm{Tribes}} \DeclareMathOperator{\RotTribes}{\mathrm{RotTribes}} \DeclareMathOperator{\CycleRun}{\mathrm{CycleRun}} \DeclareMathOperator{\SAT}{\mathrm{SAT}} \DeclareMathOperator{\UniqueSAT}{\mathrm{UniqueSAT}} % % Dynamic optimality \newcommand{\OPT}{\mathsf{OPT}} \newcommand{\Alt}{\mathsf{Alt}} \newcommand{\Funnel}{\mathsf{Funnel}} % % Alignment \DeclareMathOperator{\Amp}{\mathrm{Amp}} % %%% TYPESETTING %%% % % In "text" \newcommand{\heart}{\heartsuit} \newcommand{\nth}{^\t{th}} \newcommand{\degree}{^\circ} \newcommand{\qu}[1]{\text{``}#1\text{''}} % remove these last two if using real LaTeX \newcommand{\qed}{\blacksquare} \newcommand{\qedhere}{\tag*{$\blacksquare$}} % % Fonts % .. 
bold \newcommand{\BA}{\boldsymbol{A}} \newcommand{\BB}{\boldsymbol{B}} \newcommand{\BC}{\boldsymbol{C}} \newcommand{\BD}{\boldsymbol{D}} \newcommand{\BE}{\boldsymbol{E}} \newcommand{\BF}{\boldsymbol{F}} \newcommand{\BG}{\boldsymbol{G}} \newcommand{\BH}{\boldsymbol{H}} \newcommand{\BI}{\boldsymbol{I}} \newcommand{\BJ}{\boldsymbol{J}} \newcommand{\BK}{\boldsymbol{K}} \newcommand{\BL}{\boldsymbol{L}} \newcommand{\BM}{\boldsymbol{M}} \newcommand{\BN}{\boldsymbol{N}} \newcommand{\BO}{\boldsymbol{O}} \newcommand{\BP}{\boldsymbol{P}} \newcommand{\BQ}{\boldsymbol{Q}} \newcommand{\BR}{\boldsymbol{R}} \newcommand{\BS}{\boldsymbol{S}} \newcommand{\BT}{\boldsymbol{T}} \newcommand{\BU}{\boldsymbol{U}} \newcommand{\BV}{\boldsymbol{V}} \newcommand{\BW}{\boldsymbol{W}} \newcommand{\BX}{\boldsymbol{X}} \newcommand{\BY}{\boldsymbol{Y}} \newcommand{\BZ}{\boldsymbol{Z}} \newcommand{\Ba}{\boldsymbol{a}} \newcommand{\Bb}{\boldsymbol{b}} \newcommand{\Bc}{\boldsymbol{c}} \newcommand{\Bd}{\boldsymbol{d}} \newcommand{\Be}{\boldsymbol{e}} \newcommand{\Bf}{\boldsymbol{f}} \newcommand{\Bg}{\boldsymbol{g}} \newcommand{\Bh}{\boldsymbol{h}} \newcommand{\Bi}{\boldsymbol{i}} \newcommand{\Bj}{\boldsymbol{j}} \newcommand{\Bk}{\boldsymbol{k}} \newcommand{\Bl}{\boldsymbol{l}} \newcommand{\Bm}{\boldsymbol{m}} \newcommand{\Bn}{\boldsymbol{n}} \newcommand{\Bo}{\boldsymbol{o}} \newcommand{\Bp}{\boldsymbol{p}} \newcommand{\Bq}{\boldsymbol{q}} \newcommand{\Br}{\boldsymbol{r}} \newcommand{\Bs}{\boldsymbol{s}} \newcommand{\Bt}{\boldsymbol{t}} \newcommand{\Bu}{\boldsymbol{u}} \newcommand{\Bv}{\boldsymbol{v}} \newcommand{\Bw}{\boldsymbol{w}} \newcommand{\Bx}{\boldsymbol{x}} \newcommand{\By}{\boldsymbol{y}} \newcommand{\Bz}{\boldsymbol{z}} \newcommand{\Balpha}{\boldsymbol{\alpha}} \newcommand{\Bbeta}{\boldsymbol{\beta}} \newcommand{\Bgamma}{\boldsymbol{\gamma}} \newcommand{\Bdelta}{\boldsymbol{\delta}} \newcommand{\Beps}{\boldsymbol{\eps}} \newcommand{\Bveps}{\boldsymbol{\veps}} \newcommand{\Bzeta}{\boldsymbol{\zeta}} \newcommand{\Beta}{\boldsymbol{\eta}} \newcommand{\Btheta}{\boldsymbol{\theta}} \newcommand{\Bth}{\boldsymbol{\th}} \newcommand{\Biota}{\boldsymbol{\iota}} \newcommand{\Bkappa}{\boldsymbol{\kappa}} \newcommand{\Blambda}{\boldsymbol{\lambda}} \newcommand{\Bmu}{\boldsymbol{\mu}} \newcommand{\Bnu}{\boldsymbol{\nu}} \newcommand{\Bxi}{\boldsymbol{\xi}} \newcommand{\Bpi}{\boldsymbol{\pi}} \newcommand{\Bvpi}{\boldsymbol{\vpi}} \newcommand{\Brho}{\boldsymbol{\rho}} \newcommand{\Bsigma}{\boldsymbol{\sigma}} \newcommand{\Btau}{\boldsymbol{\tau}} \newcommand{\Bupsilon}{\boldsymbol{\upsilon}} \newcommand{\Bphi}{\boldsymbol{\phi}} \newcommand{\Bfi}{\boldsymbol{\fi}} \newcommand{\Bchi}{\boldsymbol{\chi}} \newcommand{\Bpsi}{\boldsymbol{\psi}} \newcommand{\Bom}{\boldsymbol{\om}} % .. 
calligraphic \newcommand{\CA}{\mathcal{A}} \newcommand{\CB}{\mathcal{B}} \newcommand{\CC}{\mathcal{C}} \newcommand{\CD}{\mathcal{D}} \newcommand{\CE}{\mathcal{E}} \newcommand{\CF}{\mathcal{F}} \newcommand{\CG}{\mathcal{G}} \newcommand{\CH}{\mathcal{H}} \newcommand{\CI}{\mathcal{I}} \newcommand{\CJ}{\mathcal{J}} \newcommand{\CK}{\mathcal{K}} \newcommand{\CL}{\mathcal{L}} \newcommand{\CM}{\mathcal{M}} \newcommand{\CN}{\mathcal{N}} \newcommand{\CO}{\mathcal{O}} \newcommand{\CP}{\mathcal{P}} \newcommand{\CQ}{\mathcal{Q}} \newcommand{\CR}{\mathcal{R}} \newcommand{\CS}{\mathcal{S}} \newcommand{\CT}{\mathcal{T}} \newcommand{\CU}{\mathcal{U}} \newcommand{\CV}{\mathcal{V}} \newcommand{\CW}{\mathcal{W}} \newcommand{\CX}{\mathcal{X}} \newcommand{\CY}{\mathcal{Y}} \newcommand{\CZ}{\mathcal{Z}} % .. typewriter \newcommand{\TA}{\mathtt{A}} \newcommand{\TB}{\mathtt{B}} \newcommand{\TC}{\mathtt{C}} \newcommand{\TD}{\mathtt{D}} \newcommand{\TE}{\mathtt{E}} \newcommand{\TF}{\mathtt{F}} \newcommand{\TG}{\mathtt{G}} \renewcommand{\TH}{\mathtt{H}} \newcommand{\TI}{\mathtt{I}} \newcommand{\TJ}{\mathtt{J}} \newcommand{\TK}{\mathtt{K}} \newcommand{\TL}{\mathtt{L}} \newcommand{\TM}{\mathtt{M}} \newcommand{\TN}{\mathtt{N}} \newcommand{\TO}{\mathtt{O}} \newcommand{\TP}{\mathtt{P}} \newcommand{\TQ}{\mathtt{Q}} \newcommand{\TR}{\mathtt{R}} \newcommand{\TS}{\mathtt{S}} \newcommand{\TT}{\mathtt{T}} \newcommand{\TU}{\mathtt{U}} \newcommand{\TV}{\mathtt{V}} \newcommand{\TW}{\mathtt{W}} \newcommand{\TX}{\mathtt{X}} \newcommand{\TY}{\mathtt{Y}} \newcommand{\TZ}{\mathtt{Z}}$

Have you ever wondered…

  • what the square brackets mean when we write things like $\Pr[\cdot]$ or $\E[\cdot]$?
  • whether, in $\E\bco{\By}{\Bx}$, the argument $\co{\By}{\Bx}$ has its own independent meaning?
  • whether conditional entropy might be defined the wrong way?

Then you’ve come to the wrong place, because this note is confusing as heck! We’ll take the thought “any expression can become random… including random expressions” and run with it.

Basics

Distributions

Distributions are essentially weighted sets: sets only tell us what values are possible, while distributions also tell us how likely each value is. Formally, for a set $X$, a distribution $P$ can be written as

\[ P = \setco{(x,p_x)}{x \in X} \]

where the probability weights $p_x \in [0,1]$ sum to $1$. We can also see $P$ as a function $X \to [0,1]$ in the usual way that functions are defined as sets of pairs: the probability of $x$ is then written $P(x)$.

Some useful notation:

  • Let $\tri(X)$ be the set of distributions over $X$.
  • Let $\supp(P) \ce \setco{x}{P(x) \ne 0}$ denote the support of distribution $P$, i.e. the set of points to which $P$ gives nonzero probability.
  • Let $\CU(X)$ denote the uniform distribution over $X$, which (when $X$ is a finite set) is the distribution that gives each value the same probability: $\CU(X)(x) \ce \f{1}{\abs X}$ for all $x \in X$.
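
To make this concrete, here's a minimal sketch in Python (not part of the note's formalism; the helper names `uniform` and `support` are just illustrative) representing a finite distribution as a dict from values to probability weights. We'll reuse this dict representation in the later sketches.

```python
from fractions import Fraction

def uniform(values):
    """U(X): the distribution giving each of the |X| values probability 1/|X|."""
    values = list(values)
    return {x: Fraction(1, len(values)) for x in values}

def support(P):
    """supp(P): the set of points to which P gives nonzero probability."""
    return {x for x, p in P.items() if p != 0}

P = uniform({1, 2, 3, 4})
assert sum(P.values()) == 1        # the probability weights sum to 1
assert support(P) == {1, 2, 3, 4}
assert P[3] == Fraction(1, 4)      # "P as a function X -> [0,1]": P(3) = 1/4
```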

Random comprehensions

Distributions are nice, but they’re not very convenient if we actually want to do things with their values. Sometimes, we want to talk about e.g. what $f(x)$ typically looks like if we take a random draw $x$ from $P$. Instead of doing some sort of sum over all of $X$ involving the probability weights $P(x)$, it would be nice to talk about $f(x)$ directly as a random thing.

When we talk about (unweighted) sets, we can use set comprehensions for this:

\[ \setco{f(x)}{x \in X}. \]

So let’s analogously write random comprehensions:

\[ \Bx \sim P:f(\Bx). \]

A random comprehension has two parts:

  • $\Bx \sim P$ declares a random symbol $\Bx$: it means “consider $\Bx$ to be a random draw from $P$”;
  • $f(\Bx)$ is a random expression: it means “take that $\Bx$, perform such-and-such transformation, and look at the result”.

We will always write random symbols in bold font (e.g. $\BA, \Ba$).

Note that in this view, “$\Bx$”, “$f(\Bx)$” and “$\Bx \sim P:f(\Bx)$” don’t exist as mathematical objects the same way that the distribution $P$ does. They’re just grammar that we use to describe operations on random values more concisely than if we manipulated the distributions directly. To drive this point home, we’ll generally avoid the term “random variable”.

Informally, it’s convenient to declare a random symbol globally then talk about one or more random expressions: e.g. if we say “let $\Bx \sim P$, and consider the correlation between $f(\Bx)$ and $g(\Bx)$” then under the hood we’re actually making claims about the random comprehension $\Bx \sim P: (f(\Bx), g(\Bx))$.

Capturing the distribution

Since random expressions are not themselves real mathematical objects, if we want to do anything useful with them, we eventually need to be able to get a distribution back. We do that using square brackets: if $\Bx \sim P$, then $\b{f(\Bx)}$ denotes “the distribution of $f(x)$, assuming $x$ was chosen at random according to $P$”.

Formally, we’ll write this distribution as $_{\Bx \sim P}\b{f(\Bx)}$, which is defined as

\[ _{\Bx \sim P}\b{f(\Bx)}(y) \ce \sum_{x: f(x)=y} P(x), \]

and we’ll also use the shorthand $_P\p{f} \ce {}_{\Bx \sim P}\b{f(\Bx)}$ in cases where the function $f$ is already explicitly defined. But in the language of random comprehensions, we’ll just write it as $\Bx \sim P: [f(\Bx)]$.
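
In code, capturing is just a pushforward of the probability weights along $f$. Here's a minimal sketch, continuing the dict representation from before (the helper name `capture` is made up for illustration):

```python
from fractions import Fraction

def capture(P, f):
    """[f(x)] for x ~ P: the distribution Q with Q(y) = sum of P(x) over {x : f(x) = y}."""
    Q = {}
    for x, p in P.items():
        Q[f(x)] = Q.get(f(x), 0) + p
    return Q

P = {x: Fraction(1, 5) for x in (-2, -1, 0, 1, 2)}   # uniform on {-2, ..., 2}
Q = capture(P, lambda x: x * x)                      # the distribution of x^2
assert Q == {4: Fraction(2, 5), 1: Fraction(2, 5), 0: Fraction(1, 5)}
```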

As a spoiler, we’ll later see that this operation of “capturing the distribution” of a random expression is precisely what the square brackets are doing in expressions like “$\Pr[\Bx=x]$” or “$\E[\Bx]$”.

Working with random expressions

Aliases

We can define new random symbols as aliases for random expressions, just like we would for normal values: e.g. $\By \ce f(\Bx)$. If we then talk about some expression $g(\By)$ based on $\By$, this should be interpreted as the random comprehension $\Bx \sim P: g(f(\Bx))$.

Random events

Random events are just random expressions that yield truth values $\tf$, such as “$\Bx = 3$” or “$\Bx \in S$” for some set $S \sse X$. We can denote events two ways:

  • as functions: e.g. $E(x) \ce (x = 3)$, so that $E(\Bx)$ is equivalent to $\Bx=3$;
  • as random symbols: e.g. $\BE \ce (\Bx = 3)$ (in this case $\BE$ is itself in bold font).

Joint distributions

If $R$ is a distribution on the set of pairs $X \times Y$, then we can write $(\Bx,\By) \sim R$ to describe a joint distribution on $\Bx$ and $\By$. Formally, writing $(\Bx,\By) \sim R$ is equivalent to writing $\Bz \sim R$ (where $\Bz$ is a random pair in $X\x Y$) then defining the aliases $\Bx \ce \Bz_1$ and $\By \ce \Bz_2$. The same applies more generally to tuples.

More generally, we say that random expressions are jointly distributed if their randomness comes from the same declaration: for example, if $\Bx \sim P$, then $f(\Bx)$ and $g(\Bx)$ are jointly distributed, and we can think about the ways they depend on each other.

If we have two distributions $P \in \tri(X)$ and $Q \in \tri\p Y$, we can form the product distribution $P \times Q \in \tri\p{X \x Y}$, defined as $(P \times Q)(x,y) \ce P(x)Q(y)$. In the other direction, given a joint distribution $R$ we can use our notation to marginalize it: for example, $(\Bx,\By) \sim R : \b{\Bx}$ gives the marginal for the first element of the pair.
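
Here's a minimal sketch of both directions, product and marginalization (helper names are illustrative; note that marginalizing is just capturing a coordinate map):

```python
from fractions import Fraction

def product(P, Q):
    """(P x Q)(x, y) := P(x) * Q(y)."""
    return {(x, y): p * q for x, p in P.items() for y, q in Q.items()}

def marginal(R, i):
    """Marginalize a joint distribution over pairs onto component i,
    i.e. capture the coordinate map pair -> pair[i]."""
    M = {}
    for pair, p in R.items():
        M[pair[i]] = M.get(pair[i], 0) + p
    return M

P = {0: Fraction(1, 2), 1: Fraction(1, 2)}
Q = {"a": Fraction(1, 3), "b": Fraction(2, 3)}
R = product(P, Q)
assert marginal(R, 0) == P and marginal(R, 1) == Q
```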

Equality of random expressions

There are two ways that two random expressions $\Bx$ and $\By$ can be equal.

They can be “equal in distribution”, i.e. their distributions are identical. We can write this as $[\Bx] = [\By]$. For example, if $\Bom \sim \CU\p{[0, 2\pi]}$, then $[\cos(\Bom)] = [\cos(\Bom+\theta)]$ for any offset $\theta$, even though for almost all values of $\Bom$ these expressions would be different (as long as $\theta \not\in 2\pi\Z$). Note that for this notion to work out, the expressions $\Bx$ and $\By$ don’t need to be jointly distributed: for example, it’s completely fine to write $_{\Bx \sim \CU\p{[-1,0]}}\b{-2\Bx} = {}_{\By \sim \CU\p{[0,2]}}\b{2-\By}$ (and it is a true fact).

More strongly, $\Bx$ and $\By$ can be “equal in probability”, i.e. they are always¹ equal. We can write this as $[\Bx = \By](\true) = 1$ (which we’ll see later is what is meant by $\Pr[\Bx = \By] = 1$). Note that $\Bx$ and $\By$ must be jointly distributed in order for this to make sense.

Computing statistics

Now that we have a nice way to manipulate probability distributions, it would be nice if we could get more information about them.

Principles

Suppose $\Bx \sim P$ takes real values, and we would like to know the mean of $f(\Bx)$. Thanks to our judicious notational choices, we can just denote it as $\E[f(\Bx)]$, where

  • $\b{f(\Bx)}$ captures the randomness of $f(\Bx)$ and returns its distribution—name it $Q$;
  • $\E$ is an operator on distributions that takes $Q$ and computes its mean.

This operator is defined as $\E(Q) \ce \sum_y Q(y)\, y$ for discrete distributions and $\E(Q) \ce \int_y y \d Q(y)$ for continuous distributions. And the notation $\E_{\Bx \sim P}\b{f(\Bx)}$ can be similarly understood as applying the operator $\E$ to the distribution $_{\Bx \sim P}\b{f(\Bx)}$.

In general, let’s call any operator on distributions that returns a single number a statistic. Even though they’re only defined on distributions, they automatically work on random expressions as long as you capture their distribution with brackets.
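
For instance, here's a sketch of $\E$ as an operator on (discrete, real-valued) distributions, composed with the capturing operation from before (helper names illustrative):

```python
from fractions import Fraction

def E(Q):
    """The statistic E: takes a discrete real-valued distribution, returns its mean."""
    return sum(p * y for y, p in Q.items())

def capture(P, f):
    """[f(x)] for x ~ P, as sketched earlier."""
    Q = {}
    for x, p in P.items():
        Q[f(x)] = Q.get(f(x), 0) + p
    return Q

P = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}
# E[f(x)] parses as the operator E applied to the captured distribution [f(x)]:
assert E(capture(P, lambda x: x * x)) == Fraction(15, 4)
```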

Types of statistics

There are two main types of statistics: those that are happy to be given values from any set, and those that expect values of a specific type (e.g. $\R$, $\R^n$, $\tf$).

  • Some examples of generic statistics:
    • Entropy: $\H(P) \ce \E_{\Bx \sim P}\b{\log \f{1}{P(\Bx)}}$, which works for an arbitrary distribution $P$.
    • Ratio divergences: $D^f\pff{Q}{P} \ce \E_{\Bx \sim P}\b{f\p{\f{Q(\Bx)}{P(\Bx)}}}$, which can take arbitrary distributions $P,Q$ as long as $\supp\p Q \sse \supp\p P$.
  • Some examples of statistics that require values of a specific type (a few of these are sketched in code below):
    • Truth values: $\Pr(P) \ce P(\true)$.
    • Real values or vectors:
      • expectation: see earlier;
      • variance: $\Var(P) \ce \E_{\Bx \sim P}\b{\p{\Bx - \E(P)}^2}$.
    • The mutual information is kind of an edge case: while $\I[\Bx;\By] \ce \D\pff{[(\Bx,\By)]}{[\Bx]\x[\By]}$ doesn’t require $\Bx$ or $\By$ to return values of a certain type, if we understand it as an operator on the distribution $R \ce \b{(\Bx,\By)}$, then $\I(R) \ce \D\pff{R}{{}_{(\Bx,\By) \sim R}\b\Bx \x {}_{(\Bx,\By) \sim R}\b\By}$ does require $R$ to be a distribution over pairs.

Even though we chose to define them based on distributions to make a pedagogical point, most of the statistics in the latter category are simpler to define using the random expressions directly than by going through distributions.

Note: The operator “$\1(\cdot)$”, which is often used to turn a random event into a $\zo$ random variable, does not inherently work on the distributions: it’s just a function that takes a truth value ($\true$ or $\false$) and returns $0$ or $1$. For example, it’s perfectly okay to write $\1\p{x=3}$ to denote “the value which is $1$ if $x=3$ and $0$ otherwise” even if $x$ is deterministic. Because of that, we shouldn’t use brackets for it, since it’s not capturing randomness but just doing data processing.
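
A quick sketch of a few of the statistics above: $\Pr$ and $\H$ as operators on distributions, with the indicator as plain data processing that needs no brackets (helper names are illustrative):

```python
import math
from fractions import Fraction

def capture(P, f):
    """[f(x)] for x ~ P, as sketched earlier."""
    Q = {}
    for x, p in P.items():
        Q[f(x)] = Q.get(f(x), 0) + p
    return Q

def Pr(Q):
    """Pr(Q) := Q(true), for a distribution over truth values."""
    return Q.get(True, 0)

def H(P):
    """Entropy H(P) = E[log 1/P(x)]: works for a distribution over any set."""
    return sum(p * math.log2(1 / p) for p in P.values() if p != 0)

P = {1: Fraction(1, 2), 2: Fraction(1, 4), 3: Fraction(1, 4)}
assert Pr(capture(P, lambda x: x == 3)) == Fraction(1, 4)   # Pr[x = 3]
assert H(P) == 1.5                                          # in bits

indicator = lambda b: 1 if b else 0   # just data processing, no brackets involved
assert indicator(2 == 3) == 0         # fine even with nothing random in sight
```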

Higher-level randomness

Hierarchy in randomness

There are many situations where we find ourselves groping for “random random expressions”. And not only that, but there is a sense of hierarchy to it:

  • You might have a random object and want to perform random experiments on it, adding some “inner randomness”.
  • Or, the other way around, you might have a computation that involves a random experiment and want to randomize one parameter of it, adding some “outer randomness”.

As an example of the former situation, suppose that you wanted to learn some function $f$ over an input distribution $\CD$, and you use a sample $\BS \sim \CD^m$ of $m$ data points from $\CD$ to learn some best fit $\HAT{f}_\BS$. Now, you want to know how likely it is that your learned $\HAT{f}_\BS$ is in fact close to $f$ on the whole distribution $\CD$, not just the sample $\BS$, so you want to make sure that on average over $\Bx \sim \CD$, the squared error $\p{\HAT{f}_\BS(\Bx) - f(\Bx)}^2$ is not too large. In this last expression, there are two sources of randomness: the sample $\BS$ (the outer randomness) and the input $\Bx$ (the inner randomness).

Crucially, they’re not on the same level! In $\HAT{f}_\BS(\Bx)$, there is a sense in which the randomness of $\Bx$ “comes after” that of $\BS$. Or put another way, we want to see it as “draw a random $\BS$, and for each fixed outcome of $\BS$, consider a random $\Bx$” rather than “draw a random $\Bx$, and for each fixed outcome of $\Bx$, consider a random $\BS$”. In our random comprehension notation, this is akin to allowing the random expression on the right hand side to itself be random: for example, we might represent the squared error $\p{\HAT{f}_\BS(\Bx) - f(\Bx)}^2$ as

\[ \t{``$\BS \sim \CD^m:\p{\Bx \sim \CD : \p{\HAT{f}_\BS(\Bx) - f(\Bx)}^2}$"} \]

or for notational simplicity

\[ \BS \sim \CD^m: \Bx \sim \CD : \p{\HAT{f}_\BS(\Bx) - f(\Bx)}^2. \]

More generally, it feels like we should be able to take any computation and make it “more random” by taking one part and replacing it by a random symbol, even if the computation was itself already random. That is, the notation should allow us to make any random expression such as $f(x,\By)$ more random by randomizing $x$ in an “outer loop”:

\[ \Bx \sim P: \By \sim Q: f(\Bx,\By). \]

Of course, we can keep adding more levels, e.g.

\[ \Bx \sim P : \By \sim Q : \Bz \sim R: f(\Bx,\By,\Bz). \]

Capturing the innermost level

When a random expression has several levels of randomness, we want to work (e.g. compute probabilities or averages) with the innermost randomness first, so that’s what the brackets should capture. For example, in our machine learning example, we should be able to write the probability that your model has a mean error greater than $\eps$ as

\[ \Pr\b{\E\b{\p{\HAT{f}_\BS(\Bx)-f(\Bx)}^2} > \eps}, \]

where the expectation is over the randomness of $\Bx$ only and the probability is over the randomness of $\BS$ only.
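
Here's a numeric sketch of the two levels, with Monte Carlo estimation in place of exact sums (everything in it, from the target function to the deliberately crude learner and the parameter values, is an illustrative assumption rather than something from the text):

```python
import random

random.seed(0)
f = lambda x: x                                    # target function to learn
D = [random.uniform(-1, 1) for _ in range(2000)]   # a finite stand-in for D
m, eps, trials = 20, 0.35, 500

def fit(S):
    """A deliberately crude learner: always predict the sample mean of f."""
    c = sum(f(x) for x in S) / len(S)
    return lambda x: c

def inner_mean_sq_error(f_hat):
    """Inner randomness: E over x ~ D of (f_hat(x) - f(x))^2, for one fixed S."""
    return sum((f_hat(x) - f(x)) ** 2 for x in D) / len(D)

# Outer randomness: Pr over S ~ D^m of the event "inner mean error exceeds eps".
bad = sum(inner_mean_sq_error(fit(random.choices(D, k=m))) > eps
          for _ in range(trials))
print(bad / trials)   # estimates Pr[ E[(f_hat_S(x) - f(x))^2] > eps ]
```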

And for all we know, the distribution of the inner symbols might even depend on the values of the outer random symbols! E.g. we could define a family of distributions $\CQ$ parameterized by $x$ and consider the expression

\[ \Bx \sim P: \By \sim \CQ(\Bx): f(\Bx,\By), \]

in which case it would make no sense to do things like “average $f(\Bx,\By)$ over $\Bx$ only”, expecting to get an expression that depends on $\By$.

So we’ll set the following rule for capturing distributions of higher-level random expressions: the brackets capture only the randomness of the innermost symbol(s) involved. E.g. the right way to resolve the comprehension

\[ \Bx \sim P: \By \sim Q: [f(\Bx,\By)] \]

is as

\[ \Bx \sim P: {}_{\By \sim Q}\b{f(\Bx,\By)}, \]

where ${}_{\By \sim Q}\b{f(\Bx,\By)}$ is now a distribution whose probabilities are random over $\Bx$ only:

\[ {}_{\By \sim Q}\b{f(\Bx,\By)}(z) = \sum_{y: f(\Bx,y)=z} Q(y). \]

This makes sense given our intuition that “we should be allowed to take any computation and make it random”. Indeed, if we started out with the computation $\E[f(x,\By)]$ and decided to make the “$x$” part random by introducing a random symbol $\Bx \sim P$, then the meaning of $\E[f(\Bx,\By)]$ ought to be “the value $\E[f(x,\By)]$, for a random $x$”, not “the expectation of $f(\Bx,\By)$ over the randomness of both $\Bx$ and $\By$, assuming they’re independent” nor “the value $\E[f(\Bx,y)]$, for a random $y$”.

If we bracket this expression a second time, i.e. $\b{\b{f(\Bx,\By)}}$, then this resolves to a distribution over distributions, $_{\Bx \sim P}\b{_{\By \sim Q}\b{f(\Bx,\By)}}$. The probability that it gives to some distribution $R$ is the probability that if you pick a random $x$ from $P$, the distribution $_{\By \sim Q}\b{f(x,\By)}$ happens to be exactly $R$:

\[ \al{ _{\Bx \sim P}\b{_{\By \sim Q}\b{f(\Bx,\By)}}(R) &= \sum_{x:R = {}_{\By \sim Q}\b{f(x,\By)}} P(x)\\ &= \sum_{x:\forall z:R(z) = \sum_{y: f(x,y)=z} Q(y)} P(x). } \]
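
Computationally, a doubly-bracketed expression really is a distribution whose values are themselves distributions. A minimal sketch (freezing the inner dicts so they can serve as outcomes; helper names illustrative):

```python
from fractions import Fraction

def capture(P, f):
    """[f(x)] for x ~ P, as sketched earlier."""
    Q = {}
    for x, p in P.items():
        Q[f(x)] = Q.get(f(x), 0) + p
    return Q

P = {0: Fraction(1, 2), 2: Fraction(1, 2)}   # outer level: x ~ P
Q = {0: Fraction(1, 2), 1: Fraction(1, 2)}   # inner level: y ~ Q
f = lambda x, y: x + y

# [f(x,y)] captures only the inner level: one distribution per outcome of x.
inner = lambda x: capture(Q, lambda y: f(x, y))

# [[f(x,y)]]: capture the outer level too, giving a distribution over distributions.
freeze = lambda d: frozenset(d.items())      # dicts aren't hashable, so freeze them
meta = capture(P, lambda x: freeze(inner(x)))
assert meta == {freeze({0: Fraction(1, 2), 1: Fraction(1, 2)}): Fraction(1, 2),
                freeze({2: Fraction(1, 2), 3: Fraction(1, 2)}): Fraction(1, 2)}
```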

Resolving ambiguity

In general, if we define random symbols like $\Bx \sim P$ and $\By \sim Q$ in text and then use expressions like $\E\b{f(\Bx,\By)}$, there can be ambiguity over what hierarchy is implied. It could be that $\Bx$ is the outer symbol, that $\By$ is the outer symbol, or that they “share a level” and we should consider them as jointly distributed but independent. In other words, $f(\Bx,\By)$ could mean any of the following:

\[ \al{ \Bx \sim P: \By \sim Q: f(\Bx, \By)&\\ \By \sim Q: \Bx \sim P: f(\Bx, \By)&\\ \Bx \sim P, \By \sim Q: f(\Bx, \By)& \quad \text{(understood as $(\Bx, \By) \sim P \times Q: f(\Bx,\By)$)}, } \]

and each of these possibilities gives a different meaning to $\E\b{f(\Bx,\By)}$.

Even if the relative order of the symbols is clear from the context, we might sometimes deal with expressions where one of the symbols “dropped out”. For example, suppose we’re dealing with the expectation

\[ \Bx \sim P: \By \sim Q: \E[\Bx+\By-\By]. \]

At this stage, the expectation should clearly be over $\By$, keeping $\Bx$ fixed. But as the computation progresses, $\By$ disappears, and we’d like to write

\[ \E[\Bx + \By - \By] = \E[\Bx], \]

but given that the rule is “always capture the innermost level”, the former is a random expression in $\Bx$, whereas (as written) the latter would be a fixed value!

To deal with such situations, we can specify explicitly which level should be captured. For example,

  • in the first scenario, we could write
    • $\E_\Bx[f(\Bx,\By)]$ to clarify that $\Bx$ was on its own at the innermost level (this yields a random expression in $\By$ only), or vice versa;
    • $\E_{\Bx,\By}[f(\Bx,\By)]$ to clarify that $\Bx$ and $\By$ were on the same level (this yields a deterministic value);
  • in the second scenario, we could write $\E[\Bx + \By - \By] = \E_\By[\Bx] = \Bx$ (in which case all three expressions are random in $\Bx$ only).

Of course, in any of these scenarios, one could just have chosen to write the declarations explicitly:

\[ \al{ &\E_{\Bx \sim P}[f(\Bx,\By)]\\ &\E_{\Bx \sim P, \By \sim Q}[f(\Bx,\By)]\\ &\E_{\By \sim Q}[\Bx + \By - \By] = \E_{\By \sim Q}[\Bx] = \Bx. } \]

And most of the time this is the sane thing to do. But the reason why we need default rules for the order of capture is that in the next section, we’ll deal with higher-level randomness where the levels were not created by explicitly declaring a random symbol drawn from a distribution.

Note: When the hierarchy is clear, this notation should not be used to force the brackets to capture a level that is further out than some symbol in the expression. For example, if the hierarchy is $\Bx \sim P: \By \sim Q: f(\Bx,\By)$, then talking about $\E_\By[f(\Bx,\By)]$ makes no sense. So when the relative levels are clear, the semantics of this subscript is just to ask the brackets to pretend that the symbol was present inside, as in $\E_\By\b\Bx$.

Examples

  • Previously, we defined the entropy as an operator $\H$ over distributions, which automatically defines $\H\b\Bx$ as $\H(P)$ for $P \ce \b\Bx$. We can equivalently define the latter as $\H[\Bx] = \E\b{\log\f{1}{[\Bx](\Bx)}}$. At first sight it looks like there might be a naming collision, but it actually resolves normally, in the following way:
    • in $[\Bx]$, $\Bx$ is the only random symbol, so it gets captured as $_{\Bx \sim P}[\Bx]$, which evaluates to the deterministic value $P$;
    • the whole expression is now $\E\b{\log \f{1}{P(\Bx)}}$;
    • in $\b{\log \f{1}{P(\Bx)}}$, there is again only one random symbol $\Bx$, which gets captured as $_{\Bx \sim P}\b{\log \f{1}{P(\Bx)}}$, giving some distribution $Q$ (whose values are the negative log-probabilities);
    • then we apply the expectation operator to $Q$, giving us the entropy of $P$.
  • Similarly, we can define the variance simply as $\Var[\Bx] \ce \E\b{\p{\Bx-\E[\Bx]}^2}$ without worrying about the two occurrences of $\Bx$ interfering.
    • And more generally, Wick products (e.g. $\inner{\Bx} \ce \Bx - \E[\Bx]$) and cumulants.
  • Coming back to equality of random expressions, if $\Bx$ and $\Bx'$ are level-$l$ random expressions over the same $l$ levels, then both “$[\Bx]=\b{\Bx'}$” and “$\Pr\b{\Bx = \Bx'}=1$” are level-$(l-1)$ random events, where only the innermost level has dropped out.

Manipulating the sample space

So far, the only way that we’ve manipulated the hierarchy of randomness was by using brackets to “pop” the innermost level off, turning it into a distribution. Now let’s look at operations which can transform the innermost level while keeping it random, or even split it into two levels of randomness.

Folding mixtures

Consider an expression like

\[ \Bx \sim P:\By \sim \CQ(\Bx):\By, \]

where $\CQ\p\*$ is a parameterized family of distributions. If we consider the two levels of randomness as a whole, then the distribution of $\By$ could be called a mixture of the distributions $\CQ(x)$ weighted according to $P$, and we might say that we applied a stochastic map $\CQ$ to $\Bx$ in order to produce $\By$.

The distribution of the mixture in this case is given by $\E\b{\CQ\p\Bx}$ (where the expectation averages over the distributions as vectors). More generally, it can be useful to “fold” or “convolve” the two innermost levels of randomness into one. We’ll denote this by the notation $\mix{\*}$ (perhaps because we’re “rounding up” the levels of randomness together). Formally, for some random expression $\Be$, $\mix{\Be}$ is distributed according to $\E\b{\b\Be}$. For example, for the mixture described above, we have $\b{\mix{\By}} = \E\b{\b\By} = \E\b{\CQ(\Bx)}$, as desired.
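
A minimal sketch of the fold, with a made-up parameterized family `kernel` standing in for $\CQ$:

```python
from fractions import Fraction

def kernel(x):
    """Q(x): a family of distributions parameterized by x."""
    return {x: Fraction(3, 4), x + 1: Fraction(1, 4)}

def fold(P, kernel):
    """The mixture E[Q(x)] for x ~ P: average the distributions Q(x)
    with weights P(x), collapsing two levels of randomness into one."""
    M = {}
    for x, px in P.items():
        for y, qy in kernel(x).items():
            M[y] = M.get(y, 0) + px * qy
    return M

P = {0: Fraction(1, 2), 1: Fraction(1, 2)}
assert fold(P, kernel) == {0: Fraction(3, 8), 1: Fraction(1, 2), 2: Fraction(1, 8)}
```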

Another example would be to consider the result of applying a random function

\[ \Bx \sim P: \Bf \sim F: \Bf\p\Bx. \]

In this case, the randomness of $\Bf$ doesn’t “depend on $\Bx$”, so we could have just seen $\Bx$ and $\Bf$ as jointly drawn from $P \x F$. But we can still see the overall distribution as a mixture of the distributions $[\Bf(x)]$ for each possible value of $x$. And the folding operation gives us a random expression $\mix{\Bf\p\Bx}$ whose distribution is given by $\b{\mix{\Bf\p\Bx}} = {}_{\Bx \sim P,\Bf \sim F}\b{\Bf\p\Bx}$.

Note: I don’t think this $\mix\*$ notation has great use cases in practice, but I think it’ll be useful to have it when we talk about conditionals later on, because it’s more intuitive to think about folding the randomness than to have to parse complicated expressions like $\E\b{\b\Be}$.

Zooming in on a part of the space

#to-write

  • if $\BE$ is a random event, $\at{\BY}{\BE}$ denotes the “restriction” of $\BY$ to $\BE$: the random variable whose distribution is the one $\BY$ has after you condition on event $\BE$ being true (see the sketch after this list)
    • notation mnemonic: it’s $\BY$ “at” $\BE$ in the sense of $\BY$ on the part of the space where $\BE$
    • its distribution is described by: $\b{\at{\BY}{\BE}}(y) \ce \f{\b{(\BY,\BE)}(y,\r{true})}{\b{\BE}(\r{true})} = \f{\Pr[\BY = y \and \BE]}{\Pr\b{\BE}}$
    • can generalize to arbitrary factors (instead of just random events) (e.g. in the context of undirected graphical models), where $\BE$ takes any value between $0$ and $1$
      • then $\b{\at{\BY}{\BE}}(y) \ce \f{\E[\1\p{\BY=y}\BE]}{\E[\BE]}$
  • rules:
    • only applies to the innermost level (clear if you look at the definition of $\bat\By\BE$ above)
    • since it’s more of a modification of the sample space than an operation over a given random expression, should always apply it right before capturing the distribution
    • but in text, might informally talk about $\at{\BY}{\BE}$ and $\at{\BZ}{\BE}$ as jointly distributed if $\BY$ and $\BZ$ were jointly distributed
      • but the right way to see them is as the two components of $\at{\p{\BY,\BZ}}{\BE}$
  • obvious notes:
    • in $\at{\BY}{\BE}$, $\BY$ and $\BE$ are not taken for their values; their randomness is being captured!
    • $\at{\BY}{\BE}$ depends on the overall (joint) distribution of $\BY$ and $\BE$
      • so it might have been maximally honest to denote it as something like $\r{zoom}\b{\BY,\BE}$?
  • when we write an expectation/probability/etc conditioned on an event $\BE$, we really mean the expectation/probability/etc of the corresponding restriction
    • e.g. $\E\bco{\BY}{\BX=x}$ really means $\E\b{\at{\BY}{\BX=x}}$
    • in fact it might make sense to use $\co\BY\BE$ to mean $\at\BY\BE$ more generally; it collides with our notation for conditionals, but it seems mostly fine since we’ll probably rarely want to condition on either value of a boolean expression, and if we want to do this we could just transform it into a $\zo$ value using the operator $\1\p\*$
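
As promised above, a minimal sketch of the restriction operation, representing the joint randomness of $\BY$ and $\BE$ by one underlying sample point plus a predicate on it (an illustrative simplification; helper names made up):

```python
from fractions import Fraction

def restrict(P, event):
    """[Y | E]: keep the part of the space where the event holds and renormalize,
    so that [Y|E](y) = Pr[Y = y and E] / Pr[E]."""
    Z = sum(p for y, p in P.items() if event(y))   # Pr[E]
    return {y: p / Z for y, p in P.items() if event(y)}

Y = {1: Fraction(1, 4), 2: Fraction(1, 4), 3: Fraction(1, 2)}
assert restrict(Y, lambda y: y >= 2) == {2: Fraction(1, 3), 3: Fraction(2, 3)}
```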

Slicing up the space

#to-write

Zooming in on one value

  • suppose $\Bx,\By$ jointly distributed and want to study conditionals like $\at\By{\Bx = x}$ for fixed values $x$
  • let $\By|^\Bx_x$ be “shorthand” for $\at{\By}{\Bx = x}$
    • tbh it doesn’t really need to be an arrow; could just be $\left.\By\right\lvert^\Bx_x$
    • or $\By|^\Bx(x)$ to emphasize that it’s a “function”?
      • i’m a bit uneasy with the idea of having “functions” returning random expressions; the types seem messed up
  • that allows you to replace $x$ by things that are in the same level of randomness without it being interpreted as a single event
    • which would make things like $\left.\BY_i\strut\right|^{\BX_i}_{\~f_i\p{\BX_{i-1}}}$ valid
  • find a couple of VI apps?
    • though i’d expect that ML-style notation will be king here

Conditionals

  • let $\co{\By}{\Bx}$ be a shortcut for $\left.\By\strut\right|^\Bx_\Bx$
    • both conditioned on $\Bx$ and evaluated at $\Bx$
    • whereas the function version cuts the randomness in half and keeps only one part, this keeps both parts?
    • in particular $\b{\bco\By\Bx}(Q) = \Pr_{\Bx' \sim \b{\Bx}}\b{\bco{\By}{\Bx=\Bx'} = Q}$
  • it’s a level-$2$ random expression (see the sketch after this list)
    • in particular, $\E\bco{\By}{\Bx}$ is a random variable depending on $\Bx$, giving the value $\E\bco{\By}{\Bx = x}$ for each value $x$ of $\Bx$
    • outer level is jointly distributed with $\Bx$
      • note that we really want this: it’s okay to talk about things like $\Bx\E\bco{\By}{\Bx}$ and it would be awkward to try and pass $\Bx$ through the $\E$?
      • and it’s also eminently okay given where it came from: $\left.\By\strut\right|^\Bx_x$ is just a totally fine expression in $x$ that we should be allowed to want to randomize
  • sort of an inverse of mixture:
    • we have $\b{\mix{\co\By\Bx}} = \b\By$ (but not uniquely)
    • and the joint $\p{\Bx,\pco\By\Bx}$ (or equivalently, $\co{\p{\Bx,\By}}{\Bx}$) is the unique random expression $\Be$ such that $\b{\mix{\Be}} = \b{\p{\Bx,\By}}$ and the first component is made entirely of point masses (i.e. $\Pr\b{\exists x:\b{\Be_1} = 1_x} = 1$)
      • more precisely, $\b{\b\Be}$ is unique
  • $\E\b{\E\bco\By\Bx} = \E\b\By$
    • but this is a special property of $\E$ due to its linearity (?); this is not true for all operators
    • e.g. $\Var[\By] = \Var\b{\E\bco\By\Bx} + \E\b{\Var\bco\By\Bx}$
  • $\cond{\Bx}{\Bx}$ is not quite the same thing as $\Bx$: it’s $\Bx$ but where we replaced each outcome $x$ by a random variable that has value $x$ with probability $1$
    • corresponding (random) distribution: $\bco\Bx\Bx = 1_\Bx$
    • same thing for $\cond{f(\Bx)}{\Bx}$ more generally: $\bco{f(\Bx)}\Bx = 1_{f\p\Bx}$
      • so for example, $\E\bco{f(\Bx)}{\Bx}=f(\Bx)$
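
As promised, a minimal sketch of conditionals on a small joint distribution, checking the tower property $\E\b{\E\bco\By\Bx} = \E\b\By$ numerically (helper names illustrative):

```python
from fractions import Fraction

# A joint distribution R over pairs (x, y).
R = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
     (1, 0): Fraction(1, 8), (1, 1): Fraction(3, 8)}

def conditional(R, x0):
    """[y | x = x0]: restrict the joint to {x = x0} and keep the y part."""
    Z = sum(p for (x, _), p in R.items() if x == x0)
    return {y: p / Z for (x, y), p in R.items() if x == x0}

def marginal(R, i):
    M = {}
    for pair, p in R.items():
        M[pair[i]] = M.get(pair[i], 0) + p
    return M

def E(Q):
    return sum(p * y for y, p in Q.items())

Px, Py = marginal(R, 0), marginal(R, 1)
# E[y | x] is a random expression in x: one value per outcome of x.
cond_mean = {x: E(conditional(R, x)) for x in Px}
assert cond_mean == {0: Fraction(1, 2), 1: Fraction(3, 4)}

# Tower property: averaging E[y|x] over x recovers E[y].
assert sum(Px[x] * cond_mean[x] for x in Px) == E(Py)
```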

Using conditionals

  • what people traditionally call the “conditional entropy” isn’t $\H\bco{\BY}{\BX}$ (which is a random variable depending on $\BX$) but instead $\E\b{\H\bco{\BY}{\BX}}$ (checked numerically in the sketch after this list)
    • so the chain rule of entropy is more properly written as $\H\b{\BX,\BY} = \H\b{\BX} + \E\b{\H\bco{\BY}{\BX}}$
    • and i think it makes sense for this random variable $\H\bco{\BY}{\BX}$ to be the conditional entropy: it is the entropy of the conditional over $\BX$, and it itself also depends on $\BX$: one way to describe it is that it takes the value $\H\bat{\BY}{\BX = x}$ when $\BX$ takes the value $x$
    • similarly, the “consistent conditioning” property of Ratio divergences is more properly written as $D^f\bff{\BX,\BY'}{\BX,\BY} = \E\b{D^f\bffco{\BY'}{\BX}{\BY}{\BX}}$ (where $D^f\bffco{\BY'}{\BX}{\BY}{\BX}$ is a random variable depending on $\BX$, and the expectation is over $\BX$)
      • more precisely, $D^f\pff{\b{\BX,\BY'}}{\b{\BX,\BY}} = \E\b{D^f\pff{\bco{\BY'}{\BX}}{\bco{\BY}{\BX}}}$
      • and maybe it even makes it clear that there’s no super natural answer to the question of “what variable should you be averaging over when you try define the conditional divergence $D^f\pff{\b{\BX',\BY'}}{\b{\BX,\BY}}$”: $\bco{\BY'}{\BX'}$ and $\bco{\BY}{\BX}$ are both random variables, and they’re (a priori) not jointly distributed
  • $\Cov[\BX, \BY] = \Cov\b{\BX, \E\bco{\BY}{\BX}}$
    • the expression makes sense because $\E\bco{\BY}{\BX}$ is indeed jointly distributed with $\BX$
    • it’s clear that $\E\b\Bx\E\b\By = \E\b\Bx\E\b{\E\bco\By\Bx}$, so it suffices to show $\E\b{\Bx\By} = \E\b{\Bx\E\bco\By\Bx}$:
\[ \al{ \E\b{\Bx\By} &= \E\b{\E\bco{\Bx\By}\Bx}\\ &= \E\b{\E\b{\Bx\pco{\By}\Bx}}\tag{$\bco\Bx\Bx = 1_\Bx$}\\ &= \E\b{\Bx\E\bco{\By}\Bx}\tag{constants factor out}\\ } \]
  • make sense of $\cond{\BZ}{\BX} = \at{\BZ}{\BY = \cond{\BY}{\BX}}$ (provided that we have the Markov property $\BX \leftarrow \BY \rightarrow \BZ$)
    • i guess first of all you’d have to write it as $\co\BZ\BX = \BZ|^\BY_{\co\BY\BX}$?
      • but then the LHS is level-2 ($\BZ$’s distribution is random over $\BX$) while the RHS seems level-3 ($\BZ$’s distribution is random over $\BY$, whose distribution is random over $\BX$)
      • so maybe the claim we’re trying to make is that $\bco\BZ\BX = \E\b{\b{\BZ|^\BY_{\co\BY\BX}}}$?
      • aka $\co\BZ\BX = \mix{\BZ|^\BY_{\co\BY\BX}}$
        • where the mixing folds the randomness of $\BY$ into the randomness of $\BZ$!
  • should we talk about $\D\b{\at{(\cond{\BX_S}{\BX_{\comp{S}}})}{\BE}}$ or $\D\b{\cond{\at{\BX_S}{\BE}}{\at{\BX_{\comp{S}}}{\BE}}}$?
    • the former only conditions over the $\BX_S$ part (because you can only do zooms for the innermost randomness)
    • the latter doesn’t make sense unless you consider $\at{\BX_S}{\BE}$ and $\at{\BX_{\comp{S}}}{\BE}$ to be joint
    • really what you should do is to let $\BX' \ce \at\BX{\BE}$ then consider $\D\bco{\BX'_S}{\BX'_\comp{S}}$
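
To make the chain-rule point above concrete, here's a minimal numeric sketch checking $\H\b{\BX,\BY} = \H\b{\BX} + \E\b{\H\bco{\BY}{\BX}}$ on a small joint distribution (same illustrative helpers as in the earlier sketches):

```python
import math
from fractions import Fraction

R = {(0, 0): Fraction(1, 4), (0, 1): Fraction(1, 4),
     (1, 0): Fraction(1, 8), (1, 1): Fraction(3, 8)}

def H(P):
    """Entropy in bits."""
    return sum(p * math.log2(1 / p) for p in P.values() if p != 0)

def marginal(R, i):
    M = {}
    for pair, p in R.items():
        M[pair[i]] = M.get(pair[i], 0) + p
    return M

def conditional(R, x0):
    Z = sum(p for (x, _), p in R.items() if x == x0)
    return {y: p / Z for (x, y), p in R.items() if x == x0}

Px = marginal(R, 0)
# E[H[Y|X]]: the entropy of the conditional is random over x; average it out.
cond_H = sum(Px[x] * H(conditional(R, x)) for x in Px)
assert abs(H(R) - (H(Px) + cond_H)) < 1e-12   # H[X,Y] = H[X] + E[H[Y|X]]
```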

#to-think related jumble of issues:

  • figure out the best possible way to write the math of variational inference, then give up on using this notation for ML once and for all
  • is $\ff{\Bx'}{\Bx} \ce \f{\b{\Bx'}}{\b{\Bx}}\p{\Bx'}$ meaningful?
  • how should you actually notate cross-entropy? (or entropy, for that matter)
    • actually cross-entropy is very specific to $f(t) = t \log t$: if we let $g(t) \ce \f{f(t)}{t}$ it’s only because $g\p{\f Q P} - g(Q)$ has the nice form of $g\p{\f1p}$ that things work out!
      • yup, and for general ratio divergences we don’t have $\H_Q\p{P} \ge \H\p{Q}$
    • … i guess this is in the same way that entropy is nice because it’s also the divergence $\E\b{\D\pff{1_\Bx}{\b\Bx}}$
      • and that view is much more amenable to cross-entropy, which is $\E\b{\D\pff{1_\By}{\b{\Bx}}}$
      • suggests you might want to denote cross-entropy as something like
        • $\H\pat{P}{Q}$? (though this would be )
    • (maybe you should give up on the double bar having a consistent meaning?)
  1. The technical term is “almost surely”, to account for edge cases with continuous random variables where they technically could be unequal, but this happens with probability $0$.