$\require{mathtools} % %%% GENERIC MATH %%% % % Environments \newcommand{\al}[1]{\begin{align}#1\end{align}} % need this for \tag{} to work \renewcommand{\r}{\mathrm} % % Greek \newcommand{\eps}{\epsilon} \newcommand{\veps}{\varepsilon} \newcommand{\Om}{\Omega} \newcommand{\om}{\omega} \newcommand{\Th}{\Theta} \let\fi\phi % because it looks like an f \let\phi\varphi % because it looks like a p % % Miscellaneous shortcuts % .. over and under \newcommand{\ss}[1]{_{\substack{#1}}} \newcommand{\ob}{\overbrace} \newcommand{\ub}{\underbrace} \newcommand{\ol}{\overline} \newcommand{\tld}{\widetilde} \newcommand{\HAT}{\widehat} \newcommand{\f}{\frac} \newcommand{\s}[2]{#1 /\mathopen{}#2} \newcommand{\rt}[1]{ {\sqrt{#1}}} % .. relations \newcommand{\sr}{\stackrel} \newcommand{\sse}{\subseteq} \newcommand{\ce}{\coloneqq} \newcommand{\ec}{\eqqcolon} \newcommand{\ap}{\approx} \newcommand{\ls}{\lesssim} \newcommand{\gs}{\gtrsim} % .. miscer \newcommand{\q}{\quad} \newcommand{\qq}{\qquad} \newcommand{\heart}{\heartsuit} % % Delimiters % (I needed to create my own because the MathJax version of \DeclarePairedDelimiter doesn't have \mathopen{} and that messes up the spacing) % .. one-part \newcommand{\p}[1]{\mathopen{}\left( #1 \right)} \newcommand{\b}[1]{\mathopen{}\left[ #1 \right]} \newcommand{\set}[1]{\mathopen{}\left\{ #1 \right\}} \newcommand{\abs}[1]{\mathopen{}\left\lvert #1 \right\rvert} \newcommand{\floor}[1]{\mathopen{}\left\lfloor #1 \right\rfloor} \newcommand{\ceil}[1]{\mathopen{}\left\lceil #1 \right\rceil} \newcommand{\inner}[1]{\mathopen{}\left\langle #1 \right\rangle} % .... (use phantom to force at least the standard height of double bars) \newcommand{\norm}[1]{\mathopen{}\left\lVert #1 \vphantom{f} \right\rVert} \newcommand{\frob}[1]{\norm{#1}_\mathrm{F}} %% .. two-part \newcommand{\incond}[2]{#1 \mathop{}\middle|\mathop{} #2} \newcommand{\cond}[2]{ {\left.\incond{#1}{#2}\right.}} \newcommand{\pco}[2]{\p{\incond{#1}{#2}}} \newcommand{\bco}[2]{\b{\incond{#1}{#2}}} \newcommand{\setco}[2]{\set{\incond{#1}{#2}}} \newcommand{\at}[2]{ {\left.#1\right|_{#2}}} \newcommand{\pat}[2]{\p{\at{#1}{#2}}} \newcommand{\bat}[2]{\b{\at{#1}{#2}}} % ..... (use phantom to force at least the standard height of double bar) \newcommand{\oldpara}[2]{#1\vphantom{f} \mathop{}\middle\|\mathop{} #2} %\newcommand{\para}[2]{#1\vphantom{f} \mathop{}\middle\|\mathop{} #2} \newcommand{\para}[2]{\mathchoice{\begin{matrix}#1\\\hdashline#2\end{matrix}}{\begin{smallmatrix}#1\\\hdashline#2\end{smallmatrix}}{\begin{smallmatrix}#1\\\hdashline#2\end{smallmatrix}}{\begin{smallmatrix}#1\\\hdashline#2\end{smallmatrix}}} \newcommand{\ppa}[2]{\p{\para{#1}{#2}}} \newcommand{\bpa}[2]{\b{\para{#1}{#2}}} %\newcommand{\bpaco}[4]{\bpa{\incond{#1}{#2}}{\incond{#3}{#4}}} \newcommand{\bpaco}[4]{\bpa{\cond{#1}{#2}}{\cond{#3}{#4}}} % % Levels of closeness \newcommand{\scirc}[1]{\sr{\circ}{#1}} \newcommand{\sdot}[1]{\sr{.}{#1}} \newcommand{\slog}[1]{\sr{\log}{#1}} \newcommand{\createClosenessLevels}[7]{ \newcommand{#2}{\mathrel{(#1)}} \newcommand{#3}{\mathrel{#1}} \newcommand{#4}{\mathrel{#1\!\!#1}} \newcommand{#5}{\mathrel{#1\!\!#1\!\!#1}} \newcommand{#6}{\mathrel{(\sdot{#1})}} \newcommand{#7}{\mathrel{(\slog{#1})}} } \let\lt\undefined \let\gt\undefined % .. vanilla versions (is it within a constant?) 
\newcommand{\ez}{\scirc=} \newcommand{\eq}{\simeq} \newcommand{\eqq}{\mathrel{\eq\!\!\eq}} \newcommand{\eqqq}{\mathrel{\eq\!\!\eq\!\!\eq}} \newcommand{\lez}{\scirc\le} \newcommand{\lq}{\preceq} \newcommand{\lqq}{\mathrel{\lq\!\!\lq}} \newcommand{\lqqq}{\mathrel{\lq\!\!\lq\!\!\lq}} \newcommand{\gez}{\scirc\ge} \newcommand{\gq}{\succeq} \newcommand{\gqq}{\mathrel{\gq\!\!\gq}} \newcommand{\gqqq}{\mathrel{\gq\!\!\gq\!\!\gq}} \newcommand{\lz}{\scirc<} \newcommand{\lt}{\prec} \newcommand{\ltt}{\mathrel{\lt\!\!\lt}} \newcommand{\lttt}{\mathrel{\lt\!\!\lt\!\!\lt}} \newcommand{\gz}{\scirc>} \newcommand{\gt}{\succ} \newcommand{\gtt}{\mathrel{\gt\!\!\gt}} \newcommand{\gttt}{\mathrel{\gt\!\!\gt\!\!\gt}} % .. dotted versions (is it equal in the limit?) \newcommand{\ed}{\sdot=} \newcommand{\eqd}{\sdot\eq} \newcommand{\eqqd}{\sdot\eqq} \newcommand{\eqqqd}{\sdot\eqqq} \newcommand{\led}{\sdot\le} \newcommand{\lqd}{\sdot\lq} \newcommand{\lqqd}{\sdot\lqq} \newcommand{\lqqqd}{\sdot\lqqq} \newcommand{\ged}{\sdot\ge} \newcommand{\gqd}{\sdot\gq} \newcommand{\gqqd}{\sdot\gqq} \newcommand{\gqqqd}{\sdot\gqqq} \newcommand{\ld}{\sdot<} \newcommand{\ltd}{\sdot\lt} \newcommand{\lttd}{\sdot\ltt} \newcommand{\ltttd}{\sdot\lttt} \newcommand{\gd}{\sdot>} \newcommand{\gtd}{\sdot\gt} \newcommand{\gttd}{\sdot\gtt} \newcommand{\gtttd}{\sdot\gttt} % .. log versions (is it equal up to log?) \newcommand{\elog}{\slog=} \newcommand{\eqlog}{\slog\eq} \newcommand{\eqqlog}{\slog\eqq} \newcommand{\eqqqlog}{\slog\eqqq} \newcommand{\lelog}{\slog\le} \newcommand{\lqlog}{\slog\lq} \newcommand{\lqqlog}{\slog\lqq} \newcommand{\lqqqlog}{\slog\lqqq} \newcommand{\gelog}{\slog\ge} \newcommand{\gqlog}{\slog\gq} \newcommand{\gqqlog}{\slog\gqq} \newcommand{\gqqqlog}{\slog\gqqq} \newcommand{\llog}{\slog<} \newcommand{\ltlog}{\slog\lt} \newcommand{\lttlog}{\slog\ltt} \newcommand{\ltttlog}{\slog\lttt} \newcommand{\glog}{\slog>} \newcommand{\gtlog}{\slog\gt} \newcommand{\gttlog}{\slog\gtt} \newcommand{\gtttlog}{\slog\gttt} % % Miscellaneous \newcommand{\LHS}{\mathrm{LHS}} \newcommand{\RHS}{\mathrm{RHS}} % .. operators \DeclareMathOperator{\poly}{poly} \DeclareMathOperator{\polylog}{polylog} \DeclareMathOperator{\quasipoly}{quasipoly} \DeclareMathOperator{\negl}{negl} \DeclareMathOperator*{\argmin}{arg\,min} \DeclareMathOperator*{\argmax}{arg\,max} % .. functions \DeclareMathOperator{\id}{id} \DeclareMathOperator{\sign}{sign} \DeclareMathOperator{\err}{err} \DeclareMathOperator{\ReLU}{ReLU} % .. analysis \let\d\undefined \newcommand{\d}{\operatorname{d}\mathopen{}} \newcommand{\df}[2]{ {\f{\d #1}{\d #2}}} \newcommand{\ds}[2]{ {\s{\d #1}{\d #2}}} \newcommand{\part}{\partial} \newcommand{\partf}[2]{\f{\part #1}{\part #2}} \newcommand{\parts}[2]{\s{\part #1}{\part #2}} \newcommand{\grad}[1]{\mathop{\nabla\!_{#1}}} % .. 
sets of numbers \newcommand{\N}{\mathbb{N}} \newcommand{\Z}{\mathbb{Z}} \newcommand{\R}{\mathbb{R}} \newcommand{\C}{\mathbb{C}} \newcommand{\F}{\mathbb{F}} % %%% SPECIALIZED MATH %%% % % Logic \renewcommand{\and}{\wedge} \newcommand{\AND}{\bigwedge} \newcommand{\or}{\vee} \newcommand{\OR}{\bigvee} \newcommand{\xor}{\oplus} \newcommand{\XOR}{\bigoplus} \newcommand{\union}{\cup} \newcommand{\inter}{\cap} \newcommand{\UNION}{\bigcup} \newcommand{\INTER}{\bigcap} \newcommand{\comp}{\overline} \newcommand{\true}{\r{true}} \newcommand{\false}{\r{false}} \newcommand{\tf}{\set{\true,\false}} \DeclareMathOperator{\One}{\mathbb{1}} \DeclareMathOperator{\1}{\mathbb{1}} % % Linear algebra \renewcommand{\span}{\mathrm{span}} \DeclareMathOperator{\rank}{rank} \DeclareMathOperator{\proj}{proj} \DeclareMathOperator{\dom}{dom} \DeclareMathOperator{\Img}{Im} \newcommand{\transp}{\mathsf{T}} \renewcommand{\t}{^\transp} % ... named tensors \newcommand{\namedtensorstrut}{\vphantom{fg}} % milder than \mathstrut \newcommand{\name}[1]{\mathsf{\namedtensorstrut #1}} \newcommand{\nbin}[2]{\mathbin{\underset{\substack{#1}}{\namedtensorstrut #2}}} \newcommand{\ndot}[1]{\nbin{#1}{\odot}} \newcommand{\ncat}[1]{\nbin{#1}{\oplus}} \newcommand{\nsum}[1]{\sum\limits_{\substack{#1}}} \newcommand{\nfun}[2]{\mathop{\underset{\substack{#1}}{\namedtensorstrut\mathrm{#2}}}} \newcommand{\ndef}[2]{\newcommand{#1}{\name{#2}}} \newcommand{\nt}[1]{^{\transp(#1)}} % % Probability \newcommand{\Normal}{\mathcal{N}} \let\Pr\undefined \DeclareMathOperator*{\Pr}{Pr} \DeclareMathOperator*{\G}{\mathbb{G}} \DeclareMathOperator*{\Odds}{Od} \DeclareMathOperator*{\E}{E} \DeclareMathOperator*{\Var}{Var} \DeclareMathOperator*{\Cov}{Cov} \DeclareMathOperator*{\corr}{corr} \DeclareMathOperator*{\median}{median} \newcommand{\dTV}{d_{\mathrm{TV}}} \newcommand{\dHel}{d_{\mathrm{Hel}}} \newcommand{\dJS}{d_{\mathrm{JS}}} % ... information theory \let\H\undefined \DeclareMathOperator*{\H}{H} \DeclareMathOperator*{\I}{I} \DeclareMathOperator*{\D}{D} % %%% SPECIALIZED COMPUTER SCIENCE %%% % % Complexity classes % .. classical \newcommand{\Poly}{\mathsf{P}} \newcommand{\NP}{\mathsf{NP}} \newcommand{\PH}{\mathsf{PH}} \newcommand{\PSPACE}{\mathsf{PSPACE}} \renewcommand{\L}{\mathsf{L}} % .. probabilistic \newcommand{\formost}{\mathsf{Я}} \newcommand{\RP}{\mathsf{RP}} \newcommand{\BPP}{\mathsf{BPP}} \newcommand{\MA}{\mathsf{MA}} \newcommand{\AM}{\mathsf{AM}} \newcommand{\IP}{\mathsf{IP}} \newcommand{\RL}{\mathsf{RL}} % .. circuits \newcommand{\NC}{\mathsf{NC}} \newcommand{\AC}{\mathsf{AC}} \newcommand{\ACC}{\mathsf{ACC}} \newcommand{\TC}{\mathsf{TC}} \newcommand{\Ppoly}{\mathsf{P}/\poly} \newcommand{\Lpoly}{\mathsf{L}/\poly} % .. resources \newcommand{\TIME}{\mathsf{TIME}} \newcommand{\SPACE}{\mathsf{SPACE}} \newcommand{\TISP}{\mathsf{TISP}} \newcommand{\SIZE}{\mathsf{SIZE}} % .. 
keywords \newcommand{\co}{\mathsf{co}} \newcommand{\Prom}{\mathsf{Promise}} % % Boolean analysis \newcommand{\zo}{\set{0,1}} \newcommand{\pmo}{\set{\pm 1}} \newcommand{\zpmo}{\set{0,\pm 1}} \newcommand{\harpoon}{\!\upharpoonright\!} \newcommand{\rr}[2]{#1\harpoon_{#2}} \newcommand{\Fou}[1]{\widehat{#1}} \DeclareMathOperator{\Ind}{\mathrm{Ind}} \DeclareMathOperator{\Inf}{\mathrm{Inf}} \newcommand{\Der}[1]{\operatorname{D}_{#1}\mathopen{}} \newcommand{\Exp}[1]{\operatorname{E}_{#1}\mathopen{}} \DeclareMathOperator{\Stab}{\mathrm{Stab}} \DeclareMathOperator{\T}{T} \DeclareMathOperator{\sens}{\mathrm{s}} \DeclareMathOperator{\bsens}{\mathrm{bs}} \DeclareMathOperator{\fbsens}{\mathrm{fbs}} \DeclareMathOperator{\Cert}{\mathrm{C}} \DeclareMathOperator{\DT}{\mathrm{DT}} \DeclareMathOperator{\CDT}{\mathrm{CDT}} % canonical \DeclareMathOperator{\ECDT}{\mathrm{ECDT}} \DeclareMathOperator{\CDTv}{\mathrm{CDT_{vars}}} \DeclareMathOperator{\ECDTv}{\mathrm{ECDT_{vars}}} \DeclareMathOperator{\CDTt}{\mathrm{CDT_{terms}}} \DeclareMathOperator{\ECDTt}{\mathrm{ECDT_{terms}}} \DeclareMathOperator{\CDTw}{\mathrm{CDT_{weighted}}} \DeclareMathOperator{\ECDTw}{\mathrm{ECDT_{weighted}}} \DeclareMathOperator{\AvgDT}{\mathrm{AvgDT}} \DeclareMathOperator{\PDT}{\mathrm{PDT}} % partial decision tree \DeclareMathOperator{\DTsize}{\mathrm{DT_{size}}} \DeclareMathOperator{\W}{\mathbf{W}} % .. functions (small caps sadly doesn't work) \DeclareMathOperator{\Par}{\mathrm{Par}} \DeclareMathOperator{\Maj}{\mathrm{Maj}} \DeclareMathOperator{\HW}{\mathrm{HW}} \DeclareMathOperator{\Thr}{\mathrm{Thr}} \DeclareMathOperator{\Tribes}{\mathrm{Tribes}} \DeclareMathOperator{\RotTribes}{\mathrm{RotTribes}} \DeclareMathOperator{\CycleRun}{\mathrm{CycleRun}} \DeclareMathOperator{\SAT}{\mathrm{SAT}} \DeclareMathOperator{\UniqueSAT}{\mathrm{UniqueSAT}} % % Dynamic optimality \newcommand{\OPT}{\mathsf{OPT}} \newcommand{\Alt}{\mathsf{Alt}} \newcommand{\Funnel}{\mathsf{Funnel}} % % Alignment \DeclareMathOperator{\Amp}{\mathrm{Amp}} % %%% TYPESETTING %%% % % In text \renewcommand{\th}{^{\mathrm{th}}} \newcommand{\degree}{^\circ} % % Fonts % .. 
bold \newcommand{\BA}{\boldsymbol{A}} \newcommand{\BB}{\boldsymbol{B}} \newcommand{\BC}{\boldsymbol{C}} \newcommand{\BD}{\boldsymbol{D}} \newcommand{\BE}{\boldsymbol{E}} \newcommand{\BF}{\boldsymbol{F}} \newcommand{\BG}{\boldsymbol{G}} \newcommand{\BH}{\boldsymbol{H}} \newcommand{\BI}{\boldsymbol{I}} \newcommand{\BJ}{\boldsymbol{J}} \newcommand{\BK}{\boldsymbol{K}} \newcommand{\BL}{\boldsymbol{L}} \newcommand{\BM}{\boldsymbol{M}} \newcommand{\BN}{\boldsymbol{N}} \newcommand{\BO}{\boldsymbol{O}} \newcommand{\BP}{\boldsymbol{P}} \newcommand{\BQ}{\boldsymbol{Q}} \newcommand{\BR}{\boldsymbol{R}} \newcommand{\BS}{\boldsymbol{S}} \newcommand{\BT}{\boldsymbol{T}} \newcommand{\BU}{\boldsymbol{U}} \newcommand{\BV}{\boldsymbol{V}} \newcommand{\BW}{\boldsymbol{W}} \newcommand{\BX}{\boldsymbol{X}} \newcommand{\BY}{\boldsymbol{Y}} \newcommand{\BZ}{\boldsymbol{Z}} \newcommand{\Ba}{\boldsymbol{a}} \newcommand{\Bb}{\boldsymbol{b}} \newcommand{\Bc}{\boldsymbol{c}} \newcommand{\Bd}{\boldsymbol{d}} \newcommand{\Be}{\boldsymbol{e}} \newcommand{\Bf}{\boldsymbol{f}} \newcommand{\Bg}{\boldsymbol{g}} \newcommand{\Bh}{\boldsymbol{h}} \newcommand{\Bi}{\boldsymbol{i}} \newcommand{\Bj}{\boldsymbol{j}} \newcommand{\Bk}{\boldsymbol{k}} \newcommand{\Bp}{\boldsymbol{p}} \newcommand{\Bq}{\boldsymbol{q}} \newcommand{\Br}{\boldsymbol{r}} \newcommand{\Bs}{\boldsymbol{s}} \newcommand{\Bt}{\boldsymbol{t}} \newcommand{\Bu}{\boldsymbol{u}} \newcommand{\Bv}{\boldsymbol{v}} \newcommand{\Bw}{\boldsymbol{w}} \newcommand{\Bx}{\boldsymbol{x}} \newcommand{\By}{\boldsymbol{y}} \newcommand{\Bz}{\boldsymbol{z}} \newcommand{\Balpha}{\boldsymbol{\alpha}} \newcommand{\Bbeta}{\boldsymbol{\beta}} \newcommand{\Bgamma}{\boldsymbol{\gamma}} \newcommand{\Bdelta}{\boldsymbol{\delta}} \newcommand{\Beps}{\boldsymbol{\eps}} \newcommand{\Bveps}{\boldsymbol{\veps}} \newcommand{\Bzeta}{\boldsymbol{\zeta}} \newcommand{\Beta}{\boldsymbol{\eta}} \newcommand{\Btheta}{\boldsymbol{\theta}} \newcommand{\Biota}{\boldsymbol{\iota}} \newcommand{\Bkappa}{\boldsymbol{\kappa}} \newcommand{\Blambda}{\boldsymbol{\lambda}} \newcommand{\Bmu}{\boldsymbol{\mu}} \newcommand{\Bnu}{\boldsymbol{\nu}} \newcommand{\Bxi}{\boldsymbol{\xi}} \newcommand{\Bomicron}{\boldsymbol{\omicron}} \newcommand{\Bpi}{\boldsymbol{\pi}} \newcommand{\Brho}{\boldsymbol{\rho}} \newcommand{\Bsigma}{\boldsymbol{\sigma}} \newcommand{\Btau}{\boldsymbol{\tau}} \newcommand{\Bupsilon}{\boldsymbol{\upsilon}} \newcommand{\Bphi}{\boldsymbol{\phi}} \newcommand{\Bfi}{\boldsymbol{\fi}} \newcommand{\Bchi}{\boldsymbol{\chi}} \newcommand{\Bpsi}{\boldsymbol{\psi}} \newcommand{\Bomega}{\boldsymbol{\omega}} % .. calligraphic \newcommand{\CA}{\mathcal{A}} \newcommand{\CB}{\mathcal{B}} \newcommand{\CC}{\mathcal{C}} \newcommand{\CD}{\mathcal{D}} \newcommand{\CE}{\mathcal{E}} \newcommand{\CF}{\mathcal{F}} \newcommand{\CG}{\mathcal{G}} \newcommand{\CH}{\mathcal{H}} \newcommand{\CI}{\mathcal{I}} \newcommand{\CJ}{\mathcal{J}} \newcommand{\CK}{\mathcal{K}} \newcommand{\CL}{\mathcal{L}} \newcommand{\CM}{\mathcal{M}} \newcommand{\CN}{\mathcal{N}} \newcommand{\CO}{\mathcal{O}} \newcommand{\CP}{\mathcal{P}} \newcommand{\CQ}{\mathcal{Q}} \newcommand{\CR}{\mathcal{R}} \newcommand{\CS}{\mathcal{S}} \newcommand{\CT}{\mathcal{T}} \newcommand{\CU}{\mathcal{U}} \newcommand{\CV}{\mathcal{V}} \newcommand{\CW}{\mathcal{W}} \newcommand{\CX}{\mathcal{X}} \newcommand{\CY}{\mathcal{Y}} \newcommand{\CZ}{\mathcal{Z}} % .. 
typewriter \newcommand{\TA}{\mathtt{A}} \newcommand{\TB}{\mathtt{B}} \newcommand{\TC}{\mathtt{C}} \newcommand{\TD}{\mathtt{D}} \newcommand{\TE}{\mathtt{E}} \newcommand{\TF}{\mathtt{F}} \newcommand{\TG}{\mathtt{G}} \newcommand{\TH}{\mathtt{H}} \newcommand{\TI}{\mathtt{I}} \newcommand{\TJ}{\mathtt{J}} \newcommand{\TK}{\mathtt{K}} \newcommand{\TL}{\mathtt{L}} \newcommand{\TM}{\mathtt{M}} \newcommand{\TN}{\mathtt{N}} \newcommand{\TO}{\mathtt{O}} \newcommand{\TP}{\mathtt{P}} \newcommand{\TQ}{\mathtt{Q}} \newcommand{\TR}{\mathtt{R}} \newcommand{\TS}{\mathtt{S}} \newcommand{\TT}{\mathtt{T}} \newcommand{\TU}{\mathtt{U}} \newcommand{\TV}{\mathtt{V}} \newcommand{\TW}{\mathtt{W}} \newcommand{\TX}{\mathtt{X}} \newcommand{\TY}{\mathtt{Y}} \newcommand{\TZ}{\mathtt{Z}}$

Personal summary of Named Tensor Notation.

Named tensor notation gives names to the axes in vectors, matrices and tensors. These names are analogous to units in physics: if you keep track of the units while manipulating physical quantities, you’ll catch basic mistakes, and the computation will make more sense overall.

For example, instead of defining the parity check matrix of a code as $H \in \F^{(n-k)\times n}$, you might define it as $\ndef{\gens}{gens}\ndef{\coords}{coords}\ndef{\checks}{checks}H \in \F^{\checks\times\coords}$. Then, you can

  • access an individual parity check as $H_{\checks(j)} \in \F^\coords$,
  • access the coefficients for a particular coordinate as $H_{\coords(i)} \in \F^\checks$,
  • access one particular coefficient as $H_{\checks(j),\coords(i)} \in \F$.

Since the axes have names, their order doesn’t matter:

  • $\F^{\checks\times\coords} = \F^{\coords\times\checks}$,
  • $H_{\checks(j),\coords(i)}$ and $H_{\coords(i),\checks(j)}$ have the same effect.
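
These conventions map nicely onto libraries with named axes. Below is a minimal sketch using xarray's named dimensions as a stand-in for named tensor notation (the library choice, sizes, and entries are mine):

```python
import numpy as np
import xarray as xr

# xarray's named dimensions stand in for named tensor axes (my choice of library).
# A parity check matrix H over F_2 with axes "checks" and "coords".
H = xr.DataArray(
    np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]]),
    dims=("checks", "coords"),
)

row = H.isel(checks=1)               # one parity check: a vector over "coords"
col = H.isel(coords=2)               # one coordinate's coefficients: a vector over "checks"
entry = H.isel(checks=1, coords=2)   # a single scalar coefficient

# Axis order is irrelevant: reordering the axes changes nothing observable by name.
assert (H.transpose("coords", "checks") == H).all()
```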

Element-wise operations

You can perform any operation element-wise. For example, given two matrices $\ndef{\height}{height}\ndef{\width}{width}A,B \in \R^{\height \times \width}$, their sum $A+B$ is defined as

\[ (A+B)_{\height(i),\width(j)} \ce A_{\height(i),\width(j)} + B_{\height(i),\width(j)}. \]

In general, the result will inherit all the axes present in either operand. For example, given a “column vector” $u \in \R^{\height}$ and a “row vector” $v \in \R^\width$, their sum $u+v$ will be the matrix whose elements are

\[ (u+v)_{\height(i),\width(j)} \ce u_{\height(i)}+v_{\width(j)}. \]

To avoid confusion, we can denote element-wise multiplication explicitly by $\odot$. This generalizes the outer product:

\[(u\odot v)_{\height(i),\width(j)} \ce (uv)_{\height(i),\width(j)} = u_{\height(i)}v_{\width(j)}.\]

Element-wise operations are commutative/associative whenever the underlying operations are.
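
In the same xarray sketch, operands line up by axis name, so a "column vector" and a "row vector" add up to a matrix:

```python
import numpy as np
import xarray as xr

u = xr.DataArray(np.array([1.0, 2.0]), dims=("height",))         # "column vector"
v = xr.DataArray(np.array([10.0, 20.0, 30.0]), dims=("width",))  # "row vector"

# The result inherits all axes present in either operand.
s = u + v       # dims ("height", "width"), with s[i,j] = u[i] + v[j]
outer = u * v   # element-wise product = outer product here

assert s.dims == ("height", "width")
assert float(outer.isel(height=1, width=2)) == 2.0 * 30.0
```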

Diagonal matrices

Element-wise multiplication sometimes allows you to use vectors instead of diagonal matrices. For example, if you want to scale the rows of a matrix $A \in \R^{\height\times\width}$, instead of multiplying on the left by a diagonal matrix $D$, you can just take the element-wise product $dA$ with a vector $d \in \R^{\height}$. Indeed, we have

\[ (dA)_{\height(i)} = \ub{d_{\height(i)}}_{\text{scaling factor ($\in \R$)}}\q \ub{A_{\height(i)}}_\text{row ($\in \R^\width$)}. \]
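
A quick numerical check (same xarray setup as above) that the element-wise product with a vector agrees with left-multiplication by a diagonal matrix:

```python
import numpy as np
import xarray as xr

A = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("height", "width"))
d = xr.DataArray(np.array([10.0, 100.0]), dims=("height",))

# Element-wise product with a "height" vector scales each row...
scaled = (d * A).transpose("height", "width")
# ...exactly like multiplying on the left by diag(d) in ordinary notation.
assert np.allclose(scaled.values, np.diag(d.values) @ A.values)
```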

Reductions

Reductions “summarize” the information across one of the axes. For example, for a matrix $A \in \R^{\height\times\width}$ (see the sketch after this list),

  • $\sum_\height A \in \R^{\width}$ is a “row vector” containing the sum of each column,
  • $\max_{\width}A \in \R^\height$ is a “column vector” containing the maxima of each row,
  • $\norm{A}_\height \in \R^\width$ is a “row vector” containing the ($2$-)norm of each column.
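
The three reductions, sketched with xarray (dimension names as before; `sum`, `max`, and the norm are all reductions over one named axis):

```python
import numpy as np
import xarray as xr

A = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("height", "width"))

col_sums = A.sum(dim="height")                   # "row vector" of column sums
row_maxes = A.max(dim="width")                   # "column vector" of row maxima
col_norms = np.sqrt((A ** 2).sum(dim="height"))  # "row vector" of column 2-norms

assert col_sums.dims == ("width",) and row_maxes.dims == ("height",)
```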

Contractions

Many linear algebra operations are contractions: an element-wise multiplication followed by a sum. We denote these using the shorthand

\[ \ndef{\ax}{axis} \ndef{\newax}{newaxis} A \ndot{\ax} B \ce \sum_\ax A \odot B. \]

Contractions generalize (see the sketch after this list):

  • dot product: for $x,y \in \R^\coords$, we have $x \ndot\coords y \in \R$;
  • matrix-vector product: for $\ndef{\input}{input}\ndef{\output}{output}A \in \R^{\output\times\input}$ and $x \in \R^{\input}$, we have $A \ndot{\input} x\in \R^{\output}$;
  • matrix-matrix product: for $\ndef{\hidden}{hidden}A \in \R^{\output\times\hidden}$ and $B \in \R^{\hidden\times\input}$, we have $A \ndot{\hidden} B \in \R^{\output \times \input}$.
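
All three products are instances of a single helper; here's a sketch where `ndot` (a name of my own) literally implements "element-wise multiplication followed by a sum":

```python
import numpy as np
import xarray as xr

def ndot(a, b, axis):
    """Named contraction: element-wise product, then sum over the named axis."""
    return (a * b).sum(dim=axis)

rng = np.random.default_rng(0)
x = xr.DataArray(rng.normal(size=4), dims=("input",))
y = xr.DataArray(rng.normal(size=4), dims=("input",))
A = xr.DataArray(rng.normal(size=(3, 4)), dims=("output", "input"))
W = xr.DataArray(rng.normal(size=(3, 4)), dims=("output", "hidden"))
B = xr.DataArray(rng.normal(size=(4, 5)), dims=("hidden", "input"))

assert np.allclose(ndot(x, y, "input").values, x.values @ y.values)  # dot product
assert np.allclose(ndot(A, x, "input").values, A.values @ x.values)  # matrix-vector
assert np.allclose(ndot(W, B, "hidden").transpose("output", "input").values,
                   W.values @ B.values)                              # matrix-matrix
```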

Contractions are great because:

  • they make it clear what you’re summing over,
  • they are commutative,
  • they are (mostly) associative: you can turn $\p{A \ndot{\ax_1} B} \ndot{\ax_2} C$ into $A \ndot{\ax_1} \p{B \ndot{\ax_2} C}$ as long as $A$ doesn’t have $\ax_2$ and $C$ doesn’t have $\ax_1$;¹
  • you don’t need to waste time thinking about whether you should take the transpose.

Limitations

Since contractions “type-check” that the axes being summed over are identical, they cannot represent operations like the trace of a matrix (which depends on its two axes having the same length). Because of this, they are a bit weaker than Einstein summation (e.g. np.einsum), which sums over arbitrary pairs of axes.
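
For example, np.einsum computes the trace by summing over two axes of the same array; a named contraction can't express this, since the two axes of a named tensor carry distinct names:

```python
import numpy as np

M = np.arange(9.0).reshape(3, 3)

# Einstein summation can tie together two axes of the *same* tensor...
trace = np.einsum("ii->", M)
assert trace == np.trace(M)
# ...but a named contraction cannot: M's two axes necessarily have
# different names, so there is no single axis name to sum over.
```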

Differentiation

Named tensor notation, combined with the method of differentials, makes it easy to compute some derivatives that in plain matrix notation would require writing out the matrix products explicitly.

Let $f : \R^{\CS} \to \R^{\CT}$ be a function (where $\CS,\CT$ are disjoint sets of axes), and let $Y = f(X)$. The main rule is that if you can express the differential $\partial Y$ as

\[ \partial Y = A \ndot\CS \partial X + \mathrm{constant} \]

where $\mathrm{constant}$ doesn’t depend on $\partial X$, then

\[ \frac{\partial Y}{\partial X} = A. \]

Matrix products

For example, consider a depth-two linear network with a single output

\[ \newcommand{\Wout}{w^\mathrm{out}} \newcommand{\Win}{W^\mathrm{in}} y = x \ndot{\input} \Win \ndot{\hidden} \Wout , \]

where $x \in \R^\input$, $\Win \in \R^{\input \times \hidden}$ and $\Wout \in \R^{\hidden}$, and you want to compute $\frac{\partial y}{\partial \Win}$. Then you can compute

\[ \al{ \partial y &= \partial\p{x \ndot{\input} \Win \ndot{\hidden} \Wout}\\ &= \partial x \ndot\input \Win \ndot\hidden \Wout + x \ndot\input \partial\Win \ndot\hidden \Wout + x \ndot\input \Win \ndot\hidden \partial \Wout\\ &= \ub{\p{x \odot \Wout}}_A \ndot{\input\\\hidden} \partial\Win + \ub{\partial x \ndot\input \Win \ndot\hidden \Wout + x \ndot\input \Win \ndot\hidden \partial \Wout}_{\mathrm{constant}}, } \]

which means $\frac{\partial y}{\partial \Win} = x \odot \Wout$.
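
A numerical sanity check of this formula in plain numpy (variable names are mine). Since $y$ is linear in $\Win$, a finite difference recovers the derivative exactly up to floating-point error:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=4)            # axes: (input,)
W_in = rng.normal(size=(4, 3))    # axes: (input, hidden)
w_out = rng.normal(size=3)        # axes: (hidden,)

def y(W):
    return x @ W @ w_out          # scalar output

# Claimed derivative: dy/dW_in = x ⊙ w_out, an outer product over (input, hidden).
grad = np.outer(x, w_out)

# Finite-difference check of one entry.
eps = 1e-6
E = np.zeros_like(W_in)
E[1, 2] = eps
assert abs((y(W_in + E) - y(W_in)) / eps - grad[1, 2]) < 1e-4
```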

More generally, if we have matrices $W^{(i)} \in \R^{\ax_{i-1} \times \ax_i}$ for $i \in [d]$, then the partial derivatives of their product are

\[ \al{ \frac{\partial\p{W^{(1)} \ndot{\ax_1} \ldots\ndot{\ax_{d-1}} W^{(d)}}}{\partial W^{(1)}} &= \p{W^{(2)} \ndot{\ax_{2}} \ldots\ndot{\ax_{d-1}} W^{(d)}} \in \R^{\ax_1 \times \ax_d}\\ \frac{\partial\p{W^{(1)} \ndot{\ax_1} \ldots\ndot{\ax_{d-1}} W^{(d)}}}{\partial W^{(d)}} &= \p{W^{(1)} \ndot{\ax_1} \ldots\ndot{\ax_{d-2}} W^{(d-1)}} \in \R^{\ax_0 \times \ax_{d-1}}\\ \frac{\partial\p{W^{(1)} \ndot{\ax_1} \ldots\ndot{\ax_{d-1}} W^{(d)}}}{\partial W^{(i)}} &= \p{W^{(1)} \ndot{\ax_1} \ldots\ndot{\ax_{i-2}} W^{(i-1)}} \odot \p{W^{(i+1)} \ndot{\ax_{i+1}} \ldots\ndot{\ax_{d-1}} W^{(d)}}\\ &\in \R^{\ax_0 \times \ax_{i-1} \times \ax_{i+1} \times \ax_d}. } \]

Note that due to the existence of $\ax_0$ and $\ax_d$, these derivatives have more axes than the matrices we’re differentiating with respect to. If $W^{(1)} \in \R^{\ax_1}$ instead of $\R^{\ax_0 \times \ax_1}$, or if $W^{(d)} \in \R^{\ax_{d-1}}$ instead of $\R^{\ax_{d-1} \times \ax_d}$, then the expressions for the derivatives stay the same, but their types change.

Renaming

Given a tensor $A \in \R^{\ax_1 \times \cdots \times \ax_n}$, we can rename one of its axes using the notation $A_{\ax_i \to \newax}$, which lives in $\R^{\ax_1 \times \cdots \times \ax_{i-1} \times \newax \times \ax_{i+1} \times \cdots \times \ax_n}$ and is defined by

\[ \p{A_{\ax_i \to \newax}}_{\newax(k)} = A_{\ax_i(k)}. \]

This is equivalent to contracting with the corresponding identity matrix $I_{\ax_i, \newax} \in \R^{\ax_i \times \newax}$ (whose entries are $1$ exactly when the two indices coincide):

\[ A_{\ax_i \to \newax} = A \ndot{\ax_i} I_{\ax_i, \newax}. \]
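
In code, renaming an axis and contracting with an identity matrix coincide; a sketch (xarray's `rename` playing the role of $A_{\ax_i \to \newax}$):

```python
import numpy as np
import xarray as xr

A = xr.DataArray(np.arange(6.0).reshape(2, 3), dims=("axis1", "axis2"))

# Renaming the axis directly...
renamed = A.rename({"axis1": "newaxis"})

# ...agrees with contracting against the identity matrix I_{axis1, newaxis}.
I = xr.DataArray(np.eye(2), dims=("axis1", "newaxis"))
via_identity = (A * I).sum(dim="axis1")

assert np.allclose(renamed.transpose("newaxis", "axis2").values,
                   via_identity.transpose("newaxis", "axis2").values)
```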

Transpose

Renaming is mostly used to create a temporary independent copy of an axis that is present twice in a computation. In that use, renaming is analogous to taking the transpose, and we’ll² use the shorthand $A\nt\ax$ to denote either $A_{\ax\to\ax'}$ or $A_{\ax'\to\ax}$ depending on which axis $A$ currently has.

For example, say you have a matrix $\ndef{\batch}{batch}X \in \R^{\batch \times \coords}$ containing $b$ vectors $x^{(1)}, \ldots, x^{(b)} \in \R^{\coords}$, and you want to express its Gram matrix: the matrix of pairwise inner products $x^{(i)} \ndot\coords x^{(j)}$. Then you might try

\[ X \ndot\coords X. \]

However, the $\batch$ axis gets merged between the two copies of $X$, and all this gets you is a vector in $\R^{\batch}$, containing the squared norms

\[ x^{(i)} \ndot\coords x^{(i)} = \norm{x^{(i)}}^2_\coords \]

of each of the original vectors (i.e. we only got the diagonal of the Gram matrix). If we want the result to keep the $\batch$ axes separate, we need to rename one of them:

\[ X \ndot\coords X\nt\batch \in \R^{\batch \times \batch'}. \]

Similarly, we can express the second-moment matrix as

\[ X \ndot\batch X\nt\coords \in \R^{\coords \times \coords'}. \]

Compare with the usual matrix notation, where the Gram matrix is $XX\t$ and the second-moment matrix is $X\t X$: more compact, but less explicit about which axes remain and which axis is summed over.
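
Both computations, sketched with xarray and checked against their matrix-notation counterparts (the `ndot` helper is as before; renaming plays the role of the transpose shorthand):

```python
import numpy as np
import xarray as xr

def ndot(a, b, axis):
    """Named contraction: element-wise product, then sum over the named axis."""
    return (a * b).sum(dim=axis)

rng = np.random.default_rng(0)
X = xr.DataArray(rng.normal(size=(5, 3)), dims=("batch", "coords"))

# Rename one copy's axis so the two batch axes stay separate.
gram = ndot(X, X.rename({"batch": "batch2"}), "coords")
second_moment = ndot(X, X.rename({"coords": "coords2"}), "batch")

assert np.allclose(gram.transpose("batch", "batch2").values,
                   X.values @ X.values.T)               # X X^T
assert np.allclose(second_moment.transpose("coords", "coords2").values,
                   X.values.T @ X.values)               # X^T X
```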

#to-write more general remarks about “square matrices”

  • and how they’re inconvenient to work with in this notation
  • but on the other hand maybe rightfully so?
  • would be nice if you could allow duplicated axes for symmetric matrices? but you’d need notation for that anyway, to stop the values from just getting multiplied coordinatewise / the axes from getting merged

Powers

#to-write

  • for a matrix $A \in \R^{\ax \times \ax'}$ can define its power $A^d$ in an unambiguous way
  • but eigenvalues are still annoying to deal with? writing $A\ndot\ax v = \lambda v\nt\ax$ is still very distasteful…
    • but maybe that goes to show how unnatural the whole concept of eigenvalues is? :P
    • and it serves as a good reminder that you can’t just write $(A-\lambda)\odot v = 0$ (since that would mean removing $\lambda$ from every entry, not just the diagonal)

Fixing associativity

Suppose you have a product of the form

\[ \p{A \ndot{\ax_1} B} \ndot{\ax_2} C \]

where $C$ has $\ax_1$, which means that $\ax_1$ first gets summed over when contracting $A$ with $B$, then gets added back to the result when contracting with $C$. We can’t directly apply associativity, because the contraction $B \ndot{\ax_2} C$ would incorrectly merge the two occurrences of $\ax_1$. But we can temporarily rename $\ax_1$ in $C$ using the transpose, then transpose it back at the end to get the same result:

\[ \al{ \p{A \ndot{\ax_1} B} \ndot{\ax_2} C &= \p{\p{A \ndot{\ax_1} B} \ndot{\ax_2} C\nt{\ax_1}}\nt{\ax_1}\\ &= \p{A \ndot{\ax_1} \p{B \ndot{\ax_2} C\nt{\ax_1}}}\nt{\ax_1} } \]

(where the second equality holds as long as $A$ doesn’t have $\ax_2$).
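
A numerical check of this trick (a setup where $A$ has only $\ax_1$ while $B$ and $C$ have both axes; `ndot` and the renaming are as in the earlier sketches):

```python
import numpy as np
import xarray as xr

def ndot(a, b, axis):
    return (a * b).sum(dim=axis)

rng = np.random.default_rng(0)
A = xr.DataArray(rng.normal(size=4), dims=("ax1",))
B = xr.DataArray(rng.normal(size=(4, 5)), dims=("ax1", "ax2"))
C = xr.DataArray(rng.normal(size=(4, 5)), dims=("ax1", "ax2"))  # C has ax1 too

left = ndot(ndot(A, B, "ax1"), C, "ax2")   # C brings ax1 back into the result

# Naive right-association would merge C's ax1 with A's and B's.
# Renaming ax1 in C first makes it safe; rename back at the end.
right = ndot(A, ndot(B, C.rename({"ax1": "ax1_"}), "ax2"),
             "ax1").rename({"ax1_": "ax1"})

assert np.allclose(left.values, right.values)
```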

  1. The former condition ($A$ must not have $\ax_2$) seems essential: the result should involve a sum over $A$’s $\ax_2$, but after reassociating, $A$ sits on the outside with $\ax_1$ as its only contraction (and $B$ might not even have $\ax_2$). The latter condition ($C$ must not have $\ax_1$) can be worked around by renaming $\ax_1$ in $C$ before reassociating, as described in “Fixing associativity” above. 

  2. This shorthand is not part of the original specification.