## The basics

### Scalar

A scalar is a single real number.

Examples:

- 0
- 1
- \pi
- -3.5

### Vector

An n-vector is a collection of n real numbers.

Examples:

- A 1-vector: \begin{bmatrix}5\end{bmatrix}
- A 3-vector: \begin{bmatrix}2\\-4.5\\\pi\end{bmatrix}
- An n-vector: \begin{bmatrix}3\\\vdots\\-1\end{bmatrix} (the \vdots stands in for the elements not shown, so there are n in total)
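As a quick sketch (assuming NumPy, which the rest of these notes' examples also use), scalars and vectors map directly onto Python numbers and 1-D arrays:

```python
import numpy as np

# A scalar is a single real number.
s = -3.5

# The 3-vector from the example above, as a NumPy array.
v = np.array([2.0, -4.5, np.pi])

print(v.shape)  # an n-vector has shape (n,)
```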

## Orthonormal matrix

A matrix A \in \mathbb{R}^{m \times n} is *orthonormal* if A^\top A = I_n. In other words, the columns of the matrix are pairwise orthogonal and each has unit length.
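A minimal NumPy check of this definition: QR decomposition of a random matrix gives a Q whose columns are orthonormal, so Q^\top Q should be the identity (the random seed and shapes here are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# QR decomposition of a random 5 x 3 matrix; Q has orthonormal columns.
Q, _ = np.linalg.qr(rng.standard_normal((5, 3)))

# Q^T Q should equal the 3 x 3 identity, up to floating-point error.
print(np.allclose(Q.T @ Q, np.eye(3)))
```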

## Matrix decompositions

### Singular value decomposition

The singular value decomposition (SVD) involves rewriting a matrix X \in \mathbb{R}^{m \times n} as the product of three matrices X = U \Sigma V^\top, where

- U \in \mathbb{R}^{m \times m} and V \in \mathbb{R}^{n \times n} are orthonormal matrices and
- \Sigma \in \mathbb{R}^{m \times n} is zero everywhere except on its main diagonal, which holds the non-negative singular values of X.
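The decomposition above can be sketched with NumPy (the matrix here is a random placeholder). Note that `np.linalg.svd` returns the singular values as a vector, so \Sigma has to be rebuilt as an m \times n matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))  # m = 4, n = 3

# full_matrices=True gives U (4 x 4) and V^T (3 x 3); s holds the singular values.
U, s, Vt = np.linalg.svd(X, full_matrices=True)

# Rebuild Sigma: a 4 x 3 matrix, zero except on its main diagonal.
Sigma = np.zeros((4, 3))
np.fill_diagonal(Sigma, s)

# X should be recovered exactly (up to floating-point error).
print(np.allclose(X, U @ Sigma @ Vt))
```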

### Eigendecomposition

The eigendecomposition of a symmetric matrix X \in \mathbb{R}^{n \times n} involves rewriting it as the product of three matrices X = V \Lambda V^\top, where

- V \in \mathbb{R}^{n \times n} is orthonormal and
- \Lambda \in \mathbb{R}^{n \times n} is diagonal with real entries (the eigenvalues of X). These are non-negative when X is positive semi-definite, as covariance matrices are.
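A short NumPy sketch of the eigendecomposition (the symmetric matrix is a random placeholder). `np.linalg.eigh` is the routine specialized for symmetric matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4))
X = (B + B.T) / 2  # symmetrize a random matrix

# eigh returns real eigenvalues (ascending) and orthonormal eigenvectors.
eigvals, V = np.linalg.eigh(X)
Lambda = np.diag(eigvals)

# X = V Lambda V^T, up to floating-point error.
print(np.allclose(X, V @ Lambda @ V.T))
```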

## Covariance

### Auto-covariance

Given a data matrix X \in \mathbb{R}^{n \times f} containing neural responses to n stimuli from f neurons, the *auto-covariance* of X (or simply its *covariance*) is defined as:

\text{cov}(X) = \left(\dfrac{1}{n - 1}\right) (X - \overline{X})^\top (X - \overline{X})

Here \overline{X} denotes the n \times f matrix whose every row is the column-wise mean of X, so X - \overline{X} is X with each neuron's mean response subtracted. The result is an f \times f matrix where the (i, j)-th element measures how much neuron i covaries with neuron j. If the covariance is positive, they tend to have similar activation: a stimulus that activates one neuron will tend to activate the other. If the covariance is negative, the neurons will have dissimilar activation: a stimulus that activates one neuron will likely decrease the activity of the other.
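The formula above can be checked against NumPy's built-in `np.cov` (the data matrix is a random placeholder; `rowvar=False` is needed because `np.cov` puts variables in rows by default):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 5))  # n = 100 stimuli, f = 5 neurons

Xc = X - X.mean(axis=0)              # subtract each neuron's mean response
cov = Xc.T @ Xc / (X.shape[0] - 1)   # f x f covariance matrix

print(cov.shape)                                    # (5, 5)
print(np.allclose(cov, np.cov(X, rowvar=False)))    # matches np.cov
```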

### Cross-covariance

Given two data matrices X \in \mathbb{R}^{n \times f_X} and Y \in \mathbb{R}^{n \times f_Y} containing neural responses to n stimuli from f_X and f_Y neurons respectively, the *cross-covariance* of X and Y is defined as:

\text{cov}(X, Y) = \left(\dfrac{1}{n - 1}\right) (X - \overline{X})^\top (Y - \overline{Y})

The result is an f_X \times f_Y matrix whose (i, j)-th element measures how much neuron i in X covaries with neuron j in Y.
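The cross-covariance formula translates directly into NumPy (the two data matrices are random placeholders standing in for responses of two neural populations to the same n stimuli):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((100, 4))  # responses of f_X = 4 neurons
Y = rng.standard_normal((100, 6))  # responses of f_Y = 6 neurons to the same stimuli

Xc = X - X.mean(axis=0)
Yc = Y - Y.mean(axis=0)
cross_cov = Xc.T @ Yc / (X.shape[0] - 1)  # f_X x f_Y matrix

print(cross_cov.shape)  # (4, 6)
```

Note that, unlike the auto-covariance, this matrix is generally not square, and \text{cov}(X, Y) = \text{cov}(Y, X)^\top.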