
Vandermonde Determinant Formula

by

Gregg Kelly

In this section we look at some consequences of the Vandermonde determinant formula.

The Vandermonde determinant formula is \begin{equation} \label{eq:vandermonde} \begin{vmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \\ \end{vmatrix} \space = \space \prod\limits_{1\le i\lt j \le n} (x_j-x_i) \end{equation}
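As a quick sanity check (not part of the original argument), equation \eqref{eq:vandermonde} can be verified numerically in exact rational arithmetic. The determinant routine below is a minimal Leibniz-expansion sketch, adequate only for the small matrices used on this page.

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Leibniz expansion: sum over permutations with inversion-count sign.
    # Exponential time, but fine for small matrices.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign, term = 1, Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

xs = [Fraction(v) for v in (2, 3, 5, 7)]
n = len(xs)

# Vandermonde matrix: row k is (1, x_k, x_k^2, ..., x_k^{n-1})
lhs = det([[x ** j for j in range(n)] for x in xs])

rhs = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        rhs *= xs[j] - xs[i]

assert lhs == rhs  # both equal 240 for these sample points
```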

Basis Formula For Polynomials

A similar formula holds when the monomials $x^i$ are replaced by an arbitrary polynomial basis $P_i(x)$ of the $n$-dimensional vector space of polynomials of degree less than $n$. Let \begin{equation} \label{eq:p} P_i(x) = \sum\limits_{j=0}^{n-1} c_{ij} x^j \end{equation} then we have \begin{equation} \label{eq:basis} \begin{vmatrix} P_1(x_1) & P_2(x_1) & \cdots & P_n(x_1) \\ P_1(x_2) & P_2(x_2) & \cdots & P_n(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ P_1(x_n) & P_2(x_n) & \cdots & P_n(x_n) \\ \end{vmatrix} \space = \space \det(c_{ij}) \prod\limits_{1\le i\lt j \le n} (x_j-x_i) \end{equation} This formula follows directly from equation \eqref{eq:vandermonde}: write the LHS of \eqref{eq:basis} as the product of the two matrices $c_{ij}$ and $x_i^j$ and take the determinant.
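Here is a numeric check of the basis formula, using a hypothetical lower-triangular coefficient matrix $c$ chosen purely for illustration (any invertible choice works); the code indexes points and basis polynomials from 0.

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Leibniz expansion; fine for small matrices.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign, term = 1, Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

xs = [Fraction(v) for v in (1, 2, 4)]
n = len(xs)

# c[i][j] is the coefficient of x^j in P_i (an arbitrary invertible example)
c = [[Fraction(v) for v in row] for row in ((1, 0, 0), (2, 1, 0), (3, 4, 1))]

def P(i, x):
    return sum(c[i][j] * x ** j for j in range(n))

# rows indexed by the points x_k, columns by the basis polynomials P_i
lhs = det([[P(i, x) for i in range(n)] for x in xs])

vprod = Fraction(1)
for i in range(n):
    for j in range(i + 1, n):
        vprod *= xs[j] - xs[i]

assert lhs == det(c) * vprod  # here det(c) = 1 and the product is 6
```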

Taking the limit as every $x_i \rightarrow x$ (after dividing both sides by $\prod_{1\le i\lt j \le n}(x_j-x_i)$, so that the rows become divided differences, which tend to derivatives scaled by factorials) gives \begin{equation} \label{eq:basislimit} \begin{vmatrix} P_1(x) & P_2(x) & \cdots & P_n(x) \\ P_1'(x) & P_2'(x) & \cdots & P_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ P_1^{[n-1]}(x) & P_2^{[n-1]}(x) & \cdots & P_n^{[n-1]}(x) \\ \end{vmatrix} \space = \space \det(c_{ij}) \prod\limits_{i=0}^{n-1}i! \end{equation}

Depleted Vandermonde Determinant Formula

If we knock out one row and one column from a Vandermonde matrix we can still compute its determinant with a formula very similar to those above; for example

\begin{equation} \label{eq:minormiracle} \begin{vmatrix} 1 & x_1 & x_1^2 & x_1^4 \\ 1 & x_2 & x_2^2 & x_2^4 \\ 1 & x_3 & x_3^2 & x_3^4 \\ 1 & x_4 & x_4^2 & x_4^4 \\ \end{vmatrix} \space = \space \left(x_1+x_2+x_3+x_4\right) \prod\limits_{1\le i\lt j \le 4} (x_j-x_i) \end{equation}

This identity follows by expanding each side of the usual Vandermonde determinant formula as a polynomial in $t$ and equating coefficients. \begin{equation} \label{eq:proof} \begin{vmatrix} 1 & x_1 & x_1^2 & x_1^3 & x_1^4 \\ 1 & x_2 & x_2^2 & x_2^3 & x_2^4 \\ 1 & x_3 & x_3^2 & x_3^3 & x_3^4 \\ 1 & x_4 & x_4^2 & x_4^3 & x_4^4 \\ 1 & t & t^2 & t^3 & t^4 \\ \end{vmatrix} \space = \space (t - x_1)(t - x_2)(t - x_3)(t - x_4) \prod\limits_{1\le i\lt j \le 4} (x_j-x_i) \end{equation} The LHS of \eqref{eq:minormiracle} is the coefficient of $-t^3$ on the LHS of \eqref{eq:proof} and the RHS of \eqref{eq:minormiracle} is the coefficient of $-t^3$ on the RHS of \eqref{eq:proof}.
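The identity \eqref{eq:minormiracle} is easy to spot-check numerically at sample points (again with a small Leibniz-expansion determinant, a sketch adequate for $4 \times 4$ matrices):

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Leibniz expansion; fine for small matrices.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign, term = 1, Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

xs = [Fraction(v) for v in (2, 3, 5, 7)]

# Vandermonde matrix with the x^3 column knocked out
lhs = det([[1, x, x ** 2, x ** 4] for x in xs])

vprod = Fraction(1)
for i in range(4):
    for j in range(i + 1, 4):
        vprod *= xs[j] - xs[i]

assert lhs == sum(xs) * vprod  # x_1 + x_2 + x_3 + x_4 = 17, product = 240
```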

The following two formulae are derived similarly.

\begin{equation} \label{eq:minormiracle2} \begin{vmatrix} 1 & x_1 & x_1^3 & x_1^4 \\ 1 & x_2 & x_2^3 & x_2^4 \\ 1 & x_3 & x_3^3 & x_3^4 \\ 1 & x_4 & x_4^3 & x_4^4 \\ \end{vmatrix} \space = \space \left(x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4\right) \prod\limits_{1\le i\lt j \le 4} (x_j-x_i) \end{equation}

and

\begin{equation} \label{eq:minormiracle3} \begin{vmatrix} 1 & x_1^2 & x_1^3 & x_1^4 \\ 1 & x_2^2 & x_2^3 & x_2^4 \\ 1 & x_3^2 & x_3^3 & x_3^4 \\ 1 & x_4^2 & x_4^3 & x_4^4 \\ \end{vmatrix} \space = \space \left(x_1x_2x_3+x_1x_2x_4+x_1x_3x_4+x_2x_3x_4\right) \prod\limits_{1\le i\lt j \le 4} (x_j-x_i) \end{equation}
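Both of these can be checked the same way; the knocked-out column produces the corresponding elementary symmetric polynomial of the $x_i$ as the extra factor.

```python
from fractions import Fraction
from itertools import combinations, permutations

def det(m):
    # Leibniz expansion; fine for small matrices.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign, term = 1, Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

xs = [Fraction(v) for v in (2, 3, 5, 7)]

vprod = Fraction(1)
for i in range(4):
    for j in range(i + 1, 4):
        vprod *= xs[j] - xs[i]

# elementary symmetric polynomials e_2 and e_3 of the sample points
e2 = sum(a * b for a, b in combinations(xs, 2))
e3 = sum(a * b * c for a, b, c in combinations(xs, 3))

assert det([[1, x, x ** 3, x ** 4] for x in xs]) == e2 * vprod
assert det([[1, x ** 2, x ** 3, x ** 4] for x in xs]) == e3 * vprod
```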

Formula for Determinant of Sub-Matrix of Minors

There is another simple depleted determinant formula that I have come across while investigating elliptic curves.

Let $M$ be a square matrix of order $n$ and let $\minors{M}$ denote the matrix of minors of $M$. Let an overbar denote a sub-matrix operation which selects $m$ rows and columns, and let an underbar denote the complementary sub-matrix operation which selects the remaining $n - m$ rows and columns. Then

\begin{equation} \label{eq:subdet} \det(\overline{\minors{M}}) \enspace = \enspace \det(M)^{m-1} \det(\underline{M}) \end{equation}

For example for the matrix \begin{equation*} M \enspace = \enspace \begin{pmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ a_4 & b_4 & c_4 & d_4 \\ \end{pmatrix} \end{equation*} and the sub-matrix operation which selects rows 1 and 2, and columns 1 and 2 of $\minors{M}$ we have \begin{equation} \label{eq:crossproduct} \begin{vmatrix} b_2 & c_2 & d_2 \\ b_3 & c_3 & d_3 \\ b_4 & c_4 & d_4 \\ \end{vmatrix} \space \begin{vmatrix} a_1 & c_1 & d_1 \\ a_3 & c_3 & d_3 \\ a_4 & c_4 & d_4 \\ \end{vmatrix} \enspace - \enspace \begin{vmatrix} b_1 & c_1 & d_1 \\ b_3 & c_3 & d_3 \\ b_4 & c_4 & d_4 \\ \end{vmatrix} \space \begin{vmatrix} a_2 & c_2 & d_2 \\ a_3 & c_3 & d_3 \\ a_4 & c_4 & d_4 \\ \end{vmatrix} \enspace = \enspace \begin{vmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ a_4 & b_4 & c_4 & d_4 \\ \end{vmatrix} \space \begin{vmatrix} c_3 & d_3 \\ c_4 & d_4 \\ \end{vmatrix} \end{equation} This can also be written as a formula for the determinant of a sub-matrix of the inverse of $M$ \begin{equation} \label{eq:subdet2} \det(\overline{M^{-1}}) \enspace = \enspace (-1)^{\delta} \det(M)^{-1} \det(\underline{M^T}) \end{equation} where $\delta$ is the sum of row and column indices of the overbar operation.
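The worked example \eqref{eq:crossproduct} can be spot-checked numerically. The matrix entries below are arbitrary sample values, and the overbar operation selects rows and columns 1 and 2 (0 and 1 in the 0-based code), so $m = 2$.

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Leibniz expansion; fine for small matrices.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign, term = 1, Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

def minor(m, r, c):
    # determinant of m with row r and column c deleted
    return det([row[:c] + row[c + 1:] for k, row in enumerate(m) if k != r])

M = [[3, 1, 4, 1],
     [5, 9, 2, 6],
     [5, 3, 5, 8],
     [9, 7, 9, 3]]

N = [[minor(M, i, j) for j in range(4)] for i in range(4)]  # matrix of minors

# overbar: rows/columns 1,2 of minors(M); underbar: rows/columns 3,4 of M
lhs = det([[N[i][j] for j in (0, 1)] for i in (0, 1)])
detM = det(M)
rhs = detM ** (2 - 1) * det([[M[i][j] for j in (2, 3)] for i in (2, 3)])

assert lhs == rhs
```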

Discriminant of Polynomial

Vandermonde determinants also give a formula for the discriminant of a polynomial in terms of the sums of powers of roots.

Let $P(x) = (x-e_1)(x-e_2)(x-e_3)(x-e_4)$ and $s_k = \sum e_i^k$ then using the product formula for determinants we get \begin{equation} \discrim(P) \space = \space \prod_{i \lt j} (e_i - e_j)^2 \space = \space \begin{vmatrix} 1 & e_1 & e_1^2 & e_1^3 \\ 1 & e_2 & e_2^2 & e_2^3 \\ 1 & e_3 & e_3^2 & e_3^3 \\ 1 & e_4 & e_4^2 & e_4^3 \\ \end{vmatrix}^2 \space = \space \begin{vmatrix} s_0 & s_1 & s_2 & s_3 \\ s_1 & s_2 & s_3 & s_4 \\ s_2 & s_3 & s_4 & s_5 \\ s_3 & s_4 & s_5 & s_6 \\ \end{vmatrix} \end{equation}
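Numerically, with sample roots, the squared Vandermonde product does agree with the Hankel determinant of power sums (a quick check, not part of the derivation):

```python
from fractions import Fraction
from itertools import permutations

def det(m):
    # Leibniz expansion; fine for small matrices.
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign, term = 1, Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

es = [Fraction(v) for v in (1, 2, 3, 5)]  # sample roots e_1 .. e_4

disc = Fraction(1)
for i in range(4):
    for j in range(i + 1, 4):
        disc *= (es[i] - es[j]) ** 2

s = [sum(e ** k for e in es) for k in range(7)]  # power sums s_0 .. s_6
hankel = [[s[i + j] for j in range(4)] for i in range(4)]

assert disc == det(hankel)  # 2304 for these roots
```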

Generalised Laplace Determinant Expansion Formula

The Laplace expansion of a determinant by cofactors of the first column can be written like this (writing $.$ instead of zero for clarity) \begin{equation*} \begin{aligned} \begin{vmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \\ d_1 & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace &= \enspace \begin{vmatrix} a_1 & . & . & . \\ . & b_2 & b_3 & b_4 \\ . & c_2 & c_3 & c_4 \\ . & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} . & a_2 & a_3 & a_4 \\ b_1 & . & . & . \\ . & c_2 & c_3 & c_4 \\ . & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} . & a_2 & a_3 & a_4 \\ . & b_2 & b_3 & b_4 \\ c_1 & . & . & . \\ . & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} . & a_2 & a_3 & a_4 \\ . & b_2 & b_3 & b_4 \\ . & c_2 & c_3 & c_4 \\ d_1 & . & . & . \\ \end{vmatrix} \\\\ &= \enspace a_1 \begin{vmatrix} b_2 & b_3 & b_4 \\ c_2 & c_3 & c_4 \\ d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace - \enspace b_1 \begin{vmatrix} a_2 & a_3 & a_4 \\ c_2 & c_3 & c_4 \\ d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace c_1 \begin{vmatrix} a_2 & a_3 & a_4 \\ b_2 & b_3 & b_4 \\ d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace - \enspace d_1 \begin{vmatrix} a_2 & a_3 & a_4 \\ b_2 & b_3 & b_4 \\ c_2 & c_3 & c_4 \\ \end{vmatrix} \\ \end{aligned} \end{equation*} In a similar way the determinant can be expanded by sub-determinants of the first two columns like this \begin{equation*} \begin{aligned} \begin{vmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \\ d_1 & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace &= \enspace \begin{vmatrix} a_1 & a_2 & . & . \\ b_1 & b_2 & . & . \\ . & . & c_3 & c_4 \\ . & . & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} a_1 & a_2 & . & . \\ . & . & b_3 & b_4 \\ c_1 & c_2 & . & . \\ . & . & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} a_1 & a_2 & . & . \\ . & . & b_3 & b_4 \\ . & . & c_3 & c_4 \\ d_1 & d_2 & . & . 
\\ \end{vmatrix} \enspace + \enspace \textsf{3 more terms} \\\\ &= \enspace \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \\ \end{vmatrix} \begin{vmatrix} c_3 & c_4 \\ d_3 & d_4 \\ \end{vmatrix} \enspace - \enspace \begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \\ \end{vmatrix} \begin{vmatrix} b_3 & b_4 \\ d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} a_1 & a_2 \\ d_1 & d_2 \\ \end{vmatrix} \begin{vmatrix} b_3 & b_4 \\ c_3 & c_4 \\ \end{vmatrix} \enspace + \enspace \textsf{3 more terms} \\ \end{aligned} \end{equation*} More generally

let $L$ be a $k \times k$ sub-matrix formed by choosing $k$ rows from the first $k$ columns of $M$, and let $L'$ be the complementary $(n-k) \times (n-k)$ sub-matrix formed from the remaining $n-k$ rows and the last $n-k$ columns; then the determinant of $M$ is given by summing over all the $n \choose k$ possible sub-matrices $L$ \begin{equation} \det(M) \enspace = \enspace \sum_L (-1)^{n_L}\det(L)\det(L') \end{equation} where $n_L$ is the sum of the row indices chosen for $L$, plus $k(k+1)/2$. When $k = 1$ this is the Laplace formula for computing the determinant using cofactors of the first column.
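The general expansion can be sketched directly. With 0-based row indices, as used below, the sign attached to a choice of rows is $(-1)^{\sum r \,+\, k(k-1)/2}$, which is the same convention as the 1-based one in the text. The matrix entries are arbitrary sample values.

```python
from fractions import Fraction
from itertools import combinations, permutations

def det(m):
    # Leibniz expansion; fine for small matrices (det of a 0x0 matrix is 1).
    n = len(m)
    total = Fraction(0)
    for p in permutations(range(n)):
        sign, term = 1, Fraction(1)
        for i in range(n):
            term *= m[i][p[i]]
            for j in range(i + 1, n):
                if p[i] > p[j]:
                    sign = -sign
        total += sign * term
    return total

def laplace_first_k(M, k):
    # Expand det(M) by k x k sub-determinants of the first k columns.
    n = len(M)
    total = Fraction(0)
    for rows in combinations(range(n), k):
        comp = [r for r in range(n) if r not in rows]
        sign = (-1) ** (sum(rows) + k * (k - 1) // 2)
        L = [[M[r][c] for c in range(k)] for r in rows]
        Lc = [[M[r][c] for c in range(k, n)] for r in comp]
        total += sign * det(L) * det(Lc)
    return total

M = [[3, 1, 4, 1],
     [5, 9, 2, 6],
     [5, 3, 5, 8],
     [9, 7, 9, 3]]
detM = det(M)

for k in (1, 2, 3, 4):
    assert laplace_first_k(M, k) == detM
```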

Standard Resultant Formula

Given two algebraic curves $F(x,y) = 0$ and $G(x,y) = 0$ their resultant with respect to $y$ is a polynomial $\mathfrak{R}(x)$ that can be written \begin{equation} \mathfrak{R}(x) \space = \space \resultant_y(F,G) \space = \space \operatorname{lead}_y(F)^n \operatorname{lead}_y(G)^m \prod_{i=1}^m \prod_{j=1}^n (\psi_i(x) - \phi_j(x)) \space = \space \operatorname{lead}_y(F)^n \prod_{i=1}^m G(x,\psi_i(x)) \space = \space (-1)^{mn} \operatorname{lead}_y(G)^m \prod_{i=1}^n F(x,\phi_i(x)) \end{equation} where $m=\deg_y(F)$, $n=\deg_y(G)$ and $\psi_i(x)$ are the $m$ roots of $F(x,\psi_i(x)) = 0$ and $\phi_i(x)$ are the $n$ roots of $G(x,\phi_i(x)) = 0$.

When $n = 1$ we have the simpler formula \begin{equation} \mathfrak{R}(x) \space = \space \resultant_y(F,G) \space = \space \operatorname{lead}_y(F) \prod_{i=1}^m G(x,\psi_i(x)) \space = \space (-1)^m \operatorname{lead}_y(G)^m F(x,\phi(x)) \end{equation} where $G(x,\phi(x)) = 0$.
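The $n = 1$ case can be checked on a hypothetical pair of curves whose $y$-roots are known in closed form, say $F(x,y) = (y-x)(y-2x)$ and $G(x,y) = y - x^2$, so that $\psi_1 = x$, $\psi_2 = 2x$, $\phi = x^2$ and both leading coefficients are $1$:

```python
from fractions import Fraction

def F(x, y):
    # F(x, y) = (y - x)(y - 2x): psi_1 = x, psi_2 = 2x, lead_y(F) = 1, m = 2
    return (y - x) * (y - 2 * x)

def G(x, y):
    # G(x, y) = y - x^2: phi = x^2, lead_y(G) = 1, n = 1
    return y - x ** 2

m = 2  # deg_y(F)
results = []
for x in map(Fraction, (2, 3, -1, 7)):
    r1 = G(x, x) * G(x, 2 * x)      # lead_y(F)^1 * prod_i G(x, psi_i)
    r2 = (-1) ** m * F(x, x ** 2)   # (-1)^m * lead_y(G)^m * F(x, phi)
    assert r1 == r2                 # both compute the resultant at x
    results.append(r1)
```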

In homogeneous coordinates \begin{equation*} F\left( x_3 \begin{vmatrix} x_1 & z_1 \\ x_2 & z_2 \\ \end{vmatrix}, \space x_3 \begin{vmatrix} y_1 & z_1 \\ y_2 & z_2 \\ \end{vmatrix} \space + \space z_3 \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \\ \end{vmatrix}, \space z_3 \begin{vmatrix} x_1 & z_1 \\ x_2 & z_2 \\ \end{vmatrix} \right) \enspace = \enspace z_1^2 z_2^2 \begin{vmatrix} x_1 & z_1 \\ x_3 & z_3 \\ \end{vmatrix} \begin{vmatrix} x_2 & z_2 \\ x_3 & z_3 \\ \end{vmatrix} \begin{vmatrix} X & Z \\ x_3 & z_3 \\ \end{vmatrix} \enspace - \enspace \begin{vmatrix} x_2 & z_2 \\ x_3 & z_3 \\ \end{vmatrix}^3 F(x_1,y_1,z_1) \enspace + \enspace \begin{vmatrix} x_1 & z_1 \\ x_3 & z_3 \\ \end{vmatrix}^3 F(x_2,y_2,z_2) \end{equation*}
