In this section we look at some consequences of the Vandermonde determinant formula.
The Vandermonde determinant formula is \begin{equation} \label{eq:vandermonde} \begin{vmatrix} 1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\ 1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & x_n & x_n^2 & \cdots & x_n^{n-1} \\ \end{vmatrix} \space = \space \prod\limits_{1\le i\lt j \le n} (x_j-x_i) \end{equation}
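As a quick sanity check, here is a short sympy script (mine, not from any library) verifying \eqref{eq:vandermonde} symbolically for $n = 4$:

```python
# Symbolic check of the Vandermonde formula for n = 4 using sympy.
from sympy import symbols, Matrix, prod, simplify

n = 4
x = symbols(f'x1:{n+1}')                       # x1, x2, x3, x4

V = Matrix(n, n, lambda i, j: x[i]**j)         # rows 1, x_i, x_i^2, x_i^3
rhs = prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
assert simplify(V.det() - rhs) == 0
```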
Basis Formula For Polynomials
A similar formula holds when the monomials $x^i$ are replaced by an arbitrary basis $P_i(x)$ of the $n$-dimensional vector space of polynomials of degree less than $n$. If \begin{equation} \label{eq:p} P_i(x) = \sum\limits_{j=0}^{n-1} c_{ij} x^j \end{equation} then we have \begin{equation} \label{eq:basis} \begin{vmatrix} P_1(x_1) & P_2(x_1) & \cdots & P_n(x_1) \\ P_1(x_2) & P_2(x_2) & \cdots & P_n(x_2) \\ \vdots & \vdots & \ddots & \vdots \\ P_1(x_n) & P_2(x_n) & \cdots & P_n(x_n) \\ \end{vmatrix} \space = \space \det(c_{ij}) \cdot \prod\limits_{i\lt j} (x_j-x_i) \end{equation} This formula follows directly from equation \eqref{eq:vandermonde} by writing the matrix on the LHS of \eqref{eq:basis} as the product of the matrix $x_i^j$ with the transpose of the matrix $c_{ij}$ and taking the determinant.
Taking the limit as $x_i \rightarrow x$, after replacing the rows by divided differences so that both sides stay finite, gives \begin{equation} \label{eq:basislimit} \begin{vmatrix} P_1(x) & P_2(x) & \cdots & P_n(x) \\ P_1'(x) & P_2'(x) & \cdots & P_n'(x) \\ \vdots & \vdots & \ddots & \vdots \\ P_1^{[n-1]}(x) & P_2^{[n-1]}(x) & \cdots & P_n^{[n-1]}(x) \\ \end{vmatrix} \space = \space \det(c_{ij}) \cdot \prod\limits_{i=0}^{n-1} i! \end{equation}
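A sketch of a sympy check of \eqref{eq:basislimit}, using a random integer matrix $c_{ij}$ to define the basis (the coefficient values and seed are arbitrary choices of mine):

```python
# Check of the confluent formula: rows are successive derivatives of the P_i.
from sympy import symbols, Matrix, diff, factorial, prod, simplify, randMatrix

n = 4
x = symbols('x')
c = randMatrix(n, n, min=-5, max=5, seed=1)    # c[i, j] = coefficient of x^j in P_{i+1}
P = [sum(c[i, j] * x**j for j in range(n)) for i in range(n)]

W = Matrix(n, n, lambda i, j: diff(P[j], x, i))    # row i holds the i-th derivatives
assert simplify(W.det() - c.det() * prod(factorial(i) for i in range(n))) == 0
```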
Depleted Vandermonde Determinant Formula
If we knock out one row and one column from a Vandermonde matrix we can still compute its determinant with a formula very similar to those above, for example \begin{equation} \label{eq:minormiracle} \begin{vmatrix} 1 & x_1 & x_1^2 & x_1^4 \\ 1 & x_2 & x_2^2 & x_2^4 \\ 1 & x_3 & x_3^2 & x_3^4 \\ 1 & x_4 & x_4^2 & x_4^4 \\ \end{vmatrix} \space = \space (x_1 + x_2 + x_3 + x_4) \cdot \prod\limits_{i\lt j} (x_j-x_i) \end{equation}
This identity follows by expanding each side of the usual Vandermonde determinant formula as a polynomial in $t$ and equating coefficients. \begin{equation} \label{eq:proof} \begin{vmatrix} 1 & x_1 & x_1^2 & x_1^3 & x_1^4 \\ 1 & x_2 & x_2^2 & x_2^3 & x_2^4 \\ 1 & x_3 & x_3^2 & x_3^3 & x_3^4 \\ 1 & x_4 & x_4^2 & x_4^3 & x_4^4 \\ 1 & t & t^2 & t^3 & t^4 \\ \end{vmatrix} \space = \space (t - x_1)(t - x_2)(t - x_3)(t - x_4) \cdot \prod\limits_{i\lt j} (x_j-x_i) \end{equation} The LHS of \eqref{eq:minormiracle} is the coefficient of $-t^3$ on the LHS of \eqref{eq:proof} and the RHS of \eqref{eq:minormiracle} is the coefficient of $-t^3$ on the RHS of \eqref{eq:proof}.
The following two formulae are derived similarly \begin{equation} \label{eq:minormiracle2} \begin{vmatrix} 1 & x_1 & x_1^3 & x_1^4 \\ 1 & x_2 & x_2^3 & x_2^4 \\ 1 & x_3 & x_3^3 & x_3^4 \\ 1 & x_4 & x_4^3 & x_4^4 \\ \end{vmatrix} \space = \space \Big( \sum\limits_{i \lt j} x_i x_j \Big) \cdot \prod\limits_{i\lt j} (x_j-x_i) \end{equation}
and \begin{equation} \label{eq:minormiracle3} \begin{vmatrix} 1 & x_1^2 & x_1^3 & x_1^4 \\ 1 & x_2^2 & x_2^3 & x_2^4 \\ 1 & x_3^2 & x_3^3 & x_3^4 \\ 1 & x_4^2 & x_4^3 & x_4^4 \\ \end{vmatrix} \space = \space \Big( \sum\limits_{i \lt j \lt k} x_i x_j x_k \Big) \cdot \prod\limits_{i\lt j} (x_j-x_i) \end{equation}
These formulae are special cases of Jacobi's bialternant formula.
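These depleted determinants are easy to test; here is a small sympy loop (my own script) checking that deleting the $x^k$ column produces $e_{4-k}$ times the Vandermonde product:

```python
# Delete the x^k column (k = 3, 2, 1) from the exponents 0..4 and compare
# against e_{4-k} times the Vandermonde product.
from sympy import symbols, Matrix, prod, simplify
from itertools import combinations

n = 4
x = symbols(f'x1:{n+1}')
vdm = prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))

def e(k):                                       # elementary symmetric polynomial e_k
    return sum(prod(c) for c in combinations(x, k))

for k in (3, 2, 1):
    cols = [p for p in range(n + 1) if p != k]  # exponents kept
    M = Matrix(n, n, lambda i, j: x[i]**cols[j])
    assert simplify(M.det() - e(n - k) * vdm) == 0
```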
Dual Vandermonde Matrix
Form an $n \times n$ matrix whose $i$-th row consists of the elementary symmetric polynomials $e_{n-1}, e_{n-2}, \ldots, e_1, e_0$ on the $n-1$ variables with $x_i$ omitted. Call this the dual Vandermonde matrix; it has the same determinant as the usual Vandermonde matrix: \begin{equation} \label{eq:dualvan} \begin{vmatrix} e_{n-1}(\hat{x}_1) & e_{n-2}(\hat{x}_1) & \cdots & e_0(\hat{x}_1) \\ e_{n-1}(\hat{x}_2) & e_{n-2}(\hat{x}_2) & \cdots & e_0(\hat{x}_2) \\ \vdots & \vdots & \ddots & \vdots \\ e_{n-1}(\hat{x}_n) & e_{n-2}(\hat{x}_n) & \cdots & e_0(\hat{x}_n) \\ \end{vmatrix} \space = \space \prod\limits_{i\lt j} (x_j-x_i) \end{equation} where $e_k(\hat{x}_i)$ denotes the $k$-th elementary symmetric polynomial on the variables $x_1, \ldots, x_n$ with $x_i$ omitted.
This formula is a very special case of the second Jacobi-Trudi formula.
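A minimal sympy check of the dual Vandermonde determinant for $n = 4$ (the helper names are mine):

```python
# Dual Vandermonde: row i holds e_{n-1}, ..., e_1, e_0 of the variables with x_i omitted.
from sympy import symbols, Matrix, prod, simplify
from itertools import combinations

n = 4
x = symbols(f'x1:{n+1}')

def e(k, vs):                                   # elementary symmetric polynomial e_k(vs)
    return sum(prod(c) for c in combinations(vs, k))

D = Matrix(n, n, lambda i, j: e(n - 1 - j, [v for v in x if v != x[i]]))
vdm = prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
assert simplify(D.det() - vdm) == 0
```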
Double Vandermonde Determinant Formulae Variant 1
The $n \times n$ matrix $m_{ij} = \sum\limits_{k,l} c_{kl} \thinspace x_i^k \thinspace y_j^l$ of polynomials factors into the product of three matrices and its determinant is given by \begin{equation} \label{eq:dblvan0} \det(m_{ij}) \space = \space \det(c_{kl}) \cdot \prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i) \end{equation}
When $c_{ij}$ is the matrix with binomial coefficients down the anti-diagonal and zeroes elsewhere we get \begin{equation} \label{eq:dblvan1} \begin{vmatrix} (x_1+y_1)^{n-1} & (x_1+y_2)^{n-1} & \cdots & (x_1+y_n)^{n-1} \\ (x_2+y_1)^{n-1} & (x_2+y_2)^{n-1} & \cdots & (x_2+y_n)^{n-1} \\ \vdots & \vdots & \ddots & \vdots \\ (x_n+y_1)^{n-1} & (x_n+y_2)^{n-1} & \cdots & (x_n+y_n)^{n-1} \\ \end{vmatrix} \space = \space (-1)^{\sfrac 1 2 n(n-1)} \cdot \prod\limits_{i=0}^{n-1}{n-1 \choose i} \cdot \prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i) \end{equation}
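And a sympy check of \eqref{eq:dblvan1} for $n = 3$:

```python
# det[(x_i + y_j)^(n-1)] against the signed product of binomials and Vandermonde factors.
from sympy import symbols, Matrix, binomial, prod, simplify

n = 3
x = symbols(f'x1:{n+1}')
y = symbols(f'y1:{n+1}')

M = Matrix(n, n, lambda i, j: (x[i] + y[j])**(n - 1))
rhs = ((-1)**(n * (n - 1) // 2)
       * prod(binomial(n - 1, i) for i in range(n))
       * prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
       * prod(y[j] - y[i] for i in range(n) for j in range(i + 1, n)))
assert simplify(M.det() - rhs) == 0
```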
Reduced Double Vandermonde Determinant Formula
A simple transformation of \eqref{eq:dblvan1} yields
Double Vandermonde Determinant Formulae Variant 2
Multiplying a Vandermonde matrix in $x_i$ by the transpose of a dual Vandermonde matrix in $y_i$ gives the matrix with entries $\prod_{k \ne j} (x_i + y_k)$; dividing the $i$-th row by $\prod_k (x_i + y_k)$ then gives Cauchy's double alternant \begin{equation} \label{eq:dblvan2} \begin{vmatrix} (x_1+y_1)^{-1} & (x_1+y_2)^{-1} & \cdots & (x_1+y_n)^{-1} \\ (x_2+y_1)^{-1} & (x_2+y_2)^{-1} & \cdots & (x_2+y_n)^{-1} \\ \vdots & \vdots & \ddots & \vdots \\ (x_n+y_1)^{-1} & (x_n+y_2)^{-1} & \cdots & (x_n+y_n)^{-1} \\ \end{vmatrix} \space = \space \frac {\prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i)} {\prod\limits_{i,j} (x_i + y_j)} \end{equation}
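A sympy check of \eqref{eq:dblvan2} for $n = 3$:

```python
# Cauchy's double alternant: det[1/(x_i + y_j)] for n = 3.
from sympy import symbols, Matrix, prod, cancel

n = 3
x = symbols(f'x1:{n+1}')
y = symbols(f'y1:{n+1}')

C = Matrix(n, n, lambda i, j: 1 / (x[i] + y[j]))
num = (prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
       * prod(y[j] - y[i] for i in range(n) for j in range(i + 1, n)))
den = prod(x[i] + y[j] for i in range(n) for j in range(n))
assert cancel(C.det() - num / den) == 0
```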
Zeta-Sigma Determinant Formula
The polynomial analog (and limiting case) of the Weierstrass $\zeta$ and $\sigma$ function determinant formula[1] is
\begin{equation} \begin{vmatrix} 0 & 1 & 1 & \cdots & 1 \\ 1 & (x_1+y_1)^{-1} & (x_1+y_2)^{-1} & \cdots & (x_1+y_n)^{-1} \\ 1 & (x_2+y_1)^{-1} & (x_2+y_2)^{-1} & \cdots & (x_2+y_n)^{-1} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & (x_n+y_1)^{-1} & (x_n+y_2)^{-1} & \cdots & (x_n+y_n)^{-1} \\ \end{vmatrix} \space = \space - \frac {\sum\limits_i (x_i + y_i) \cdot \prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i)} {\prod\limits_{i,j} (x_i + y_j)} \end{equation}
This formula can be obtained by computing the power series expansion in $t$ of \eqref{eq:dblvan2} at $n+1$ with $x_{n+1} = y_{n+1} = 1/t$. Using \begin{equation*} \frac 1 {x + 1/t} \enspace = \enspace t \space - \space xt^2 \space + \space \bigO(t^3) \end{equation*} the left hand side of \eqref{eq:dblvan2} at $n+1$ is \begin{equation*} \begin{vmatrix} (x_1+y_1)^{-1} & \cdots & (x_1+y_n)^{-1} & t + \bigO(t^2) \\ (x_2+y_1)^{-1} & \cdots & (x_2+y_n)^{-1} & t + \bigO(t^2) \\ \vdots & \ddots & \vdots & \vdots \\ t + \bigO(t^2) & \cdots & t + \bigO(t^2) & \tfrac 1 2 t \\ \end{vmatrix} \enspace = \enspace \begin{vmatrix} (x_1+y_1)^{-1} & \cdots & (x_1+y_n)^{-1} & 0 \\ (x_2+y_1)^{-1} & \cdots & (x_2+y_n)^{-1} & 0 \\ \vdots & \ddots & \vdots & \vdots \\ 0 & \cdots & 0 & \tfrac 1 2 \\ \end{vmatrix} \thinspace t \enspace + \enspace \begin{vmatrix} (x_1+y_1)^{-1} & \cdots & (x_1+y_n)^{-1} & 1 \\ (x_2+y_1)^{-1} & \cdots & (x_2+y_n)^{-1} & 1 \\ \vdots & \ddots & \vdots & \vdots \\ 1 & \cdots & 1 & 0 \\ \end{vmatrix} \thinspace t^2 \enspace + \enspace \bigO(t^3) \end{equation*} and the right hand side of \eqref{eq:dblvan2} at $n+1$ is \begin{equation*} \frac t 2 \prod_{i=1}^n \frac {(1 - tx_i)(1 - ty_i)} {(1 + tx_i)(1 + ty_i)} \cdot \Delta \enspace = \enspace \frac t 2 \left[1 \enspace - \enspace 2t\sum_{i=1}^n (x_i + y_i) \enspace + \enspace \bigO(t^2) \right] \cdot \Delta \end{equation*} where $\Delta$ is the right hand side of \eqref{eq:dblvan2} at $n$. The result follows by equating the coefficients of the $t^2$ terms.
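The bordered determinant itself can also be checked directly in sympy (script mine), here for $n = 3$:

```python
# Bordered Cauchy matrix: zero corner, borders of ones, entries 1/(x_i + y_j).
from sympy import symbols, Matrix, prod, cancel

n = 3
x = symbols(f'x1:{n+1}')
y = symbols(f'y1:{n+1}')

def entry(i, j):
    if i == 0 and j == 0:
        return 0
    if i == 0 or j == 0:
        return 1
    return 1 / (x[i - 1] + y[j - 1])

B = Matrix(n + 1, n + 1, entry)
num = (sum(x[i] + y[i] for i in range(n))
       * prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
       * prod(y[j] - y[i] for i in range(n) for j in range(i + 1, n)))
den = prod(x[i] + y[j] for i in range(n) for j in range(n))
assert cancel(B.det() + num / den) == 0        # note the overall minus sign
```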
Formula for Determinant of Sub-Matrix of Minors
There is another simple depleted determinant formula that I have come across while investigating elliptic curves.
Let $M$ be a square matrix of order $n$ and let $\minors{M}$ denote the matrix of minors of $M$. Let overbar denote a sub-matrix operation which selects $m$ rows and $m$ columns and underbar denote the complementary sub-matrix operation which selects the remaining $n - m$ rows and columns. Then \begin{equation} \label{eq:subdet} \det(\overline{\minors{M}}) \enspace = \enspace \det(M)^{m-1} \cdot \det(\underline{M}) \end{equation}
I have verified \eqref{eq:subdet} using a CAS for $n=3,4,5$; presumably a proof goes something like this 🤔
For example, for the matrix \begin{equation*} M \enspace = \enspace \begin{pmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ a_4 & b_4 & c_4 & d_4 \\ \end{pmatrix} \end{equation*} and the sub-matrix operation which selects rows 1 and 2, and columns 1 and 2 of $\minors{M}$, we have \begin{equation} \label{eq:crossproduct} \begin{vmatrix} b_2 & c_2 & d_2 \\ b_3 & c_3 & d_3 \\ b_4 & c_4 & d_4 \\ \end{vmatrix} \space \begin{vmatrix} a_1 & c_1 & d_1 \\ a_3 & c_3 & d_3 \\ a_4 & c_4 & d_4 \\ \end{vmatrix} \enspace - \enspace \begin{vmatrix} b_1 & c_1 & d_1 \\ b_3 & c_3 & d_3 \\ b_4 & c_4 & d_4 \\ \end{vmatrix} \space \begin{vmatrix} a_2 & c_2 & d_2 \\ a_3 & c_3 & d_3 \\ a_4 & c_4 & d_4 \\ \end{vmatrix} \enspace = \enspace \begin{vmatrix} a_1 & b_1 & c_1 & d_1 \\ a_2 & b_2 & c_2 & d_2 \\ a_3 & b_3 & c_3 & d_3 \\ a_4 & b_4 & c_4 & d_4 \\ \end{vmatrix} \space \begin{vmatrix} c_3 & d_3 \\ c_4 & d_4 \\ \end{vmatrix} \end{equation} This can also be written as a formula for the determinant of a sub-matrix of the inverse of $M$ \begin{equation} \label{eq:subdet2} \det(\overline{M^{-1}}) \enspace = \enspace (-1)^{\delta} \det(M)^{-1} \det(\underline{M^T}) \end{equation} where $\delta$ is the sum of row and column indices of the overbar operation.
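In the same spirit as the CAS verification mentioned above, here is a sympy script (the matrix entries are arbitrary choices of mine) checking both \eqref{eq:subdet} and \eqref{eq:subdet2} for $n = 4$, $m = 2$:

```python
# Check eq. subdet and eq. subdet2 on a concrete invertible 4x4 integer matrix.
from sympy import Matrix

n, m = 4, 2
M = Matrix([[2, 1, 0, 3], [1, 4, 2, 1], [0, 2, 5, 1], [3, 1, 1, 6]])  # det = 27
minors = Matrix(n, n, lambda i, j: M.minor(i, j))   # unsigned (i, j) minors

R, C = [0, 1], [0, 2]                               # overbar: rows 1,2 and cols 1,3 (1-indexed)
Rc = [i for i in range(n) if i not in R]            # underbar: the complementary
Cc = [j for j in range(n) if j not in C]            # rows and columns

assert minors[R, C].det() == M.det()**(m - 1) * M[Rc, Cc].det()

delta = sum(r + 1 for r in R) + sum(c + 1 for c in C)
assert M.inv()[R, C].det() == (-1)**delta * M.T[Rc, Cc].det() / M.det()
```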
Discriminant of Polynomial
Vandermonde determinants also give a formula for the discriminant of a polynomial in terms of the sums of powers of roots.
Let $P(x) = (x-e_1)(x-e_2)(x-e_3)(x-e_4)$ and $s_k = \sum e_i^k$; then, using the product formula for determinants, we get \begin{equation} \discrim(P) \space = \space \prod_{i \lt j} (e_i - e_j)^2 \space = \space \begin{vmatrix} 1 & e_1 & e_1^2 & e_1^3 \\ 1 & e_2 & e_2^2 & e_2^3 \\ 1 & e_3 & e_3^2 & e_3^3 \\ 1 & e_4 & e_4^2 & e_4^3 \\ \end{vmatrix}^2 \space = \space \begin{vmatrix} s_0 & s_1 & s_2 & s_3 \\ s_1 & s_2 & s_3 & s_4 \\ s_2 & s_3 & s_4 & s_5 \\ s_3 & s_4 & s_5 & s_6 \\ \end{vmatrix} \end{equation}
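A sympy check of this Hankel form of the discriminant:

```python
# Power-sum Hankel determinant equals the squared Vandermonde in the roots.
from sympy import symbols, Matrix, prod, simplify

e = symbols('e1:5')                              # the four roots e1..e4
s = [sum(ei**k for ei in e) for k in range(7)]   # power sums s_0 .. s_6

H = Matrix(4, 4, lambda i, j: s[i + j])
disc = prod((e[i] - e[j])**2 for i in range(4) for j in range(i + 1, 4))
assert simplify(H.det() - disc) == 0
```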
Generalised Laplace Determinant Expansion Formula
The Laplace expansion of a determinant by cofactors of the first column can be written like this (writing $.$ instead of zero for clarity) \begin{equation*} \begin{aligned} \begin{vmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \\ d_1 & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace &= \enspace \begin{vmatrix} a_1 & . & . & . \\ . & b_2 & b_3 & b_4 \\ . & c_2 & c_3 & c_4 \\ . & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} . & a_2 & a_3 & a_4 \\ b_1 & . & . & . \\ . & c_2 & c_3 & c_4 \\ . & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} . & a_2 & a_3 & a_4 \\ . & b_2 & b_3 & b_4 \\ c_1 & . & . & . \\ . & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} . & a_2 & a_3 & a_4 \\ . & b_2 & b_3 & b_4 \\ . & c_2 & c_3 & c_4 \\ d_1 & . & . & . \\ \end{vmatrix} \\\\ &= \enspace a_1 \begin{vmatrix} b_2 & b_3 & b_4 \\ c_2 & c_3 & c_4 \\ d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace - \enspace b_1 \begin{vmatrix} a_2 & a_3 & a_4 \\ c_2 & c_3 & c_4 \\ d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace c_1 \begin{vmatrix} a_2 & a_3 & a_4 \\ b_2 & b_3 & b_4 \\ d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace - \enspace d_1 \begin{vmatrix} a_2 & a_3 & a_4 \\ b_2 & b_3 & b_4 \\ c_2 & c_3 & c_4 \\ \end{vmatrix} \\ \end{aligned} \end{equation*} In a similar way the determinant can be expanded by sub-determinants of the first two columns like this \begin{equation*} \begin{aligned} \begin{vmatrix} a_1 & a_2 & a_3 & a_4 \\ b_1 & b_2 & b_3 & b_4 \\ c_1 & c_2 & c_3 & c_4 \\ d_1 & d_2 & d_3 & d_4 \\ \end{vmatrix} \enspace &= \enspace \begin{vmatrix} a_1 & a_2 & . & . \\ b_1 & b_2 & . & . \\ . & . & c_3 & c_4 \\ . & . & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} a_1 & a_2 & . & . \\ . & . & b_3 & b_4 \\ c_1 & c_2 & . & . \\ . & . & d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} a_1 & a_2 & . & . \\ . & . & b_3 & b_4 \\ . & . & c_3 & c_4 \\ d_1 & d_2 & . & . \\ \end{vmatrix} \enspace + \enspace \textsf{3 more terms} \\\\ &= \enspace \begin{vmatrix} a_1 & a_2 \\ b_1 & b_2 \\ \end{vmatrix} \begin{vmatrix} c_3 & c_4 \\ d_3 & d_4 \\ \end{vmatrix} \enspace - \enspace \begin{vmatrix} a_1 & a_2 \\ c_1 & c_2 \\ \end{vmatrix} \begin{vmatrix} b_3 & b_4 \\ d_3 & d_4 \\ \end{vmatrix} \enspace + \enspace \begin{vmatrix} a_1 & a_2 \\ d_1 & d_2 \\ \end{vmatrix} \begin{vmatrix} b_3 & b_4 \\ c_3 & c_4 \\ \end{vmatrix} \enspace + \enspace \textsf{3 more terms} \\ \end{aligned} \end{equation*} More generally
let $M$ be an $n \times n$ matrix, let $L$ be a $k \times k$ sub-matrix formed from the first $k$ columns of $M$ and a choice of $k$ rows, let $L'$ be the complementary sub-matrix formed from the remaining $n-k$ rows and the last $n-k$ columns, and let $n_L$ be the sum of the row and column indices of $L$; then the determinant of $M$ is given by summing over all the $n \choose k$ possible sub-matrices $L$ \begin{equation} \det(M) \enspace = \enspace \sum_L (-1)^{n_L}\det(L)\det(L') \end{equation} When $k = 1$ this is the Laplace formula for computing the determinant using cofactors of the first column.
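A short implementation of this expansion (the function name and seed are mine), checked against sympy's determinant:

```python
# Generalised Laplace expansion along the first k columns, summing over all
# C(n, k) choices of rows; the sign uses 0-indexed row/column sums, which
# has the same parity as the 1-indexed n_L above.
from sympy import randMatrix
from itertools import combinations

def laplace_first_k_cols(M, k):
    n = M.rows
    total = 0
    for rows in combinations(range(n), k):
        comp = [i for i in range(n) if i not in rows]
        sign = (-1)**(sum(rows) + sum(range(k)))
        total += sign * M[list(rows), list(range(k))].det() * M[comp, list(range(k, n))].det()
    return total

M = randMatrix(5, 5, min=-9, max=9, seed=7)
for k in (1, 2, 3):
    assert laplace_first_k_cols(M, k) == M.det()
```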
Standard Resultant Formula
Given two algebraic curves $F(x,y) = 0$ and $G(x,y) = 0$ their resultant with respect to $y$ is a polynomial $\mathfrak{R}(x)$ that can be written \begin{equation} \mathfrak{R}(x) \space = \space \resultant_y(F,G) \space = \space \operatorname{lead}_y(F)^n \operatorname{lead}_y(G)^m \prod_{i=1}^m \prod_{j=1}^n (\psi_i(x) - \phi_j(x)) \space = \space \operatorname{lead}_y(F)^n \prod_{i=1}^m G(x,\psi_i(x)) \space = \space (-1)^{mn} \operatorname{lead}_y(G)^m \prod_{i=1}^n F(x,\phi_i(x)) \end{equation} where $m=\deg_y(F)$, $n=\deg_y(G)$, the $\psi_i(x)$ are the $m$ roots in $y$ of $F(x,y) = 0$ and the $\phi_i(x)$ are the $n$ roots in $y$ of $G(x,y) = 0$.
When $n = 1$ we have the simpler formula \begin{equation} \mathfrak{R}(x) \space = \space \resultant_y(F,G) \space = \space \operatorname{lead}_y(F) \prod_{i=1}^m G(x,\psi_i(x)) \space = \space (-1)^m \operatorname{lead}_y(G)^m F(x,\phi(x)) \end{equation} where $G(x,\phi(x)) = 0$.
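As a sketch, the $n = 1$ case is easy to test against sympy's built-in resultant (the particular $F$, $G$ and $\phi$ below are arbitrary choices of mine):

```python
# Check R(x) = (-1)^m * lead_y(G)^m * F(x, phi(x)) for G = y - phi(x), m = 3.
from sympy import symbols, resultant, expand

x, y = symbols('x y')
phi = x**2 + 3
F = y**3 - 7*x*y - (x**3 + 2*x + 1)   # m = deg_y(F) = 3, lead_y(F) = 1
G = y - phi                           # n = 1, lead_y(G) = 1

assert expand(resultant(F, G, y) - (-1)**3 * F.subs(y, phi)) == 0
```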
In homogeneous coordinates \begin{equation*} F\left( x_3 \begin{vmatrix} x_1 & z_1 \\ x_2 & z_2 \\ \end{vmatrix}, \space x_3 \begin{vmatrix} y_1 & z_1 \\ y_2 & z_2 \\ \end{vmatrix} \space + \space z_3 \begin{vmatrix} x_1 & y_1 \\ x_2 & y_2 \\ \end{vmatrix}, \space z_3 \begin{vmatrix} x_1 & z_1 \\ x_2 & z_2 \\ \end{vmatrix} \right) \enspace = \enspace z_1^2 z_2^2 \begin{vmatrix} x_1 & z_1 \\ x_3 & z_3 \\ \end{vmatrix} \begin{vmatrix} x_2 & z_2 \\ x_3 & z_3 \\ \end{vmatrix} \begin{vmatrix} X & Z \\ x_3 & z_3 \\ \end{vmatrix} \enspace - \enspace \begin{vmatrix} x_2 & z_2 \\ x_3 & z_3 \\ \end{vmatrix}^3 F(x_1,y_1,z_1) \enspace + \enspace \begin{vmatrix} x_1 & z_1 \\ x_3 & z_3 \\ \end{vmatrix}^3 F(x_2,y_2,z_2) \end{equation*}
References
[1] G. Frobenius and L. Stickelberger, Zur Theorie der elliptischen Functionen, J. reine angew. Math. 83 (1877), 175–179.