In this section we look at some consequences of the Vandermonde determinant formula, which states that
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & \cdots & x_1^{n-1} \\
1 & x_2 & x_2^2 & \cdots & x_2^{n-1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & x_n & x_n^2 & \cdots & x_n^{n-1} \\
\end{vmatrix} \space = \space \prod\limits_{1\le i\lt j \le n} (x_j-x_i)
\end{equation*}
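This is easy to check with a computer algebra system. Here is a quick sympy sketch for $n = 4$ (illustrative only; the use of sympy and the variable names are my choice):

```python
from sympy import symbols, Matrix, prod, expand

# Symbolically verify the Vandermonde determinant formula for n = 4.
n = 4
x = symbols('x1:5')  # x1, x2, x3, x4
V = Matrix(n, n, lambda i, j: x[i]**j)  # row i is 1, x_i, x_i^2, x_i^3
rhs = prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
assert expand(V.det() - rhs) == 0
```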
Basis Formula For Polynomials
A similar formula holds when the monomials $x^i$ are replaced by an arbitrary polynomial basis $P_i(x)$ of the $n$-dimensional vector space of polynomials of degree less than $n$. If
\begin{equation*}
P_i(x) = \sum\limits_{j=0}^{n-1} c_{ij} x^j
\end{equation*}
then we have
\begin{equation*}
\begin{vmatrix}
P_1(x_1) & P_2(x_1) & \cdots & P_n(x_1) \\
P_1(x_2) & P_2(x_2) & \cdots & P_n(x_2) \\
\vdots & \vdots & \ddots & \vdots \\
P_1(x_n) & P_2(x_n) & \cdots & P_n(x_n) \\
\end{vmatrix} \space = \space \det(c_{ij}) \cdot \prod\limits_{i\lt j} (x_j-x_i)
\end{equation*}
This formula follows directly from the Vandermonde formula by writing the matrix on the LHS as the product of the two matrices $(c_{ij})$ and $(x_i^j)$ and taking determinants.
Taking the limit as $x_i \rightarrow x$ gives
\begin{equation*}
\begin{vmatrix}
P_1(x) & P_2(x) & \cdots & P_n(x) \\
P_1'(x) & P_2'(x) & \cdots & P_n'(x) \\
\vdots & \vdots & \ddots & \vdots \\
P_1^{(n-1)}(x) & P_2^{(n-1)}(x) & \cdots & P_n^{(n-1)}(x) \\
\end{vmatrix} \space = \space \det(c_{ij}) \cdot \prod\limits_{i=0}^{n-1} i!
\end{equation*}
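A sympy sketch of this Wronskian-type identity for $n = 3$, with a hypothetical concrete coefficient matrix $c$ chosen purely for illustration:

```python
from sympy import symbols, Matrix, diff, factorial, prod, expand

# Check det of the derivative matrix against det(c) * prod of factorials.
n = 3
x = symbols('x')
c = Matrix([[1, 2, 0], [3, 1, 4], [0, 5, 2]])  # hypothetical coefficients c_ij
P = [sum(c[i, j] * x**j for j in range(n)) for i in range(n)]
W = Matrix(n, n, lambda i, j: diff(P[j], x, i))  # row i holds i-th derivatives
rhs = c.det() * prod(factorial(i) for i in range(n))
assert expand(W.det() - rhs) == 0
```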
Depleted Vandermonde Determinant Formula
If we knock out one row and one column from a Vandermonde matrix we can still compute its determinant with a formula very similar to those above, for example
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & x_1^4 \\
1 & x_2 & x_2^2 & x_2^4 \\
1 & x_3 & x_3^2 & x_3^4 \\
1 & x_4 & x_4^2 & x_4^4 \\
\end{vmatrix} \space = \space \left(x_1+x_2+x_3+x_4\right) \cdot \prod\limits_{i\lt j} (x_j-x_i)
\end{equation*}
This identity follows by expanding each side of the usual Vandermonde determinant formula, with an extra variable $t$ adjoined, as a polynomial in $t$ and equating coefficients.
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & x_1^3 & x_1^4 \\
1 & x_2 & x_2^2 & x_2^3 & x_2^4 \\
1 & x_3 & x_3^2 & x_3^3 & x_3^4 \\
1 & x_4 & x_4^2 & x_4^3 & x_4^4 \\
1 & t & t^2 & t^3 & t^4 \\
\end{vmatrix} \space = \space (t - x_1)(t - x_2)(t - x_3)(t - x_4) \cdot \prod\limits_{i\lt j} (x_j-x_i)
\end{equation*}
The LHS of the depleted identity above is the coefficient of $-t^3$ on the LHS of this expansion, and the RHS is the coefficient of $-t^3$ on the RHS.
The following two formulae are derived similarly
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^3 & x_1^4 \\
1 & x_2 & x_2^3 & x_2^4 \\
1 & x_3 & x_3^3 & x_3^4 \\
1 & x_4 & x_4^3 & x_4^4 \\
\end{vmatrix} \space = \space \left(x_1x_2+x_1x_3+x_1x_4+x_2x_3+x_2x_4+x_3x_4\right) \cdot \prod\limits_{i\lt j} (x_j-x_i)
\end{equation*}
and
\begin{equation*}
\begin{vmatrix}
1 & x_1^2 & x_1^3 & x_1^4 \\
1 & x_2^2 & x_2^3 & x_2^4 \\
1 & x_3^2 & x_3^3 & x_3^4 \\
1 & x_4^2 & x_4^3 & x_4^4 \\
\end{vmatrix} \space = \space \left(x_1x_2x_3+x_1x_2x_4+x_1x_3x_4+x_2x_3x_4\right) \cdot \prod\limits_{i\lt j} (x_j-x_i)
\end{equation*}
These formulae are special cases of Jacobi's bialternant formula.
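All three depleted formulae for $n = 4$ follow the same pattern: dropping the $x^{4-m}$ column leaves a factor $e_m$. A sympy sketch checking all three:

```python
from itertools import combinations
from sympy import symbols, Matrix, prod, expand

x = symbols('x1:5')
vdm = prod(x[j] - x[i] for i in range(4) for j in range(i + 1, 4))

def e(m):  # elementary symmetric polynomial of degree m in x1..x4
    return sum(prod(c) for c in combinations(x, m))

# Dropping the x^3, x^2, x column leaves a factor e_1, e_2, e_3 respectively.
for drop, m in [(3, 1), (2, 2), (1, 3)]:
    powers = [p for p in range(5) if p != drop]
    M = Matrix(4, 4, lambda i, j: x[i]**powers[j])
    assert expand(M.det() - e(m) * vdm) == 0
```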
Dual Vandermonde Matrix
Form an $n \times n$ matrix whose $i$-th row consists of the
elementary symmetric polynomials
in the $n-1$ variables obtained by omitting $x_i$, in order of decreasing degree.
Call this the dual Vandermonde matrix; it has the same determinant as the usual Vandermonde matrix:
\begin{equation*}
\begin{vmatrix}
x_2x_3x_4 & x_2x_3 + x_2x_4 + x_3x_4 & x_2 + x_3 + x_4 & 1 \\
x_1x_3x_4 & x_1x_3 + x_1x_4 + x_3x_4 & x_1 + x_3 + x_4 & 1 \\
x_1x_2x_4 & x_1x_2 + x_1x_4 + x_2x_4 & x_1 + x_2 + x_4 & 1 \\
x_1x_2x_3 & x_1x_2 + x_1x_3 + x_2x_3 & x_1 + x_2 + x_3 & 1 \\
\end{vmatrix} \enspace = \enspace \prod\limits_{i \lt j}(x_j - x_i)
\end{equation*}
This follows from the fact that the determinant is a homogeneous polynomial of degree $\tfrac 1 2 n(n-1)$ which vanishes when $x_i = x_j$ for $i \ne j$,
because then the $i$-th and $j$-th rows are identical.
Also, dividing the first $n-1$ columns on the LHS by $x_n$, dividing the $n-1$ factors of the form $(x_n-x_i)$ on the RHS by $x_n$,
and letting $x_n \rightarrow \infty$ converts the $n$-th formula into the $(n-1)$-th formula.
And because the formula for $n=1$ evaluates to $1$, the unknown constant on the RHS is $1$.
This formula is a very special case of the second Jacobi-Trudi formula.
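A sympy check of the dual Vandermonde determinant for $n = 4$ (an illustrative sketch, not part of the proof):

```python
from itertools import combinations
from sympy import symbols, Matrix, prod, expand

x = symbols('x1:5')

def e(vars_, m):  # elementary symmetric polynomial of degree m
    return sum(prod(c) for c in combinations(vars_, m))

rows = []
for i in range(4):
    others = [x[k] for k in range(4) if k != i]        # omit x_i
    rows.append([e(others, 3 - j) for j in range(4)])  # decreasing degree
D = Matrix(rows)
vdm = prod(x[j] - x[i] for i in range(4) for j in range(i + 1, 4))
assert expand(D.det() - vdm) == 0
```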
Double Vandermonde Determinant Formulae Variant 1
The $n \times n$ matrix $m_{ij} = \sum\limits_{k,l} c_{kl} \thinspace x_i^k \thinspace y_j^l$ of polynomials factors into the product of three matrices and
its determinant is given by
\begin{equation*}
\det\left(m_{ij}\right) \space = \space \det\left(c_{ij}\right) \cdot \prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i)
\end{equation*}
This result follows from taking the determinant of the matrix factorisation
$\displaystyle
(m_{ij}) \space = \space
\begin{pmatrix}
1 & x_1 & \ldots & x_1^{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
1 & x_n & \ldots & x_n^{n-1} \\
\end{pmatrix}
\allowbreak
\begin{pmatrix}
c_{11} & c_{12} & \ldots & c_{1n} \\
\vdots & \vdots & \ddots & \vdots \\
c_{n1} & c_{n2} & \ldots & c_{nn} \\
\end{pmatrix}
\allowbreak
\begin{pmatrix}
1 & y_1 & \ldots & y_1^{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
1 & y_n & \ldots & y_n^{n-1} \\
\end{pmatrix}^T
$
This is easier to see with tensor index notation
\begin{equation*}
M_i^j \space = \space X_i^k \thinspace C_k^l \thinspace Y_l^j
\end{equation*}
where the components of the tensors are given by $M_i^j = m_{ij}, \enspace X_i^k = x_i^k, \enspace C_k^l = c_{kl}, \enspace Y_l^j = y_j^l$.
When $c_{ij}$ is the matrix with binomial coefficients down the anti-diagonal and zeroes elsewhere we get
\begin{equation*}
\begin{vmatrix}
(x_1+y_1)^{n-1} & (x_1+y_2)^{n-1} & \cdots & (x_1+y_n)^{n-1} \\
(x_2+y_1)^{n-1} & (x_2+y_2)^{n-1} & \cdots & (x_2+y_n)^{n-1} \\
\vdots & \vdots & \ddots & \vdots \\
(x_n+y_1)^{n-1} & (x_n+y_2)^{n-1} & \cdots & (x_n+y_n)^{n-1} \\
\end{vmatrix} \space = \space (-1)^{\sfrac 1 2 n(n-1)} \cdot \prod\limits_{i=0}^{n-1}{n-1 \choose i} \cdot \prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i)
\end{equation*}
Reduced Double Vandermonde Determinant Formula
A simple transformation of the previous formula yields
\begin{equation*}
\begin{vmatrix}
0 & 1 & 1 & \cdots & 1 \\
1 & (x_1+y_1)^n & (x_1+y_2)^n & \cdots & (x_1+y_n)^n \\
1 & (x_2+y_1)^n & (x_2+y_2)^n & \cdots & (x_2+y_n)^n \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & (x_n+y_1)^n & (x_n+y_2)^n & \cdots & (x_n+y_n)^n \\
\end{vmatrix} \space = \space (-1)^{\sfrac 1 2 n(n+1)} \cdot \prod\limits_{i=0}^{n}{n \choose i} \cdot \prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i)
\end{equation*}
In the $n + 1$ case of
the previous formula, put $x_{n+1} = y_{n+1} = t$, divide each side by $t^{2n}$, let $t \rightarrow \infty$
and finally move the last column and row to the first column and row position.
Double Vandermonde Determinant Formulae Variant 2
Multiplying a Vandermonde matrix in $x_i$ by the transpose of a Dual Vandermonde matrix in $y_i$ gives
\begin{equation*}
\begin{vmatrix}
(x_1+y_1)^{-1} & (x_1+y_2)^{-1} & \cdots & (x_1+y_n)^{-1} \\
(x_2+y_1)^{-1} & (x_2+y_2)^{-1} & \cdots & (x_2+y_n)^{-1} \\
\vdots & \vdots & \ddots & \vdots \\
(x_n+y_1)^{-1} & (x_n+y_2)^{-1} & \cdots & (x_n+y_n)^{-1} \\
\end{vmatrix} \space = \space \frac {\prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i)} {\prod\limits_{i,j} (x_i+y_j)}
\end{equation*}
The inner product of the $i$-th row of the Vandermonde matrix in $x_i$, denoted by $X$, with the $j$-th row of the dual Vandermonde matrix in $y_i$, denoted by $Y$,
is given by
$\displaystyle
\left(XY^T\right)_{ij} \space = \space \sum_{k=0}^{n-1} x_i^k \thinspace e_{n-1-k}(S_j) \space = \space \prod\limits_{y \in S_j} (x_i + y) \space = \space (x_i+y_j)^{-1} \prod_{k=1}^n (x_i+y_k)
$
where $S_j = \left\{y_k: 1 \le k \le n, k \ne j\right\}$ and $e_m(S_j)$ is the
elementary symmetric polynomial of degree $m$ on the $n-1$ variables $S_j$.
Therefore
$\displaystyle
X Y^T \space = \space
\begin{pmatrix}
(x_1+y_1)^{-1} \prod\limits_{k=1}^n (x_1+y_k) & \cdots & (x_1+y_n)^{-1} \prod\limits_{k=1}^n (x_1+y_k) \\
\vdots & \ddots & \vdots \\
(x_n+y_1)^{-1} \prod\limits_{k=1}^n (x_n+y_k) & \cdots & (x_n+y_n)^{-1} \prod\limits_{k=1}^n (x_n+y_k) \\
\end{pmatrix} \space = \space
\begin{pmatrix}
(x_1+y_1)^{-1} & \cdots & (x_1+y_n)^{-1} \\
\vdots & \ddots & \vdots \\
(x_n+y_1)^{-1} & \cdots & (x_n+y_n)^{-1} \\
\end{pmatrix} \cdot \prod\limits_{l=1}^n \prod\limits_{k=1}^n (x_l+y_k)
$
and the formula follows on taking determinants of both sides.
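A sympy check of this Cauchy-type determinant for $n = 3$:

```python
from sympy import symbols, Matrix, prod, cancel

# Verify det[1/(x_i + y_j)] for n = 3 against the stated rational formula.
n = 3
x = symbols('x1:4')
y = symbols('y1:4')
M = Matrix(n, n, lambda i, j: 1 / (x[i] + y[j]))
vx = prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
vy = prod(y[j] - y[i] for i in range(n) for j in range(i + 1, n))
den = prod(x[i] + y[j] for i in range(n) for j in range(n))
assert cancel(M.det() - vx * vy / den) == 0
```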
Zeta-Sigma Determinant Formula
The polynomial analog (and limiting case) of the Weierstrass $\zeta$ and $\sigma$ function determinant formula is
\begin{equation*}
\begin{vmatrix}
0 & 1 & 1 & \cdots & 1 \\
1 & (x_1+y_1)^{-1} & (x_1+y_2)^{-1} & \cdots & (x_1+y_n)^{-1} \\
1 & (x_2+y_1)^{-1} & (x_2+y_2)^{-1} & \cdots & (x_2+y_n)^{-1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & (x_n+y_1)^{-1} & (x_n+y_2)^{-1} & \cdots & (x_n+y_n)^{-1} \\
\end{vmatrix} \space = \space
- \frac {\sum\limits_i (x_i + y_i) \cdot \prod\limits_{i\lt j} (x_j-x_i) \cdot \prod\limits_{i\lt j} (y_j-y_i)} {\prod\limits_{i,j} (x_i + y_j)}
\end{equation*}
This formula can be obtained by computing the power series expansion in $t$ of the double Vandermonde formula at order $n+1$ with $x_{n+1} = y_{n+1} = 1/t$.
Using
\begin{equation*}
\frac 1 {x + 1/t} \enspace = \enspace t \space - \space xt^2 + \space \bigO(t^3)
\end{equation*}
the left hand side at order $n+1$ is
\begin{equation*}
\begin{vmatrix}
(x_1+y_1)^{-1} & \cdots & (x_1+y_n)^{-1} & t + \bigO(t^2) \\
(x_2+y_1)^{-1} & \cdots & (x_2+y_n)^{-1} & t + \bigO(t^2) \\
\vdots & \ddots & \vdots & \vdots \\
t + \bigO(t^2) & \cdots & t + \bigO(t^2) & \tfrac 1 2 t \\
\end{vmatrix} \enspace = \enspace
\begin{vmatrix}
(x_1+y_1)^{-1} & \cdots & (x_1+y_n)^{-1} & 0 \\
(x_2+y_1)^{-1} & \cdots & (x_2+y_n)^{-1} & 0 \\
\vdots & \ddots & \vdots & \vdots \\
0 & \cdots & 0 & \tfrac 1 2 \\
\end{vmatrix} \thinspace t \enspace + \enspace
\begin{vmatrix}
(x_1+y_1)^{-1} & \cdots & (x_1+y_n)^{-1} & 1 \\
(x_2+y_1)^{-1} & \cdots & (x_2+y_n)^{-1} & 1 \\
\vdots & \ddots & \vdots & \vdots \\
1 & \cdots & 1 & 0 \\
\end{vmatrix} \thinspace t^2 \enspace + \enspace \bigO(t^3)
\end{equation*}
the right hand side at order $n+1$ is
\begin{equation*}
\frac t 2 \prod_{i=1}^n \frac {(1 - tx_i)(1 - ty_i)} {(1 + tx_i)(1 + ty_i)} \cdot \Delta \enspace = \enspace
\frac t 2 \left[1 \enspace - \enspace 2t\sum_{i=1}^n (x_i + y_i) \enspace + \enspace \bigO(t^2) \right] \cdot \Delta
\end{equation*}
where $\Delta$ is the right hand side of the double Vandermonde formula at order $n$.
The result follows by equating the coefficients of $t^2$.
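A sympy check of the bordered determinant for $n = 1$ and $n = 2$:

```python
from sympy import symbols, Matrix, prod, cancel

# Verify the zeta-sigma determinant formula for small n.
for n in (1, 2):
    x = symbols(f'x1:{n + 1}')
    y = symbols(f'y1:{n + 1}')
    M = Matrix(n + 1, n + 1, lambda i, j:
               0 if i == j == 0 else
               1 if 0 in (i, j) else
               1 / (x[i - 1] + y[j - 1]))
    vx = prod(x[j] - x[i] for i in range(n) for j in range(i + 1, n))
    vy = prod(y[j] - y[i] for i in range(n) for j in range(i + 1, n))
    den = prod(x[i] + y[j] for i in range(n) for j in range(n))
    rhs = -sum(x[i] + y[i] for i in range(n)) * vx * vy / den
    assert cancel(M.det() - rhs) == 0
```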
Formula for Determinant of Sub-Matrix of Minors
There is another simple depleted determinant formula that I have come across while investigating elliptic curves.
Let $M$ be a square matrix of order $n$ and let $\minors{M}$ denote the matrix of minors of $M$.
Let overbar denote a sub-matrix operation which selects $m$ rows and columns and underbar denote the complementary sub-matrix operation which selects the remaining $n - m$ rows and columns.
Then
\begin{equation*}
\det(\overline{\minors{M}}) \enspace = \enspace \det(M)^{m-1} \det(\underline{M})
\end{equation*}
I have verified this formula using a CAS for $n=3,4,5$; presumably a proof goes something like this 🤔
-
We know that if $\det(M) = 0$ then $\rank(\minors{M}) \le 1$.
-
Assume $\rank(\minors{M}) = 1$ then all rows of $\minors{M}$ are a multiple of each other and no row can be all zeroes.
-
The determinant of any square $2\times 2$ submatrix must be zero otherwise we would have two linearly independent rows.
-
Therefore the algebraic expression for the determinant of each $2\times 2$ submatrix is divisible by $\det(M)$.
-
We can write the determinant of a $3\times 3$ submatrix as a linear sum of $2\times 2$ determinants which we can therefore divide by $\det(M)$.
-
If the resulting expression is non-zero we can again construct two linearly independent rows.
Therefore the algebraic expression for the determinant of any $3\times 3$ submatrix is divisible by $\det(M)^2$.
-
And by induction the determinant of any square $m\times m$ submatrix is divisible by $\det(M)^{m-1}$.
-
If $\det(\overline{\minors{M}}) = 0$ and $\det(M) \ne 0$ then the determinant of the excluded rows and columns must be zero.
-
Thus the RHS is determined up to an unknown multiplicative constant.
By setting $M$ to a diagonal matrix that constant can be determined to be 1.
-
The case $m=1$ is trivial, i.e. the determinant of a $1 \times 1$ matrix whose sole entry is a minor of $M$ is equal to the determinant of the sub-matrix of $M$ corresponding to that minor.
-
The case $m=2$ is a formula for the cross product of minors, e.g. the $4 \times 4$ example below.
It implies that if the determinant of a matrix vanishes then all rows of minors are proportional to one another.
-
The case $m=n-1$ says that the $i,j$-th minor of the matrix of minors of $M$ is equal to the determinant of $M$ to the power $n-2$ times the $i,j$-th element of $M$.
-
The case $m=n$ is the well known formula for the determinant of the matrix of minors, (adopting the convention that the determinant of the empty matrix is 1).
For example for the matrix
\begin{equation*}
M \enspace = \enspace \begin{pmatrix}
a_1 & b_1 & c_1 & d_1 \\
a_2 & b_2 & c_2 & d_2 \\
a_3 & b_3 & c_3 & d_3 \\
a_4 & b_4 & c_4 & d_4 \\
\end{pmatrix}
\end{equation*}
and the sub-matrix operation which selects rows 1 and 2, and columns 1 and 2 of $\minors{M}$ we have
\begin{equation*}
\begin{vmatrix}
b_2 & c_2 & d_2 \\
b_3 & c_3 & d_3 \\
b_4 & c_4 & d_4 \\
\end{vmatrix} \space
\begin{vmatrix}
a_1 & c_1 & d_1 \\
a_3 & c_3 & d_3 \\
a_4 & c_4 & d_4 \\
\end{vmatrix} \enspace - \enspace
\begin{vmatrix}
b_1 & c_1 & d_1 \\
b_3 & c_3 & d_3 \\
b_4 & c_4 & d_4 \\
\end{vmatrix} \space
\begin{vmatrix}
a_2 & c_2 & d_2 \\
a_3 & c_3 & d_3 \\
a_4 & c_4 & d_4 \\
\end{vmatrix} \enspace = \enspace
\begin{vmatrix}
a_1 & b_1 & c_1 & d_1 \\
a_2 & b_2 & c_2 & d_2 \\
a_3 & b_3 & c_3 & d_3 \\
a_4 & b_4 & c_4 & d_4 \\
\end{vmatrix} \space
\begin{vmatrix}
c_3 & d_3 \\
c_4 & d_4 \\
\end{vmatrix}
\end{equation*}
This can also be written as a formula for the determinant of a sub-matrix of the inverse of $M$
\begin{equation*}
\det(\overline{M^{-1}}) \enspace = \enspace (-1)^{\delta} \det(M)^{-1} \det(\underline{M^T})
\end{equation*}
where $\delta$ is the sum of row and column indices of the overbar operation.
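A concrete sympy check with a hypothetical invertible integer matrix, selecting rows $\{1,2\}$ and columns $\{1,3\}$ so that $\delta$ is odd:

```python
from sympy import Matrix

# Verify det of a sub-matrix of M^-1 against the complementary sub-matrix
# of M^T, with sign (-1)^delta, delta the 1-based index sum.
M = Matrix([[2, 1, 0, 3], [1, 4, 1, 0], [0, 2, 5, 1], [3, 0, 1, 2]])
rows, cols = [0, 1], [0, 2]                  # 0-based versions of {1,2}, {1,3}
crows = [i for i in range(4) if i not in rows]
ccols = [j for j in range(4) if j not in cols]
delta = sum(r + 1 for r in rows) + sum(c + 1 for c in cols)
lhs = M.inv()[rows, cols].det()
rhs = (-1)**delta * M.T[crows, ccols].det() / M.det()
assert lhs == rhs
```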
Discriminant of Polynomial
Vandermonde determinants also give a formula for the discriminant of a polynomial in terms of the sums of powers of roots.
Let $P(x) = (x-e_1)(x-e_2)(x-e_3)(x-e_4)$ and $s_k = \sum e_i^k$. Then, using the product formula for determinants, we get
\begin{equation*}
\discrim(P) \space = \space \prod_{i \lt j} (e_i - e_j)^2 \space = \space
\begin{vmatrix}
1 & e_1 & e_1^2 & e_1^3 \\
1 & e_2 & e_2^2 & e_2^3 \\
1 & e_3 & e_3^2 & e_3^3 \\
1 & e_4 & e_4^2 & e_4^3 \\
\end{vmatrix}^2 \space = \space
\begin{vmatrix}
s_0 & s_1 & s_2 & s_3 \\
s_1 & s_2 & s_3 & s_4 \\
s_2 & s_3 & s_4 & s_5 \\
s_3 & s_4 & s_5 & s_6 \\
\end{vmatrix}
\end{equation*}
Generalised Laplace Determinant Expansion Formula
The Laplace expansion of a determinant by cofactors of the first column can be written like this (writing $.$ instead of zero for clarity)
\begin{aligned}
\begin{vmatrix}
a_1 & a_2 & a_3 & a_4 \\
b_1 & b_2 & b_3 & b_4 \\
c_1 & c_2 & c_3 & c_4 \\
d_1 & d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace &= \enspace
\begin{vmatrix}
a_1 & . & . & . \\
. & b_2 & b_3 & b_4 \\
. & c_2 & c_3 & c_4 \\
. & d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace + \enspace
\begin{vmatrix}
. & a_2 & a_3 & a_4 \\
b_1 & . & . & . \\
. & c_2 & c_3 & c_4 \\
. & d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace + \enspace
\begin{vmatrix}
. & a_2 & a_3 & a_4 \\
. & b_2 & b_3 & b_4 \\
c_1 & . & . & . \\
. & d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace + \enspace
\begin{vmatrix}
. & a_2 & a_3 & a_4 \\
. & b_2 & b_3 & b_4 \\
. & c_2 & c_3 & c_4 \\
d_1 & . & . & . \\
\end{vmatrix} \\\\
&= \enspace
a_1
\begin{vmatrix}
b_2 & b_3 & b_4 \\
c_2 & c_3 & c_4 \\
d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace - \enspace
b_1
\begin{vmatrix}
a_2 & a_3 & a_4 \\
c_2 & c_3 & c_4 \\
d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace + \enspace
c_1
\begin{vmatrix}
a_2 & a_3 & a_4 \\
b_2 & b_3 & b_4 \\
d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace - \enspace
d_1
\begin{vmatrix}
a_2 & a_3 & a_4 \\
b_2 & b_3 & b_4 \\
c_2 & c_3 & c_4 \\
\end{vmatrix} \\
\end{aligned}
In a similar way the determinant can be expanded by sub-determinants of the first two columns like this
\begin{aligned}
\begin{vmatrix}
a_1 & a_2 & a_3 & a_4 \\
b_1 & b_2 & b_3 & b_4 \\
c_1 & c_2 & c_3 & c_4 \\
d_1 & d_2 & d_3 & d_4 \\
\end{vmatrix} \enspace &= \enspace
\begin{vmatrix}
a_1 & a_2 & . & . \\
b_1 & b_2 & . & . \\
. & . & c_3 & c_4 \\
. & . & d_3 & d_4 \\
\end{vmatrix} \enspace + \enspace
\begin{vmatrix}
a_1 & a_2 & . & . \\
. & . & b_3 & b_4 \\
c_1 & c_2 & . & . \\
. & . & d_3 & d_4 \\
\end{vmatrix} \enspace + \enspace
\begin{vmatrix}
a_1 & a_2 & . & . \\
. & . & b_3 & b_4 \\
. & . & c_3 & c_4 \\
d_1 & d_2 & . & . \\
\end{vmatrix} \enspace + \enspace \textsf{3 more terms} \\\\
&= \enspace \begin{vmatrix}
a_1 & a_2 \\
b_1 & b_2 \\
\end{vmatrix}
\begin{vmatrix}
c_3 & c_4 \\
d_3 & d_4 \\
\end{vmatrix} \enspace - \enspace
\begin{vmatrix}
a_1 & a_2 \\
c_1 & c_2 \\
\end{vmatrix}
\begin{vmatrix}
b_3 & b_4 \\
d_3 & d_4 \\
\end{vmatrix} \enspace + \enspace
\begin{vmatrix}
a_1 & a_2 \\
d_1 & d_2 \\
\end{vmatrix}
\begin{vmatrix}
b_3 & b_4 \\
c_3 & c_4 \\
\end{vmatrix}
\enspace + \enspace \textsf{3 more terms} \\
\end{aligned}
More generally
-
Suppose we have an $n \times n$ matrix $M$.
-
Split the matrix vertically by taking the first $k$ columns.
-
Choose a $k \times k$ sub-matrix $L$ on the left, made up from the first $k$ columns and an arbitrary selection of $k$ rows.
-
Then there is a complementary $(n-k) \times (n-k)$ sub-matrix on the right, call it $L'$.
-
Count the number of row transpositions $n_L$ needed to move the rows of $L$ to the top and the rows of $L'$ to the bottom (keeping the rows otherwise in order).
Then the determinant of $M$ is given by summing over all ${n \choose k}$ possible sub-matrices $L$
\begin{equation*}
\det(M) \enspace = \enspace \sum_L (-1)^{n_L}\det(L)\det(L')
\end{equation*}
When $k = 1$ this is the Laplace formula for computing the determinant using cofactors of the first column.
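The rule above can be implemented and checked directly in sympy (the function name is my own):

```python
from itertools import combinations
from sympy import Matrix, symbols, expand

def laplace_expand(M, k):
    """Expand det(M) by k x k sub-determinants of the first k columns."""
    n = M.rows
    total = 0
    for sel in combinations(range(n), k):
        comp = [r for r in range(n) if r not in sel]
        # parity of moving the selected rows to the top, preserving order
        sign = (-1)**(sum(sel) - k * (k - 1) // 2)
        total += (sign * M[list(sel), list(range(k))].det()
                       * M[comp, list(range(k, n))].det())
    return total

a = symbols('a0:16')
M = Matrix(4, 4, lambda i, j: a[4 * i + j])
for k in (1, 2, 3):
    assert expand(laplace_expand(M, k) - M.det()) == 0
```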
Standard Resultant Formula
Given two algebraic curves $F(x,y) = 0$ and $G(x,y) = 0$ their
resultant with respect to $y$ is a polynomial $\mathfrak{R}(x)$ that can be written
\begin{equation*}
\mathfrak{R}(x) \space = \space \resultant_y(F,G) \space = \space
\operatorname{lead}_y(F)^n \operatorname{lead}_y(G)^m \prod_{i=1}^m \prod_{j=1}^n (\psi_i(x) - \phi_j(x)) \space = \space
\operatorname{lead}_y(F)^n \prod_{i=1}^m G(x,\psi_i(x)) \space = \space
(-1)^{mn} \operatorname{lead}_y(G)^m \prod_{i=1}^n F(x,\phi_i(x))
\end{equation*}
where $m=\deg_y(F)$, $n=\deg_y(G)$, the $\psi_i(x)$ are the $m$ roots in $y$ of $F(x,y) = 0$, and the $\phi_j(x)$ are the $n$ roots in $y$ of $G(x,y) = 0$.
When $n = 1$ we have the simpler formula
\begin{equation*}
\mathfrak{R}(x) \space = \space \resultant_y(F,G) \space = \space
\operatorname{lead}_y(F) \prod_{i=1}^m G(x,\psi_i(x)) \space = \space
(-1)^m \operatorname{lead}_y(G)^m F(x,\phi(x))
\end{equation*}
where $G(x,\phi(x)) = 0$.
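A small sympy check of these product formulae, using hypothetical example curves $F = y^2 - x^2$ (so $m = 2$, roots $\psi = \pm x$) and $G = 2y - x^3$ (so $n = 1$, root $\phi = x^3/2$):

```python
from sympy import symbols, resultant, expand, prod

x, y = symbols('x y')
F = y**2 - x**2        # lead_y(F) = 1, m = 2, roots psi = x, -x
G = 2*y - x**3         # lead_y(G) = 2, n = 1, root phi = x**3 / 2
R = resultant(F, G, y)
# R = lead_y(F)^n * prod_i G(x, psi_i)
assert expand(R - prod(G.subs(y, r) for r in (x, -x))) == 0
# R = (-1)^(m*n) * lead_y(G)^m * F(x, phi)
assert expand(R - (-1)**2 * 2**2 * F.subs(y, x**3 / 2)) == 0
```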
In homogeneous coordinates
\begin{equation*}
F\left(
x_3
\begin{vmatrix}
x_1 & z_1 \\
x_2 & z_2 \\
\end{vmatrix}, \space
x_3
\begin{vmatrix}
y_1 & z_1 \\
y_2 & z_2 \\
\end{vmatrix} \space + \space
z_3
\begin{vmatrix}
x_1 & y_1 \\
x_2 & y_2 \\
\end{vmatrix}, \space
z_3
\begin{vmatrix}
x_1 & z_1 \\
x_2 & z_2 \\
\end{vmatrix}
\right) \enspace = \enspace z_1^2 z_2^2
\begin{vmatrix}
x_1 & z_1 \\
x_3 & z_3 \\
\end{vmatrix}
\begin{vmatrix}
x_2 & z_2 \\
x_3 & z_3 \\
\end{vmatrix}
\begin{vmatrix}
X & Z \\
x_3 & z_3 \\
\end{vmatrix}
\enspace - \enspace
\begin{vmatrix}
x_2 & z_2 \\
x_3 & z_3 \\
\end{vmatrix}^3 F(x_1,y_1,z_1) \enspace + \enspace
\begin{vmatrix}
x_1 & z_1 \\
x_3 & z_3 \\
\end{vmatrix}^3 F(x_2,y_2,z_2)
\end{equation*}
References