
Quartic Elliptic Tour

by

Gregg Kelly

In this section we gather together quartic elliptic formulae from previous pages to show that it is not necessary to reduce to special forms to get elegant and reasonably concise formulae.

$\approx \textsf{Euler}^* \approx$ If the roots of two quartic polynomials can be mapped to one another by a Möbius transform $L$ then the change of variables $x = L(u)$ gives \begin{equation} \sqrt[12] {\discrim(a,b,c,d,e)} \space \bigint \frac 1 {\sqrt{ax^4 + bx^3 + cx^2 + dx + e}} dx \enspace = \enspace \sqrt[12] {\discrim(p,q,r,s,t)} \space \bigint \frac 1 {\sqrt{pu^4 + qu^3 + ru^2 + su + t}} du \end{equation} where $\discrim$ denotes the discriminant of the fourth degree polynomials.
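The reason the twelfth root is the right normalisation is that the discriminant of a binary quartic is an invariant of weight 12, so under $x = L(u)$ it picks up exactly the twelfth power of the determinant of $L$, which cancels the Jacobian acquired by $dx/\sqrt{R(x)}$. A minimal sympy sketch of this covariance, with an arbitrarily chosen example quartic and Möbius transform (the particular coefficients are assumptions, purely for illustration):

import sympy as sp

x, u = sp.symbols('x u')
a, b, c, d, e = 2, -3, 5, 1, 7            # example quartic coefficients (arbitrary)
al, be, ga, de = 3, 1, 2, 5               # example Mobius transform L(u) = (3u + 1)/(2u + 5)

R = a*x**4 + b*x**3 + c*x**2 + d*x + e
# quartic whose roots are the L-preimages of the roots of R
S = sp.cancel((ga*u + de)**4 * R.subs(x, (al*u + be)/(ga*u + de)))

# the discriminant has weight 12, so it absorbs the Jacobian of x = L(u):
# disc(S) = (al*de - be*ga)^12 * disc(R)
print(sp.simplify(sp.discriminant(S, u) - (al*de - be*ga)**12 * sp.discriminant(R, x)))  # 0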


$\approx \textsf{Weierstrass}^* \approx$ Let $f$ be a solution of the differential equation \begin{equation} f'(z)^2 \enspace = \enspace a f(z)^4 + b f(z)^3 + c f(z)^2 + d f(z) + e \end{equation} with the boundary condition $f'(0) = 0$. In general there are four solutions corresponding to the four roots of the polynomial. Each solution $f$ is an even function and its cross-ratio satisfies the formula \begin{equation} \frac {\big[f(z_1)-f(z_2)\big]\thinspace\big[f(z_3)-f(z_4)\big]} {\big[f(z_1)-f(z_3)\big]\thinspace\big[f(z_2)-f(z_4)\big]} \enspace = \enspace \frac {\sigma(z_1-z_2)\sigma(z_1+z_2)\sigma(z_3-z_4)\sigma(z_3+z_4)} {\sigma(z_1-z_3)\sigma(z_1+z_3)\sigma(z_2-z_4)\sigma(z_2+z_4)} \end{equation} where $\sigma$ is the Weierstrass sigma function with the same periods as $f$.


$\approx \textsf{Frobenius \& Stickelberger}^* \approx$ In a similar vein, if $\pm\rho$ are the poles of $f$, we have \begin{equation} \{f\} \enspace = \enspace \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & f(z_4) & f(z_4)^2 & f'(z_4) \\ \end{vmatrix} \enspace = \enspace \frac {2 \sigma^4(2\rho) \space \sigma(z_1 + z_2 + z_3 + z_4)\space \prod\limits_{i \lt j}\sigma(z_i-z_j)} {a^2 \prod\limits_{i=1}^4 \sigma^2(z_i-\rho)\sigma^2(z_i+\rho)} \end{equation} This equation is the first minor miracle in the computation of the addition formula, because we are able to deduce the $\sigma(z_1 + z_2 + z_3 + z_4)$ term from the fact that the sum of the zeroes of an elliptic function is equal to the sum of the poles.

When $z_1 + z_2 + z_3 + z_4 = 0$ we have \begin{equation} \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & f(z_4) & f(z_4)^2 & f'(z_4) \\ \end{vmatrix} \enspace = \enspace 0 \end{equation} This is the symmetric addition formula for $f$.
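For a quick numerical illustration one can take $f = \operatorname{cn}$, which is an even solution of $f'^2 = -k^2 f^4 + (2k^2 - 1) f^2 + (1 - k^2)$ with $f'(0) = 0$; the modulus and sample points below are arbitrary choices. A short mpmath sketch:

from mpmath import mp, mpf, mpc, ellipfun, matrix, det

mp.dps = 30
m  = mpf('0.3')                              # m = k^2, an arbitrary example value
sn = lambda z: ellipfun('sn', z, m)
cn = lambda z: ellipfun('cn', z, m)
dn = lambda z: ellipfun('dn', z, m)
# f = cn is an even solution of f'^2 = -m f^4 + (2m - 1) f^2 + (1 - m), with f'(0) = 0
f  = cn
df = lambda z: -sn(z) * dn(z)

z = [mpc('0.31', '0.12'), mpc('-0.22', '0.41'), mpc('0.17', '-0.28')]
z.append(-(z[0] + z[1] + z[2]))              # impose z1 + z2 + z3 + z4 = 0
print(det(matrix([[1, f(t), f(t)**2, df(t)] for t in z])))   # ~ 0 to working precision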


Let \begin{equation} R(x) \enspace = \enspace ax^4 + bx^3 + cx^2 + dx + e \end{equation} The differential equation for $f$ implies that $f^{-1}(x)$ is an anti-derivative of $\large{\frac 1 {\sqrt{R(x)}}}$. Therefore another way to express the symmetric addition formula for $f$ is to say that if \begin{equation} \bigint_{x_3}^{x_1} \frac {dx} {\sqrt{R(x)}} \enspace + \enspace \bigint_{x_3}^{x_2} \frac {dx} {\sqrt{R(x)}} \enspace = \enspace \bigint_{x_3}^{x_4} \frac {dx} {\sqrt{R(x)}} \end{equation} then $x_1,x_2,x_3,x_4$ satisfy the algebraic relation

\begin{equation} \begin{vmatrix} 1 & x_1 & x_1^2 & \hphantom{+}\sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \hphantom{+}\sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\ 1 & x_4 & x_4^2 & -\sqrt{R(x_4)} \\ \end{vmatrix} \space = \space 0 \end{equation}


$\approx \textsf{Abel}^* \approx$ We can also express this in terms of the quartic curve \begin{equation} y^2 \enspace = \enspace ax^4 + bx^3 + cx^2 + dx + e \end{equation} If we run a parabola through three points $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ on this curve, it will intersect the curve at a fourth point $(x_4, y_4)$ with coordinates given implicitly by \begin{equation} \begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 1 & x_4 & x_4^2 & y_4 \\ \end{vmatrix} \enspace = \enspace 0 \qquad\qquad \text{\&} \qquad\qquad y_4^2 \enspace = \enspace R(x_4) \end{equation}

These two equations can be solved for $x_4$ by eliminating $y_4$, resulting in a quartic equation for $x_4$, and this is where the second minor miracle occurs. Because the points $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ lie on both the quartic and parabolic curves, we already know three roots of this equation, namely $x_1,x_2,x_3$. When we factor out these roots we are left with a linear equation for $x_4$ which is easily solved. If we use resultants, in their product form, to do the elimination we get \begin{equation} x_4 \enspace = \enspace \frac 1 {x_1 x_2 x_3} \frac {\begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 1 & 0 & 0 & \sqrt{e} \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 1 & 0 & 0 & -\sqrt{e} \\ \end{vmatrix}} {\begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}}, \qquad\qquad y_4 \enspace = \enspace \frac {a^2} {y_1 y_2 y_3} \frac {\begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 1 & e_1 & e_1^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 1 & e_2 & e_2^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 1 & e_3 & e_3^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 1 & e_4 & e_4^2 & 0 \\ \end{vmatrix}} {\begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix}^2 \thinspace \begin{vmatrix} 1 & x_1 & x_1^2 & y_1 \\ 1 & x_2 & x_2^2 & y_2 \\ 1 & x_3 & x_3^2 & y_3 \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}^2} \end{equation} where $e_1,e_2,e_3,e_4$ are the roots of the polynomial $R(x)$.

Due to the symmetries, both $x_4$ and $y_4$ are rational functions of $x_1,x_2,x_3,y_1,y_2,y_3$ and the coefficients $a,b,c,d,e$ of the quartic curve. This means that if the curve has rational coefficients and the first three points have rational coordinates, then the fourth point also has rational coordinates.
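A small numerical sketch of the formula for $x_4$, with an arbitrary example quartic and three sample abscissae (all values are assumptions chosen for illustration); the independent check fits the parabola through the three points and lists the four abscissae where it meets the curve, the new one of which should agree with the determinant expression:

import numpy as np

# example quartic and three sample points (all values arbitrary)
a, b, c, d, e = 2.0, -1.0, 3.0, 0.5, 4.0
coeffs = [a, b, c, d, e]
R = lambda t: np.polyval(coeffs, t)

xs = np.array([0.3, -0.7, 1.2])
ys = np.sqrt(R(xs))                          # take the + branch of y at each point

def D(last_row):
    rows = [[1.0, xi, xi**2, yi] for xi, yi in zip(xs, ys)]
    return np.linalg.det(np.array(rows + [last_row], dtype=float))

x4 = (D([1, 0, 0, np.sqrt(e)]) * D([1, 0, 0, -np.sqrt(e)])
      / (D([0, 0, 1, np.sqrt(a)]) * D([0, 0, 1, -np.sqrt(a)]) * np.prod(xs)))

# independent check: the parabola through the three points meets y^2 = R(x) at
# x1, x2, x3 and one further abscissa, which should agree with x4 above
P = np.polyfit(xs, ys, 2)
print(np.roots(np.polysub(np.polymul(P, P), coeffs)))
print(x4)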


And from this it follows that \begin{equation} \bigint_{x_3}^{x_1} \frac {dx} {\sqrt{R(x)}} \enspace + \enspace \bigint_{x_3}^{x_2} \frac {dx} {\sqrt{R(x)}} \enspace = \enspace \bigint_{x_3}^{x_4} \frac {dx} {\sqrt{R(x)}} \qquad\qquad\implies\qquad\qquad x_4 \enspace = \enspace \frac 1 {x_1 x_2 x_3} \frac {\begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\ 1 & 0 & 0 & \sqrt{e} \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\ 1 & 0 & 0 & -\sqrt{e} \\ \end{vmatrix}} {\begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}} \end{equation} Putting $R(x)=(1-x^2)(1-k^2x^2)$ and $x_1 = u$, $x_2 = v$, $x_3 = 0$ and $x_4 = w$ gives Euler's classic addition formula for the elliptic integral \begin{equation} \bigint_0^u \frac {dx} {\sqrt{R(x)}} \enspace + \enspace \bigint_0^v \frac {dx} {\sqrt{R(x)}} \enspace = \enspace \bigint_0^w \frac {dx} {\sqrt{R(x)}} \qquad\qquad \implies \qquad\qquad w \enspace = \enspace \frac {u\sqrt{R(v)} \space + \space v\sqrt{R(u)}} {1 \space - \space k^2u^2v^2} \qquad\qquad\qquad\qquad \end{equation}
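Since $\int_0^x dt/\sqrt{(1-t^2)(1-k^2t^2)} = F(\arcsin x \mid k^2)$, Euler's formula is easy to test with mpmath's incomplete elliptic integral; the modulus and arguments below are arbitrary example values:

from mpmath import mp, mpf, sqrt, asin, ellipf

mp.dps = 30
m = mpf('0.4')                               # m = k^2, an arbitrary example modulus
u, v = mpf('0.2'), mpf('0.35')
R = lambda t: (1 - t**2) * (1 - m*t**2)
w = (u*sqrt(R(v)) + v*sqrt(R(u))) / (1 - m*u**2*v**2)

# int_0^x dt/sqrt((1 - t^2)(1 - k^2 t^2)) = F(asin x | m), so the claim is
# F(asin u | m) + F(asin v | m) = F(asin w | m)
print(ellipf(asin(u), m) + ellipf(asin(v), m) - ellipf(asin(w), m))   # ~ 0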


Because $f$ is even it also follows that \begin{equation} f(z_1 + z_2 + z_3) \enspace = \enspace \frac 1 {f(z_1) f(z_2) f(z_3)} \frac {\begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & 0 & 0 & \sqrt{e} \\ \end{vmatrix} \space \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & 0 & 0 & -\sqrt{e} \\ \end{vmatrix}} {\begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix} \space \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}} \end{equation} and because $f'$ is odd \begin{equation} f'(z_1 + z_2 + z_3) \enspace = \enspace -\frac {a^2} {f'(z_1) f'(z_2) f'(z_3)} \frac {\begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & e_1 & e_1^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & e_2 & e_2^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & e_3 & e_3^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 1 & e_4 & e_4^2 & 0 \\ \end{vmatrix}} {\begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix}^2 \thinspace \begin{vmatrix} 1 & f(z_1) & f(z_1)^2 & f'(z_1) \\ 1 & f(z_2) & f(z_2)^2 & f'(z_2) \\ 1 & f(z_3) & f(z_3)^2 & f'(z_3) \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}^2} \end{equation}


These formulae also give us a two-variable addition formula for any solution $g$ of the differential equation $g'^2 = ag^4 + bg^3 + cg^2 + dg + e$, namely \begin{equation} g(u + v) \enspace = \enspace \frac 1 {g(u) g(v) g(0)} \frac {\begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 1 & 0 & 0 & \sqrt{e} \\ \end{vmatrix} \space \begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 1 & 0 & 0 & -\sqrt{e} \\ \end{vmatrix}} {\begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix} \space \begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}} \end{equation} and \begin{equation} g'(u + v) \enspace = \enspace \frac {a^2} {g'(u) g'(v) g'(0)} \frac {\begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 1 & e_1 & e_1^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 1 & e_2 & e_2^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 1 & e_3 & e_3^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 1 & e_4 & e_4^2 & 0 \\ \end{vmatrix}} {\begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix}^2 \space \begin{vmatrix} 1 & g(u) & g(u)^2 & g'(u) \\ 1 & g(v) & g(v)^2 & g'(v) \\ 1 & g(0) & g(0)^2 & -g'(0) \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}^2} \end{equation}
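As a concrete sketch one can take $g(z) = \operatorname{sn}(z + z_0, k)$, which solves $g'^2 = k^2 g^4 - (1 + k^2) g^2 + 1$; the shift $z_0$ is an arbitrary choice that keeps $g(0)$ and $g'(0)$ away from zero. The first of the two formulae then checks out numerically:

from mpmath import mp, mpf, sqrt, ellipfun, matrix, det

mp.dps = 30
m  = mpf('0.6')                              # m = k^2, an arbitrary example modulus
sn = lambda z: ellipfun('sn', z, m)
cn = lambda z: ellipfun('cn', z, m)
dn = lambda z: ellipfun('dn', z, m)

z0 = mpf('0.4')                              # arbitrary shift keeping g(0), g'(0) nonzero
g  = lambda z: sn(z + z0)                    # solves g'^2 = m g^4 - (1 + m) g^2 + 1
dg = lambda z: cn(z + z0) * dn(z + z0)
A, E = m, mpf(1)                             # leading and constant coefficients a, e

u, v = mpf('0.23'), mpf('0.31')
base = [[1, g(u), g(u)**2, dg(u)],
        [1, g(v), g(v)**2, dg(v)],
        [1, g(0), g(0)**2, -dg(0)]]
D = lambda row: det(matrix(base + [row]))

rhs = (D([1, 0, 0, sqrt(E)]) * D([1, 0, 0, -sqrt(E)])
       / (D([0, 0, 1, sqrt(A)]) * D([0, 0, 1, -sqrt(A)]) * g(u) * g(v) * g(0)))
print(g(u + v) - rhs)                        # ~ 0 to working precision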


In the diagram parametrise the quartic curve by $f$ so that the $x,y$-coordinates of the points are given by $\left(x_i,y_i\right) = \left(f(z_i),f'(z_i)\right)$. Then in terms of $z$-coordinates, if we vary the position of the four points on the quartic, while constraining them so that they also lie on a parabola, then the equation $z_1 + z_2 + z_3 + z_4 = 0$ becomes a differential equation $dz_1 + dz_2 + dz_3 + dz_4 = 0$. In terms of $x,y$ coordinates this translates into the differential equation \begin{equation} \frac {dx_1}{y_1} \enspace + \enspace \frac {dx_2}{y_2} \enspace + \enspace \frac {dx_3}{y_3} \enspace + \enspace \frac {dx_4}{y_4} \enspace = \enspace 0 \end{equation}

We can also hold the fourth point on the curve fixed and vary the other three to obtain the differential equation \begin{equation} \frac {dx_1} {\sqrt{R(x_1)}} \enspace + \enspace \frac {dx_2} {\sqrt{R(x_2)}} \enspace + \enspace \frac {dx_3} {\sqrt{R(x_3)}} \enspace = \enspace 0 \end{equation} This implies the formula for the $x$-coordinate of the fourth point on the curve is an explicit algebraic integral of this differential equation namely \begin{equation} X(x_1,x_2,x_3) \space = \space \textit{const.} \end{equation} where $X$ is the algebraic function \begin{equation} X(x_1,x_2,x_3) \enspace = \enspace \frac 1 {x_1 x_2 x_3} \frac {\begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 1 & 0 & 0 & \sqrt{e} \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 1 & 0 & 0 & -\sqrt{e} \\ \end{vmatrix}} {\begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}} \end{equation} It also implies the formula for the $y$-coordinate of the fourth point is another algebraic integral of this differential equation namely \begin{equation} Y(x_1,x_2,x_3) \space = \space \textit{const.} \end{equation} where \begin{equation} Y(x_1,x_2,x_3) \enspace = \enspace \frac {a^2} {\sqrt{R(x_1)R(x_2)R(x_3)}} \frac {\begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 1 & e_1 & e_1^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 1 & e_2 & e_2^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 1 & e_3 & e_3^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 1 & e_4 & e_4^2 & 0 \\ \end{vmatrix}} {\begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 0 & 0 & 1 & \sqrt{a} \\ \end{vmatrix}^2 \thinspace \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\ 0 & 0 & 1 & -\sqrt{a} \\ \end{vmatrix}^2} \end{equation} The apparent contradiction of having two distinct algebraic integrals is explained by the fact that they satisfy an algebraic relation \begin{equation} Y^2 \enspace = \enspace R(X) \end{equation}
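A short numerical check of the relation $Y^2 = R(X)$, using the same style of determinant evaluation on an arbitrary example quartic and sample point (all numbers below are illustrative assumptions):

import numpy as np

# example quartic and sample point with R > 0 (all values arbitrary)
a, b, c, d, e = 2.0, -1.0, 3.0, 0.5, 4.0
coeffs = [a, b, c, d, e]
R  = lambda t: np.polyval(coeffs, t)
ej = np.roots(coeffs)                        # e_1, ..., e_4 (possibly complex)

xs = np.array([0.3, -0.7, 1.2])
ys = np.sqrt(R(xs))

def D(last_row):
    rows = [[1, xi, xi**2, yi] for xi, yi in zip(xs, ys)]
    return np.linalg.det(np.array(rows + [last_row], dtype=complex))

X = (D([1, 0, 0, np.sqrt(e)]) * D([1, 0, 0, -np.sqrt(e)])
     / (D([0, 0, 1, np.sqrt(a)]) * D([0, 0, 1, -np.sqrt(a)]) * np.prod(xs)))
Y = (a**2 * np.prod([D([1, t, t**2, 0]) for t in ej])
     / (np.prod(ys) * D([0, 0, 1, np.sqrt(a)])**2 * D([0, 0, 1, -np.sqrt(a)])**2))
print(Y**2 - R(X))                           # ~ 0, i.e. Y^2 = R(X)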


To motivate the next set of equations, observe that by multiplying the differential equation \begin{equation} \frac {dx_1} {x_1} \enspace + \enspace \frac {dx_2} {x_2} \enspace + \enspace \frac {dx_3} {x_3} \enspace = \enspace 0 \end{equation} by the algebraic integrating factor $x_1 x_2 x_3$ we can integrate it as follows \begin{equation} x_2 x_3 \thinspace dx_1 \enspace + \enspace x_1 x_3 \thinspace dx_2 \enspace + \enspace x_1 x_2 \thinspace dx_3 \enspace = \enspace 0 \qquad\qquad \implies \qquad\qquad x_1 x_2 x_3 \enspace = \enspace \textit{const.} \end{equation} That is, we can integrate using an algebraic integrating factor rather than the transcendental $\log$ function. We now seek to do the same thing for \begin{equation} \frac {dx_1} {\sqrt{R(x_1)}} \enspace + \enspace \frac {dx_2} {\sqrt{R(x_2)}} \enspace + \enspace \frac {dx_3} {\sqrt{R(x_3)}} \enspace = \enspace 0 \end{equation}

We can express the addition formulae for $f$ and $f'$ in terms of $X$ and $Y$ \begin{equation} f(z_1 + z_2 + z_3) \enspace = \enspace X(f(z_1), f(z_2), f(z_3)) \qquad\qquad\qquad f'(z_1 + z_2 + z_3) \enspace = \enspace -Y(f(z_1), f(z_2), f(z_3)) \end{equation} and by differentiating these formulae with respect to $z_1,z_2,z_3$ and converting back to $x$-coordinates obtain the identities \begin{equation} -Y \enspace = \enspace \sqrtsm{R(x_1)} \frac {\partial X} {\partial x_1} \enspace = \enspace \sqrtsm{R(x_2)} \frac {\partial X} {\partial x_2} \enspace = \enspace \sqrtsm{R(x_3)} \frac {\partial X} {\partial x_3} \qquad\qquad\qquad -\tfrac 1 2 R'(X) \enspace = \enspace \sqrtsm{R(x_1)} \frac {\partial Y} {\partial x_1} \enspace = \enspace \sqrtsm{R(x_2)} \frac {\partial Y} {\partial x_2} \enspace = \enspace \sqrtsm{R(x_3)} \frac {\partial Y} {\partial x_3} \end{equation} Then we may observe that the expression $-Y(x_1,x_2,x_3)$ is an algebraic integrating factor for the differential equation \begin{equation} \frac {dx_1} {\sqrt{R(x_1)}} \enspace + \enspace \frac {dx_2} {\sqrt{R(x_2)}} \enspace + \enspace \frac {dx_3} {\sqrt{R(x_3)}} \enspace = \enspace 0 \end{equation} because after multiplication by $-Y$, utilising the above identities, it becomes \begin{equation} \frac {\partial X}{\partial x_1} {dx_1} \enspace + \enspace \frac {\partial X}{\partial x_2} {dx_2} \enspace + \enspace \frac {\partial X}{\partial x_3} {dx_3} \enspace = \enspace 0 \end{equation} which integrates, as expected, to \begin{equation} X(x_1,x_2,x_3) \space = \space \textit{const.} \end{equation} Similarly the expression $-\tfrac 1 2 R'(X)$ is another algebraic integrating factor, multiplying by it and using the above identities gives \begin{equation} \frac {\partial Y}{\partial x_1} {dx_1} \enspace + \enspace \frac {\partial Y}{\partial x_2} {dx_2} \enspace + \enspace \frac {\partial Y}{\partial x_3} {dx_3} \enspace = \enspace 0 \end{equation} which integrates, as expected, to \begin{equation} Y(x_1,x_2,x_3) \space = \space \textit{const.} \end{equation} More generally if $P(X,Y)$ is any non-trivial differentiable function, then $\displaystyle - \left[Y \frac {\partial P} {\partial X} + \tfrac 1 2 R'(X) \frac {\partial P} {\partial Y} \right]$ is an algebraic integrating factor, and multiplying by it gives \begin{equation} \left[\frac {\partial P}{\partial X}\frac {\partial X}{\partial x_1} + \frac {\partial P}{\partial Y}\frac {\partial Y}{\partial x_1}\right] {dx_1} \enspace + \enspace \left[\frac {\partial P}{\partial X}\frac {\partial X}{\partial x_2} + \frac {\partial P}{\partial Y}\frac {\partial Y}{\partial x_2}\right] {dx_2} \enspace + \enspace \left[\frac {\partial P}{\partial X}\frac {\partial X}{\partial x_3} + \frac {\partial P}{\partial Y}\frac {\partial Y}{\partial x_3}\right] {dx_3} \enspace = \enspace 0 \end{equation} which integrates to \begin{equation} P\big(X(x_1,x_2,x_3), Y(x_1,x_2,x_3)\big) \space = \space \textit{const.} \end{equation}
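The first of these identities, $-Y = \sqrt{R(x_1)}\,\partial X/\partial x_1$, can be checked by a finite-difference approximation to the partial derivative, again on an arbitrary example quartic (a rough numerical sketch; the residual is limited by the difference step rather than machine precision):

import numpy as np

# same example quartic as above; X and Y are the determinant expressions of the text
a, b, c, d, e = 2.0, -1.0, 3.0, 0.5, 4.0
coeffs = [a, b, c, d, e]
R  = lambda t: np.polyval(coeffs, t)
ej = np.roots(coeffs)

def D(xs, last_row):
    rows = [[1, xi, xi**2, np.sqrt(R(xi))] for xi in xs]
    return np.linalg.det(np.array(rows + [last_row], dtype=complex))

def X(xs):
    return (D(xs, [1, 0, 0, np.sqrt(e)]) * D(xs, [1, 0, 0, -np.sqrt(e)])
            / (D(xs, [0, 0, 1, np.sqrt(a)]) * D(xs, [0, 0, 1, -np.sqrt(a)]) * np.prod(xs)))

def Y(xs):
    num = a**2 * np.prod([D(xs, [1, t, t**2, 0]) for t in ej])
    den = np.prod(np.sqrt(R(xs))) * D(xs, [0, 0, 1, np.sqrt(a)])**2 * D(xs, [0, 0, 1, -np.sqrt(a)])**2
    return num / den

xs, h = np.array([0.3, -0.7, 1.2]), 1e-6
dX_dx1 = (X(xs + [h, 0, 0]) - X(xs - [h, 0, 0])) / (2*h)    # numerical partial derivative
print(np.sqrt(R(xs[0])) * dX_dx1 + Y(xs))                   # ~ 0 up to finite-difference error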

Taking a slightly different approach and writing the differential equation like this \begin{equation} \frac {dx_1} {\sqrt{(x_1-e_1)(x_1-e_2)(x_1-e_3)(x_1-e_4)}} \enspace + \enspace \frac {dx_2} {\sqrt{(x_2-e_1)(x_2-e_2)(x_2-e_3)(x_2-e_4)}} \enspace + \enspace \frac {dx_3} {\sqrt{(x_3-e_1)(x_3-e_2)(x_3-e_3)(x_3-e_4)}} \enspace = \enspace 0 \end{equation} several simple determinant style implicit algebraic integrals can be found, with $x_4$ the constant of integration:

Table 1
Implicit Algebraic Integral Basis And Polar Divisor Lattice
$I_1$ \begin{equation*} \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\ 1 & x_2 & x_2^2 & \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\ 1 & x_3 & x_3^2 & \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\ 1 & x_4 & x_4^2 & \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\ \end{vmatrix} \enspace = \enspace 0 \end{equation*} $\displaystyle 1, \enspace f, \enspace f^2, \enspace \sqrt{(f-e_1)(f-e_2)(f-e_3)(f-e_4)}$

Polar divisor: $2(\rho) + 2(-\rho)$
Lattice: $\left[2\omega_1,2\omega_2\right]$
$I_2$ \begin{equation*} \begin{vmatrix} 1 & x_1 & \sqrt{(x_1-e_1)(x_1-e_4)} & \sqrt{(x_1 - e_2)(x_1 - e_3)} \\ 1 & x_2 & \sqrt{(x_2-e_1)(x_2-e_4)} & \sqrt{(x_2 - e_2)(x_2 - e_3)} \\ 1 & x_3 & \sqrt{(x_3-e_1)(x_3-e_4)} & \sqrt{(x_3 - e_2)(x_3 - e_3)} \\ 1 & x_4 & \sqrt{(x_4-e_1)(x_4-e_4)} & \sqrt{(x_4 - e_2)(x_4 - e_3)} \\ \end{vmatrix} \enspace = \enspace 0 \end{equation*} $\displaystyle 1, \enspace f, \enspace \sqrt{(f-e_1)(f-e_4)}, \enspace \sqrt{(f-e_2)(f-e_3)}$

Polar divisor: $(\rho) + (-\rho) + (\rho+2\omega_1) + (-\rho-2\omega_1)$
Lattice: $\left[4\omega_1,2\omega_2\right]$
$I_3$ \begin{equation*} \begin{vmatrix} \sqrt{x_1 - e_1} & \sqrt{x_1 - e_2} & \sqrt{x_1 - e_3} & \sqrt{x_1 - e_4} \\ \sqrt{x_2 - e_1} & \sqrt{x_2 - e_2} & \sqrt{x_2 - e_3} & \sqrt{x_2 - e_4} \\ \sqrt{x_3 - e_1} & \sqrt{x_3 - e_2} & \sqrt{x_3 - e_3} & \sqrt{x_3 - e_4} \\ \sqrt{x_4 - e_1} & \sqrt{x_4 - e_2} & \sqrt{x_4 - e_3} & \sqrt{x_4 - e_4} \\ \end{vmatrix} \enspace = \enspace 0 \end{equation*} $\displaystyle 1, \enspace \sqrt{\frac {f-e_1} {f-e_4}}, \enspace \sqrt{\frac {f-e_2} {f-e_4}}, \enspace \sqrt{\frac {f-e_3} {f-e_4}}$

Polar divisor: $(0) + (2\omega_1) + (2\omega_2) + (2\omega_3)$
Lattice: $\left[4\omega_1,4\omega_2\right]$

Here the half-periods $\omega_1, \omega_2, \omega_3$ are defined by taking $f(\omega_1) = e_1, \space f(\omega_2) = e_2, \space f(\omega_3) = e_3, \space f(0) = e_4$. By convention $\omega_1 + \omega_2 + \omega_3 = 0$, and the polar divisors all sum to zero.


Note that these formulae are just the tip of the iceberg. Any even order-2 elliptic function $g$ whose period lattice intersects that of $f$ non-trivially is related to $f$ by an algebraic relation of genus zero. (Because both functions are rational functions of the $\wp$ function on the lattice of intersection.) If the algebraic relation is solved for $g$, then we get $g = A(f)$ where $A$ is some algebraic function, and another implicit integral of the differential equation is given by \begin{equation} \begin{vmatrix} 1 & A(x_1) & A(x_1)^2 & A'(x_1) \\ 1 & A(x_2) & A(x_2)^2 & A'(x_2) \\ 1 & A(x_3) & A(x_3)^2 & A'(x_3) \\ 1 & A(x_4) & A(x_4)^2 & A'(x_4) \\ \end{vmatrix} \enspace = \enspace 0 \end{equation} Further, the determinants $I_1 \ldots I_5$ are (algebraic) symmetric root differences in the sense of classical invariant theory. For example the 2 + 2 generalised Laplace expansion of $I_2$ gives 12 terms like \begin{equation} (x_2 - x_1)\sqrt{(x_3-e_1)(x_3 - e_4)(x_4 - e_2)(x_4 - e_3)} \end{equation} which is a root difference of weight 1 in the $x_i$ and weight $\tfrac 1 2$ in the $e_j$. Therefore the product of $I_3$ over all $2^6$ sign combinations of the last 3 rows will be a simultaneous invariant of the two quartic polynomials whose roots are $x_i$ and $e_j$.

$\approx \textsf{Cayley}^* \approx$ If \begin{equation} \bigint_{x_2}^{x_1} \frac {dx} {\sqrt{(x - e_1)(x - e_2)(x - e_3)(x - e_4)}} \space = \space \bigint_{x_4}^{x_3} \frac {dx} {\sqrt{(x - e_1)(x - e_2)(x - e_3)(x - e_4)}} \end{equation} then the $x_i$ and $e_i$ satisfy a polynomial relation \begin{equation} \mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace = \enspace 0 \end{equation} Because the integral formula is invariant under Möbius transformations, $\mathfrak{R}$ must be a simultaneous invariant (in the sense of classical Invariant Theory) of the two quartic polynomials $R(x) = (x - e_1)(x - e_2)(x - e_3)(x - e_4)$ and $S(x) = (x - x_1)(x - x_2)(x - x_3)(x - x_4)$. It is given by \begin{equation} \begin{aligned} \mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace &= \enspace \begin{vmatrix} 1 & x_1 & x_1^2 & x_1^3 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 1 & x_3 & x_3^2 & x_3^3 \\ 1 & x_4 & x_4^2 & x_4^3 \\ \end{vmatrix}^{-4} \prod_{\textsf{all signs}} \begin{vmatrix} 1 & x_1 & x_1^2 & \hphantom{+} \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\ 1 & x_2 & x_2^2 & \pm \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\ 1 & x_3 & x_3^2 & \pm \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\ 1 & x_4 & x_4^2 & \pm \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\ \end{vmatrix} \\\\ &= \enspace \left(x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4\right)^4\left(e_1 + e_2 + e_3 + e_4\right)^4 \quad + \quad \text{another 237 similar terms} \end{aligned} \end{equation} This can be simplified by letting \begin{equation} \begin{aligned} \mathfrak{S}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace &= \enspace \begin{vmatrix} 1 & x_1 & x_1^2 & x_1^3 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 1 & x_3 & x_3^2 & x_3^3 \\ 1 & x_4 & x_4^2 & x_4^3 \\ \end{vmatrix}^{-2} \frac 1 2 \space \left( \prod_{\textsf{evens}} \begin{vmatrix} 1 & x_1 & x_1^2 & \hphantom{+} \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\ 1 & x_2 & x_2^2 & \pm \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\ 1 & x_3 & x_3^2 & \pm \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\ 1 & x_4 & x_4^2 & \pm \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\ \end{vmatrix} \enspace + \enspace \prod_{\textsf{odds}} \ldots \right) \\\\ &= \enspace\left(x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4\right)^2\left(e_1 + e_2 + e_3 + e_4\right)^2 \quad + \quad \text{another 23 similar terms} \end{aligned} \end{equation} where evens means the subset of four factors with an even number of minus signs, and odds the subset of four factors with an odd number of minus signs. Then we have \begin{equation} \mathfrak{R}(R,S) \enspace = \enspace \mathfrak{S}(R,S)^2 \enspace - \enspace 64 \cdot \resultant(R,S) \end{equation} The simultaneous invariant $\mathfrak{S}$, of the two quartics $R$ and $S$, can be concisely expressed in terms of scaled transvectants \begin{equation} \mathfrak{S}(R,S) \enspace = \enspace 96 \thinspace \transvectant{\transvectant{R,R}_2,\transvectant{S,S}_2}_4 \enspace - \enspace 8 \thinspace \transvectant{R,S}_4^2 \enspace - \enspace 8 \thinspace \transvectant{R,R}_4 \transvectant{S,S}_4 \end{equation} which implies some unexpected symmetry, namely \begin{equation} \mathfrak{S}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \space = \space \mathfrak{S}(e_1,e_2,e_3,e_4,x_1,x_2,x_3,x_4)\hspace{4em} \textsf{and} \hspace{4em} \mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \space = \space \mathfrak{R}(e_1,e_2,e_3,e_4,x_1,x_2,x_3,x_4) \end{equation}
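The relation $\mathfrak{R}(R,S) = \mathfrak{S}(R,S)^2 - 64 \cdot \resultant(R,S)$ can be tested numerically by forming all eight sign-determinants for a pair of monic quartics; the roots below are arbitrary sample values chosen so that every $x_i - e_j$ is positive:

import itertools
import numpy as np

es = np.array([-1.0, -0.3, 0.4, 1.1])        # roots of R (arbitrary example)
xs = np.array([1.7, 2.2, 2.9, 3.5])          # roots of S, chosen so every x_i - e_j > 0

Rm = lambda t: np.prod(t - es)               # monic R(x) = (x - e_1)...(x - e_4)
V  = np.linalg.det(np.vander(xs, 4, increasing=True))

def D(signs):                                # sign pattern for rows 2, 3, 4 (row 1 is +)
    s = (1,) + signs
    M = [[1, t, t**2, si*np.sqrt(Rm(t))] for t, si in zip(xs, s)]
    return np.linalg.det(np.array(M))

allsigns = list(itertools.product((1, -1), repeat=3))
evens = [s for s in allsigns if s.count(-1) % 2 == 0]
odds  = [s for s in allsigns if s.count(-1) % 2 == 1]

S_inv = (np.prod([D(s) for s in evens]) + np.prod([D(s) for s in odds])) / (2 * V**2)
R_inv = np.prod([D(s) for s in allsigns]) / V**4
res   = np.prod([Rm(t) for t in xs])         # resultant of the two monic quartics
print(R_inv, S_inv**2 - 64*res)              # the two numbers agree to machine precision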


In terms of $f$, by taking the product over factors with an even number of minus signs we get this identity for any $z_1,z_2,z_3,z_4$ \begin{equation} \mathfrak{S}\left(f(z_1),f(z_2),f(z_3),f(z_4)\right) \enspace + \enspace 8 f'(z_1)f'(z_2)f'(z_3)f'(z_4) \enspace = \enspace 16 a^4 \sigma^4(2\rho) \space \frac {\prod\limits_{\textsf{evens}} \sigma(z_1 \pm z_2 \pm z_3 \pm z_4)} {\prod\limits_{i=1}^4 \sigma^2(z_i-\rho)\sigma^2(z_i+\rho)} \end{equation} and a similar formula for an odd number of minus signs \begin{equation} \mathfrak{S}\left(f(z_1),f(z_2),f(z_3),f(z_4)\right) \enspace - \enspace 8 f'(z_1)f'(z_2)f'(z_3)f'(z_4) \enspace = \enspace 16 a^4 \sigma^4(2\rho) \space \frac {\prod\limits_{\textsf{odds}} \sigma(z_1 \pm z_2 \pm z_3 \pm z_4)} {\prod\limits_{i=1}^4 \sigma^2(z_i-\rho)\sigma^2(z_i+\rho)} \end{equation} The product over all signs gives the identity \begin{equation} \mathfrak{R}\left(f(z_1),f(z_2),f(z_3),f(z_4)\right) \enspace = \enspace 256 a^8 \sigma^8(2\rho) \space \frac {\prod\limits_{\textsf{all signs}} \sigma(z_1 \pm z_2 \pm z_3 \pm z_4)} {\prod\limits_{i=1}^4 \sigma^4(z_i-\rho)\sigma^4(z_i+\rho)} \end{equation} From this we get the symmetric four variable addition formula \begin{equation} z_1 \pm z_2 \pm z_3 \pm z_4 \space = \space 0 \qquad \iff \qquad \mathfrak{R}(f(z_1),f(z_2),f(z_3),f(z_4)) \enspace = \enspace 0 \end{equation} and by the substitution $z_4=0$ the simpler but not quite so symmetric three variable addition formula \begin{equation} z_1 \pm z_2 \pm z_3 \space = \space 0 \qquad \iff \qquad \mathfrak{S}\left(f(z_1),f(z_2),f(z_3),f(0)\right) \enspace = \enspace 0 \end{equation}


The symmetry in $\mathfrak{S}$ implies the algebraic identity \begin{equation} \begin{vmatrix} 1 & x_1 & x_1^2 & x_1^3 \\ 1 & x_2 & x_2^2 & x_2^3 \\ 1 & x_3 & x_3^2 & x_3^3 \\ 1 & x_4 & x_4^2 & x_4^3 \\ \end{vmatrix}^{-2} \prod_{\textsf{evens}} \begin{vmatrix} 1 & x_1 & x_1^2 & \hphantom{\pm}\sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\ 1 & x_2 & x_2^2 & \pm\sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\ 1 & x_3 & x_3^2 & \pm\sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\ 1 & x_4 & x_4^2 & \pm\sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\ \end{vmatrix} \enspace = \enspace \begin{vmatrix} 1 & e_1 & e_1^2 & e_1^3 \\ 1 & e_2 & e_2^2 & e_2^3 \\ 1 & e_3 & e_3^2 & e_3^3 \\ 1 & e_4 & e_4^2 & e_4^3 \\ \end{vmatrix}^{-2} \prod_{\textsf{evens}} \begin{vmatrix} 1 & e_1 & e_1^2 & \hphantom{\pm}\sqrt{(e_1 - x_1)(e_1 - x_2)(e_1 - x_3)(e_1 - x_4)} \\ 1 & e_2 & e_2^2 & \pm\sqrt{(e_2 - x_1)(e_2 - x_2)(e_2 - x_3)(e_2 - x_4)} \\ 1 & e_3 & e_3^2 & \pm\sqrt{(e_3 - x_1)(e_3 - x_2)(e_3 - x_3)(e_3 - x_4)} \\ 1 & e_4 & e_4^2 & \pm\sqrt{(e_4 - x_1)(e_4 - x_2)(e_4 - x_3)(e_4 - x_4)} \\ \end{vmatrix} \end{equation} and a similar formula for odds.

It also leads to the following $\sigma$-product \begin{equation} \begin{vmatrix} 1 & e_1 & e_1^2 & e_1^3 \\ 1 & e_2 & e_2^2 & e_2^3 \\ 1 & e_3 & e_3^2 & e_3^3 \\ 1 & e_4 & e_4^2 & e_4^3 \\ \end{vmatrix}^{-1} \begin{vmatrix} 1 & e_1 & e_1^2 & \sqrt{(f(z_1) - e_1)(f(z_2) - e_1)(f(z_3) - e_1)(f(z_4) - e_1)} \\ 1 & e_2 & e_2^2 & \sqrt{(f(z_1) - e_2)(f(z_2) - e_2)(f(z_3) - e_2)(f(z_4) - e_2)} \\ 1 & e_3 & e_3^2 & \sqrt{(f(z_1) - e_3)(f(z_2) - e_3)(f(z_3) - e_3)(f(z_4) - e_3)} \\ 1 & e_4 & e_4^2 & \sqrt{(f(z_1) - e_4)(f(z_2) - e_4)(f(z_3) - e_4)(f(z_4) - e_4)} \\ \end{vmatrix} \enspace = \enspace - \frac {\sqrt{a}} {2} \cdot \frac {\sigma(2\rho) \prod\limits_{\textsf{evens}} \sigma\left(\frac {z_1} 2 \pm \frac {z_2} 2 \pm \frac {z_3} 2 \pm \frac {z_4} 2\right)} {\sqrt{\prod \sigma(z_i - \rho)\sigma(z_i + \rho)}} \end{equation}

As can be seen from the transvectant expression, $\mathfrak{R}$ is symmetric when $x$ and $e$ are interchanged, that is \begin{equation} \mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace = \enspace \mathfrak{R}(e_1,e_2,e_3,e_4,x_1,x_2,x_3,x_4) \end{equation} Surprisingly, this implies that if \begin{equation} \begin{vmatrix} 1 & x_1 & x_1^2 & \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\ 1 & x_2 & x_2^2 & \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\ 1 & x_3 & x_3^2 & \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\ 1 & x_4 & x_4^2 & \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\ \end{vmatrix} \enspace = \enspace 0 \end{equation} for some selection of signs on the square roots, then \begin{equation} \begin{vmatrix} 1 & e_1 & e_1^2 & \sqrt{(e_1 - x_1)(e_1 - x_2)(e_1 - x_3)(e_1 - x_4)} \\ 1 & e_2 & e_2^2 & \sqrt{(e_2 - x_1)(e_2 - x_2)(e_2 - x_3)(e_2 - x_4)} \\ 1 & e_3 & e_3^2 & \sqrt{(e_3 - x_1)(e_3 - x_2)(e_3 - x_3)(e_3 - x_4)} \\ 1 & e_4 & e_4^2 & \sqrt{(e_4 - x_1)(e_4 - x_2)(e_4 - x_3)(e_4 - x_4)} \\ \end{vmatrix} \enspace = \enspace 0 \end{equation} for some selection of signs on the square roots. More surprisingly, it can be numerically verified that for the same selection of signs, the ratios of the first three minors for any row are equal, so taking the fourth row as an example we have \begin{equation} \frac { \begin{vmatrix} x_1 & x_1^2 & \sqrt{R(x_1)} \\ x_2 & x_2^2 & \sqrt{R(x_2)} \\ x_3 & x_3^2 & \sqrt{R(x_3)} \\ \end{vmatrix}} {\begin{vmatrix} e_1 & e_1^2 & \sqrt{S(e_1)} \\ e_2 & e_2^2 & \sqrt{S(e_2)} \\ e_3 & e_3^2 & \sqrt{S(e_3)} \\ \end{vmatrix}} \enspace = \enspace \frac { \begin{vmatrix} 1 & x_1^2 & \sqrt{R(x_1)} \\ 1 & x_2^2 & \sqrt{R(x_2)} \\ 1 & x_3^2 & \sqrt{R(x_3)} \\ \end{vmatrix}} {\begin{vmatrix} 1 & e_1^2 & \sqrt{S(e_1)} \\ 1 & e_2^2 & \sqrt{S(e_2)} \\ 1 & e_3^2 & \sqrt{S(e_3)} \\ \end{vmatrix}} \enspace = \enspace \frac { \begin{vmatrix} 1 & x_1 & \sqrt{R(x_1)} \\ 1 & x_2 & \sqrt{R(x_2)} \\ 1 & x_3 & \sqrt{R(x_3)} \\ \end{vmatrix}} {\begin{vmatrix} 1 & e_1 & \sqrt{S(e_1)} \\ 1 & e_2 & \sqrt{S(e_2)} \\ 1 & e_3 & \sqrt{S(e_3)} \\ \end{vmatrix}} \end{equation}

This in turn implies that if a quartic curve has roots $e_1,e_2,e_3,e_4$ and intersects a parabola at four points with $x$-coordinates $x_1,x_2,x_3,x_4$, then there exists a complementary quartic curve with roots $x_1,x_2,x_3,x_4$ which intersects the same parabola at four points with $x$-coordinates $e_1,e_2,e_3,e_4$.
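A numerical illustration of this last statement: take a quartic with assumed real roots $e_j$ and three points on it, construct the fourth intersection with the interpolating parabola (so that the $x$-side determinant vanishes for the corresponding signs), and then search the sign choices on the $e$-side determinant (all sample values are arbitrary):

import itertools
import numpy as np

es = np.array([-1.0, -0.3, 0.4, 1.1])            # roots of the quartic (example values)
xs = np.array([1.7, 2.2, 2.9])                   # three x-coordinates on y^2 = R(x)
ys = np.sqrt([np.prod(t - es) for t in xs])      # + branch of y at each point

# fourth intersection of the interpolating parabola with the curve; by construction
# the x-side determinant vanishes for the corresponding choice of square-root signs
P  = np.polyfit(xs, ys, 2)
rts = np.roots(np.polysub(np.polymul(P, P), np.poly(es)))
x4 = max(rts, key=lambda r: min(abs(r - xs)))    # the root that is not x1, x2, x3
x_all = np.append(xs, x4)

# conclusion to check: some sign choice makes the e-side determinant vanish as well
S = lambda t: np.prod(t - x_all)                 # quartic with roots x_1, ..., x_4
dets = []
for s in itertools.product((1, -1), repeat=3):
    signs = (1,) + s
    M = [[1, t, t**2, si*np.emath.sqrt(S(t))] for t, si in zip(es, signs)]
    dets.append(abs(np.linalg.det(np.array(M, dtype=complex))))
print(min(dets))                                 # ~ 0 for one of the sign patterns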


Substituting $(x,\space y) \longmapsto (x/z,\space y/z^2)$ in the quartic and generalising a little gives an addition formula for integer triples on the curve \begin{equation} h y^2 \enspace + \enspace (p x^2 + q x z + r z^2)\thinspace y \enspace + \enspace (a x^4 + b x^3 z + c x^2 z^2 + d x z^3 + e z^4) \enspace = \enspace 0 \end{equation} That is to say, if this curve has integer coefficients and $(x_1,y_1,z_1),(x_2,y_2,z_2),(x_3,y_3,z_3)$ are three integer points on this curve, then a fourth integer point is given by \begin{equation} \begin{aligned} x_4 \enspace &= \enspace \frac h {x_1 x_2 x_3} \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 1 & 0 & 0 & \alpha_1 \\ \end{vmatrix} \space \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 1 & 0 & 0 & \alpha_2 \\ \end{vmatrix} \\\\ y_4 \enspace &= \enspace \frac {a^2} {y_1 y_2 y_3} \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 1 & \beta_1 & \beta_1^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 1 & \beta_2 & \beta_2^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 1 & \beta_3 & \beta_3^2 & 0 \\ \end{vmatrix} \space \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 1 & \beta_4 & \beta_4^2 & 0 \\ \end{vmatrix} \\\\ z_4 \enspace &= \enspace \frac h {z_1 z_2 z_3} \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 0 & 0 & 1 & \gamma_1 \\ \end{vmatrix} \thinspace \begin{vmatrix} z_1^2 & x_1 z_1 & x_1^2 & y_1 \\ z_2^2 & x_2 z_2 & x_2^2 & y_2 \\ z_3^2 & x_3 z_3 & x_3^2 & y_3 \\ 0 & 0 & 1 & \gamma_2 \\ \end{vmatrix} \end{aligned} \end{equation} where the $\alpha_i$ are the roots of $ht^2 + rt + e = 0$, the $\beta_i$ are the roots of $at^4 + bt^3 + ct^2 + dt + e = 0$ and the $\gamma_i$ are the roots of $ht^2 + pt + a = 0$.


$^*$ The attributions above are at the conceptual level. As far as I am aware, none of these authors wrote down these precise formulae.