In this section we gather together quartic elliptic formulae from previous pages to show that it is not necessary to reduce
to special forms to get elegant and reasonably concise formulae.
$\approx \textsf{Euler}^* \approx$ If the roots of two quartic polynomials can be mapped to one another by a Möbius transform $L$ then the change of variables $x = L(u)$ gives
\begin{equation*}
\sqrt[12] {\discrim(a,b,c,d,e)} \space \bigint \frac 1 {\sqrt{ax^4 + bx^3 + cx^2 + dx + e}} dx \enspace = \enspace
\sqrt[12] {\discrim(p,q,r,s,t)} \space \bigint \frac 1 {\sqrt{pu^4 + qu^3 + ru^2 + su + t}} du
\end{equation*}
where $\discrim$ denotes the discriminant of the fourth degree polynomials.
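This invariance is easy to test numerically. Below is a minimal standard-library sketch (the quartic, its roots, the Möbius coefficients and the integration interval, chosen where $R>0$, are all arbitrary):

```python
import math

# Quartic R(x) = a*(x-e1)(x-e2)(x-e3)(x-e4) with arbitrarily chosen real roots.
a, roots = 2.0, [0.0, 1.0, 3.0, 6.0]

def R(x):
    p = a
    for e in roots:
        p *= x - e
    return p

# Möbius map x = L(u) = (al*u + be)/(ga*u + de), arbitrary coefficients.
al, be, ga, de = 2.0, 1.0, 1.0, 3.0

def Linv(x):                        # u = L^{-1}(x)
    return (de*x - be) / (-ga*x + al)

def Q(u):                           # transformed quartic (ga*u + de)^4 * R(L(u))
    return (ga*u + de)**4 * R((al*u + be) / (ga*u + de))

def disc(lead, rts):                # discriminant = lead^(2n-2) * prod (ri - rj)^2
    d = lead**6
    for i in range(4):
        for j in range(i + 1, 4):
            d *= (rts[i] - rts[j])**2
    return d

def simpson(f, lo, hi, n=2000):     # composite Simpson rule
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if k % 2 else 2)*f(lo + k*h) for k in range(1, n))
    return s * h / 3

x3, x1 = 6.5, 7.0                   # an interval on which R > 0
lead_Q = a * math.prod(al - ga*e for e in roots)   # leading coefficient of Q
roots_Q = [Linv(e) for e in roots]

lhs = disc(a, roots)**(1/12) * simpson(lambda x: 1/math.sqrt(R(x)), x3, x1)
rhs = disc(lead_Q, roots_Q)**(1/12) * simpson(lambda u: 1/math.sqrt(Q(u)), Linv(x3), Linv(x1))
print(lhs - rhs)                    # ≈ 0
```

Here the discriminants are computed from the known roots; computing them from the coefficients with the standard degree-four discriminant formula would give the same values.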
NOTES
The derivation of this formula is very similar to showing the cross-ratio is invariant under Möbius
transformations.
When integrated around a closed contour on the Riemann surface, the above expression is essentially the square of the Dedekind eta function.
The three "Schwarz-Christoffel" elliptic integrals satisfy a similar formula under Möbius transforms.
For example if two cubic polynomials are mapped to one another by a Möbius transform then
\begin{equation*}
\sqrt[6] {\discrim(a,b,c,d)} \space \bigint \frac 1 {\left(ax^3 + bx^2 + cx + d\right)^{2/3}} dx \enspace = \enspace
\sqrt[6] {\discrim(p,q,r,s)} \space \bigint \frac 1 {\left(pu^3 + qu^2 + ru + s\right)^{2/3}} du
\end{equation*}
$\approx \textsf{Weierstrass}^* \approx$ Let $f$ be a solution of the differential equation
\begin{equation*}
f'(z)^2 \enspace = \enspace a f(z)^4 + b f(z)^3 + c f(z)^2 + d f(z) + e
\end{equation*}
with the boundary condition $f'(0) = 0$.
In general there are four solutions corresponding to the four roots of the polynomial.
Each solution $f$ is an even function and its cross-ratio satisfies the formula
\begin{equation*}
\frac {\big[f(z_1)-f(z_2)\big]\thinspace\big[f(z_3)-f(z_4)\big]} {\big[f(z_1)-f(z_3)\big]\thinspace\big[f(z_2)-f(z_4)\big]} \enspace = \enspace
\frac {\sigma(z_1-z_2)\sigma(z_1+z_2)\sigma(z_3-z_4)\sigma(z_3+z_4)} {\sigma(z_1-z_3)\sigma(z_1+z_3)\sigma(z_2-z_4)\sigma(z_2+z_4)}
\end{equation*}
where $\sigma$ is the Weierstrass sigma function with the same periods as $f$.
NOTES
The most straightforward way to obtain this formula is to reduce the differential equation to Weierstrass form via a Möbius transform,
compute the cross-ratio for the Weierstrass $\wp$ function in terms of $\sigma$ functions, and observe that because of the invariance of the cross-ratio it is also true for
solutions of the original differential equation (under the specified boundary condition).
From this formula we can obtain various other expressions for $f$.
For example if $\rho$ is the pole of $f$ with residue $1/\sqrt{a}$ then putting $z_4 = \rho$ followed by $z_3 \rightarrow \rho$ gives
\begin{equation*}
\frac {f(z_1) - f(z_2)} {f(z_1) - f(z_3)} \enspace = \enspace \frac {\sigma(z_1 - z_2) \sigma(z_1 + z_2) \sigma(z_3 - \rho) \sigma(z_3 + \rho)}
{\sigma(z_1 - z_3) \sigma(z_1 + z_3) \sigma(z_2 - \rho) \sigma(z_2 + \rho)} \qquad\qquad
f(z_1) \space - \space f(z_2) \enspace = \enspace -\frac {\sigma(2\rho) \thinspace \sigma(z_1 - z_2)\thinspace \sigma(z_1 + z_2)} {\sqrt{a} \thinspace \sigma(z_1 - \rho)\sigma(z_1 + \rho)
\sigma(z_2 - \rho)\sigma(z_2 + \rho)}
\end{equation*}
An alternative method, not involving the $\wp$ function, is to start with an arbitrary order 2 elliptic function $f(z)$ with simple poles at $\pm\rho$.
Then, using the facts that the sum of the residues of an elliptic function is zero and that the sum of the poles equals the sum of the zeroes,
together with the Laurent expansions at the poles, show that $f(z) - f(-z) \equiv 0$.
That is to say $f$ is an even function.
Then show that $\left\{1,f(z),f^2(z),f^3(z),f^4(z)\right\}$ is a basis of the vector space of even elliptic functions with poles of order at most 4 at $\pm\rho$.
Then since $f'^2(z)$ is an even elliptic function with poles of order $4$ at $\pm\rho$ it follows that it must be a linear combination of those basis functions.
Then compute this Vandermonde determinant by counting zeroes and poles
\begin{equation*}
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & \cdots & f(z_1)^{n-1} \\
1 & f(z_2) & f(z_2)^2 & \cdots & f(z_2)^{n-1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
1 & f(z_n) & f(z_n)^2 & \cdots & f(z_n)^{n-1} \\
\end{vmatrix}
\enspace = \enspace \textit{const.} \cdot \frac {\prod\limits_{i \lt j}\sigma(z_i-z_j)\sigma(z_i+z_j)}
{\prod\limits_{i=1}^n \big[\sigma(z_i-\rho)\sigma(z_i+\rho)\big]^{n-1}}
\end{equation*}
Similar formulae hold for the three "Schwarz-Christoffel" elliptic functions.
For example if
\begin{equation*}
f'(z)^3 \enspace = \enspace \left( a f(z)^3 + b f(z)^2 + c f(z) + d \right)^2
\end{equation*}
with boundary condition $f'(0)=0$ then $f(\zeta z) = f(z)$ where $\zeta$ is a primitive third root of unity and
\begin{equation*}
\frac {\big[f(z_1)-f(z_2)\big]\thinspace\big[f(z_3)-f(z_4)\big]} {\big[f(z_1)-f(z_3)\big]\thinspace\big[f(z_2)-f(z_4)\big]} \enspace = \enspace
\frac {\sigma(z_1-z_2)\sigma(z_1-\zeta z_2)\sigma(z_1-\zeta^2 z_2) \thinspace \sigma(z_3-z_4)\sigma(z_3-\zeta z_4)\sigma(z_3-\zeta^2 z_4)}
{\sigma(z_1-z_3)\sigma(z_1-\zeta z_3)\sigma(z_1-\zeta^2 z_3) \thinspace \sigma(z_2-z_4)\sigma(z_2-\zeta z_4)\sigma(z_2-\zeta^2 z_4)}
\end{equation*}
$\approx \textsf{Frobenius and Stickelberger}^* \approx$ In a similar vein, if $\pm\rho$ are the poles of $f$, we have
\begin{equation*}
\{f\} \enspace = \enspace
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & f(z_4) & f(z_4)^2 & f'(z_4) \\
\end{vmatrix}
\enspace = \enspace \frac {2 \sigma^4(2\rho) \space \sigma(z_1 + z_2 + z_3 + z_4)\space \prod\limits_{i \lt j}\sigma(z_i-z_j)} {a^2 \prod\limits_{i=1}^4 \sigma^2(z_i-\rho)\sigma^2(z_i+\rho)}
\end{equation*}
This equation is the first minor miracle in the computation of the addition formula: we are able to deduce the $\sigma(z_1 + z_2 + z_3 + z_4)$ factor from the fact
that the sum of the zeroes of an elliptic function is equal to the sum of the poles.
When $z_1 + z_2 + z_3 + z_4 = 0$ we have
\begin{equation*}
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & f(z_4) & f(z_4)^2 & f'(z_4) \\
\end{vmatrix} \enspace = \enspace 0
\end{equation*}
This is the symmetric addition formula for $f$.
NOTES
These formulae hold because $1,f,f^2,f'$
is a basis for the 4 dimensional vector space of elliptic functions with poles of order at most 2 at $\pm\rho$.
The formulae can be extended to $2n$ variables by, for example, using the basis $1,f \ldots f^n, f',ff' \ldots f^{n-2}f'$
for the $2n$ dimensional vector space of elliptic functions with poles of order at most $n$ at $\pm\rho$.
The original Frobenius-Stickelberger formula used the functions $1,\wp,\wp',\wp'' \ldots \wp^{(n-2)}$ which form a
basis for the $n$ dimensional vector space of elliptic functions
with a pole of order at most $n$ at zero.
When one period goes to infinity the above formula reduces to a trigonometric identity.
\begin{equation*}
\left\{\space \cos(2z)\space \right\} \enspace = \enspace 32 \thinspace \sin(z_1 + z_2 + z_3 + z_4)\thinspace \prod\limits_{i \lt j} \sin(z_i-z_j)
\end{equation*}
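This trigonometric degeneration can be checked directly with the standard library (the sample points below are arbitrary). Here $\{f\}$ denotes, as above, the $4\times4$ determinant with rows $\big(1, f(z_i), f(z_i)^2, f'(z_i)\big)$:

```python
import math

def det4(m):                         # cofactor expansion along the first row
    def det3(p):
        return (p[0][0]*(p[1][1]*p[2][2] - p[1][2]*p[2][1])
              - p[0][1]*(p[1][0]*p[2][2] - p[1][2]*p[2][0])
              + p[0][2]*(p[1][0]*p[2][1] - p[1][1]*p[2][0]))
    return sum((-1)**j * m[0][j]
               * det3([[r[k] for k in range(4) if k != j] for r in m[1:]])
               for j in range(4))

z = [0.3, 1.1, -0.7, 0.5]            # arbitrary sample points

# rows (1, f(z_i), f(z_i)^2, f'(z_i)) for f(z) = cos(2z), f'(z) = -2 sin(2z)
lhs = det4([[1.0, math.cos(2*t), math.cos(2*t)**2, -2*math.sin(2*t)] for t in z])

rhs = 32 * math.sin(sum(z))
for i in range(4):
    for j in range(i + 1, 4):
        rhs *= math.sin(z[i] - z[j])

print(lhs - rhs)                     # ≈ 0
```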
When both periods go to infinity it reduces to a simple polynomial identity closely related to
depleted Vandermonde determinants.
\begin{equation*}
\left\{\space \frac 1 {z-\rho} - \frac 1 {z+\rho} \space\right\} \enspace = \enspace
\frac {32 \rho^4 \left(z_1 + z_2 + z_3 + z_4\right)\thinspace \prod\limits_{i \lt j}(z_i-z_j)} {\prod\limits_{i=1}^4 (z_i-\rho)^2(z_i+\rho)^2}
\end{equation*}
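Since this second degeneration is a purely rational identity, it can be verified exactly in rational arithmetic (the values of $\rho$ and the $z_i$ below are arbitrary):

```python
from fractions import Fraction as Fr

def det4(m):                         # cofactor expansion along the first row
    def det3(p):
        return (p[0][0]*(p[1][1]*p[2][2] - p[1][2]*p[2][1])
              - p[0][1]*(p[1][0]*p[2][2] - p[1][2]*p[2][0])
              + p[0][2]*(p[1][0]*p[2][1] - p[1][1]*p[2][0]))
    return sum((-1)**j * m[0][j]
               * det3([[r[k] for k in range(4) if k != j] for r in m[1:]])
               for j in range(4))

rho = Fr(2)
z = [Fr(1, 3), Fr(5, 7), Fr(-1, 2), Fr(9, 4)]      # arbitrary rational points

f  = lambda t: 1/(t - rho) - 1/(t + rho)
df = lambda t: -1/(t - rho)**2 + 1/(t + rho)**2    # derivative of f

lhs = det4([[Fr(1), f(t), f(t)**2, df(t)] for t in z])

num = 32 * rho**4 * sum(z)
for i in range(4):
    for j in range(i + 1, 4):
        num *= z[i] - z[j]
den = Fr(1)
for t in z:
    den *= (t - rho)**2 * (t + rho)**2

print(lhs == num / den)                            # True
```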
Let
\begin{equation*}
R(x) \enspace = \enspace ax^4 + bx^3 + cx^2 + dx + e
\end{equation*}
The differential equation for $f$ implies that $f^{-1}(x)$ is an anti-derivative of $\large{\frac 1 {\sqrt{R(x)}}}$.
Therefore another way to express the symmetric addition formula for $f$ is to say that if
\begin{equation*}
\bigint_{x_3}^{x_1} \frac {dx} {\sqrt{R(x)}} \enspace + \enspace \bigint_{x_3}^{x_2} \frac {dx} {\sqrt{R(x)}} \enspace = \enspace \bigint_{x_3}^{x_4} \frac {dx} {\sqrt{R(x)}}
\end{equation*}
then $x_1,x_2,x_3,x_4$ satisfy the algebraic relation
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & \hphantom{+}\sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \hphantom{+}\sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\
1 & x_4 & x_4^2 & -\sqrt{R(x_4)} \\
\end{vmatrix} \space = \space 0
\end{equation*}
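A numerical sketch of this statement using only the standard library: the quartic $x^4+x^3+x^2+x+1$ is an arbitrary choice that is positive on the whole real line, so all the integrals stay on one real branch; $x_4$ is found by bisection and the determinant is then checked to vanish.

```python
import math

# R(x) = x^4 + x^3 + x^2 + x + 1 > 0 for all real x
def R(x):
    return (((x + 1)*x + 1)*x + 1)*x + 1

def simpson(f, lo, hi, n=4000):
    h = (hi - lo) / n
    s = f(lo) + f(hi) + sum((4 if k % 2 else 2)*f(lo + k*h) for k in range(1, n))
    return s * h / 3

def I(t):                     # integral of dx/sqrt(R(x)) from 0 to t
    return simpson(lambda x: 1 / math.sqrt(R(x)), 0.0, t)

x1, x2, x3 = 0.5, 0.7, 0.0
target = I(x1) + I(x2)        # must equal the integral from x3 = 0 to x4

lo, hi = x2, 60.0             # bisection: I is increasing in its upper limit
for _ in range(100):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if I(mid) < target else (lo, mid)
x4 = 0.5 * (lo + hi)

def det4(m):                  # cofactor expansion along the first row
    def det3(p):
        return (p[0][0]*(p[1][1]*p[2][2] - p[1][2]*p[2][1])
              - p[0][1]*(p[1][0]*p[2][2] - p[1][2]*p[2][0])
              + p[0][2]*(p[1][0]*p[2][1] - p[1][1]*p[2][0]))
    return sum((-1)**j * m[0][j]
               * det3([[r[k] for k in range(4) if k != j] for r in m[1:]])
               for j in range(4))

rows = [[1.0, x, x*x, s * math.sqrt(R(x))]
        for x, s in [(x1, 1), (x2, 1), (x3, -1), (x4, -1)]]
print(det4(rows))             # ≈ 0
```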
NOTES
Explicitly, putting $x_i = f(z_i)$ and using the facts that $f^{-1}(x)$ is an anti-derivative of $\large{\frac 1 {\sqrt{R(x)}}}$ and that $f$ is even and $f'$ is odd, we have
\begin{equation*}
\bigint_{x_3}^{x_1} \frac {dx} {\sqrt{R(x)}} \enspace + \enspace
\bigint_{x_4}^{x_2} \frac {dx} {\sqrt{R(x)}} \enspace = \enspace 0 \qquad\qquad \implies \qquad\qquad
f^{-1}(x_1) \space - \space f^{-1}(x_3) \space + \space f^{-1}(x_2) \space - \space f^{-1}(x_4) \enspace = \enspace 0 \qquad\qquad \implies \qquad\qquad
z_1 \space + \space z_2 \space - \space z_3 \space - \space z_4 \enspace \equiv \enspace 0
\end{equation*}
\begin{equation*}
\implies \qquad\qquad
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(-z_3) & f(-z_3)^2 & f'(-z_3) \\
1 & f(-z_4) & f(-z_4)^2 & f'(-z_4) \\
\end{vmatrix}\enspace = \enspace 0 \qquad\qquad \implies \qquad\qquad
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & \hphantom{+}f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & \hphantom{+}f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & -f'(z_3) \\
1 & f(z_4) & f(z_4)^2 & -f'(z_4) \\
\end{vmatrix}\enspace = \enspace 0 \qquad\qquad \implies \qquad\qquad
\begin{vmatrix}
1 & x_1 & x_1^2 & \hphantom{+}\sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \hphantom{+}\sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\
1 & x_4 & x_4^2 & -\sqrt{R(x_4)} \\
\end{vmatrix} \enspace = \enspace 0
\end{equation*}
Using the previously mentioned basis, this result generalises to $2n$ variables, for example for $n=3$
\begin{equation*}
\bigint_{x_4}^{x_1} \frac {dx} {\sqrt{R(x)}} \enspace + \enspace
\bigint_{x_5}^{x_2} \frac {dx} {\sqrt{R(x)}} \enspace + \enspace
\bigint_{x_6}^{x_3} \frac {dx} {\sqrt{R(x)}} \enspace = \enspace 0 \qquad\qquad \implies \qquad\qquad
\begin{vmatrix}
1 & x_1 & x_1^2 & x_1^3 & \hphantom{+}\sqrt{R(x_1)} & \hphantom{+}x_1\sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & x_2^3 & \hphantom{+}\sqrt{R(x_2)} & \hphantom{+}x_2\sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & x_3^3 & \hphantom{+}\sqrt{R(x_3)} & \hphantom{+}x_3\sqrt{R(x_3)} \\
1 & x_4 & x_4^2 & x_4^3 & -\sqrt{R(x_4)} & -x_4\sqrt{R(x_4)} \\
1 & x_5 & x_5^2 & x_5^3 & -\sqrt{R(x_5)} & -x_5\sqrt{R(x_5)} \\
1 & x_6 & x_6^2 & x_6^3 & -\sqrt{R(x_6)} & -x_6\sqrt{R(x_6)} \\
\end{vmatrix} \enspace = \enspace 0
\end{equation*}
$\approx \textsf{Abel}^* \approx$ We can also express this in terms of the quartic curve
\begin{equation*}
y^2 \enspace = \enspace ax^4 + bx^3 + cx^2 + dx + e
\end{equation*}
If we run a parabola through three points $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ on this curve, it will intersect the curve at a fourth point $(x_4, y_4)$ with coordinates given implicitly by
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & x_4 & x_4^2 & y_4 \\
\end{vmatrix} \enspace = \enspace 0 \qquad\qquad \textsf{and} \qquad\qquad
y_4^2 \enspace = \enspace R(x_4)
\end{equation*}
[Diagram: a parabola through three points on the quartic curve, meeting it at a fourth point]
These two equations can be solved for $x_4$ by eliminating $y_4$, resulting in a quartic equation for $x_4$, and this is where the second minor miracle occurs.
Because the points $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ lie on both the quartic and parabolic curves, we already know three roots of this equation, namely $x_1,x_2,x_3$.
When we factor out these roots we are left with a linear equation for $x_4$ which is easily solved.
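The whole construction can be carried out exactly in rational arithmetic. The sketch below uses an illustrative curve $y^2 = x^4+x^3+x^2+x+1$, which passes through the rational points $(0,1)$, $(-1,1)$ and $(3,11)$, and extracts the fourth root with Vieta's formula (via the $x^3$ coefficient of $R(x)-P(x)^2$) rather than resultants:

```python
from fractions import Fraction as Fr

# Illustrative quartic curve y^2 = R(x) = x^4 + x^3 + x^2 + x + 1,
# with leading coefficients a = b = 1, and three rational points on it.
a, b = Fr(1), Fr(1)

def R(x):
    return x**4 + x**3 + x**2 + x + 1

(x1, y1), (x2, y2), (x3, y3) = (Fr(0), Fr(1)), (Fr(-1), Fr(1)), (Fr(3), Fr(11))

def P(x):       # the parabola through the three points (Lagrange interpolation)
    return (y1*(x - x2)*(x - x3)/((x1 - x2)*(x1 - x3))
          + y2*(x - x1)*(x - x3)/((x2 - x1)*(x2 - x3))
          + y3*(x - x1)*(x - x2)/((x3 - x1)*(x3 - x2)))

# R(x) - P(x)^2 = 0 at the four intersection abscissae; three roots are known,
# so Vieta's formula for the x^3 coefficient delivers the fourth one.
p2 = (P(Fr(2)) - 2*P(Fr(1)) + P(Fr(0))) / 2        # quadratic coefficient of P
p1 = P(Fr(1)) - P(Fr(0)) - p2                      # linear coefficient of P
x4 = -(b - 2*p1*p2)/(a - p2*p2) - x1 - x2 - x3
y4 = P(x4)

print(x4, y4)                                      # -8/11 101/121
```

The fourth point $(-8/11,\,101/121)$ is rational, as promised, and lies on the curve since $y_4^2 = R(x_4)$.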
If we use resultants, in their product form, to do the elimination we get
\begin{equation*}
x_4 \enspace = \enspace \frac 1 {x_1 x_2 x_3}
\frac {\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & 0 & 0 & \sqrt{e} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & 0 & 0 & -\sqrt{e} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}}, \qquad\qquad
y_4 \enspace = \enspace \frac {a^2} {y_1 y_2 y_3}
\frac {\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & e_1 & e_1^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & e_2 & e_2^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & e_3 & e_3^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & e_4 & e_4^2 & 0 \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix}^2 \thinspace
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}^2}
\end{equation*}
where $e_1,e_2,e_3,e_4$ are the roots of the polynomial $R(x)$.
Due to the symmetries, both $x_4$ and $y_4$ are rational functions of $x_1,x_2,x_3,y_1,y_2,y_3$ and the coefficients $a,b,c,d,e$ of the quartic curve.
This means that if the curve has rational coefficients and the first three points have rational coordinates then the fourth point will also have rational coordinates.
NOTES
Using the product form of the resultant, with respect to $y$, we can infer the existence of an identity for all $x_i,y_i$ of the form
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & x & x^2 & \sqrt{R(x)} \\
\end{vmatrix}
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & x & x^2 & -\sqrt{R(x)} \\
\end{vmatrix} \enspace = \enspace
(x-x_1)(x-x_2)(x-x_3)(Dx-N) \enspace + \enspace \sum_{i=1}^3 Q_i \left(y_i^2 - R(x_i)\right)
\end{equation*}
where $D,N,Q_1,Q_2,Q_3$ are undetermined polynomials in $x_1,x_2,x_3,y_1,y_2,y_3$.
If $(x_1,y_1)$, $(x_2,y_2)$ and $(x_3,y_3)$ lie on the quartic curve, then the $x$-coordinate of the fourth point of intersection with the parabola passing through them is given by $Dx_4-N=0$.
In that case, taking the ratio of two resultants computed at two arbitrary points $(x_5,y_5)$ and $(x_6,y_6)$ on the quartic curve gives
\begin{equation*}
\frac {(x_5-x_1)(x_5-x_2)(x_5-x_3)(x_5-x_4)} {(x_6-x_1)(x_6-x_2)(x_6-x_3)(x_6-x_4)} \enspace = \enspace
\frac {\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & x_5 & x_5^2 & y_5 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & x_5 & x_5^2 & -y_5 \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & x_6 & x_6^2 & y_6 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & x_3 & x_3^2 & y_3 \\
1 & x_6 & x_6^2 & -y_6 \\
\end{vmatrix}}
\end{equation*}
Evaluating at $x_5=0$ and $x_6 \rightarrow \infty$ yields the stated formula for $x_4$.
The formula for $y_4$ can be obtained by a similar method.
By putting $e_4 = -b / a$ and letting $a \rightarrow 0$ the formula can be specialised to cubic polynomials.
When specialised to quadratic polynomials $R(x)$ this geometric construction gives a nice theorem on conic sections:
“if two axis-aligned conic sections intersect at four points, then those four points lie on a circle and the sum of their angles is zero modulo $2\pi$”
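The concyclicity part is easy to see and to test: for two conics with no $xy$ term, some linear combination of their equations has equal $x^2$ and $y^2$ coefficients and no $xy$ term, hence is a circle through all four intersection points. A standard-library sketch with two arbitrarily chosen axis-aligned parabolas:

```python
# Two axis-aligned conics: C1: y = x^2 - 2 and C2: x = y^2 - 3.
# Substituting x = y^2 - 3 into C1 gives y^4 - 6y^2 - y + 7 = 0
# for the ordinates of the intersection points.
def g(y):
    return y**4 - 6*y**2 - y + 7

def bisect(lo, hi):                 # g changes sign on each bracket below
    for _ in range(100):
        mid = 0.5*(lo + hi)
        if g(lo)*g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5*(lo + hi)

ys = [bisect(*br) for br in [(-2.0, -1.7), (-1.6, -1.4), (1.0, 1.6), (2.0, 2.5)]]
pts = [(y*y - 3, y) for y in ys]    # the four intersection points

# C1 + C2 = x^2 + y^2 - x - y - 5 = 0 has equal quadratic terms and no xy,
# i.e. it is a circle, and it vanishes wherever both conics vanish.
residuals = [x*x + y*y - x - y - 5 for x, y in pts]
print(residuals)                    # ≈ 0 at each point
```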
And from this it follows that
\begin{equation*}
\bigint_{x_3}^{x_1} \frac {dx} {\sqrt{R(x)}} \enspace + \enspace \bigint_{x_3}^{x_2} \frac {dx} {\sqrt{R(x)}} \enspace = \enspace \bigint_{x_3}^{x_4} \frac {dx} {\sqrt{R(x)}}
\qquad\qquad\implies\qquad\qquad
x_4 \enspace = \enspace \frac 1 {x_1 x_2 x_3}
\frac {\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\
1 & 0 & 0 & \sqrt{e} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\
1 & 0 & 0 & -\sqrt{e} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & -\sqrt{R(x_3)} \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}}
\end{equation*}
Putting $R(x)=(1-x^2)(1-k^2x^2)$ and $x_1 = u$, $x_2 = v$, $x_3 = 0$ and $x_4 = w$ gives Euler's classic addition formula for the elliptic integral
\begin{equation*}
\bigint_0^u \frac {dx} {\sqrt{R(x)}} \enspace + \enspace \bigint_0^v \frac {dx} {\sqrt{R(x)}} \enspace = \enspace
\bigint_0^w \frac {dx} {\sqrt{R(x)}} \qquad\qquad \implies \qquad\qquad
w \enspace = \enspace \frac {u\sqrt{R(v)} \space + \space v\sqrt{R(u)}} {1 \space - \space k^2u^2v^2} \qquad\qquad\qquad\qquad
\end{equation*}
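Euler's formula can be confirmed by direct quadrature (standard library only; $k$, $u$, $v$ are arbitrary, taken small enough that the integrands stay real):

```python
import math

k = 0.5
def R(x):
    return (1 - x*x) * (1 - k*k*x*x)

def simpson(f, lo, hi, n=2000):
    h = (hi - lo) / n
    return (h/3) * (f(lo) + f(hi)
                    + sum((4 if j % 2 else 2)*f(lo + j*h) for j in range(1, n)))

def I(t):                                # elliptic integral from 0 to t
    return simpson(lambda x: 1 / math.sqrt(R(x)), 0.0, t)

u, v = 0.3, 0.4
w = (u*math.sqrt(R(v)) + v*math.sqrt(R(u))) / (1 - k*k*u*u*v*v)
print(I(u) + I(v) - I(w))                # ≈ 0
```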
NOTES
This last calculation is not quite as straightforward as it appears.
Writing $y_i = \sqrt{R(x_i)}$ then letting $x_3 \rightarrow 0$ gives
\begin{equation*}
x_4 \enspace = \enspace \frac 1 {x_1 x_2}
\frac {\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & 0 & 0 & -1 \\
1 & 0 & 0 & 1 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
0 & 1 & 0 & 0 \\
1 & 0 & 0 & -1 \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & 0 & 0 & -1 \\
0 & 0 & 1 & k \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & y_1 \\
1 & x_2 & x_2^2 & y_2 \\
1 & 0 & 0 & -1 \\
0 & 0 & 1 & -k \\
\end{vmatrix}} \enspace = \enspace \frac {2(x_1-x_2) \big[x_1^2(y_2+1)-x_2^2(y_1+1) \big]}
{\big[x_1(y_2+1) - x_2(y_1+1)\big]^2 - k^2x_1^2x_2^2(x_1-x_2)^2}
\enspace = \enspace \frac {x_1y_2 \space + \space x_2y_1} {1 \space - \space k^2 x_1^2 x_2^2}
\end{equation*}
where the final step of rationalising the denominator involves a hefty algebraic calculation with a lot of miraculous cancellation.
The reason is the large zero set shared between the numerator and denominator.
It is possible to explicitly rationalise the denominator of the general formula
using a CAS but the formula is too large to display except in the lemniscatic case.
Because $f$ is even it also follows that
\begin{equation*}
f(z_1 + z_2 + z_3) \enspace = \enspace \frac 1 {f(z_1) f(z_2) f(z_3)}
\frac {\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & 0 & 0 & \sqrt{e} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & 0 & 0 & -\sqrt{e} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}}
\end{equation*}
and because $f'$ is odd
\begin{equation*}
f'(z_1 + z_2 + z_3) \enspace = \enspace -\frac {a^2} {f'(z_1) f'(z_2) f'(z_3)}
\frac {\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & e_1 & e_1^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & e_2 & e_2^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & e_3 & e_3^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & e_4 & e_4^2 & 0 \\
\end{vmatrix}}
{\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix}^2 \thinspace
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}^2}
\end{equation*}
NOTES
These formulae can also be obtained by directly verifying the "ratio of resultants" formula using the $\sigma$ products for the determinants and $f(u)-f(v)$.
That is, for any $z_1,z_2,z_3,z_4,z_5$ we have
\begin{equation*}
\frac {\big[f(z_1) - f(z_4)\big] \thinspace \big[f(z_2) - f(z_4)\big] \thinspace \big[f(z_3) - f(z_4)\big] \thinspace \big[f(z_1 + z_2 + z_3) - f(z_4)\big]}
{\big[f(z_1) - f(z_5)\big] \thinspace \big[f(z_2) - f(z_5)\big] \thinspace \big[f(z_3) - f(z_5)\big] \thinspace \big[f(z_1 + z_2 + z_3) - f(z_5)\big]} \enspace = \enspace
\frac {\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & f(z_4) & f(z_4)^2 & f'(z_4) \\
\end{vmatrix} \space
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & f(z_4) & f(z_4)^2 & -f'(z_4) \\
\end{vmatrix}}
{\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & f(z_5) & f(z_5)^2 & f'(z_5) \\
\end{vmatrix} \space
\begin{vmatrix}
1 & f(z_1) & f(z_1)^2 & f'(z_1) \\
1 & f(z_2) & f(z_2)^2 & f'(z_2) \\
1 & f(z_3) & f(z_3)^2 & f'(z_3) \\
1 & f(z_5) & f(z_5)^2 & -f'(z_5) \\
\end{vmatrix}}
\end{equation*}
Now let $z_4$ be a zero of $f$ and let $z_5$ be a pole.
The formula for $f'$ can be obtained by taking the product over $z_4=0,\omega_1,\omega_2,\omega_3$.
These formulae also give us a two-variable addition formula for any solution $g$ of the differential equation $g'^2 = ag^4 + bg^3 + cg^2 + dg + e$, namely
\begin{equation*}
g(u + v) \enspace = \enspace \frac 1 {g(u) g(v) g(0)}
\frac {\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
1 & 0 & 0 & \sqrt{e} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
1 & 0 & 0 & -\sqrt{e} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}}
\end{equation*}
and
\begin{equation*}
g'(u + v) \enspace = \enspace \frac {a^2} {g'(u) g'(v) g'(0)}
\frac {\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
1 & e_1 & e_1^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
1 & e_2 & e_2^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
1 & e_3 & e_3^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
1 & e_4 & e_4^2 & 0 \\
\end{vmatrix}}
{\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix}^2 \space
\begin{vmatrix}
1 & g(u) & g(u)^2 & g'(u) \\
1 & g(v) & g(v)^2 & g'(v) \\
1 & g(0) & g(0)^2 & -g'(0) \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}^2}
\end{equation*}
NOTES
We have $g(z) = f(z + z_0)$ for some $f$ and $z_0$.
Then put $z_1 = u + z_0, \space z_2 = v + z_0, \space z_3 = -z_0$ in the addition formula for $f$ and $f'$.
In the diagram parametrise the quartic curve by $f$ so that the $x,y$-coordinates of the points are given by $\left(x_i,y_i\right) = \left(f(z_i),f'(z_i)\right)$.
Then in terms of $z$-coordinates, if we vary the position of the four points on the quartic, while constraining them so that they also lie on a parabola,
then the equation $z_1 + z_2 + z_3 + z_4 = 0$ becomes a differential equation $dz_1 + dz_2 + dz_3 + dz_4 = 0$.
In terms of $x,y$ coordinates this translates into the differential equation
\begin{equation*}
\frac {dx_1}{y_1} \enspace + \enspace \frac {dx_2}{y_2} \enspace + \enspace \frac {dx_3}{y_3} \enspace + \enspace \frac {dx_4}{y_4} \enspace = \enspace 0
\end{equation*}
We can also hold the fourth point on the curve fixed and vary the other three to obtain the differential equation
\begin{equation*}
\frac {dx_1} {\sqrt{R(x_1)}} \enspace + \enspace \frac {dx_2} {\sqrt{R(x_2)}} \enspace + \enspace \frac {dx_3} {\sqrt{R(x_3)}} \enspace = \enspace 0
\end{equation*}
This implies the formula for the $x$-coordinate of the fourth point on the curve is an explicit algebraic integral of this differential equation, namely
\begin{equation*}
X(x_1,x_2,x_3) \space = \space \textit{const.}
\end{equation*}
where $X$ is the algebraic function
\begin{equation*}
X(x_1,x_2,x_3) \enspace = \enspace \frac 1 {x_1 x_2 x_3}
\frac {\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
1 & 0 & 0 & \sqrt{e} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
1 & 0 & 0 & -\sqrt{e} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}}
\end{equation*}
It also implies the formula for the $y$-coordinate of the fourth point is another algebraic integral of this differential equation, namely
\begin{equation*}
Y(x_1,x_2,x_3) \space = \space \textit{const.}
\end{equation*}
where
\begin{equation*}
Y(x_1,x_2,x_3) \enspace = \enspace \frac {a^2} {\sqrt{R(x_1)R(x_2)R(x_3)}}
\frac {\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
1 & e_1 & e_1^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
1 & e_2 & e_2^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
1 & e_3 & e_3^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
1 & e_4 & e_4^2 & 0 \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
0 & 0 & 1 & \sqrt{a} \\
\end{vmatrix}^2 \thinspace
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
0 & 0 & 1 & -\sqrt{a} \\
\end{vmatrix}^2}
\end{equation*}
The apparent contradiction of having two distinct algebraic integrals is explained by the fact that they satisfy an algebraic relation
\begin{equation*}
Y^2 \enspace = \enspace R(X)
\end{equation*}
NOTES
An obvious implicit algebraic integral of the differential equation is the equation of the parabola
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3 & x_3^2 & \sqrt{R(x_3)} \\
1 & x_4 & x_4^2 & \sqrt{R(x_4)} \\
\end{vmatrix} \enspace = \enspace 0
\end{equation*}
where $x_4$ is the constant of integration (the $x$-coordinate of the fourth point on the curve).
We could also have written the differential equation in terms of $y$ instead of $x$
\begin{equation*}
\frac {dy_1} {R'(x_1)} \enspace + \enspace \frac {dy_2} {R'(x_2)} \enspace + \enspace \frac {dy_3} {R'(x_3)} \enspace = \enspace 0
\end{equation*}
For example when $R(x) = 1 - x^4$ this is a specific instance of the second Schwarz-Christoffel elliptic function.
The differential equation becomes
\begin{equation*}
\frac {dy_1} {\left(1 - y_1^2\right)^{3/4}} \enspace + \enspace \frac {dy_2} {\left(1 - y_2^2\right)^{3/4}} \enspace + \enspace \frac {dy_3} {\left(1 - y_3^2\right)^{3/4}} \enspace = \enspace 0
\qquad \text{with implicit algebraic integral} \qquad
\begin{vmatrix}
1 & \sqrt[4]{\smash[b]{1 - y_1^2}} & \sqrtsm{1 - y_1^2} & y_1 \\
1 & \sqrt[4]{\smash[b]{1 - y_2^2}} & \sqrtsm{1 - y_2^2} & y_2 \\
1 & \sqrt[4]{\smash[b]{1 - y_3^2}} & \sqrtsm{1 - y_3^2} & y_3 \\
1 & \sqrt[4]{\smash[b]{1 - y_4^2}} & \sqrtsm{1 - y_4^2} & y_4 \\
\end{vmatrix} \enspace = \enspace 0
\end{equation*}
where $y_4$ is the constant of integration (the $y$-coordinate of the fourth point of intersection).
To motivate the next set of equations observe that by multiplying the differential equation
\begin{equation*}
\frac {dx_1} {x_1} \enspace + \enspace \frac {dx_2} {x_2} \enspace + \enspace \frac {dx_3} {x_3} \enspace = \enspace 0
\end{equation*}
by the algebraic integrating factor $x_1 x_2 x_3$ we can integrate it as follows
\begin{equation*}
x_2 x_3 \thinspace dx_1 \enspace + \enspace x_1 x_3 \thinspace dx_2 \enspace + \enspace x_1 x_2 \thinspace dx_3 \enspace = \enspace 0
\qquad\qquad \implies \qquad\qquad x_1 x_2 x_3 \enspace = \enspace \textit{const.}
\end{equation*}
That is, we can integrate using an algebraic integrating factor rather than the transcendental $\log$ function.
We now seek to do the same thing for
\begin{equation*}
\frac {dx_1} {\sqrt{R(x_1)}} \enspace + \enspace \frac {dx_2} {\sqrt{R(x_2)}} \enspace + \enspace \frac {dx_3} {\sqrt{R(x_3)}} \enspace = \enspace 0
\end{equation*}
We can express the addition formulae for $f$ and $f'$ in terms of $X$ and $Y$
\begin{equation*}
f(z_1 + z_2 + z_3) \enspace = \enspace X(f(z_1), f(z_2), f(z_3)) \qquad\qquad\qquad
f'(z_1 + z_2 + z_3) \enspace = \enspace -Y(f(z_1), f(z_2), f(z_3))
\end{equation*}
and by differentiating these formulae with respect to $z_1,z_2,z_3$ and converting back to $x$-coordinates obtain the identities
\begin{equation*}
-Y \enspace = \enspace
\sqrtsm{R(x_1)} \frac {\partial X} {\partial x_1} \enspace = \enspace
\sqrtsm{R(x_2)} \frac {\partial X} {\partial x_2} \enspace = \enspace
\sqrtsm{R(x_3)} \frac {\partial X} {\partial x_3} \qquad\qquad\qquad
-\tfrac 1 2 R'(X) \enspace = \enspace
\sqrtsm{R(x_1)} \frac {\partial Y} {\partial x_1} \enspace = \enspace
\sqrtsm{R(x_2)} \frac {\partial Y} {\partial x_2} \enspace = \enspace
\sqrtsm{R(x_3)} \frac {\partial Y} {\partial x_3}
\end{equation*}
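These identities can be spot-checked by finite differences, building $X$ from the determinant formula above (the sample points and step size below are arbitrary; a numerical sketch, not a proof):

```python
import math

a, e = 1.0, 1.0                     # R(x) = x^4 + x^3 + x^2 + x + 1

def R(x):
    return (((x + 1)*x + 1)*x + 1)*x + 1

def det4(m):                        # cofactor expansion along the first row
    def det3(p):
        return (p[0][0]*(p[1][1]*p[2][2] - p[1][2]*p[2][1])
              - p[0][1]*(p[1][0]*p[2][2] - p[1][2]*p[2][0])
              + p[0][2]*(p[1][0]*p[2][1] - p[1][1]*p[2][0]))
    return sum((-1)**j * m[0][j]
               * det3([[r[k] for k in range(4) if k != j] for r in m[1:]])
               for j in range(4))

def X(x1, x2, x3):                  # determinant formula for the fourth abscissa
    top = [[1.0, x, x*x, math.sqrt(R(x))] for x in (x1, x2, x3)]
    num = (det4(top + [[1.0, 0.0, 0.0,  math.sqrt(e)]])
         * det4(top + [[1.0, 0.0, 0.0, -math.sqrt(e)]]))
    den = (det4(top + [[0.0, 0.0, 1.0,  math.sqrt(a)]])
         * det4(top + [[0.0, 0.0, 1.0, -math.sqrt(a)]]))
    return num / (x1 * x2 * x3 * den)

p, h = (0.3, 0.5, 0.9), 1e-6
vals = []                           # sqrt(R(x_i)) * dX/dx_i via central differences
for i in range(3):
    up = [q + h*(j == i) for j, q in enumerate(p)]
    dn = [q - h*(j == i) for j, q in enumerate(p)]
    vals.append(math.sqrt(R(p[i])) * (X(*up) - X(*dn)) / (2*h))
print(vals)                         # all three approximately equal
```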
Then we may observe that the expression $-Y(x_1,x_2,x_3)$ is an algebraic integrating factor for the differential equation
\begin{equation*}
\frac {dx_1} {\sqrt{R(x_1)}} \enspace + \enspace \frac {dx_2} {\sqrt{R(x_2)}} \enspace + \enspace \frac {dx_3} {\sqrt{R(x_3)}} \enspace = \enspace 0
\end{equation*}
because after multiplication by $-Y$, utilising the above identities, it becomes
\begin{equation*}
\frac {\partial X}{\partial x_1} {dx_1} \enspace + \enspace \frac {\partial X}{\partial x_2} {dx_2} \enspace + \enspace \frac {\partial X}{\partial x_3} {dx_3} \enspace = \enspace 0
\end{equation*}
which integrates, as expected, to
\begin{equation*}
X(x_1,x_2,x_3) \space = \space \textit{const.}
\end{equation*}
Similarly the expression $-\tfrac 1 2 R'(X)$ is another algebraic integrating factor, multiplying by it and using the above identities gives
\begin{equation*}
\frac {\partial Y}{\partial x_1} {dx_1} \enspace + \enspace \frac {\partial Y}{\partial x_2} {dx_2} \enspace + \enspace \frac {\partial Y}{\partial x_3} {dx_3} \enspace = \enspace 0
\end{equation*}
which integrates, as expected, to
\begin{equation*}
Y(x_1,x_2,x_3) \space = \space \textit{const.}
\end{equation*}
More generally if $P(X,Y)$ is any non-trivial differentiable function, then $\displaystyle - \left[Y \frac {\partial P} {\partial X} + \tfrac 1 2 R'(X) \frac {\partial P} {\partial Y} \right]$
is an algebraic integrating factor, and multiplying by it gives
\begin{equation*}
\left[\frac {\partial P}{\partial X}\frac {\partial X}{\partial x_1} + \frac {\partial P}{\partial Y}\frac {\partial Y}{\partial x_1}\right] {dx_1} \enspace + \enspace
\left[\frac {\partial P}{\partial X}\frac {\partial X}{\partial x_2} + \frac {\partial P}{\partial Y}\frac {\partial Y}{\partial x_2}\right] {dx_2} \enspace + \enspace
\left[\frac {\partial P}{\partial X}\frac {\partial X}{\partial x_3} + \frac {\partial P}{\partial Y}\frac {\partial Y}{\partial x_3}\right] {dx_3} \enspace = \enspace 0
\end{equation*}
which integrates to
\begin{equation*}
P\big(X(x_1,x_2,x_3), Y(x_1,x_2,x_3)\big) \space = \space \textit{const.}
\end{equation*}
Taking a slightly different approach and writing the differential equation like this
\begin{equation*}
\frac {dx_1} {\sqrt{(x_1-e_1)(x_1-e_2)(x_1-e_3)(x_1-e_4)}} \enspace + \enspace
\frac {dx_2} {\sqrt{(x_2-e_1)(x_2-e_2)(x_2-e_3)(x_2-e_4)}} \enspace + \enspace
\frac {dx_3} {\sqrt{(x_3-e_1)(x_3-e_2)(x_3-e_3)(x_3-e_4)}} \enspace = \enspace 0
\end{equation*}
several simple determinant-style implicit algebraic integrals can be found, with $x_4$ the constant of integration.
Table 1 lists, for each of the integrals $I_1 \ldots I_7$ below, the implicit algebraic integral followed by the basis, the polar divisor and the lattice.
$I_1$
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\
1 & x_2 & x_2^2 & \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\
1 & x_3 & x_3^2 & \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\
1 & x_4 & x_4^2 & \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
$\displaystyle \textsf{basis:} \enspace 1, \enspace f, \enspace f^2, \enspace \sqrt{(f-e_1)(f-e_2)(f-e_3)(f-e_4)} \qquad\qquad \textsf{polar divisor:} \enspace 2(\rho) + 2(-\rho) \qquad\qquad \textsf{lattice:} \enspace \left[2\omega_1,2\omega_2\right]$
$I_2$
\begin{equation*}
\begin{vmatrix}
1 & x_1 & \sqrt{(x_1-e_1)(x_1-e_4)} & \sqrt{(x_1 - e_2)(x_1 - e_3)} \\
1 & x_2 & \sqrt{(x_2-e_1)(x_2-e_4)} & \sqrt{(x_2 - e_2)(x_2 - e_3)} \\
1 & x_3 & \sqrt{(x_3-e_1)(x_3-e_4)} & \sqrt{(x_3 - e_2)(x_3 - e_3)} \\
1 & x_4 & \sqrt{(x_4-e_1)(x_4-e_4)} & \sqrt{(x_4 - e_2)(x_4 - e_3)} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
$\displaystyle \textsf{basis:} \enspace 1, \enspace f, \enspace \sqrt{(f-e_1)(f-e_4)}, \enspace \sqrt{(f-e_2)(f-e_3)} \qquad\qquad \textsf{polar divisor:} \enspace (\rho) + (-\rho) + (\rho+2\omega_1) + (-\rho-2\omega_1) \qquad\qquad \textsf{lattice:} \enspace \left[4\omega_1,2\omega_2\right]$
$I_3$
\begin{equation*}
\begin{vmatrix}
\sqrt{x_1 - e_1} & \sqrt{x_1 - e_2} & \sqrt{x_1 - e_3} & \sqrt{x_1 - e_4} \\
\sqrt{x_2 - e_1} & \sqrt{x_2 - e_2} & \sqrt{x_2 - e_3} & \sqrt{x_2 - e_4} \\
\sqrt{x_3 - e_1} & \sqrt{x_3 - e_2} & \sqrt{x_3 - e_3} & \sqrt{x_3 - e_4} \\
\sqrt{x_4 - e_1} & \sqrt{x_4 - e_2} & \sqrt{x_4 - e_3} & \sqrt{x_4 - e_4} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
$\displaystyle \textsf{basis:} \enspace 1, \enspace \sqrt{\frac {f-e_1} {f-e_4}}, \enspace \sqrt{\frac {f-e_2} {f-e_4}}, \enspace \sqrt{\frac {f-e_3} {f-e_4}} \qquad\qquad \textsf{polar divisor:} \enspace (0) + (2\omega_1) + (2\omega_2) + (2\omega_3) \qquad\qquad \textsf{lattice:} \enspace \left[4\omega_1,4\omega_2\right]$
$I_4$
\begin{equation*}
\begin{vmatrix}
1 & e_1 & \sqrt{(x_1-e_1)(x_2-e_1)} & \sqrt{(x_3 - e_1)(x_4 - e_1)} \\
1 & e_2 & \sqrt{(x_1-e_2)(x_2-e_2)} & \sqrt{(x_3 - e_2)(x_4 - e_2)} \\
1 & e_3 & \sqrt{(x_1-e_3)(x_2-e_3)} & \sqrt{(x_3 - e_3)(x_4 - e_3)} \\
1 & e_4 & \sqrt{(x_1-e_4)(x_2-e_4)} & \sqrt{(x_3 - e_4)(x_4 - e_4)} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
$x \leftrightarrow e$ symmetry in $I_2$ $\qquad\qquad \textsf{lattice:} \enspace \left[4\omega_1,4\omega_2\right]$
$I_5$
\begin{equation*}
\begin{vmatrix}
1 & e_1 & e_1^2 & \sqrt{(x_1 - e_1)(x_2 - e_1)(x_3 - e_1)(x_4 - e_1)} \\
1 & e_2 & e_2^2 & \sqrt{(x_1 - e_2)(x_2 - e_2)(x_3 - e_2)(x_4 - e_2)} \\
1 & e_3 & e_3^2 & \sqrt{(x_1 - e_3)(x_2 - e_3)(x_3 - e_3)(x_4 - e_3)} \\
1 & e_4 & e_4^2 & \sqrt{(x_1 - e_4)(x_2 - e_4)(x_3 - e_4)(x_4 - e_4)} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
$x \leftrightarrow e$ symmetry in $I_1$ $\qquad\qquad \textsf{lattice:} \enspace \left[4\omega_1,4\omega_2\right]$
$I_6$
\begin{equation*}
\begin{vmatrix}
1 & x_1 & \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3) / (x_1 - e_4)} \\
1 & x_2 & \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3) / (x_2 - e_4)} \\
1 & x_3 & \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3) / (x_3 - e_4)} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
$\displaystyle \textsf{basis:} \enspace 1, \enspace f, \enspace \sqrt{\frac {(f-e_1)(f-e_2)(f-e_3)} {f-e_4}} \qquad\qquad \textsf{polar divisor:} \enspace (0) + (\rho) + (-\rho) \qquad\qquad \textsf{lattice:} \enspace \left[2\omega_1,2\omega_2\right]$
$I_7$
\begin{equation*}
\begin{vmatrix}
1 & \sqrt{\large{\frac {(x_1 - e_1)(x_1 - e_2)} {(x_1 - e_3)(x_1 - e_4)}}} & \sqrt{\large{\frac {(x_1 - e_1)(x_1 - e_3)} {(x_1 - e_2)(x_1 - e_4)}}} & \sqrt{\large{\frac {(x_1 - e_2)(x_1 -
e_3)} {(x_1 - e_1)(x_1 - e_4)}}} \\
1 & \sqrt{\large{\frac {(x_2 - e_1)(x_2 - e_2)} {(x_2 - e_3)(x_2 - e_4)}}} & \sqrt{\large{\frac {(x_2 - e_1)(x_2 - e_3)} {(x_2 - e_2)(x_2 - e_4)}}} & \sqrt{\large{\frac {(x_2 - e_2)(x_2 -
e_3)} {(x_2 - e_1)(x_2 - e_4)}}} \\
1 & \sqrt{\large{\frac {(x_3 - e_1)(x_3 - e_2)} {(x_3 - e_3)(x_3 - e_4)}}} & \sqrt{\large{\frac {(x_3 - e_1)(x_3 - e_3)} {(x_3 - e_2)(x_3 - e_4)}}} & \sqrt{\large{\frac {(x_3 - e_2)(x_3 -
e_3)} {(x_3 - e_1)(x_3 - e_4)}}} \\
1 & \sqrt{\large{\frac {(x_4 - e_1)(x_4 - e_2)} {(x_4 - e_3)(x_4 - e_4)}}} & \sqrt{\large{\frac {(x_4 - e_1)(x_4 - e_3)} {(x_4 - e_2)(x_4 - e_4)}}} & \sqrt{\large{\frac {(x_4 - e_2)(x_4 -
e_3)} {(x_4 - e_1)(x_4 - e_4)}}} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
$\displaystyle \textsf{basis:} \enspace 1, \enspace \sqrt{\frac {(f-e_1)(f-e_2)} {(f-e_3)(f-e_4)}}, \enspace \sqrt{\frac {(f-e_1)(f-e_3)} {(f-e_2)(f-e_4)}}, \enspace \sqrt{\frac {(f-e_2)(f-e_3)} {(f-e_1)(f-e_4)}} \qquad\qquad \textsf{polar divisor:} \enspace (0) + (\omega_1) + (\omega_2) + (\omega_3) \qquad\qquad \textsf{lattice:} \enspace \left[2\omega_1,2\omega_2\right]$
Here $\omega_1, \omega_2, \omega_3$ are the half-periods, defined by taking $f(\omega_1) = e_1, \space f(\omega_2) = e_2, \space f(\omega_3) = e_3, \space f(0) = e_4$.
By convention $\omega_1 + \omega_2 + \omega_3 = 0$ and the polar divisors all sum to zero.
NOTES
$I_1$, $I_2$, $I_3$, $I_6$ and $I_7$ are obtained by applying the Extended Frobenius-Stickelberger formula using
the specified basis.
Despite initial appearances the basis elements are meromorphic elliptic functions with respect to the specified lattice.
$I_4$ and $I_5$ are obtained from the $x \leftrightarrow e$ symmetry of $I_3$ (transposing its matrix swaps the roles of the $x_i$ and $e_j$) along with the fact that the first three integrals must be algebraically equivalent.
From the cross-ratio formula for $f$ we can obtain the following explicit $\sigma$ products
\begin{equation*}
f(z) - e_1 \enspace = \enspace const. \frac {\sigma(z-\omega_1)\sigma(z+\omega_1)} {\sigma(z-\rho)\sigma(z+\rho)}
\end{equation*}
\begin{equation*}
\sqrt{\frac {f(z)-e_1} {f(z)-e_4}} \enspace = \enspace const. \frac {e^{\eta_1 z} \sigma(z-\omega_1)} {\sigma(z)}
\end{equation*}
\begin{equation*}
\sqrt{\big(f(z)-e_1\big)\big(f(z)-e_4\big)} \enspace = \enspace const. \frac {e^{\eta_1 z} \sigma(z) \sigma(z-\omega_1)} {\sigma(z - \rho)\sigma(z + \rho)}
\end{equation*}
\begin{equation*}
\sqrt{\big(f(z)-e_2\big)\big(f(z)-e_3\big)} \enspace = \enspace const. \frac {e^{-\eta_1 z} \sigma(z-\omega_2) \sigma(z-\omega_3)} {\sigma(z - \rho)\sigma(z + \rho)}
\end{equation*}
\begin{equation*}
\sqrt{\frac {(f(z)-e_1)(f(z)-e_2)(f(z)-e_3)} {f(z)-e_4}} \enspace = \enspace const. \frac {\sigma(z-\omega_1)\sigma(z-\omega_2) \sigma(z-\omega_3)} {\sigma(z)\sigma(z - \rho)\sigma(z +
\rho)}
\end{equation*}
\begin{equation*}
\sqrt{\frac {(f(z)-e_1)(f(z)-e_2)(f(z)-e_4)} {f(z)-e_3}} \enspace = \enspace const. \frac { \sigma(z)\sigma(z-\omega_1)\sigma(z-\omega_2)} {\sigma(z + \omega_3)\sigma(z - \rho)\sigma(z +
\rho)}
\end{equation*}
\begin{equation*}
\sqrt{\frac {\big(f(z)-e_2\big)\big(f(z)-e_3\big)} {\big(f(z)-e_1\big)\big(f(z)-e_4\big)} } \enspace = \enspace const. \frac {\sigma(z-\omega_2)\sigma(z-\omega_3)} {\sigma(z)\sigma(z +
\omega_1)}
\end{equation*}
Hence the algebraic identity
\begin{equation*}
\begin{vmatrix}
1 & \sqrt{\large{\frac {(x_1 - e_1)(x_1 - e_2)} {(x_1 - e_3)(x_1 - e_4)}} } & \sqrt{\large{\frac {(x_1 - e_1)(x_1 - e_3)} {(x_1 - e_2)(x_1 - e_4)}} } & \sqrt{\large{\frac {(x_1 - e_2)(x_1 -
e_3)} {(x_1 - e_1)(x_1 - e_4)}} } \\
1 & \sqrt{\large{\frac {(x_2 - e_1)(x_2 - e_2)} {(x_2 - e_3)(x_2 - e_4)}} } & \sqrt{\large{\frac {(x_2 - e_1)(x_2 - e_3)} {(x_2 - e_2)(x_2 - e_4)}} } & \sqrt{\large{\frac {(x_2 - e_2)(x_2 -
e_3)} {(x_2 - e_1)(x_2 - e_4)}} } \\
1 & \sqrt{\large{\frac {(x_3 - e_1)(x_3 - e_2)} {(x_3 - e_3)(x_3 - e_4)}} } & \sqrt{\large{\frac {(x_3 - e_1)(x_3 - e_3)} {(x_3 - e_2)(x_3 - e_4)}} } & \sqrt{\large{\frac {(x_3 - e_2)(x_3 -
e_3)} {(x_3 - e_1)(x_3 - e_4)}} } \\
1 & \sqrt{\large{\frac {(x_4 - e_1)(x_4 - e_2)} {(x_4 - e_3)(x_4 - e_4)}} } & \sqrt{\large{\frac {(x_4 - e_1)(x_4 - e_3)} {(x_4 - e_2)(x_4 - e_4)}} } & \sqrt{\large{\frac {(x_4 - e_2)(x_4 -
e_3)} {(x_4 - e_1)(x_4 - e_4)}} } \\
\end{vmatrix}
\enspace = \enspace - \frac 1 {\sqrt{\prod (x_i - e_j)}}
\begin{vmatrix}
1 & e_1 & e_1^2 \\
1 & e_2 & e_2^2 \\
1 & e_3 & e_3^2 \\
\end{vmatrix}
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\
1 & x_2 & x_2^2 & \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\
1 & x_3 & x_3^2 & \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\
1 & x_4 & x_4^2 & \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\
\end{vmatrix}
\end{equation*}
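This identity is easy to probe numerically: for real points with every $x_i > e_4$ all the square roots can be taken positive. A minimal Python/numpy sketch (the values of the $e_j$ and $x_i$ are arbitrary test data):

```python
import numpy as np

e = np.array([0.0, 1.0, 2.0, 3.0])   # e_1 < e_2 < e_3 < e_4
x = np.array([4.3, 5.1, 6.2, 7.9])   # all x_i > e_4, so every square root is real

# left hand side: the I_7 style determinant of square roots of cross-ratios
M7 = np.array([[1.0,
                np.sqrt((t - e[0])*(t - e[1])/((t - e[2])*(t - e[3]))),
                np.sqrt((t - e[0])*(t - e[2])/((t - e[1])*(t - e[3]))),
                np.sqrt((t - e[1])*(t - e[2])/((t - e[0])*(t - e[3])))]
               for t in x])
lhs = np.linalg.det(M7)

# right hand side: -(Vandermonde of e_1,e_2,e_3) * (I_1 determinant) / sqrt(prod(x_i - e_j))
V3 = np.linalg.det(np.array([[1.0, t, t**2] for t in e[:3]]))
M1 = np.array([[1.0, t, t**2, np.sqrt(np.prod(t - e))] for t in x])
rhs = -V3 * np.linalg.det(M1) / np.sqrt(np.prod([t - s for t in x for s in e]))

assert abs(lhs - rhs) < 1e-8 * abs(rhs)
```

Clearing the denominators row by row reduces the check to the column operations used to derive the identity, so agreement here exercises both the Vandermonde factor and the overall minus sign.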
We can also get a six variable version equivalent to $1,f,f^2,f^3,f',ff'$ using the basis
\begin{equation*}
1, \enspace f, \enspace \sqrt{\frac {(f-e_1)(f-e_2)(f-e_3)} {f-e_4}}, \enspace \sqrt{\frac {(f-e_1)(f-e_2)(f-e_4)} {f-e_3}}, \enspace \ldots
\qquad\qquad \textsf{polar divisor:} \enspace (\rho) + (-\rho) + (0) + (\omega_1) + (\omega_2) + (\omega_3) \qquad\qquad \textsf{lattice:} \enspace \left[2\omega_1, 2\omega_2\right]
\end{equation*}
and a similar algebraic identity.
Specializations of these formulae can also be viewed as symmetric addition formulae for the Jacobi elliptic
functions.
Note these formulae are just the tip of the iceberg.
Any even, order 2 elliptic function $g$ whose period lattice intersects that of $f$ non-trivially is related to $f$ by an algebraic relation of genus zero.
(Because both functions are rational functions of the $\wp$ function on the lattice of intersection.)
If the algebraic relation is solved for $g$ then we get $g = A(f)$ where $A$ is some algebraic function.
Then another implicit integral of the differential equation is given by
\begin{equation*}
\begin{vmatrix}
1 & A(x_1) & A(x_1)^2 & A'(x_1)\sqrt{R(x_1)} \\
1 & A(x_2) & A(x_2)^2 & A'(x_2)\sqrt{R(x_2)} \\
1 & A(x_3) & A(x_3)^2 & A'(x_3)\sqrt{R(x_3)} \\
1 & A(x_4) & A(x_4)^2 & A'(x_4)\sqrt{R(x_4)} \\
\end{vmatrix}
\enspace = \enspace 0
\end{equation*}
Further the determinants $I_1 \ldots I_5$ are (algebraic) symmetric root differences in the sense of classical invariant theory.
For example the 2 + 2 generalised Laplace expansion of $I_2$ gives 12 terms like
\begin{equation*}
(x_2 - x_1)\sqrt{(x_3-e_1)(x_3 - e_4)(x_4 - e_2)(x_4 - e_3)}
\end{equation*}
which is a root difference of weight 1 in the $x_i$ and weight $\tfrac 1 2$ in the $e_j$.
Therefore the product of $I_2$ over all $2^6$ sign combinations of the last 3 rows will be a simultaneous invariant of the two quartic polynomials whose roots are $x_i$ and $e_j$.
$\approx \textsf{Cayley}^* \approx$ If
\begin{equation*}
\bigint_{x_2}^{x_1} \frac {dx} {\sqrt{(x - e_1)(x - e_2)(x - e_3)(x - e_4)}} \space = \space \bigint_{x_4}^{x_3} \frac {dx} {\sqrt{(x - e_1)(x - e_2)(x - e_3)(x - e_4)}}
\end{equation*}
then the $x_i$ and $e_i$ satisfy a polynomial relation
\begin{equation*}
\mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace = \enspace 0
\end{equation*}
Because the integral formula is invariant under Möbius transformations, $\mathfrak{R}$ must be a simultaneous invariant (in the sense of classical Invariant Theory) of the two quartic polynomials
$R(x) = (x - e_1)(x - e_2)(x - e_3)(x - e_4)$ and $S(x) = (x - x_1)(x - x_2)(x - x_3)(x - x_4)$.
It is given by
\begin{aligned}
\mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace &= \enspace
\begin{vmatrix}
1 & x_1 & x_1^2 & x_1^3 \\
1 & x_2 & x_2^2 & x_2^3 \\
1 & x_3 & x_3^2 & x_3^3 \\
1 & x_4 & x_4^2 & x_4^3 \\
\end{vmatrix}^{-4}
\prod_{\textsf{all signs}}
\begin{vmatrix}
1 & x_1 & x_1^2 & \hphantom{+} \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\
1 & x_2 & x_2^2 & \pm \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\
1 & x_3 & x_3^2 & \pm \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\
1 & x_4 & x_4^2 & \pm \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\
\end{vmatrix} \\\\
&= \enspace \left(x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4\right)^4\left(e_1 + e_2 + e_3 + e_4\right)^4 \quad + \quad \text{another 237 similar terms}
\end{aligned}
This can be simplified by letting
\begin{aligned}
\mathfrak{S}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace &= \enspace
\begin{vmatrix}
1 & x_1 & x_1^2 & x_1^3 \\
1 & x_2 & x_2^2 & x_2^3 \\
1 & x_3 & x_3^2 & x_3^3 \\
1 & x_4 & x_4^2 & x_4^3 \\
\end{vmatrix}^{-2} \frac 1 2 \space \left(
\prod_{\textsf{evens}}
\begin{vmatrix}
1 & x_1 & x_1^2 & \hphantom{+} \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\
1 & x_2 & x_2^2 & \pm \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\
1 & x_3 & x_3^2 & \pm \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\
1 & x_4 & x_4^2 & \pm \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\
\end{vmatrix} \enspace + \enspace \prod_{\textsf{odds}} \ldots \right) \\\\
&= \enspace\left(x_1x_2x_3 + x_1x_2x_4 + x_1x_3x_4 + x_2x_3x_4\right)^2\left(e_1 + e_2 + e_3 + e_4\right)^2 \quad + \quad \text{another 23 similar terms}
\end{aligned}
where evens means the subset of four factors with an even number of minus signs, and odds the subset of four factors with an odd number of minus signs.
Then we have
\begin{equation*}
\mathfrak{R}(R,S) \enspace = \enspace \mathfrak{S}(R,S)^2 \enspace - \enspace 64 \cdot \resultant(R,S)
\end{equation*}
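Both the explicit formula for $\mathfrak{R}$ and the statement it encodes can be probed numerically: first that $\mathfrak{R} = \mathfrak{S}^2 - 64\cdot\resultant$ holds at generic points, then that matching the two integrals really does make one of the eight determinant factors (and hence $\mathfrak{R}$) vanish. A sketch using scipy for the quadrature and root-finding (the roots $e_j = 0,1,2,3$ and the integration limits are arbitrary test data):

```python
import numpy as np
from itertools import product
from scipy.integrate import quad
from scipy.optimize import brentq

e = np.array([0.0, 1.0, 2.0, 3.0])

def R(t):
    return np.prod(t - e)

def dets(x):
    """The eight determinants, keyed by the sign choices on rows 2..4."""
    out = {}
    for s in product([1, -1], repeat=3):
        M = np.array([[1.0, t, t**2, si * np.sqrt(R(t))]
                      for t, si in zip(x, (1,) + s)])
        out[s] = np.linalg.det(M)
    return out

def frak_R_S(x):
    D = dets(x)
    V = np.linalg.det(np.array([[1.0, t, t**2, t**3] for t in x]))
    evens = np.prod([D[s] for s in D if s[0]*s[1]*s[2] == 1])
    odds  = np.prod([D[s] for s in D if s[0]*s[1]*s[2] == -1])
    return evens * odds / V**4, (evens + odds) / (2 * V**2)

# 1. the identity  R = S^2 - 64*resultant(R,S)  at generic points
x = np.array([4.3, 5.1, 6.2, 7.9])
Rv, Sv = frak_R_S(x)
res = np.prod([t - s for t in x for s in e])
assert abs(Rv - (Sv**2 - 64 * res)) < 1e-9 * abs(Sv**2)

# 2. Cayley: equal integrals force one determinant factor to vanish
f = lambda t: 1.0 / np.sqrt(R(t))
target = quad(f, 6.0, 5.0)[0]                                  # from x_4=6 to x_3=5
x1 = brentq(lambda u: quad(f, 4.0, u)[0] - target, 3.5, 4.0)   # x_2 = 4
vals = np.abs(list(dets(np.array([x1, 4.0, 5.0, 6.0])).values()))
assert vals.min() < 1e-4 * vals.max()
```

Keeping all four $x_i$ above $e_4$ keeps every square root real, so the eight sign choices really are the only branch ambiguity.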
The simultaneous invariant $\mathfrak{S}$, of the two quartics $R$ and $S$, can be concisely expressed in terms of scaled transvectants
\begin{equation*}
\mathfrak{S}(R,S) \enspace = \enspace 96 \thinspace \transvectant{\transvectant{R,R}_2,\transvectant{S,S}_2}_4 \enspace - \enspace 8 \thinspace \transvectant{R,S}_4^2 \enspace - \enspace 8
\thinspace \transvectant{R,R}_4 \transvectant{S,S}_4
\end{equation*}
which implies some unexpected symmetry, namely
\begin{equation*}
\mathfrak{S}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \space = \space \mathfrak{S}(e_1,e_2,e_3,e_4,x_1,x_2,x_3,x_4)\hspace{4em} \textsf{and} \hspace{4em}
\mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \space = \space \mathfrak{R}(e_1,e_2,e_3,e_4,x_1,x_2,x_3,x_4)
\end{equation*}
NOTES
These formulae can (almost) be computed manually by utilising the formulae
\begin{aligned}
\prod_{\textsf{evens}} (w \pm x \pm y \pm z) \enspace &= \enspace \left(w^4 + x^4 + y^4 + z^4\right) \space - \space 2\left(w^2x^2 + w^2y^2 + w^2z^2 + x^2y^2 + x^2z^2 + y^2z^2\right) \space +
\space 8wxyz \\\\
\prod_{\textsf{odds}} (w \pm x \pm y \pm z) \enspace &= \enspace \left(w^4 + x^4 + y^4 + z^4\right) \space - \space 2\left(w^2x^2 + w^2y^2 + w^2z^2 + x^2y^2 + x^2z^2 + y^2z^2\right) \space -
\space 8wxyz \\\\
\prod_{\textsf{all signs}} (w \pm x \pm y \pm z) \enspace &= \enspace \left[\left(w^4 + x^4 + y^4 + z^4\right) \space - \space 2\left(w^2x^2 + w^2y^2 + w^2z^2 + x^2y^2 + x^2z^2 +
y^2z^2\right)\right]^2 \space - \space 64w^2x^2y^2z^2
\end{aligned}
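These product formulae are classical and expand mechanically; a quick sympy verification of all three:

```python
from itertools import product
from sympy import symbols, expand

w, x, y, z = symbols('w x y z')

signs = list(product([1, -1], repeat=3))
evens = [s for s in signs if s[0]*s[1]*s[2] == 1]    # even number of minus signs
odds  = [s for s in signs if s[0]*s[1]*s[2] == -1]   # odd number of minus signs

def P(combos):
    r = 1
    for a, b, c in combos:
        r *= w + a*x + b*y + c*z
    return expand(r)

sym = w**4 + x**4 + y**4 + z**4 \
    - 2*(w**2*x**2 + w**2*y**2 + w**2*z**2 + x**2*y**2 + x**2*z**2 + y**2*z**2)

assert P(evens) == sym + 8*w*x*y*z
assert P(odds)  == sym - 8*w*x*y*z
assert P(evens + odds) == expand(sym**2 - 64*w**2*x**2*y**2*z**2)
```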
The $k$-th scaled transvectant of two binary forms $F(x,y)$ and $G(x,y)$ is defined by
\begin{equation*}
\transvectant{F,G}_k \enspace = \enspace \frac {(m-k)!(n-k)!} {m!n!} \sum_{i=0}^k (-1)^i {k\choose i}
\frac {\partial^k F} { {\partial x}^{k-i} {\partial y}^i } \frac {\partial^k G} { {\partial x}^i {\partial y}^{k-i} }
\end{equation*}
where $m = \degree(F)$ and $n = \degree(G)$.
This formula can be used to compute simultaneous invariants of univariate polynomials, under Möbius transforms, by applying it to their homogeneous counterparts.
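The definition translates directly into sympy. As a check, the fourth transvectant of two quartics is a joint invariant, so it is unchanged by a determinant-one substitution applied to both forms (the coefficients and the substitution below are arbitrary test data):

```python
from sympy import symbols, diff, binomial, factorial, expand

x, y = symbols('x y')

def transvectant(F, G, k, m, n):
    """Scaled k-th transvectant of binary forms F (degree m) and G (degree n)."""
    s = sum((-1)**i * binomial(k, i)
            * diff(F, x, k - i, y, i) * diff(G, x, i, y, k - i)
            for i in range(k + 1))
    return factorial(m - k) * factorial(n - k) / (factorial(m) * factorial(n)) * s

# two binary quartics with arbitrary test coefficients
F = 2*x**4 + 3*x**3*y - x**2*y**2 + 5*x*y**3 + 7*y**4
G = x**4 - 2*x**3*y + 4*x**2*y**2 + x*y**3 - 3*y**4

inv = transvectant(F, G, 4, 4, 4)        # degree 0 in x, y: a joint invariant

# a unimodular substitution (determinant 2*1 - 1*1 = 1) applied to both forms
sub = {x: 2*x + y, y: x + y}
Ft = expand(F.subs(sub, simultaneous=True))
Gt = expand(G.subs(sub, simultaneous=True))

assert expand(transvectant(Ft, Gt, 4, 4, 4) - inv) == 0
```

For a substitution of determinant $\Delta \ne 1$ the transvectant would instead scale by $\Delta^k$, which is why only the relative invariance matters for Möbius-invariant statements.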
Another observation (typical of invariants) that can be seen immediately from the determinant forms is that $\mathfrak{R}$ vanishes if either $R$ or $S$ is a perfect square.
So at a guess there are two symmetric polynomials of four roots say $\mathfrak{U}_1$ and $\mathfrak{U}_2$ which vanish when the polynomial is a perfect square and
there is some sort of formula like
\begin{equation*}
\mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace = \enspace \mathfrak{U}_1(x_1,x_2,x_3,x_4) \cdot \mathfrak{U}_2(e_1,e_2,e_3,e_4) \enspace + \enspace \mathfrak{U}_2(x_1,x_2,x_3,x_4) \cdot
\mathfrak{U}_1(e_1,e_2,e_3,e_4)
\end{equation*}
In terms of $f$, by taking the product over factors with an even number of minus signs we get this identity for any $z_1,z_2,z_3,z_4$
\begin{equation*}
\mathfrak{S}\left(f(z_1),f(z_2),f(z_3),f(z_4)\right) \enspace + \enspace 8 f'(z_1)f'(z_2)f'(z_3)f'(z_4) \enspace = \enspace
16 a^4 \sigma^4(2\rho) \space \frac {\prod\limits_{\textsf{evens}} \sigma(z_1 \pm z_2 \pm z_3 \pm z_4)} {\prod\limits_{i=1}^4 \sigma^2(z_i-\rho)\sigma^2(z_i+\rho)}
\end{equation*}
and a similar formula for an odd number of minus signs
\begin{equation*}
\mathfrak{S}\left(f(z_1),f(z_2),f(z_3),f(z_4)\right) \enspace - \enspace 8 f'(z_1)f'(z_2)f'(z_3)f'(z_4) \enspace = \enspace
16 a^4 \sigma^4(2\rho) \space \frac {\prod\limits_{\textsf{odds}} \sigma(z_1 \pm z_2 \pm z_3 \pm z_4)} {\prod\limits_{i=1}^4 \sigma^2(z_i-\rho)\sigma^2(z_i+\rho)}
\end{equation*}
The product over all signs gives the identity
\begin{equation*}
\mathfrak{R}\left(f(z_1),f(z_2),f(z_3),f(z_4)\right) \enspace = \enspace
256 a^8 \sigma^8(2\rho) \space \frac {\prod\limits_{\textsf{all signs}} \sigma(z_1 \pm z_2 \pm z_3 \pm z_4)} {\prod\limits_{i=1}^4 \sigma^4(z_i-\rho)\sigma^4(z_i+\rho)}
\end{equation*}
From this we get the symmetric four variable addition formula
\begin{equation*}
z_1 \pm z_2 \pm z_3 \pm z_4 \space = \space 0 \qquad \iff \qquad \mathfrak{R}(f(z_1),f(z_2),f(z_3),f(z_4)) \enspace = \enspace 0
\end{equation*}
and by the substitution $z_4=0$ the simpler but not quite so symmetric three variable addition formula
\begin{equation*}
z_1 \pm z_2 \pm z_3 \space = \space 0 \qquad \iff \qquad \mathfrak{S}\left(f(z_1),f(z_2),f(z_3),f(0)\right) \enspace = \enspace 0
\end{equation*}
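The three variable formula can be probed numerically by inverting the incomplete integral on the branch $x > e_4$, so that $x = f(z)$ for the solution with $f(0) = e_4$ and $f'(0) = 0$. The substitution $t = e_4 + s^2$ removes the endpoint singularity of the integrand; the roots $0,1,2,3$ and the values of $z_1, z_2$ are arbitrary test data:

```python
import numpy as np
from itertools import product
from scipy.integrate import quad
from scipy.optimize import brentq

e = np.array([0.0, 1.0, 2.0, 3.0])
R = lambda t: np.prod(t - e)

# z(x) = integral from e_4 to x of dt/sqrt(R(t)); substituting t = 3 + s^2
# gives a smooth integrand, and x = f(z) is recovered by root-finding
zofx = lambda u: quad(lambda s: 2.0 / np.sqrt((3 + s*s)*(2 + s*s)*(1 + s*s)),
                      0.0, np.sqrt(u - 3.0))[0]
finv = lambda zv: brentq(lambda u: zofx(u) - zv, 3.0 + 1e-12, 1e6)

def frak_S(xs):
    V = np.linalg.det(np.array([[1.0, t, t**2, t**3] for t in xs]))
    evens, odds = 1.0, 1.0
    for s in product([1, -1], repeat=3):
        M = np.array([[1.0, t, t**2, si * np.sqrt(R(t))]
                      for t, si in zip(xs, (1,) + s)])
        if s[0]*s[1]*s[2] == 1:
            evens *= np.linalg.det(M)
        else:
            odds *= np.linalg.det(M)
    return (evens + odds) / (2 * V**2)

z1, z2 = 0.15, 0.25
x1, x2, x3 = finv(z1), finv(z2), finv(z1 + z2)    # so z1 + z2 - z3 = 0

good = frak_S([x1, x2, x3, 3.0])                  # fourth argument is f(0) = e_4
bad  = frak_S([x1, x2, finv(z1 + z2 + 0.1), 3.0]) # sum condition broken
assert abs(good) < 1e-4 * abs(bad)
```

Note $\sqrt{R(f(0))} = 0$, so the fourth row of each determinant is the same for every sign choice, which is why only four genuinely distinct sign selections remain in the three variable case.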
NOTES
The three variable formula is half the degree of the four variable formula, but unlike the four variable formula, its coefficients are not simple symmetric functions of the $e_i$, because of the
presence of the $f(0)$ term.
However since $f(0)$ is a root, this can be remedied, at the expense of quadrupling its degree, by taking a product over all roots
\begin{equation*}
\prod_{i=1}^4 \mathfrak{S}\left(f(z_1),f(z_2),f(z_3),e_i\right) \enspace = \enspace 0
\end{equation*}
The symmetry in $\mathfrak{S}$ implies the algebraic identity
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & x_1^3 \\
1 & x_2 & x_2^2 & x_2^3 \\
1 & x_3 & x_3^2 & x_3^3 \\
1 & x_4 & x_4^2 & x_4^3 \\
\end{vmatrix}^{-2}
\prod_{\textsf{evens}}
\begin{vmatrix}
1 & x_1 & x_1^2 & \hphantom{\pm}\sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\
1 & x_2 & x_2^2 & \pm\sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\
1 & x_3 & x_3^2 & \pm\sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\
1 & x_4 & x_4^2 & \pm\sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\
\end{vmatrix} \enspace = \enspace
\begin{vmatrix}
1 & e_1 & e_1^2 & e_1^3 \\
1 & e_2 & e_2^2 & e_2^3 \\
1 & e_3 & e_3^2 & e_3^3 \\
1 & e_4 & e_4^2 & e_4^3 \\
\end{vmatrix}^{-2}
\prod_{\textsf{evens}}
\begin{vmatrix}
1 & e_1 & e_1^2 & \hphantom{\pm}\sqrt{(e_1 - x_1)(e_1 - x_2)(e_1 - x_3)(e_1 - x_4)} \\
1 & e_2 & e_2^2 & \pm\sqrt{(e_2 - x_1)(e_2 - x_2)(e_2 - x_3)(e_2 - x_4)} \\
1 & e_3 & e_3^2 & \pm\sqrt{(e_3 - x_1)(e_3 - x_2)(e_3 - x_3)(e_3 - x_4)} \\
1 & e_4 & e_4^2 & \pm\sqrt{(e_4 - x_1)(e_4 - x_2)(e_4 - x_3)(e_4 - x_4)} \\
\end{vmatrix}
\end{equation*}
and a similar formula for odds.
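The evens identity (and its odds companion) can be checked numerically; note that for $x_i > e_4$ the right hand side's square roots $\sqrt{(e_i - x_1)(e_i - x_2)(e_i - x_3)(e_i - x_4)}$ are again real, since each contains four negative factors. Because the evens/odds labelling on each side depends on how the square-root branches are matched, the sketch below compares the two sides as an unordered pair, which is what the symmetry of $\mathfrak{S}$ and $\mathfrak{R}$ guarantees:

```python
import numpy as np
from itertools import product

e = np.array([0.0, 1.0, 2.0, 3.0])
x = np.array([4.3, 5.1, 6.2, 7.9])

def even_odd(u, v):
    """Vandermonde-scaled products over even / odd sign combinations."""
    V = np.linalg.det(np.array([[1.0, t, t**2, t**3] for t in u]))
    evens, odds = 1.0, 1.0
    for s in product([1, -1], repeat=3):
        M = np.array([[1.0, t, t**2, si * np.sqrt(np.prod(t - v))]
                      for t, si in zip(u, (1,) + s)])
        d = np.linalg.det(M)
        if s[0]*s[1]*s[2] == 1:
            evens *= d
        else:
            odds *= d
    return evens / V**2, odds / V**2

a = sorted(even_odd(x, e))    # left hand sides:  x rows, e under the roots
b = sorted(even_odd(e, x))    # right hand sides: e rows, x under the roots
assert np.allclose(a, b, rtol=1e-8)
```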
It also leads to the following $\sigma$-product
\begin{equation*}
\begin{vmatrix}
1 & e_1 & e_1^2 & e_1^3 \\
1 & e_2 & e_2^2 & e_2^3 \\
1 & e_3 & e_3^2 & e_3^3 \\
1 & e_4 & e_4^2 & e_4^3 \\
\end{vmatrix}^{-1}
\begin{vmatrix}
1 & e_1 & e_1^2 & \sqrt{(f(z_1) - e_1)(f(z_2) - e_1)(f(z_3) - e_1)(f(z_4) - e_1)} \\
1 & e_2 & e_2^2 & \sqrt{(f(z_1) - e_2)(f(z_2) - e_2)(f(z_3) - e_2)(f(z_4) - e_2)} \\
1 & e_3 & e_3^2 & \sqrt{(f(z_1) - e_3)(f(z_2) - e_3)(f(z_3) - e_3)(f(z_4) - e_3)} \\
1 & e_4 & e_4^2 & \sqrt{(f(z_1) - e_4)(f(z_2) - e_4)(f(z_3) - e_4)(f(z_4) - e_4)} \\
\end{vmatrix} \enspace = \enspace - \frac {\sqrt{a}} {2} \cdot
\frac {\sigma(2\rho) \prod\limits_{\textsf{evens}} \sigma\left(\frac {z_1} 2 \pm \frac {z_2} 2 \pm \frac {z_3} 2 \pm \frac {z_4} 2\right)} {\sqrt{\prod \sigma(z_i - \rho)\sigma(z_i + \rho)}}
\end{equation*}
As can be seen from the transvectant expression, $\mathfrak{R}$ is symmetric when $x$ and $e$ are interchanged, that is
\begin{equation*}
\mathfrak{R}(x_1,x_2,x_3,x_4,e_1,e_2,e_3,e_4) \enspace = \enspace \mathfrak{R}(e_1,e_2,e_3,e_4,x_1,x_2,x_3,x_4)
\end{equation*}
Surprisingly this implies that if
\begin{equation*}
\begin{vmatrix}
1 & x_1 & x_1^2 & \sqrt{(x_1 - e_1)(x_1 - e_2)(x_1 - e_3)(x_1 - e_4)} \\
1 & x_2 & x_2^2 & \sqrt{(x_2 - e_1)(x_2 - e_2)(x_2 - e_3)(x_2 - e_4)} \\
1 & x_3 & x_3^2 & \sqrt{(x_3 - e_1)(x_3 - e_2)(x_3 - e_3)(x_3 - e_4)} \\
1 & x_4 & x_4^2 & \sqrt{(x_4 - e_1)(x_4 - e_2)(x_4 - e_3)(x_4 - e_4)} \\
\end{vmatrix} \enspace = \enspace 0
\end{equation*}
for some selection of signs on the square roots, then
\begin{equation*}
\begin{vmatrix}
1 & e_1 & e_1^2 & \sqrt{(e_1 - x_1)(e_1 - x_2)(e_1 - x_3)(e_1 - x_4)} \\
1 & e_2 & e_2^2 & \sqrt{(e_2 - x_1)(e_2 - x_2)(e_2 - x_3)(e_2 - x_4)} \\
1 & e_3 & e_3^2 & \sqrt{(e_3 - x_1)(e_3 - x_2)(e_3 - x_3)(e_3 - x_4)} \\
1 & e_4 & e_4^2 & \sqrt{(e_4 - x_1)(e_4 - x_2)(e_4 - x_3)(e_4 - x_4)} \\
\end{vmatrix} \enspace = \enspace 0
\end{equation*}
for some selection of signs on the square roots.
More surprisingly, it can be numerically verified that for the same selection of signs the ratios of the first three minors of any row are equal,
so taking the fourth row as an example we have
\begin{equation*}
\frac {
\begin{vmatrix}
x_1 & x_1^2 & \sqrt{R(x_1)} \\
x_2 & x_2^2 & \sqrt{R(x_2)} \\
x_3 & x_3^2 & \sqrt{R(x_3)} \\
\end{vmatrix}}
{\begin{vmatrix}
e_1 & e_1^2 & \sqrt{S(e_1)} \\
e_2 & e_2^2 & \sqrt{S(e_2)} \\
e_3 & e_3^2 & \sqrt{S(e_3)} \\
\end{vmatrix}} \enspace = \enspace
\frac {
\begin{vmatrix}
1 & x_1^2 & \sqrt{R(x_1)} \\
1 & x_2^2 & \sqrt{R(x_2)} \\
1 & x_3^2 & \sqrt{R(x_3)} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & e_1^2 & \sqrt{S(e_1)} \\
1 & e_2^2 & \sqrt{S(e_2)} \\
1 & e_3^2 & \sqrt{S(e_3)} \\
\end{vmatrix}} \enspace = \enspace
\frac {
\begin{vmatrix}
1 & x_1 & \sqrt{R(x_1)} \\
1 & x_2 & \sqrt{R(x_2)} \\
1 & x_3 & \sqrt{R(x_3)} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & e_1 & \sqrt{S(e_1)} \\
1 & e_2 & \sqrt{S(e_2)} \\
1 & e_3 & \sqrt{S(e_3)} \\
\end{vmatrix}}
\end{equation*}
where, as before, $S(x) = (x - x_1)(x - x_2)(x - x_3)(x - x_4)$.
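This too can be reproduced numerically: build a vanishing configuration from Cayley's integral condition, locate the vanishing sign selections on both sides, and compare the minor ratios. A sketch using scipy (the roots and limits are arbitrary test data, and the sign selections are found by searching all eight combinations):

```python
import numpy as np
from itertools import product
from scipy.integrate import quad
from scipy.optimize import brentq

e = np.array([0.0, 1.0, 2.0, 3.0])
poly = lambda t, v: np.prod(t - v)

def rows(u, v, s):
    return [[1.0, t, t**2, si * np.sqrt(poly(t, v))]
            for t, si in zip(u, (1,) + s)]

def vanishing_signs(u, v):
    """Sign choice on rows 2..4 minimising |det|, plus all determinant values."""
    vals = {s: np.linalg.det(np.array(rows(u, v, s)))
            for s in product([1, -1], repeat=3)}
    return min(vals, key=lambda s: abs(vals[s])), vals

# a configuration with a vanishing determinant, via Cayley's integral condition
f = lambda t: 1.0 / np.sqrt(poly(t, e))
target = quad(f, 6.0, 5.0)[0]
x1 = brentq(lambda u: quad(f, 4.0, u)[0] - target, 3.5, 4.0)
x = np.array([x1, 4.0, 5.0, 6.0])

sx, vx = vanishing_signs(x, e)
se, ve = vanishing_signs(e, x)       # sqrt((e_i-x_1)...(e_i-x_4)) is real here too
assert abs(vx[sx]) < 1e-4 * max(abs(d) for d in vx.values())
assert abs(ve[se]) < 1e-4 * max(abs(d) for d in ve.values())

# the ratios of the first three minors of the fourth row agree with one another
def minors(u, v, s):
    M = np.array(rows(u, v, s)[:3])  # delete the fourth row
    return [np.linalg.det(np.delete(M, j, axis=1)) for j in range(3)]

ratios = [a / b for a, b in zip(minors(x, e, sx), minors(e, x, se))]
assert np.allclose(ratios, ratios[0], rtol=1e-4)
```

Flipping all four row signs on either side negates the determinant, so the vanishing selection found with the first row fixed positive is unique generically, and a global flip changes all three minor ratios by the same sign.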
(Diagram: dual quartic curves intersecting a common parabola.)
This in turn implies that if a quartic curve has roots $e_1,e_2,e_3,e_4$ and intersects a parabola at four points with $x$-coordinates $x_1,x_2,x_3,x_4$,
then there exists a complementary quartic curve with roots $x_1,x_2,x_3,x_4$ which intersects the same parabola at four points with $x$-coordinates $e_1,e_2,e_3,e_4$.
NOTES
It is easy to verify this directly for the two point addition formula on a quadratic curve, i.e. the ellipse/hyperbola $y^2=K(x-e_1)(x-e_2)$ intersected by a horizontal straight line; we have
\begin{equation*}
\frac {
\begin{vmatrix}
1 & \sqrt{(x_1-e_1)(x_1-e_2)} \\
1 & \sqrt{(x_2-e_1)(x_2-e_2)} \\
\end{vmatrix}
\begin{vmatrix}
1 & \sqrt{(x_1-e_1)(x_1-e_2)} \\
1 & -\sqrt{(x_2-e_1)(x_2-e_2)} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & x_1 \\
1 & x_2 \\
\end{vmatrix}} \enspace = \enspace (e_1+e_2) - (x_1 + x_2) \enspace = \enspace
- \frac {
\begin{vmatrix}
1 & \sqrt{(e_1-x_1)(e_1-x_2)} \\
1 & \sqrt{(e_2-x_1)(e_2-x_2)} \\
\end{vmatrix}
\begin{vmatrix}
1 & \sqrt{(e_1-x_1)(e_1-x_2)} \\
1 & -\sqrt{(e_2-x_1)(e_2-x_2)} \\
\end{vmatrix}}
{\begin{vmatrix}
1 & e_1 \\
1 & e_2 \\
\end{vmatrix}}
\end{equation*}
When the determinant vanishes we get $x_1+x_2=e_1+e_2$, which implies the ratios of the minors of the first column are equal, i.e.
\begin{equation*}
\frac {\sqrt{(x_1-e_1)(x_1-e_2)}} {\sqrt{(e_1-x_1)(e_1-x_2)}} \enspace = \enspace
\frac {\sqrt{e_1e_2 - x_1x_2}} {\sqrt{x_1x_2 - e_1e_2}} \enspace = \enspace
\frac {\sqrt{(x_2-e_1)(x_2-e_2)}} {\sqrt{(e_2-x_1)(e_2-x_2)}}
\end{equation*}
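Reading each $2 \times 2$ determinant as (bottom entry minus top entry of the second column), both sides of the two point formula evaluate to $(e_1+e_2)-(x_1+x_2)$, which a few lines of sympy confirm:

```python
from sympy import symbols, sqrt, expand, simplify

x1, x2, e1, e2 = symbols('x1 x2 e1 e2')

s1 = sqrt((x1 - e1)*(x1 - e2))
s2 = sqrt((x2 - e1)*(x2 - e2))
t1 = sqrt((e1 - x1)*(e1 - x2))
t2 = sqrt((e2 - x1)*(e2 - x2))

# product of the two 2x2 determinants, divided by det[[1, x1], [1, x2]]
lhs = expand((s2 - s1)*(-s2 - s1)) / (x2 - x1)
# the same construction on the e side, with the leading minus sign
rhs = -expand((t2 - t1)*(-t2 - t1)) / (e2 - e1)

target = (e1 + e2) - (x1 + x2)
assert simplify(lhs - target) == 0
assert simplify(rhs - target) == 0
```

The square roots only ever appear squared here, so the verification is purely polynomial.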
Substituting $(x,\space y) \longmapsto (x/z,\space y/z^2)$ in the quartic and generalising a little gives an addition formula for integer triples on the curve
\begin{equation*}
h y^2 \enspace + \enspace (p x^2 + q x z + r z^2)\thinspace y \enspace + \enspace (a x^4 + b x^3 z + c x^2 z^2 + d x z^3 + e z^4) \enspace = \enspace 0
\end{equation*}
That is to say, if this curve has integer coefficients and $(x_1,y_1,z_1),(x_2,y_2,z_2),(x_3,y_3,z_3)$ are three integer points on this curve, then a fourth integer point is
given by
\begin{aligned}
x_4 \enspace &= \enspace \frac h {x_1 x_2 x_3}
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
1 & 0 & 0 & \alpha_1 \\
\end{vmatrix} \space
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
1 & 0 & 0 & \alpha_2 \\
\end{vmatrix} \\\\
y_4 \enspace &= \enspace \frac {a^2} {y_1 y_2 y_3}
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
1 & \beta_1 & \beta_1^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
1 & \beta_2 & \beta_2^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
1 & \beta_3 & \beta_3^2 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
1 & \beta_4 & \beta_4^2 & 0 \\
\end{vmatrix} \\\\
z_4 \enspace &= \enspace \frac h {z_1 z_2 z_3}
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
0 & 0 & 1 & \gamma_1 \\
\end{vmatrix} \thinspace
\begin{vmatrix}
z_1^2 & x_1 z_1 & x_1^2 & y_1 \\
z_2^2 & x_2 z_2 & x_2^2 & y_2 \\
z_3^2 & x_3 z_3 & x_3^2 & y_3 \\
0 & 0 & 1 & \gamma_2 \\
\end{vmatrix}
\end{aligned}
where the $\alpha_i$ are the roots of $ht^2 + rt + e = 0$, the $\beta_i$ are the roots of $at^4 + bt^3 + ct^2 + dt + e = 0$ and the $\gamma_i$ are the roots of $ht^2 + pt + a = 0$.
NOTES
Despite appearances, the above formulae are polynomials in $x_1,x_2,x_3,y_1,y_2,y_3,z_1,z_2,z_3,a,b,c,d,e,h,p,q,r$ with integral coefficients when reduced
modulo the three algebraic relations between the $x_i,y_i,z_i$.
Using the extended basis this formula, like others on this web page, can be extended to $2n$ points; for example for 6 points we get
\begin{equation*}
x_6 \enspace = \enspace \frac h {x_1 x_2 x_3 x_4 x_5}
\begin{vmatrix}
z_1^3 & x_1 z_1^2 & x_1^2 z_1 & x_1^3 & y_1 z_1 & x_1 y_1 \\
z_2^3 & x_2 z_2^2 & x_2^2 z_2 & x_2^3 & y_2 z_2 & x_2 y_2 \\
z_3^3 & x_3 z_3^2 & x_3^2 z_3 & x_3^3 & y_3 z_3 & x_3 y_3 \\
z_4^3 & x_4 z_4^2 & x_4^2 z_4 & x_4^3 & y_4 z_4 & x_4 y_4 \\
z_5^3 & x_5 z_5^2 & x_5^2 z_5 & x_5^3 & y_5 z_5 & x_5 y_5 \\
1 & 0 & 0 & 0 & \alpha_1 & 0 \\
\end{vmatrix} \space
\begin{vmatrix}
z_1^3 & x_1 z_1^2 & x_1^2 z_1 & x_1^3 & y_1 z_1 & x_1 y_1 \\
z_2^3 & x_2 z_2^2 & x_2^2 z_2 & x_2^3 & y_2 z_2 & x_2 y_2 \\
z_3^3 & x_3 z_3^2 & x_3^2 z_3 & x_3^3 & y_3 z_3 & x_3 y_3 \\
z_4^3 & x_4 z_4^2 & x_4^2 z_4 & x_4^3 & y_4 z_4 & x_4 y_4 \\
z_5^3 & x_5 z_5^2 & x_5^2 z_5 & x_5^3 & y_5 z_5 & x_5 y_5 \\
1 & 0 & 0 & 0 & \alpha_2 & 0 \\
\end{vmatrix} \qquad\qquad\qquad\textsf{etc.}
\end{equation*}
$^*$ The attributions above are at the conceptual level. As far as I am aware none of these authors wrote down these precise formulae.