Implicitly Defined Manifolds

An implicitly defined manifold $M$ is the set of points (the Solution Set) satisfying a set of equations $${\bf F}({\bf u})=0,\qquad\qquad{\bf F}:R^n\rightarrow R^{n-k}$$ with ${\bf F}$ a smooth mapping.

Continuation Methods were originally created for one dimensional manifolds and are sometimes called path following algorithms. They were formulated as a way of finding solutions of $$G({\bf u},\lambda)=0,\qquad G:R^n\times R\rightarrow R^n.$$ This can be rewritten in the form above.
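Spelled out (this is just a restatement, nothing new): setting ${\bf v}=({\bf u},\lambda)\in R^{n+1}$ and ${\bf F}({\bf v})=G({\bf u},\lambda)$ gives $${\bf F}({\bf v})=0,\qquad {\bf F}:R^{n+1}\rightarrow R^{(n+1)-1},$$ so the solution set is a $k=1$ dimensional manifold, i.e. a path.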

Examples

Implicitly defined manifolds include a large range of surfaces. The simple sphere has an implicit representation $${\bf u}^*{\bf u} = R^2.$$ Wikipedia has a list of other examples, and similar lists appear elsewhere. They are popular in computer graphics as isosurfaces because of their flexibility, and they occur naturally in scientific visualization and medical imaging.
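To make the notation concrete, here is the sphere written this way in a few lines of Python (a minimal sketch; the radius and test points are arbitrary choices):

```python
import numpy as np

# The sphere as an implicitly defined manifold: F(u) = u.u - R^2,
# a map from R^3 to R^1, so n=3, n-k=1, and the solution set M
# is a k=2 dimensional surface.

R = 2.0
F = lambda u: np.array([u @ u - R * R])

print(F(np.array([0.0, 0.0, 2.0])))   # [0.]  : on the sphere
print(F(np.array([1.0, 1.0, 1.0])))   # [-1.] : not on the sphere
```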

However, there are many more examples in higher dimensions. Many laws in physics are conservation laws, and those define implicit manifolds whose points lie in infinite dimensional spaces with parameters. In dynamics, special trajectories can be posed as two point boundary value problems: fixed points, periodic orbits, homoclinic and heteroclinic orbits, but also problems in control theory, deformation of elastic beams, robotics and orbital mechanics, all can be written as implicit manifolds. You might say that most problems in applied math are solved by finding a regular system of equations whose solution is the object you're trying to find. We'd prefer an explicit answer $x=something$, but instead we have to settle for a list of properties the object has, $f(x)=something$, and trust that the only object with those properties is the one we're looking for (i.e. the system is regular). And while I usually fall back on Newton's method, there is a large catalogue of iterative methods for solving these systems.

Local approximations

If the Jacobian ${\bf F}_{\bf u}$ at a point is full rank ($n-k$) the Implicit Function Theorem says that the solution set near that point is a smooth $k$ dimensional manifold. If we parameterize using a mapping from ${\bf s}\in R^k$ to a point ${\bf u}$ on $M$, we can develop a Taylor expansion for the parameterization ${\bf u}({\bf s})$ with ${\bf F}({\bf u}({\bf s}))=0$ and ${\bf F}({\bf u}^0)=0$. We can keep differentiating the composition of ${\bf F}$ and ${\bf u}({\bf s})$ to get linear equations for the derivatives of ${\bf u}$ $$\begin{align} &{\bf F}_{\bf u}^0{\bf u}_{\bf s}^0=0\\ &{\bf F}_{\bf u}^0{\bf u}_{\bf ss}^0+{\bf F}_{\bf uu}^0{\bf u}_{\bf s}^0{\bf u}_{\bf s}^0=0\\ &{\bf F}_{\bf u}^0 {\bf u}_{\bf sss}^0 + 3 {\bf F}_{\bf uu}^0 {\bf u}_{\bf s}^0 {\bf u}_{\bf ss}^0 + {\bf F}_{\bf uuu}^0 {\bf u}_{\bf s}^0 {\bf u}_{\bf s}^0 {\bf u}_{\bf s}^0=0\\ \end{align}$$ Here the superscript $0$ on ${\bf F}$ and its derivatives means evaluated at ${\bf u}^0$, and on ${\bf u}$ and its derivatives it means evaluated at ${\bf s}=0$.

It gets a little messy, but these can be solved in sequence to any desired order. These are not square systems, and in each there is freedom to add any element of the null space of ${\bf F}_{\bf u}^0$. This corresponds to composing ${\bf s}$ with a smooth mapping $R^k\rightarrow R^k$. One choice is arclength parameterization $$ {\bf u}_{\bf s}^*({\bf s}){\bf u}_{\bf s}({\bf s})=I.$$ Differentiating this w.r.t. ${\bf s}$ adds $k$ constraints for each derivative of ${\bf u}({\bf s})$. The first is quadratic, the rest are linear $$\begin{align} &{({\bf u}_{\bf s}^0)}^* {\bf u}_{\bf s}^0=I\\ &{({\bf u}_{\bf s}^0)}^* {\bf u}_{\bf ss}^0=0\\ &{({\bf u}_{\bf s}^0)}^* {\bf u}_{\bf sss}^0+{({\bf u}_{\bf ss}^0)}^* {\bf u}_{\bf ss}^0=0 \end{align}$$
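For the circle ${\bf u}^*{\bf u}=R^2$ in $R^2$ ($n=2$, $k=1$) these systems are small enough to solve directly. A minimal numerical sketch, with an arbitrary choice of base point:

```python
import numpy as np

# For F(u) = u.u - R^2 in R^2: the tangent u_s spans the null space
# of F_u, and u_ss solves the bordered (square) system
#   F_u u_ss = -F_uu u_s u_s,   u_s^T u_ss = 0.

R = 1.0
u0 = np.array([1.0, 0.0])                 # a point with F(u0) = 0

Fu = 2.0 * u0.reshape(1, 2)               # Jacobian of F at u0 (1 x 2)

# Orthonormal null space basis from the SVD: the trailing right
# singular vector.
_, _, Vt = np.linalg.svd(Fu)
us = Vt[1]                                # u_s, with |u_s| = 1

# For this F the second derivative is the bilinear form
# F_uu(v, w) = 2 v.w, so the right hand side is -2 (u_s . u_s).
rhs = np.array([-2.0 * (us @ us), 0.0])
A = np.vstack([Fu, us.reshape(1, 2)])     # bordered system
uss = np.linalg.solve(A, rhs)

print(us, uss)                            # u_ss = -u0 on the unit circle
```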

To third order $$ {\bf u}({\bf s})\sim {\bf u}^0 + {\bf u}_{\bf s}^0 {\bf s} + \frac{1}{2} {\bf u}_{\bf ss}^0 {\bf s}{\bf s} + \frac{1}{3!} {\bf u}_{\bf sss}^0 {\bf s}{\bf s}{\bf s}.$$ As with other polynomial approximations, higher order tends to sacrifice convergence radius for accuracy: a third order approximation is more accurate than a first order one near the origin, but possibly only quite near ${\bf s}=0$. The question of how to locally approximate ${\bf u}({\bf s})$ is regular perturbation theory, or asymptotic analysis, and it can get complicated. The group down the hall when I worked at IBM in the 1980's worked on how to evaluate things like square roots and trig functions, and they had many tricks.
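A quick numerical check of the first and third order approximations for the circle example, where the chart is exactly ${\bf u}({\bf s})=(\cos s,\sin s)$:

```python
import numpy as np

# Derivatives of u(s) = (cos s, sin s) at s = 0, from the systems above.
u0, us, uss, usss = (np.array(v) for v in
                     [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)])

for s in (0.1, 0.5, 1.0):
    exact = np.array([np.cos(s), np.sin(s)])
    first = u0 + s * us
    third = first + s**2 / 2 * uss + s**3 / 6 * usss
    print(s, np.linalg.norm(first - exact), np.linalg.norm(third - exact))
```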

The chart mapping near regular points

This local approximation is enough to define a chart centered at a point ${\bf u}^0\in M$. An orthonormal basis ${\bf u}_{\bf s}^0$ for the tangent space is an orthonormal basis for the null space of the Jacobian ${\bf F}_{\bf u}^0$. A point ${\bf s}\in R^k$, $|{\bf s}|\leq R$, is projected to a point ${\bf u}({\bf s})\in M$ with ${\bf u}(0)={\bf u}^0$ by solving the nonlinear system $$\begin{align} &{\bf F} ({\bf u}({\bf s}))=0 \\ &{({\bf u}_{\bf s}^0)}^* ( {\bf u}({\bf s}) - {\bf u}^0 ) = {\bf s} \end{align}$$ My favorite is to use Newton's method with an initial guess of ${\bf u}({\bf s})\sim {\bf u}^0 + \Phi {\bf s}$, where $\Phi$ is the matrix whose columns are the tangent vectors (an $n\times k$ matrix). Newton converges if the initial guess is close enough to the solution (the Kantorovich condition). Keeping the radius $R$ small enough that $$ \frac{1}{2} |{\bf u}_{\bf ss}| R^2 \lt \epsilon $$ is a good idea; this can be interpreted geometrically as keeping the distance from the tangent space to $M$ below $\epsilon$. Another approach is to say the radius is too large if more than three Newton iterations are required and too small if fewer than two.
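Here is a minimal sketch of that projection for the unit sphere in $R^3$ ($n=3$, $k=2$); the tolerances and the test value of ${\bf s}$ are arbitrary choices:

```python
import numpy as np

# Chart mapping for the unit sphere F(u) = u.u - 1.  Given s in R^2,
# Newton-solve   F(u) = 0,   Phi^T (u - u0) = s
# starting from the tangent plane guess u = u0 + Phi s.

def F(u):
    return np.array([u @ u - 1.0])

def Fu(u):
    return 2.0 * u.reshape(1, -1)

u0 = np.array([0.0, 0.0, 1.0])

# Orthonormal tangent basis Phi: null space of F_u(u0) from the SVD.
_, _, Vt = np.linalg.svd(Fu(u0))
Phi = Vt[1:].T                              # n x k = 3 x 2

def chart(s, tol=1e-12, maxit=20):
    u = u0 + Phi @ s                        # initial guess on the tangent plane
    for _ in range(maxit):
        res = np.concatenate([F(u), Phi.T @ (u - u0) - s])
        if np.linalg.norm(res) < tol:
            return u
        J = np.vstack([Fu(u), Phi.T])       # bordered (square) Jacobian
        u -= np.linalg.solve(J, res)
    raise RuntimeError("Newton did not converge; shrink the chart radius R")

u = chart(np.array([0.3, 0.2]))
print(u, F(u))                              # a point on the sphere
```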

The chart mapping near singular points

This is when things get interesting. If the left and right null spaces of the Jacobian are finite dimensional, with orthonormal bases $\Phi$ and $\Psi$ respectively, the Lyapunov-Schmidt reduction can be used to separate the problem into a linear part and a smaller nonlinear part. The chart mapping is no longer guaranteed to be a mapping from $R^k\rightarrow M$, but may be one to many.

The Lyapunov-Schmidt decomposition splits the system ${\bf F}({\bf u})=0$ into the equivalent system $$\begin{align} & (I-\Psi\Psi^*){\bf F}({\bf u}^0 + \Phi {\bf s} + {\bf a}({\bf s}))=0\\ & \Psi\Psi^*{\bf F}({\bf u}^0 + \Phi {\bf s} + {\bf a}({\bf s}))=0\\ & \Phi^* {\bf a}({\bf s}) = 0\\ & {\bf a}(0) = 0 \end{align}$$ A big improvement, no? Well, yes as a matter of fact. The system $$\begin{align} & (I-\Psi\Psi^*){\bf F}({\bf u}^0 + \Phi {\bf s} + {\bf a})=0\\ & \Phi^* {\bf a} = 0 \end{align}$$ for ${\bf a}$ has a regular solution ${\bf a}=0$ if ${\bf F}({\bf u}^0)=0$. The inverse of the Jacobian of this system is the pseudo-inverse of the Jacobian of ${\bf F}$. Another way to view it is via the fundamental theorem of linear algebra: the restriction of the linear operator to a map from the complement of its null space (spanned by the columns of $(I-\Phi\Phi^*)$) to its range (spanned by the columns of $(I-\Psi\Psi^*)$) is full rank.
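A small numerical illustration of the splitting, using a made-up singular example ${\bf F}(x,y,z)=(z+x^2+y^2,\;x^2-y^2)$ whose Jacobian at the origin drops rank by one (everything here is for illustration only, not the method of any particular paper):

```python
import numpy as np

# F: R^3 -> R^2, singular at u0 = 0: rank(F_u) = 1, so the right and
# left null spaces have dimensions 2 and 1.

def F(u):
    x, y, z = u
    return np.array([z + x*x + y*y, x*x - y*y])

def Fu(u):
    x, y, z = u
    return np.array([[2*x,  2*y, 1.0],
                     [2*x, -2*y, 0.0]])

u0 = np.zeros(3)
U, S, Vt = np.linalg.svd(Fu(u0))
r = int(np.sum(S > 1e-10))          # numerical rank (here 1)

Phi = Vt[r:].T                      # right null space basis (3 x 2)
W   = Vt[:r].T                      # complement of the null space (3 x 1)
Psi = U[:, r:]                      # left null space basis (2 x 1)
Q   = U[:, :r]                      # basis for the range of F_u (2 x 1)

def a_of_s(s, tol=1e-12):
    """Newton-solve the regular part Q^T F(u0 + Phi s + W g) = 0 for g."""
    g = np.zeros(W.shape[1])
    for _ in range(20):
        res = Q.T @ F(u0 + Phi @ s + W @ g)
        if np.linalg.norm(res) < tol:
            break
        J = Q.T @ Fu(u0 + Phi @ s + W @ g) @ W
        g -= np.linalg.solve(J, res)
    return W @ g

def bifurcation(s):
    """The reduced equations  Psi^T F(u0 + Phi s + a(s))  on R^2."""
    return Psi.T @ F(u0 + Phi @ s + a_of_s(s))

# The reduced function vanishes along two crossing lines through s = 0
# (here s2 = +/- s1, up to the basis chosen for Phi):
for s in [np.array([0.1, 0.1]), np.array([0.1, -0.1]), np.array([0.1, 0.0])]:
    print(s, bifurcation(s))
```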

So the Implicit Function Theorem says that in some neighborhood of ${\bf s}=0$ there is a unique mapping ${\bf a}({\bf s})$ with ${\bf a}(0) = 0$ which satisfies $(I-\Psi\Psi^*){\bf F}({\bf u}^0 + \Phi {\bf s} + {\bf a}({\bf s}))=0$. We can churn out the coefficients in a Taylor series. They satisfy $$\begin{align} & {\bf a}^0 = 0 \\ \\ & (I-\Psi\Psi^*) {\bf F}^0_{\bf u} \left( \Phi + {\bf a}^0_{\bf s} \right) = 0 \\ & \Phi^* {\bf a}^0_{\bf s} = 0 \\ \\ & (I-\Psi\Psi^*) {\bf F}^0_{\bf u} {\bf a}^0_{\bf ss} + (I-\Psi\Psi^*){\bf F}^0_{\bf uu}\left( \Phi + {\bf a}^0_{\bf s} \right)\left( \Phi + {\bf a}^0_{\bf s} \right)= 0 \\ & \Phi^* {\bf a}^0_{\bf ss} = 0 \\ \\ & (I-\Psi\Psi^*) {\bf F}^0_{\bf u} {\bf a}^0_{\bf sss} + 3 (I-\Psi\Psi^*){\bf F}^0_{\bf uu}\left( \Phi + {\bf a}^0_{\bf s} \right) {\bf a}^0_{\bf ss} + (I-\Psi\Psi^*){\bf F}^0_{\bf uuu}\left( \Phi + {\bf a}^0_{\bf s} \right)\left( \Phi + {\bf a}^0_{\bf s} \right)\left( \Phi + {\bf a}^0_{\bf s} \right)= 0 \\ & \Phi^* {\bf a}^0_{\bf sss} = 0 \\ \end{align}$$ That leaves us with the bifurcation equations $$\Psi^*{\bf F}({\bf u}^0 + \Phi {\bf s} + {\bf a}({\bf s}))=0$$ with the known function ${\bf a}({\bf s})$. The range of $\Psi^*{\bf F}$ is $dim(\Psi)$ dimensional, and the domain is $dim(\Phi)$ dimensional. The Jacobian w.r.t. ${\bf s}$ is $\Psi^*{\bf F}_{\bf u}\Phi\equiv0$. The bifurcation equations are usually solved by reducing them to an equivalent algebraic system called the algebraic bifurcation equations. When the solution set is one dimensional this can be done by introducing a parameter $\epsilon$, scaling ${\bf s}=\epsilon^{1/m} \tilde{\bf s}$ and ${\bf F}$ by $1/\epsilon$, and then looking at the Jacobian w.r.t. $\epsilon$. If $m$ is the leading order of the system (i.e. quadratic $m=2$), this brings the higher order derivatives down to order zero; you choose $\tilde {\bf s}$ so that term is zero (the algebraic system) and get conditions on the new linear term being full rank. That allows the IFT to be used to give solution curves in $\epsilon$, and shows that the solutions of the bifurcation equations are contact equivalent to the solutions of the algebraic bifurcation equations.
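For instance, in the quadratic case $m=2$ (using ${\bf a}^0_{\bf s}=0$, which follows from the first order equations above since ${\bf F}^0_{\bf u}\Phi=0$, and $\Psi^*{\bf F}^0_{\bf u}=0$): with ${\bf s}=\epsilon^{1/2}\tilde{\bf s}$, Taylor expansion gives $$\frac{1}{\epsilon}\,\Psi^*{\bf F}({\bf u}^0 + \Phi\,\epsilon^{1/2}\tilde{\bf s} + {\bf a}(\epsilon^{1/2}\tilde{\bf s})) = \frac{1}{2}\Psi^*{\bf F}^0_{\bf uu}\,(\Phi\tilde{\bf s})(\Phi\tilde{\bf s}) + O(\epsilon^{1/2}),$$ so at $\epsilon=0$ the algebraic bifurcation equations are the quadratics $\Psi^*{\bf F}^0_{\bf uu}(\Phi\tilde{\bf s})(\Phi\tilde{\bf s})=0$. For the made-up example in the sketch above these reduce (up to the basis chosen for $\Phi$) to $\tilde s_1^2=\tilde s_2^2$, the two crossing lines.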

I don't know how to approach higher dimensional solution sets this way. Maybe introduce a general parameterization with more $\epsilon$'s? Singular perturbation theory I guess.


Michael E. Henderson michael.e.henderson.10590 at gmail.com