M721: Index Theory

The Algebraic Hodge Theorem and the Fundamental Theorem of Elliptic Operators

Statement of the Fundamental Theorem of Elliptic Operators

Definition.   Let $V$ and $W$ be inner product spaces. Let $L: V\rightarrow W$ be a linear map. Then a linear map $L^*: W\rightarrow V$ is called the formal adjoint of $L$ if $\langle Lv,w\rangle=\langle v,L^* w\rangle$ for any $v\in V$ and any $w\in W$.

Lemma.   (1) If a formal adjoint exists, it is unique.  (2) If $\dim V<\infty$, then $L^*$ exists.
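In finite dimensions, with respect to the standard inner products, the formal adjoint is just the (conjugate) transpose matrix. A quick numerical illustration (our own example, not from the notes):

```python
import numpy as np

rng = np.random.default_rng(0)
L = rng.standard_normal((3, 5))   # a linear map L: R^5 -> R^3
L_star = L.T                      # candidate formal adjoint L*: R^3 -> R^5

v = rng.standard_normal(5)
w = rng.standard_normal(3)

# <Lv, w> = <v, L* w> for the standard inner products
lhs = np.dot(L @ v, w)
rhs = np.dot(v, L_star @ w)
assert np.isclose(lhs, rhs)
```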

Example.   The map $L: \Bbb{R}^\infty \rightarrow \Bbb{R}$ defined by summing the coordinates has no formal adjoint, where $\Bbb{R}^\infty$ is the colimit of the $\Bbb{R}^n$ (sequences with only finitely many nonzero terms): a formal adjoint would have to send $1$ to $(1,1,1,\dots)$, which does not lie in $\Bbb{R}^\infty$.

If $V$ and $W$ are Hilbert spaces, then we have

Theorem.    Any continuous linear map $L:V\rightarrow W$ of Hilbert spaces has a formal adjoint.

Example.   Let $D:C^\infty (E) \rightarrow C^\infty (F)$ be a differential operator. Suppose $E$, $F$ and $T X$ have smooth inner product structures. Then we have the “$L^2$-inner products” on $C^\infty (E)$ and $C^\infty (F)$, given by $\langle s_1,s_2\rangle :=\int_X \langle s_1, s_2\rangle$, and $D$ has a formal adjoint $D^*:C^\infty (F) \rightarrow C^\infty (E)$. If we write $D$ locally as $\Sigma A_\alpha (x) D^\alpha$, then (in the flat, Euclidean case) $D^* t=\Sigma (-1)^{|\alpha|} D^\alpha (\overline{A_\alpha}^{tr}t)$.
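As an illustrative special case (our own, with the flat metric on the circle): the formal adjoint of $D=d/dx$ on $2\pi$-periodic functions is $D^*=-d/dx$, the $|\alpha|=1$ instance of the formula above. A symbolic check of $\langle Df,g\rangle=\langle f,D^*g\rangle$:

```python
import sympy as sp

x = sp.symbols('x')
# periodic test functions on the circle [0, 2*pi]
f = sp.sin(x)
g = sp.cos(x) + sp.sin(2*x)

lhs = sp.integrate(sp.diff(f, x) * g, (x, 0, 2*sp.pi))    # <Df, g>
rhs = sp.integrate(f * (-sp.diff(g, x)), (x, 0, 2*sp.pi)) # <f, D*g> with D* = -d/dx
assert sp.simplify(lhs - rhs) == 0
assert sp.simplify(lhs - sp.pi) == 0   # both sides equal pi here
```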

An elliptic operator $D: C^\infty(E) \to C^\infty(E)$ is self-adjoint if $D = D^*$.

We can now state the Fundamental Theorem of Elliptic Operators. Later in this entry we will give some corollaries and much later in the course we will outline a proof using the method of elliptic regularity.

Fundamental Theorem of Elliptic Operators. For a self-adjoint elliptic operator $D : C^\infty(E) \to C^\infty(E)$, there is an orthogonal decomposition $C^\infty(E) = \ker D \oplus \mathrm{im } D$ with $\ker D$ finite-dimensional.

It is important here that the manifold $X$ is closed.

The algebraic Hodge theorem

Suppose now we have a (co)chain complex $(C^\cdot,d)$ over $\Bbb{R}$ or $\Bbb{C}$:

$\dots \rightarrow C^{p-1} \xrightarrow{d} C^p\xrightarrow{d} C^{p+1}\rightarrow \cdots$

Give each $C^p$ an inner product. Assume each $d$ has a formal adjoint $d^*$. Define the Laplacian $\Delta :=d d^*+d^* d:C^p\rightarrow C^p$. Then we have

Lemma.   $\Delta\alpha=0$ iff $d\alpha=0$ and $d^*\alpha =0$.

Proof.   Suppose $\Delta\alpha =0$. Then we have

$0=\langle (d d^*+d^* d) \alpha, \alpha\rangle=\langle dd^*\alpha, \alpha\rangle +\langle d^* d\alpha,\alpha\rangle =\langle d^*\alpha, d^*\alpha\rangle +\langle d\alpha,d\alpha\rangle,$

and hence $d\alpha=d^*\alpha=0$. Conversely, if $d\alpha=0$ and $d^*\alpha=0$, then clearly $\Delta\alpha=dd^*\alpha+d^*d\alpha=0$.

$\Box$
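A finite-dimensional sanity check of the Lemma (a toy example of our own, not from the notes): take the simplicial cochain complex of a hollow triangle, with the standard inner products, and verify that a harmonic $1$-cochain is killed by both $d$ and $d^*$.

```python
import numpy as np

# Cochain complex of the hollow triangle (a circle):
# C^0 = R^3 (vertices) --d0--> C^1 = R^3 (edges) --d1 = 0--> 0
d0 = np.array([[-1., 1., 0.],
               [ 0., -1., 1.],
               [-1., 0., 1.]])
d1 = np.zeros((1, 3))  # no 2-cells

# Laplacian on C^1: Delta = d0 d0^T + d1^T d1
Delta = d0 @ d0.T + d1.T @ d1

h = np.array([1., 1., -1.])   # a harmonic edge-cochain
assert np.allclose(Delta @ h, 0)   # Delta h = 0 ...
assert np.allclose(d1 @ h, 0)      # ... iff d h = 0
assert np.allclose(d0.T @ h, 0)    # ... and d* h = 0
```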

Theorem.   Let $(C^\cdot,d)$ be a (co)chain complex over a field $k$. Then there exist decompositions $C^p=H^p\oplus B^p \oplus \hat{B}^p$ such that the (co)chain complex can be written as

a direct sum of complexes of the two forms $0\rightarrow H^p\rightarrow 0$ and $0\rightarrow \hat{B}^p\xrightarrow{\ \cong\ } B^{p+1}\rightarrow 0$; that is, $d$ vanishes on $H^p\oplus B^p$ and maps $\hat{B}^p$ isomorphically onto $B^{p+1}$.

$\Box$

When $k=\Bbb{R}$ or $\Bbb{C}$ and $C^p$ is finite dimensional for each $p$, setting $B^p=\mathrm{Im}\ d$ and $\hat{B}^p=\mathrm{Im}\ d^*$, the theorem above becomes a corollary of the following Algebraic Hodge Theorem:

Algebraic Hodge Theorem.   Let $(C^\cdot,d)$ be a (co)chain complex over $\Bbb{R}$ or $\Bbb{C}$. Suppose that each $C^p$ has an inner product and that the formal adjoint $d^*:C^{p+1}\rightarrow C^p$ exists for each $p$. Let $\Delta :=dd^*+d^*d:C^p\rightarrow C^p$ and $\mathcal{H}^p:=\ker\Delta$. Then

(1) TFAE: (a) $\Delta\alpha=0$, (b) $d\alpha=0$ and $d^*\alpha=0$, (c) $(d+d^*)\alpha=0$.

(2) $\Delta(C^p)\subset (\mathcal{H}^p)^\perp$.

(3) If $C^p$ is finite dimensional, then $C^p=\Delta(C^p)\oplus \mathcal{H}^p$.

(4) If $C^p=\Delta(C^p)\oplus\mathcal{H}^p$ for any $p$, then there are orthogonal decompositions

$C^p=\mathcal{H}^p\oplus d(C^{p-1})\oplus d^* (C^{p+1})=\mathcal{H}^p\oplus dd^*(C^p)\oplus d^* d(C^p).$

Proof.   (1) (a)$\Rightarrow$(b) is the Lemma above; (b)$\Rightarrow$(c) is immediate; (c)$\Rightarrow$(a) follows since $\Delta=(d+d^*)^2$ on $C^p$ (using $d^2=0$ and $(d^*)^2=0$).

(2) Let $\beta \in \mathcal{H}^p$; then $\langle \Delta\alpha, \beta\rangle=\langle\alpha,\Delta\beta\rangle=0$, since $\Delta$ is self-adjoint.

(3) Show the inclusion in (2) is an equality by counting dimensions.

(4) It suffices to show the following orthogonal decomposition:

$\Delta (C^p)=d(C^{p-1})\oplus d^*(C^{p+1}) = dd^* (C^p)\oplus d^* d(C^p).$

However, one easily checks that

$\Delta(C^p)\subset dd^*(C^p)\oplus d^*d(C^p)\subset d(C^{p-1})\oplus d^*(C^{p+1})\subset (\mathcal{H}^p)^\perp=\mathrm{Im}\ \Delta.$

$\Box$

By checking the decomposition diagram above, we can obtain:

Corollary.   $\phi : \mathcal{H}^p=\ker \Delta \rightarrow H^p(C^\cdot,d)$ is an isomorphism, where $\phi(\alpha)=[\alpha].$

Corollary.   $H^*(C^\cdot,d)=0$ iff $\Delta: C^p\rightarrow C^p$ is an isomorphism for any $p$.
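The first corollary can be checked numerically in a small example (our own toy complex, not from the notes): for the simplicial cochain complex of a hollow triangle, $\dim\mathcal{H}^1$ and $\dim H^1$ both equal $1$, the first Betti number of the circle.

```python
import numpy as np

# Hollow-triangle complex: 0 -> C^0 = R^3 --d0--> C^1 = R^3 -> 0
d0 = np.array([[-1., 1., 0.],
               [ 0., -1., 1.],
               [-1., 0., 1.]])
Delta1 = d0 @ d0.T   # Laplacian on C^1 (here d1 = 0)

# dim H^1 = dim ker d1 - rank d0, and ker d1 = C^1
dim_H1 = 3 - np.linalg.matrix_rank(d0)
# dim of harmonic 1-cochains = dim ker Delta1
dim_harmonic = 3 - np.linalg.matrix_rank(Delta1)
assert dim_H1 == dim_harmonic == 1
```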

Wrapping up

Corollary.    (Algebraic Wrapping Up.) For $(C^\cdot,d)$ as above: if $H^*(C^\cdot,d)=0$, then $d+d^*: C^{\mathrm{even}} := \oplus C^{2i}\rightarrow C^{\mathrm{odd}} := \oplus C^{2i+1}$ is an isomorphism. Hence $\Delta : C^p \to C^p$ is an isomorphism for all $p$.

This corollary “wraps up” a (co)chain complex into a single map.

Next, we consider wrapping up an elliptic complex.

Definition. An elliptic complex of differential operators is a cochain complex of differential operators

$0 \to C^\infty(E^0) \xrightarrow{D} C^\infty(E^1) \xrightarrow{D} \cdots \xrightarrow{D} C^\infty(E^k) \to 0$

so that for all $0\neq\xi\in T_x^*X$ the associated symbol complex is exact.

If we define the (principal) symbol of a differential operator $D$ of order $m$ by $\sigma_D(x,\xi):=(-i)^m\Sigma_{|\alpha|=m}A_\alpha \xi^\alpha$, then we have $(\sigma_D)^*=\sigma_{D^*}$.
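For instance (a check of our own, not in the original), take $D=\frac{d}{dx}$ on $C^\infty(S^1)$, so $m=1$ and $A_{(1)}=1$:

```latex
\sigma_D(x,\xi) = -i\xi, \qquad
D^* = -\frac{d}{dx}, \qquad
\sigma_{D^*}(x,\xi) = (-i)(-1)\xi = i\xi = \overline{-i\xi} = (\sigma_D)^*(x,\xi),
```

and $\sigma_D(x,\xi)\neq 0$ for $\xi\neq 0$, so $D$ is elliptic.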

Proposition.   Let $(C^\infty E^\cdot, D)$ be an elliptic complex of differential operators. Give $E^p$ and $TX$ metrics for each $p$. Then $D+D^*:C^\infty E^{even}\rightarrow C^\infty E^{odd}$ is an elliptic operator.

Proof.   For any $0\neq\xi\in T_x^*X$, since the complex is elliptic, we have the exact sequence

$\dots\rightarrow E_x^{p-1}\rightarrow E_x^p\xrightarrow{\sigma_p(\xi)} E_x^{p+1}\rightarrow\dots$

Thus, by Algebraic Wrapping Up applied to this exact symbol complex, $\sigma_D+(\sigma_D)^*:E_x^{even}\rightarrow E_x^{odd}$ is an isomorphism.

Finally note that $\sigma_D+(\sigma_D)^*=\sigma_D+\sigma_{D^*}=\sigma_{D+D^*}$.

$\Box$

Consequences of the fundamental theorem

We deduce the following corollaries of the Fundamental Theorem, the Algebraic Hodge Theorem, and Wrapping Up.

Corollary. Let $(C^\infty(E),D)$ be an elliptic complex of differential operators.

(1) For any $p$, $\mathcal{H}^p:=\ker (\Delta : C^\infty E^p\rightarrow C^\infty E^p)$ is finite dimensional.

(2) For any $p$, $C^\infty E^p=\mathrm{Im}\ \Delta \oplus\mathcal{H}^p$.

(3) $\mathrm{Index}\ (D+D^*: C^\infty(E^{even}) \to C^\infty(E^{odd}) ) =\Sigma (-1)^p \mathrm{dim}\ \mathcal{H}^p$.
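The motivating example (standard, and stated here only as a preview since the de Rham complex is treated later in the course): for the de Rham complex $(\Omega^\cdot(X),d)$ of a closed Riemannian manifold $X$, item (3) gives

```latex
\mathrm{Index}\,\bigl(d+d^*: \Omega^{\mathrm{even}}(X) \to \Omega^{\mathrm{odd}}(X)\bigr)
 = \sum_p (-1)^p \dim \mathcal{H}^p
 = \sum_p (-1)^p \dim H^p_{dR}(X)
 = \chi(X),
```

the Euler characteristic of $X$.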

Corollary.   If $D: C^\infty E \rightarrow C^\infty F$ is an elliptic differential operator, then we have isomorphisms $\ker D \xrightarrow{\cong} \mathrm{coker}\ D^*$ and $\ker D^*\xrightarrow{\cong} \mathrm{coker}\ D$. Hence the kernel and the cokernel of an elliptic differential operator are finite dimensional.

Corollary.   If $D: C^\infty E \rightarrow C^\infty E$ is self-adjoint, then $\mathrm{Index}\ D=0$.

Written by topoclyb

October 19, 2012 at 6:29 pm

Posted in Uncategorized

Symbols

This post contains various definitions of the symbol of a differential operator. We will state a local version, then a global version and then we will finally view the symbol in its most abstract form: a section of a bundle over the total space of a cotangent bundle.

Review of Local Definitions

Let’s start by recalling that a differential operator of order ${m}$ on the manifold ${X={\mathbb R}^n}$ is defined by:

$\displaystyle D=\sum_{|\alpha|\leq m}A_\alpha(\hspace{1ex})D^\alpha:\:C^\infty({\mathbb R}^n,{\mathbb R}^N)\rightarrow C^\infty({\mathbb R}^n,{\mathbb R}^M)\ \ \ \ \ (1)$

where ${A_\alpha(\hspace{1ex}): X={\mathbb R}^n\rightarrow M_{M\times N}({\mathbb R})}$ is smooth and if ${\alpha=(\alpha_1,\ldots,\alpha_n)}$, then ${\displaystyle D^\alpha=\frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\frac{\partial^{\alpha_2}}{\partial x_2^{\alpha_2}}\cdots\frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}}}$

Let ${x\in X={\mathbb R}^n}$ and ${\xi\in{\mathbb R}^n\cong T_x^*X}$.  The symbol of ${D}$, denoted by ${\sigma_D}$, is then

$\displaystyle \sigma_D(x,\xi)=\sum_{|\alpha|= m}A_\alpha(x)\xi_1^{\alpha_1}\xi_2^{\alpha_2}\cdots\xi_n^{\alpha_n}\quad\in M_{M\times N}({\mathbb R})=\text{Hom}({\mathbb R}^N,{\mathbb R}^M)$

A differential operator ${D:\:C^\infty({\mathbb R}^n,{\mathbb R}^N)\rightarrow C^\infty({\mathbb R}^n,{\mathbb R}^N)}$ is said to be elliptic if for all ${x\in X}$ and every ${\xi\neq 0}$ we have that ${\sigma_D(x,\xi)}$ is invertible.
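A concrete instance (our choice, not from the post): the Cauchy–Riemann operator ${\partial/\partial x + i\,\partial/\partial y}$, written as a first-order ${2\times 2}$ real system, has symbol ${\sigma_D(x,\xi)=\xi_1 I + \xi_2 J}$ with determinant ${\xi_1^2+\xi_2^2}$, hence is elliptic:

```python
import numpy as np

# Cauchy-Riemann operator d/dx + i d/dy on C^infty(R^2, C),
# written as a 2x2 first-order system in real coordinates (u, v).
A10 = np.array([[1., 0.], [0., 1.]])   # coefficient of d/dx
A01 = np.array([[0., -1.], [1., 0.]])  # coefficient of d/dy

def symbol(xi1, xi2):
    return A10 * xi1 + A01 * xi2

for xi1, xi2 in [(1., 0.), (0., 1.), (2., -3.), (0.5, 0.5)]:
    s = symbol(xi1, xi2)
    # det = xi1^2 + xi2^2 > 0: invertible for every xi != 0, so elliptic
    assert np.isclose(np.linalg.det(s), xi1**2 + xi2**2)
    assert abs(np.linalg.det(s)) > 0
```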

Global definition of the Symbol

Consider a globally defined differential operator

$\displaystyle D:\: C^\infty(E)\rightarrow C^\infty(F)$

for ${x_0\in X}$ and ${\xi\in T_{x_0}^*X}$ we want to define a linear map
$\displaystyle \sigma_D(x_0,\xi):E_{x_0}\rightarrow F_{x_0}$
in a coordinate free way.

With this in mind let ${e\in E_{x_0}}$ and choose:
1.    ${f:X\rightarrow{\mathbb R}}$ such that ${f(x_0)=0}$ and ${\mbox{d} f_{x_0}=\xi}$
2.    ${s\in C^\infty(E)}$ such that ${s(x_0)=e}$

Then we define

$\displaystyle \sigma_D(x_0,\xi)(e)=\frac{1}{m!}D(f^ms)(x_0)\ \ \ \ \ (2)$

Notice that even though this is a coordinate free definition of the symbol, it is still unclear how it varies with ${x}$ and ${\xi}$. We will later see that ${\sigma_D}$ is actually smooth in ${(x,\xi)}$. Before this, we should prove that this definition is in fact independent of the choice of ${f}$ and ${s}$.
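First, a symbolic sanity check of definition (2) (our own example): for ${D=d^2/dx^2}$ on ${\mathbb R}$ (so ${m=2}$ and ${A_{(2)}=1}$), the coordinate-free recipe with ${f(x)=\xi(x-x_0)}$ and the constant section reproduces the local symbol ${\xi^2}$.

```python
import sympy as sp

x, xi, x0 = sp.symbols('x xi x0')
m = 2
f = xi * (x - x0)   # f(x0) = 0 and df_{x0} = xi dx
s = 1               # constant section e = 1

# coordinate-free definition: sigma_D(x0, xi) = (1/m!) D(f^m s)(x0), D = d^2/dx^2
sigma = sp.diff(f**m * s, x, m) / sp.factorial(m)
sigma_at_x0 = sigma.subs(x, x0)
assert sp.simplify(sigma_at_x0 - xi**2) == 0   # matches the local symbol xi^2
```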

${\sigma_D}$ does not depend on ${f}$

Claim 1 If ${g: X\rightarrow {\mathbb R}}$ is a smooth function such that ${\mbox{d} g_{x_0}=\xi}$ and ${g(x_0)=0}$, then
$\displaystyle D((f^m-g^m)s)(x_0)=0$

Proof: For any differential operator ${D}$, any section ${s}$ and any function ${\varphi:X\rightarrow{\mathbb R}}$,

$\displaystyle [D,\varphi](s)=D(\varphi s)-\varphi D(s)$

Setting ${\varphi=f^m-g^m}$ and noting that ${\varphi(x_0)=0}$ (both ${f}$ and ${g}$ vanish at ${x_0}$), we have

$\displaystyle [D,f^m-g^m](s)(x_0)=D((f^m-g^m)s)(x_0)\ \ \ \ \ (3)$

Induction on the order of ${D}$ and (3) will give us the result:

Let ${D\in \text{DO}_0(E,F)}$, then by definition

$\displaystyle D((f^m-g^m)s)=(f^m-g^m)D(s)$

and so

$\displaystyle D((f^m-g^m)s)(x_0)=0$

Now assume the claim is true for every differential operator of order less than ${m}$ and suppose ${D\in \text{DO}_m(E,F)}$. By definition, ${[D,f^m-g^m]\in \text{DO}_{m-1}(E,F)}$
Thus, by induction

$\displaystyle [D,f^m-g^m](s)(x_0)=0$

and notice that (3) gives us

$\displaystyle [D,f^m-g^m](s)(x_0)=D((f^m-g^m)s)(x_0)$

so that

$\displaystyle D((f^m-g^m)s)(x_0)=0$

$\Box$

${\sigma_D}$ does not depend on ${s}$

Claim 2 Let ${s_1,s_2\in C^\infty(E)}$ be such that ${e=s_1(x_0)=s_2(x_0)}$, then
$\displaystyle D(f^m(s_1-s_2))(x_0)=0$

Proof: By the easy direction of Peetre’s Theorem, ${D}$ is local, that is

$\displaystyle \text{supp}(Ds)\subseteq \text{supp}(s)$

so ${(Ds)(x_0)}$ depends only on ${s}$ near ${x_0}$, and we may work in a chart around ${x_0}$. Since ${(s_1-s_2)(x_0)=0}$, we may write, locally, ${s_1-s_2=\sum_j h_j t_j}$ with sections ${t_j}$ and smooth functions ${h_j}$ vanishing at ${x_0}$ (expand ${s_1-s_2}$ in a local frame; its coefficient functions vanish at ${x_0}$). Then each term of ${f^m(s_1-s_2)=\sum_j f\cdots f\, h_j\, t_j}$ is a product of ${m+1}$ functions vanishing at ${x_0}$ times a section, and a differential operator of order ${m}$ kills such a product at ${x_0}$ (the vanishing lemma proved in the entry on local forms of differential operators). Hence ${D(f^m(s_1-s_2))(x_0)=0}$ as sought. $\Box$

Let us finish the section with a short remark:

${\sigma_D(x_0,\xi)}$ is homogeneous of degree ${m}$ in ${\xi}$. That is, for every ${\rho>0}$,
$\displaystyle \sigma_D(x_0,\rho\xi)=\rho^m\sigma_D(x_0,\xi)$

Proof: Simply take ${\rho f}$ instead of ${f}$ in the definition of ${\sigma_D}$: then ${(\rho f)^m=\rho^m f^m}$ while ${\mbox{d}(\rho f)_{x_0}=\rho\xi}$. $\Box$

Local=Global

Lemma 1 For ${D=\sum_{|\alpha|\leq m}A_\alpha(\hspace{1ex})D^\alpha:\:C^\infty({\mathbb R}^n,{\mathbb R}^N)\rightarrow C^\infty({\mathbb R}^n,{\mathbb R}^N)}$ a differential operator of order ${m}$, the two definitions of symbol coincide under the identification ${{\mathbb R}^n\cong T_{x_0}^*{\mathbb R}^n}$ given by ${\xi\rightarrow\sum_{i=1}^n\xi_i\mbox{d} x_i}$

Proof: Let ${x_0,\xi\in{\mathbb R}^n}$. The function ${f(x)=\langle x-x_0,\xi\rangle}$ satisfies the conditions stated in the coordinate free definition of ${\sigma_D}$.  Let ${s}$ be the constant section ${e}$, that is, ${s(x)=e}$ for every ${x\in X}$.

$\displaystyle \sigma_D(x_0,\xi)=\frac{1}{m!}D(f^me)(x_0)$

where by (1)

$\displaystyle \frac{1}{m!}D(f^me)(x_0)=\frac{1}{m!}\sum_{|\alpha|\leq m}A_\alpha(x_0)D^\alpha(f^me)(x_0)=\frac{1}{m!}\sum_{|\alpha|\leq m}A_\alpha(x_0)\frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\frac{\partial^{\alpha_2}}{\partial x_2^{\alpha_2}}\cdots\frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}}(f^me)(x_0)$

Notice that here

$\displaystyle D^\alpha(f^me)=\left(\begin{array}{c}D^\alpha(f^me_1)\\D^\alpha(f^me_2)\\ \vdots\\ D^\alpha(f^me_N)\end{array}\right)= \left(\begin{array}{c}D^\alpha(f^m)e_1\\D^\alpha(f^m)e_2\\ \vdots\\ D^\alpha(f^m)e_N\end{array}\right)= D^\alpha(f^m)e$

since ${e}$ is a constant section.

Also notice that
1.    ${D^\beta(f^m)(x_0)=0}$ for every ${|\beta|\leq m-1}$: This is because there is always a factor of ${f}$ in the expression for ${D^\beta(f^m)}$ whenever ${|\beta|\leq m-1}$.
2.    ${D^\alpha(f^m)(x_0)=m!\,\xi_1^{\alpha_1}\xi_2^{\alpha_2}\cdots\xi_n^{\alpha_n}}$: This is a simple calculation.

Consolidating all the information we conclude

$\displaystyle \sigma_D(x_0,\xi)=\frac{1}{m!}D(f^me)(x_0)=\frac{1}{m!}\sum_{|\alpha|= m}A_\alpha(x_0)D^\alpha(f^me)(x_0)=\sum_{|\alpha|= m}A_\alpha(x_0)\xi_1^{\alpha_1}\xi_2^{\alpha_2}\cdots\xi_n^{\alpha_n}$

$\Box$
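The two “notice” facts used above are easy to confirm symbolically; a sketch of our own in two variables, with ${f(x)=\langle x-x_0,\xi\rangle}$ and ${m=2}$:

```python
import sympy as sp

x1, x2, a1, a2, xi1, xi2 = sp.symbols('x1 x2 a1 a2 xi1 xi2')
f = xi1*(x1 - a1) + xi2*(x2 - a2)   # f(x) = <x - x0, xi>, x0 = (a1, a2)
m = 2
fm = f**m
at_x0 = {x1: a1, x2: a2}

# fact 1: D^beta(f^m)(x0) = 0 for |beta| <= m-1
assert fm.subs(at_x0) == 0
assert sp.diff(fm, x1).subs(at_x0) == 0
assert sp.diff(fm, x2).subs(at_x0) == 0

# fact 2: D^alpha(f^m)(x0) = m! * xi^alpha for |alpha| = m
assert sp.simplify(sp.diff(fm, x1, 2).subs(at_x0) - 2*xi1**2) == 0
assert sp.simplify(sp.diff(fm, x1, x2).subs(at_x0) - 2*xi1*xi2) == 0
```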

Symbol as a section

By consolidating definitions (2) and (1) of ${\sigma_D}$ we get ${\sigma_D\in C^{\infty}(\pi^*(\text{Hom}_{\mathbb R}(E,F)))}$, where ${\pi:T^*X\rightarrow X}$ is the bundle projection and ${\pi^*(\text{Hom}_{\mathbb R}(E,F))}$ is the pullback bundle over the total space ${T^*X}$.

To be explicit, if ${\omega\in T^*X}$, then ${\omega=(x_0,\xi)}$ with ${\xi\in T_{x_0}^*X}$. So

$\displaystyle \sigma_D(\omega)=\sigma_D(x_0,\xi): E_{x_0}\rightarrow F_{x_0}$

that is, ${\sigma_D(x_0,\xi)\in \text{Hom}_{\mathbb R}( E_{x_0}, F_{x_0})}$ and we are using the identification ${\pi^*(\text{Hom}_{\mathbb R}(E,F))_{\omega}\cong \text{Hom}_{\mathbb R}( E_{x_0}, F_{x_0})}$.

Smoothness follows from the smoothness of the local definition and the fact that both definitions coincide locally.

Finally, let

$\displaystyle \text{sym}_m(E,F)=\{\sigma\in C^{\infty}(\pi^*(\text{Hom}_{\mathbb R}(E,F))) \mid \text{ for all } \rho>0,\: \omega\in T^*X\:,\: \sigma(\rho\omega)=\rho^m\sigma(\omega)\}$

then we have

Proposition 2 There is an exact sequence

$\displaystyle 0\rightarrow \text{DO}_{m-1}(E,F)\rightarrow \text{DO}_{m}(E,F)\xrightarrow{\ \sigma\ }\text{sym}_m(E,F)$

Notice that this proposition (re)captures the fact that the symbol of an operator only ‘sees’ the top degree of the operator.

Fundamental Theorem of Elliptic Operators

Now that we have a global definition of the symbol of a differential operator, we can state what it means for a differential operator to be elliptic. Namely, ${D}$ is elliptic if for every ${\omega\in T^*X\setminus 0}$ (i.e. ${\omega}$ is in the complement of the zero section of the cotangent bundle), the map ${\sigma_D(\omega)}$ is invertible.  The most important result involving elliptic operators is the following theorem:

Theorem 3 Fundamental Theorem of Elliptic Operators
If ${D:\: C^\infty(E)\rightarrow C^\infty(F)}$ is an elliptic differential operator over a compact manifold ${X}$, then both ${\text{ker}D}$ and ${\text{coker}D}$ are finite dimensional vector spaces.

Written by jpinzon84

September 11, 2012 at 10:42 pm



Let ${X}$ be a topological space. Let ${C(X)}$ be the ring of continuous functions from ${X}$ to ${\mathbb{R}}$. ${C(X)}$ can also be thought of as the set of continuous sections of the trivial bundle ${ X \times \mathbb{R} \longrightarrow X}$. In any case, we get a contravariant functor

$\displaystyle C: Top \longrightarrow Ring$

where ${Top}$ is the category of topological spaces and ${Ring}$ is the category of rings with ${1}$ (every ring homomorphism sends ${1}$ to ${1}$). The two beautiful theorems to be discussed here are the following.

Theorem 1 (Hewitt). For ${X}$, ${Y}$ compact Hausdorff there is a bijection

$\displaystyle Top(X, Y) \longrightarrow Ring(C(Y), C(X))$

Theorem 2 (Swan). If ${X}$ is compact Hausdorff then taking sections gives a bijection from

$\displaystyle \lbrace \text{isomorphism classes of vector bundles over } X \rbrace \longrightarrow \lbrace \text{isomorphism classes of finitely generated projective } C(X)\text{-modules} \rbrace$

These two beautiful theorems have some remarkable consequences. If ${X}$, ${Y}$ are compact and Hausdorff then,

$\displaystyle X \cong Y \Longleftrightarrow C(X) \cong C(Y)$

Theorem 2 leads to the following result in ${K}$-theory:

$\displaystyle K^{0}(X) \cong K_{0}(C(X))$

Theorem 1 is a consequence of the following Lemma. Let ${X}$ be a compact Hausdorff topological space. Then:

1. For ${x_{0} \in X}$,$\displaystyle M(x_{0}) = \lbrace f \in C(X): f(x_{0}) = 0 \rbrace$

is a maximal ideal,

2. If ${M \lhd C(X)}$ is a maximal ideal, then ${\exists !}$ ${x_{0} \in X}$ such that ${M = M(x_{0})}$,
3. ${MaxSpec(C(X)) \cong X}$, where ${MaxSpec(R)}$ is the set of all maximal ideals of a ring ${R}$ equipped with the Zariski topology. The homeomorphism takes ${M(x_{0})}$ to ${x_{0}}$.

Proof:

1. Clearly ${M(x_{0})}$ is maximal as ${C(X) / M(x_{0}) \simeq \mathbb{R}}$, which is a field.
2. Notice that if ${f \notin M(x_{0})}$, then ${f(x_{0}) \neq 0}$. Suppose ${I}$ is an ideal such that ${I \nsubseteq M(x_{0})}$ for every ${x_{0} \in X}$. Then for every ${x \in X}$ there exists ${f_{x} \in I}$ with ${f_{x}(x) \neq 0}$, and by continuity an open set ${U_{x} \ni x}$ on which ${f_{x}}$ does not vanish. Since ${X}$ is compact, finitely many of these, say ${\lbrace U_{x_{0}}, \ldots, U_{x_{n}} \rbrace}$, cover ${X}$. Define$\displaystyle f = f_{x_{0}}^2 + \ldots + f_{x_{n}}^2 \in I.$

Observe that ${f(x) \neq 0}$ ${\forall x \in X}$, so ${g = \frac{1}{f} \in C(X)}$ and ${f g = 1}$. Thus ${I = C(X)}$, and hence every maximal ideal of ${C(X)}$ is of the form ${M(x_{0})}$ for some ${x_{0} \in X}$.

3. For any ideal ${I}$ of a ring ${R}$ define$\displaystyle V(I) = \lbrace M \text{ maximal in } R : I \subseteq M \rbrace.$

The sets ${V(I)}$ are the closed sets of ${MaxSpec(R)}$ in the Zariski topology. The map$\displaystyle F : X \longrightarrow MaxSpec(C(X))$

which sends$\displaystyle x \longmapsto M(x)$

is already a bijection, by part 2. All we need to show is$\displaystyle C \text{ closed} \Leftrightarrow F(C) \text{ closed}$

${(\Rightarrow):}$
If ${C}$ is closed, define ${I_{C} = \lbrace f \in C(X): f|_{C} = 0 \rbrace}$; then ${F(C) = V(I_{C}) = \lbrace M(x) : x \in C \rbrace}$, which is closed.
${(\Leftarrow):}$
If ${C}$ is a basic closed set in ${MaxSpec(C(X))}$, i.e., ${C= V(I)}$ for some ideal ${I \lhd C(X)}$, then define$\displaystyle D = \lbrace x \in X: f(x) = 0\ \forall f \in I \rbrace.$

Then ${D}$ is clearly a closed set and clearly ${D = F^{-1}(C)}$.

$\Box$ Proof: (of Theorem 1) In fact ${C}$ itself gives the map between the ${Hom}$ sets of the respective categories.
One-one:
Let ${f: X \longrightarrow Y}$. Then

$\displaystyle C(f)(g) =f^{*}g = g \circ f,$

where ${g \in C(Y)}$. If ${C(f) = C(f')}$, then ${g \circ f = g \circ f'}$ for all ${g \in C(Y)}$, and hence ${f(x) = f'(x)}$ for all ${x \in X}$, by using bump functions to separate points of ${Y}$.
Onto:
Given a ring map ${F:C(Y) \longrightarrow C(X)}$, we induce a map

$\displaystyle \overline{F} : MaxSpec(C(X))\longrightarrow MaxSpec(C(Y))$

sending a maximal ideal ${M}$ to ${F^{-1}(M)}$. By the Lemma above we get a map

$\displaystyle \overline{F}: X \longrightarrow Y$

It is clear that ${\overline{F}^{*} = F}$. $\Box$ Proof: (sketch of proof of Theorem 2)
Let ${G}$ be the map
G: ${\lbrace}$ isomorphism classes of vector bundles over ${X}$ ${\rbrace \longrightarrow \lbrace}$ finitely generated projective C(X)-modules ${\rbrace}$
where, given a vector bundle ${\xi}$,
${G(\xi)}$ = ${\lbrace}$ continuous sections of ${\xi \rbrace}$.
Since ${X}$ is compact, any vector bundle ${\xi}$ is a subbundle of a trivial bundle of finite rank, i.e., of ${X \times \mathbb{R}^{n}}$. Hence ${G(\xi)}$ is a submodule of ${\bigoplus_{1}^{n} C(X)}$ via the isomorphism
${ \tau: \bigoplus_{1}^{n} C(X) \cong \lbrace }$ continuous sections of the trivial bundle ${X \times \mathbb{R}^{n} \rbrace }$.
Thus ${G(\xi)}$ is a finitely generated module. Moreover, every bundle ${\xi}$ of finite rank over a compact space has a complementary bundle, say ${\xi^{\perp}}$, and ${G(\xi) \oplus G(\xi^{\perp})=\bigoplus_{1}^{n} C(X)}$; hence ${G(\xi)}$ is projective. Conversely, given a finitely generated projective module over ${C(X)}$, say ${M}$, find ${n}$ and a ${C(X)}$-module ${N}$ such that

$\displaystyle M \oplus N = \bigoplus_{1}^{n} C(X)$

Then define ${G^{-1}(M) = \tau^{-1}(M)}$. That this is (the module of sections of) a vector bundle over ${X}$ is non-trivial; it is the content of Swan’s theorem. $\Box$
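A standard concrete illustration of this correspondence (our addition, not from the post): the continuous sections of the Möbius line bundle over ${S^1}$ form a finitely generated projective (in fact non-free) ${C(S^1)}$-module, realized as the image of an idempotent ${p \in M_2(C(S^1))}$. A numerical check that ${p}$ is a rank-one idempotent:

```python
import numpy as np

# Idempotent p(t) in M_2(C(S^1)) whose image is the module of sections
# of the Mobius line bundle; note p(t + 2*pi) = p(t) even though the
# half-angle frame itself is not periodic -- that's the Mobius twist.
def p(t):
    c, s = np.cos(t/2), np.sin(t/2)
    return np.array([[c*c, s*c],
                     [s*c, s*s]])

for t in np.linspace(0.0, 2*np.pi, 7):
    P = p(t)
    assert np.allclose(P @ P, P)          # idempotent: im(p) is projective
    assert np.isclose(np.trace(P), 1.0)   # pointwise rank one: a line bundle
assert np.allclose(p(0.3 + 2*np.pi), p(0.3))  # well-defined on the circle
```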

Written by prasit0605

September 5, 2012 at 1:33 am


A local view of the global definition of a differential operator

Recall the global definition of a differential operator (of order m) $D: \mathcal C^\infty(E) \to \mathcal C^\infty(F)$, with $E, F$ smooth vector bundles over a manifold $X$: set $DO_{-1}(E,F) = \{ \mathbf 0\}$, and inductively, $D \in DO_m(E,F) \iff [D, f] \in DO_{m-1}(E,F)$ for all $f \in \mathcal C^\infty(X, \mathbb R)$.

Also recall that differential operators are local, so that in particular, they induce (linear) operators between the spaces of sections over chart neighbourhoods in X.  This allows us to reduce to the Euclidean case, and for the remainder of this discussion, we will assume $D \in \mathrm{Hom}_\mathbb R\left(\mathcal C^\infty(\mathbb R^n, \mathbb R^N),\ \mathcal C^\infty(\mathbb R^n, \mathbb R^M)\right)$ (noting that the space of (smooth) sections of $\mathbb R^{n + N} \to \mathbb R^n$ is naturally isomorphic to the space of (smooth) maps from $\mathbb R^n \ \text{to} \ \mathbb R^N$, etc.)

Proposition: If D has the form of a “local differential operator” of order m, then $D \in DO_m\left(\mathcal C^\infty(\mathbb R^n\times \mathbb R^N), \mathcal C^\infty(\mathbb R^n\times \mathbb R^M)\right) \quad \text{(which we henceforth refer to as simply}\ DO_m).$

Proof:

Set $D = \sum_{|\alpha| \le m} A_\alpha D^\alpha, \ \text{with} \ \alpha \ \text{a multi-index,}\ A_\alpha \in M_{M \times N}(\mathcal C^\infty(\mathbb R^n, \mathbb R)),\ \text{and} \ D^\alpha$ as in the definition of a local differential operator.  If $f \in \mathcal C^\infty(\mathbb R^n, \mathbb R) \ \text{and}\ s \in \mathcal C^\infty(\mathbb R^n, \mathbb R^N),$ then Leibniz’s differentiation rule yields $D^\alpha(fs) = \sum_{\mu + \nu = \alpha}\binom{\alpha}{\nu} D^\mu fD^\nu s, \ \text{where}\ \mu,\nu$ are multi-indices, and sums of multi-indices are taken term-wise, and $\binom{\alpha}{\nu} = \frac{\alpha!}{\nu! (\alpha - \nu)!} = \frac{\alpha_1!\cdots \alpha_n!}{\nu_1!\cdots \nu_n!(\alpha_1 - \nu_1)!\cdots (\alpha_n - \nu_n)!}$.

We proceed by induction on m, the order of D (as a local D.O.).  By the linearity of $[\cdot , f]$, it suffices to prove $D^\alpha \in DO_{|\alpha|}$.  Now, $D^0 s = s, \ \forall s \in \mathcal C^\infty(\mathbb R^n, \mathbb R^N), \ \text{and so}\ [D^0,f](s) = D^0(fs) - f D^0s = 0$, which gives $[D^0,f] = \mathbf 0 \in DO_{-1}, \ \forall f\in \mathcal C^\infty(\mathbb R^n, \mathbb R) \implies D^0 \in DO_0$, establishing the base case.  Now assume that all local differential operators of order at most $m-1$ are (global) differential operators of the same order.  The Leibniz rule above gives:
$[D^\alpha, f](s) = D^\alpha(fs) - fD^\alpha s = \left(\sum_{\mu + \nu = \alpha} \binom{\alpha}{\nu} D^\mu f D^\nu s\right) - fD^\alpha s = \sum_{\mu + \nu = \alpha, \; |\mu| > 0} \binom{\alpha}{\nu} D^\mu f D^\nu s.$

Since $|\mu| > 0, \ |\nu| = |\alpha| - |\mu| < |\alpha| = m$—i.e., each $D^\nu$ in the sum has order at most $m-1$, and so by the inductive hypothesis (and noticing that $DO_{k-1} \subset DO_k, \forall k \in \mathbb N$), we find that $[D^\alpha, f] \in DO_{m-1} \implies D^\alpha \in DO_m.$   QED.

Now we begin to prove the converse to the above proposition.  For the purposes of the next lemma, we may relax the assumption that D is an operator between sections of vector bundles over Euclidean space.

Lemma 1: Let $D \in DO_m(E,F)$.  If $f_1, ..., f_{m+1} \in \mathcal C^\infty(X,\mathbb R)$ all vanish at a point $x_0 \in X$, then for every $s \in \mathcal C^\infty(E)$, $(D((f_1 \cdots f_{m+1})s))(x_0) = 0$.

Proof:

Again, by induction on m.  If $D \in DO_{-1}(E,F), \ \text{then}\ D = \mathbf 0, \ \text{whence}\ (Ds)(x_0) = 0, \ \forall x_0 \in X$, and thus the base case is established.  Now suppose the result for $DO_{m-1}$, and let $D \in DO_m$.  Now $(D(f_1 \cdots f_{m+1})s)(x_0) = [D, f_{m+1}]((f_1 \cdots f_m)s)(x_0) + f_{m + 1}(x_0)(D(f_1\cdots f_m)s)(x_0)$.  By assumption, $[D, f_{m+1}] \in DO_{m-1},$ and so $[D, f_{m+1}]((f_1 \cdots f_m)s)(x_0) = 0$.  On the other hand, $f_{m+1}(x_0) = 0$ by hypothesis, and so $(D(f_1 \cdots f_{m+1})s)(x_0) =0.$   QED.
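A quick symbolic instance of Lemma 1 (our own example): the order-2 operator $D = d^2/dx^2$ kills a product of $3 = m+1$ functions vanishing at $x_0 = 0$.

```python
import sympy as sp

x = sp.symbols('x')
# three smooth functions all vanishing at x0 = 0
f1, f2, f3 = x, sp.sin(x), sp.exp(x) - 1
s = sp.cos(x)   # an arbitrary smooth section

D = lambda u: sp.diff(u, x, 2)    # D = d^2/dx^2, order m = 2
val = D(f1*f2*f3*s).subs(x, 0)    # (D((f1 f2 f3) s))(x0), m+1 = 3 vanishing factors
assert sp.simplify(val) == 0
```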

Lemma 2: Let $D \in DO_m\left(\mathcal C^\infty(\mathbb R^n\times \mathbb R^N), \mathcal C^\infty(\mathbb R^n\times \mathbb R^M)\right).$  Then for every $k \in \mathbb N$ there exists a unique family $\{A_\alpha\}_{|\alpha| \le k}$ of smooth matrix-valued functions with the property that $\forall |\beta| \le k, \ D(x^\beta) = \sum_{|\alpha| \le k} A_\alpha(x)D^\alpha(x^\beta).$

The proof is left as an exercise.  We give an outline of the inductive construction.  To find $A_0$, allow $D$ to act on the constant co-ordinate sections $e_i, \ i = 1, ..., N$, and expand each $De_i$ in terms of the basis $f_1, ..., f_M\ \text{for}\ \mathbb R^M$.  Take $A_0$ to be the matrix thus determined—since D maps smooth vector fields to smooth vector fields, the co-ordinates of $A_0$ will be given by smooth functions of x.  This gives a $0^\mathrm{th}$ order approximation of D.

Now we take these sections and begin to multiply them by functions on X to obtain new sections—e.g., take $s_{j,i}: x\mapsto x_je_i\ \text{and let the}\ i^\mathrm{th}\ \text{column of}\ A_\beta\ \text{be}\ Ds_{j,i} - De_i = D((x_j-1)e_i),$ where $\beta$ is the multi-index with only a single 1 in position j.  In general, define the $i^\mathrm{th}\ \text{column of }\ A_\beta$ by $D\left(\left(\frac{x^\beta}{\beta!} - \sum_{\mu + \nu = \beta} \frac{x^\nu}{\nu!}\right)e_i\right)$, where the sum is over decompositions $\mu + \nu = \beta$ with $|\mu| > 0$.  It is clear (or an exercise) that these matrices produce the desired result, and moreover, Lemma 1 shows that $A_\beta = 0\ \text{for}\ |\beta| > m$ (i.e., that the process terminates).  Uniqueness is, of course, automatic.

Theorem: Differential operators are precisely local operators whose local form is that of a local differential operator.

Proof:

Because differential operators are local, we can immediately reduce to the Euclidean case.  Showing “$\supseteq$” was the content of the proposition, and so it suffices to show “$\subseteq$“.  Invoking the construction from lemma 2 (with k = m), let $\tilde D = \sum_{|\alpha| \le m}A_\alpha D^\alpha$.

Consider the section $s: x \mapsto \sum_{i=1}^N s_i(x)e_i, \ s_1, ..., s_N \in \mathcal C^\infty(\mathbb R^n, \mathbb R)\ \text{of}\ \mathbb R^{N + n} \to \mathbb R^n$.  Recall Taylor’s Theorem: given $f \in \mathcal C^\infty(\mathbb R^n, \mathbb R), \ \forall x_0 \in \mathbb R^n, \ \text{and}\ \forall m \in \mathbb N, \exists p$, a polynomial of degree m, a neighbourhood $U\ \text{of}\ x_0$, and for each multi-index $|\alpha| = m + 1, \exists h_\alpha \in \mathcal C^\infty(\mathbb R^n, \mathbb R)$ such that $f(x) = p(x) + \sum_{|\alpha| = m + 1}h_\alpha(x) (x - x_0)^\alpha$ for all $x \in U$.  Choose $x_0 \in X$, take m to be the order of D, and apply Taylor’s Theorem to each $s_i$ individually to obtain $s_i(x) = p_i(x) + \sum_{|\alpha| = m + 1} h_{i,\alpha}(x)(x - x_0)^\alpha = p_i(x) + h_i(x)$ on a neighbourhood $U_i\ \text{of}\ x_0$.  Take $U = \bigcap_{i = 1}^N U_i$.

Now by locality, $(Ds)\bigl|_U \ \text{depends only on}\ s\bigl|_U,$ and so $(D - \!\tilde D)(s)(x_0) \!=\! (D - \!\tilde D)\!\!\left(\sum_{i = 1}^N(p_i + h_i)e_i\right)\!\!(x_0) \!=\!\! \sum_{i = 1}^N ((D - \!\tilde D)(p_i + h_i)e_i)(x_0).$  Now, $\tilde D(p_ie_i) = D(p_ie_i)$ by construction, and so $(D - \tilde D)(s)(x_0) = \sum_{i = 1}^N (D - \tilde D)(h_ie_i)(x_0)$.  On the other hand, $h_i(x) = \sum_{|\alpha| = m + 1} h_{i,\alpha}(x)(x - x_0)^\alpha$, and so linearity reduces the question to showing $(D - \tilde D)((x - x_0)^\alpha h_{i,\alpha}(x)e_i)(x_0) = 0$.

Finally, $D,\tilde D$ are differential operators of order m (use the proposition to get this for $\tilde D$), and since $(x - x_0)^\alpha\ \text{decomposes into}\ m+1$ smooth functions which all vanish at $x_0\ \text{for}\ |\alpha| = m + 1$, we find by lemma 1 that $(D((x - x_0)^\alpha h_{i,\alpha}(x)e_i))(x_0) = (\tilde D((x - x_0)^\alpha h_{i,\alpha}(x)e_i))(x_0) = 0$, whence $Ds(x_0) = \tilde Ds(x_0) \implies D = \tilde D$, since $s$ and $x_0$ were arbitrary.    QED.

Written by rallymoore

August 31, 2012 at 7:48 pm


A global definition of a differential operator

Background

${E\rightarrow X}$ and ${F\rightarrow X}$ are smooth vector bundles over a smooth manifold ${X}$. We have seen how to define differential operators locally, that is, over ${\mathbb{R}^n}$. How should we define global differential operators?

Definitions

Let us write ${Op(E,F) = Hom_\mathbb{R}(C^\infty E, C^\infty F)}$. We define a Leibniz bracket ${Op(E,F)\times C^\infty(X)\rightarrow Op(E,F)}$ by ${(D,f)\mapsto [D,f] = Df - fD,}$ which acts on a section ${s\in C^\infty E}$ by ${[D,f]s = D(fs) - fDs}$. This is similar to the Lie bracket of two vector fields, ${[X,Y]f = X(Yf) - Y(Xf)}$.

We set ${DO_{-1}(E,F) = \{0\}}$ and inductively define differential operators of order at most ${m}$ by requiring that ${D\in DO_m(E,F)}$ provided ${[D,f]\in DO_{m-1}(E,F)}$ for each ${f\in C^\infty(X)}$. That is,

$\displaystyle DO_m(E,F) = \{ D\in Op(E,F)\ |\ \forall f\in C^\infty(X)\ [D,f]\in DO_{m-1}(E,F)\}.$
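As a sanity check of the inductive definition (our own toy computation, not from the post), one can verify symbolically that for ${D = d/dx}$ the bracket ${[D,f]}$ is multiplication by ${f'}$, an operator of order ${0}$, so that ${d/dx \in DO_1}$:

```python
import sympy as sp

x = sp.symbols('x')
f = sp.Function('f')(x)   # an arbitrary smooth function, kept symbolic
s = sp.Function('s')(x)   # an arbitrary section (here: a smooth function)

# D = d/dx; the bracket [D, f]s = D(fs) - f Ds
bracket = sp.diff(f*s, x) - f*sp.diff(s, x)
# [d/dx, f] is multiplication by f', an order-0 operator,
# so d/dx lies in DO_1 by the inductive definition.
assert sp.simplify(bracket - sp.diff(f, x)*s) == 0
```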

Recall that for a section $s \in C^\infty E$, its support $supp (s)$ is the closure of $\{ x \in X \mid s(x) \not = 0 \}$.

We would like to compare differential operators to local operators, those operators which only use local information in the following sense: An operator ${T\in Op(E,F)}$ is local if, for each ${s\in C^\infty E}$, ${supp(Ts) \subset supp(s)}$.

Lemma 1 A differential operator is local.

Proof: The only obvious way to prove this lemma is by induction. It is clearly true for ${m = -1}$: then ${D = 0}$, so for any section ${s}$, ${supp (Ds) = supp(0) = \emptyset\subset supp (s)}$.

Now suppose that this is true for ${m-1}$. Let ${D\in DO_m(E,F)}$, ${s\in C^\infty E}$ be arbitrarily chosen. Let ${U}$ be any open set for which ${supp (s)\subset U}$. By Urysohn’s lemma, there is some smooth function ${f}$ with ${f|_{supp (s)} = 1}$ and ${supp (s) \subset supp (f)\subset U}$. In particular, ${fs = s}$. (For brevity’s sake, call such an ${f}$ a “support function.”) Then since ${[D,f]s = D(fs) - f(Ds)}$, we observe that

$\displaystyle Ds = D(fs) = [D,f]s + f(Ds).$

The support of the sum of two sections is contained in the union of the supports of the sections, since the sum is certainly zero wherever both sections are zero. Therefore,

$\displaystyle supp (Ds) \subset supp ([D,f]s) \cup supp (f(Ds)).$

The support of the product of a function and a section is contained in the intersection of their supports, since the product is zero wherever at least one factor is zero. Thus ${supp (f(Ds)) \subset supp(f) \cap supp(Ds)\subset supp(f)}$.

Additionally, ${supp[D,f]s \subset supp(s)}$ by the inductive assumption. So

$\displaystyle supp (Ds) \subset supp(s) \cup supp(f) = supp (f) \subset U.$

This containment holds for arbitrary ${U}$ containing ${supp(s)}$. Therefore, the support of ${Ds}$ is contained in the intersection of the closures of these open sets:

$\displaystyle supp(Ds) \subset \cap \bar{U} = supp (s).$

$\Box$

One might naively wonder if this proves anything substantial: Are there ever any operators that aren’t local? In fact, yes; here’s an example. Let ${X = \mathbb{S}^1}$, ${E = F = \mathbb{S}^1\times\mathbb{R}}$, so that the sections of ${E}$ and ${F}$ both constitute smooth functions on ${\mathbb{S}^1}$. Let ${\phi}$ be a function with support on the northern hemisphere of ${\mathbb{S}^1}$, i.e., ${supp(\phi) = \{e^{i\theta}\ |\ \theta\in[0,\pi]\}}$. Define an operator ${A_\phi\in Op(E,F)}$ by

$\displaystyle A_\phi f(\omega) = \int_{\mathbb{S}^1} f(\theta)\,\phi(\omega-\theta)\,d\theta,$

where we identify points of ${\mathbb{S}^1}$ with angles mod ${2\pi}$.

We see that ${A_\phi}$ cannot be local. Consider, say, a function ${f}$ with support only on the southern hemisphere of ${\mathbb{S}^1}$, so that ${supp(f) \cap supp(\phi) = \{1,-1\}}$. By examining the integral defining ${A_\phi}$, it’s clear that ${supp(A_\phi f)}$ is not a subset of ${supp (f)}$.
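A discretized version of this example makes the spreading of support visible (a numerical sketch in plain Python; the particular choices of $f$ and $\phi$ below are illustrative):

```python
import math

N = 360
theta = [2 * math.pi * i / N for i in range(N)]
# phi supported on the northern arc [0, pi], f on the southern arc [pi, 2*pi]:
phi = [math.sin(t) if t <= math.pi else 0.0 for t in theta]
f = [math.sin(t - math.pi) if t >= math.pi else 0.0 for t in theta]

# (A_phi f)(omega_i) = sum_j f(theta_j) phi(omega_i - theta_j) dtheta  (cyclic sum).
dtheta = 2 * math.pi / N
Af = [sum(f[j] * phi[(i - j) % N] for j in range(N)) * dtheta for i in range(N)]

# A_phi is not local: Af is nonzero well inside the northern arc, where f = 0.
i_north = N // 4              # omega = pi/2, interior of supp(phi), outside supp(f)
assert f[i_north] == 0.0
assert abs(Af[i_north]) > 1e-3
```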

This gives intuition for the word “local”: a local operator ${D}$ acting on a section ${s}$ of ${E}$ determines ${Ds(x)}$ based only on the behavior of ${s}$ in a neighborhood of ${x}$, rather than on any global information. The following corollary justifies the intuition: if two sections of ${E}$ agree on a neighborhood ${U}$ in ${X}$, i.e., ${(s_1 - s_2)|_U = 0}$, then, restricted to ${U}$, ${Ds_1 = Ds_2}$.

Proof: Put ${s = s_1 - s_2}$. Then ${supp(Ds_1 - Ds_2) = supp (Ds) \subset supp (s) = supp(s_1 - s_2)}$ and ${supp (s) \cap U = \emptyset}$. Therefore, ${supp(Ds_1 - Ds_2)\cap U = \emptyset}$, so on ${U}$ we have that ${Ds_1 = Ds_2}$. $\Box$

Since we’ve seen that differential operators are local, the natural next question is whether there are any non-differential local operators. This is answered by a theorem of Peetre, proved in the 1960s:

Theorem 2 All local operators are differential operators.

The next question is whether in local coordinates differential operators can be represented in the form

$\displaystyle \sum_{|\alpha|\leq m} A_\alpha(x)D^\alpha.$

Theorem 3 Every differential operator is, in local coordinates, of the form above.

Written by necoleman

August 29, 2012 at 2:15 am


Smooth vector bundles and local coordinates

In this entry we remind the reader of the definition of a smooth vector bundle and give local coordinates for a smooth section.

Definition [Milnor-Stasheff] A rank n vector bundle is a map $p : E \rightarrow X$ together with a vector space structure on $p^{-1}(x)$ for all $x \in X$, so that ${\forall x \in X\ \exists \text { nbhd } U}$ and a fiber-preserving homeomorphism ${\phi: p^{-1} U \rightarrow U \times {\bf R}^n}$ whose restriction ${p^{-1}(x) \rightarrow \{x\} \times {\bf R}^n}$ is a vector space isomorphism. To define a smooth vector bundle one requires that $E$ and $X$ are smooth manifolds and the $\phi$‘s are diffeomorphisms.

Definition [Steenrod (see also Davis-Kirk)] A rank n vector bundle is a map $p : E \rightarrow X$ together with a collection of homeomorphisms ${\cal B} = \{\phi_i : p^{-1}U_i \rightarrow U_i \times {\bf R}^n \}$ so that

• each ${\phi_i}$ is fiber-preserving over $U_i$,
• ${\{U_i\}}$ is an open cover of ${X}$,
• ${\forall i,j\ \exists \text{ cont } \theta_{ij} : U_i \cap U_j \rightarrow GL_n({\bf R})}$ so that $\phi_i^{-1}(x,v) = \phi_j^{-1} (x,\theta_{ij}(x)v)$,
• ${\cal B}$ is maximal with respect to the above three properties.

To define a smooth vector bundle one requires that X is a manifold and the ${\theta_{ij}}$ are smooth.
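A concrete illustration of the transition-function data is the Möbius line bundle over ${\mathbb{S}^1}$ (this example is not in the text above; the chart choices below are illustrative):

```python
import math

# The Moebius line bundle over S^1: two trivializing arcs
#   U1 = S^1 minus {angle 0},  U2 = S^1 minus {angle pi},
# whose overlap has two components. The transition function
# theta_12 : U1 ∩ U2 -> GL_1(R) is +1 on one component and -1 on the other.
def theta_12(angle):
    return 1.0 if 0 < angle % (2 * math.pi) < math.pi else -1.0

def theta_21(angle):
    return 1.0 / theta_12(angle)   # compatibility forces theta_21 = theta_12^{-1}

# phi_1^{-1}(x, v) = phi_2^{-1}(x, theta_12(x) v); consistency on the overlap
# requires theta_12(x) * theta_21(x) = 1 on both components:
for angle in (0.5, 4.0):           # one sample point in each overlap component
    assert theta_12(angle) * theta_21(angle) == 1.0

# The sign flip cannot be removed globally: the bundle is nonorientable.
assert theta_12(0.5) != theta_12(4.0)
```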

A smooth section is a smooth map ${s : X \rightarrow E}$ s.t. ${p \circ s = \text{Id}_X}$.

The vector space ${C^\infty(E)}$ of smooth sections is a module over the ring of smooth functions ${C^\infty(X,{\bf R})}$, via ${(f,s) \mapsto fs = (x \mapsto f(x)s(x))}$.
e.g. ${C^\infty(X\times {\bf R}^M) = C^\infty(X,{\bf R})^M}$

A smooth section of an ${M}$-plane bundle over an ${n}$-manifold is locally an element of ${C^\infty({\bf R}^n,{\bf R}^M)}$. To make this precise: ${\forall x \in X\ \exists \text{ nbhd } U}$, a trivialization ${\phi : p^{-1}U \rightarrow U \times {\bf R}^M}$, and a chart ${h : U \rightarrow {\bf R}^n}$ (which we may take to be a diffeomorphism onto ${\bf R}^n$). Then ${C^\infty({\bf R}^n,{\bf R}^M) \cong C^\infty(E|_{U})}$ via ${t \mapsto (u \mapsto \phi^{-1}(u,t(h(u))))}$.
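The correspondence ${t \mapsto s}$ can be sketched in code for a trivial bundle (the chart ${h}$, trivialization ${\phi}$, and local representative ${t}$ below are all illustrative choices, with ${n = 1}$, ${M = 2}$):

```python
import math

# E = U x R^2, a trivial rank-2 bundle over U = (0, 1).
h = lambda u: 2.0 * u - 1.0                # chart h : U -> R (illustrative)
t = lambda x: (math.sin(x), math.cos(x))   # local representative in C^inf(R, R^2)
phi_inv = lambda u, v: (u, v)              # phi^{-1}: trivialization of a trivial bundle

# The corresponding smooth section s(u) = phi^{-1}(u, t(h(u))):
s = lambda u: phi_inv(u, t(h(u)))

base, fiber = s(0.25)
assert base == 0.25            # p(s(u)) = u, so s really is a section
assert fiber == t(h(0.25))     # its fiber part is t read through the chart
```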

Written by jfdavis

August 27, 2012 at 1:30 am


Local differential operators


In this post we define a differential operator on Euclidean space and give some familiar examples.

For a multi-index $\alpha = (\alpha_1, \ldots, \alpha_n)$ of nonnegative integers, the differential operator $D^\alpha=\frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\cdots \frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}}$ takes a smooth function on $\mathbb{R}^n$ to a smooth function on $\mathbb{R}^n$. We call $|\alpha|=\alpha_1+\ldots + \alpha_n$ the order of this differential operator.
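As a sanity check, $D^\alpha$ acting on polynomials can be implemented in a few lines (the dictionary encoding of polynomials below is an ad hoc device for this example, not notation from the text):

```python
# Represent a polynomial in (x1, x2) as {(i, j): coeff}, meaning coeff * x1^i * x2^j.
def d_alpha(poly, alpha):
    """Apply D^alpha = (d/dx1)^alpha1 (d/dx2)^alpha2 to a polynomial."""
    a1, a2 = alpha
    out = {}
    for (i, j), c in poly.items():
        if i < a1 or j < a2:
            continue  # differentiating kills this monomial
        for k in range(i, i - a1, -1):   # falling factorial from x1-derivatives
            c *= k
        for k in range(j, j - a2, -1):   # falling factorial from x2-derivatives
            c *= k
        key = (i - a1, j - a2)
        out[key] = out.get(key, 0) + c
    return out

# D^(1,2) applied to x1^2 * x2^3 gives (2 x1)(6 x2) = 12 x1 x2:
assert d_alpha({(2, 3): 1}, (1, 2)) == {(1, 1): 12}
# and |alpha| = 1 + 2 = 3 is the order of this operator.
```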

A linear map $D:C^\infty(\mathbb{R}^n,\mathbb{R}^N)\to C^\infty(\mathbb{R}^n,\mathbb{R}^M)$ is a (linear) differential operator of order $m$ if it is of the form $\displaystyle D = \sum_{\alpha: \,|\alpha|\leq m} A^{\alpha}D^\alpha$, where each $A^\alpha$ is an $M\times N$ matrix over $C^\infty(\mathbb{R}^n,\mathbb{R})$. Given $\xi\in \mathbb{R}^n$, the symbol $\sigma_D(\xi)$ is the $M\times N$ matrix over $\mathbb{R}$ given by $\sigma_D(\xi) = \sum_{\alpha: |\alpha|=m} A^\alpha \xi^\alpha$, where $\xi^\alpha = \xi_1^{\alpha_1}\cdots \xi_n^{\alpha_n}$; only the top-order terms contribute.

We say that $D$ is elliptic if $M=N$ and for all $\xi\in \mathbb{R}^n-\{0\}$, $\sigma_D(\xi)$ is an isomorphism of $\mathbb{R}^M$ to itself.
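Here is a quick check of these definitions on a hypothetical order-1 operator with $M = N = 2$ (a Cauchy-Riemann-type operator chosen for illustration, not one of the examples below):

```python
# D = A1 d/dx1 + A2 d/dx2 on R^2, with constant coefficient matrices:
A1 = ((1.0, 0.0), (0.0, 1.0))
A2 = ((0.0, -1.0), (1.0, 0.0))

def sigma(xi):
    """The symbol sigma_D(xi) = A1 xi_1 + A2 xi_2, a 2x2 matrix."""
    return [[A1[i][j] * xi[0] + A2[i][j] * xi[1] for j in range(2)]
            for i in range(2)]

def det2(m):
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

# det sigma(xi) = xi_1^2 + xi_2^2 > 0 for xi != 0, so D is elliptic:
for xi in [(1.0, 0.0), (0.3, -0.7), (-2.0, 5.0)]:
    assert abs(det2(sigma(xi)) - (xi[0]**2 + xi[1]**2)) < 1e-12
```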

Examples

$\bullet$ Grad: $C^\infty(\mathbb{R}^3,\mathbb{R})\to C^\infty(\mathbb{R}^3,\mathbb{R}^3)$,

$Grad = \left(\begin{array}{c} 1 \\ 0 \\ 0 \end{array}\right) \frac{\partial }{\partial x_1} + \left(\begin{array}{c} 0 \\ 1 \\ 0 \end{array}\right) \frac{\partial }{\partial x_2}+ \left(\begin{array}{c} 0 \\ 0 \\ 1 \end{array}\right)\frac{\partial }{\partial x_3}$

We have
$Grad\, f = \left(\begin{array}{c} \frac{\partial f}{\partial x_1}\\ \frac{\partial f}{\partial x_2}\\ \frac{\partial f}{\partial x_3} \end{array}\right) = \left(\begin{array}{c} \frac{\partial }{\partial x_1} \\ \frac{\partial }{\partial x_2} \\ \frac{\partial }{\partial x_3} \end{array}\right) f$

$\leadsto \sigma_\text{grad}(\xi) = \left(\begin{array}{c} \xi_1\\ \xi_2\\ \xi_3 \end{array}\right)$

$\bullet$ Div: $C^\infty(\mathbb{R}^3,\mathbb{R})\to C^\infty(\mathbb{R}^3,\mathbb{R})$. (The classical divergence acts on vector fields, $C^\infty(\mathbb{R}^3,\mathbb{R}^3)\to C^\infty(\mathbb{R}^3,\mathbb{R})$; here we take the scalar analogue, so that the symbol is a square $1\times 1$ matrix.)

$Div\, = \frac{\partial }{\partial x_1} + \frac{\partial }{\partial x_2} + \frac{\partial }{\partial x_3}$

We have
$Div f = \frac{\partial f}{\partial x_1} + \frac{\partial f}{\partial x_2} + \frac{\partial f}{\partial x_3} = \left(\frac{\partial}{\partial x_1} + \frac{\partial}{\partial x_2} + \frac{\partial}{\partial x_3}\right) f$

$\leadsto \sigma_{Div}(\xi) = \xi_1+ \xi_2+ \xi_3$.

$\bullet$ Curl: $C^\infty(\mathbb{R}^3,\mathbb{R}^3)\to C^\infty(\mathbb{R}^3,\mathbb{R}^3)$,

$Curl\, = \left(\begin{array}{ccc} 0 & 0 & 0 \\ 0 & 0 & -1 \\ 0 & 1 & 0 \end{array} \right) \frac{\partial}{\partial x_1} + \left(\begin{array}{ccc} 0 & 0 & 1 \\ 0 & 0 & 0 \\ -1 & 0 & 0 \end{array} \right) \frac{\partial}{\partial x_2} +\left(\begin{array}{ccc} 0 & -1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 0 \end{array} \right) \frac{\partial}{\partial x_3}$

We have
$Curl \left(\begin{array}{c} f_1 \\ f_2 \\ f_3 \end{array}\right) = \left(\begin{array}{c} \frac{\partial f_3}{\partial x_2} - \frac{\partial f_2}{\partial x_3} \\ \frac{\partial f_1}{\partial x_3} - \frac{\partial f_3}{\partial x_1} \\ \frac{\partial f_2}{\partial x_1} - \frac{\partial f_1}{\partial x_2} \end{array}\right) = \left(\begin{array}{ccc}0&-\frac{\partial}{\partial x_3}&\frac{\partial}{\partial x_2}\\ \frac{\partial}{\partial x_3}&0&-\frac{\partial}{\partial x_1}\\-\frac{\partial}{\partial x_2}&\frac{\partial}{\partial x_1}&0\end{array}\right)\left(\begin{array}{c} f_1\\f_2\\ f_3\end{array}\right)$

$\leadsto \sigma_{\text{Curl}}(\xi) = \left(\begin{array}{ccc}0&-\xi_3&\xi_2\\ \xi_3&0&-\xi_1\\-\xi_2&\xi_1&0\end{array}\right)$.
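The symbol of Curl is exactly the cross-product matrix: $\sigma_{\text{Curl}}(\xi)v = \xi\times v$. In particular it kills $\xi$ itself, so it is singular for every $\xi$. A quick numerical confirmation (the sample vectors are arbitrary):

```python
def sigma_curl(xi):
    """sigma_Curl(xi) as given in the text."""
    x1, x2, x3 = xi
    return [[0.0, -x3, x2],
            [x3, 0.0, -x1],
            [-x2, x1, 0.0]]

def matvec(m, v):
    return [sum(m[i][j] * v[j] for j in range(3)) for i in range(3)]

def cross(a, b):
    return [a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0]]

xi, v = (0.3, -1.2, 2.0), (1.0, 0.5, -0.7)
# sigma_Curl(xi) v = xi x v:
assert all(abs(a - b) < 1e-12 for a, b in zip(matvec(sigma_curl(xi), v), cross(xi, v)))
# sigma_Curl(xi) xi = xi x xi = 0, so the symbol is never invertible:
assert all(abs(a) < 1e-12 for a in matvec(sigma_curl(xi), xi))
```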

$\bullet$ Exterior derivative $d: C^\infty(\Lambda^{1}T^\ast \mathbb{R}^3)\to C^\infty(\Lambda^{2}T^\ast \mathbb{R}^3)$,

We have
$d(f_1 dx_1 + f_2 dx_2 + f_3 dx_3) = df_1\wedge dx_1+df_2\wedge dx_2+df_3\wedge dx_3 \\ = \left(\frac{\partial f_1}{\partial x_2} dx_2\wedge dx_1+\frac{\partial f_1}{\partial x_3} dx_3\wedge dx_1\right) + \left(\frac{\partial f_2}{\partial x_1} dx_1\wedge dx_2 + \frac{\partial f_2}{\partial x_3} dx_3\wedge dx_2\right) + \left(\frac{\partial f_3}{\partial x_1} dx_1\wedge dx_3 + \frac{\partial f_3}{\partial x_2} dx_2\wedge dx_3\right) \\ = \left(\frac{\partial f_3}{\partial x_2}-\frac{\partial f_2}{\partial x_3} \right) dx_2\wedge dx_3 + \left(\frac{\partial f_1}{\partial x_3} - \frac{\partial f_3}{\partial x_1}\right) dx_3\wedge dx_1 + \left(\frac{\partial f_2}{\partial x_1}-\frac{\partial f_1}{\partial x_2}\right) dx_1\wedge dx_2.$

Now, with respect to the ordered bases $\{dx_1, dx_2, dx_3\}$ and $\{dx_2\wedge dx_3, dx_3\wedge dx_1, dx_1\wedge dx_2\}$ for $C^\infty(\Lambda^{1}T^\ast \mathbb{R}^3)$ and $C^\infty(\Lambda^{2}T^\ast \mathbb{R}^3)$, respectively, we may write this as

$d\left(\begin{array}{c} f_1 \\ f_2 \\ f_3 \end{array}\right) = \left(\begin{array}{c} \frac{\partial f_3}{\partial x_2} - \frac{\partial f_2}{\partial x_3} \\ \frac{\partial f_1}{\partial x_3} - \frac{\partial f_3}{\partial x_1} \\ \frac{\partial f_2}{\partial x_1} - \frac{\partial f_1}{\partial x_2} \end{array}\right)$,

so by the previous example we see that $\sigma_d = \sigma_{\text{Curl}}$.

$\bullet$ Laplacian $\Delta$: $C^\infty(\mathbb{R}^3,\mathbb{R})\to C^\infty(\mathbb{R}^3,\mathbb{R})$,

$\Delta = \frac{\partial^2 }{\partial x_1^2}+ \frac{\partial^2 }{\partial x_2^2}+ \frac{\partial^2 }{\partial x_3^2}$

We have
$\Delta f = \frac{\partial^2 f}{\partial x_1^2}+ \frac{\partial^2 f}{\partial x_2^2}+ \frac{\partial^2 f}{\partial x_3^2} = \left( \frac{\partial^2 }{\partial x_1^2}+ \frac{\partial^2 }{\partial x_2^2}+ \frac{\partial^2 }{\partial x_3^2} \right) f$

$\leadsto \sigma_\Delta(\xi) = \xi_1^2+ \xi_2^2+ \xi_3^2$.

All of the examples above are order 1 except for the Laplacian, which is order 2.

Which of the above are elliptic?

For dimensional reasons, Grad is excluded ($M\neq N$); the candidates are $\sigma_{\text{Div}}$, $\sigma_{\text{Curl}}$ (equivalently $\sigma_d$, since $\sigma_d = \sigma_{\text{Curl}}$), and $\sigma_\Delta$. The first two are not elliptic since, for example, $\sigma_{\text{Div}}((1,-1,0))=0$ and $\sigma_d((1,0,0)) = \left(\begin{array}{ccc}0&0&0\\ 0&0&-1\\ 0&1&0\end{array}\right)$ has determinant $0$. $\sigma_\Delta$ is elliptic, on the other hand, since $\sigma_\Delta(\xi)=|\xi|^2=0 \Leftrightarrow \xi=0$.
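These computations are simple enough to verify mechanically (plain Python; the sample covectors are arbitrary):

```python
# sigma_Div((1,-1,0)) = 1 + (-1) + 0 = 0, so Div is not elliptic:
assert sum((1, -1, 0)) == 0

# sigma_d((1,0,0)) = [[0,0,0],[0,0,-1],[0,1,0]] has a zero row, hence determinant 0:
M = [[0, 0, 0], [0, 0, -1], [0, 1, 0]]
det = (M[0][0] * (M[1][1]*M[2][2] - M[1][2]*M[2][1])
       - M[0][1] * (M[1][0]*M[2][2] - M[1][2]*M[2][0])
       + M[0][2] * (M[1][0]*M[2][1] - M[1][1]*M[2][0]))
assert det == 0

# sigma_Delta(xi) = |xi|^2 is positive for every xi != 0, so Delta is elliptic:
for xi in [(1, 0, 0), (1, -1, 0), (3, -2, 5)]:
    assert sum(x * x for x in xi) > 0
```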

Written by aclightf

August 22, 2012 at 1:57 pm
