I. Integration of functions on $\mathbb R^k$

Definition (Integral on a $k$-cell). Suppose $I^k= \{\mathbf x=(x_1,x_2,\dots, x_k): a_i\le x_i \le b_i, 1\le i\le k\}$ is a $k$-cell, and let $f:I^k\to \mathbb R$ be continuous. Then $f$ is uniformly continuous on $I^k$, the following sequence of iterated integrals converges, and we define the integral of $f$ on $I^k$ by $$\int_{I^k}f = \int_{I^k} f(\mathbf x) d\mathbf x = \int_{a_1}^{b_1} \int_{a_2}^{b_2} \dots \int_{a_k}^{b_k} f(x_1, x_2, \dots, x_k) dx_k dx_{k-1} \dots dx_1.$$
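Example. As a simple illustration of the definition (with a function chosen here just for the computation), take $k=2$, $I^2=[0,1]\times[0,2]$ and $f(x_1,x_2)=x_1x_2^2$; then $$\int_{I^2} f=\int_0^1\int_0^2 x_1x_2^2\, dx_2\, dx_1=\int_0^1 \frac{8}{3}x_1\, dx_1=\frac{4}{3}.$$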
Remark.
  • On the one hand, defining the integral on $I^k$ as a sequence of one-variable integrals is useful since, in the end, the only computations one can actually perform are integrals of a single variable, simply because for those we can use antiderivatives (note that we have not seen any such tool for functions of two or more variables!).
  • On the other hand, it is also an annoying definition since we are bound to this iterated-integral approach; in particular, when bounding the difference between the integrals of two functions that differ on very small sets (typically a discontinuous function and its smoothed version), one cannot directly rely on bounds on the area/volume of the set on which these functions differ!
  • In this regard, we will (soon) give more material that provides another way to define integrals, relying on one single limit at once rather than on a sequence of limits (one for each of the iterated integrals); of course, it is still a Riemann integral and it coincides with the one here when the domain is a $k$-cell!
Theorem (Integral of continuous functions on a $k$-cell). Let $I^k$ be a $k$-cell $[a_1,b_1]\times \dots \times [a_k,b_k]$, and $f:I^k\to \mathbb R$ be continuous. For a permutation $\pi$ of $\{1,2, \dots, k\}$, write $$L_\pi = \int_{a_{\pi_1}}^{b_{\pi_1}} \dots \int_{a_{\pi_k}}^{b_{\pi_k}} f(x_1,\dots, x_k) dx_{\pi_k} \dots dx_{\pi_1}.$$ Then, for any permutations $\pi$ and $\sigma$ of $\{1,\dots, k\}$, we have $L_\pi=L_\sigma$.
Remark. The flexibility in the order in which the sequence of integrals is computed is usually crucial: it may happen that in some order one can always find antiderivatives to perform the computation, while in another order it looks quite impossible to do. So, when stuck, it is not a bad idea to try to exchange the order of the integrals, as in the example below.
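Example. A standard illustration (the integrand is chosen here only to make the point): on the $2$-cell $[0,1]\times[1,2]$, the function $f(x,y)=x^y$ is continuous. Integrating first in $x$ gives $$\int_1^2\int_0^1 x^y\, dx\, dy=\int_1^2\frac{dy}{y+1}=\ln\frac{3}{2},$$ whereas integrating first in $y$ leads to $\int_0^1\frac{x^2-x}{\ln x}\, dx$, for which no elementary antiderivative is available; by the theorem, this less tractable iterated integral nevertheless also equals $\ln\frac{3}{2}$.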
Definition (Support). The support of a function $f:\mathbb R^k\to \mathbb R$, denoted by $\operatorname{supp}(f)$, is the smallest closed set containing all the points $\mathbf x\in \mathbb R^k$ where $f(\mathbf x)\ne 0$: $$\operatorname{supp}(f)=\overline{\{\mathbf x\in \mathbb R^k: f(\mathbf x)\ne 0\}}.$$
Definition (Integral of a continuous function with compact support). Suppose that $f:\mathbb R^k\to \mathbb R$ is continuous on $\mathbb R^k$ and has compact support. Then, there exists a $k$-cell $I^k$ such that $\operatorname{supp}(f)\subset I^k$, and we define $$\int_{\mathbb R^k} f = \int_{\mathbb R^k} f(\mathbf x) d\mathbf x = \int_{I^k} f.$$
Remarks.
  • It should be clear that this definition indeed makes sense since if a $k$-cell $J^k$ contains $I^k$, then $\int_{I^k} f= \int_{J^k}f$.
  • Note that the function $f$ is assumed to be continuous on $\mathbb R^k$; in particular, $f(\mathbf x)=0$ if $\mathbf x$ is a boundary point of $\operatorname{supp}(f)$. As a consequence, it is not clear that the integral of a function that is not continuous on $\mathbb R^k$ exists, even if it has compact support, unless that support turns out to be a $k$-cell.
  • We will (soon) provide additional material that tells exactly under which conditions a function with compact support that is not continuous on $\mathbb R^k$ is Riemann integrable.
Theorem (Change of variables). Let $E\subseteq \mathbb R^k$ be open and suppose that $T:E\to \mathbb R^k$ is one-to-one and $\mathscr C^1$ in $E$. Suppose further that the Jacobian $J_T(\mathbf x)\ne 0$ for every $\mathbf x\in E$. If $f$ is a continuous function on $\mathbb R^k$ with compact support included in $T(E)$, then $$\int_{\mathbb R^k} f(\mathbf y) d\mathbf y = \int_{\mathbb R^k} f(T(\mathbf x)) |J_T(\mathbf x)| d\mathbf x.$$
Remarks.
  • Note the absolute value on the Jacobian; it is easy to see that it is needed: if $f(\mathbf y)\ge 0$ for every $\mathbf y$, then the integral on the left is non-negative, while without the absolute value the right-hand side could change sign depending on the sign of $J_T$.
  • Of course, $\operatorname{supp}(f) \subseteq T(E)$ is crucial, since otherwise not all the points $\mathbf y$ that matter for the integral on the left-hand side could be achieved as $T(\mathbf x)$ for some $\mathbf x\in E$.
  • The requirements that $J_T\ne 0$ on $E$ and that $T$ be one-to-one on $E$ can be slightly relaxed; but they cannot be removed altogether.
  • In $\mathbb R$, the change of variables formula is merely used to transform the integrand into one for which one knows an antiderivative; in $\mathbb R^k$, the story is different since the change of variables formula can also be used to change the shape of the support of the function (or of the domain of integration) and make it a $k$-cell or a $k$-simplex (see the polar-coordinate example below).
  • It is always possible to find a sequence of changes of variables each of which modifies only one of the variables at a time (primitive mappings), but the main interest of the theorem in $\mathbb R^k$ is that it allows one to change all variables at once.
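Example. The polar-coordinate map is the standard illustration of these remarks: take $E=(0,\infty)\times(0,2\pi)$ and $T(r,\theta)=(r\cos\theta, r\sin\theta)$, which is one-to-one and $\mathscr C^1$ on $E$ with $J_T(r,\theta)=r>0$. For any continuous $f$ whose compact support is contained in $T(E)$ (for instance, contained in the open upper half-plane), the theorem gives $$\int_{\mathbb R^2} f(x,y)\, dx\, dy=\int_{\mathbb R^2} f(r\cos\theta, r\sin\theta)\, r\, dr\, d\theta,$$ which replaces a curved domain of integration in the $(x,y)$-plane by a rectangular one in the $(r,\theta)$-plane.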

II. Differential Forms

1. Definitions
Convention. We say that a function $f$ is differentiable, or of class $\mathscr C^p$, on a compact set $K$ if there exist an open set $E$ containing $K$ and a map $g$ that is $\mathscr C^p$ on $E$ such that $f(\mathbf x)=g(\mathbf x)$ for all $\mathbf x\in K$.
Definition ($k$-surface). Suppose $E\subset \mathbb R^n$ is open. A $k$-surface in $E$ is a $\mathscr C^1$ map $\Phi$ from a compact set $D\subset\mathbb R^k$ into $E$; here, we restrict ourselves to the case where $D$ is a $k$-cell or a $k$-simplex.
Definition (Differential form). Suppose that $E$ is an open subset of $\mathbb R^n$. A differential form of order $k\ge 1$ in $E$ is a map $\omega$, symbolically represented by the formal sum $$\omega = \sum_{1\le i_1, i_2,\dots,i_k \le n} a_{i_1i_2\dots i_k}(\mathbf x)~ dx_{i_1} \wedge dx_{i_2} \wedge \dots \wedge dx_{i_k},$$ where all the real-valued functions $a_{i_1i_2\dots i_k}$ are continuous in $E$, which assigns to each $k$-surface $\Phi$ in $E$ with domain $D$ a value $\omega(\Phi)$ given by $$\omega(\Phi) = \int_\Phi \omega = \sum_{1\le i_1, i_2, \dots, i_k \le n} \int_D a_{i_1i_2\dots i_k}(\Phi(\mathbf u)) \frac{\partial (\Phi_{i_1}, \Phi_{i_2}, \dots, \Phi_{i_k})}{\partial (u_1,u_2,\dots, u_k)} d\mathbf u.$$ A $0$-form in $E$ is defined to be a continuous function on $E$.
Remark. At this point, the representation of $\omega$ is only symbolic and merely provides the information needed to compute the integral of $\omega$ on all the $k$-surfaces in $E$; one could as well give an array of $n^k$ entries containing the continuous functions $a_{i_1i_2\dots i_k}:E\to \mathbb R$ for $1\le i_1,i_2,\dots, i_k \le n$.
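Example. A small illustration of the definition (the form and the surface are chosen here only as an example): in $E=\mathbb R^2$, consider the $1$-form $\omega=x\, dy$ and the $1$-surface $\Phi:[0,2\pi]\to\mathbb R^2$, $\Phi(u)=(\cos u,\sin u)$, which parametrizes the unit circle. The only Jacobian involved is $\frac{\partial \Phi_2}{\partial u}=\cos u$, so $$\int_\Phi \omega=\int_0^{2\pi} \cos u\cdot \cos u\, du=\pi,$$ the area enclosed by the circle (a first hint at Stokes' Theorem below).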
Definition (Basic $k$-form). Let $E\subseteq \mathbb R^n$ be open. The forms $dx_{i_1} \wedge dx_{i_2} \wedge \dots \wedge dx_{i_k}$ with $1\le i_1< i_2 < \dots < i_k\le n$ are called basic $k$-forms. In such a case, we call $I=\{i_1,i_2,\dots,i_k\}$ an increasing $k$-index and write $dx_I = dx_{i_1}\wedge \dots \wedge dx_{i_k}$.
Theorem. Let $E\subseteq \mathbb R^n$ be open. Let $\omega$ be a $k$-form in $E$. Then, there exists a unique family of functions $(b_I)_I$ continuous in $E$ such that $$\omega = \sum_I b_I(\mathbf x) dx_I,$$ where the sum ranges over the increasing $k$-indices in $\{1,2,\dots, n\}.$ This representation is called the standard presentation of $\omega$.
Remark. Seeing the $k$-forms as alternating $k$-tensors (as in HW 8), the previous theorem relates to the fact that the family $(dx_I)$, when $I$ ranges over the increasing $k$-indices in $\{1,\dots, n\}$, is a basis of the vector space of alternating $k$-tensors.
Definition (Wedge product). Let $E\subseteq \mathbb R^n$ be open.
  • If $I$ and $J$ are respectively a $p$-index and a $q$-index in $\{1,\dots,n\}$, with $$I=\{i_1, i_2,\dots, i_p\} \qquad \text{and} \qquad J=\{j_1,j_2,\dots, j_q\},$$ we define the $(p+q)$-form $dx_I \wedge dx_J$ by $$dx_I \wedge dx_J = dx_{i_1} \wedge \dots \wedge dx_{i_p} \wedge dx_{j_1} \wedge \dots \wedge dx_{j_q}.$$
  • If $\omega$ and $\lambda$ are $p$ and $q$-forms in $E$ with standard presentations $$\omega = \sum_I b_I(\mathbf x) dx_I \qquad \text{and} \qquad \lambda = \sum_J c_J(\mathbf x) dx_J,$$ then we define the $(p+q)$-form $\omega \wedge \lambda$ by $$\omega \wedge \lambda = \sum_{I,J} b_I(\mathbf x) c_J(\mathbf x) dx_I \wedge dx_J.$$
2. Computation rules
Proposition. Let $\omega, \lambda, \eta$ be $k$, $\ell$ and $m$-forms in an open $E\subseteq \mathbb R^n$, respectively. Then
  • $\omega \wedge (\lambda \wedge \eta) = (\omega \wedge \lambda ) \wedge \eta = \omega \wedge \lambda \wedge \eta$
  • $\omega \wedge \lambda = (-1)^{k \ell } \lambda \wedge \omega$
  • if $\ell=m$ then $\omega \wedge (\lambda + \eta) = \omega \wedge \lambda + \omega \wedge \eta$.
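Example. These rules make computations mechanical. For instance, in $\mathbb R^2$, using $dx\wedge dx=dy\wedge dy=0$ and $dy\wedge dx=-dx\wedge dy$, $$(x\, dx+y\, dy)\wedge(-y\, dx+x\, dy)=x^2\, dx\wedge dy-y^2\, dy\wedge dx=(x^2+y^2)\, dx\wedge dy,$$ which is also the standard presentation of this $2$-form.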
Definition (Differentiation). Let $E\subset \mathbb R^n$ be open. Given a $\mathscr C^1$ differential form $\omega$ of order $k$ in $E$, we associate to it a $(k+1)$-form in $E$, denoted by $d\omega$, defined as follows:
  • if $k=0$, then $\omega = f\in \mathscr C^1(E)$, and $$df = \sum_{i=1}^n D_if(\mathbf x) dx_i,$$
  • if $k\ge 1$, then $\omega = \sum_{I} b_I(\mathbf x) dx_I$ with $b_I\in \mathscr C^1(E)$ for every increasing $k$-index $I$ in $\{1,2,\dots, n\}$, and $$d\omega = \sum_{I} db_I \wedge dx_I.$$
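Example. For instance, in $\mathbb R^2$, for the ($\mathscr C^\infty$) $1$-form $\omega=-y\, dx+x\, dy$ we get $$d\omega=d(-y)\wedge dx+d(x)\wedge dy=-dy\wedge dx+dx\wedge dy=2\, dx\wedge dy.$$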
Theorem (Product rule for differential forms). Let $E\subset \mathbb R^n$ be open.
  • If $\omega$ and $\lambda$ are $\mathscr C^1$ differential forms in $E$, of order $k$ and $m$, respectively, we have $$d(\omega \wedge \lambda) = d\omega \wedge \lambda + (-1)^k \omega \wedge d\lambda.$$
  • If $\omega$ is of class $\mathscr C^2$, then $d^2 \omega := d(d\omega) = 0$.
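Example. For a $0$-form $f\in\mathscr C^2(E)$, the identity $d^2f=0$ can be checked by hand: $$d(df)=\sum_{i,j} D_jD_if(\mathbf x)\, dx_j\wedge dx_i=\sum_{i<j}\big(D_iD_jf(\mathbf x)-D_jD_if(\mathbf x)\big)\, dx_i\wedge dx_j=0,$$ by the symmetry of the second-order partial derivatives of a $\mathscr C^2$ function and the rule $dx_j\wedge dx_i=-dx_i\wedge dx_j$ (the terms with $i=j$ vanish).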
3. Change of variables
Definition (Pull-back). Let $E\subseteq \mathbb R^n$ be open, $T\in \mathscr C^1(E,V)$ where $V\subset \mathbb R^m$ is open. Suppose that for $\mathbf x\in E$, $\mathbf y=T(\mathbf x) = \sum_{i=1}^m t_i(\mathbf x) \mathbf e_i \in V$. Let $\omega$ be a $k$-form in $V$ whose standard presentation is $$\omega = \sum_I b_I(\mathbf y) dy_I.$$ We define the pull-back of $\omega$ by $T$ as the $k$-form in $E$ given by $$\omega_T = \sum_I b_I(T(\mathbf x))\, dt_{I}:=\sum_{I} b_I(T(\mathbf x))\, dt_{i_1} \wedge \dots \wedge dt_{i_k},$$ where the sum ranges over increasing $k$-indices $I=\{i_1,i_2,\dots, i_k\}$ and $dt_I=dt_{i_1}\wedge dt_{i_2} \wedge \dots \wedge dt_{i_k}$.
Remark. In particular, when $\omega$ is a $0$-form given by a function $f$ continuous on $V$, we have $f_T(\mathbf x) = f(T(\mathbf x))$ for every $\mathbf x\in E$.
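Example. As an illustration (with the polar-coordinate map again), let $E=\mathbb R^2$ with coordinates $(r,\theta)$, $V=\mathbb R^2$ with coordinates $(y_1,y_2)$, and $T(r,\theta)=(r\cos\theta, r\sin\theta)$, so that $t_1(r,\theta)=r\cos\theta$ and $t_2(r,\theta)=r\sin\theta$. For $\omega=dy_1\wedge dy_2$, $$\omega_T=dt_1\wedge dt_2=(\cos\theta\, dr-r\sin\theta\, d\theta)\wedge(\sin\theta\, dr+r\cos\theta\, d\theta)=r\, dr\wedge d\theta,$$ and the Jacobian factor of the change of variables formula for functions reappears in the language of forms.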
Theorem. Let $E\subseteq \mathbb R^n$ be open, $T\in \mathscr C^1(E,V)$ where $V\subset \mathbb R^m$ is open. Suppose that $\omega$ and $\lambda$ are $k$- and $\ell$-forms in $V$, respectively. Then
  • if $k=\ell$ then $(\omega + \lambda)_T = \omega_T + \lambda_T$;
  • $(\omega\wedge \lambda)_T = \omega_T \wedge \lambda_T$;
  • if $\omega$ is $\mathscr C^1$ and $T$ is $\mathscr C^2$, then $(d\omega)_T = d(\omega_T)$.
Theorem (Change of variables formula for differential forms). Suppose that $T$ is a $\mathscr C^1$ mapping from an open set $E\subseteq \mathbb R^n$ into an open set $V\subseteq \mathbb R^m$, and $\omega$ is a $k$-form in $V$. Then, if $\Phi$ is a $k$-surface in $E$, we have $$\int_{T\Phi} \omega = \int_\Phi \omega_T.$$

III. Simplices and Chains

Definition (Oriented affine $k$-simplex). Given $\mathbf p_0, \mathbf p_1, \dots, \mathbf p_k\in \mathbb R^n$, the oriented affine $k$-simplex $\sigma$, $k\ge 1$, denoted by $[\mathbf p_0,\mathbf p_1,\dots, \mathbf p_k]$, is defined as the affine map from the standard $k$-simplex $Q^k=\{\mathbf u = (u_1,\dots, u_k): u_i\ge 0, \sum u_i \le 1\}\subseteq \mathbb R^k$ into $\mathbb R^n$ given by $$\sigma(\mathbf u) = \mathbf p_0 + \sum_{i=1}^k (\mathbf p_i-\mathbf p_0) u_i.$$ If $k=0$, an oriented affine $0$-simplex is simply a signed point: $\sigma = + \mathbf p_0$ or $\sigma = - \mathbf p_0$.
Remarks.
  • If $k\ge 1$, then an oriented affine $k$-simplex $\sigma$ is a $k$-surface, and thus, for any $k$-form $\omega$ in $\mathbb R^n$, the integral $$\int_\sigma \omega $$ is well-defined.
  • If $k=0$, we define the integral of a $0$-form $f$ on $\mathbb R^n$ on the oriented affine $0$-simplex $\sigma=\varepsilon \mathbf p_0$ ($\varepsilon\in \{+1,-1\}$) by $$\int_\sigma f = \varepsilon f(\mathbf p_0).$$
Definition (Orientation of affine $k$-simplices). Given $\mathbf p_0, \mathbf p_1,\dots, \mathbf p_k\in \mathbb R^n$, there is a natural equivalence relation on the collection of all the oriented affine $k$-simplices that have corners $\{\mathbf p_0,\dots, \mathbf p_k\}$: for a permutation $(i_0,\dots, i_k)$ of $\{0,1,\dots, k\}$, we write $$[\mathbf p_{i_0}, \mathbf p_{i_1}, \dots, \mathbf p_{i_k}] = s(i_0,i_1,\dots, i_k) [\mathbf p_0, \mathbf p_1,\dots, \mathbf p_k],$$ where $s(i_0,\dots, i_k)$ is the signature of $(i_0,i_1,\dots, i_k)$. This equivalence relation divides the set of oriented affine $k$-simplices on $\{\mathbf p_0,\dots, \mathbf p_k\}$ into two groups: two simplices are in the same group precisely if one can be obtained from the other by interchanging the corners according to a permutation with positive signature.
Theorem. If $\sigma$ is an oriented affine $k$-simplex in $E\subseteq \mathbb R^n$ open, and if $\tilde \sigma = \varepsilon \sigma$, then, for every $k$-form $\omega$ in $E$, $$\int_{\tilde \sigma} \omega = \varepsilon \int_\sigma \omega.$$
Definition (Affine $k$-chain). An affine $k$-chain $\Gamma$ in an open set $E\subset \mathbb R^n$ is a finite collection of oriented affine $k$-simplices in $E$, say $\sigma_1, \sigma_2,\dots, \sigma_r$. It is symbolically denoted by the formal sum $\Gamma = \sigma_1 + \dots + \sigma_r$. For any $k$-form $\omega$ in $E$, we have by definition $$\int_\Gamma \omega = \sum_{i=1}^r \int_{\sigma_i} \omega.$$
Definition (Boundary of an oriented affine $k$-simplex). If $\sigma = [\mathbf p_0, \mathbf p_1,\dots, \mathbf p_k]$ is an oriented affine $k$-simplex in $\mathbb R^n$, the boundary $\partial \sigma$ of $\sigma$ is defined as the $(k-1)$-chain in $\mathbb R^n$ given by $$\partial \sigma = \sum_{i=0}^k (-1)^i [\mathbf p_0, \dots, \mathbf p_{i-1}, \mathbf p_{i+1}, \dots, \mathbf p_k].$$
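Example. For an oriented affine $2$-simplex $\sigma=[\mathbf p_0,\mathbf p_1,\mathbf p_2]$ (a triangle), the definition gives $$\partial\sigma=[\mathbf p_1,\mathbf p_2]-[\mathbf p_0,\mathbf p_2]+[\mathbf p_0,\mathbf p_1],$$ a $1$-chain made of the three edges; the signs are such that the edges are traversed consistently, from $\mathbf p_0$ to $\mathbf p_1$, then to $\mathbf p_2$, then back to $\mathbf p_0$.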
Definition (Differentiable simplices and chains). Suppose $E\subseteq \mathbb R^n$ and $V\subseteq \mathbb R^m$ are open. Let $T\in \mathscr C^2(E,V)$ and let $\sigma$ be an oriented affine $k$-simplex in $E$. Then:
  • The map $\Phi= T \circ \sigma$ is a $k$-surface in $V$ that we call an oriented $k$-simplex of class $\mathscr C^2$ in $V$.
  • If $\Phi_1, \dots, \Phi_r$ are oriented $k$-simplices of class $\mathscr C^2$ in $V$, then the collection $\Psi$ formally denoted by $\Psi=\Phi_1 + \Phi_2 + \dots + \Phi_r$ is called a $k$-chain of class $\mathscr C^2$ in $V$.
  • The boundary $\partial \Phi=\partial(T\circ \sigma)$ of $\Phi$ is defined to be the $(k-1)$-chain $T(\partial \sigma)$.
  • The boundary of the $k$-chain $\Psi$ is defined as $\sum_{i=1}^r \partial \Phi_i$.
Definition (Positively oriented boundary of sets).
  • For the standard $k$-simplex $Q^k$, we define $\partial Q^k$ as $\partial \sigma$, where $\sigma = [\mathbf 0,\mathbf e_1,\dots, \mathbf e_k]$ is the identity map of $Q^k$.
  • If $E=T(Q^n)\subseteq \mathbb R^n$, where $T$ is a $\mathscr C^2$ one-to-one map from $Q^n$ into $\mathbb R^n$ with positive Jacobian, then we define $\partial E = \partial T=T(\partial \sigma)$, with $\sigma$ as in the previous item.
  • If $\Omega = E_1 \cup E_2 \cup \dots \cup E_r$, where the $E_i$ have pairwise disjoint interiors and $E_i=T_i(Q^n)$ with $T_i$ a $\mathscr C^2$ one-to-one map from $Q^n$ into $\mathbb R^n$ with positive Jacobian, then one defines $\partial \Omega = \sum_{i=1}^r \partial T_i$.
Remark. Clearly, one could define the simplices and chains above under the sole condition that $T$ be $\mathscr C^1$; it is because of what follows that one restricts oneself to $T\in \mathscr C^2$.

IV. Stokes' Theorem, closed forms and exact forms

Theorem (Stokes' Theorem). If $\Psi$ is a $k$-chain of class $\mathscr C^2$ in an open $E$ of $\mathbb R^n$, and $\omega$ is a $(k-1)$-form of class $\mathscr C^1$ in $E$, then $$\int_\Psi d\omega = \int_{\partial \Psi} \omega.$$
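Example. A minimal illustration in $\mathbb R^2$ (Green's theorem): suppose $\Psi$ is a $2$-chain of class $\mathscr C^2$ representing the closed unit disk with its positively oriented boundary $\partial\Psi$ (the unit circle traversed counterclockwise), as in the definition of positively oriented boundaries above. For $\omega=x\, dy$ we have $d\omega=dx\wedge dy$, so Stokes' Theorem gives $$\int_{\partial\Psi} x\, dy=\int_\Psi dx\wedge dy=\pi,$$ the area of the disk, consistent with the direct computation of $\int_\Phi x\, dy$ on the unit circle carried out earlier.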
Definition (Closed and Exact forms).
  • A form $\omega$ of class $\mathscr C^1$ in $E$ is said to be closed if $d\omega = 0 $.
  • A $k$-form $\omega$ in $E$ is said to be exact if there exists a $(k-1)$-form $\lambda$ that is $\mathscr C^1$ in $E$ and such that $\omega = d\lambda$.
Remarks.
  • The fact that a form $\omega$ is closed on $E$ can be checked locally at every point of $E$ by computing $d \omega$;
  • The fact that $\omega$ is exact on $E$, on the other hand, amounts to solving a system of partial differential equations, and the fact that there exists a solution that is $\mathscr C^1$ on $E$ does depend on the geometry of $E$.
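Example. The standard example showing that the geometry of $E$ matters (the classical angular form): on $E=\mathbb R^2\setminus\{\mathbf 0\}$, the $1$-form $$\omega=\frac{-y\, dx+x\, dy}{x^2+y^2}$$ is closed (a direct computation gives $d\omega=0$), but it is not exact in $E$. Indeed, its integral on the $1$-surface $\Phi(u)=(\cos u,\sin u)$, $u\in[0,2\pi]$, equals $2\pi$, whereas $\Phi$ is a $1$-simplex of class $\mathscr C^2$ (compose the affine $1$-simplex $[0,2\pi]$ with $s\mapsto(\cos s,\sin s)$), so if $\omega=d\lambda$ with $\lambda$ of class $\mathscr C^1$ in $E$, Stokes' Theorem would give $\int_\Phi\omega=\int_{\partial\Phi}\lambda=\lambda(1,0)-\lambda(1,0)=0$. On an open half-plane avoiding the origin, however, $\omega$ is exact: it is the differential of a suitable branch of the polar angle.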
Corollary. Let $E\subseteq \mathbb R^n$ be open.
  • Let $\omega$ be a closed $k$-form on $E$ and $\Psi$ a $(k+1)$-chain of class $\mathscr C^2$ in $E$. Then $$\int_{\partial \Psi} \omega = 0.$$
  • Let $\omega$ be an exact form in $E$. If $\Psi_1$ and $\Psi_2$ are $k$-chains of class $\mathscr C^2$ in $E$ with $\partial \Psi_1 = \partial \Psi_2$, then $$\int_{\Psi_1} \omega = \int_{\Psi_2} \omega.$$
  • Let $\omega$ be an exact form in $E$. If $\Psi$ is a $k$-chain of class $\mathscr C^2$ in $E$ with $\partial \Psi = 0$, then $$\int_{\Psi} \omega = 0.$$
  • If $\omega$ is a $k$-form of class $\mathscr C^2$ in $E$ and $\Psi$ is a $(k+2)$-chain of class $\mathscr C^2$ in $E$, then $$\int_{\partial^2 \Psi} \omega = 0,$$ where $\partial^2 \Psi$ is the $k$-chain $\partial(\partial \Psi)$.
Remark. The fact that the integral over $\partial^2 \Psi$ of every $k$-form $\omega$ of class $\mathscr C^2$ on $E$ vanishes when $\Psi$ is a $(k+2)$-chain of class $\mathscr C^2$ on $E$ actually extends to all $k$-forms on $E$, which means that $\partial^2 \Psi=0$ when $\Psi$ is of class $\mathscr C^2$ (but the proof cannot use Stokes' Theorem).

V. Classical vector calculus

Typical exercises you should be able to solve (to be continued ...).
  • Given a function $f$ on a $k$-cell, compute the integral of $f$.
  • Use the change of variables formula for integrals of functions on a $k$-cell or related regions.
  • Given a $k$-form $\omega$ and a $k$-chain $\Phi$, compute $\int_\Phi \omega$; in particular, use the dictionary to transform it into an integral of a function on a $k$-cell or $k$-simplex, and compute the relevant Jacobians.
  • Basic manipulations of differential forms: multiplication, differentiation, swaps in the wedge products, etc.
  • Find a $k$-surface whose image is a set defined by geometric considerations: circles, disks, ellipses, annuli, spheres, ellipsoids, cones, tori, etc., as well as portions, or simple transformations, of those.
  • Verify that a form is closed.
  • Compute the boundary of a $k$-surface.
  • Verify that a form is exact.
  • Apply Stokes' Theorem; more importantly, be able to recognize when applying it simplifies calculations.