Archive for the ‘jdr’ Tag

Quick Comments on the Trace of a Function

[Most of this is taken from Salsa, Partial Differential Equations in Action and Evans, Partial Differential Equations. It’s not terribly rigorous, mostly because I don’t really know this stuff yet. ]

Consider a general boundary value problem in {\mathbf{u}(\mathbf{x}),} {\mathbf{u}=(u_1,u_2,\ldots,u_n)} and {\mathbf{x}=(x_1,x_2,\ldots,x_m)},

{\mathcal{L}[\mathbf{u}] = \mathbf{F}(\mathbf{u},\mathbf{u}_{x_1},\mathbf{u}_{x_2},\ldots,\mathbf{x}) \qquad \mathbf{x}\in\Omega}
{\mathbf{u}} and its derivatives take on values on {\partial\Omega}

for a domain {\Omega\subseteq\mathbb{R}^m} with boundary {\partial\Omega}, a function {\mathbf{F}}, and a function of differential operators {\mathcal{L}}. Under “nice” assumptions the PDE has explicit, strong solutions {u}. In general, however, not much can be assumed about {u}. In such cases, we search for weak solutions. This can considerably complicate things. In this post, we discuss one such complication – that of defining solutions and their gradients on the boundary.

In what follows, we drop bold text on vectors and assume it will be clear from context. Assume {u} is a weak solution to some PDE defined on a domain {\Omega} with boundary {\partial\Omega}. Since {u} is a solution to the weak formulation of the PDE, we have that {u\in W(D)}, where {D\subseteq\Omega} and {W} is a Sobolev space. In particular, consider the Sobolev space of functions in {L^2(\Omega)} whose first derivatives in the sense of distributions are functions in {L^2(\Omega)}. That is, {W(D)=H^1(\Omega)=\{v\in L^2(\Omega):\nabla v\in L^2(\Omega;\mathbb{R}^n)\}.} Note that {H^1(\Omega)} is a separable Hilbert space, continuously embedded in {L^2(\Omega)}, and that the gradient is continuous from {H^1(\Omega)} to {L^2(\Omega;\mathbb{R}^n)}.

In a boundary value problem, we specify the value of {u} or its gradient on {\partial \Omega}. While this poses no issues in classical PDE theory (where {\partial \Omega} and {u} are usually assumed to be {C^1}), problems arise in more general settings; since {\mu(\partial\Omega)=0} for almost any domain (true so long as {\partial\Omega} isn’t fractal), and elements of {L^2(\Omega)} are only defined up to sets of measure zero, the value of {u} on {\partial\Omega} is completely arbitrary. We need to define an extension of {u} to {\partial\Omega} so that the boundary value problem (and its solution) is well-posed. To do this, we define a trace. Essentially, we do this by approximating {u} by a sequence of smooth functions on {\bar{\Omega}}. Since these functions are smooth up to the boundary, their boundary values are unambiguous, and the obstacle disappears.

Before defining a trace, we define one more space – that of continuous functions with compact support which converge in a suitable way.

Definition Denote by {C_0^\infty(\Omega)} the set of functions in {C^\infty(\Omega)} with compact support. Using the multiindex notation, denote

\displaystyle D^\alpha = \frac{\partial^{\alpha_1}}{\partial x_1^{\alpha_1}}\ldots\frac{\partial^{\alpha_n}}{\partial x_n^{\alpha_n}}, \ \ \ \ \ (1)
for {\alpha=(\alpha_1,\ldots,\alpha_n)} and {|\alpha|=\alpha_1+\ldots+\alpha_n}. Then, for a sequence of functions {\{\phi_k\}\subset C_0^\infty(\Omega)} and a function {\phi\in C_0^\infty(\Omega)}, we say

\displaystyle \phi_k\rightarrow\phi \qquad \text{in } C_0^\infty(\Omega) \quad \text{as } k\rightarrow \infty \ \ \ \ \ (2)
if

1) {D^\alpha\phi_k\rightarrow D^\alpha\phi} uniformly in {\Omega} for all {\alpha} and;
2) there exists a compact set {K\subset \Omega} containing the support of every {\phi_k}

We denote the space {C_0^\infty(\Omega)} endowed with this definition of convergence by {\mathcal{D}(\Omega)}. Moreover, denote by {\mathcal{D}(\bar{\Omega})} the set of restrictions to {\bar{\Omega}} of functions in {\mathcal{D}(\mathbb{R}^n)}.
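
To make the multiindex notation concrete, here is a small sympy sketch applying {D^\alpha} with {\alpha=(2,1)} to an arbitrary smooth function (the function is a made-up example, and is smooth but not compactly supported):

```python
# A sympy sketch of the multiindex derivative D^alpha for alpha = (2, 1),
# i.e. two derivatives in x and one in y, so |alpha| = 3.
import sympy as sp

x, y = sp.symbols('x y')
phi = sp.exp(-x**2 - y**2) * sp.sin(x * y)   # an arbitrary smooth function

alpha = (2, 1)
D_alpha_phi = sp.diff(phi, x, alpha[0], y, alpha[1])
print(sp.simplify(D_alpha_phi))
```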

Now, we have sufficient knowledge to define the trace of a function. (Note: Salsa and Evans take this to be a theorem. It seems just as reasonable to define a trace operator as below and state its existence as a theorem.)

Definition Consider {u} and {\Omega} defined as above. The trace operator of {u} on {\partial\Omega} is the linear operator {T:H^1(\Omega)\rightarrow L^2(\partial\Omega)} which satisfies

1) {T[u] = u|_{\partial\Omega}} for {u\in \mathcal{D}(\bar{\Omega})} and;
2) {\|T[u]\|_{L^2(\partial\Omega)}\leq c(\Omega,n)\|u\|_{H^1(\Omega)}}, where {c} is a constant depending only on {\Omega} and the dimension {n} of the ambient space {\mathbb{R}^n}.

Then, the trace of {u} on {\partial\Omega} is {T[u]}, also denoted by {u|_{\partial\Omega}}.

It can be shown that the trace operator exists (Theorem 7.11 in Salsa). It is constructed via the following process: first, consider {T:\mathcal{D}(\bar{\Omega})\rightarrow L^2(\partial\Omega)} given by restriction to the boundary. We want {T} to be continuous from {\mathcal{D}(\bar{\Omega})\subset H^1(\Omega)} into {L^2(\partial\Omega)}. As such, we need to ensure that {\|T[u]\|_{L^2(\partial\Omega)}\leq c(\Omega,n)\|u\|_{H^1(\Omega)}.} These are exactly the two conditions given in the definition of the trace operator, but for {\mathcal{D}(\bar{\Omega})} rather than {H^1(\Omega)}. Thus, we further need to extend {T} to the whole space {H^1(\Omega)}. This can be done by exploiting the fact that {\mathcal{D}(\bar{\Omega})} is dense in {H^1(\Omega)}: we approximate {u\in H^1(\Omega)} by a sequence of smooth functions converging to {u} and define {T[u]} as the limit of their restrictions.
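
As a crude numerical illustration (not a proof of anything), here is a sketch checking the trace inequality on the unit square for one smooth function; the domain, the function {u}, and the finite-difference quadrature are all arbitrary choices for this example:

```python
# A rough numerical sanity check of the trace inequality
#   ||u||_{L^2(boundary)} <= c(Omega, n) ||u||_{H^1(Omega)}
# on Omega = (0,1)^2, for one arbitrary smooth u. Illustration only.
import numpy as np

N = 400
xs = np.linspace(0.0, 1.0, N)
h = xs[1] - xs[0]
X, Y = np.meshgrid(xs, xs, indexing='ij')

U = np.sin(np.pi * X) * np.exp(Y)            # an arbitrary smooth u

# ||u||_{H^1}^2 = integral of u^2 + |grad u|^2, via finite differences
Ux, Uy = np.gradient(U, xs, xs)
h1_sq = np.sum(U**2 + Ux**2 + Uy**2) * h * h

# ||u||_{L^2(boundary)}^2: integrate u^2 over the four edges of the square
edges = [U[0, :], U[-1, :], U[:, 0], U[:, -1]]
trace_sq = sum(np.sum(e**2) * h for e in edges)

# The ratio gives a (crude) lower bound on any admissible constant c
print(np.sqrt(trace_sq), np.sqrt(h1_sq), np.sqrt(trace_sq / h1_sq))
```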

Trace functions, operators, and trace spaces form a rather important part of the theory of elliptic PDEs. While this is only a quick note, any book on the subject will yield much more information on the topic.


Linear Transformations, Rank-Nullity, and Matrix Representations (Part 1)

One of the most fundamental theorems in Linear Algebra is the Rank-Nullity, or Dimension, Theorem. This post and the next (in an attempt to study for my preliminary exams) will discuss this theorem, along with linear transformations and their relationship to matrices. In what follows, we follow Hoffman and Kunze by letting Greek letters denote vectors and lowercase Roman letters denote scalars.

We begin with a few preliminary remarks.

Definition 1 Let {V} and {W} be vector spaces over the field {\mathcal{F}}. A linear transformation {T:V\rightarrow W} is a mapping such that

\displaystyle  T[c\alpha+\beta] = cT[\alpha]+T[\beta] \ \ \ \ \ (1)

for all {\alpha,\beta\in V} and {c\in\mathcal{F}}.

Linear transformations are wildly important in all of mathematics. In {\mathbb{R}}, for example, any function which defines a line through the origin is a linear transformation. Why only lines that pass through the origin, rather than any line? Say there were a linear transformation {T:\mathbb{R}\rightarrow\mathbb{R}} such that {T[0]=\beta}. Then, by linearity, {T[\alpha]=T[\alpha+0]=T[\alpha]+T[0]=T[\alpha]+\beta}. This is only true if {\beta=0}. Indeed, for any linear transformation {T:V\rightarrow W}, {T[0_V]=0_W}, where {0_V} and {0_W} are the zero-elements of {V} and {W}, respectively. A more important example of a linear transformation is the derivative operator from the space of differentiable functions to itself. Finally, any {m\times n} matrix defines a linear transformation from {\mathbb{R}^n} to {\mathbb{R}^m}. We’ll see later that any linear transformation between finite-dimensional spaces can be written as such a matrix.
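
As a minimal sketch of the matrix example, here is a numpy check of the defining identity {T[c\alpha+\beta]=cT[\alpha]+T[\beta]} for a random {3\times 2} matrix (all particular numbers are arbitrary):

```python
# A minimal numpy sketch: an m x n matrix A defines a linear map
# T[v] = A v from R^n to R^m, and linearity T[c*a + b] = c*T[a] + T[b]
# holds up to floating-point error. All numbers here are arbitrary.
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 2))              # T : R^2 -> R^3
a, b = rng.standard_normal(2), rng.standard_normal(2)
c = 2.5

print(np.allclose(A @ (c * a + b), c * (A @ a) + A @ b))   # True
```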

Two important properties of any linear transformation are its rank and null space (or kernel).

Definition 2 Let {V} and {W} be vector spaces over a field {\mathcal{F}} with {V} finite-dimensional and let {T:V\rightarrow W} be a linear transformation. Then,

  1. The range of {T}, denoted range{(T)}, is the set {\{\beta\in W : T[\alpha]=\beta \text{ for some } \alpha \in V\}}. The rank of {T} is the dimension of range{(T)}.
  2. The null space of {T}, denoted null{(T)}, is the set {\{\alpha\in V: T[\alpha]=0\}}. The nullity of {T} is the dimension of null{(T)}.

Notice that by naming their dimensions, we’ve implicitly assumed that the range and nullspace of a linear transformation are themselves vector spaces. This is not too hard to justify, so we’ll do it here.

Theorem 3 Let {V} and {W} be vector spaces over the field {\mathcal{F}} and let {T:V\rightarrow W} be a linear transformation. Then, range{(T)} is a subspace of {W} and null{(T)} is a subspace of {V}.

Proof: To show our result, we use the subspace test. So, we will show that range{(T)} and null{(T)} are nonempty sets which are closed under vector addition and scalar multiplication.

First recall from above that {T[0_V]=0_W}. This shows that both the range and nullspace of {T} are never empty.

Now, we want to show that if {\beta_1,\beta_2\in} range{(T)}, then {c\beta_1+\beta_2\in} range{(T)} as well. So, assume {T[\alpha_1]=\beta_1} and {T[\alpha_2]=\beta_2} so that {\beta_1,\beta_2\in} range{(T)}. Then, by linearity, {T[c\alpha_1+\alpha_2]=c\beta_1+\beta_2}. This means, however, that there is an {\alpha\in V} such that {T[\alpha]=c\beta_1+\beta_2} whenever {\beta_1} and {\beta_2} are in range{(T)} – i.e., {c\beta_1+\beta_2} is in the range of {T}. This gives us that range{(T)} is a subspace of {W}.

We follow the same process for the nullspace. So, assume {\alpha_1,\alpha_2\in} null{(T)}. Then, {T[c\alpha_1+\alpha_2] = cT[\alpha_1]+T[\alpha_2] = c(0)+(0)=0.} So, {c\alpha_1+\alpha_2\in} null{(T)} whenever {\alpha_1} and {\alpha_2} are. That is, null{(T)} is a subspace of {V}. \Box

With all of this in hand, we’re finally ready to state and prove the Rank-Nullity theorem for transformations.

Theorem 4 (Rank-Nullity) Let {V} and {W} be vector spaces over the field {\mathcal{F}}, {V} finite-dimensional, and let {T:V\rightarrow W} be a linear transformation. Then,

\displaystyle  \text{rank}(T) + \text{nullity}(T) = \text{dim}(V). \ \ \ \ \ (2)

Proof: Our proof will proceed as follows: first, we’ll assume a basis for null{(T)}. We know this exists, since {V} is finite-dimensional and null{(T)} is a subspace of {V}. From this and the basis for {V}, we’ll construct a spanning set for range{(T)} and show that it’s linearly independent.
Assume that dim{(V)=n}. Then, assume {\{\alpha_1,\ldots,\alpha_k\}} is a basis for null{(T)} (certainly, {k\leq n}). We contend that every non-empty linearly independent set of vectors in a finite-dimensional vector space can be extended to a basis for that space. Why? If the set does not already span the space, there is some vector outside its span, and adjoining that vector produces a strictly larger set which is still linearly independent. Repeating this process must terminate, since a linearly independent set in an {n}-dimensional space can have at most {n} elements, and it terminates exactly when the set spans – that is, when it is a basis. So, since {\{\alpha_1,\ldots,\alpha_k\}} is a linearly independent subset of {V}, we can extend it to a basis {\{\alpha_1,\ldots,\alpha_k,\alpha_{k+1},\ldots,\alpha_n\}} for {V}.
Now, consider the set {\{T[\alpha_1],\ldots,T[\alpha_n]\}}. This certainly spans range{(T)}. Moreover, since {\alpha_1,\ldots,\alpha_k} are in the nullspace of {T}, {T[\alpha_i]=0} for {i=1,\ldots,k}. We have, then, that {\{T[\alpha_{k+1}],\ldots,T[\alpha_n]\}} is a spanning set of range{(T)} and want to show that it’s linearly independent. So consider the linear combination

\displaystyle  \sum_{i=k+1}^n c_i T[\alpha_i] = 0 \ \ \ \ \ (3)

for scalars {c_i}. We want to show that {c_i=0} for all {i}. Since {T} is linear, (3) is equivalent to

\displaystyle  T[\sum_{i=k+1}^n c_i\alpha_i] = 0. \ \ \ \ \ (4)

Thus, {\alpha=\sum_{i=k+1}^n c_i\alpha_i} is an element of the nullspace of {T}. Since {\{\alpha_1,\ldots,\alpha_k\}} is a basis for the nullspace, we know that

\displaystyle  \sum_{i=1}^k b_i \alpha_i = \alpha = \sum_{i=k+1}^n c_i\alpha_i. \ \ \ \ \ (5)

Rewriting (5) as {\sum_{i=1}^k b_i\alpha_i - \sum_{i=k+1}^n c_i\alpha_i = 0} and using the linear independence of the full set {\{\alpha_1,\ldots,\alpha_n\}}, we get {b_i=c_i=0} for all {i}. Thus, we have that {\{T[\alpha_{k+1}],\ldots,T[\alpha_n]\}} is a linearly independent set and so a basis for range{(T)}. Therefore,

\displaystyle  \text{rank}(T) + \text{nullity}(T) = (n-k) + k = n = \text{dim}(V), \ \ \ \ \ (6)

as desired. \Box
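
As a quick sanity check of the theorem, here is a sympy computation of the rank and nullity of one concrete (arbitrarily chosen) matrix, viewed as a transformation {T:\mathbb{R}^4\rightarrow\mathbb{R}^3}:

```python
# A sympy sanity check of rank(T) + nullity(T) = dim(V) for a concrete
# matrix, viewed as a linear transformation T: R^4 -> R^3.
import sympy as sp

A = sp.Matrix([[1, 2, 3, 4],
               [2, 4, 6, 8],
               [1, 0, 1, 0]])

rank = A.rank()                  # dimension of range(T)
nullity = len(A.nullspace())     # dimension of null(T)
print(rank, nullity, rank + nullity == A.cols)   # 2 2 True
```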

In introductory Linear Algebra classes and texts, the Rank-Nullity Theorem is usually written in terms of the solvability of a matrix equation. In the next post, we discuss the representation of transformations by matrices. We will see that this can always be done (for finite-dimensional spaces). Certainly, it’s not hard to see that there are times when it is easier to manipulate matrices than linear transformations, and vice versa. Indeed, in practice, to quote Hoffman and Kunze, “We may do this when it is convenient, and when it is not convenient we shall not.” The following post will also discuss how to restate the Rank-Nullity Theorem in terms of matrices.

Some Links

I came across some links which seem like they’ll be helpful over the next few years of Ph.D. school. Posting them here to refer to later.

Some poster design tips: http://www2.lib.uct.ac.za/infolit/poster.htm

A nice paper on ‘how to prove things.’ It’s written for someone in their first proof-based course, but still has good tips for the general prover (pdf): http://cheng.staff.shef.ac.uk/proofguide/proofguide.pdf

A paper on why determinants shouldn’t be introduced in Linear Algebra classes until, e.g., eigenvalues have already been established. It’s the basis for Linear Algebra Done Right (both the paper and the book by S. Axler): http://www.axler.net/DwD.html

Math 213: How to Integrate

After the second test, most of Calc III revolves around various methods for finding line and surface integrals (in Rogawski’s text, this corresponds to chapter 16). Evaluating these integrals is straightforward, particularly once you know the shortcuts (e.g., divergence theorem). Even without the shortcuts, however, integrals of this type are simple so long as you know which formula to use when. So, what follows is a flowchart you can follow to figure out which formula to use. After the chart are a number of example problems solved using the flowchart.

Before I introduce the flowchart, however, keep a few things in mind:

  • Your final answer should always be a number. So, if you integrate and your answer is a vector, you’ve done something wrong! Specifically, if you’re trying to integrate a vector-valued function (e.g., \langle x^2y, ye^z,\sin y \rangle ), you should do a dot product at some point.
  • Sometimes, you’ll have to figure out which parametrization to use. If you’re given something like y=f(x) , you should probably parametrize like {\mathbf c}(t) = \langle t, f(t) \rangle . Here, t will have the same bounds as x. If you’re given something else, you’ll have to work a little harder. Some common cases are the parametrization of a line and parametrization of a circle. Say you’re given two points, {\mathbf a} and {\mathbf b}. Then, it’s probably easiest to parametrize like {\mathbf c}(t)={\mathbf a}+({\mathbf b}-{\mathbf a})t for 0\leq t \leq 1 (Note: You can also find an equation for your line in terms of y=mx+b and use {\mathbf c}(t) = \langle t, f(t) \rangle . Sometimes this is easier, sometimes it’s harder – it’s hard to say a priori how it will go. Really, the parametrization you use is a matter of personal preference and how comfortable you are with each method.). Parametrizing a circle is a little more straightforward. Unless you have a particularly compelling reason not to (and off the top of my head, I certainly can’t think of any), always use {\mathbf c}(\theta)=\langle r\cos \theta, r\sin \theta \rangle. Then, make your \theta bounds correspond to how much of the circle you’re considering.
  • The above point is, of course, a little trickier for 3D surfaces, but the same ideas roughly apply. For example, if you’re given z=f(x,y), you can parametrize using {\mathbf\phi}(r,s) = \langle r,s,f(r,s)\rangle. Here, your bounds on r and s correspond to your bounds on x and y. See your book (or me or Gus) for more details; a short sympy sketch of these parametrizations follows this list.
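
Here is the promised sympy sketch of the standard parametrizations above; the particular points, radius, and function f are made-up examples:

```python
# A sympy sketch of the standard parametrizations above. The points,
# radius, and function f are all made-up examples.
import sympy as sp

t, theta, r, s = sp.symbols('t theta r s')

# Line segment from a = (1, 2) to b = (4, 6): c(t) = a + (b - a)t, 0 <= t <= 1
a, b = sp.Matrix([1, 2]), sp.Matrix([4, 6])
line = a + (b - a) * t
print(line.T)                    # Matrix([[3*t + 1, 4*t + 2]])

# Circle of radius 4: c(theta) = (4 cos theta, 4 sin theta)
circle = sp.Matrix([4 * sp.cos(theta), 4 * sp.sin(theta)])

# Graph surface z = f(x, y): phi(r, s) = (r, s, f(r, s))
f = r**2 + s                     # an arbitrary example f
phi = sp.Matrix([r, s, f])
print(phi.T)                     # Matrix([[r, s, r**2 + s]])
```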

With those points out of the way, we can now look at the flowchart. Though it doesn’t explicitly say so, start in the upper-left corner.

This flowchart describes (essentially) four cases. We’ll give an example of each.

Example 1: Contour Integral of a scalar function

Say we’re trying to find \int_c xy ds over the contour described by x^2+y^2=16. Following the flowchart, we answer a few questions. First, we are given a function (xy), so we move on to answer if our function is scalar or vector. Obviously, we know from the title of this example that we have a scalar function. If we weren’t given that information, however, we would still know we have a scalar function (recall, a scalar function is anything which isn’t a vector function). Moreover, since we’re integrating over something in 2D, x^2+y^2=16, we know that we have a contour (or line) integral. So, following the flowchart, we evaluate our integral using \int_a^b f({\mathbf c}(t))||{\mathbf c}^\prime(t)||dt. That is, we only have three more things to do until we’ve found our answer: first, we must find {\mathbf c}(t) and its derivative. Second, we substitute what we’ve just found into the equation for scalar line integrals. Finally, we evaluate the integral.

Since we’re integrating over a circle of radius 4 (recall that a circle with radius r is written implicitly as x^2+y^2=r^2), we want to parametrize using {\mathbf c}(\theta) = \langle 4\cos\theta, 4\sin\theta \rangle for 0\leq \theta \leq 2\pi . So, {\mathbf c}^\prime(\theta) = \langle -4\sin\theta, 4\cos\theta \rangle. Thus, we’ve accomplished our first step. Second, we want to substitute these into our integral, giving

\int_a^b f({\mathbf c}(\theta))||{\mathbf c}^\prime(\theta)||d\theta = \int_0^{2\pi} (4\cos\theta)(4\sin\theta)\sqrt{(-4\sin\theta)^2+(4\cos\theta)^2}\,d\theta = 64\int_0^{2\pi} \cos\theta\sin\theta \,d\theta .

Finally, we want to evaluate this integral. There are a few methods for doing this (one could, say, rewrite this integral using trig-identities). I’ll use u-substitution. So, setting u=\sin\theta, we have du=\cos\theta d\theta, and our bounds go from \theta=0 to u=\sin 0=0 and from \theta=2\pi to u=\sin 2\pi=0. Since our bounds now go from 0 to 0, we’re not integrating over any area! That is, our final answer is

\int_c xy ds = 0 .

Of course, given the symmetry of the circle (xy takes equal positive and negative values around it), this is expected.
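
For the skeptical, here is a sympy check of Example 1, mirroring the computation above:

```python
# A sympy check of Example 1: the scalar line integral of f = x*y over
# the circle x^2 + y^2 = 16 should come out to 0.
import sympy as sp

theta = sp.symbols('theta')
c = sp.Matrix([4 * sp.cos(theta), 4 * sp.sin(theta)])            # c(theta)
speed = sp.simplify(sp.sqrt(c.diff(theta).dot(c.diff(theta))))   # ||c'|| = 4
f_on_c = c[0] * c[1]                                             # x*y on the curve

print(sp.integrate(f_on_c * speed, (theta, 0, 2 * sp.pi)))       # 0
```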

Example 2: Contour integral of a vector function

When we have a vector-valued function, we know we need to take a dot product. Otherwise, we’d be integrating over a vector (which we know is wrong from the bulleted list above!). So, say we have {\mathbf F}=\langle x,y,z \rangle and a contour described by {\mathbf c}(t)=\langle t, e^t,t^2 \rangle for 0\leq t \leq 2. Following the flowchart, we get to a point where we have to test if {\mathbf F} is conservative. Indeed, testing each of our conditions, we find that {\mathbf F} is conservative. So, we can use the fundamental theorem of line integrals to evaluate our answer quite easily. We’ll do it the long way too, for practice.

  • Using the Fundamental Theorem of Line Integrals (FTLI)

To use the FTLI, we have to first find the potential function for {\mathbf F}. In this case, it’s pretty straightforward. We want to find a scalar function, say \phi, such that \nabla \phi={\mathbf F}. By inspection, we see that \phi=\frac{x^2}{2}+\frac{y^2}{2}+\frac{z^2}{2} will work (\nabla[\frac{x^2}{2}+\frac{y^2}{2}+\frac{z^2}{2}] = \langle \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \rangle[\frac{x^2}{2}+\frac{y^2}{2}+\frac{z^2}{2}] = \langle x,y,z\rangle). So, to finish evaluating, we simply use the formula \phi({\mathbf c}(2))-\phi({\mathbf c}(0)) = \phi(2,e^2,4)-\phi(0,1,0) = \frac{19+e^4}{2}.

  • Using Parameterization

Following the flowchart the long way leads to the formula \int_c {\mathbf F}\cdot d{\mathbf s} = \int_a^b {\mathbf F}(\mathbf{c}(t))\cdot {\mathbf c}^\prime(t) dt. Like in Example 1, we follow a three step process here. First, we find {\mathbf c}^{\prime}(t) = \langle 1, e^t,2t \rangle. Next, we find {\mathbf F}({\mathbf c}(t))\cdot{\mathbf c}^{\prime}(t) = 2t^3+t+e^{2t}. Finally, we integrate this function, \int_0^2 2t^3+t+e^{2t} dt = \frac{19+e^4}{2}. The two methods give the same answer (which is nice).
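
Here is a sympy sketch checking Example 2 both ways; the printed difference between the two methods should be 0:

```python
# A sympy check of Example 2: the Fundamental Theorem of Line Integrals
# and direct parametrization should give the same value, (19 + e^4)/2.
import sympy as sp

t, x, y, z = sp.symbols('t x y z')

c = sp.Matrix([t, sp.exp(t), t**2])        # the contour, 0 <= t <= 2
F = sp.Matrix([x, y, z])                   # the vector field

# Direct parametrization: integrate F(c(t)) . c'(t)
Fc = F.subs({x: c[0], y: c[1], z: c[2]})
direct = sp.integrate(Fc.dot(c.diff(t)), (t, 0, 2))

# FTLI with the potential phi = (x^2 + y^2 + z^2)/2 found above
phi = (x**2 + y**2 + z**2) / 2
end, start = c.subs(t, 2), c.subs(t, 0)
ftli = phi.subs({x: end[0], y: end[1], z: end[2]}) \
     - phi.subs({x: start[0], y: start[1], z: start[2]})

print(sp.simplify(direct - ftli), sp.simplify(direct))   # 0, e^4/2 + 19/2
```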

Example 3 : Surface integral of a scalar function

Let’s consider \int_S xy+z \, dS over the surface x+2y+z=2 in the first octant (x,y,z\geq 0). Following the flowchart appropriately, we arrive at the formula \int\int_S f \, dS = \int\int_D f(\phi(u,v))\,||\phi_u\times\phi_v||\,du\,dv. So, to continue, we need to find a parametrization, \phi(u,v), of our surface, x+2y+z=2. One valid parametrization here is \phi(u,v)=\langle u,v,2-u-2v \rangle. Note that this comes from solving the surface equation x+2y+z=2 for z and replacing x and y with u and v, respectively. Then, to find our normal vector, we compute

{\mathbf n} = \phi_u\times\phi_v = \langle 1,2,1 \rangle.

(It’s quite hard to write down how to do a cross product in wordpress. If you’re unsure how I got \langle 1,2,1 \rangle, check your book, or ask me or Gus). Now that we have our normal vector, we simply need to take its magnitude and take the actual integral. So, we find that ||{\mathbf n}|| = \sqrt{1^2+2^2+1^2}=\sqrt{6}. Here, it’s worth pointing out a useful short-cut. You learned in class that, if you can write your surface as z=f(x,y) for some function f(x,y), you can use the formula ||{\mathbf n}|| = \sqrt{1+\left(\frac{\partial f}{\partial x}\right)^2+\left(\frac{\partial f}{\partial y}\right)^2}. Let’s try that in this case: \frac{\partial f}{\partial x} = -1 and \frac{\partial f}{\partial y} = -2. So, \sqrt{1+\left(\frac{\partial f}{\partial x}\right)^2+\left(\frac{\partial f}{\partial y}\right)^2} = \sqrt{6}, just like we found above. The next step is to determine that f(\phi(u,v))=uv+2-u-2v.

Finally, we just need to find the bounds of our integral and actually compute the answer. Because we’re doing a surface integral, we only look at the projection of our surface. In other words, we need to find the triangle in the xy-plane. So, we set z=0 and find y=1-\frac{1}{2}x, which meets the x-axis at x=2. Thus, the bounds on our integral will be 0\leq x \leq 2 and 0\leq y \leq 1-\frac{1}{2}x . So, our integral becomes \sqrt{6}\int_0^2\int_0^{1-\frac{1}{2}u} uv-u-2v+2 \, dvdu = \frac{5}{6}\sqrt{6}. The integrand is a little ugly, but the answer comes out cleanly.
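
Since the projection bounds are easy to get wrong, here is a sympy check of Example 3:

```python
# A sympy check of Example 3: the scalar surface integral of f = x*y + z
# over the part of x + 2y + z = 2 in the first octant.
import sympy as sp

u, v = sp.symbols('u v')
phi = sp.Matrix([u, v, 2 - u - 2 * v])     # parametrization from z = 2 - x - 2y
n = phi.diff(u).cross(phi.diff(v))         # normal vector; comes out to (1, 2, 1)
dS = sp.sqrt(n.dot(n))                     # ||n|| = sqrt(6)

f = phi[0] * phi[1] + phi[2]               # x*y + z on the surface
print(sp.integrate(f * dS, (v, 0, 1 - u / 2), (u, 0, 2)))   # 5*sqrt(6)/6
```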

Example 4 : Surface integral of a vector function

Finally, let’s find the surface integral of the vector field {\mathbf F}=\langle x,y^2,xy \rangle over the helicoid parameterized by \phi(u,v)=\langle u \cos v, u\sin v, v \rangle for 0\leq u \leq 1 and 0 \leq v \leq 2\pi. Following the flowchart, we see that this integral is fairly straightforward. We simply need to find: {\mathbf F(\phi(u,v))}, {\mathbf n}=\phi_u\times \phi_v, and {\mathbf F}(\phi(u,v))\cdot {\mathbf n}. So, we do each of these in turn.

Substituting \phi(u,v) into {\mathbf F} gives \langle u\cos v,u^2\sin^2 v,u^2\cos v\sin v \rangle. Then, taking the cross product of the partials of \phi will give {\mathbf n} = \phi_u\times \phi_v = \langle \sin v, -\cos v, u \rangle. Next, we find the dot product of these two quantities, giving \langle u\cos v,u^2\sin^2 v,u^2\cos v\sin v\rangle\cdot\langle \sin v, -\cos v, u \rangle = u\cos v\sin v - u^2\sin^2 v\cos v+u^3\cos v\sin v. This is a particularly ugly integrand, but each term is easy to integrate in v via a simple substitution. In fact, we’ll find that \int_0^1\int_0^{2\pi} u\cos v\sin v - u^2\sin^2v\cos v+u^3\cos v\sin v \,dvdu = 0.
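
And, for completeness, a sympy check of Example 4:

```python
# A sympy check of Example 4: the flux of F = (x, y^2, x*y) through the
# helicoid phi(u, v) = (u cos v, u sin v, v) should be 0.
import sympy as sp

u, v = sp.symbols('u v')
phi = sp.Matrix([u * sp.cos(v), u * sp.sin(v), v])
n = phi.diff(u).cross(phi.diff(v))          # (sin v, -cos v, u)

x, y, z = phi                               # coordinates on the surface
F = sp.Matrix([x, y**2, x * y])             # F evaluated on the surface

print(sp.integrate(F.dot(n), (v, 0, 2 * sp.pi), (u, 0, 1)))   # 0
```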

So that’s it – how you do surface and line integrals without any of the tools from chapter 17. Do you have any questions after that? Did I make any mistakes in my math? If so, feel free to comment below (or send me an email) and I’ll do my best to fix things!

Write down what you’ve done

I was taught both basic probability and statistics theory by a young Swiss statistician who liked to talk about putting new mathematical tricks and theorems “in your math pocket.” Certain tricks appear again and again throughout disparate mathematical disciplines (multiplying by a “well-chosen one” or switching the order of integration are basic tricks which come to mind immediately). A mathematician’s math pocket was supposed to hold these tricks so they could be called upon whenever needed. In a post to his blog, Fields Medal winner Terry Tao echoed a similar sentiment. Rather than filling up one’s mental math pocket, however, he suggests writing these tricks down.

In this light, two other aspiring applied mathematicians at the Colorado School of Mines and I will be posting to this blog. Certainly, it will serve as a spot to write down these tricks. However, it will (hopefully) be much more. Each of us hopes to be a professional academic mathematician. Such a profession, of course, requires that we be capable of doing mathematics. Much of the job, however, involves teaching and writing. As such, this blog will help us to improve our mathematical exposition and our pedagogical skills. It will allow us to record neat tricks and problems we’ve come across. Mathematics requires creativity and the ability to apply mathematical concepts to interesting physical and scientific phenomena. As such, this blog will give us reason to write up the ideas we have and the things to which we’d like to apply mathematics. Mathematics is inherently collaborative, so we will share interesting papers and mathematical tidbits. It will even serve as a way for the three of us to continue communicating as we spread across the country for our Ph.D. studies.

It’s hard to say now how often this blog will be updated (presumably there will be many new posts over the summers and winters with a significant drop-off once classes begin). This blog, however, will certainly be an aid as we continue along the path to becoming applied mathematicians.