## Archive for the ‘jdr’ Tag

### Quick Comments on the Trace of a Function

[Most of this is taken from Salsa, Partial Differential Equations in Action and Evans, Partial Differential Equations. It’s not terribly rigorous, mostly because I don’t really know this stuff yet. ]

Consider a general boundary value problem

$$\mathcal{L}u = f \quad \text{in } \Omega,$$

where $u$ and its derivatives take on prescribed values on $\partial\Omega$,

for a domain $\Omega$ with boundary $\partial\Omega$, a function $f$, and a differential operator $\mathcal{L}$. Under “nice” assumptions the PDE has explicit, strong solutions $u$. However, in general, not much can be assumed about the data or the domain. In such cases, we search for weak solutions. This can considerably complicate things. In this post, we discuss one such complication – that of defining solutions and their gradients on the boundary.

In what follows, we drop bold text on vectors and assume it will be clear from context. Assume $u$ is a weak solution to some PDE defined on a domain $\Omega \subset \mathbb{R}^n$ with boundary $\partial\Omega$. Since $u$ is a solution to the weak formulation of the PDE, we have that $u \in V$, where $V \subseteq H^1(\Omega)$ and $H^1(\Omega)$ is a Sobolev space. In particular, consider the Sobolev space of functions in $L^2(\Omega)$ whose first derivatives, in the sense of distributions, are functions in $L^2(\Omega)$. That is,

$$H^1(\Omega) = \{ v \in L^2(\Omega) : \nabla v \in L^2(\Omega; \mathbb{R}^n) \}.$$

Note that $H^1(\Omega)$ is a separable Hilbert space, continuously embedded in $L^2(\Omega)$, and that the gradient is continuous from $H^1(\Omega)$ to $L^2(\Omega; \mathbb{R}^n)$.

In a boundary value problem, we specify the value of $u$ or its gradient on $\partial\Omega$. While this poses no issues in classical PDE theory (where $u$ and $\partial\Omega$ are usually assumed to be smooth), problems arise in more general settings; since $\partial\Omega$ has measure zero for almost any domain (true so long as $\partial\Omega$ isn’t fractal), the value of an $L^2$ function on $\partial\Omega$ is completely arbitrary. We need to define an extension of $u$ to $\partial\Omega$ so that the boundary value problem (and its solution) is well-posed. To do this, we define a trace. Essentially, we do this by approximating $u$ by a sequence of smooth functions on $\bar\Omega$. Since these functions are smooth, their boundary values are unambiguous, and we remove the obstacle.

Before defining a trace, we define one more space – that of smooth functions with compact support, together with a suitable notion of convergence.

**Definition** Denote by $\mathcal{D}(\Omega)$ the set of functions in $C^\infty(\Omega)$ with compact support. Using the multi-index notation, denote

$$D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_n^{\alpha_n}}$$

for $\alpha = (\alpha_1, \dots, \alpha_n)$ and $|\alpha| = \alpha_1 + \cdots + \alpha_n$. Then, for a sequence of functions $\{\varphi_k\} \subset \mathcal{D}(\Omega)$ and a function $\varphi \in \mathcal{D}(\Omega)$, we say $\varphi_k \to \varphi$

if

1) $D^\alpha \varphi_k \to D^\alpha \varphi$ uniformly in $\Omega$ for all multi-indices $\alpha$, and

2) there exists a compact set $K \subset \Omega$ containing the support of every $\varphi_k$.

We denote the space endowed with this definition of convergence by $\mathcal{D}(\Omega)$. Moreover, denote by $\mathcal{D}(\bar\Omega)$ the set of restrictions to $\bar\Omega$ of functions in $\mathcal{D}(\mathbb{R}^n)$.

Now, we have sufficient knowledge to define the trace of a function. (Note: Salsa and Evans take this to be a theorem. It seems just as reasonable to define a trace operator as below and state its existence as a theorem.)

**Definition** Consider $H^1(\Omega)$ and $\mathcal{D}(\bar\Omega)$ defined as above. The trace operator of $H^1(\Omega)$ on $\partial\Omega$ is the linear operator $\tau : H^1(\Omega) \to L^2(\partial\Omega)$ which satisfies

1) $\tau u = u|_{\partial\Omega}$ for $u \in \mathcal{D}(\bar\Omega)$, and

2) $\|\tau u\|_{L^2(\partial\Omega)} \le C\,\|u\|_{H^1(\Omega)}$, where $C$ is a constant depending only on $\Omega$ and $n$, the dimension of the space (e.g., $\Omega \subset \mathbb{R}^n$).

Then, the trace of $u$ on $\partial\Omega$ is $\tau u$, also denoted by $u|_{\partial\Omega}$.

It can be shown that the trace operator exists (Theorem 7.11 in Salsa). It is constructed via the following process: first, consider $v \in \mathcal{D}(\bar\Omega)$. We want $\tau$ to be continuous from $H^1(\Omega)$ into $L^2(\partial\Omega)$. As such, we need to ensure that $\tau v = v|_{\partial\Omega}$ and $\|\tau v\|_{L^2(\partial\Omega)} \le C\,\|v\|_{H^1(\Omega)}$. These are exactly the two conditions given in the definition of the trace operator, but for $v \in \mathcal{D}(\bar\Omega)$ rather than all of $H^1(\Omega)$. Thus, we further need to extend $\tau$ to the whole space $H^1(\Omega)$. This can be done by exploiting the fact that $\mathcal{D}(\bar\Omega)$ is dense in $H^1(\Omega)$. Specifically, this is done with a sequence of smooth functions that converges to $u$ in $H^1(\Omega)$.
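The density argument in that last step can be written out in a few lines; here is a sketch (with the notation from above) filling in the reasoning:

```latex
% Extension of \tau from \mathcal{D}(\bar\Omega) to H^1(\Omega) by density.
% Given u \in H^1(\Omega), pick v_k \in \mathcal{D}(\bar\Omega) with
% v_k \to u in H^1(\Omega). By linearity and the trace inequality,
\|\tau v_k - \tau v_m\|_{L^2(\partial\Omega)}
    \le C\,\|v_k - v_m\|_{H^1(\Omega)} \longrightarrow 0,
% so (\tau v_k) is Cauchy in the complete space L^2(\partial\Omega). Define
\tau u := \lim_{k \to \infty} \tau v_k \quad \text{in } L^2(\partial\Omega),
% which is independent of the approximating sequence, again by the bound.
```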

Trace functions, operators, and trace spaces form a rather important part of the theory of elliptic PDEs. While this is only a quick note, any book on the subject will yield much more information on the topic.

### Linear Transformations, Rank-Nullity, and Matrix Representations (Part 1)

One of the most fundamental theorems in Linear Algebra is the Rank-Nullity, or Dimension, Theorem. This post and the next (in an attempt to study for my preliminary exams) will discuss this theorem, along with linear transformations and their relationship to matrices. In what follows, we follow Hoffman and Kunze by letting Greek letters denote vectors and lowercase Roman letters denote scalars.

We begin with a few preliminary remarks.

**Definition 1** Let $V$ and $W$ be vector spaces over the field $F$. A *linear transformation* is a mapping $T : V \to W$ such that

$$T(c\alpha + \beta) = cT(\alpha) + T(\beta)$$

for all $\alpha, \beta \in V$ and $c \in F$.

Linear transformations are wildly important in all of mathematics. In $\mathbb{R}$, for example, any function which defines a line through the origin is a linear transformation. Why only lines that pass through the origin, rather than *any* line? Say there were a linear transformation $T$ with $T(0) = b$. Then, $T(0) = T(0 + 0)$, so by linearity, $T(0) = T(0) + T(0) = 2b$. This is only true if $b = 0$. Indeed, for any linear transformation $T : V \to W$, $T(0_V) = 0_W$, where $0_V$ and $0_W$ are the zero elements of $V$ and $W$, respectively. A more important example of a linear transformation is the derivative, viewed as a map from the space of differentiable functions to itself. Finally, any $m \times n$ matrix defines a linear transformation from $F^n$ to $F^m$. We’ll see later that any linear transformation between finite-dimensional spaces can be written as such a matrix.
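To make the “lines through the origin” point concrete, here is a quick numerical check (with two made-up maps) that the line $y = 3x$ satisfies the linearity condition while the shifted line $y = 3x + 1$ does not:

```python
# Two made-up maps on R: T_linear is the line y = 3x through the origin;
# T_affine is the line y = 3x + 1, which misses the origin.
def T_linear(x):
    return 3.0 * x

def T_affine(x):
    return 3.0 * x + 1.0

a, b, c = 2.0, -1.5, 4.0

# The linearity condition T(c*a + b) == c*T(a) + T(b) holds for the line
# through the origin and fails for the shifted line.
assert T_linear(c * a + b) == c * T_linear(a) + T_linear(b)
assert T_affine(c * a + b) != c * T_affine(a) + T_affine(b)

# And only the map through the origin sends 0 to 0.
assert T_linear(0.0) == 0.0
assert T_affine(0.0) != 0.0
```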

Two important properties of any linear transformation are its range and its null space (or kernel).

**Definition 2** Let $V$ and $W$ be vector spaces over a field $F$ with $V$ finite-dimensional, and let $T : V \to W$ be a linear transformation. Then,

- The *range* of $T$, denoted range$(T)$, is the set $\{T(\alpha) : \alpha \in V\}$. The *rank* of $T$ is the dimension of range$(T)$.
- The *null space* of $T$, denoted null$(T)$, is the set $\{\alpha \in V : T(\alpha) = 0_W\}$. The *nullity* of $T$ is the dimension of null$(T)$.

Notice that by naming their dimensions, we’ve implicitly assumed that the range and null space of a linear transformation are themselves vector spaces. This is not too hard to justify, so we’ll do it here.

**Theorem 3** Let $V$ and $W$ be vector spaces over the field $F$ and let $T : V \to W$ be a linear transformation. Then, range$(T)$ is a subspace of $W$ and null$(T)$ is a subspace of $V$.

*Proof:* To show our result, we use the subspace test. So, we will show that range$(T)$ and null$(T)$ are nonempty sets which are closed under vector addition and scalar multiplication.

First recall from above that $T(0_V) = 0_W$. This shows that both the range and null space of $T$ are never empty.

Now, we want to show that if $T(\alpha)$ and $T(\beta)$ are in range$(T)$, then $cT(\alpha) + T(\beta)$ is in range$(T)$ as well. So, assume $\alpha, \beta \in V$ and $c \in F$, so that $T(\alpha), T(\beta) \in \operatorname{range}(T)$. Then, by linearity, $cT(\alpha) + T(\beta) = T(c\alpha + \beta)$. This means, however, that there is a vector in $V$ (namely, $c\alpha + \beta$) which $T$ sends to $cT(\alpha) + T(\beta)$ – i.e., $cT(\alpha) + T(\beta)$ is in the range of $T$. This gives us that range$(T)$ is a subspace of $W$.

We follow the same process for the null space. So, assume $\alpha, \beta \in \operatorname{null}(T)$ and $c \in F$. Then,

$$T(c\alpha + \beta) = cT(\alpha) + T(\beta) = c\,0_W + 0_W = 0_W.$$

So, $c\alpha + \beta$ is in null$(T)$ whenever $\alpha$ and $\beta$ are. That is, null$(T)$ is a subspace of $V$.

With all of this in hand, we’re finally ready to state and prove the Rank-Nullity theorem for transformations.

**Theorem 4 (Rank-Nullity)** Let $V$ and $W$ be vector spaces over the field $F$, with $V$ finite-dimensional, and let $T : V \to W$ be a linear transformation. Then,

$$\dim V = \operatorname{rank}(T) + \operatorname{nullity}(T).$$

*Proof:* Our proof will proceed as follows: first, we’ll fix a basis for null$(T)$. We know one exists, since $V$ is finite-dimensional and null$(T)$ is a subspace of $V$. From this and a basis for $V$ extending it, we’ll construct a spanning set for range$(T)$ and show that it’s linearly independent.

Assume that $\dim V = n$ and that $\{\alpha_1, \dots, \alpha_k\}$ is a basis for null$(T)$ (certainly, $k \le n$). We contend that every non-empty linearly independent set of vectors in a finite-dimensional vector space is part of a basis for that vector space. Why? Assume not. If only one element, say $\alpha$, from this linearly independent set, say $S$, is absent from the basis, then either $\alpha$ would be linearly independent from the basis, which would be a contradiction (a basis spans the space), or there would be an element of the basis, say $\beta$, linearly dependent on $\alpha$. If this were the case, however, $\alpha$ could take the place of $\beta$ in the basis. This argument can be extended to more elements of $S$ until we have shown that no element of $S$ can be in the basis. This is clearly absurd. So, since $\{\alpha_1, \dots, \alpha_k\}$ is a linearly independent subset of $V$, we can extend it to a basis for $V$, say $\{\alpha_1, \dots, \alpha_k, \alpha_{k+1}, \dots, \alpha_n\}$.

Now, consider the set $\{T\alpha_1, \dots, T\alpha_n\}$. This certainly spans range$(T)$. Moreover, since $\alpha_1, \dots, \alpha_k$ are in the null space of $T$, $T\alpha_i = 0_W$ for $i = 1, \dots, k$. We have, then, that $\{T\alpha_{k+1}, \dots, T\alpha_n\}$ is a spanning set of range$(T)$ and want to show that it’s linearly independent. So consider the linear combination

$$\sum_{i=k+1}^{n} c_i\,T\alpha_i = 0_W$$

for scalars $c_i$. We want to show that $c_i = 0$ for all $i$. Since $T$ is linear, this is equivalent to

$$T\left(\sum_{i=k+1}^{n} c_i\,\alpha_i\right) = 0_W.$$

Thus, $\sum_{i=k+1}^{n} c_i\,\alpha_i$ is an element of the null space of $T$. Since $\{\alpha_1, \dots, \alpha_k\}$ is a basis for the null space, we know that

$$\sum_{i=k+1}^{n} c_i\,\alpha_i = \sum_{j=1}^{k} b_j\,\alpha_j$$

for some scalars $b_j$. The $\alpha_i$ form a linearly independent set, however, so $c_i = 0$ for all $i$ (and $b_j = 0$ for all $j$). Thus, we have that $\{T\alpha_{k+1}, \dots, T\alpha_n\}$ is a linearly independent set and so a basis for range$(T)$. Therefore,

$$\dim V = n = k + (n - k) = \operatorname{nullity}(T) + \operatorname{rank}(T),$$

as desired.
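As a quick numerical sanity check of the theorem, the sketch below uses numpy and an arbitrary, made-up $3 \times 5$ matrix as the linear transformation $T : \mathbb{R}^5 \to \mathbb{R}^3$:

```python
import numpy as np

# Verify rank-nullity numerically for a made-up map T: R^5 -> R^3,
# represented by the matrix A (any matrix works here).
rng = np.random.default_rng(0)
A = rng.integers(-3, 4, size=(3, 5)).astype(float)

rank = np.linalg.matrix_rank(A)

# Build a basis of null(A) from the SVD: the rows of Vt whose singular
# value is (numerically) zero span the null space.
U, s, Vt = np.linalg.svd(A)
s_full = np.concatenate([s, np.zeros(A.shape[1] - len(s))])
null_basis = Vt[s_full < 1e-10]
nullity = null_basis.shape[0]

# Every null-space basis vector really is sent to zero...
assert np.allclose(A @ null_basis.T, 0.0)
# ...and dim(domain) = rank + nullity, as the theorem promises.
assert rank + nullity == A.shape[1]
```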

In introductory Linear Algebra classes and texts, the Rank-Nullity Theorem is usually written in terms of the solvability of a matrix equation. In the next post, we discuss the representation of transformations by matrices (we will see that this can always be done) and how to restate the Rank-Nullity Theorem in matrix terms. Certainly, it’s not hard to see that there are times when it is easier to manipulate matrices than linear transformations, and vice versa. Indeed, in practice, to quote Hoffman and Kunze, “We may do this when it is convenient, and when it is not convenient we shall not.”

### Some Links

I came across some links which seem like they’ll be helpful over the next few years of Ph.D. school. Posting them here to refer to later.

Some poster design tips: http://www2.lib.uct.ac.za/infolit/poster.htm

A nice paper on ‘how to prove things.’ It’s written for someone in their first proof-based course, but still has good tips for the general prover (pdf): http://cheng.staff.shef.ac.uk/proofguide/proofguide.pdf

A paper on why determinants shouldn’t be introduced in Linear Algebra classes until, e.g., eigenvalues have already been established. It’s the basis for Linear Algebra Done Right (both the paper and the book by S. Axler): http://www.axler.net/DwD.html

### Math 213: How to Integrate

After the second test, most of Calc III revolves around various methods for finding line and surface integrals (in Rogawski’s text, this corresponds to chapter 16). Evaluating these integrals is straightforward, particularly once you know the shortcuts (e.g., divergence theorem). Even without the shortcuts, however, integrals of this type are simple so long as you know which formula to use when. So, what follows is a flowchart you can follow to figure out which formula to use. After the chart are a number of example problems solved using the flowchart.

Before I introduce the flowchart, however, keep a few things in mind:

- Your final answer should **always** be a number. So, if you integrate and your answer is a vector, you’ve done something wrong! Specifically, if you’re trying to integrate a vector-valued function (e.g., $\mathbf{F} = \langle P, Q \rangle$), you should do a dot product at some point.
- Sometimes, you’ll have to figure out which parametrization to use. If you’re given something like $y = g(x)$, you should probably parametrize like $\mathbf{r}(t) = \langle t, g(t) \rangle$. Here, $t$ will have the same bounds as $x$. If you’re given something else, you’ll have to work a little harder. Some common cases are the parametrization of a line and the parametrization of a circle. Say you’re given two points, $P$ and $Q$. Then, it’s probably easiest to parametrize like $\mathbf{r}(t) = (1 - t)P + tQ$ for $0 \le t \le 1$. (Note: You can also find an equation for your line in terms of $x$ and use $\mathbf{r}(t) = \langle t, g(t) \rangle$. Sometimes this is easier, sometimes it’s harder – it’s hard to say *a priori* how it will go. Really, the parametrization you use is a matter of personal preference and how comfortable you are with each method.) Parametrizing a circle is a little more straightforward. Unless you have a particularly compelling reason not to (and off the top of my head, I certainly can’t think of one), always use $\mathbf{r}(t) = \langle R\cos t, R\sin t \rangle$ for a circle of radius $R$. Then, make your bounds correspond to how much of the circle you’re considering.
- The above point is, of course, a little trickier for 3D surfaces, but the same ideas roughly apply. For example, if you’re given $z = g(x, y)$, you can parametrize using $\mathbf{r}(u, v) = \langle u, v, g(u, v) \rangle$. Here, your bounds on $u$ and $v$ correspond to your bounds on $x$ and $y$. See your book (or me or Gus) for more details.

With those points out of the way, we can now look at the flowchart. Though it doesn’t explicitly say so, start in the upper-left corner (click on the image to make it bigger).

This flowchart describes (essentially) four cases. We’ll give an example of each.

**Example 1**: Contour Integral of a scalar function

Say we’re trying to find $\oint_C f(x, y)\,ds$ for a scalar function $f$ over the contour $C$ described by a circle of radius $R$ centered at the origin. Following the flowchart, we answer a few questions. First, we are given a function ($f$), so we move on to answer if our function is scalar or vector. Obviously, we know from the title of this example that we have a scalar function. If we weren’t given that information, however, we would still know we have a scalar function (recall, a scalar function is anything which *isn’t* a vector function). Moreover, since we’re integrating over something in 2D, we know that we have a contour (or line) integral. So, following the flowchart, we evaluate our integral using $\int_C f\,ds = \int_a^b f(\mathbf{r}(t))\,\|\mathbf{r}'(t)\|\,dt$. That is, we only have three more things to do until we’ve found our answer: first, we must find $\mathbf{r}(t)$ and its derivative. Second, we substitute what we’ve just found into the equation for scalar line integrals. Finally, we evaluate the integral.

Since we’re integrating over a circle of radius $R$ (recall that a circle with radius $R$ is written implicitly as $x^2 + y^2 = R^2$), we want to parametrize using $\mathbf{r}(t) = \langle R\cos t, R\sin t \rangle$ for $0 \le t \le 2\pi$. So, $\mathbf{r}'(t) = \langle -R\sin t, R\cos t \rangle$ and $\|\mathbf{r}'(t)\| = R$. Thus, we’ve accomplished our first step. Second, we want to substitute these into our integral; taking $f(x, y) = xy$ for concreteness, this gives

$$\int_0^{2\pi} (R\cos t)(R\sin t)\,R\,dt = R^3 \int_0^{2\pi} \sin t \cos t\,dt.$$

Finally, we want to evaluate this integral. There are a few methods for doing this (one could, say, rewrite the integrand using trig identities). I’ll use substitution. So, setting $u = \sin t$, we have $du = \cos t\,dt$, and our bounds go from $t = 0$ to $t = 2\pi$, i.e., from $u = \sin 0 = 0$ to $u = \sin 2\pi = 0$. Since our bounds now go from $0$ to $0$, we’re not integrating over any area! That is, our final answer is

$$R^3 \int_0^0 u\,du = 0.$$

Of course, given the symmetry of this integrand around the closed contour, this is expected.
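The computation can be double-checked numerically. The sketch below (using the illustrative integrand $f(x, y) = xy$, which is just one concrete choice) approximates the scalar line integral with the midpoint rule:

```python
import math

# Midpoint-rule approximation of the scalar line integral of f over a
# circle of radius R, using r(t) = (R cos t, R sin t) with ||r'(t)|| = R.
def scalar_line_integral(f, R, n=100_000):
    total = 0.0
    dt = 2 * math.pi / n
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = R * math.cos(t), R * math.sin(t)
        total += f(x, y) * R * dt
    return total

# With the illustrative integrand f(x, y) = xy, the answer vanishes by symmetry.
assert abs(scalar_line_integral(lambda x, y: x * y, R=2.0)) < 1e-6

# Sanity check: integrating f = 1 recovers the circumference 2*pi*R.
assert abs(scalar_line_integral(lambda x, y: 1.0, R=2.0) - 4 * math.pi) < 1e-6
```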

**Example 2**: Contour integral of a vector function

When we have a vector-valued function, we know we need to take a dot product. Otherwise, we’d be integrating a vector (which we know is wrong from the bulleted list above!). So, say we have a vector field $\mathbf{F}$ (for concreteness, take $\mathbf{F} = \langle 2xy, x^2 \rangle$) and a contour $C$ described by $\mathbf{r}(t)$ for $a \le t \le b$. Following the flowchart, we get to a point where we have to test if $\mathbf{F}$ is conservative. Indeed, testing each of our conditions (here, $\partial_y(2xy) = 2x = \partial_x(x^2)$), we find that $\mathbf{F}$ is conservative. So, we can use the fundamental theorem of line integrals to evaluate our answer quite easily. We’ll do it the long way too, for practice.

- Using the Fundamental Theorem of Line Integrals (FTLI)

To use the FTLI, we have to first find the potential function for $\mathbf{F}$. In this case, it’s pretty straightforward. We want to find a scalar function, say $\varphi$, such that $\nabla\varphi = \mathbf{F}$. By inspection, we see that $\varphi(x, y) = x^2 y$ will work for the field above ($\nabla\varphi = \langle 2xy, x^2 \rangle = \mathbf{F}$). So, to finish evaluating, we simply use the formula $\int_C \mathbf{F} \cdot d\mathbf{r} = \varphi(\mathbf{r}(b)) - \varphi(\mathbf{r}(a))$.

- Using Parameterization

Following the flowchart down the other branch (as if $\mathbf{F}$ were not conservative) leads to the formula $\int_C \mathbf{F} \cdot d\mathbf{r} = \int_a^b \mathbf{F}(\mathbf{r}(t)) \cdot \mathbf{r}'(t)\,dt$. Like in Example 1, we follow a three-step process here. First, we find $\mathbf{F}(\mathbf{r}(t))$. Next, we find $\mathbf{r}'(t)$. Finally, we integrate their dot product over $a \le t \le b$. Both methods give the same answer (which is nice).
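The two routes can be checked against each other numerically. The sketch below uses the illustrative field $\mathbf{F} = \langle 2xy, x^2 \rangle$ with potential $\varphi = x^2 y$, along the made-up curve $\mathbf{r}(t) = (t, t^2)$, $0 \le t \le 1$:

```python
# Compare direct parametrization against the FTLI for the illustrative
# conservative field F = <2xy, x^2> (potential phi = x^2 * y) along the
# made-up curve r(t) = (t, t^2) for 0 <= t <= 1.
def phi(x, y):
    return x * x * y

def direct_line_integral(n=100_000):
    # Integrate F(r(t)) . r'(t) dt by the midpoint rule.
    total, dt = 0.0, 1.0 / n
    for i in range(n):
        t = (i + 0.5) * dt
        x, y = t, t * t          # r(t)
        dx, dy = 1.0, 2.0 * t    # r'(t)
        Fx, Fy = 2 * x * y, x * x
        total += (Fx * dx + Fy * dy) * dt
    return total

# FTLI: the integral is phi at the endpoint minus phi at the start point.
ftli = phi(1.0, 1.0) - phi(0.0, 0.0)
assert abs(direct_line_integral() - ftli) < 1e-6
```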

**Example 3**: Surface integral of a scalar function

Let’s consider $\iint_S f\,dS$ for a scalar function $f$ over a surface $S$ given by a plane $z = g(x, y)$ in the first octant ($x, y, z \ge 0$). Following the flowchart appropriately, we arrive at the formula

$$\iint_S f\,dS = \iint_D f(\mathbf{r}(u, v))\,\|\mathbf{r}_u \times \mathbf{r}_v\|\,dA.$$

So, to continue, we need to find a parametrization, $\mathbf{r}(u, v)$, of our surface, $S$. One valid parametrization here is $\mathbf{r}(u, v) = \langle u, v, g(u, v) \rangle$. Note that this comes from solving the surface equation for $z$ and replacing $x$ and $y$ with $u$ and $v$, respectively. Then, to find our normal vector, we compute

$$\mathbf{r}_u \times \mathbf{r}_v = \langle -g_u, -g_v, 1 \rangle.$$

(It’s quite hard to write down how to do a cross product in wordpress. If you’re unsure how I got this, check your book, or ask me or Gus.) Now that we have our normal vector, we simply need to take its magnitude and take the actual integral. So, we find that $\|\mathbf{r}_u \times \mathbf{r}_v\| = \sqrt{1 + g_u^2 + g_v^2}$. Here, it’s important to point out a useful shortcut. You learned in class that, if you can write your surface as $z = g(x, y)$ for some function $g$, you can use the formula

$$\iint_S f\,dS = \iint_D f(x, y, g(x, y))\,\sqrt{1 + g_x^2 + g_y^2}\,dA.$$

Let’s try that in this case: the square root is exactly the magnitude we just computed, so $dS = \sqrt{1 + g_x^2 + g_y^2}\,dA$, just like we found above.

Finally, we just need to find the bounds of our integral and actually compute the answer. Because we’re doing a surface integral, we only look at the projection of our surface onto the xy-plane. In other words, we need to find the triangle in the xy-plane over which the surface sits. So, we set $z = 0$ in the equation of the plane and find the line forming the triangle’s hypotenuse; the bounds on our integral come from the resulting triangular region $D$. So, our integral becomes an ordinary double integral over $D$, which we evaluate directly. The answer may be ugly, but it’s certainly an answer.
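The recipe can be sanity-checked numerically on a concrete, made-up instance: $f(x, y, z) = z$ over the plane $x + y + z = 1$ in the first octant, where $dS = \sqrt{3}\,dA$ and the exact answer works out by hand to $\sqrt{3}/6$:

```python
import math

# Made-up instance of Example 3: f(x, y, z) = z over the plane x + y + z = 1
# in the first octant. With z = g(x, y) = 1 - x - y, we have
# dS = sqrt(1 + g_x^2 + g_y^2) dA = sqrt(3) dA, and the projection D is the
# triangle 0 <= x <= 1, 0 <= y <= 1 - x.
def surface_integral(n=1000):
    total = 0.0
    h = 1.0 / n
    root3 = math.sqrt(3.0)
    for i in range(n):
        x = (i + 0.5) * h
        for j in range(n):
            y = (j + 0.5) * h
            if x + y <= 1.0:          # stay inside the triangle D
                z = 1.0 - x - y       # g(x, y)
                total += z * root3 * h * h
    return total

exact = math.sqrt(3.0) / 6.0  # computed by hand for this plane and integrand
assert abs(surface_integral() - exact) < 1e-3
```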

**Example 4**: Surface integral of a vector function

Finally, let’s find the surface integral (flux) of a vector field $\mathbf{F}$ over the helicoid parameterized by $\mathbf{r}(u, v) = \langle u\cos v, u\sin v, v \rangle$ for bounded $u$ and $v$. Following the flowchart, we see that this integral is fairly straightforward. We simply need to find $\mathbf{F}(\mathbf{r}(u, v))$ and $\mathbf{r}_u \times \mathbf{r}_v$. So, we do each of these in turn.

Substituting $\mathbf{r}$ into $\mathbf{F}$ gives $\mathbf{F}(\mathbf{r}(u, v))$. Then, taking the cross product of the partials of $\mathbf{r}$ gives

$$\mathbf{r}_u \times \mathbf{r}_v = \langle \sin v, -\cos v, u \rangle.$$

Next, we find the dot product of these two quantities, which leaves an ordinary double integral in $u$ and $v$. Depending on $\mathbf{F}$, the integrand can be particularly ugly, but we can certainly work through it (mostly by using integration by parts).
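As a numerical sanity check of the helicoid setup, the sketch below computes the flux of the simple, made-up field $\mathbf{F} = \langle 0, 0, 1 \rangle$ over $0 \le u \le 1$, $0 \le v \le 2\pi$; here $\mathbf{F} \cdot (\mathbf{r}_u \times \mathbf{r}_v) = u$, so the exact flux is $\pi$:

```python
import math

# Flux of the made-up field F = <0, 0, 1> through the helicoid
# r(u, v) = (u cos v, u sin v, v), 0 <= u <= 1, 0 <= v <= 2*pi.
# Since r_u x r_v = <sin v, -cos v, u>, the integrand F . (r_u x r_v) = u
# and the exact flux is (1/2) * (2*pi) = pi.
def flux(n=1000):
    total = 0.0
    du, dv = 1.0 / n, 2 * math.pi / n
    for i in range(n):
        u = (i + 0.5) * du
        for j in range(n):
            v = (j + 0.5) * dv
            normal = (math.sin(v), -math.cos(v), u)   # r_u x r_v
            F = (0.0, 0.0, 1.0)
            total += sum(a * b for a, b in zip(F, normal)) * du * dv
    return total

assert abs(flux() - math.pi) < 1e-6
```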

So that’s it – how you do surface and line integrals without any of the tools from chapter 17. Do you have any questions after that? Did I make any mistakes in my math? If so, feel free to comment below (or send me an email) and I’ll do my best to fix things!

### Write down what you’ve done

I was taught both basic probability and statistics theory by a young Swiss statistician who liked to talk about putting new mathematical tricks and theorems “in your math pocket.” Certain tricks appear again and again throughout disparate mathematical disciplines (multiplying by a “well-chosen one” or switching the order of integration are basic tricks which come to mind immediately). A mathematician’s math pocket is supposed to hold these tricks so they can be called upon whenever needed. In a post to his blog, Fields Medal winner Terry Tao echoed a similar sentiment. Rather than filling up one’s mental math pocket, however, he suggests writing these tricks down.

In this light, two other aspiring applied mathematicians at the Colorado School of Mines and I will be posting to this blog. Certainly, it will serve as a spot to write down these tricks. However, it will (hopefully) be much more. Each of us hopes to be a professional academic mathematician. Such a profession, of course, requires that we be capable of doing mathematics. Much of the job, however, involves teaching and writing. As such, this blog will help us to improve our mathematical exposition and our pedagogical skills. It will allow us to record neat tricks and problems we’ve come across. Mathematics requires creativity and the ability to apply mathematical concepts to interesting physical and scientific phenomena, so the blog will also give us a reason to write up the ideas we have and the things to which we’d like to apply mathematics. Mathematics is inherently collaborative, so we will share interesting papers and mathematical tidbits. The blog will even serve as a way for the three of us to continue communicating as we spread across the country for our Ph.D. studies.

It’s hard to say now how often this blog will be updated (presumably there will be many new posts over the summers and winters with a significant drop-off once classes begin). This blog, however, will certainly be an aid as we continue along the path to becoming applied mathematicians.