Mathematical Food for Thought

 
 
 
  • About

    Serves a Daily Special and an All-You-Can-Eat Course in Problem Solving. Courtesy of me, Jeffrey Wang.
 
What’s Your Function? Topic: Algebra. Level: AMC/AIME. July 7th, 2007

Problem: Given two positive reals  \alpha and  \beta , show that there is a continuous function  f , not identically zero, that satisfies  f(x) = f(x+\alpha)+f(x+\beta) .

Solution: There are several special cases that are interesting to look at before we make a guess as to what type of function  f will be. First we consider the case  \alpha = \beta = 1 . This immediately gives

 f(x) = 2f(x+1) .

It should not be too difficult to guess that  f(x) = 2^{-x} is a solution to this, as well as any constant multiple of it. Now try  \alpha = -1 and  \beta = -2 (relaxing the positivity condition for a moment), resulting in

 f(x) = f(x-1)+f(x-2) .

Looks a lot like Fibonacci, right? In fact, one possible function is just  f(x) = \phi^x , where  \phi is the golden ratio, since  \phi^2 = \phi+1 . That’s pretty convenient.

Notice how both of these illuminating examples are exponential functions, which leads us to guess that our function will be exponential as well. So, following this track, we set

 f(x) = a^x

so we simply need to solve

 a^x = a^{x+\alpha}+a^{x+\beta}

 a^x = a^x\left(a^{\alpha}+a^{\beta}\right) .

Unfortunately, the equation  a^{\alpha}+a^{\beta} = 1 does not always have a real solution (take  \alpha = 1 and  \beta = -1 , for example, where the left side is at least  2 for every positive  a ). But for the problem as stated this is no obstacle: when  \alpha and  \beta are both positive,  g(a) = a^{\alpha}+a^{\beta} is continuous on  (0, 1] , tends to  0 as  a \rightarrow 0^+ , and equals  2 at  a = 1 , so by the Intermediate Value Theorem there is some  a \in (0, 1) with  g(a) = 1 . Hence  f(x) = a^x works whenever  \alpha, \beta > 0 , and more generally whenever the equation  a^{\alpha}+a^{\beta} = 1 has a positive real solution.
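For a concrete sanity check, here is a short Python sketch (my addition, not part of the argument) that bisects for the base  a and then verifies the functional equation numerically; the helper find_base and the sample values of  \alpha and  \beta are just for illustration.

```python
import math

def find_base(alpha, beta, iters=200):
    """Bisect for a in (0, 1) with a**alpha + a**beta = 1.

    For alpha, beta > 0 the map a -> a**alpha + a**beta increases
    from 0 to 2 on (0, 1], so a root is guaranteed to exist.
    """
    lo, hi = 1e-12, 1.0
    for _ in range(iters):
        mid = (lo + hi) / 2
        if mid**alpha + mid**beta < 1:
            lo = mid  # still too small
        else:
            hi = mid  # overshot
    return (lo + hi) / 2

alpha, beta = 2.0, 3.0
a = find_base(alpha, beta)
f = lambda x: a**x
# f(x) - (f(x + alpha) + f(x + beta)) should vanish for every x:
print(abs(f(0.7) - f(0.7 + alpha) - f(0.7 + beta)))  # ~ 1e-16
```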

Addition At Its Finest. Topic: Calculus/S&S. June 29th, 2007

Problem: Evaluate  \displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n(n+1)} where  x is a real number with  |x| < 1 .

Solution: Looking at that all too common denominator, we do a partial fraction decomposition in hopes of telescoping series. The summation becomes

 \displaystyle \sum_{n=1}^{\infty} \left(\frac{x^n}{n}-\frac{x^n}{n+1}\right) .

Common Taylor series knowledge tells us that

 \displaystyle \ln{(1-x)} = -\left(x+\frac{x^2}{2}+\frac{x^3}{3}+\cdots\right) = -\sum_{n=1}^{\infty} \frac{x^n}{n} ,

which conveniently fits the first part of the summation. As for the second part, we get

 \displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n+1} = \frac{1}{x} \sum_{n=1}^{\infty} \frac{x^{n+1}}{n+1} = \frac{-\ln{(1-x)}-x}{x}

from the same Taylor series. Combining the results, our answer is then

 \displaystyle \sum_{n=1}^{\infty} \frac{x^n}{n(n+1)} = 1-\ln{(1-x)}+\frac{\ln{(1-x)}}{x} .

QED.
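As a quick numerical check (my addition, using nothing beyond the standard library), partial sums of the series should approach the closed form for any  |x| < 1 :

```python
import math

def partial_sum(x, terms=100_000):
    # Direct summation of x^n / (n(n+1)) for n = 1 .. terms.
    return sum(x**n / (n * (n + 1)) for n in range(1, terms + 1))

def closed_form(x):
    # 1 - ln(1 - x) + ln(1 - x)/x, valid for 0 < |x| < 1.
    return 1 - math.log(1 - x) + math.log(1 - x) / x

for x in (0.5, -0.9, 0.99):
    print(x, partial_sum(x), closed_form(x))  # each pair should agree
```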

——————–

Comment: Even though the trick at the beginning didn’t actually get much to telescope, the idea certainly made it easier to recognize the Taylor series. Algebraic manipulations are nifty to carry around and can be applied in problems wherever you go.

——————–

Practice Problem: Show that  \displaystyle \int_0^{\frac{\pi}{2}} \ln{(\tan{x})} \, dx = 0 .

The Smaller The Better. Topic: Calculus. June 18th, 2007

Problem: Given a complicated function  f: \mathbb{R}^n \rightarrow \mathbb{R} , find an approximate local minimum.

Solution: The adjective “complicated” is there only so that we may assume there is no easy way to solve  \nabla f = 0 directly. We seek an algorithm that will lead us to a local minimum (and hopefully a global minimum as well).

We start at an arbitrary point  X_0 = (x_1, x_2, \ldots, x_n) . Consider the following process (for  k = 0, 1, 2, \ldots ), known as gradient descent:

1. Calculate (approximately)  \nabla f(X_k) .

2. Set  X_{k+1} = X_k - \gamma_k \nabla f(X_k) , where  \gamma_k is a step size that can be determined by a line search.

It is well-known that the gradient points in the direction of maximum increase, and hence its negative points in the direction of maximum decrease. This algorithm is therefore based on the idea that we always move in the direction that decreases  f the fastest. Sounds pretty good, right? Well, unfortunately gradient descent can converge very slowly, so it is only really practical for smaller optimization problems. Fortunately, faster algorithms exist, though they are more complex: the nonlinear conjugate gradient method, for instance, or Newton’s method, which requires computing the inverse of the Hessian matrix (a pain).
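Here is a minimal Python sketch of the iteration above, with a fixed step size  \gamma standing in for a proper line search; the quadratic test function is just an assumption for illustration.

```python
import numpy as np

def gradient_descent(grad_f, x0, step=0.1, tol=1e-8, max_iter=10_000):
    """Repeatedly step opposite the gradient until it (nearly) vanishes."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:  # grad f ~ 0: approximate local minimum
            break
        x = x - step * g             # X_{k+1} = X_k - gamma * grad f(X_k)
    return x

# Test: f(x, y) = (x - 1)^2 + 2(y + 3)^2, minimized at (1, -3).
grad = lambda p: np.array([2 * (p[0] - 1), 4 * (p[1] + 3)])
print(gradient_descent(grad, [0.0, 0.0]))  # ~ [ 1. -3.]
```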

Colorful! Topic: Calculus. June 6th, 2007

Theorem: (Green’s Theorem) Let  R be a simply connected plane region whose boundary is a simple, closed, piecewise smooth curve  C oriented counterclockwise. If  f(x, y) and  g(x, y) are continuous and have continuous first partial derivatives on some open set containing  R , then

 \displaystyle \oint_C f(x, y) dx + g(x, y) dy = \int_R \int \left(\frac{\partial g}{\partial x}-\frac{\partial f}{\partial y}\right) dA .

——————–

Problem: Evaluate  \displaystyle \oint_C x^2y dx + (y+xy^2) dy , where  C is the boundary of the region enclosed by  y = x^2 and  x = y^2 .

Solution: First, verify that this region satisfies all of the requirements for Green’s Theorem – indeed, it does. So we may apply the theorem with  f(x, y) = x^2y and  g(x, y) = y+xy^2 . From these, we have  \frac{\partial g}{\partial x} = y^2 and  \frac{\partial f}{\partial y} = x^2 . Then we obtain

 \displaystyle \oint_C x^2y dx + (y+xy^2) dy = \int_R \int (y^2-x^2) dA .

But clearly this integral over the region  R can be represented as  \displaystyle \int_0^1 \int_{x^2}^{\sqrt{x}} (y^2-x^2) dy dx , so it remains a matter of calculation to get the answer. First, we evaluate the inner integral to get

 \displaystyle \int_0^1 \int_{x^2}^{\sqrt{x}} (y^2-x^2) dy dx = \int_0^1 \left[\frac{y^3}{3}-x^2y\right]_{x^2}^{\sqrt{x}} dx = \int_0^1 \left(\frac{x^{3/2}}{3}-x^{5/2}-\frac{x^6}{3}+x^4\right)dx .

Then finally we have

 \displaystyle \int_0^1 \left(\frac{x^{3/2}}{3}-x^{5/2}-\frac{x^6}{3}+x^4\right)dx = \left[\frac{2x^{5/2}}{15}-\frac{2x^{7/2}}{7}-\frac{x^7}{21}+\frac{x^5}{5}\right]_0^1 = \frac{2}{15}-\frac{2}{7}-\frac{1}{21}+\frac{1}{5} = 0 .

QED.
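If you’d rather not trust my antiderivatives, here is a quick symbolic check (my addition, assuming sympy is available):

```python
import sympy as sp

x, y = sp.symbols('x y')

# Double integral of y^2 - x^2 over the region between y = x^2 and y = sqrt(x).
inner = sp.integrate(y**2 - x**2, (y, x**2, sp.sqrt(x)))
print(sp.integrate(inner, (x, 0, 1)))  # 0
```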

——————–

Comment: To me, Green’s Theorem is a very interesting result. It’s not at all obvious that a line integral along the boundary of a region is equivalent to an integral of some partial derivatives in the region itself. A simplified proof of the result can be obtained by proving that

 \displaystyle \oint_C f(x, y) dx = -\int_R \int \frac{\partial f}{\partial y} dA and  \displaystyle \oint_C g(x, y) dy = \int_R \int \frac{\partial g}{\partial x} dA .

——————–

Practice Problem: Let  R be a plane region with area  A whose boundary is a piecewise smooth simple closed curve  C . Show that the centroid  (\overline{x}, \overline{y}) of  R is given by

 \displaystyle \overline{x} = \frac{1}{2A} \oint_C x^2 dy and  \displaystyle \overline{y} = -\frac{1}{2A} \oint_C y^2 dx .

More Integrals… *whine*. Topic: Calculus. June 4th, 2007

Definition: (Jacobian) If  T is the transformation from the  uv -plane to the  xy -plane defined by the equations  x = x(u, v) and  y = y(u, v) , then the Jacobian of  T is denoted by  J(u, v) or by  \partial(x, y)/\partial(u, v) and is defined by

 J(u, v) = \frac{\partial(x, y)}{\partial(u, v)} = \frac{\partial x}{\partial u} \cdot \frac{\partial y}{\partial v} - \frac{\partial y}{\partial u} \cdot \frac{\partial x}{\partial v} ,

i.e. the determinant of the matrix of the partial derivatives (also known as the Jacobian matrix). Naturally, this can be generalized to more variables.
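As a concrete illustration (borrowing the transformation from the problem below, and assuming sympy is available), the Jacobian matrix and its determinant can be computed directly:

```python
import sympy as sp

u, v = sp.symbols('u v')

# x = (u - v)/2, y = (u + v)/2 as a vector-valued function of (u, v).
T = sp.Matrix([(u - v) / 2, (u + v) / 2])
J = T.jacobian([u, v])  # the Jacobian matrix of partial derivatives
print(J.det())          # 1/2
```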

——————–

Theorem: If the transformation  x = x(u, v) ,  y = y(u, v) maps the region  S in the  uv -plane into the region  R in the  xy -plane, and if the Jacobian  \partial(x, y)/\partial(u, v) is nonzero and does not change sign on  S , then (with appropriate restrictions on the transformation and the regions) it follows that

 \displaystyle \int_R \int f(x, y) dA_{xy} = \int_S \int f(x(u, v), y(u, v)) \left|\frac{\partial(x, y)}{\partial(u, v)} \right| dA_{uv} .

——————–

Problem: Evaluate  \displaystyle \int_R \int e^{(y-x)/(y+x)} dA , where  R is the region in the first quadrant enclosed by the trapezoid with vertices  (0, 1); (1, 0); (0, 4); (4, 0) .

Solution: The bounding lines can be written as  x = 0 ,  y = 0 ,  y = -x+1 , and  y = -x+4 . Now consider the transformation  u = y+x and  v = y-x . In the  uv -plane, the bounding lines of the new region  S can now be written as  u = 1 ,  u = 4 ,  v = u , and  v = -u .

We can write  x and  y as functions of  u and  v : simply  x = \frac{u-v}{2} and  y = \frac{u+v}{2} . So the Jacobian is  \displaystyle \frac{\partial(x, y)}{\partial(u, v)} = \frac{\partial x}{\partial u} \cdot \frac{\partial y}{\partial v} - \frac{\partial y}{\partial u} \cdot \frac{\partial x}{\partial v} = \frac{1}{2} \cdot \frac{1}{2} - \frac{1}{2} \cdot \left(-\frac{1}{2} \right) = \frac{1}{2} .

Then our original integral becomes  \displaystyle \int_R \int e^{(y-x)/(y+x)} dA = \frac{1}{2} \int_S \int e^{v/u} dA . And this is equivalent to

 \displaystyle \frac{1}{2} \int_S \int e^{v/u} dA = \frac{1}{2} \int_1^4 \int_{-u}^u e^{v/u} dv du = \frac{1}{2} \int_1^4 \big[ u e^{v/u} \big]_{v=-u}^u du = \frac{1}{2} \int_1^4 u\left(e-\frac{1}{e}\right) du = \frac{15}{4}\left(e-\frac{1}{e}\right) .

QED.
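Again, a short symbolic check of the final computation (my addition, assuming sympy is available):

```python
import sympy as sp

u = sp.symbols('u', positive=True)
v = sp.symbols('v')

# (1/2) * Integral from u=1 to 4 of Integral from v=-u to u of e^(v/u) dv du.
inner = sp.integrate(sp.exp(v / u), (v, -u, u))  # u*(e - 1/e)
result = sp.Rational(1, 2) * sp.integrate(inner, (u, 1, 4))
print(sp.simplify(result))  # 15*E/4 - 15*exp(-1)/4, i.e. (15/4)(e - 1/e)
```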

——————–

Comment: Note that the above theorem is very important in multivariable calculus, as it is the multivariable analogue of  u -substitution in one variable, which we all know is the ultimate integration technique. It works the same way, giving you far more flexibility in both the function you are integrating and the region you are integrating over.

——————–

Practice Problem: Evaluate  \displaystyle \int_R \int (x^2-y^2) dA , where  R is the rectangular region enclosed by the lines  y = -x ,  y = 1-x ,  y = x ,  y = x+2 .
