Systems of Linear Differential Equations

From Math Images
Revision as of 14:39, 24 July 2013 by Christaranta (talk | contribs)
U.S. and Soviet Union nuclear warheads
Fields: Calculus and Dynamic Systems

Differential equations have always been a popular research topic due to their many applications. Systems of linear differential equations are no exception; they can be used to model arms races, simple predator-prey interactions, and more.


Basic Description

In 1945, the United States dropped atomic bombs on the Japanese cities of Hiroshima and Nagasaki, ending World War II and establishing itself as a new superpower. The Soviet Union, while an ally of the United States during WWII, feared the bomb and spent the next few years developing their own atomic bomb, finally detonating their first nuclear weapon in 1949. This marked the beginning of a long and expensive arms race between the two powers. The competition for nuclear might, along with the countries' different ideologies (communism vs. capitalism), caused a political and psychological war during the second half of the 20th century, now known as the Cold War. During this period, both powers invested tremendous resources into their technology and weaponry, worried that the other was pulling ahead.

We will attempt to build a simple model to see the effects of the nuclear arms race. What type of model would fit best? Consider these two images.

Figure 1
Figure 2

Note that, starting in 1947, the Soviet Union rapidly increased its defense spending (Figure 2). In response, the United States rapidly increased its defense spending beginning in 1948 (Figure 1). From that point on, the spending of the two powers pushed and pulled, staying within a fairly narrow range (the two higher points of United States defense spending around 1953 and 1968 are related to the Korean and Vietnam Wars). These fluctuating changes are connected with each power's analysis of its own defense spending and the other power's defense spending. The more the United States is already spending on defense, the less willing it is to increase its defense spending for the following year. However, if the Soviet Union is spending a tremendous amount on defense, then the United States will increase its defense spending to not be left behind. In other words, we can analyze the defense spending of each country by analyzing the change in defense spending from year to year. Since derivatives are the mathematical tool to analyze change, we will build our model around derivatives.

Let x(t) and y(t) be the defense spending of the United States and the Soviet Union respectively at time t (in years). Then x′(t) is the derivative of x with respect to t, and it represents the rate at which U.S. spending is changing at time t. Likewise, y′(t) is the derivative of y with respect to t, and it represents the rate at which Soviet spending is changing at time t. As mentioned above, the rate of U.S. defense spending depends negatively on its current defense spending and positively on the Soviet Union's current defense spending. Correspondingly, the rate of Soviet defense spending depends negatively on its current defense spending and positively on the United States's current defense spending. The model's starting point is the year in which the arms race started, when t = 0. Let x0 and y0 be the initial (t = 0) defense spending of the United States and the Soviet Union respectively. The following equations fit these conditions.

x'(t) = ax(t) + by(t)\quad\quad\quad x(0) = x_0 , where a is negative and b is positive.
y'(t) = cx(t) + dy(t)\quad\quad\quad\, y(0) = y_0 , where c is positive and d is negative.

This is a linked system of first order linear differential equations. In other words, equation x′(t) states that the United States lowers its budget by |a| dollars for every dollar it spent the previous year, and raises it by b dollars for every dollar in the Soviet Union's budget. Equation y′(t) states that the Soviet Union lowers its budget by |d| dollars for every dollar it spent the previous year, and raises it by c dollars for every dollar in the United States's budget.
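To see what such a system does over time, we can step it forward numerically. The sketch below is a minimal forward-Euler simulation; the `simulate` helper and the coefficient values passed to it are hypothetical, chosen only so that a and d are negative while b and c are positive, as the model requires.

```python
# Forward-Euler simulation of the model
#   x'(t) = a*x + b*y,   y'(t) = c*x + d*y
# All coefficient values used here are illustrative, not historical data.

def simulate(a, b, c, d, x0, y0, dt=0.01, years=10.0):
    """Step the system forward with small Euler steps and return (x, y)."""
    x, y = x0, y0
    for _ in range(int(years / dt)):
        # Update both budgets simultaneously from the current state
        x, y = x + (a * x + b * y) * dt, y + (c * x + d * y) * dt
    return x, y

# a, d negative (self-restraint), b, c positive (reaction to the rival)
x, y = simulate(a=-0.5, b=0.4, c=0.4, d=-0.5, x0=13.0, y0=15.0)
print(x, y)
```

With these particular made-up coefficients the mutual restraint outweighs the mutual provocation, so both budgets settle downward rather than exploding.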

Note: We must use our model with a degree of skepticism because there are far more factors affecting the arms race than just the two countries' defense spending, such as available money/resources and the spending requirements of other important programs. Nevertheless, our model is useful for analyzing the general outline of the Cold War arms race and predicting its outcomes.

Introduction to Systems of Linear Differential Equations

We clarify a few terms:

  • A differential equation is an equation that involves one or more functions and their derivatives.
  • An nth order differential equation involves at most the nth derivative of any function.
  • A linked system of differential equations has at least one equation that involves the derivatives or values of more than one function.
  • A linear differential equation is one whose expressions are linear combinations of the unknown functions and their derivatives.

Each equation in our system is a linear combination of the functions x(t) and y(t), and each involves only a first derivative of x or y. Hence we have a linked system of first order linear differential equations.

We can use eigentheory to solve any system of first order linear differential equations. Ultimately, the outcome of the nuclear arms race depends only on the values of a, b, c, and d (read the More Mathematical Explanation for details).

A More Mathematical Explanation


Solving the Two Dimensional System

We will first solve the general system of two linear differential equations.

The general equations are

x'(t) = ax(t) + by(t) \quad\quad\quad x(0) = x_0
y'(t) = cx(t) + dy(t) \quad\quad\quad\, y(0) = y_0 .

Since we are solving the general system, a, b, c, and d can be any complex numbers. Naturally, the systems generated from our arms race model are a special case of this general system, so knowing how to solve the general system will let us solve the arms race model.

We can write this system with matrices in the form

\vec{u}'=A\vec{u},    where     A=\begin{bmatrix} a & b \\ c & d \end{bmatrix},     \vec{u} = \begin{bmatrix} x \\ y \end{bmatrix},    and     \vec{u}' = \begin{bmatrix} x' \\ y' \end{bmatrix}.

To gain intuition about how to solve this equation, we first consider the one dimensional case.

The One Dimensional Case

If the two dimensional case involved the equation \vec{u}' = A\vec{u} with A a 2x2 matrix, then the A in the equation of the one dimensional case must be a 1x1 matrix, or just a constant. Consequently, \vec{u} becomes the scalar function u; it is no longer a vector. Hence the one dimensional equation is

u' = ku, where k is a real number.

Solving this requires basic knowledge of calculus. We can write this as

u' = \frac{du}{dt} = ku (remember that u is a function of t)
\frac{1}{u} du = kdt (rearranged variables).

Integrate both sides:

\int \frac{1}{u} du = \int kdt
ln(u) = kt + c
e^{ln(u)} = e^{kt+c}
u = e^{kt}e^{c}
u = Ce^{kt}. (So C = e^c.)

Now we can apply the initial condition. Note that

u(0) = Ce^{k \cdot 0} = C, so C = u(0) = u_0.

Thus the solution to the one dimensional case is

u(t) = u_0e^{kt}.
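As a quick sanity check of this formula, we can compare the derivative of u(t) = u0 e^{kt} against k·u(t) numerically. The values of u0 and k below are arbitrary.

```python
import math

# Verify that u(t) = u0 * e^(k t) satisfies u' = k u, using a centered
# finite difference to approximate the derivative. u0 and k are arbitrary.

def u(t, u0=2.0, k=0.7):
    return u0 * math.exp(k * t)

t, h = 1.5, 1e-6
numeric_derivative = (u(t + h) - u(t - h)) / (2 * h)
residual = abs(numeric_derivative - 0.7 * u(t))
print(residual)  # should be very close to zero
```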

Back to the Two Dimensional System

The solution to the one dimensional case suggests that the solution to our system is \vec{u}(t) = e^{At}\vec{u}_0. We will explore the concept of raising e to a matrix later. Since the solution to the one dimensional case is u = Ce^{kt}, a good guess of the solution to the two dimensional case is  \vec{u}(t) = e^{\lambda t} \vec{v} . We plug our guess into the original equations to find λ and \vec{v} that work. We write  \vec{v} = \begin{bmatrix} p \\ q \end{bmatrix} .

We verify our guess by plugging it into our original system:

 p \lambda e^{\lambda t} = x'(t) = ap e^{\lambda t} + bq e^{\lambda t}
 q \lambda e^{\lambda t} = y'(t) = cp e^{\lambda t} + dq e^{\lambda t}

After we divide by e^{\lambda t} (which is never 0), we can rewrite our system as  \lambda \vec{v} = A \vec{v}. Thus, our guess is correct if and only if λ is an eigenvalue and  \vec{v} is an eigenvector of A.

Now we must account for the fact that a 2x2 matrix can have two eigenvalues λ1 and λ2. Since differentiation is a linear operator, any sum of solutions is also a solution. We write the general solution as a linear combination

 \vec{u} = e^{\lambda_1 t} \vec{v}_1 + e^{\lambda_2 t} \vec{v}_2 .

We now apply our initial condition. When t = 0, our general solution becomes

 \vec{u}(0) = \vec{v}_1 + \vec{v}_2 .

It is rare that  \vec{v}_1 and  \vec{v}_2 add up exactly to the initial condition, so how do we solve this? We can multiply the two terms by constants c1 and c2, but we must check that the result still solves our system.


By multiplying constants to each term of our solution expression, our new proposed solution is

 \vec{u} = c_1 e^{\lambda_1 t} \vec{v}_1 + c_2 e^{\lambda_2 t} \vec{v}_2 .

We verify this by plugging into our matrix equation \vec{u}' = A\vec{u}.

 \begin{align} \vec{u}' &= \lambda_1 c_1 e^{\lambda_1 t} \vec{v}_1 + \lambda_2 c_2 e^{\lambda_2 t} \vec{v}_2 \\
&= A\vec{u} = A(c_1 e^{\lambda_1 t} \vec{v}_1 + c_2 e^{\lambda_2 t} \vec{v}_2) \\
&= A(c_1 e^{\lambda_1 t} \vec{v}_1) + A(c_2 e^{\lambda_2 t} \vec{v}_2) \text{ (Matrix multiplication is distributive.)} \\
&= c_1 e^{\lambda_1 t} (A \vec{v}_1) + c_2 e^{\lambda_2 t} (A \vec{v}_2) \text{ (Scalars pass through.)} \\
&= c_1 e^{\lambda_1 t} (\lambda_1 \vec{v}_1) + c_2 e^{\lambda_2 t} (\lambda_2 \vec{v}_2) \\
&= \lambda_1 c_1 e^{\lambda_1 t} \vec{v}_1 + \lambda_2 c_2 e^{\lambda_2 t} \vec{v}_2 \quad \blacksquare
\end{align}

In fact, multiplying the terms by constants is equivalent to taking a linear combination of our basis solutions e^{\lambda_1 t} \vec{v}_1 and e^{\lambda_2 t} \vec{v}_2. Remember that differentiation is a linear transformation, so linear combinations of existing solutions are naturally solutions as well.


Now that we have a more refined general solution, we need to know how to solve for the constants. Our initial condition now becomes

 \vec{u}(0) = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} = c_1\begin{bmatrix} p_1 \\ q_1 \end{bmatrix} + c_2\begin{bmatrix} p_2 \\ q_2 \end{bmatrix} .

We can rewrite this as

 \begin{bmatrix} p_1 & p_2 \\ q_1 & q_2 \end{bmatrix} \begin{bmatrix} c_1 \\ c_2 \end{bmatrix} = \begin{bmatrix} x_0 \\ y_0 \end{bmatrix} , which can be solved using Gaussian elimination.

Thus we have our particular solution for any initial condition. With the constants solvable, our general solution is

 \vec{u} = c_1 e^{\lambda_1 t} \vec{v}_1 + c_2 e^{\lambda_2 t} \vec{v}_2 .
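In code, the whole recipe (eigenvalues, eigenvectors, then constants from the initial condition) takes only a few lines. The matrix A and initial condition below are sample values chosen only for illustration, picked so that the eigenvalues are real.

```python
import numpy as np

# Solve u' = A u, u(0) = u0 via the eigenvector method.
# A and u0 are sample values chosen so the eigenvalues are real.
A = np.array([[1.0, 2.0],
              [3.0, 0.0]])
u0 = np.array([4.0, 1.0])

eigvals, V = np.linalg.eig(A)   # columns of V are the eigenvectors v1, v2
c = np.linalg.solve(V, u0)      # Gaussian-elimination step: V c = u0

def u(t):
    # u(t) = c1 e^{l1 t} v1 + c2 e^{l2 t} v2, written as a matrix product
    return V @ (c * np.exp(eigvals * t))

# Check u' = A u with a centered finite difference at an arbitrary time
t, h = 0.3, 1e-6
lhs = (u(t + h) - u(t - h)) / (2 * h)
print(np.allclose(lhs, A @ u(t), atol=1e-4))
```

Note that `np.linalg.eig` returns the eigenvectors as the columns of V, which is exactly the matrix of p's and q's used in the Gaussian elimination step above.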

Arms Race Example

We will solve a system that follows the rules of our arms race model. Since the real Cold War arms race is hard to model, we will make assumptions on the actions of the United States and the Soviet Union. Assume that the United States lowers its budget by 3 dollars for every dollar it spent the previous year, and raises it by 6 dollars for every dollar in the Soviet Union's budget. Further assume that the Soviet Union lowers its budget by 2 dollars for every dollar it spent the previous year, and raises it by 5 dollars for every dollar in the United States's budget. Finally assume that at t = 0 the United States defense spending is 130 billion dollars and the Soviet Union defense spending is 150 billion dollars.

x'(t) = -3x(t) + 6y(t) \quad\quad\quad x(0)=13
y'(t) = 5x(t) - 2y(t) \quad\quad\quad\;\;\; y(0)=15.

All of the constants are made up. The values of x and y are in tens of billions of dollars.

We can rewrite this as

 \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} -3 & 6 \\ 5 & -2 \end{bmatrix} \begin{bmatrix} x \\ y \end{bmatrix} .

To find the eigenvalues of A, remember that λ is an eigenvalue if and only if the determinant of (A - λI) is 0:

\det(A - \lambda I) = \begin{vmatrix} -3-\lambda & 6 \\ 5 & -2-\lambda \end{vmatrix} = (-3-\lambda)(-2-\lambda) - 30 = \lambda^2 + 5\lambda - 24 = (\lambda + 8)(\lambda - 3) = 0.

The eigenvalues are  \lambda_1 = 3 and  \lambda_2 = -8 . Solving  (A - \lambda I)\vec{v} = \vec{0} for each eigenvalue gives the eigenvectors  \vec{v}_1 = \begin{bmatrix} 1 \\ 1 \end{bmatrix} and  \vec{v}_2 = \begin{bmatrix} 6 \\ -5 \end{bmatrix} . The initial condition requires  c_1 + 6c_2 = 13 and  c_1 - 5c_2 = 15 , so  c_1 = \tfrac{155}{11} and  c_2 = -\tfrac{2}{11} , and the particular solution is

 \vec{u}(t) = \tfrac{155}{11} e^{3t} \begin{bmatrix} 1 \\ 1 \end{bmatrix} - \tfrac{2}{11} e^{-8t} \begin{bmatrix} 6 \\ -5 \end{bmatrix} .

Since one eigenvalue is positive, the e^{3t} term dominates: under these assumed coefficients, both countries' defense spending grows without bound and the arms race is unstable.
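The eigenvalue computation for this example can be sanity-checked numerically. The sketch below assumes NumPy is available; it recovers the eigenvalues of the coefficient matrix and the constants determined by the initial condition.

```python
import numpy as np

# Coefficient matrix and initial condition from the arms race example
A = np.array([[-3.0, 6.0],
              [ 5.0, -2.0]])
u0 = np.array([13.0, 15.0])    # spending in tens of billions at t = 0

eigvals, V = np.linalg.eig(A)  # columns of V are the eigenvectors
c = np.linalg.solve(V, u0)     # constants: solve V c = u0

print(np.sort(eigvals))        # the roots of lambda^2 + 5*lambda - 24
print(eigvals.max() > 0)       # a positive eigenvalue signals unbounded growth
```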


Why It's Interesting

We have shown how linear differential equations are highly applicable. They can model basic arms races and Hooke's law with a high degree of accuracy. We can use linear differential equations for any system whose derivatives and functions are written as linear combinations of one another. Examples include heat flow, bank interest, basic predator-prey models, and population through time. Through the Hooke's law case, we saw that our method for solving linear differential equations also works for second order equations. In fact, we can write any kth order differential equation as a system of k first-order differential equations.
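As a small illustration of that last claim, Hooke's law u'' = -(k/m)u becomes a first order system by introducing the velocity v = u': then u' = v and v' = -(k/m)u. The sketch below uses arbitrary values (ω² = k/m = 4, u(0) = 1, v(0) = 0), integrates that system with Euler steps, and compares the result with the known solution u(t) = cos(ωt).

```python
import math

# Hooke's law u'' = -(k/m) u rewritten as the first order system
#   u' = v,   v' = -w^2 u     (w^2 = k/m; the values here are arbitrary)

def hooke(w=2.0, t_end=1.0, dt=1e-4):
    u, v = 1.0, 0.0            # start stretched, at rest
    for _ in range(int(t_end / dt)):
        u, v = u + v * dt, v - (w * w) * u * dt
    return u

# The exact solution with these initial conditions is u(t) = cos(w t)
print(abs(hooke() - math.cos(2.0)))  # small discretization error
```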

Even if we ignore their applicability, linear differential equations are a fascinating topic to study. They help show that differentiation is a linear transformation: we write the system as a matrix equation and derive a linear combination as the solution. Furthermore, they emphasize the importance of eigentheory. Without eigentheory, we could not have solved the equations, much less determined the stability of our system. Eigentheory is not just a matrix stretching or shrinking a vector; it contains one of the core ideas of linear algebra, scalar multiplication. Coupled with the other core idea of linear algebra, linear combinations, eigentheory is an essential tool for linear algebra and all of mathematics.

