Linear ODEs#
Definition: Linear Ordinary Differential Equations
An ordinary differential equation
is linear if there exist functions \(a_0, a_1, \dotsc, a_n, b\) such that the ODE can be expressed as

\[a_n(x)y^{(n)} + a_{n-1}(x)y^{(n-1)} + \dotsb + a_1(x)y' + a_0(x)y = b(x)\]
Theorem: Existence and Uniqueness
Let \(I \subseteq \mathbb{R}\) be some open interval and let

\[y^{(n)} + a_{n-1}(x)y^{(n-1)} + \dotsb + a_1(x)y' + a_0(x)y = b(x)\]

be a linear ODE with initial conditions \(y(x_0) = y_0, y'(x_0) = y_1, \dotsc, y^{(n-1)}(x_0) = y_{n-1}\), where \(x_0 \in I\).
If \(a_0, a_1, \dotsc, a_{n-1}\) and \(b\) are continuous on \(I\), then there exists one and only one function \(\phi: I \to \mathbb{R}\) which satisfies the initial conditions and the linear ODE for every \(x \in I\).
Proof
TODO
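As an illustration (using a hypothetical first-order IVP of my choosing, not part of the proof), sympy's `dsolve` accepts initial conditions and returns the single solution the theorem guarantees:

```python
import sympy as sp

x = sp.symbols("x")
y = sp.Function("y")

# Hypothetical first-order linear IVP: y' + y = 0 with y(0) = 1.
# The coefficients are continuous on all of R, so the theorem
# guarantees exactly one solution on R.
ode = sp.Eq(y(x).diff(x) + y(x), 0)
sol = sp.dsolve(ode, y(x), ics={y(0): 1})

print(sol)  # the unique solution y(x) = exp(-x)
```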
First-Order Linear ODEs#
A first-order linear ODE can be expressed as

\[Q(x)y' + P(x)y = G(x)\]

for some functions \(P\), \(Q\) and \(G\). We often need to transform this into a more standard form by dividing by \(Q(x)\) and imposing the condition that \(Q(x) \ne 0\):

\[y' + p(x)y = g(x),\]

where \(p = P / Q\) and \(g = G / Q\).
Theorem: Variation of Parameters
Consider a first-order linear ODE which can be expressed as follows:

\[y' + p(x)y = q(x)\]

A real function \(\phi\) is a solution on some subset \(S \subseteq \mathbb{R}\) if and only if it can be expressed as

\[\phi(x) = \mathrm{e}^{-P(x)} c(x),\]

where \(P(x)\) is any antiderivative of \(p(x)\) on \(S\) and \(c(x)\) is an antiderivative of \(q(x)\mathrm{e}^{P(x)}\) on \(S\).
Example: \(y'(t) + \frac{y(t)}{t} = \cos t\)
Consider the following first-order linear ODE:

\[y'(t) + \frac{y(t)}{t} = \cos t\]

It has the form \(y' + p(t)y = q(t)\) with

\[p(t) = \frac{1}{t}, \quad q(t) = \cos t.\]

The antiderivatives of \(p\) for \(t \in (0, \infty)\) are the following:

\[P(t) = \ln t + C\]

For \(c(t)\), choosing \(P(t) = \ln t\) so that \(\mathrm{e}^{P(t)} = t\), we get:

\[c(t) = \int t \cos t \mathop{\mathrm{d}t} = t \sin t + \cos t + C\]

Therefore:

\[\phi(t) = \mathrm{e}^{-P(t)} c(t) = \frac{t \sin t + \cos t + C}{t} = \sin t + \frac{\cos t + C}{t}\]
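The resulting solution can be checked by substitution; the following sympy sketch verifies that \(\phi(t) = \sin t + (\cos t + C)/t\) satisfies \(y'(t) + y(t)/t = \cos t\) on \((0, \infty)\):

```python
import sympy as sp

t = sp.symbols("t", positive=True)  # we work on (0, infinity)
C = sp.Symbol("C")                  # arbitrary constant of integration

# Candidate solution of y'(t) + y(t)/t = cos(t)
phi = sp.sin(t) + (sp.cos(t) + C) / t

# Substitute into the ODE; the residual must simplify to zero
residual = sp.simplify(phi.diff(t) + phi / t - sp.cos(t))
print(residual)  # 0
```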
Proof
TODO
Algorithm: Solving First-Order Linear ODEs
We are given the following first-order linear ODE:

\[y' + p(x)y = g(x)\]

To solve this, we use the method of integrating factors. The goal is to find some function \(\mu(x)\) such that we can use the product rule to express the left-hand side as the derivative of \((\mu(x)y)\). We can then use antidifferentiation to find the solutions.
- Multiply both sides by a yet unknown function \(\mu(x)\):

  \[\mu(x)y' + \mu(x)p(x)y = \mu(x)g(x)\]

- In order for \(\mu(x)y' + \mu(x)p(x)y\) to be the derivative of \((\mu(x)y)\), we need \(\mu(x)p(x)\) to be equal to \(\mu'(x)\), because the product rule gives us \((\mu(x)y)' = \mu(x)y' + \mu'(x)y\):

  \[\mu'(x) = \mu(x)p(x)\]

- Divide both sides of \(\mu'(x) = \mu(x)p(x)\) by \(\mu(x)\), imposing the condition \(\mu(x) \ne 0\):

  \[\frac{\mu'(x)}{\mu(x)} = p(x)\]

- By also imposing the condition that \(\mu(x) \gt 0\) and then antidifferentiating both sides, we can transform this further:

  \[\ln \mu(x) = \int p(x) \mathop{\mathrm{d}x} + C\]

- We take the real exponential function of both sides:

  \[\mu(x) = \mathrm{e}^{\int p(x) \mathop{\mathrm{d}x} + C}\]
For each choice of \(C \in \mathbb{R}\), the expression \(\int p(x) \mathop{\mathrm{d}x} + C\) is some antiderivative \(P(x)\) of \(p(x)\). Thus, if we can find an antiderivative \(P(x)\) of \(p(x)\), then we can find a \(\mu(x) = \mathrm{e}^{P(x)}\) which satisfies the condition \((\mu(x)y)' = \mu(x)y' + \mu'(x)y\). We can easily verify this as well by applying the chain rule and the rules for the derivative of the real exponential function:

\[\mu'(x) = \left(\mathrm{e}^{P(x)}\right)' = P'(x)\,\mathrm{e}^{P(x)} = p(x)\mu(x)\]
Moreover, since the real exponential function is always positive, we have a \(\mu(x)\) which satisfies the previously imposed condition that \(\mu(x) \gt 0\).
- We have shown how to find an appropriate \(\mu(x)\), so we can proceed with the original equation:

  \[(\mu(x)y)' = \mu(x)y' + \mu(x)p(x)y = \mu(x)g(x)\]

- We antidifferentiate both sides:

  \[\mu(x)y = \int \mu(x)g(x) \mathop{\mathrm{d}x} + C\]

- To obtain the solutions, we divide both sides by \(\mu(x)\):

  \[y = \frac{\int \mu(x)g(x) \mathop{\mathrm{d}x} + C}{\mu(x)}\]
For each choice of \(C \in \mathbb{R}\), the expression \(\int \mu(x)g(x) \mathop{\mathrm{d}x} + C\) is just some antiderivative \(\mathcal{F}\) of \(\mu(x)g(x)\). Therefore, the solutions of the original equation on some open subset \(U \subseteq \mathbb{R}\) are the functions \(y: U \to \mathbb{R}\) which for all \(x \in U\) can be expressed as

\[y(x) = \frac{\mathcal{F}(x)}{\mu(x)} = \mathrm{e}^{-P(x)}\mathcal{F}(x),\]

where \(P(x)\) is any antiderivative of \(p(x)\) and \(\mathcal{F}(x)\) is any antiderivative of \(\mu(x)g(x) = \mathrm{e}^{P(x)}g(x)\).
Summary
Given some open subset \(U \subseteq \mathbb{R}\), the solutions of the ODE \(y' + p(x)y = g(x)\) on \(U\) are the functions \(y: U \to \mathbb{R}\) which on \(U\) can be expressed as

\[y(x) = \mathrm{e}^{-P(x)}\mathcal{F}(x)\]

for some antiderivative \(P\) of \(p\) and some antiderivative \(\mathcal{F}\) of \(\mathrm{e}^{P(x)}g(x)\).
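The summary formula lends itself to a small symbolic solver. This is only a sketch under the assumption that sympy can find the needed antiderivatives in closed form; the helper name is mine:

```python
import sympy as sp

def solve_first_order_linear(p, g, x):
    """General solution of y' + p(x)*y = g(x) via an integrating factor."""
    C = sp.Symbol("C")
    P = sp.integrate(p, x)       # some antiderivative P of p
    mu = sp.exp(P)               # integrating factor mu = e^P
    F = sp.integrate(mu * g, x)  # some antiderivative of mu*g
    return (F + C) / mu          # y = e^(-P) * (F + C)

x = sp.symbols("x")
# Hypothetical demo: y' + y = x has the solutions x - 1 + C*e^(-x)
print(sp.expand(solve_first_order_linear(sp.Integer(1), x, x)))
```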
Example: \(y' - 2y = 4 - x\)
Here we have \(p(x) = -2\) and \(g(x) = 4 - x\). We want to rewrite this as

\[(\mu(x)y)' = \mu(x)(4 - x)\]

The antiderivatives of \(p\) are given by

\[P(x) = -2x + C\]

We choose \(C = 0\) to make everything simple, and so

\[\mu(x) = \mathrm{e}^{P(x)} = \mathrm{e}^{-2x}\]

The equation thus becomes

\[(\mathrm{e}^{-2x}y)' = (4 - x)\mathrm{e}^{-2x}\]

We antidifferentiate both sides:

\[\mathrm{e}^{-2x}y = \int (4 - x)\mathrm{e}^{-2x} \mathop{\mathrm{d}x} + C\]

Use integration by parts on the right-hand side:

\[\int (4 - x)\mathrm{e}^{-2x} \mathop{\mathrm{d}x} = -\frac{(4 - x)\mathrm{e}^{-2x}}{2} - \int \frac{\mathrm{e}^{-2x}}{2} \mathop{\mathrm{d}x} = \left(\frac{x}{2} - \frac{7}{4}\right)\mathrm{e}^{-2x}\]

Therefore, we have

\[\mathrm{e}^{-2x}y = \left(\frac{x}{2} - \frac{7}{4}\right)\mathrm{e}^{-2x} + C\]

and so the solutions are

\[y = \frac{x}{2} - \frac{7}{4} + C\mathrm{e}^{2x}\]
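The result can be verified by substituting it back into the equation; a sympy sketch:

```python
import sympy as sp

x, C = sp.symbols("x C")

# Candidate general solution of y' - 2y = 4 - x
y = x / 2 - sp.Rational(7, 4) + C * sp.exp(2 * x)

# The residual of the ODE must simplify to zero
residual = sp.simplify(y.diff(x) - 2 * y - (4 - x))
print(residual)  # 0
```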
Example: \(xy' + 2y = 4x^2\)
TODO
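The detailed steps are still a TODO above, but a sympy sketch shows where they lead: in standard form the equation is \(y' + \frac{2}{x}y = 4x\) for \(x \ne 0\), the integrating factor on \((0, \infty)\) is \(\mu(x) = x^2\), and the general solution is \(y = x^2 + C/x^2\):

```python
import sympy as sp

x = sp.symbols("x", positive=True)  # restrict to (0, infinity)
C = sp.Symbol("C")

# Claimed general solution of x*y' + 2*y = 4*x**2 on (0, infinity)
y = x**2 + C / x**2

# Substitute into the original (non-standard-form) equation
residual = sp.simplify(x * y.diff(x) + 2 * y - 4 * x**2)
print(residual)  # 0
```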
Linear ODEs with Constant Coefficients#
Definition: Linear ODE with Constant Coefficients
A linear ODE with constant coefficients is a linear ODE which can be written in the form

\[a_n y^{(n)} + a_{n-1} y^{(n-1)} + \dotsb + a_1 y' + a_0 y = b,\]

where \(b, a_0, \dotsc, a_n \in \mathbb{R}\).
Definition: Characteristic Polynomial
The characteristic polynomial of a homogeneous linear ODE with constant coefficients

\[a_n y^{(n)} + a_{n-1} y^{(n-1)} + \dotsb + a_1 y' + a_0 y = 0\]

is the following complex polynomial:

\[p(\lambda) = a_n \lambda^n + a_{n-1} \lambda^{n-1} + \dotsb + a_1 \lambda + a_0\]
Theorem: Solutions from Characteristic Polynomials
Consider the following homogeneous linear ODE with constant coefficients:

\[a_n y^{(n)} + a_{n-1} y^{(n-1)} + \dotsb + a_1 y' + a_0 y = 0\]
The roots of its characteristic polynomial can be used to generate a basis for the solution space:
- Each real root \(\lambda\) with multiplicity \(k\) yields the following basis elements:
  \[\mathrm{e}^{\lambda t}, t \mathrm{e}^{\lambda t}, \dotsc, t^{k-1}\mathrm{e}^{\lambda t}\]
- Each pair of complex conjugate roots \(\lambda = a \pm b\mathrm{i}\) with individual multiplicity \(k\) yields the following basis elements:
  \[\mathrm{e}^{at}\cos(bt),\ \mathrm{e}^{at}\sin(bt),\ t\mathrm{e}^{at}\cos(bt),\ t\mathrm{e}^{at}\sin(bt),\ \dotsc,\ t^{k-1}\mathrm{e}^{at}\cos(bt),\ t^{k-1}\mathrm{e}^{at}\sin(bt)\]
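The recipe can be tried on a hypothetical equation of my own (not one of the text's examples): \(y'' + 2y' + 5y = 0\) has characteristic polynomial \(\lambda^2 + 2\lambda + 5\) with the conjugate pair of roots \(-1 \pm 2\mathrm{i}\), so the predicted basis is \(\mathrm{e}^{-t}\cos(2t)\) and \(\mathrm{e}^{-t}\sin(2t)\):

```python
import sympy as sp

t, lam = sp.symbols("t lambda")

# Characteristic polynomial of y'' + 2*y' + 5*y = 0
char_poly = lam**2 + 2 * lam + 5
print(sp.roots(char_poly, lam))  # roots -1 + 2i and -1 - 2i, multiplicity 1 each

# Each predicted basis element must itself solve the ODE
basis = [sp.exp(-t) * sp.cos(2 * t), sp.exp(-t) * sp.sin(2 * t)]
residuals = [sp.simplify(b.diff(t, 2) + 2 * b.diff(t) + 5 * b) for b in basis]
print(residuals)  # [0, 0]
```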
Example
Consider the following homogeneous linear ODE with constant coefficients:
It has the characteristic polynomial
whose roots are the following:
We get the following basis:
The general solution is thus the following:
Example
Consider the following homogeneous linear ODE with constant coefficients:
It has the characteristic polynomial
whose roots are the following:
We get the following basis:
The general solution is thus the following:
Proof
TODO
Theorem: Reduction to Linear System
Consider the following homogeneous linear ODE with constant coefficients:

\[y^{(n)} + a_{n-1} y^{(n-1)} + \dotsb + a_1 y' + a_0 y = 0\]

A function \(\phi\) is a solution on some interval \(I \subseteq \mathbb{R}\) if and only if \(\boldsymbol{\psi} = (\phi, \phi', \dotsc, \phi^{(n-1)})^T\) is a solution on \(I\) of the linear system

\[\boldsymbol{\psi}' = A\boldsymbol{\psi},\]

where:

\[A = \begin{pmatrix} 0 & 1 & 0 & \cdots & 0 \\ 0 & 0 & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \cdots & 1 \\ -a_0 & -a_1 & -a_2 & \cdots & -a_{n-1} \end{pmatrix}\]
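For a concrete sketch (a hypothetical third-order case of my choosing), the system matrix is the companion matrix of the ODE, and its characteristic polynomial agrees with the ODE's characteristic polynomial \(\lambda^3 + a_2\lambda^2 + a_1\lambda + a_0\):

```python
import sympy as sp

a0, a1, a2, lam = sp.symbols("a0 a1 a2 lambda")

# Companion matrix for y''' + a2*y'' + a1*y' + a0*y = 0,
# acting on psi = (y, y', y'')^T
A = sp.Matrix([
    [0, 1, 0],
    [0, 0, 1],
    [-a0, -a1, -a2],
])

# Equals lambda**3 + a2*lambda**2 + a1*lambda + a0
print(A.charpoly(lam).as_expr())
```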
Proof
TODO