Note: This guide assumes prior knowledge of multivariable and some vector calculus, as well as basic knowledge of vectors and matrices. See Guide to multivariable and vector calculus if you need a refresher/introduction to these topics.

A partial differential equation or PDE is any equation that contains an unknown multivariable function and its partial derivatives. The general form of a PDE in an unknown function $u(x_1, \dots, x_n)$ is:

$F(x_1, \dots, x_n,\ u,\ u_{x_1}, \dots, u_{x_n},\ u_{x_1 x_1}, u_{x_1 x_2}, \dots) = 0$
To shorten derivatives, we often use subscript notation, where:

$u_x \equiv \frac{\partial u}{\partial x}, \qquad u_{xx} \equiv \frac{\partial^2 u}{\partial x^2}, \qquad u_{xy} \equiv \frac{\partial^2 u}{\partial x\, \partial y}$
Some specific examples of PDEs include the heat equation:

$\frac{\partial u}{\partial t} = \alpha \nabla^2 u$

The wave equation:

$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$

And Poisson's equation:

$\nabla^2 u = f$
PDEs can have many, many solutions. For instance, take the PDE:

The general solution of this PDE contains arbitrary functions: any sufficiently differentiable functions will satisfy it. Thus, just as ODEs need initial conditions to give unique solutions, PDEs need boundary conditions to give unique solutions. The typical boundary conditions for a PDE are values specified on the boundary of its domain. When the PDE depends on time, the condition at the initial time is often called an initial condition - remember, for PDEs, initial conditions are considered a type of boundary condition.

Analytically solving PDEs

Technique (ODE)

To solve a first-order separable equation of the form $\frac{dy}{dx} = f(x)\, g(y)$, we proceed as follows.

  1. Multiply both sides by $dx$: $dy = f(x)\, g(y)\, dx$
  1. Divide both sides by $g(y)$ to separate the variables: $\frac{dy}{g(y)} = f(x)\, dx$
  1. Because both sides contain a differential in their respective variables, integrate both sides and add a single constant of integration: $\int \frac{dy}{g(y)} = \int f(x)\, dx + C$
  1. Solve for $C$ using initial conditions. For example, given an initial condition $y(x_0) = y_0$, substitute $x = x_0$ and $y = y_0$ to solve for $C$.
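The steps above can be checked numerically. As a sketch, take the separable ODE $\frac{dy}{dx} = xy$ with $y(0) = 1$ (this specific example is ours, not from the text): separating gives $\ln y = x^2/2 + C$, the initial condition fixes $C = 0$, and so $y = e^{x^2/2}$. A generic numerical integrator confirms the separated solution:

```python
import math

def rk4(f, y0, x0, x1, n=1000):
    """Integrate dy/dx = f(x, y) from x0 to x1 with n classical RK4 steps."""
    h = (x1 - x0) / n
    x, y = x0, y0
    for _ in range(n):
        k1 = f(x, y)
        k2 = f(x + h / 2, y + h * k1 / 2)
        k3 = f(x + h / 2, y + h * k2 / 2)
        k4 = f(x + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        x += h
    return y

# Separable ODE dy/dx = x*y with y(0) = 1.
# Separation of variables: dy/y = x dx  =>  ln y = x^2/2 + C, with C = 0 here,
# so the analytic solution is y(x) = exp(x^2 / 2).
numeric = rk4(lambda x, y: x * y, 1.0, 0.0, 2.0)
analytic = math.exp(2.0**2 / 2)
print(numeric, analytic)  # the two values agree to high accuracy
```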

Examples (ODE)

Example 1

Consider the ordinary differential equation given by:

We wish to find the general solution. The equation can be written in the form $\frac{dy}{dx} = f(x)\, g(y)$, so we can use separation of variables to solve it, as follows:

Example 2

Consider the initial-value problem given by:

Using separation of variables, we proceed as follows:

This is the general solution. We now use the initial condition to solve for the constant of integration:

Therefore, we find that:

We can now rearrange to solve for $y$:

The result gives two possible (implicit) solutions. However, we can use the initial condition to solve for the sign:

This gives us a unique solution for $y$:

Technique (PDE)

Separation of variables applies to PDEs much as it does to ODEs, though with additional steps in the process. Consider the example initial-boundary value problem (IBVP):

We can use separation of variables to determine separated equations and apply initial and boundary conditions to the resulting equations to find the solution.

  1. Assume that the function for which we wish to solve is separable, that is, that $u(x, t)$ can be written in the following form: $u(x, t) = X(x)\, T(t)$
  1. Solve for the derived terms using the separable equation:
  1. Substitute the derived terms:
  1. Divide both sides by $X(x)\, T(t)$:
  1. Divide both sides by the constant coefficient of the PDE:
  1. Because the two sides depend on different variables, they must both equal a constant; if you change the variable on one side, the other side stays the same. Therefore, set both sides equal to a separation constant:
  1. Use each side to form a homogeneous ODE:
  1. Use the initial and boundary conditions to derive conditions for the new equations:

Now note that if $u(0, t) = 0$ for all $t$, then $X(0)\, T(t) = 0$, so either $T$ is identically zero (giving only the trivial solution) or $X(0) = 0$. Therefore, $X(0) = 0$. Likewise:

Using the same logic, the boundary condition at the other end of the domain carries over to $X$ as well.

Example for the Helmholtz Equation

The Helmholtz equation is an eigenvalue problem that arises when applying separation of variables to a PDE containing the Laplacian operator on a multi-dimensional, bounded region. For this guide, we will work in only 2 dimensions in Cartesian coordinates, because that case is easiest to learn and display. It takes the form:

$\nabla^2 \phi + \lambda \phi = 0$
It can have standard boundary conditions such as a Dirichlet boundary condition, $\phi = 0$ on the boundary, or a Neumann boundary condition, $\nabla \phi \cdot \hat{n} = 0$ on the boundary. But because it involves functions of multiple variables, in 2 dimensions there can also be the mixed boundary condition:

$a\, \phi + b\, \nabla \phi \cdot \hat{n} = 0$

where $a$ and $b$ are functions defined at every point on the boundary, $\nabla$ is the gradient, and $\hat{n}$ is the outward unit normal. Examining the condition: if $b = 0$, it becomes a Dirichlet boundary condition. If $a = 0$, the condition can be rewritten as $\nabla \phi \cdot \hat{n} = 0$, which translates into a Neumann boundary condition in every variable $\phi$ depends on. Therefore, this boundary condition is a linear combination of Dirichlet and Neumann boundary conditions. According to Sturm-Liouville theory:

- All eigenvalues are real
- There are infinitely many eigenvalues, with a smallest eigenvalue but no biggest eigenvalue
- Each eigenvalue may have multiple eigenfunctions
- The eigenfunctions form a complete set, i.e. piecewise smooth functions can be expanded as convergent linear combinations of eigenfunctions

Eigenfunctions for different eigenvalues are orthogonal to each other. The eigenvalues are given by the Rayleigh quotient formula:

$\lambda = \frac{-\oint_{\partial R} \phi\, \nabla \phi \cdot \hat{n}\, ds + \iint_R |\nabla \phi|^2\, dA}{\iint_R \phi^2\, dA}$

where $\partial R$ is the boundary of the region $R$. Using the properties of this theory and multivariable calculus, you can solve for the eigenvalues and eigenfunctions.
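As a sanity check of the Rayleigh quotient (using an assumed example, not one from the text): on the unit square with Dirichlet boundary conditions, the eigenfunction $\phi = \sin(\pi x)\sin(\pi y)$ has eigenvalue $\lambda = 2\pi^2$, and the boundary term vanishes. A midpoint-rule evaluation of the quotient recovers it:

```python
import math

# Assumed example: unit square, Dirichlet BCs, phi = sin(pi x) sin(pi y).
# The boundary integral vanishes, so lambda = int |grad phi|^2 / int phi^2.
n = 400                          # midpoint-rule resolution per axis
h = 1.0 / n
num = den = 0.0
for i in range(n):
    x = (i + 0.5) * h
    for j in range(n):
        y = (j + 0.5) * h
        phi = math.sin(math.pi * x) * math.sin(math.pi * y)
        gx = math.pi * math.cos(math.pi * x) * math.sin(math.pi * y)
        gy = math.pi * math.sin(math.pi * x) * math.cos(math.pi * y)
        num += (gx**2 + gy**2) * h * h   # integral of |grad phi|^2
        den += phi**2 * h * h            # integral of phi^2

print(num / den, 2 * math.pi**2)  # Rayleigh quotient vs. exact eigenvalue
```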

Example for the wave equation

Now, consider the IBVP of the wave equation, which is given by:

with the initial and boundary conditions:

Because the PDE is linear and homogeneous, $u$ is separable. Therefore, we may write:

Even though the spatial part $\phi(x, y)$ is itself separable (because the region is a simple shape), let's leave $x$ and $y$ together for later use. We can then compute the derivatives as follows:

Now, substituting the derivatives gives us:

This gives us two differential equations. First, we have the ODE for $T$, given by:

Second, we have the PDE for $\phi$:

This is actually the Helmholtz equation in disguise, since it can be written as $\nabla^2 \phi + \lambda \phi = 0$. Now we can apply the boundary conditions:

Therefore, the B.C.'s for $\phi$ reduce to:

Now, we can perform separation of variables on $\phi$. Let $\phi(x, y) = X(x)\, Y(y)$. Then, the partial derivatives are given by:

Substituting $\phi = X(x)\, Y(y)$ and its derivatives into the previous homogeneous equations, we get:

Here, the constant introduced is a second separation constant. This gives us two differential equations, one for $X$ and one for $Y$:

Therefore, the homogeneous equations in standard form are given by:

Substituting in the B.C.’s gives us:

The B.C.’s here reduce to:

Solving for $X$, we have:

After working through the cases of the separation constant, you will find a discrete set of eigenvalues and corresponding eigenfunctions $X_n$, each with an arbitrary constant. Solving for $Y$, we get:

After solving the analogous cases for $Y$, you will find its eigenvalues and eigenfunctions $Y_m$, again with an arbitrary constant; combining the two separation constants gives the eigenvalues of the Helmholtz problem. Solving for $T$, we have:

Afterwards, you can use superposition to solve for $u$, giving a general solution of the form:

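A single separated mode of this type can be verified numerically. The sketch below assumes a unit square, unit wave speed, and homogeneous Dirichlet boundaries (illustrative choices, not the text's specific setup); it checks by finite differences that one standing-wave mode satisfies the wave equation:

```python
import math

# Assumed setup: wave equation u_tt = c^2 (u_xx + u_yy) on [0, a] x [0, b]
# with homogeneous Dirichlet boundary conditions. Separation of variables
# yields standing-wave modes X(x) Y(y) T(t) with frequency omega.
a, b, c = 1.0, 1.0, 1.0
n, m = 2, 3
omega = c * math.pi * math.sqrt((n / a) ** 2 + (m / b) ** 2)

def u(x, y, t):
    """One separated mode: X(x) * Y(y) * T(t)."""
    return (math.sin(n * math.pi * x / a)
            * math.sin(m * math.pi * y / b)
            * math.cos(omega * t))

def second_diff(f, h=1e-3):
    """Central-difference second derivative of f(offset) at offset = 0."""
    return (f(h) - 2 * f(0.0) + f(-h)) / h**2

def residual(x, y, t):
    """u_tt - c^2 (u_xx + u_yy), evaluated by finite differences."""
    u_tt = second_diff(lambda h: u(x, y, t + h))
    u_xx = second_diff(lambda h: u(x + h, y, t))
    u_yy = second_diff(lambda h: u(x, y + h, t))
    return u_tt - c**2 * (u_xx + u_yy)

print(residual(0.3, 0.4, 0.5))  # ~0: the mode satisfies the PDE
print(u(0.0, 0.7, 0.2))         # 0: the mode satisfies the boundary condition
```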
Numerically solving PDEs

The most common methods of solving PDEs numerically are the finite difference, finite element, and boundary element methods. We will start with the finite difference method (FDM). The basic idea is to transform a PDE into a linear algebra problem by encoding derivatives as matrices. Take the one-dimensional wave equation as an example. It is given by:

$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}$
We want to discretize the second spatial partial derivative. Recall that the derivative is the limit of a difference quotient; in the discrete case, the second derivative reads:

$\frac{\partial^2 u}{\partial x^2} \approx \frac{u(x + \Delta x) - 2u(x) + u(x - \Delta x)}{\Delta x^2}$

Or, alternatively written with grid indices:

$\frac{\partial^2 u}{\partial x^2}\bigg|_{x_i} \approx \frac{u_{i+1} - 2u_i + u_{i-1}}{\Delta x^2}$
This works everywhere except at the left and right boundaries of the domain, where there is no neighboring point on one side with which to take the central difference. Instead, we use a single-sided difference - for the left boundary, we have:

$\frac{\partial^2 u}{\partial x^2}\bigg|_{x_0} \approx \frac{2u_0 - 5u_1 + 4u_2 - u_3}{\Delta x^2}$

And for the right boundary, we have:

$\frac{\partial^2 u}{\partial x^2}\bigg|_{x_N} \approx \frac{-u_{N-3} + 4u_{N-2} - 5u_{N-1} + 2u_N}{\Delta x^2}$
Note: see https://web.media.mit.edu/~crtaylor/calculator.html for a calculator for these values (enter 0, 1, 2, 3 for the left-handed 2nd derivative and -3, -2, -1, 0 for the right-handed 2nd derivative).
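These stencils are easy to test. A minimal sketch, checking the central stencil and both one-sided stencils against the exact second derivative of $e^x$:

```python
import math

def d2_central(f, x, h):
    """Central difference for f''(x): (f(x-h) - 2 f(x) + f(x+h)) / h^2."""
    return (f(x - h) - 2 * f(x) + f(x + h)) / h**2

def d2_left(f, x, h):
    """One-sided (forward) stencil for f''(x) using points x, x+h, x+2h, x+3h."""
    return (2 * f(x) - 5 * f(x + h) + 4 * f(x + 2 * h) - f(x + 3 * h)) / h**2

def d2_right(f, x, h):
    """One-sided (backward) stencil for f''(x) using points x-3h, ..., x."""
    return (-f(x - 3 * h) + 4 * f(x - 2 * h) - 5 * f(x - h) + 2 * f(x)) / h**2

# (e^x)'' = e^x, so all three stencils should return roughly e at x = 1.
h = 1e-3
for approx in (d2_central, d2_left, d2_right):
    print(approx(math.exp, 1.0, h), math.e)  # both values close to e
```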

Now, partial derivatives are linear operators, just like matrices. So we can spatially discretize the equation by turning the 2nd spatial partial derivative into a matrix $\mathbf{D}_2$ acting on the solution vector $\mathbf{u} = (u_0, u_1, \dots, u_N)$:

$\frac{\partial^2 \mathbf{u}}{\partial t^2} = c^2 \mathbf{D}_2 \mathbf{u}$

Where:

$\mathbf{D}_2 = \frac{1}{\Delta x^2} \begin{pmatrix} 2 & -5 & 4 & -1 & & \\ 1 & -2 & 1 & & & \\ & \ddots & \ddots & \ddots & & \\ & & & 1 & -2 & 1 \\ & & -1 & 4 & -5 & 2 \end{pmatrix}$
By discretizing the spatial derivative, the partial time derivative simply becomes an ordinary time derivative. Thus we can rewrite the equation as:

$\frac{d^2 \mathbf{u}}{dt^2} = c^2 \mathbf{D}_2 \mathbf{u}$

Which is an ordinary differential equation that can be solved with conventional ODE solvers. The only things left to supply are the initial condition $\mathbf{u}(0)$ and the initial time derivative $\dot{\mathbf{u}}(0)$ (if you know the solution near $t = 0$ in closed form, you can differentiate it to get the initial time derivative).
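As a sketch of the full method (assuming, for illustration, homogeneous Dirichlet boundaries, where the boundary values are fixed at zero and can simply be dropped from the system), here is the 1-D wave equation advanced in time with a simple leapfrog scheme and compared against a known standing-wave solution:

```python
import numpy as np

# Sketch: solve u_tt = c^2 u_xx on [0, 1] with u(0, t) = u(1, t) = 0,
# initial data u(x, 0) = sin(pi x), u_t(x, 0) = 0.
# The exact solution is u(x, t) = sin(pi x) cos(pi c t).
c = 1.0
N = 101                        # number of grid points
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
dt = 0.5 * dx / c              # time step chosen for stability (CFL < 1)

# Second-derivative matrix on the interior points (Dirichlet boundaries).
D2 = (np.diag(-2.0 * np.ones(N - 2))
      + np.diag(np.ones(N - 3), 1)
      + np.diag(np.ones(N - 3), -1)) / dx**2

u_prev = np.sin(np.pi * x[1:-1])                     # u at t = 0
u_curr = u_prev + 0.5 * (c * dt)**2 * (D2 @ u_prev)  # first step (u_t = 0)

steps = 400
for _ in range(steps - 1):     # leapfrog: u_next = 2 u - u_prev + dt^2 c^2 D2 u
    u_prev, u_curr = u_curr, (2 * u_curr - u_prev
                              + (c * dt)**2 * (D2 @ u_curr))

t = steps * dt
exact = np.sin(np.pi * x[1:-1]) * np.cos(np.pi * c * t)
print(np.max(np.abs(u_curr - exact)))  # small discretization error
```

The leapfrog update stands in here for the "conventional ODE solver"; any standard integrator applied to the second-order system works the same way.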

This method works equally well in the higher-dimensional case. For instance, consider the generalized wave equation:

$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$

By encoding the Laplacian as a matrix acting on the flattened solution vector $\mathbf{u}$, the same approach carries over. In fact, this approach works for all linear partial differential equations that have one time derivative and one or several space derivatives.
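One convenient way to encode the Laplacian as a matrix is the Kronecker-sum construction. This is a sketch assuming Dirichlet boundaries on the unit square (the construction is standard; the specific grid and test function are ours):

```python
import numpy as np

# The 2-D Laplacian on an n x n interior grid with spacing dx is the
# Kronecker sum of two 1-D second-derivative matrices.
n = 30
dx = 1.0 / (n + 1)
D2 = (np.diag(-2.0 * np.ones(n))
      + np.diag(np.ones(n - 1), 1)
      + np.diag(np.ones(n - 1), -1)) / dx**2
I = np.eye(n)
L = np.kron(I, D2) + np.kron(D2, I)   # acts on the flattened grid u.ravel()

# Check against an exact eigenfunction: sin(pi x) sin(pi y) has
# Laplacian -2 pi^2 sin(pi x) sin(pi y).
x = np.linspace(dx, 1.0 - dx, n)
U = np.outer(np.sin(np.pi * x), np.sin(np.pi * x))
lap = (L @ U.ravel()).reshape(n, n)
print(np.max(np.abs(lap + 2 * np.pi**2 * U)))  # small discretization error
```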

Other linear PDEs that are not time-dependent, such as Poisson's or Laplace's equation, can generally be written in the form:

$\mathbf{A} \mathbf{u} = \mathbf{b}$

And solved using standard linear algebra techniques for linear systems, of which there are many solvers in popular libraries like NumPy, SciPy, Eigen, and others. Finally, nonlinear PDEs can generally be written in the form:

$\mathbf{F}(\mathbf{u}) = \mathbf{0}$
Which is a root-finding problem that can be solved with Newton's method:

$\mathbf{u}_{k+1} = \mathbf{u}_k - \mathbf{J}_{\mathbf{F}}^{-1}(\mathbf{u}_k)\, \mathbf{F}(\mathbf{u}_k)$
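A minimal sketch of both cases, using a 1-D Poisson problem and a manufactured nonlinear variant (neither specific example comes from the text): the linear PDE reduces to one call to a linear solver, while the nonlinear one is handled by Newton iteration.

```python
import numpy as np

# Discretized 1-D Laplacian on the interior of [0, 1] (Dirichlet BCs).
N = 101
x = np.linspace(0.0, 1.0, N)
dx = x[1] - x[0]
xi = x[1:-1]
A = (np.diag(-2.0 * np.ones(N - 2))
     + np.diag(np.ones(N - 3), 1)
     + np.diag(np.ones(N - 3), -1)) / dx**2

# Linear case: Poisson's equation u'' = f. Choosing f = -pi^2 sin(pi x)
# makes the exact solution u = sin(pi x).
f_lin = -np.pi**2 * np.sin(np.pi * xi)
u_lin = np.linalg.solve(A, f_lin)          # A u = b: one linear solve

# Nonlinear case: u'' = u^3 + f, with f manufactured so that the exact
# solution is again u = sin(pi x). Write it as F(u) = 0 and apply Newton.
f_non = -np.pi**2 * np.sin(np.pi * xi) - np.sin(np.pi * xi)**3

def F(u):
    """Residual of the discretized nonlinear PDE; F(u) = 0 at the solution."""
    return A @ u - u**3 - f_non

u_non = np.zeros(N - 2)                    # initial guess
for _ in range(20):                        # Newton: u <- u - J^{-1} F(u)
    J = A - np.diag(3 * u_non**2)          # Jacobian of F
    u_non = u_non - np.linalg.solve(J, F(u_non))

exact = np.sin(np.pi * xi)
print(np.max(np.abs(u_lin - exact)), np.max(np.abs(u_non - exact)))
```

In practice one would use sparse matrices (e.g. SciPy's sparse solvers) rather than dense ones, but the structure of the computation is the same.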
Solving vector differential equations

To solve vector differential equations, it is necessary to break the vector differential equation into its components and then solve for each component. For example, the first two of Maxwell's equations (the divergence equations) each yield a single scalar equation. The second two are more complex: the curl of a vector field produces another vector field, so we must rewrite each curl equation component by component. The complete equations are:

So Maxwell’s equations result in 8 coupled PDEs. However, only 6 of them are independent - any system that satisfies the last 6 PDEs also must satisfy the first 2, so long as the boundary conditions satisfy the conservation of charge.

Here, note that to obtain a solution, the current density $\mathbf{J}$ must be specified, which consists of one function for each of its components $J_x$, $J_y$, and $J_z$.

The Maxwell equations are essential to our understanding of electromagnetism, and our work depends heavily on being able to solve them (both analytically and numerically), allowing us to analyze phenomena ranging from electromagnetic waves to electron beams. See Solving for the fields in a hollow waveguide for an example.

List of common PDEs

This list contains a number of very common PDEs in physics, mathematics, and the sciences for reference purposes.

Wave equation:

$\frac{\partial^2 u}{\partial t^2} = c^2 \nabla^2 u$

Heat/diffusion equation:

$\frac{\partial u}{\partial t} = \alpha \nabla^2 u$

Laplace's equation:

$\nabla^2 u = 0$

Poisson's equation:

$\nabla^2 u = f$

Burgers' equation:

$\frac{\partial u}{\partial t} + u \frac{\partial u}{\partial x} = \nu \frac{\partial^2 u}{\partial x^2}$

Continuity equation:

$\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho\, \mathbf{v}) = 0$

Navier-Stokes (incompressible form):

$\rho \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\, \mathbf{v} \right) = -\nabla p + \mu \nabla^2 \mathbf{v} + \mathbf{f}, \qquad \nabla \cdot \mathbf{v} = 0$

Maxwell's equations (in vector form):

$\nabla \cdot \mathbf{E} = \frac{\rho}{\varepsilon_0}, \qquad \nabla \cdot \mathbf{B} = 0, \qquad \nabla \times \mathbf{E} = -\frac{\partial \mathbf{B}}{\partial t}, \qquad \nabla \times \mathbf{B} = \mu_0 \mathbf{J} + \mu_0 \varepsilon_0 \frac{\partial \mathbf{E}}{\partial t}$

Einstein field equations:

$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8 \pi G}{c^4} T_{\mu\nu}$

Schrödinger equation:

$i \hbar \frac{\partial \Psi}{\partial t} = -\frac{\hbar^2}{2m} \nabla^2 \Psi + V \Psi$

Euler-Lagrange equations:

$\frac{d}{dt} \frac{\partial L}{\partial \dot{q}_i} - \frac{\partial L}{\partial q_i} = 0$

Hamilton's equations:

$\dot{q}_i = \frac{\partial H}{\partial p_i}, \qquad \dot{p}_i = -\frac{\partial H}{\partial q_i}$

Black-Scholes equation:

$\frac{\partial V}{\partial t} + \frac{1}{2} \sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + r S \frac{\partial V}{\partial S} - r V = 0$

Sources and useful reading