First and Second Variations, and Dirichlet’s Principle

Joshua Siktar


Introduction: What is Calculus of Variations?

The calculus of variations refers to the science (or the art) of solving optimization problems whose objective is an integral. In other words, out of a collection of functions, we choose the one that makes a certain integral expression as small as possible. Such a function is called a minimizer.

Sometimes the integrand will depend on the function itself alone; sometimes dependence on derivatives of the function will be allowed. Here is an abstract example of what one of these problems looks like:
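In symbols, a prototypical problem of this kind looks roughly as follows (this is a paraphrase of the standard setup, not the exact display from Rindler's text; hypotheses vary from book to book):

```latex
\text{Minimize} \quad \mathcal{F}[u] := \int_{\Omega} f\big(x,\, u(x),\, \nabla u(x)\big)\, dx
\quad \text{over all} \quad u \in W^{1,p}(\Omega) \ \text{with} \ u = g \ \text{on} \ \partial\Omega.
```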

This is an excerpt from Filip Rindler’s “Calculus of Variations” text, Springer Publications. Arguably my favorite text on the subject matter.

If you aren’t familiar with what the “W” means, don’t worry about it right now (this is related to the theory of Sobolev Spaces).

When you’re given one of these problems, there are really two distinct, but related, questions being asked in one bundle:

  1. Does a minimizer of the problem exist?
  2. If a minimizer exists, is it unique?

To answer these two questions about variational principles, a wide range of areas of mathematics may be called upon. Functional analysis does much of the heavy lifting, but partial differential equations also like to make appearances. As for applications, they turn up in many places.

Many of the definitions and notions that were introduced in the calculus of variations were motivated by phenomena occurring in materials science, though further applications to optimal control and partial differential equations have also been realized.

That being said, this article will focus on the concept of a variation, which often appears in proofs that a minimizer exists. This is really two concepts disguised as one: the first variation and the second variation.

The Intuition from Calculus

To explain what first and second variations are, we turn to one-dimensional calculus for an analogy. You remember what critical points and inflection points are, right? Well, let’s do a quick refresher. Consider the following function in one dimension graphed below:

Graph of f(x) = x⁴(x − 1), courtesy of Geogebra Graphing Calculator

This is the graph of f(x) = x⁴(x − 1), or f(x) = x⁵ − x⁴. It is a simple exercise in calculus to compute the first and second derivatives, which are defined on the entire real line:

f′(x) = 5x⁴ − 4x³

f′′(x) = 20x³ − 12x²

To find the critical points we solve the equation f′(x) = 0, which in this case gives x = 0 and x = 4/5. Setting f′′(x) = 0 gives x = 0 and x = 3/5, but note that f′′(x) = 4x²(5x − 3) does not change sign at x = 0, so only x = 3/5 is a genuine inflection point.
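These numbers are easy to verify by hand, but as a quick sanity check (using SymPy, purely for illustration), a few lines of symbolic computation reproduce them:

```python
import sympy as sp

x = sp.symbols("x")
f = x**4 * (x - 1)          # f(x) = x^5 - x^4

f1 = sp.diff(f, x)          # f'(x)  = 5x^4 - 4x^3
f2 = sp.diff(f, x, 2)       # f''(x) = 20x^3 - 12x^2

critical_points = sp.solve(f1, x)      # zeros of f':  {0, 4/5}
second_deriv_zeros = sp.solve(f2, x)   # zeros of f'': {0, 3/5}

print(sorted(critical_points))
print(sorted(second_deriv_zeros))
```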

Furthermore, a graph is concave up at a point if its second derivative is positive at that point, and concave down if its second derivative is negative there. Inflection points connect regions where a graph is concave up to regions where it is concave down (at least when the second derivative is continuous, as it is here).

Interestingly, for the calculus of variations, our inspiration comes from points where the graph is concave up, rather than from inflection points.

What are First and Second Variations?

The first and second variations involve perturbing the input of a functional by a test function. For a real-valued function of one variable, a perturbation is simply a translation of the graph left, right, up, or down. This takes on a much richer meaning for functionals, however: there, we add a small multiple of a test function to the candidate function itself and watch how the value of the functional responds.
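In symbols: fix a candidate function u and a test function φ (chosen to vanish on the boundary, so that u + εφ stays admissible), and consider the scalar function ε ↦ F[u + εφ]. The first and second variations are its first and second derivatives at ε = 0:

```latex
\delta \mathcal{F}[u](\varphi) := \frac{d}{d\varepsilon}\Big|_{\varepsilon=0} \mathcal{F}[u + \varepsilon\varphi],
\qquad
\delta^2 \mathcal{F}[u](\varphi) := \frac{d^2}{d\varepsilon^2}\Big|_{\varepsilon=0} \mathcal{F}[u + \varepsilon\varphi].
```

These play exactly the roles of f′ and f′′ from the refresher: at a minimizer the first variation vanishes for every admissible φ, and a nonnegative second variation is the functional analogue of the graph being concave up.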

Image courtesy of TeXeR

Detailed Example: Dirichlet’s Principle with Poisson’s Equation

To conclude this article we’ll do a detailed breakdown of the Dirichlet Principle for Poisson’s Equation. First, what is Poisson’s Equation? In arbitrarily many dimensions, it looks like this:

The Poisson Equation; image courtesy of TeXeR
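For reference, the boundary-value problem reads as follows (in the notation of Evans' text, with U ⊂ ℝⁿ open and bounded):

```latex
\begin{cases}
-\Delta u = f & \text{in } U, \\
\hphantom{-\Delta} u = g & \text{on } \partial U.
\end{cases}
```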

The functions f and g are referred to as the data, and they are usually given. At least in the context of classical partial differential equations, it is enough to assume that f and g are continuous on their respective domains.

If we are in one dimension, the Laplacian just becomes a second derivative (u′′). It turns out that when f and g are continuous, there will be at most one solution (I won't prove that here, though; the proof is not related to the main takeaways of this article).

What we do from here will be in the one-dimensional case, for the sake of simplicity, on the domain (0, 1). This keeps the complications of the higher-dimensional Chain Rule, and the more cumbersome notation that goes along with it, from obfuscating the main ideas.

We’re now going to work through the proof of Dirichlet’s Principle for the Poisson Equation, which is arguably one of the most elegant applications of the concept of a first variation. It is often covered in graduate-level partial differential equations courses, but the underlying concepts are relatively simple and largely build upon what I’ve talked about here.

It revolves around minimizing an integral expression known as the Dirichlet Energy. The statement of Dirichlet’s Principle for the Poisson Equation is as follows:

Image courtesy of TeXeR
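For reference, the statement (following Evans, Chapter 2) involves the Dirichlet Energy I and the admissible class 𝒜:

```latex
I[w] := \int_U \Big( \tfrac{1}{2}\,|\nabla w|^2 - w f \Big)\, dx,
\qquad
\mathcal{A} := \big\{ w \in C^2(\overline{U}) : w = g \ \text{on } \partial U \big\},
\qquad
-\Delta u = f \ \text{in } U \;\Longleftrightarrow\; I[u] = \min_{w \in \mathcal{A}} I[w].
```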

This result is really saying two things at once: a minimizer will satisfy the Poisson Equation, and vice versa. An admissible set is simply the set of functions that we are comparing (at the very least, all of the quantities appearing in the functional, namely the derivatives, must be well-defined). Let’s first prove that a given minimizer automatically satisfies the Poisson Equation.

Image courtesy of TeXeR
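In our one-dimensional setting, the first-variation computation at the heart of this direction runs roughly as follows (a sketch; the full argument is in Evans). Take a minimizer u and a test function φ vanishing at 0 and 1. Since u minimizes I, the scalar function i has a minimum at ε = 0, so:

```latex
i(\varepsilon) := I[u + \varepsilon\varphi]
= \int_0^1 \Big( \tfrac{1}{2}(u' + \varepsilon\varphi')^2 - (u + \varepsilon\varphi)\, f \Big)\, dx,
\qquad
0 = i'(0) = \int_0^1 \big( u'\varphi' - f\varphi \big)\, dx
= \int_0^1 \big( -u'' - f \big)\,\varphi\, dx.
```

The last equality integrates by parts, using φ(0) = φ(1) = 0 to kill the boundary terms. Since this holds for every such φ, the fundamental lemma of the calculus of variations forces −u′′ = f on (0, 1).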

At first glance, it may look like we’ve missed a step in completing this part of the proof. To satisfy Poisson’s Equation, we need to satisfy the boundary conditions, too. When did we satisfy this stipulation?

Think for a moment. Read the proof again.

Then you may realize we satisfied the boundary condition at the very beginning, pretty much for free. The admissible class only contains functions that satisfy the boundary conditions. Since we picked a minimizer from this admissible class, it automatically satisfies them.

Now, we prove the second part of the Dirichlet Principle: if a function satisfies Poisson’s Equation, it then must be a minimizer for the problem.

Image courtesy of TeXeR

Since we said that the Poisson Equation has at most one solution, the Dirichlet Principle implies that for any given data there is at most one minimizer of the functional. I’d call this a corollary of the Dirichlet Principle for Poisson’s Equation, and it includes the special case of the Laplace Equation (f = 0).
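None of this requires a computer, but the minimizing property is fun to see numerically. The sketch below (the grid size, the data f, and the perturbations are all arbitrary choices of mine) discretizes the one-dimensional problem with zero boundary data, solves −u′′ = f by finite differences, and checks that perturbing the solution can only raise the discrete Dirichlet Energy:

```python
import numpy as np

n = 200                       # number of interior grid points on (0, 1)
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)  # interior nodes
f = np.sin(np.pi * x)         # example data (any continuous f would do)

# Standard second-difference matrix for -u'' with zero boundary values
A = (np.diag(2 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2
u = np.linalg.solve(A, f)

def dirichlet_energy(w):
    """Discrete version of I[w] = \u222b (1/2 (w')^2 - w f) dx, with w = 0 at 0 and 1."""
    w_full = np.concatenate(([0.0], w, [0.0]))  # attach boundary values
    grad = np.diff(w_full) / h                  # forward differences
    return 0.5 * h * np.sum(grad**2) - h * np.sum(w * f)

# Perturb the solution by functions vanishing at the boundary:
phi = np.sin(2 * np.pi * x)                     # a smooth test function
rng = np.random.default_rng(0)
bump = rng.standard_normal(n) * np.sin(np.pi * x)  # a rougher perturbation

E = dirichlet_energy(u)
assert E <= dirichlet_energy(u + 0.1 * phi)
assert E <= dirichlet_energy(u + 0.05 * bump)
print("energy of solution:", E)
```

The finite-difference solution minimizes the discrete energy exactly, because the gradient of the discrete energy with respect to the nodal values reproduces the same linear system Au = f.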

This is the content of the proof of Dirichlet’s Principle. There’s one last question I want to explore, one not covered explicitly in this framework in the Evans text: what is the second variation for this problem? Let’s do a little extra calculation.
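Differentiating i(ε) = I[u + εφ] a second time, the linear terms drop out, and for every test function φ we get:

```latex
i''(\varepsilon) = \int_0^1 (\varphi')^2 \, dx \;\ge\; 0.
```

The second variation is nonnegative, and in fact independent of ε, because the Dirichlet Energy is quadratic in its argument. This is the functional analogue of the graph being concave up, and this convexity is ultimately why the minimizer is unique.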

As a side note, the idea behind Dirichlet’s Principle can be extended to other partial differential equations in higher dimensions, in particular other elliptic problems. Chapter 8 of the text by Evans explores these in greater detail.

Acknowledgments: the exposition on Dirichlet’s Principle is largely based on the text “Partial Differential Equations” by Lawrence C. Evans (2nd edition, AMS). I also include some additional exposition on the Dirichlet Principle in this post on Quora.



Created by

Joshua Siktar

Ph.D. Candidate, Applied Mathematics, University of Tennessee-Knoxville | B.S. Mathematics, Carnegie Mellon | Facilitator of Modernization of Education






