Lagrange Multipliers
The method of Lagrange multipliers is a technique in mathematics for finding the local maxima or minima of a function subject to constraints. Lagrange multipliers are also used very often in economics to help determine the equilibrium point of a system, since economists are often interested in maximizing or minimizing a certain outcome. A classic example in microeconomics is the problem of maximizing consumer utility: a consumer wants to know how best to spend their disposable income on the acquisition of goods. Given the goods and prices available in the market, the consumer wants to determine the best basket.
Method of Solving
Suppose we want to find the extrema of a function $f(x, y, z)$ subject to a constraint $g(x, y, z) = c$. First we partially differentiate $f$ and $g$ with respect to every variable and write the following equations:

$$f_x = \lambda g_x, \qquad f_y = \lambda g_y, \qquad f_z = \lambda g_z.$$

Here, $\lambda$ is called the Lagrange multiplier. Now we have three equations, but there are four variables $(x, y, z, \lambda)$, so we use the given constraint $g(x, y, z) = c$ as the fourth equation. On solving these equations simultaneously, we get particular values of $x, y, z$, which can be plugged into $f$ to get the extremum value, if it exists.
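These steps can be sketched symbolically. The following is a minimal illustration with SymPy on a hypothetical toy problem (not from the text above): maximize $f(x, y) = xy$ subject to $x + y = 10$.

```python
import sympy as sp

# Hypothetical toy problem: maximize f(x, y) = x*y subject to x + y = 10.
x, y, lam = sp.symbols('x y lam', real=True)
f = x * y
g = x + y - 10  # constraint written as g(x, y) = 0

# The multiplier equations f_x = lam*g_x, f_y = lam*g_y, plus the constraint.
eqs = [sp.Eq(sp.diff(f, x), lam * sp.diff(g, x)),
       sp.Eq(sp.diff(f, y), lam * sp.diff(g, y)),
       sp.Eq(g, 0)]
sols = sp.solve(eqs, [x, y, lam], dict=True)
print(sols)  # single stationary point: x = y = 5, lam = 5
```

Solving the three equations simultaneously yields the single stationary point $x = y = 5$, which indeed maximizes $xy$ on the line $x + y = 10$.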
Find the dimensions of the box with the largest volume if the total surface area is $64 \text{ cm}^2$.
Here, $f(x, y, z) = xyz$ and $g(x, y, z) = 2xy + 2xz + 2yz = 64.$

Proceeding as above, we get

$$yz = 2\lambda(y + z), \quad (1)$$
$$xz = 2\lambda(x + z), \quad (2)$$
$$xy = 2\lambda(x + y), \quad (3)$$
$$2xy + 2xz + 2yz = 64. \quad (4)$$
On solving, there are two cases:

- $\lambda = 0$, which makes either $y = 0$ or $z = 0$ by equation (1), but this is not possible since the dimensions of a box must be positive.
- So the only possible solution left is $x = y = z.$ Since then $6x^2 = 64$ by equation (4), we have $x = y = z = \sqrt{\dfrac{32}{3}} \approx 3.266.$
Note: We should be a little careful here. Since we’ve only got one solution, we might be tempted to assume that these are the dimensions that will give the largest volume. The method of Lagrange multipliers will give a set of points that will either maximize or minimize a given function subject to the constraint, provided there actually are minimums or maximums.
The function itself, $f(x, y, z) = xyz$, will clearly have neither minimums nor maximums unless we put some restrictions on the variables. The only real restriction that we’ve got is that all the variables must be positive. This, of course, instantly means that the function does have a minimum, zero.
The function will not have a maximum if all the variables are allowed to increase without bound. That, however, can’t happen because of the constraint $2xy + 2xz + 2yz = 64$, i.e. $xy + xz + yz = 32$.
Here we’ve got the sum of three positive products $xy + xz + yz$ (because $x$, $y$, and $z$ are positive), and the sum must equal 32. So, if one of the variables gets very large, say $x$, then because each of the products $xy$ and $xz$ must be less than 32, both $y$ and $z$ must be very small to make sure the first two terms are less than 32. So, there is no way for all the variables to increase without bound, and thus it should make some sense that the function will have a maximum.
This isn’t a rigorous proof that the function will have a maximum, but it should help to visualize that in fact it should have a maximum, so we can say that we will get a maximum volume if the dimensions (in centimeters) are $x = y = z = \sqrt{\dfrac{32}{3}} \approx 3.266.$
Notice that we never actually found a value for $\lambda$ in the above example. This is fairly standard for these kinds of problems. The value of $\lambda$ isn’t really important to determining whether the point is a maximum or a minimum, so often we will not bother with finding a value for it. On occasion, we will need its value to help solve the system, but even in those cases, we won’t use it past finding the point.
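The box example can be checked symbolically. A minimal sketch with SymPy, assuming total surface area $64 \text{ cm}^2$ (so that $xy + xz + yz = 32$, matching the discussion above) and positive dimensions:

```python
import sympy as sp

# Box example: maximize V = xyz subject to 2xy + 2xz + 2yz = 64,
# with all dimensions (and the multiplier) positive.
x, y, z, lam = sp.symbols('x y z lam', positive=True)
V = x * y * z
g = 2*x*y + 2*x*z + 2*y*z - 64

eqs = [sp.Eq(sp.diff(V, v), lam * sp.diff(g, v)) for v in (x, y, z)]
eqs.append(sp.Eq(g, 0))
sols = sp.solve(eqs, [x, y, z, lam], dict=True)

side = sols[0][x]
print(side, float(side))  # sqrt(32/3), approximately 3.266
```

With positivity assumed, SymPy finds only the cube solution, in agreement with the hand computation.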
Conditioned Relative (Local) Extrema
Of all the rectangles that can be inscribed in an ellipse, find the rectangle with maximum area.
Let's suppose that the equation for the ellipse is $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1$.
The rectangle being sought, having its vertices on the ellipse, must be symmetrical about the coordinate axes, so it is sufficient to find the coordinates $(x, y)$ of the vertex located in the first quadrant. The area we want to maximize is then the function $S(x, y) = 4xy.$
In this particular case, we could solve the constraint for $y$, obtaining $y = b\sqrt{1 - x^2/a^2}$. Substituting into the area, the problem becomes maximizing the one-variable function $S(x) = 4bx\sqrt{1 - x^2/a^2}.$
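The substitution approach can be carried out directly; a quick sketch with SymPy, where $a$ and $b$ are the semi-axes of the ellipse:

```python
import sympy as sp

# Substitution approach: maximize S(x) = 4*b*x*sqrt(1 - x^2/a^2) for x > 0.
x, a, b = sp.symbols('x a b', positive=True)
S = 4 * b * x * sp.sqrt(1 - x**2 / a**2)

crit = sp.solve(sp.diff(S, x), x)      # critical points with x > 0
x_star = crit[0]                       # a/sqrt(2)
area = sp.simplify(S.subs(x, x_star))  # maximum area: 2*a*b
print(x_star, area)
```

The single positive critical point is $x = a/\sqrt{2}$, giving the maximum area $2ab$, which matches the multiplier computation carried out later in the article.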
But in general, the binding conditions may not allow us to solve for some variables in terms of the others. Hence the interest in studying the method of Lagrange multipliers.
Problem Statement
Let $A$ be an open set in $\mathbb{R}^n$ and let $m < n$. Let $f \in C^1(A)$ and $g_1, \dots, g_m \in C^1(A)$ (functions with continuous partial derivative functions) such that the rank of the Jacobian matrix

$$\left( \frac{\partial g_i}{\partial x_j} \right)_{\substack{i = 1, \dots, m \\ j = 1, \dots, n}}$$

is $m$. Let $S = \{ x \in A : g_1(x) = \cdots = g_m(x) = 0 \}$ and let's suppose that $x_0 \in S$. It is said that the function $f$ has at $x_0$ a relative extremum conditioned by the equations $g_1 = \cdots = g_m = 0$ when there exists a neighborhood $V$ of $x_0$ in $A$ such that either $f(x) \le f(x_0)$ for all $x \in V \cap S$ or $f(x) \ge f(x_0)$ for all $x \in V \cap S$. In the first case $f(x_0)$ is a conditioned relative maximum, and in the second case $f(x_0)$ is a conditioned relative minimum.
Notice that saying relative is the same as saying local, and that saying conditioned refers to the ligature (constraint) conditions $g_1 = \cdots = g_m = 0$.
Theorem (Lagrange Multipliers)
Under the conditions of the statement of the problem, for the function $f$ to have a conditioned relative extremum at the point $x_0$ it is necessary that there exist real numbers $\lambda_1, \dots, \lambda_m$ such that the function

$$F = f + \lambda_1 g_1 + \cdots + \lambda_m g_m$$

fulfills $\nabla F(x_0) = 0$. The numbers $\lambda_1, \dots, \lambda_m$ are called Lagrange multipliers.
Let's suppose furthermore that $f, g_1, \dots, g_m \in C^2(A)$ and that at the point $x_0$ it is verified that $\nabla F(x_0) = 0$. Then for $f$ to have at $x_0$ a conditioned relative minimum (respectively maximum) it is sufficient that

$$d^2 F(x_0)(h) > 0 \quad (\text{resp.} < 0)$$

for every $h \neq 0$ such that $dg_i(x_0)(h) = 0$ for $i = 1, \dots, m$.
Before the long and not easy proof of this theorem, we are going to see two examples. Let's come back to the example above on conditioned relative extrema.
Examples
Of all the rectangles that can be inscribed in an ellipse, find the rectangle with maximum area.
Let $g(x, y) = \dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} - 1.$
We want to maximize $f(x, y) = 4xy$ (read the example above on conditioned relative extrema).
$$F = f + \lambda g = 4xy + \lambda\left( \frac{x^2}{a^2} + \frac{y^2}{b^2} - 1 \right).$$
For $\nabla F = 0$ to be fulfilled, it is necessary that:
$$\frac{\partial F}{\partial x} = 4y + \frac{2\lambda x}{a^2} = 0, \qquad \frac{\partial F}{\partial y} = 4x + \frac{2\lambda y}{b^2} = 0.$$
This, coupled with the ligature condition $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1$, must be fulfilled by the solutions we seek. The only solution of these equations in the first quadrant is $x_0 = \dfrac{a}{\sqrt{2}}$, $y_0 = \dfrac{b}{\sqrt{2}}$, with $\lambda = -2ab$. To know whether this is a maximum or a minimum, we compute the second-order differential. Consider
$$d^2F(x_0, y_0)(dx, dy) = \frac{2\lambda}{a^2}\,dx^2 + 8\,dx\,dy + \frac{2\lambda}{b^2}\,dy^2$$
and substitute the vectors $(dx, dy)$ that verify
$$dg(x_0, y_0)(dx, dy) = \frac{2x_0}{a^2}\,dx + \frac{2y_0}{b^2}\,dy = 0,$$
i.e.
$$dy = -\frac{b}{a}\,dx.$$
For these vectors,
$$d^2F(x_0, y_0)(dx, dy) = -\frac{4b}{a}\,dx^2 - \frac{8b}{a}\,dx^2 - \frac{4b}{a}\,dx^2 = -\frac{16b}{a}\,dx^2 < 0.$$
Therefore, the solution obtained is a maximum, i.e. the maximum area of a rectangle inscribed in the ellipse $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} = 1$ is $4x_0 y_0 = 2ab$. This implies that in a circle of radius 1 (taking $a = b = 1$), the rectangle with maximum area is the square with area 2.
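The multiplier computation for the inscribed rectangle can be reproduced with SymPy, keeping the semi-axes $a$ and $b$ symbolic:

```python
import sympy as sp

# Ellipse example: F = 4xy + lam*(x^2/a^2 + y^2/b^2 - 1), first quadrant.
a, b = sp.symbols('a b', positive=True)
x, y = sp.symbols('x y', positive=True)
lam = sp.symbols('lam', real=True)

F = 4*x*y + lam*(x**2/a**2 + y**2/b**2 - 1)
eqs = [sp.diff(F, x), sp.diff(F, y), x**2/a**2 + y**2/b**2 - 1]
sols = sp.solve(eqs, [x, y, lam], dict=True)

s = sols[0]
print(s[x], s[y], s[lam])            # a/sqrt(2), b/sqrt(2), -2ab
print(sp.simplify(4 * s[x] * s[y]))  # maximum area: 2ab
```

Restricting $x$ and $y$ to be positive selects the first-quadrant vertex, and the area $4xy$ at the stationary point simplifies to $2ab$.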
Exercise: Find the parallelepiped inscribed in the ellipsoid $\dfrac{x^2}{a^2} + \dfrac{y^2}{b^2} + \dfrac{z^2}{c^2} = 1$ with the maximum volume.
Answer: The maximum volume of the parallelepiped is $\dfrac{8abc}{3\sqrt{3}}.$
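As a spot check of the exercise in the special case $a = b = c = 1$ (a unit sphere), where the stated answer becomes $\dfrac{8}{3\sqrt{3}}$:

```python
import sympy as sp

# Parallelepiped in the unit sphere: with vertex (x, y, z) in the first
# octant and the box symmetric about the axes, maximize V = 8xyz
# subject to x^2 + y^2 + z^2 = 1.
x, y, z, lam = sp.symbols('x y z lam', positive=True)
V = 8 * x * y * z
g = x**2 + y**2 + z**2 - 1

eqs = [sp.Eq(sp.diff(V, v), lam * sp.diff(g, v)) for v in (x, y, z)]
eqs.append(sp.Eq(g, 0))
sols = sp.solve(eqs, [x, y, z, lam], dict=True)

vmax = sp.simplify(V.subs(sols[0]))
print(vmax)  # equals 8/(3*sqrt(3)), approximately 1.54
```

The stationary point is the cube $x = y = z = 1/\sqrt{3}$, and the resulting volume agrees with the stated formula at $a = b = c = 1$.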
Example with Multiple Constraints:
Determine the points that are on the cylinder with equation $x^2 + y^2 = 1$ and on the plane with equation $x + y + z = 1$ whose distance from the origin of coordinates is maximum or minimum.
The squared distance to the origin is $f(x, y, z) = x^2 + y^2 + z^2$; then with
$$F = x^2 + y^2 + z^2 + \lambda(x^2 + y^2 - 1) + \mu(x + y + z - 1),$$
the stationarity conditions
$$\frac{\partial F}{\partial x} = 2x(1 + \lambda) + \mu = 0, \qquad \frac{\partial F}{\partial y} = 2y(1 + \lambda) + \mu = 0, \qquad \frac{\partial F}{\partial z} = 2z + \mu = 0,$$
along with the ligature conditions
$$x^2 + y^2 = 1, \qquad x + y + z = 1,$$
form a system of equations which the points we seek must satisfy. Subtracting the first two stationarity conditions gives $2(1 + \lambda)(x - y) = 0$, so either $x = y$ or $\lambda = -1$. There are two solutions of this system if $x = y$:
$$P_1 = \left( \tfrac{\sqrt{2}}{2},\ \tfrac{\sqrt{2}}{2},\ 1 - \sqrt{2} \right), \qquad P_2 = \left( -\tfrac{\sqrt{2}}{2},\ -\tfrac{\sqrt{2}}{2},\ 1 + \sqrt{2} \right).$$
At this point, we could stop: we have analyzed the first (necessary) condition that a function must satisfy in order to have a relative maximum or minimum, and we could study these two examples without going on to the sufficient condition. Nevertheless, we are going to study the sufficient condition. The second-order differential of $F$ is
$$d^2F(dx, dy, dz) = 2(1 + \lambda)(dx^2 + dy^2) + 2\,dz^2.$$
Particularizing the ligature conditions at $P_1$, we get
$$dg_1 = 2x\,dx + 2y\,dy = \sqrt{2}\,(dx + dy) = 0, \qquad dg_2 = dx + dy + dz = 0,$$
whose solutions are $dy = -dx$ and $dz = 0$. Particularizing at $P_2$ we get the same solutions. Let's see what happens at $P_1$, where $1 + \lambda = z/x = \sqrt{2} - 2$:
$$d^2F(P_1)(dx, -dx, 0) = 2(\sqrt{2} - 2)\cdot 2\,dx^2 < 0,$$
so at $P_1$ we have a conditioned relative maximum, and at $P_2$, where $1 + \lambda = z/x = -\sqrt{2} - 2$:
$$d^2F(P_2)(dx, -dx, 0) = 2(-\sqrt{2} - 2)\cdot 2\,dx^2 < 0,$$
so at $P_2$ we have another conditioned relative maximum.
It only remains to examine what happens when $\lambda = -1$. In this case $\mu = 0$, hence $z = 0$, and the solutions obtained are $P_3 = (1, 0, 0)$ and $P_4 = (0, 1, 0)$. At these points the admissible vectors satisfy $dx = 0$ (resp. $dy = 0$) and $dz = -dy$ (resp. $dz = -dx$), so $d^2F = 2\,dz^2 > 0$: these solutions correspond to two conditioned relative minima.
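A problem of this type, with two constraints and two multipliers, can also be solved mechanically. A sketch with SymPy, assuming the concrete data of the classic version of this problem (the cylinder $x^2 + y^2 = 1$ and the plane $x + y + z = 1$):

```python
import sympy as sp

# Two constraints, two multipliers: F = f + lam*g1 + mu*g2, where
# f = x^2 + y^2 + z^2 is the squared distance to the origin.
x, y, z, lam, mu = sp.symbols('x y z lam mu', real=True)
f = x**2 + y**2 + z**2
g1 = x**2 + y**2 - 1   # cylinder
g2 = x + y + z - 1     # plane

F = f + lam*g1 + mu*g2
eqs = [sp.diff(F, v) for v in (x, y, z)] + [g1, g2]
sols = sp.solve(eqs, [x, y, z, lam, mu], dict=True)

dists = sorted(float(sp.sqrt(f.subs(s))) for s in sols)
print(len(sols), dists)  # 4 stationary points; smallest distance is 1
```

With this data the system has four stationary points: two at distance 1 from the origin (the minima) and two further away (the maxima), the farthest at distance $\sqrt{4 + 2\sqrt{2}}$.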
Proof (Lagrange Multipliers)
Proof of Lagrange Multipliers Theorem:
(Necessary Condition)
Let $x_0 = (x_{01}, \dots, x_{0n})$. Because the rank of the Jacobian matrix at $x_0$ is $m$, there is a square submatrix of it of order $m$ whose determinant is not 0; let's suppose that this submatrix is formed by the first $m$ columns, i.e.
$$\det\left( \frac{\partial g_i}{\partial x_j}(x_0) \right)_{i, j = 1, \dots, m} \neq 0.$$
Under these conditions, applying the implicit function theorem, there exist an open set $U \subseteq \mathbb{R}^{n-m}$ containing $(x_{0,m+1}, \dots, x_{0n})$ and a function $\varphi = (\varphi_1, \dots, \varphi_m) : U \to \mathbb{R}^m$ of class $C^1$ such that if $x = (x_1, \dots, x_n)$ with $(x_{m+1}, \dots, x_n) \in U$ and $x_i = \varphi_i(x_{m+1}, \dots, x_n)$ for $i = 1, \dots, m$, then we have $g_1(x) = \cdots = g_m(x) = 0$.
Let now $t = (x_{m+1}, \dots, x_n)$, $t_0 = (x_{0,m+1}, \dots, x_{0n})$, and consider the function $\Phi : U \to \mathbb{R}^n$ given by $\Phi(t) = (\varphi_1(t), \dots, \varphi_m(t), t)$. This function is an injective function (one to one), $\Phi(t_0) = x_0$, and $\Phi(t) \in S$ for every $t \in U$. Let now the function $h = f \circ \Phi : U \to \mathbb{R}$. If $f$ has at $x_0$ a conditioned relative extremum, then $h$ has at $t_0$ an ordinary relative extremum of the same nature. Indeed, by hypothesis there exists a neighborhood $V$ of $x_0$ (we can suppose that $V \subseteq A$) such that $f(x) - f(x_0)$ has a constant sign for $x \in V \cap S$. Due to $\Phi$ being a continuous function, $W = \Phi^{-1}(V)$ will be a neighborhood of $t_0$, and if $t \in W$, then $\Phi(t) \in V \cap S$, so
$$h(t) - h(t_0) = f(\Phi(t)) - f(x_0)$$
will have the same constant sign as $f(x) - f(x_0)$; therefore $h$ has an ordinary relative extremum at $t_0$ of the same nature as the conditioned relative extremum that $f$ has at $x_0$. In particular,
$$\frac{\partial h}{\partial x_j}(t_0) = 0, \qquad j = m+1, \dots, n.$$
The last equality is broken down, by the chain rule, into
$$\sum_{i=1}^{m} \frac{\partial f}{\partial x_i}(x_0)\,\frac{\partial \varphi_i}{\partial x_j}(t_0) + \frac{\partial f}{\partial x_j}(x_0) = 0, \qquad j = m+1, \dots, n.$$
Differentiating the identities $g_k(\Phi(t)) = 0$ with respect to $x_j$ gives
$$\sum_{i=1}^{m} \frac{\partial g_k}{\partial x_i}(x_0)\,\frac{\partial \varphi_i}{\partial x_j}(t_0) + \frac{\partial g_k}{\partial x_j}(x_0) = 0, \qquad k = 1, \dots, m, \quad j = m+1, \dots, n.$$
Since what is intended is to determine the numbers $\lambda_1, \dots, \lambda_m$ so that $F = f + \sum_{k=1}^{m} \lambda_k g_k$ has a stationary point at $x_0$, we'll write $\nabla F(x_0) = 0$, i.e.
$$\frac{\partial f}{\partial x_j}(x_0) + \sum_{k=1}^{m} \lambda_k \frac{\partial g_k}{\partial x_j}(x_0) = 0, \qquad j = 1, \dots, n.$$
This is a system of $n$ equations with $m$ unknowns $\lambda_1, \dots, \lambda_m$. Taking its first $m$ equations, this system determines the unknowns uniquely, since the determinant of this subsystem is $\det\left( \partial g_k / \partial x_j(x_0) \right)_{k, j = 1, \dots, m} \neq 0$. This part of the theorem will be proved if we prove that these unknowns satisfy the remaining $n - m$ equations of the system. Indeed, for $j = m+1, \dots, n$, taking into account the two chain-rule identities above, we have
$$\frac{\partial f}{\partial x_j}(x_0) + \sum_{k=1}^{m} \lambda_k \frac{\partial g_k}{\partial x_j}(x_0) = -\sum_{i=1}^{m} \left( \frac{\partial f}{\partial x_i}(x_0) + \sum_{k=1}^{m} \lambda_k \frac{\partial g_k}{\partial x_i}(x_0) \right) \frac{\partial \varphi_i}{\partial x_j}(t_0) = 0,$$
because each parenthesis vanishes by the choice of $\lambda_1, \dots, \lambda_m$.
(Sufficient Condition)
Reductio ad absurdum.
If the function $f$ did not have at $x_0$ a conditioned relative minimum, there would exist for each $p \in \mathbb{N}$ a point $x_p \in S$, $x_p \neq x_0$, such that $\lVert x_p - x_0 \rVert < 1/p$ and $f(x_p) \le f(x_0)$. Take $h_p = x_p - x_0$, $\rho_p = \lVert h_p \rVert$, and $u_p = h_p / \rho_p$. Due to the compactness of the sphere of center $0$ and radius $1$ in $\mathbb{R}^n$, we deduce that a convergent subsequence can be extracted from $(u_p)$, converging to a point $u$ of the same sphere (let's suppose, so as not to complicate the notation, that it is the same sequence that converges to $u$). Since $x_p, x_0 \in S$, we have $F(x_p) = f(x_p)$ and $F(x_0) = f(x_0)$, so $F(x_p) - F(x_0) \le 0$ for every $p$. Applying Taylor's formula,
$$F(x_p) - F(x_0) = dF(x_0)(h_p) + \tfrac{1}{2}\,d^2F(\xi_p)(h_p),$$
where $\xi_p$ is in the segment determined by $x_0$ and $x_p$; by hypothesis $dF(x_0) = 0$, and so
$$0 \ge F(x_p) - F(x_0) = \tfrac{\rho_p^2}{2}\,d^2F(\xi_p)(u_p).$$
Since $\xi_p \to x_0$ and $u_p \to u$, the continuity of the second partial derivatives gives $d^2F(\xi_p)(u_p) \to d^2F(x_0)(u)$.
Admitting that $d^2F(x_0)(u) > 0$, it follows that from a certain value of $p$ onwards $d^2F(\xi_p)(u_p) > 0$, whence $F(x_p) - F(x_0) > 0$, which contradicts the condition $F(x_p) - F(x_0) \le 0$ that we obtained as a result of admitting that $f$ didn't have at $x_0$ a relative minimum. The demonstration will be completed once we prove that $d^2F(x_0)(u) > 0$. According to the hypothesis of the theorem, it will be sufficient to prove that $dg_i(x_0)(u) = 0$ for $i = 1, \dots, m$, since $u \neq 0$ due to $\lVert u \rVert = 1$. But $g_i(x_p) - g_i(x_0) = 0$, and applying the theorem of finite increments of differential calculus in several variables we'll have
$$0 = g_i(x_p) - g_i(x_0) = dg_i(\eta_p)(h_p) = \rho_p\,dg_i(\eta_p)(u_p),$$
where $\eta_p$ lies in the segment determined by $x_0$ and $x_p$.
Hence $dg_i(\eta_p)(u_p) = 0$, and taking limits we get, in fact, $dg_i(x_0)(u) = 0$.
References
- J. A. Fernández Viña, *Análisis matemático II: Topología y cálculo diferencial*, Ed. Tecnos.