
Help on Lagrange Multipliers

Hey friends, I have often seen many brilliant solutions featuring Lagrange multipliers, and they seem very useful for computing maxima and minima. But I know nothing about them. So can anyone suggest some sources on Lagrange multipliers? And any other works by Lagrange, with a brief description of them. Thanks in advance.

Note by Abhimanyu Swami
3 years ago




I found Lagrange multipliers suspicious back when I could have learned them for the first time, so I didn't learn them and always found a way around them.

Some years later I started scribbling on a napkin trying to figure out how I would look for constrained minima and realized what I was writing down had the same form as Lagrange multipliers.

I wrote the following to myself to remember the argument:

Suppose we'd like to find an extremum of \(F(x_1,x_2,x_3)\) under the constraint that \(G(x_1,x_2,x_3) = 0\). The constraint relation \(G\) implies that one of the variables is fully determined by the values of the other two. We can therefore treat \(x_3\) as a variable dependent on \(x_1\) and \(x_2\), which are first class variables, free to vary as they please.

With this in mind, the differential change in \(F\) under a small change in \(x_1\) alone (holding \(x_2\) fixed) is given by \[\frac{dF}{dx_1} = \frac{\partial F}{\partial x_1} + \frac{\partial F}{\partial x_3}\frac{dx_3}{dx_1}\] and is 0 at the extremum. As \(G = 0\) everywhere, the differential of \(G\) must satisfy \[dG = \frac{\partial G}{\partial x_1}dx_1 + \frac{\partial G}{\partial x_2}dx_2 + \frac{\partial G}{\partial x_3}dx_3 = 0.\] We are free to limit the variation to \(x_1\), i.e. set \(dx_2 = 0\), which leaves \[ dG = \frac{\partial G}{\partial x_1}dx_1 + \frac{\partial G}{\partial x_3}\frac{dx_3}{dx_1}dx_1 = 0,\] or \(\displaystyle \frac{dx_3}{dx_1} = -\frac{\partial G/\partial x_1}{\partial G/\partial x_3}\).

Going back to the change in \(F\), we now have \[\frac{dF}{dx_1} = \frac{\partial F}{\partial x_1} - \frac{\partial G}{\partial x_1}\frac{\partial F/\partial x_3}{\partial G/\partial x_3} = 0. \] Going through the same process for a change in \(x_2\), we have \[\frac{dF}{dx_2} = \frac{\partial F}{\partial x_2} - \frac{\partial G}{\partial x_2}\frac{\partial F/\partial x_3}{\partial G/\partial x_3} = 0. \] Curiously, \(\displaystyle\frac{\partial F/\partial x_3}{\partial G/\partial x_3}\) has exactly the form of the Lagrange multiplier \(\lambda\) that is found when optimizing by aligning the gradient vector of the constraint with that of the scalar function to be optimized.
As the constraint ensures that \(x_3\) is some function of \(x_1\) and \(x_2\), the two equations above, together with the constraint, form a solvable system. So we can say that \(\displaystyle\frac{\partial F/\partial x_3}{\partial G/\partial x_3}\) is really some function \(f(x_1,x_2)\) evaluated at the extremum \((x_1^*, x_2^*)\), and thus can be treated as an unknown constant \(\lambda\) that we solve for along with the constraint equation. This is the easier approach when actually solving problems, but it seems like magic when presented ad hoc, as it often is.
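The multiplier-as-ratio observation above can be checked symbolically. Here is a small sketch using sympy on a toy problem of my own choosing (not from the thread): minimize \(F = x_1^2+x_2^2+x_3^2\) subject to \(G = x_1+x_2+x_3-3 = 0\).

```python
# Sketch: verify that the Lagrange multiplier equals the ratio
# (dF/dx3)/(dG/dx3) at the extremum, for a hypothetical example:
# minimize F = x1^2 + x2^2 + x3^2 subject to x1 + x2 + x3 = 3.
import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam')
F = x1**2 + x2**2 + x3**2
G = x1 + x2 + x3 - 3

# Lagrange condition: grad F = lam * grad G, plus the constraint G = 0.
eqs = [sp.diff(F, v) - lam * sp.diff(G, v) for v in (x1, x2, x3)] + [G]
sol = sp.solve(eqs, [x1, x2, x3, lam], dict=True)[0]
print(sol)  # x1 = x2 = x3 = 1, lam = 2

# The multiplier equals the ratio (dF/dx3)/(dG/dx3) at the extremum:
ratio = (sp.diff(F, x3) / sp.diff(G, x3)).subs(sol)
print(ratio)  # 2
```

Here the extremum is at \(x_1=x_2=x_3=1\) with \(\lambda = 2\), and the ratio of partials evaluated there gives the same \(2\), as the derivation predicts.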

Josh Silverman Staff · 3 years ago


@Josh Silverman Thanks! Now can you suggest a source that works through a good number of examples with this tool? Abhimanyu Swami · 3 years ago


@Josh Silverman Looks like I'll leave Lagrange Multipliers alone until later, no matter how useful they are to my inequalities studies. Daniel Liu · 3 years ago


@Daniel Liu After a while, Lagrange becomes the "default" approach for constrained inequalities. Economists use it almost exclusively, and jump immediately to the Kuhn-Tucker conditions.

Be warned though, as with all approaches, you need to understand when it applies, and the various pitfalls. Very often, people fail to properly consider the boundary, especially on an unbounded domain. Calvin Lin Staff · 3 years ago


Basically, if you're given a constraint \(g(x_1,x_2,\dots,x_n)=k\) and you want to maximize \(f(x_1,x_2,\dots,x_n)\), then (wherever \(\nabla g \neq 0\)) the extremum occurs where \(\nabla f(x_1,x_2,\dots,x_n)=\lambda\nabla g(x_1,x_2,\dots,x_n)\). If you're not familiar with \(\nabla f(x_1,x_2,\dots,x_n)\), it's just the vector of the function's partial derivatives: \(\nabla(5xy+z^3)=\left<5y,5x,3z^2\right>\). So the strategy is to find \(\lambda\), or to find conditions on your variables, using that proportionality. Cody Johnson · 3 years ago
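The gradient in the example above can be checked mechanically; a quick sympy sketch (assuming only that the gradient is the vector of partial derivatives, as stated):

```python
# Verify the gradient example: grad(5xy + z^3) = <5y, 5x, 3z^2>.
import sympy as sp

x, y, z = sp.symbols('x y z')
expr = 5*x*y + z**3
grad = [sp.diff(expr, v) for v in (x, y, z)]
print(grad)  # [5*y, 5*x, 3*z**2]
```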

