Finding the largest and smallest possible values of a function, or **extremizing**, is one of the most important practical applications of calculus.

Finding the optimal values of a function of several variables is even more useful since many successful real-life models depend on two or more inputs.

This unit will build on our knowledge of extremizing single-variable functions to give us a glimpse of how we'll optimize many-variable functions in the Derivatives and Optimization chapters.

Imagine we have a box with base length \( x \) and height \( y.\) The box doesn't have a lid. The cost of manufacturing is directly proportional to the amount of material (or surface area) of the box.

What is the surface area of the box as a function of base width \( x \) and height \( y?\)

**Note:** Count the area of each box side only once.

The surface area \( A \) of the box depends on the base width \( x \) and the height \(y.\)

We express this relationship explicitly as \[ A(x,y) = x^2 + 4 x y.\] Writing the surface area as \( A (x,y) \) tells us that it is a function of both base width and height.
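As a quick check on this formula, note that the open-top box has a square base of area \( x \cdot x \) and four identical rectangular sides of area \( x \cdot y \) each:

\[ A(x,y) = \underbrace{x^2}_{\text{base}} + \underbrace{4 x y}_{\text{four sides}}. \]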

A company wants to produce such a box with base width at least 4 units and a height of at least 1 unit. If box material is 4 dollars per unit area, what's the cost of the cheapest box that can be produced?

**Hint:** This problem actually doesn't require any calculus, just some intuition.
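The intuition can be confirmed with a quick brute-force scan (just a sketch; the helper name `cost`, the grid bounds, and the step size are arbitrary illustrative choices): since the cost \( 4(x^2 + 4xy) \) only grows as either \( x \) or \( y \) grows, the cheapest allowed box should be the smallest one, \( x = 4, \ y = 1.\)

```python
# Brute-force check that the cheapest allowed box sits at the corner
# x = 4, y = 1. The upper grid bounds (10) are arbitrary illustration
# choices; the cost only increases as x or y increases.

def cost(x, y):
    """Cost in dollars: 4 dollars per unit area times A(x, y) = x^2 + 4xy."""
    return 4 * (x**2 + 4 * x * y)

best = min(
    (cost(4 + 0.05 * i, 1 + 0.05 * j), 4 + 0.05 * i, 1 + 0.05 * j)
    for i in range(121)   # x from 4 to 10
    for j in range(181)   # y from 1 to 10
)
print(best)  # (minimum cost, x, y)
```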

Now suppose the company wants to produce the cheapest possible box with a fixed volume of \( x^2 y = 4\) cubic units.

Besides being positive, there's no restriction on the base width \( x \) or the height \(y \) this time.

If the cost is \(C(x,y) = 4 x^2 + 16 xy,\) what is the minimum in this case?
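The constraint lets us eliminate \( y = 4/x^2,\) turning the cost into the single-variable function \( C(x) = 4x^2 + 64/x.\) A numerical sketch (the search interval and helper names are illustrative choices, not part of the original problem) locates the minimum:

```python
# With y = 4 / x^2 from the volume constraint, the cost becomes a
# function of x alone: C(x) = 4x^2 + 16x(4/x^2) = 4x^2 + 64/x.
# The search interval (0, 10] is an arbitrary illustrative choice.

def C(x):
    return 4 * x**2 + 64 / x

xs = [0.001 * k for k in range(1, 10001)]   # x in (0, 10]
x_best = min(xs, key=C)
print(x_best, C(x_best))  # minimum near x = 2, cost near 48
```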

A constraint on \(x \) and \(y\) together can reduce a two-variable function down to a single-variable function.

However, there are many examples of optimization problems where we have two legitimately independent variables \( x\) and \(y ;\) *i.e.* there's no constraint tying them together.

To close out this unit, we'll look at one such example. In the process we'll discover the need for a new kind of tool: the **partial derivative**.

We'll work our way up to our example by looking at an analogous problem first.

In an upcoming chapter, we'll join the Brilliant team on an expedition to chart the bottom of an unexplored lakebed.

A series of boats move across the lake's surface, lowering bobs to measure the depth of the bed. Here's a table displaying the depths recorded by each boat in the survey.

Find the largest overall depth by finding the largest depth in each row, then taking the maximum of these values.

To find the largest depth in the table, we maximized row by row, and then took the largest of these values to find the global maximum depth.

The depth table data was actually generated by sampling the function \[ f(x,y) = 2 x y e^{1-x^2-y^2} \ \ \text{for} \ \ -2 \leq x \leq 2, \ 0 \leq y \leq 2, \] so we expect it has a maximum. Let's combine our table strategy with what we know of single-variable calculus to find it.

By analogy, let's fix one variable, say \( y\), though we could also fix \(x\) and get the same answer. We'll use \( y_{0} \) to remind ourselves that the \(y\) value is fixed.

Find the maximum value of \[ f(x,y_{0}) = \left (2 y_{0} e^{1-y_{0}^2} \right) x e^{-x^2},\ \ -2 \leq x \leq 2. \]

For a given \( 0 \leq y \leq 2,\) the maximum value of \[ f(x,y) = 2 x y e^{1-x^2-y^2} \ \ \text{for} \ \ -2 \leq x \leq 2 \] is given by \( M(y) = \sqrt{2} y e^{\frac{1}{2}-y^2}.\) By analogy, this is like compiling the list

| Boat | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- |
| Largest Depth Recorded | 130 | 214 | 199 | 133 | 157 | 130 |

for the depth table
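The formula for \( M(y) \) follows from single-variable calculus applied to the fixed-\(y\) slice. Differentiating the \(x\)-dependent factor,

\[ \frac{d}{dx}\left( x e^{-x^2} \right) = \left( 1 - 2x^2 \right) e^{-x^2}, \]

which vanishes on \( -2 \leq x \leq 2 \) at \( x = \pm \tfrac{1}{\sqrt{2}}\) (the endpoint values \( \pm 2 e^{-4} \) are smaller in size). Since the coefficient \( 2 y_{0} e^{1-y_{0}^2} \) is nonnegative for \( 0 \leq y_{0} \leq 2,\) the maximum occurs at \( x = \tfrac{1}{\sqrt{2}},\) where \( x e^{-x^2} = \tfrac{1}{\sqrt{2}} e^{-\frac{1}{2}}.\) Therefore

\[ M(y) = \left( 2 y e^{1-y^2} \right) \cdot \frac{e^{-\frac{1}{2}}}{\sqrt{2}} = \sqrt{2}\, y\, e^{\frac{1}{2}-y^2}. \]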

Where is \( M(y) = \sqrt{2} y e^{\frac{1}{2}-y^2} \) maximized on \( 0 \leq y \leq 2\, ?\)

Now for the bad news: our approach only works because \( f \) and its domain have very special properties. We need a versatile method for finding extrema of general multivariable functions.

Here's an equivalent way of looking at how we optimized \[ f(x,y) = 2 x y e^{1-x^2-y^2} \ \ \text{for} \ \ -2 \leq x \leq 2, \ 0\leq y \leq 2: \] Let's introduce a new kind of derivative \( \frac{\partial f}{\partial x},\) meaning we take the derivative of \( f\) as if it were just a function of \(x;\) \(y\) is fixed. Similarly, \( \frac{\partial f}{\partial y}\) means we differentiate with respect to \( y \) while holding \( x \) fixed.

Then to optimize \( f ,\) we really solved \( \frac{\partial f}{\partial x} = 0, \ \frac{\partial f}{\partial y} = 0, \) and then checked these critical values against the values of \( f\) on the edges of the rectangle \(-2 \leq x \leq 2, \ 0 \leq y \leq 2\) forming the boundary of \( f\)'s domain.
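As a sanity check on this recipe, here's a brute-force numerical sketch (the grid resolution and helper names are illustrative choices): solving \( \frac{\partial f}{\partial x} = 0 \) and \( \frac{\partial f}{\partial y} = 0 \) by hand gives the interior critical point \( \left( \tfrac{1}{\sqrt{2}}, \tfrac{1}{\sqrt{2}} \right),\) where \( f = 1,\) and a dense scan of the rectangle agrees.

```python
import math

# Scan f(x, y) = 2xy e^(1 - x^2 - y^2) over the rectangle
# -2 <= x <= 2, 0 <= y <= 2. Solving f_x = 0 and f_y = 0 by hand
# gives the interior critical point (1/sqrt(2), 1/sqrt(2)), where
# f = 1; the scan below should land very close to that value.

def f(x, y):
    return 2 * x * y * math.exp(1 - x**2 - y**2)

step = 0.005
best = max(
    f(-2 + step * i, step * j)
    for i in range(801)   # x from -2 to 2
    for j in range(401)   # y from 0 to 2
)
print(best)  # approximately 1
```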

Assuming this is the right way to optimize, what is the largest value of \[ f(x,y) = 6 - x^2-y^2 \] for points \( (x,y) \) inside the circle of radius \( 2\) centered at the origin?
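Following the recipe above (the grid step and names in this sketch are illustrative choices): \( \frac{\partial f}{\partial x} = -2x \) and \( \frac{\partial f}{\partial y} = -2y \) vanish only at the origin, where \( f = 6,\) while on the boundary circle \( x^2 + y^2 = 4 \) we have \( f = 6 - 4 = 2 \) everywhere.

```python
# f(x, y) = 6 - x^2 - y^2 on the disk x^2 + y^2 <= 4.
# Partial derivatives: f_x = -2x, f_y = -2y, so the only critical
# point is (0, 0), where f = 6. On the boundary x^2 + y^2 = 4,
# f = 6 - 4 = 2 everywhere, so the interior critical point wins.

def f(x, y):
    return 6 - x**2 - y**2

step = 0.01
best = max(
    f(-2 + step * i, -2 + step * j)
    for i in range(401)
    for j in range(401)
    if (-2 + step * i) ** 2 + (-2 + step * j) ** 2 <= 4
)
print(best)  # approximately 6
```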

By introducing **partial derivatives** \( \frac{\partial f}{\partial x }\) and \(\frac{\partial f}{\partial y},\) we took our first steps into the larger world of multivariable optimization. We'll have a whole chapter on these derivatives, but we first need to know more about functions and their graphs.

Graphs are visual tools that can help us in our quest for extrema. The surface below represents the graph of \( f(x,y) = 2 x y e^{1-x^2-y^2} \) on the larger square \( -2 \leq x, y \leq 2.\) The mountain tops correspond to maxima, and the pits represent minima.

Surface graphs can also help us begin to understand one of the other major pillars of multivariable calculus: **integration**. To graph surfaces, we need a coordinate system in space, which is the topic of our next unit. With this detour out of the way, we'll come back to many-variable integration to finish off our chapter.
