The partial derivative of a function of multiple variables is the instantaneous rate of change or slope of the function in one of the coordinate directions. Computationally, partial differentiation works the same way as single-variable differentiation with all other variables treated as constant.
Partial derivatives are ubiquitous throughout equations in fields of higher-level physics and engineering including quantum mechanics, general relativity, thermodynamics and statistical mechanics, electromagnetism, fluid dynamics, and more. Often, they appear in partial differential equations, which are differential equations containing more than one partial derivative.
The definition of the partial derivative in the $x$-direction of a function $f(x, y)$ of two variables $x$ and $y$ is

$$\frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{f(x + h,\, y) - f(x,\, y)}{h}.$$
Extending this definition to more than two variables is straightforward: all variables besides the variable in the derivative are held fixed in the definition above. The same holds true for partial derivatives in directions other than the $x$-direction: for instance, for the partial derivative in the $y$-direction the first term in the numerator above is taken to be $f(x,\, y + h)$. Note that this definition is the usual difference quotient defining the slope of a tangent line, except with the extra variables held fixed.
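The limit definition can be checked numerically by evaluating the difference quotient for a small step $h$. The sketch below uses an illustrative function $f(x, y) = x^2 y$ (chosen here, not taken from the text):

```python
def f(x, y):
    # Illustrative function (an assumption for this sketch): f(x, y) = x^2 * y
    return x**2 * y

def partial_x(f, x, y, h=1e-6):
    # The difference quotient from the definition, with y held fixed
    return (f(x + h, y) - f(x, y)) / h

# Analytic answer for comparison: d/dx (x^2 y) = 2 x y
x, y = 1.5, 2.0
print(partial_x(f, x, y), 2 * x * y)  # the two values agree as h -> 0
```

Shrinking `h` further brings the difference quotient closer to the analytic value, up to floating-point roundoff.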
Notation for the partial derivative varies significantly depending on context. A frequently used shorthand notation in physics for the left-hand side above includes $\partial_x f$, while mathematicians will often write $f_x$ (although this can be ambiguous). In contexts where $f$ is a function of space $x$ and time $t$, the dot derivative $\dot{f}$ will typically denote the partial derivative with respect to time, $\frac{\partial f}{\partial t}$, while the prime derivative $f'$ will typically denote the partial derivative with respect to space, $\frac{\partial f}{\partial x}$.
All of the usual rules for the ordinary single-variable derivative carry over in the case of partial derivatives:
- Linearity: $\partial_x (af + bg) = a\,\partial_x f + b\,\partial_x g$, where $a, b$ are constants and $f, g$ are functions.
- Product Rule: $\partial_x (fg) = (\partial_x f)\, g + f\, (\partial_x g)$.
- Chain Rule: $\partial_x f\big(g(x, y)\big) = f'\big(g(x, y)\big)\, \partial_x g(x, y)$.
Since the quotient rule and other rules, such as the power rule, rules for differentiating logarithms and trig functions, etc., can be derived from the above using Taylor series, they are not listed separately, but all such rules still hold.
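These rules can be spot-checked numerically. The sketch below verifies the product rule with central differences for two sample functions chosen here for illustration (not taken from the text):

```python
import math

def d_x(F, x, y, h=1e-6):
    # Central-difference approximation to the partial derivative in x
    return (F(x + h, y) - F(x - h, y)) / (2 * h)

# Sample functions (chosen for illustration)
f = lambda x, y: math.sin(x * y)
g = lambda x, y: x**2 + y

x, y = 0.7, 1.3

# Product rule: d_x(f g) = (d_x f) g + f (d_x g)
lhs = d_x(lambda a, b: f(a, b) * g(a, b), x, y)
rhs = d_x(f, x, y) * g(x, y) + f(x, y) * d_x(g, x, y)
print(abs(lhs - rhs))  # ~0 up to floating-point error
```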
Compute the partial derivative in the $x$-direction of

$$f(x, y) = \sin(xy),$$

first directly from the definition and second by treating the partial derivative as an ordinary derivative with all other variables held constant.
Plugging into the definition gives

$$\frac{\partial f}{\partial x} = \lim_{h \to 0} \frac{\sin\big((x + h)y\big) - \sin(xy)}{h} = \lim_{h \to 0} \frac{\sin(xy + hy) - \sin(xy)}{h}.$$
Now, using the trigonometric identity for the sine of a sum, the right-hand side above can be rewritten as

$$\lim_{h \to 0} \frac{\sin(xy)\cos(hy) + \cos(xy)\sin(hy) - \sin(xy)}{h} = \sin(xy) \lim_{h \to 0} \frac{\cos(hy) - 1}{h} + \cos(xy) \lim_{h \to 0} \frac{\sin(hy)}{h}.$$
Evaluating the limits on the right-hand side with L'Hopital's rule, one finds

$$\frac{\partial f}{\partial x} = \sin(xy) \cdot 0 + \cos(xy) \cdot y = y\cos(xy).$$
Computing the partial derivative instead by treating the $y$-variable as a constant, the chain rule gives

$$\frac{\partial f}{\partial x} = \frac{\partial}{\partial x} \sin(xy) = y\cos(xy),$$
in agreement with the first approach. This example demonstrates that the second approach (treating all variables not involved in the derivative as constant) is typically more expedient than computing the partial derivative directly from the definition.
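Taking $f(x, y) = \sin(xy)$ as an illustrative example function, a difference quotient should reproduce the chain-rule answer $\partial f/\partial x = y\cos(xy)$:

```python
import math

# Assumed example function for this sketch: f(x, y) = sin(x y)
def f(x, y):
    return math.sin(x * y)

def partial_x(x, y, h=1e-7):
    # Central difference quotient in x, with y held fixed
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

x0, y0 = 0.5, 2.0
approx = partial_x(x0, y0)
exact = y0 * math.cos(x0 * y0)  # the chain-rule result y cos(x y)
print(approx, exact)  # the two values agree
```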
The original function and its partial derivative are graphed below for illustration:
What is the partial derivative in the -direction of the function evaluated at the point
The partial derivative everywhere is given by
Evaluating at gives
Since there is one partial derivative operator for each coordinate, the partial derivatives of a function can be arranged as a vector, called the gradient and denoted $\nabla f$:

$$\nabla f = \left(\frac{\partial f}{\partial x},\ \frac{\partial f}{\partial y},\ \frac{\partial f}{\partial z}\right).$$
The definition above is written for the three-dimensional case, but the generalization to arbitrary dimensions (including only two dimensions) is straightforward; each component of the vector is a partial derivative in an independent coordinate direction. The operator $\nabla$ is often called the gradient operator or the del operator. It can be treated as a vector of derivative operators:

$$\nabla = \left(\frac{\partial}{\partial x},\ \frac{\partial}{\partial y},\ \frac{\partial}{\partial z}\right).$$
By using the del operator in vector operations like the cross product and dot product, new types of derivative-like objects called the curl and divergence can be defined on vector fields in multivariable calculus.
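The divergence $\nabla \cdot \vec{v}$ can be approximated numerically by summing the central-difference partial of each component in its own coordinate direction. The field below, $\vec{v} = (x, y, z)$, is an illustrative assumption with divergence 3 everywhere:

```python
def divergence(v, p, h=1e-6):
    # Numerical divergence: the del operator dotted into v, i.e. the sum of
    # central-difference partials of each component in its own direction
    total = 0.0
    for i in range(3):
        plus, minus = list(p), list(p)
        plus[i] += h
        minus[i] -= h
        total += (v(*plus)[i] - v(*minus)[i]) / (2 * h)
    return total

# Illustrative field (an assumption for this sketch): v = (x, y, z)
v = lambda x, y, z: (x, y, z)
print(divergence(v, [1.0, 2.0, 3.0]))  # each partial contributes 1, so ~3
```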
Find the divergence of the vector field everywhere outside the origin.
Treating the del operator as a vector, the divergence is given by the dot product

$$\nabla \cdot \vec{v} = \frac{\partial v_x}{\partial x} + \frac{\partial v_y}{\partial y} + \frac{\partial v_z}{\partial z}.$$
Computing higher-order partial derivatives also works the same way as in single-variable calculus; simply apply the derivative operator multiple times:

$$\frac{\partial}{\partial x} \left(\frac{\partial f}{\partial x}\right) = \frac{\partial^2 f}{\partial x^2} = f_{xx},$$

where the rightmost expression is another way of writing the second partial derivative with respect to $x$. If the two derivative operators are not the same, the higher-order partial derivative is called a mixed partial derivative:

$$\frac{\partial}{\partial y} \left(\frac{\partial f}{\partial x}\right) = \frac{\partial^2 f}{\partial y\, \partial x}.$$
Typically (but not always!) the partial derivatives in different directions commute:

$$\frac{\partial^2 f}{\partial x\, \partial y} = \frac{\partial^2 f}{\partial y\, \partial x}.$$
This is true as long as both of the mixed partial derivatives are continuous.
Show that the mixed partial derivatives of the function

$$f(x, y) = \begin{cases} \dfrac{xy(x^2 - y^2)}{x^2 + y^2} & (x, y) \neq (0, 0) \\[2mm] 0 & (x, y) = (0, 0) \end{cases}$$

at the origin are not equal, i.e. the partial derivatives in the $x$- and $y$-directions do not commute.
Consider the restrictions of the partial derivatives of $f$ to the $x$- and $y$-axes. Restricted to the $y$-axis outside the origin, $\frac{\partial f}{\partial x}$ looks like

$$\frac{\partial f}{\partial x}(0, y) = -y,$$

and restricted to the $x$-axis outside the origin, $\frac{\partial f}{\partial y}$ similarly looks like

$$\frac{\partial f}{\partial y}(x, 0) = x.$$
At the origin, $\frac{\partial^2 f}{\partial y\, \partial x}$ corresponds to $\frac{\partial}{\partial y}$ of $\frac{\partial f}{\partial x}$, i.e. a derivative of $\frac{\partial f}{\partial x}$ taken along the $y$-axis. Similarly, $\frac{\partial^2 f}{\partial x\, \partial y}$ corresponds to $\frac{\partial}{\partial x}$ of $\frac{\partial f}{\partial y}$, i.e. a derivative of $\frac{\partial f}{\partial y}$ taken along the $x$-axis. Since only the restrictions of the first derivatives to the coordinate axes matter for the computation of the second derivatives at the origin, the expressions above suffice. The mixed partial derivatives are therefore

$$\frac{\partial^2 f}{\partial y\, \partial x}(0, 0) = \frac{\partial}{\partial y}(-y) = -1, \qquad \frac{\partial^2 f}{\partial x\, \partial y}(0, 0) = \frac{\partial}{\partial x}(x) = 1,$$

so the partial derivatives in the $x$- and $y$-directions do not commute.
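This failure to commute can be checked numerically, taking the standard counterexample $f(x, y) = xy(x^2 - y^2)/(x^2 + y^2)$ with $f(0, 0) = 0$ (assumed here to be the function in question). The mixed partials at the origin are built as difference quotients of the first partials:

```python
def f(x, y):
    # Standard counterexample (assumed for this sketch); f(0, 0) = 0
    if x == 0.0 and y == 0.0:
        return 0.0
    return x * y * (x**2 - y**2) / (x**2 + y**2)

h1 = 1e-4  # outer step, for the second derivative
h2 = 1e-8  # inner step, for the first derivative (much smaller than h1)

def f_x(x, y):
    return (f(x + h2, y) - f(x - h2, y)) / (2 * h2)

def f_y(x, y):
    return (f(x, y + h2) - f(x, y - h2)) / (2 * h2)

# Mixed partials at the origin, as difference quotients of the first partials
f_xy = (f_y(h1, 0.0) - f_y(-h1, 0.0)) / (2 * h1)  # d/dx of f_y: close to +1
f_yx = (f_x(0.0, h1) - f_x(0.0, -h1)) / (2 * h1)  # d/dy of f_x: close to -1
print(f_xy, f_yx)
```

The two mixed partials converge to $+1$ and $-1$ respectively, confirming that they do not agree at the origin.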
This failure to commute is a consequence of the fact that the mixed partial derivatives are not continuous at the origin, as can be seen from the graph of the function.
Just as the first-order partial derivatives could be arranged to form a vector (the gradient), the second-order partial derivatives can be arranged as a matrix called the Hessian matrix:

$$H = \begin{pmatrix} \dfrac{\partial^2 f}{\partial x^2} & \dfrac{\partial^2 f}{\partial x\, \partial y} \\[3mm] \dfrac{\partial^2 f}{\partial y\, \partial x} & \dfrac{\partial^2 f}{\partial y^2} \end{pmatrix}.$$
The determinant of the Hessian is important in characterizing the stability of critical points, where all of the first-order partial derivatives vanish, similar to the use of the second derivative test in single-variable calculus.
When dealing with functions of multiple variables, one often sees derivatives written with $d$ instead of $\partial$. These derivatives are called total derivatives and are distinct from partial derivatives. For instance, consider a function $f(x(t), t)$, where $x(t)$ is itself some time-dependent position. Since $x$ itself is time-dependent, the time-dependence of $f$ depends not only on the explicit form of $f$ but also on the path $x(t)$. This fact is captured by the total derivative

$$\frac{df}{dt} = \frac{\partial f}{\partial x}\frac{dx}{dt} + \frac{\partial f}{\partial t}.$$
The first term gives the implicit dependence of $f$ on $t$ through the time-dependence of $x$ via the chain rule, while the second term represents the explicit dependence of $f$ on $t$.
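The chain-rule decomposition of the total derivative can be verified numerically. The choices $f(x, t) = t x^2$ and path $x(t) = \sin t$ below are illustrative assumptions:

```python
import math

# Illustrative choices (assumptions): f(x, t) = t * x^2 and the path x(t) = sin(t)
f = lambda x, t: t * x**2
x = math.sin       # the path x(t)
dx_dt = math.cos   # its time derivative

def total_derivative(t, h=1e-6):
    # Direct derivative of the composite map t -> f(x(t), t)
    return (f(x(t + h), t + h) - f(x(t - h), t - h)) / (2 * h)

t = 0.8
# Chain-rule decomposition: (df/dx) dx/dt + df/dt = 2 t x cos(t) + x^2
chain = 2 * t * x(t) * dx_dt(t) + x(t)**2
print(total_derivative(t), chain)  # the two values agree
```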
Compute the total derivative with respect to of
where represents the path parameterized by .
The implicit dependence of on is
The explicit dependence of on is
So the total derivative is
There is an additional term from the time-dependence of the path that is not present in the partial derivative with respect to $t$ alone.
Usually, when computing partial derivatives, it is assumed that all coordinate directions but one are held fixed. However, in some cases (especially in thermodynamics), the quantity that is held fixed for a partial derivative is some combination of the other coordinate variables (like the entropy of an ideal gas, which is a function of temperature and volume). The notation for a partial derivative of a function $f$ with respect to the coordinate $x$, holding the quantity $g$ fixed, is

$$\left(\frac{\partial f}{\partial x}\right)_g.$$
Using this notation, one can write a convenient identity called the triple product identity for partial derivatives:

$$\left(\frac{\partial x}{\partial y}\right)_z \left(\frac{\partial y}{\partial z}\right)_x \left(\frac{\partial z}{\partial x}\right)_y = -1.$$
This identity generalizes to cyclic products of $n$ partial derivatives, with the right-hand side given by $(-1)^n$.
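The triple product identity can be checked numerically for the ideal gas law $pV = nRT$, writing each of $p$, $V$, $T$ as a function of the other two (the numerical values of $n$, $V$, and $T$ below are illustrative assumptions):

```python
# Numerical check of the triple product identity for the ideal gas law
# p V = n R T, writing each variable as a function of the other two.
n, R = 2.0, 8.314  # illustrative values (assumptions)
h = 1e-6

p_of = lambda V, T: n * R * T / V
V_of = lambda p, T: n * R * T / p
T_of = lambda p, V: p * V / (n * R)

V, T = 0.05, 300.0
p = p_of(V, T)

dp_dV = (p_of(V + h, T) - p_of(V - h, T)) / (2 * h)  # (dp/dV) at fixed T
dV_dT = (V_of(p, T + h) - V_of(p, T - h)) / (2 * h)  # (dV/dT) at fixed p
dT_dp = (T_of(p + h, V) - T_of(p - h, V)) / (2 * h)  # (dT/dp) at fixed V
print(dp_dV * dV_dT * dT_dp)  # close to -1
```

Note that each factor is computed holding a *different* variable fixed, which is exactly why the product is $-1$ rather than $+1$.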
The heat capacity of an ideal gas is defined as the partial derivative of heat input with respect to temperature change. That is, the heat capacity quantifies how responsive the temperature of an ideal gas is to addition of heat. However, depending on how the system is externally maintained, the heat capacity may take different values. Two standard definitions are the heat capacity at constant volume $C_V$ and the heat capacity at constant pressure $C_p$, defined by

$$C_V = \left(\frac{\partial Q}{\partial T}\right)_V, \qquad C_p = \left(\frac{\partial Q}{\partial T}\right)_p.$$
An extensive derivation shows that the difference between the two heat capacities is given by

$$C_p - C_V = T \left(\frac{\partial p}{\partial T}\right)_V \left(\frac{\partial V}{\partial T}\right)_p.$$
For an ideal gas, compute the difference between heat capacities explicitly in terms of the number of moles $n$ of gas and the gas constant $R$.
Computing each of the partial derivatives in the given formula using the ideal gas law $pV = nRT$,

$$\left(\frac{\partial p}{\partial T}\right)_V = \frac{nR}{V}, \qquad \left(\frac{\partial V}{\partial T}\right)_p = \frac{nR}{p}.$$
Plugging into the given formula, the difference of heat capacities is

$$C_p - C_V = T \cdot \frac{nR}{V} \cdot \frac{nR}{p} = \frac{n^2 R^2 T}{pV} = \frac{n^2 R^2 T}{nRT} = nR.$$
This result provides a concrete physical demonstration that the quantity held fixed in computing a partial derivative matters.
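The result $C_p - C_V = nR$ can be confirmed numerically using the relation $C_p - C_V = T\,(\partial p/\partial T)_V\,(\partial V/\partial T)_p$ (one standard form of the difference formula, assumed here); the values of $n$, $V$, and $T$ are illustrative:

```python
# Check C_p - C_V = n R for an ideal gas, using the relation (assumed here)
# C_p - C_V = T * (dp/dT at fixed V) * (dV/dT at fixed p).
n, R = 1.5, 8.314  # illustrative values
h = 1e-4

p_of = lambda V, T: n * R * T / V  # ideal gas law solved for p
V_of = lambda p, T: n * R * T / p  # ideal gas law solved for V

V, T = 0.02, 350.0
p = p_of(V, T)

dp_dT = (p_of(V, T + h) - p_of(V, T - h)) / (2 * h)
dV_dT = (V_of(p, T + h) - V_of(p, T - h)) / (2 * h)
print(T * dp_dT * dV_dT, n * R)  # both values are ~ n R
```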
Partial derivatives in mathematics and the physical sciences are often seen in partial differential equations, differential equations containing more than one partial derivative. Just as ordinary differential equations typically arise by modeling small changes in a system over a small interval of time or space, partial differential equations commonly arise by modeling small changes in a system over both time and space, especially when the change in time affects the change in space and vice versa. A few particularly important partial differential equations are listed below along with a description of the contexts in which each arises, illustrating that partial derivatives are essential for understanding various phenomena throughout the physical sciences:
The wave equation

$$\frac{\partial^2 u}{\partial t^2} = c^2 \frac{\partial^2 u}{\partial x^2}$$

describes the propagation of oscillations of amplitude $u$ of some object, including light waves, pressure waves (sound), and oscillations of physical objects like ropes. Solutions can generically be written as linear combinations of functions $f(x - ct)$ and $g(x + ct)$ describing the propagation of a wave to the right or left in time with velocity $c$.
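The traveling-wave form of the solution can be verified numerically: any $u(x, t) = f(x - ct) + g(x + ct)$ should satisfy $u_{tt} = c^2 u_{xx}$. The choices of $f$, $g$, and $c$ below are illustrative assumptions:

```python
import math

# Check numerically that u(x, t) = f(x - c t) + g(x + c t) solves the wave
# equation u_tt = c^2 u_xx, for illustrative choices of f, g, and c.
c = 2.0
f = lambda s: math.exp(-s**2)
g = math.sin
u = lambda x, t: f(x - c * t) + g(x + c * t)

h = 1e-4
x, t = 0.3, 0.7
u_tt = (u(x, t + h) - 2 * u(x, t) + u(x, t - h)) / h**2
u_xx = (u(x + h, t) - 2 * u(x, t) + u(x - h, t)) / h**2
print(u_tt, c**2 * u_xx)  # the two sides agree
```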
The Schrödinger equation

$$i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m} \frac{\partial^2 \psi}{\partial x^2} + V(x)\,\psi$$

governs the evolution of wavefunctions $\psi$ in quantum mechanics describing particles of mass $m$ moving in a potential $V(x)$. It says that the "velocity" of the probability distribution for the location of a particle is proportional to the curvature of the wavefunction. Localizing a particle in space gives the wavefunction enormous curvature in a small region, so the probability distribution will want to move outward rapidly from this region, in a manifestation of the Heisenberg uncertainty principle.
The heat equation

$$\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}$$

is similar to the Schrödinger equation with the potential $V$ set equal to zero. It describes the diffusion of temperature $u$. Similar to the Schrödinger case, the rate at which heat expands outward is proportional to the second derivative or "curvature," where large curvature corresponds to a lot of heat being confined in a small region.
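This diffusive behavior can be seen in a minimal explicit finite-difference sketch (grid size, step sizes, and initial condition below are illustrative assumptions): a spike of heat spreads out and its peak decays.

```python
# Minimal explicit finite-difference sketch of the 1D heat equation
# u_t = k u_xx: an initial spike of heat spreads out and its peak decays.
k = 1.0
dx = 0.1
dt = 0.4 * dx**2 / k  # small enough for stability (dt <= dx^2 / (2 k))
n = 51

u = [0.0] * n
u[n // 2] = 1.0  # all the heat starts in the middle cell

for _ in range(100):
    new = u[:]
    for i in range(1, n - 1):
        new[i] = u[i] + k * dt / dx**2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new

print(max(u))  # far below the initial peak of 1.0
```

The time step obeys the standard stability bound $\Delta t \le \Delta x^2 / (2k)$ for this explicit scheme; larger steps cause the numerical solution to blow up.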
The Navier-Stokes equations

$$\frac{\partial u_i}{\partial t} + u_j \frac{\partial u_i}{\partial x_j} = -\frac{1}{\rho}\frac{\partial p}{\partial x_i} + \nu \frac{\partial^2 u_i}{\partial x_j\, \partial x_j} + f_i$$

are extremely important in fluid dynamics, for they constrain the velocity $u_i$ of a flowing liquid in the $i$th coordinate direction. The constant $\nu$ defines the viscosity, the pressure $p$ defines the work supplied to the system per unit volume, and $f_i$ represents applied accelerations to the system in the $i$th coordinate direction (e.g. from gravity).
Einstein Field Equations:

$$R_{\mu\nu} - \frac{1}{2} R\, g_{\mu\nu} = \frac{8\pi G}{c^4} T_{\mu\nu}$$

The Einstein field equations are a set of many coupled partial differential equations that solve for the geometry of spacetime in general relativity in terms of the matter/energy that is present in spacetime. The left-hand side above includes the expressions $R_{\mu\nu}$ and $R$, which are defined via a complicated combination of partial derivatives of the metric $g_{\mu\nu}$ of spacetime that governs how distances are measured.
The Black-Scholes equation

$$\frac{\partial V}{\partial t} + \frac{1}{2}\sigma^2 S^2 \frac{\partial^2 V}{\partial S^2} + rS\frac{\partial V}{\partial S} - rV = 0$$

is a well-known application of partial differential equations in finance. It models the price $V$ of a derivative investment in terms of variables that describe the value of the option: the standard deviation $\sigma$ of the stock's returns, the price $S$ of the stock, and the risk-free interest rate $r$.