A Fermi estimate is one done using back-of-the-envelope calculations and rough generalizations to estimate values which would require extensive analysis or experimentation to determine exactly.
Physics is celebrated for its ability to make extremely accurate predictions about tough problems such as the magnetic moment of the electron, the deflection of light by the Sun's gravity, or the orbits of the planets around the Sun. However, accuracy often comes at the cost of great difficulty in calculation.
For example, calculating even a poor estimate for the temperature at which a Bose-Einstein condensate forms requires about 15 years of preparation for most people, and an accurate calculation for the flight of a commercial jet is entirely intractable without the aid of sophisticated software systems that handle the difficult numerical calculations. If a nice, aesthetically pleasing analytical result were required, much of physics would never have happened. Indeed, the seduction of formal calculation can be a serious hindrance. Whenever the math gets out of hand, it is usually a good idea to relax our demands and accept an approach that, while imprecise, offers a prayer of moving forward.
If the cost of the wars in Afghanistan and Iraq were factored into the price of gasoline for a taxpayer in the United States, what would the effective war tax be per gallon? On the basis of energy per person, which is more effective, solar power or wind? How many cracked iPhone screen repairmen are there in the United States? Each of these questions is probably completely bewildering, at least if you try to guess the answer in one step. Who has an intuition for any of these numbers? They are outside normal experience for several reasons. In the context of the question of a war tax:
- Some of the numbers involved are incredibly big, and humans are poor judges of large numbers.
- Some factors are completely unknown, such as the number of miles driven by a typical person per year, the size of the war's black budget (clandestine operations), or the stability of OPEC in the absence of United States foreign policy, any of which could significantly affect an accurate answer.
However, these concerns can be addressed by breaking the big problem into appropriate sub-problems.
How much revenue does a typical MLB team make per season?
The main question is quite a tall order, but some reasonable assumptions can be made.
One of the major sources of team revenue must be ticket sales, or else they'd just play the games in one place and broadcast the proceedings to the respective cities. So, assume that ticket sales make up essentially all of the revenue of a typical MLB team (this can always be revisited if it turns out to be wildly off).
How much does a team make from ticket sales per year? Most MLB stadiums have room for roughly 50,000 fans. Assume that on most days the stadium is approximately full: teams probably plan their capacity so that the stadium doesn't look too empty on TV, but still has room for extra fans should there be heightened interest.
A fan can get a horrible seat for a team in a reasonably sized market for about $20 (personal estimate from going to Los Angeles Angels games). An extremely nice seat can cost as much as $2500. Let's assume that the distribution is asymmetrical and that most seats are just okay, so that the average ticket costs only slightly more than the ultra-bad seat, say $30.
Finally, each team plays 162 games per year and, on average, splits the proceeds of each game with the opposing team. Thus, the annual revenue of a typical MLB team is approximately 50,000 × $30 × 162 / 2 ≈ $120 million.
The real answer is $226 million, which means the estimate is wonderfully accurate, certainly more so than it deserves to be.
Without knowing anything about the actual revenue of any given MLB team, we were able to break the big problem down into several more manageable sub-problems that we could reason about with our common sense:
- stadium capacity
- typical attendance
- ticket prices
- the number of games in a season
- a reasonable guess for the contribution of ticket sales to revenue
By multiplying the answers to all five sub-problems, we arrive at a remarkably good first approximation for team revenue.
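As a quick sanity check, the whole chain can be multiplied out in a few lines of code. Every input below is one of the rough guesses from the discussion, not an official figure:

```python
# Fermi estimate of a typical MLB team's annual ticket revenue.
# All inputs are rough guesses, not official data.
stadium_capacity = 50_000      # seats in a typical MLB stadium
attendance_fraction = 1.0      # assume games are roughly sold out
avg_ticket_price = 30          # dollars; slightly above the worst seat
games_per_season = 162         # MLB regular season
revenue_share = 0.5            # each game's gate is split with the opponent

revenue = (stadium_capacity * attendance_fraction * avg_ticket_price
           * games_per_season * revenue_share)
print(f"Estimated annual ticket revenue: ${revenue / 1e6:.0f} million")
```

Note that changing any single input by a modest factor moves the final figure by the same factor, which is why the estimate survives imprecision in each individual piece.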
A Fermi problem is an approximate, back-of-the-envelope calculation of an arbitrary figure, facilitated by identifying suitable factors of the figure that are accessible to common experience. Depending on the difficulty of the problem and the number of sub-problems required to get in touch with common sense, one can usually hope to be correct to within a factor of 2 or 3, and at other times only to within the correct order of magnitude.
You are an astronaut, it is a few hours before you are scheduled to launch for an intergalactic space mission, and you're sitting on the shore of the Atlantic Ocean near Cape Canaveral, savoring the view one last time. This mission is especially trying; because of the relativistic speed of your spaceship, Earth will be one billion years into the future upon your return.
Depressed beyond words, you decide, in one of your weaker moments, to pee into the Atlantic Ocean.
Flash forward: 1 billion years.
You return to Earth, expecting a hero's welcome, but instead you find that all of humanity has vanished. The Earth is now run by a peaceful clan of telekinetic dolphins who made off with the lion's share of Bitcoins that were abandoned by the last humans as they uploaded their souls to the singularity server. In a disillusioned haze, you bend down and fill your astronaut survival cup with refreshing lake water, hoping for some clarity. After draining the cup, you realize that this far into the future, all the Earth's water has been thoroughly mixed since the time you took off on your mission.
Approximately how many molecules of your billion-year-old urine did you just consume?
Assumptions and details
- You released 1 L of urine into the Atlantic Ocean.
- Approximate your urine as consisting of pure water.
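With these assumptions, the estimate reduces to a dilution calculation. The sketch below assumes 1 L of released urine, a 0.25 L cup, and roughly 1.4×10²¹ L of water on Earth (dominated by the oceans); all three are round-number guesses:

```python
# Rough count of billion-year-old urine molecules in one cup of water,
# assuming 1 L of urine (treated as pure water) mixed uniformly into
# all of Earth's water (~1.4e21 L).
AVOGADRO = 6.022e23
MOLAR_MASS_WATER = 18.0        # g/mol
urine_volume_l = 1.0
earth_water_l = 1.4e21
cup_volume_l = 0.25

# molecules in the original liter of urine (1 L of water is ~1000 g)
urine_molecules = 1000 / MOLAR_MASS_WATER * AVOGADRO   # ~3.3e25
# the cup holds a fraction cup/earth of all of Earth's water
molecules_in_cup = urine_molecules * cup_volume_l / earth_water_l
print(f"~{molecules_in_cup:.0f} molecules")
```

The striking part is that the answer comes out in the thousands: Avogadro's number is so large that a single liter, diluted into every drop of water on Earth, still leaves a trace in every cup.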
How many iPhone screen repairmen are there in the United States?
An important number for this problem is the number of iPhones in the United States. We can arrive at this number in two ways.
First, from common experience, it seems about 1 in 2 people has a smartphone, and those without one tend to be the very young or the very old. Furthermore, it is common to hear that Android phones dominate the marketplace, so let's estimate the Android share to be about two thirds and the iPhone share to be about one third, approximating Windows, Blackberry, and other platforms as minor players in the market. With a U.S. population of about 300 million, this yields roughly 50 million iPhones.
Another way is to think about the people you see on the street and try to directly estimate how many of them have an iPhone. This number seems to be around 1 in every 5 people, which would make about 60 million iPhones in the country. Thus, an estimate in the range of fifty to sixty million seems approximately right.
How many of these are cracked? Phone contracts usually last for two years, and most people take their upgrade, so let's assume that the typical iPhone spends 2.5 years with the customer before being replaced. Now, how many of these screens will be cracked over the lifetime of the phone? I'd guess that this number sits somewhere around 20%.
Not every cracked screen gets replaced, or else we wouldn't see them around so often. Let's assume that if a crack happens in the first half of the time for which the customer owns the phone, they'll get it fixed, but otherwise they'll just wait for a new phone. This means that in a given year, about 4% of iPhones (20% cracked × 1/2 repaired ÷ 2.5 years of ownership) will need their screens replaced, or roughly 2 million iPhone screens.
How many repairmen does this support? Let us assume that each iPhone screen takes the average repairman about an hour to fix, and that the average iPhone screen fixer spends about half of a full-time work week fixing iPhone screens (averaging over full- and part-time workers). At 20 hours per week, a repairman can fix on the order of 1,000 screens per year, so we predict there to be enough broken iPhones to support the employment of approximately 2,000 repairmen in the entire country.
This is roughly 10 times the number of Apple stores in the country, so it seems fairly reasonable.
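The full chain of guesses can be written out explicitly; every input below is one of the rough estimates from the text:

```python
# Chained Fermi estimate for the number of iPhone screen repairmen.
# Every number is a rough guess from the reasoning above.
us_population = 300e6
iphones = us_population / 5            # ~1 in 5 people, ~60 million
crack_rate = 0.20                      # fraction cracked over phone lifetime
repaired_fraction = 0.5                # only early cracks get repaired
phone_lifetime_years = 2.5

repairs_per_year = (iphones * crack_rate * repaired_fraction
                    / phone_lifetime_years)
hours_per_repair = 1.0                 # guessed average repair time
hours_per_repairman_year = 20 * 50     # half a work week, 50 weeks/year

repairmen = repairs_per_year * hours_per_repair / hours_per_repairman_year
print(f"~{repairmen:,.0f} repairmen")
```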
As we saw with the MLB revenue problem, breaking a big problem into small problems can be a big aid in tackling an estimate. However, there is potential for trouble. For instance, with the iPhone we are not necessarily getting more familiar with the numbers we have to estimate by breaking the problem down. Although the number of cracked iPhone screens is a simpler quantity than the number of screen repairmen outright, few people are familiar with the frequency of iPhone screen breaks. We hope this number will be accurate to within a factor of two (we guessed 20%, but it could easily be 10% or 30%). Similarly, we took a wild guess at the amount of time required to fix a given iPhone. As it turns out, the true frequency of screen breaks is close to one third of all iPhones, and the average repair time is closer to half an hour. Thus in one of the numbers we were too low by a factor of about 5/3, and in another we were too high by a factor of two, which means that, all told, those two factors place us about 20% too high in our estimate. With long strings of guesses, these errors can multiply and drag us away from the right answer. However, we are unlikely to have a consistent bias (too high or too low) across a series of unrelated sub-problems, and thus our mistakes will tend to balance each other out, as in a random walk: if we are too high in some numbers, we are likely to be too low in others.
Crudely, if we have a consistent uncertainty in estimating any single number, we can model each guess as a random number drawn from a Gaussian distribution about the true answer with a characteristic variance σ². Breaking our guess into N sub-problems means that our overall variance becomes Nσ² (the variance of a sum of independent random variables is additive), and thus the standard deviation (the square root of the variance) increases with the square root of N. This is the same behavior as a random walk (where the typical displacement grows as √t with time), which we might expect, since we hope to guess slightly too high or slightly too low at each number we estimate. Thus, when possible, we should avoid making too many sub-guesses.
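This error-growth argument is easy to check numerically. The sketch below is a hypothetical simulation (the value σ = 0.3 is chosen arbitrarily): it sums N independent Gaussian errors and measures the spread of the total.

```python
import random

# Simulate how estimation error grows with the number of sub-guesses.
# Each sub-guess is off by a Gaussian error with standard deviation sigma
# (think of these as multiplicative over/under-estimates in log-space).
random.seed(0)
sigma = 0.3
trials = 20_000

def total_error_std(n_subproblems):
    """Std dev of the combined error after summing n independent guesses."""
    totals = [sum(random.gauss(0, sigma) for _ in range(n_subproblems))
              for _ in range(trials)]
    mean = sum(totals) / trials
    return (sum((t - mean) ** 2 for t in totals) / trials) ** 0.5

for n in (1, 4, 16):
    print(n, round(total_error_std(n), 3))  # grows roughly like sqrt(n)*sigma
```

The measured spread grows from about σ for one guess to about 4σ for sixteen, matching the √N prediction.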
On the other hand, it is likely that our familiarity with the numbers in our sub-problems is significantly better than our knowledge of our big problem, so to a point it will always make sense to break up our big problems. It isn't possible to precisely model this trade-off between random walks and accurate knowledge, so knowing when to stop finding sub-problems is a matter of intuition and building experience in making accurate predictions of this kind.
Another crucial technique in good estimation is to ignore everything but what you deem to be the most important factor(s) in the problem. In the baseball problem, for example, we assumed the major source of revenue to be ticket sales, and we took all other sources of income (TV deals, the merchandise that people buy at the stadium or elsewhere) to be negligible. Likewise, if we estimate the daily energy budget of a person, we can assume that the energy used by their electric toothbrush makes a negligible contribution compared with the energy that goes into making their food, driving their vehicle, heating their home, etc. If we consider the time required to get in a car and drive 100 miles, we can ignore the time required to open the door, et cetera.
This takes some bravado to get started. The dominant contribution(s) are not always obvious, and it may take some numerical comparisons to identify them. However, reducing the number of variables we need to keep track of always reduces the complexity of our mathematical problems, and can lead us toward more accurate solutions. For example, if we solve a problem first in one extreme case, then in another, we may be able to identify the short- and long-time behavior, or the low- and high-energy behavior, and therefore know what we should be looking for when we undertake the full-blown analytical solution. Treating the dominant parts leads us to answers that are substantially correct, i.e. that differ from the true answer by only twenty or thirty percent, and therefore give us a good idea of what's going on.
How long will it take to melt a frozen water bottle by shaking it?
To make this estimate, we have to identify a mechanism for heating the ice. For simplicity, we assume that the system is at zero degrees Celsius. Under this assumption, we can't melt the ice if none of it is melted to begin with, i.e. if we shake a solid block of ice back and forth, no heat is delivered to the ice.
To make progress, let's grant the water bottle a small initial volume of liquid water, also at zero degrees Celsius. Let's assume that the water bottle has volume V (about 1 L) and that initially some small fraction f₀ of the water is liquid, i.e. f₀ ≪ 1.
When we accelerate the bottle in one direction, all the material in the bottle is given kinetic energy. When we decelerate the bottle to force it in the other direction, the ice decelerates by traveling through the liquid water, and the liquid water essentially moves out of the way to the back of the bottle. This is the manner in which heat enters the system: in decelerating through the liquid water, the ice experiences a frictional force, and the work done against it is converted into heat. Since the ice and water are at zero degrees Celsius, the heat goes toward melting the ice.
At time t, the liquid volume is given by f(t)V, and the volume of frozen ice is given by (1 − f(t))V (in this step, we explicitly ignore the density difference between ice and water). If we approximate the water bottle as a cylinder of cross-sectional area A and length L, the distance that the ice decelerates through is the liquid volume divided by the cross-sectional area: d = fV/A. Let us make the crude approximation that the ice doesn't lose any velocity as it pushes the water out of the way, i.e. the accelerating force from our arm is equal to the force of water friction on the ice. Thus the ice travels at a constant speed v until it hits the end of the bottle. Let's assume a vigorous shake: the human arm can throw a ball (baseball, cricket) at about 45 m/s, so let's take 10% of that value as the sustained steady-state velocity of the ice, v ≈ 4.5 m/s.
Then, with each shake, our ice feels the drag force F ≈ ½ρv²A (taking a drag coefficient of order one, with ρ the density of water), which acts through the distance d = fV/A. Thus, with each shake, our bottle acquires the heat ΔQ = Fd ≈ ½ρv²fV.
We can turn this into an average power by estimating the time taken for the ice to decelerate through the water. Here we have several choices. We can make a simplistic model in which each shake travels through the same distance L at the speed v. We can also make a slightly more sophisticated model in which the shaker changes direction as soon as they feel the ice hit the end of the bottle, which is more realistic. First we'll calculate the simple model.
Constant shake frequency
Here, we have it that the shake time is t_shake = L/v, and thus the average power is given by P = ΔQ/t_shake = ρv³fV/(2L).
How much ice melts with the addition of heat ΔQ? The latent heat of melting is ℓ ≈ 334 kJ/kg, so we require the heat ΔQ = ℓΔm to melt a mass Δm of ice. Taking a time derivative, we have ℓρV df/dt = P, or
df/dt = f/τ,
which yields the solution
f(t) = f₀ e^(t/τ),
where τ = 2ℓL/v³ (the densities conveniently cancel). Solving for the time at which a fraction f of the ice has melted (explained below), we find
t = τ ln(f/f₀).
This solution is asymptotic as we approach complete melting, so we have to use some reasonable cutoff, such as f = 0.99, i.e. the time to melt 99% of the ice. Plugging in the heat of melting (ℓ ≈ 334 kJ/kg), a reasonable value for L (0.5 m), and an initial melted fraction f₀ of about 1% (note that the densities and the volume V of a typical water bottle drop out of τ), we get t ≈ 1.7×10⁴ s, or about 4.5 hours. Since we shake at roughly 10 Hz (v/L ≈ 9 s⁻¹), this translates to about 160,000 shakes back and forth. Below, we plot the number of shakes required as f₀, the initial fraction of melted ice, is varied from 0.1% to 99%.
Our choice of t_shake = L/v was poor because shakes at the beginning should have a higher frequency (the ice has only a short distance of liquid to travel through) while shakes at the end should have a lower frequency (a greater distance to travel). Thus, initially, this model should underestimate the average power delivered, and as time goes on, it may overestimate it.
Scaled shake frequency
Now we calculate the model where we change shake direction as soon as the ice hits the end of the bottle. The time it takes to hit the end in this case is t_shake = d/v = fV/(Av).
Using this new estimate for t_shake, we find the average power to be constant in time: P = ΔQ/t_shake = ½ρv³A.
This yields a much simpler result, i.e. df/dt = 1/τ′ with τ′ = 2ℓV/(v³A), and thus
f(t) = f₀ + t/τ′.
Plugging in the numbers, we predict that we will require only about 2000 s, or roughly 30 min. The average shake lasts about 0.025 s, i.e. about 40 Hz, so we require roughly 80,000 shakes.
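For the constant-frequency model, the arithmetic is easy to check directly. The sketch below uses the values assumed in the text (v ≈ 4.5 m/s, L = 0.5 m, an initial melted fraction of 1%), all of which are rough guesses:

```python
import math

# Numbers for the constant-shake-frequency melting model.
latent_heat = 334e3     # J/kg, heat of melting for ice
v = 4.5                 # m/s, sustained speed of the ice
L = 0.5                 # m, length of the bottle
f0 = 0.01               # initial melted fraction (1%)

# f(t) = f0 * exp(t / tau); the densities cancel under the
# equal-density approximation, so tau depends only on l, L, v.
tau = 2 * latent_heat * L / v**3
t_99 = tau * math.log(0.99 / f0)          # time to melt 99% of the ice
shakes_1 = t_99 * v / L                   # shaking at roughly v/L per second
print(f"Model 1: {t_99 / 3600:.1f} h, ~{shakes_1:,.0f} shakes")
```

This lands near the figure quoted above, well within the generous error bars of the model.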
Are these reasonable estimates for the number of shakes needed to melt a frozen water bottle? When should we expect these models to work? Are there any cases where they should fail terribly, e.g. should they be accurate when the initial melted fraction is very small?
How long would it take for you to shake the melted ice from zero degrees Celsius up to boiling?
Often, when pressed for an explanation of the western military presence on the Arabian Peninsula, government officials will point to the "strategic interests" of the west in the economic balance and stability of Middle East oil powers, the implication being that without constant intervention in this part of the world, oil markets would become unstable and the west would suffer economically. Putting aside the risks, or lack thereof, that would exist in a world with a hands-off foreign policy toward the Arabian Peninsula, this raises the question of just how much our military presence in the Middle East has cost relative to western consumption of oil. For simplicity, we'll consider the United States.
First we estimate the number of gallons of gasoline that the U.S. consumes every year. Let's approximate the gas used in cars to be roughly half of all gasoline consumed in the United States. Next, let's estimate the number of miles driven per year by the average American. From personal experience, it seems reasonable to take 40 miles as the average distance driven per day by Americans in their commute to and from work. Counting 250 work days per year (ignoring holidays), this makes about 10,000 miles per year.
Let's multiply that by 1.5 to account for leisure and errands. That brings us to about 15,000 miles per year. Assuming that 50% of the country's population is in the active work force, this makes 150 million people driving 15,000 miles a year, or roughly 2×10¹² miles driven per year. Assuming a typical gas mileage of 30 miles per gallon, this gets us to about 7.5×10¹⁰ gallons of gas per year for driving, or roughly 1.5×10¹¹ gallons per year all told.
Over the period from 2001 to 2007, the portion of the military budget of the United States devoted to force projection in the Persian Gulf averaged on the order of $100 billion per year, which means that over that timespan there was an unseen tax on gasoline equivalent to roughly $0.70 per gallon.
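The whole estimate compresses to a few lines. The ~$100 billion/year war budget is an order-of-magnitude guess, as are the driving figures:

```python
# Back-of-the-envelope war tax per gallon of gasoline.
# All inputs are rough order-of-magnitude guesses, not official data.
commuters = 150e6                 # ~50% of the U.S. population
miles_per_year = 15_000           # 40 mi/day commute * 250 days * 1.5
mpg = 30                          # typical gas mileage
car_gallons = commuters * miles_per_year / mpg      # ~7.5e10 gallons
total_gallons = car_gallons * 2   # cars are ~half of all gasoline use

war_budget_per_year = 100e9       # ~$100 billion/year, rough guess
tax_per_gallon = war_budget_per_year / total_gallons
print(f"Hidden war tax: ${tax_per_gallon:.2f} per gallon")
```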
Taking the popular rationale for Middle East foreign policy at face value, you might expect that the domestic cost of gasoline would have fallen by some amount during this effort, or at least remained stable, but in fact the cost of gasoline rose considerably. The price of oil is the outcome of a complex interconnection of supply and demand, international trade, and the capriciousness of OPEC, but it is also a function of the stability of the region. Undoubtedly, some of the price increase was due to the massive instability visited upon the region by the sustained wars and the insurgency movements that rose up in opposition to them.
Though the connection between the wars and the pump is concealed by the shell game of how taxes are paid, common sense calculation can shine a light on the true cost and sense of military interventions.
Suppose you're flying from San Francisco (SFO) to New York (NYC). The plane (an Airbus A320) carries 189 passengers and has 6 crew members who each make $20 per hour. Assume a minimal model where the only costs of operating the flight are paying the salary of the crew and paying for fuel. Further, assume that all the engine does is fight the drag force. If each ticket costs $300, jet fuel costs $0.75 per liter, and the crew is paid for the number of complete hours they work, what percent of your ticket price goes toward the airline's profit?
Assumptions and Details
- The Airbus has a drag coefficient of 0.3 (defined relative to its frontal area).
- The density of air at cruising altitude is 0.4 kg/m³.
- The plane flies at 250 m/s from start to finish.
- The distance from NYC to SFO is 4,000 km.
- The energy efficiency of the jet engines is 30%.
- Approximate the Airbus as a cylinder of diameter 4 m.
- The energy density of the jet fuel is 43 MJ/kg.
- The density of the fuel is 0.8 kg/L.
- The crew is paid for each complete hour they work, i.e. if they work 4.5 hours they are paid for 4 hours.
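A minimal sketch of the intended calculation follows. The givens used below (drag coefficient ~0.3, cruise air density 0.4 kg/m³, 4,000 km distance, and so on) are representative assumptions, so the final percentage is only illustrative:

```python
import math

# Flight-profit estimate: engines only fight drag, costs are fuel + crew.
# All physical inputs are representative assumed values.
c_d = 0.3               # drag coefficient (frontal-area convention)
rho_air = 0.4           # kg/m^3 at cruising altitude
v = 250.0               # m/s
distance = 4.0e6        # m, SFO to NYC
efficiency = 0.3        # engine energy efficiency
diameter = 4.0          # m, fuselage approximated as a cylinder
fuel_energy = 43e6      # J/kg
fuel_density = 0.8      # kg/L
fuel_price = 0.75       # $/L

area = math.pi * (diameter / 2) ** 2
drag = 0.5 * c_d * rho_air * v**2 * area                   # N
fuel_mass = drag * distance / (efficiency * fuel_energy)   # kg
fuel_cost = fuel_mass / fuel_density * fuel_price

hours_paid = int(distance / v / 3600)     # complete hours only
crew_cost = 6 * 20 * hours_paid           # 6 crew at $20/hour
revenue = 189 * 300                       # passengers * ticket price

profit_fraction = (revenue - fuel_cost - crew_cost) / revenue
print(f"Profit: {100 * profit_fraction:.0f}% of ticket revenue")
```

Under these assumptions, fuel dominates the costs and the crew's pay is almost negligible, which is itself a useful Fermi-style observation.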
A favored technology of some environmentalists is the wind farm. Its advocates point out that it is inherently clean and carbon neutral, and that the wind is a resource we're leaving on the table. Given our current reliance on petroleum, wind seems like a slam dunk. But how does it compare with competing renewable energy sources like solar power? How do its intrinsic limitations compare with those of the Sun?
In order to have a sense of scale for our energy budget, we need to know the daily energy use per person. Let's take the European Union states as a middle-of-the-road energy-using population. Official figures state that energy use for E.U. citizens is approximately 40,000 kilowatt-hours per person per year, or about 110 kilowatt-hours per person per day. This number is fun to estimate on its own, but we'll take it for granted for the purposes of our present question.
When we think of a windmill, the basic thing that's happening is that the kinetic energy of the wind is harvested to spin the blades, which drive an electric generator. The kinetic energy in a parcel of wind of volume V and velocity v is E = ½ρVv². If we assume that the only wind that drives the propeller is the wind in the cylinder of air aligned with the blades (of area A), then energy arrives at the blades at the rate P = ½ρAv³. Thus, in a day, we have ½ρAv³ × 24 hours of energy flowing through the blades.
Now, the density of air is about 1.2 kg/m³, and the average wind speed in the European Union is about 7 m/s. Taking the blade area to be about 10⁴ m² (a blade diameter of roughly 110 m), we have roughly 50,000 kilowatt-hours per day flowing through a windmill.
Finally, windmills are not completely efficient, if only because the wind must keep some kinetic energy when it leaves the blades (otherwise the air would pile up behind them). Let's be generous and assume that windmills can harvest 50% of the energy that flows through them.
Since the windmill operates by taking energy out of the air, we can't put one windmill directly behind another or we'll quickly end up with a region of dead air. Hence, we must put a reasonable amount of space between our windmills. To estimate how far this is, look at some pictures of wind farms in the wild: the spacing between windmills typically appears to be roughly 6 times the blade diameter.
Thus, a dense packing of windmills would have one windmill for every (6 × 110 m)², or roughly 0.5 km². Being extremely generous and taking the entirety of the land mass of the European Union (about 4×10⁶ km²) to be ripe for windmill placement, this gives us about 8 million windmills, or roughly 2×10¹¹ kilowatt-hours per day. With a population of about 500 million, this comes to about 400 kilowatt-hours per person per day. Being a bit more realistic, we could probably only place windmills on about 10% of the land area of the European Union when we consider mountains, places that aren't so windy, people who don't want their land covered by windmills, etc.
Windmills are also idle more often than not, with most E.U. windmills operating at something like 25% capacity. This brings us down to about 10 kilowatt-hours per person per day, which is about 10% of the typical E.U. citizen's energy budget. Altogether, this isn't bad, but it also isn't a solution to an energy crisis.
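The wind chain, gathered in one place. All inputs are rough representative guesses (average wind ~7 m/s, blade area ~10⁴ m², spacing of 6 blade diameters, E.U. area ~4×10⁶ km², 500 million people):

```python
# Wind-power Fermi estimate for the E.U.; every input is a rough guess.
rho_air = 1.2           # kg/m^3
wind_speed = 7.0        # m/s, rough E.U. average
blade_area = 1.0e4      # m^2 (blade diameter ~110 m)
blade_diameter = 110.0  # m
harvest = 0.5           # fraction of wind energy actually captured

power_w = 0.5 * rho_air * blade_area * wind_speed**3 * harvest
kwh_per_day = power_w * 24 / 1000                 # per windmill

spacing = 6 * blade_diameter                      # m between windmills
eu_area_m2 = 4.0e12                               # ~4 million km^2
windmills = eu_area_m2 / spacing**2

eu_population = 500e6
kwh_per_person = windmills * kwh_per_day / eu_population
usable_land = 0.10
capacity_factor = 0.25
realistic = kwh_per_person * usable_land * capacity_factor
print(f"{realistic:.0f} kWh per person per day")
```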
The easiest target for increasing this number is simply to build more windmills, but that likely requires going offshore (higher wind speeds, more real estate), which means rapid deterioration due to rust, seawater, etc., and thus more energy going into producing the windmills in the first place. Therefore, wind power will likely remain a supplement to our overall energy budget.
Now let's consider solar power. The calculation is a good deal simpler than that for wind. The intensity of solar radiation at the Earth's distance from the Sun is about 1,360 W/m². Averaging over the entirety of the Earth, and accounting for the effective cross section (the Earth intercepts sunlight over its cross-sectional area but has four times that in surface area), this turns into about 340 W/m² for the average parcel of land on Earth.
Taking into account the energy that's bounced off the top of the atmosphere back into space (about 30%), and also the amount absorbed by the atmosphere or bounced off the Earth's surface (another 30% or so), this gets us to roughly 10% of the original solar intensity, or about 140 W/m².
Let's again take the E.U. as our test land mass. Top-of-the-line solar panels can reach efficiencies of about 20%, meaning that we can expect about 28 W/m², or roughly 0.7 kilowatt-hours per square meter per day. Given a population density of about 125 people per square kilometer, this figures to about 5,000 kilowatt-hours per person per day.
Adjusting down the usable land to just 5% of the land mass, we have roughly 270 kilowatt-hours per person per day, which could more than account for the average E.U. citizen's daily energy usage.
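The solar chain is short enough to verify in a few lines, again with round-number inputs (the solar constant, ~60% combined atmospheric and surface losses, 20% panels, 5% of E.U. land):

```python
# Solar-power Fermi estimate for the E.U., mirroring the steps above.
solar_constant = 1360.0                 # W/m^2 at Earth's distance
surface_avg = solar_constant / 4        # average over the whole sphere
after_losses = surface_avg * 0.4        # ~60% lost to reflection/absorption
panel_efficiency = 0.2

w_per_m2 = after_losses * panel_efficiency         # ~28 W per m^2 of panel
kwh_per_m2_day = w_per_m2 * 24 / 1000

eu_area_m2 = 4.0e12
eu_population = 500e6
usable_land = 0.05
kwh_per_person = eu_area_m2 / eu_population * usable_land * kwh_per_m2_day
print(f"{kwh_per_person:.0f} kWh per person per day")
```

Even with only one part in twenty of the land used, solar comes out more than a factor of two above the ~110 kWh daily budget, which is why it dominates the comparison with wind.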