# Forest Fires


By understanding the physics of forest fires, we can help form policies that make them more manageable. It's an important application of computational science and the conclusions affect millions of households in the U.S. alone.

In this Left to the Reader, we will build up to a basic understanding of the factors that govern the life cycle of a forest fire, including the **structure of the landscape** and the **physics of the spread**.

Here's how we'll get there:

- We'll learn about some basic features of forest fires, look at data, and observe some simple behaviors.
- We'll design a minimal model that captures the basic structural and dynamic features of wildfire.
- We'll apply a technique from statistical physics, called mean-field theory, to make mathematical predictions about the model.
- We'll employ a computer simulation to evaluate the predictions of our model and see what it has to say about public policy regarding wildfire.


## Details and mechanisms

To take a stab at modeling a growing wildfire, let's get familiar with the details and mechanisms of forest fires.

As forests age and hydration fluctuates, trees and other foliage go through cycles of dryness that make them more or less susceptible to incineration. This leads to seasonal and longer term windows where forests are susceptible, characterized by a combination of drought, peak solar radiation, low humidity, and warm ambient temperature, and can be triggered by random events like lightning strikes.

There are also dynamical factors. For example, wind speed varies in space and time, changing the pace at which fire spreads and biasing its course toward more or less flammable regions. Whether a fire turns toward a field of old pine or into a rocky ravine of sparsely planted, young spruce can be the difference between a fire's end and its beginning anew. Harder to predict, a spark or ember can catch a gust of wind, travel a very long distance, and start a new fire in a distant part of the forest.

Knowing the particular details of a given forest would be great, and might allow us to make more accurate models of that forest — but it is hard to collect timely, forest-wide information for one forest, let alone several. Before getting too far into the weeds we should ask: how important are the details?

## Looking at the data

To get an idea about the severity of forest fires, we can look at some data. One might think that forest fires are, in a sense, the balance between incendiary factors like fuel and heat, and the things that stand in the way of combustion, like hydration and rocks. If forest fires were strongly dependent on these details, we might expect to see data like the following, where there is an average behavior and the particular features of a forest make it more or less severe.

Instead, what is found are patterns like those shown below. Notice that these distributions don't behave in the ways we're accustomed to with the normal or Poisson distributions, where there is a clear average around which the population varies: the burn areas span seven orders of magnitude! Nor do they behave like the exponential distribution, where events much bigger than the average are unobservably rare.

In fact, these probability distributions show the hallmark of a remarkable property known as **scale-invariance** — let's dive into what that means.
Because the data is plotted on logarithmic axes and exhibits a linear relationship, we can write down the math straight from the graph — the \(\log\) of the frequency, \(\log p,\) and the \(\log\) of the area, \(\log A,\) are related by a line, or

\[\log p = \alpha - \beta\log A,\]

where \(\alpha\) is the \(y-\)intercept of the data and \(-\beta\) is its slope. If we exponentiate both sides, this relationship becomes \(p\sim A^{-\beta}.\) In other words, the probability of observing a forest fire of size \(A\) is proportional (the \(\sim\) symbol) to \(A^{-\beta}.\)
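To make the reading-off concrete, here's a sketch of recovering \(\beta\) as the slope of a least-squares line in log-log coordinates. The data below is invented and lies exactly on a power law; it stands in for the real burn-area measurements.

```python
import math

# Synthetic (area, frequency) pairs lying exactly on p ~ A^(-1.2)
data = [(10.0 ** k, 0.5 * (10.0 ** k) ** -1.2) for k in range(1, 6)]

# Ordinary least squares on (log A, log p)
xs = [math.log10(A) for A, _ in data]
ys = [math.log10(p) for _, p in data]
n = len(xs)
x_bar, y_bar = sum(xs) / n, sum(ys) / n
slope = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
         / sum((x - x_bar) ** 2 for x in xs))

beta = -slope  # the slope of the log-log line is -beta
```

For real data the points scatter around the line, but the recipe is the same: fit on logarithmic axes and read \(\beta\) off the slope.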


**Scale-invariance**

Looking at the probability distribution of forest fire areas, which has the form \(p(A) \sim A^{-\beta},\) we can see that the forest fires that burn out the greatest area of forest floor, \(A_\textrm{max},\) are the most rarely observed. What if we're interested in the likelihood of fires of area \(A < A_\textrm{max}?\)

Given the simple form for \(p(A)\) we can see that \[\boxed{p(A) = \displaystyle p(A_\textrm{max})\times\left(\frac{A_\textrm{max}}{A}\right)^{\beta}}.\]

When this property holds over all values of forest fire area, it is known as **scale-invariance**. Though forest fires may not be truly scale-invariant (see Grassberger 2002), the forest fire data appears to show scale-invariant behavior over a wide range of forest fire areas. The revelation is this: scale-invariance allows us to estimate the frequency of the largest (rarest and hardest to observe) forest fires in terms of the more easily measured frequencies of small ones.

Suppose that the measured frequency of forest fires that burn down \(\SI{100}{\kilo\meter\squared}\) is found to be \(\SI{0.5}{fires\ per\ year},\) and that \(\beta\) is known to be approximately \(1.2.\)

What do you expect the frequency of forest fires of burn area \(\SI{10000}{\kilo\meter\squared}\) to be?
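We can carry out this kind of extrapolation numerically. The snippet below is a sketch applying the scaling relation \(p(A) = p(A_\textrm{ref})\,(A_\textrm{ref}/A)^\beta\) to the numbers in the question; the variable names are our own.

```python
# Extrapolate fire frequency with p(A) = p_ref * (A_ref / A)**beta
beta = 1.2
A_ref, p_ref = 100.0, 0.5    # 100 km^2 fires occur 0.5 times per year
A_big = 10000.0              # target burn area in km^2

p_big = p_ref * (A_ref / A_big) ** beta
# on the order of 0.002 fires per year, i.e. about one such fire every 500 years
```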

If this widely observed behavior were dependent on the fine details of any particular forest, then it would be quite a conspiracy between the global growth of trees, landscapes, wind patterns, seed dispersal, etc. What's more likely is that fundamental features, shared by all forests, are what produce the patterns we see (in fact that's one of the implications of scale-invariance).

Scale-invariance raises a troubling question: if forest fires exhibit an indifference to details, then how can we anticipate the severity of a given forest fire? As we alluded to above, the prediction of individual forest fires is subject to all sorts of complexity and chaos, and in general is not very reliable. Thus, while a holistic understanding of any particular forest gives us more information, it is not necessarily useful in understanding the observable patterns in wildfire severity.

Happily, there are less particular, structural features that we can use to characterize forests, and use to model the course of nascent forest fires. We're going to design a general model to interrogate forests in terms of their basic structure (tree density) and lightning frequency. In the process, we'll distill away complicating factors like prevailing winds and geography, and extract general lessons about how all forest fires behave.

## Modeling parameters

For the rest of our time here, we're going to model forest fires using the model outlined below.

**Model rules**

We can model the forest as a 2D lattice of dimensions \(L\times L.\) The spatial density of trees \(\rho_\textrm{tree},\) measured in trees per unit area of forest floor, captures how concentrated the trees are. We therefore also have a density of empty sites \(\rho_\textrm{empty} = 1 - \rho_\textrm{tree}.\) At time zero the forest contains no trees, and the lattice sites are updated according to the rules below.

At each time step,

- Trees are replenished at empty lattice sites with probability \(r.\)
- Trees are lit on fire (e.g. by lightning) with probability \(f.\)
- If any of a non-burning tree's neighbors are on fire, then the tree is set on fire.

- In the setup shown above, the trees to the left and top of the flaming site would be on fire in the next time step.
- In addition, the trees in the four corners each have probability \(f\) of incinerating due to lightning strike.
- The two gray sites each have probability \(r\) of containing a tree in the next time step.

**Is that all?**

It might be surprising that a model meant to describe phenomena as rich and variegated as the range of forest fires can be stated so simply. It is important to stop here and ask whether we've missed something important.

On the surface, this model seems to contain no information about forest aging, hydration, or variations in landscape and it certainly does not incorporate prevailing winds, the succession of different plant species, or climate effects.
In fact, the model actually does incorporate the first three effects, and the contributions of the last three **do not appear necessary to explain the data**.

## Basic Model

Before we simulate a universe of forests, we can do some simple mathematics to see the basic relationships at play.

For example, how does the frequency of lightning strikes \(f\) affect the size of the average forest fire?

Let's introduce one last quantity: we'll call the number of trees that burn down in a lightning strike \(s,\) and the average number of trees that burn down in any given lightning strike \(\langle s\rangle = \sum_s p(s)\times s.\) If we tried to calculate this on the lattice, taking into account the possible correlations between neighboring sites and the exact dynamics of the lattice, we'd be swiftly overwhelmed by the complexity, at least as rookie forest fire modelers.

To facilitate this calculation, we're going to use the idea of an ensemble of forests. This is an idea from statistical physics where we imagine there to be a large number of forests over which we take an average. In the ensemble, we can forget about the behavior of any one system and instead focus on how things work on average, across all of the systems. This is far easier than the detailed work of finding the trajectory of one forest in particular.

Concretely, the average forest has tree density \(\bar{\rho}_\textrm{tree},\) and sees \(\langle s\rangle\) trees burn down per lightning strike, on average.

We can relate these quantities to each other by conservation equations. For example, because the average forest has tree density \(\bar{\rho}_\textrm{tree}\) (which we'll write as \(\rho_\textrm{tree}\) from now on) it cannot be losing or gaining any trees on average.

Therefore we can say

\[\langle\textrm{new trees planted}_t\rangle - \langle\textrm{trees burned down by lightning}_t\rangle = 0.\]

In other words, the number of trees that disappear due to fire in any time step \(t\) must be equal to the number of trees that are planted to replace them in that same time step, on average.

**The number of trees planted per time step**

This style of calculation is called **mean-field** theory, because we are assuming that we can ignore spatial variations and consider the interaction of the average values with themselves.
Since this is a strange and fascinating tool, we'll calculate the first part together.

Suppose we find the lattice in the state above and that \(r = 0.6\bar{6}.\) On the left we can see three empty sites that each have the potential to be filled by planting events. How many of these do we expect to be filled on average?

The answer is \(3\times0.6\bar{6} = 2.\) We mark the newly birthed trees on the right with asterisks to showcase their newcomer status. Looking case by case is a lot of fun, but it isn't going to get us anywhere. Instead, we can write the expected number of newly birthed trees as the density of empty sites \(\rho_\textrm{empty},\) times the probability of planting per lattice site per unit time \(r,\) times the number of lattice sites \(L^2.\)

Thus,

\[\boxed{\sum_\textrm{forests}\textrm{new trees planted}_t\times P(\textrm{forest}) = \rho_\textrm{empty}\times r\times L^2}.\]

With this under your belt, finding the number of trees that burn down per time step should be a breeze.

What is the average number of trees that are consumed by flames per unit time?

**Recall** that new fires are initiated by lightning striking a random site on the lattice; if it's occupied by a tree then it will ignite; if it's empty then nothing will happen.
The average fire rages until \(\langle s\rangle\) trees are burnt down.

With these two quantities in hand, we can equate them to find \(\langle s\rangle\) as a function of tree density and the frequency of lightning strikes.

Use the conservation equation to find \(\langle s\rangle\) in terms of \(\rho_\textrm{tree},\) \(f,\) and \(r.\)

What is \(\langle s\rangle\) when \(\rho_\textrm{tree} = 0.4,\) \(f = 0.01,\) and \(r = 0.05?\)

**Hint:** Recall that \(\rho_\textrm{empty} + \rho_\textrm{tree} + \rho_\textrm{burning} = 1.\)

**Assume** that \(\rho_\textrm{burning} \ll \{\rho_\textrm{empty}, \rho_\textrm{tree}\}.\)

After some algebra, and recalling that \(\rho_\textrm{empty} = 1 - \rho_\textrm{tree},\) we find\[\boxed{\displaystyle\langle s \rangle = \frac{r}{f}\frac{1-\rho_\textrm{tree}}{\rho_\textrm{tree}} \propto \frac{r}{f}}.\]
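As a quick numeric sanity check of the mean-field result, we can plug in the quiz's values. This is a sketch; `mean_fire_size` is our own helper, with \(r\) the tree-planting probability.

```python
def mean_fire_size(rho_tree, f, r):
    """Mean-field prediction: <s> = (r / f) * (1 - rho_tree) / rho_tree."""
    return (r / f) * (1 - rho_tree) / rho_tree

# Plugging in the quiz values rho_tree = 0.4, f = 0.01, r = 0.05
s_avg = mean_fire_size(rho_tree=0.4, f=0.01, r=0.05)  # about 7.5 trees per strike
```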

## The tradeoff

The last result suggests an intriguing relationship between the number of trees burned down in the typical forest fire, and the frequency of lightning strikes that start new fires.
Naively, we might associate a great prevalence of fire with more fearsome infernos, but our **equation of state suggests the actual relationship is more subtle**.

Given the result we just obtained, what can we say about the relationship between the rate of fire-setting events \(f\) and the size of the average forest fire \(\langle s\rangle,\) all else being equal?

To really interrogate this question, we'll have to move beyond mathematical analysis and simulate the forest environment with code.

## Simulating the forest

Luckily, we can make some minor modifications to code we saw in a recent **Problem of the Week**.

In that problem we employed the same model we use here, with the exception that we looked at the course of fires resulting from single lightning strikes upon preformed forests. Averaging over thousands of instances of the forest, we found an intriguing relationship between tree density \(\rho_\textrm{tree}\) and the duration of the forest fire \(T_\textrm{burn},\) shown to the right. Counterintuitive though it may be, there is a simple explanation:

- at low tree densities the forest is not connected enough for fires to spread far beyond their starting point
- at very high densities the forest is fully connected and fires spread at their fastest rate
- it is at the critical density \(\rho_c\approx 0.6\) where the forest first becomes dense enough to sustain large scale fires, but still has hard to access pockets, that fires burn the longest.

In statistical physics, this is known as a phase transition: we can think of the value \(\rho_c\) as separating two qualitatively different kinds of forest, sparsely connected forests that don't suffer large-scale wildfires, and thick forests that essentially burn to completion whenever lightning strikes. This result reflects a deep connection to the problems of percolation theory, the focus of a future Left to the Reader.

## Modeling the intervention strategy

Now that we've established our model, we return to our original question.

Wildfires are a problem society must face as long as people insist on living near forests — so how best do we manage them? The tradeoff we found between the size of the average fire and the frequency of lightning strikes, \(\langle s\rangle \sim 1/f\) presents an interesting question to those interested in controlling wildfires: what, if any, effect do common fire control tactics have upon the severity of wildfires?

We can simulate the impact of fire control policies by varying the value of \(f\) from low to high. To see this, realize that the impact of a lightning strike is to burn down some fraction of the forest. If we make it a policy to try to put fires out as soon as they start burning, so that they don't burn down any significant area of the forest, it is as if we've simply eliminated some fraction of the fires, so that \(f\) becomes \(f^\prime < f.\)

We'll call the high-\(f\) limit the "natural regime" and the low-\(f\) limit the "Smokey-the-Bear regime," for the famous mascot of fire control who encouraged every one of us to put out even the smallest forest fire, lest it transform into a raging inferno.

To see how changing \(f\) affects the nature of forest fires, we can use the codex below where we have programmed the model we developed above. The simulation starts off with an empty lattice of \(L^2\) sites, after which we apply the following routine for 4000 time steps:

- Each site in the lattice is tested to see if it's empty. If it's empty, then we try to plant a tree there with probability \(r.\)
- If it isn't empty, we check to see whether any of its neighbors are on fire in which case we set it on fire.
- If none of its neighbors are on fire, but it is a tree, we allow lightning to set it on fire with probability \(f.\)
- If the tree is currently burning, we set the lattice site to \(\textrm{Empty}.\)
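The routine above can be sketched directly in Python. This is a minimal, unoptimized implementation under our own naming conventions (the codex's actual code may differ), using a synchronous update so that every site is judged against the previous state of the lattice:

```python
import random

EMPTY, TREE, FIRE = 0, 1, 2

def step(grid, L, r, f, rng):
    """One synchronous update of the lattice, following the routine above."""
    new = [row[:] for row in grid]
    for i in range(L):
        for j in range(L):
            if grid[i][j] == EMPTY:
                # empty site: try to plant a tree with probability r
                if rng.random() < r:
                    new[i][j] = TREE
            elif grid[i][j] == TREE:
                # tree: catch fire from a burning neighbor,
                # otherwise get struck by lightning with probability f
                neighbors = ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1))
                if any(0 <= a < L and 0 <= b < L and grid[a][b] == FIRE
                       for a, b in neighbors):
                    new[i][j] = FIRE
                elif rng.random() < f:
                    new[i][j] = FIRE
            else:
                # burning tree: the site becomes empty
                new[i][j] = EMPTY
    return new

def simulate(L=25, r=0.02, f=0.01, steps=4000, seed=0):
    """Run the forest and record the number of trees at each time step."""
    rng = random.Random(seed)
    grid = [[EMPTY] * L for _ in range(L)]
    tree_counts = []
    for _ in range(steps):
        grid = step(grid, L, r, f, rng)
        tree_counts.append(sum(row.count(TREE) for row in grid))
    return tree_counts
```

Calling `simulate()` returns the time series of tree counts that we analyze below; lowering `f` in the call reproduces the fire-suppression experiment.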

We can be as sophisticated as we want to in measuring the dynamics of forest fire, but a quick way to gain insight is to simply keep track of how many trees are burning as a function of time (inspect for yourself to see how the number of burning trees is kept track of in the code). Our lattice is set to \(L=25\) and \(r\) is set to \(0.02\) which means that in the absence of fire, it should take \(T\approx 1/r = 50\) time steps to plant the entire forest. We initially set \(f = 0.01\) so that the probability of lightning striking a lattice point is half the probability of a tree being planted at an empty site.

How does the nature of forest fire vary as you lower the incidence of lightning strikes?

**Note:** you're encouraged to play with all the parameters (\(r,\) \(f,\) and \(L\)), but the codex has a runtime limit of \(\SI{10}{\second},\) so keep that in mind as you explore the forest.

[[codex-lttr-fire-time-series]]

For the default parameters of the system, we observe the time series shown in the top row of the figure to the right. The number of trees hovers around \(200,\) so that \(\bar{\rho}_\textrm{tree}\approx 200/L^2 = 200/625 = 0.32.\)

We can see that from moment to moment, the number of trees fluctuates, but the fluctuations are small relative to the mean number of trees. This implies that the number of trees that burn down per unit time is small relative to the number of trees. In other words, although there are many fires they are each relatively inconsequential and short lived.

As we lower \(f\) we notice several things.

- The average number of trees in the forest increases, rising to nearly \(400\) when \(f = 0.00001.\)
- The fluctuations in the number of trees increase tremendously. When \(f = 0.01,\) the fires are not big enough to meaningfully affect the state of the forest, as we noted above. As \(f\) drops, the number of fires is reduced, but the swing in tree count explodes. This suggests that the severity of forest fires is increasing. By the time \(f = 0.0001,\) the fluctuations in the tree count \(N\) are almost equal to the size of the system: \(\sqrt{\langle N^2\rangle - \langle N\rangle^2} \approx L^2.\) At this point the fires are so severe that the typical fire can essentially burn down the entire forest!

**Note:** the drops in these plots slightly *underestimate* the severity of fire, as new trees can start to grow in the fire's wake even as it continues to burn across the system.

To get a quantitative measure for the severity of fire, we should measure the average number of trees burned down in a fire, \(\langle s\rangle.\) It is not trivial to do this directly, as tracking the provenance of a single tree's burning down requires the use of recursive search algorithms over the lattice. Instead we can use the connection described below.

Note that the average number of trees that grow between lightning strikes, \(\langle s\rangle_\textrm{strike to strike},\) is equal to the rate of tree growth divided by the rate of lightning strikes.

Thus we have

\[\begin{align} \langle s\rangle_\textrm{strike to strike} &= \frac{\left(1-\rho_\textrm{tree}\right)rL^2}{\rho_\textrm{tree}fL^2} \\ &= \frac{r}{f}\frac{1-\rho_\textrm{tree}}{\rho_\textrm{tree}} \end{align}\]

However, the right side is the expression for \(\langle s\rangle\) that we found before. Thus, we can **measure the size of the average cluster** of trees burned down by a single lightning strike by **counting the number of trees that grow between strikes**.

Measuring \(\langle s\rangle\) as a function of \(f\) is important, but we'll leave this for the first of the **Discussion Challenges** at the end of the post.
In the meantime, we can already explain a puzzling and painful lesson learned by the US Forest Service over a century of trying to contain wildfires.

## The Yellowstone Effect

Up until the early 1970s, the US Forest Service maintained a policy to suppress every fire discovered in the forests it managed within 24 hours of its discovery. This zeal for fireless forests knew few bounds, perhaps peaking with the invention of so-called "smokejumpers," firefighters who would parachute out of airplanes to fight fires located in hard-to-reach regions of the forest. The policy was well-intentioned, influenced by a string of devastating fires in the late 1800s that tore through midwest territories and resulted in significant casualties.

But based on our studies here, we can see a fatal flaw in the thinking of the US Forest Service. By stopping most fires dead in their tracks, before they have a chance to burn through any significant portion of the forest, the rangers were effectively setting \(f\) below its natural value. As we saw above, this can be expected to provide **short-term stability at the expense of catastrophic, system-wide forest fires**.

In June of 1988 this came to a head when lightning strikes set off a handful of small fires in the park. The service had recently come to appreciate the utility of natural fires, and therefore decided to let them burn themselves out.

However, the forest was already tuned to \(f^\prime < f\) and was in a state ripe for inferno.
By the end of July, the fires had not burned themselves out and had destroyed \(\num{100000}\) acres of forest.
At this point, it proved too late to implement fire controls and the fires raged on, eventually consuming roughly \(\num{800000}\) acres of Yellowstone, or \(36\%\) of the park.
In executing the no-fire policy the service was myopic, and **tuned the forest to a state destined for a system-scale inferno**.

## What did we learn?

We've covered a lot of ground here; before getting into the discussions, let's take a step back and recall the path that got us here. We started out by looking at data from real forest fires, where we noticed that the probability of forest fires of different sizes is scale-invariant. This implies that the fine details of individual forests are not responsible for the global behavior. Inspired by this, we designed a simple, universal model that considered only two explicit characteristics of the forest: the rate of tree replenishment \(r\) and the rate of lightning strikes \(f.\)

We then used mean-field theory to calculate a simple prediction for the size of forest fires as a function of lightning frequency (\(\langle s\rangle \sim 1/f\)) and verified this by explicit simulation in the codex. Simple though it is, this model helped us to understand the devastating 1988 Yellowstone fires and the contribution of forest control policies to the severity of wildfire.

In the end, forest fires are a natural clearinghouse for old trees and flammable materials that works to keep the forest in a healthy state. Attempts to naively constrain them can bottle up the natural volatility of the forest, causing it to erupt violently all at once.

In the **Discussion Challenges** below we'll build on top of these lessons and see if we can take things to the next level.

## Discussion Challenges

**Discussion #1**

Modify the code we used to measure the number of trees as a function of time to measure \(\langle s\rangle.\) Remember, this is difficult to measure directly, and it's much easier to use the mathematical connection we derived in the post.

**Discussion #2**

Our simulation shows that a zero-tolerance policy increases the likelihood of large-scale fires, but it doesn't suggest that it's impossible to do fire control in a smarter way. Can you design a fire suppression policy that increases the density of trees in the forest while reducing the magnitude of the average forest fire? Can you show that it works using the simulation?

**Discussion #3**

Moving beyond measurement of the average fire size \(\langle s\rangle\) is non-trivial, as we mentioned in the post. Can you design an algorithm that measures the size of the fire that results from each individual lightning strike? How do your measured fire size counts \(N(s)\) change with \(s?\) Note that to do this with statistical significance, and to take the measurement to larger systems, you'll likely need to run your code offline.