The Fibonacci series is a sequence of numbers in which each term is the sum of the previous two terms. In this wiki, we will explore various ways to compute Fibonacci numbers.

For those who do not remember, the sequence is defined by $F_1 = F_2 = 1$ and $F_n = F_{n-1} + F_{n-2}$ for $n > 2$.
The most natural way to calculate the $n^\text{th}$ Fibonacci number is to use the recursive definition itself.
In Python, this looks like:

```python
def fibonacci(n):
    # Base cases: F(1) = F(2) = 1
    if n <= 2:
        return 1
    # Recursive case: F(n) = F(n-1) + F(n-2)
    return fibonacci(n - 1) + fibonacci(n - 2)
```
When this algorithm is run, the program goes all the way down the recursion tree to calculate $F_{n-1}$, and then goes all the way down again to calculate $F_{n-2}$. Clearly, there is a lot of re-computation going on.
Note that to calculate $F_n$, we have to first calculate $F_{n-1}$; to calculate $F_{n-1}$, we have to calculate $F_{n-2}$; and so on, all the way down to $F_2$ and $F_1$. So there are essentially $n$ levels of recursion, and the stack space complexity is hence $O(n)$.
Let $T(n)$ be the time required to compute the $n^\text{th}$ Fibonacci number. Clearly, $T(n) = T(n-1) + T(n-2) + O(1)$, which is essentially the Fibonacci recurrence itself! Hence, calculating the $n^\text{th}$ Fibonacci number takes time of the order $F_n$, or simply $O(\varphi^n)$, where $\varphi = \frac{1 + \sqrt{5}}{2} \approx 1.618$ is the golden ratio.
Is there a way to avoid recomputing the previous Fibonacci numbers over and over, and to not overload the recursion stack as in the recursive algorithm? One way to do this is to use a bottom-up approach, i.e., to compute the Fibonacci numbers starting from $F_1$ and $F_2$ and working all the way up to $F_{n-1}$ and $F_n$, keeping only the last two values at each step.
In Python:

```python
def fibonacci(n):
    a, b = 1, 1                # a = F(1), b = F(2)
    for _ in range(n - 2):
        a, b = b, a + b        # slide the window forward one step
    return b
```
Clearly, we are only storing at most two numbers at a time, so we have $O(1)$ (constant) space complexity.
The time complexity is $O(n)$, since the loop runs $n$ times.
Consider the Fibonacci sequence, defined as follows:
Fibonacci(1) = 1
Fibonacci(2) = 1
Fibonacci(n) = Fibonacci(n - 2) + Fibonacci(n - 1)
The first two Fibonacci numbers are 1, 1. The following elements are computed by adding the prior two.
The first 6 Fibonacci numbers are: 1, 1, 2, 3, 5, 8.
Let F be the Fibonacci number. What are the last 3 digits of F?
Note that the above problem is going to be very expensive with recursion.
From this point onwards, we will use properties of matrices to derive fast ways of computing Fibonacci numbers.
We will use the following identity:

$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^n = \begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}.$$

This theorem is trivial, but nevertheless, we will prove it by induction.

The statement is obviously true for $n = 1$, since

$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^1 = \begin{pmatrix} F_2 & F_1 \\ F_1 & F_0 \end{pmatrix},$$

with the convention $F_0 = 0$. Assume that it also holds for some $n = k$:

$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^k = \begin{pmatrix} F_{k+1} & F_k \\ F_k & F_{k-1} \end{pmatrix}.$$

Now, multiplying both sides by $\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$, we have

$$\begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{k+1} = \begin{pmatrix} F_{k+1} + F_k & F_{k+1} \\ F_k + F_{k-1} & F_k \end{pmatrix} = \begin{pmatrix} F_{k+2} & F_{k+1} \\ F_{k+1} & F_k \end{pmatrix},$$

where the last step follows from the definition $F_{k+2} = F_{k+1} + F_k$. Hence, the identity holds for any $n \geq 1$.
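As a quick sanity check, the matrix identity above can be verified numerically with a small matrix-multiplication routine (a sketch in Python; the helper name `mat_mul` is illustrative):

```python
def mat_mul(A, B):
    # Multiply two 2x2 matrices given as nested lists.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

Q = [[1, 1], [1, 0]]

# Precompute F(0), F(1), ..., F(11) by the recurrence.
fib = [0, 1, 1]
for _ in range(3, 12):
    fib.append(fib[-1] + fib[-2])

# Q^n should equal [[F(n+1), F(n)], [F(n), F(n-1)]] for each n.
P = Q
for n in range(1, 10):
    assert P == [[fib[n + 1], fib[n]], [fib[n], fib[n - 1]]]
    P = mat_mul(P, Q)
```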
On its own, repeated matrix multiplication is no smarter than the dynamic programming approach; the algorithm only becomes fast when implemented with exponentiation by squaring. To do this, we need to realize that

$$A^n = \begin{cases} \left(A^{n/2}\right)^2 & \text{if } n \text{ is even} \\[4pt] A\left(A^{(n-1)/2}\right)^2 & \text{if } n \text{ is odd.} \end{cases}$$
Implemented that way, it takes $O(\log n)$ time to calculate $F_n$.
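Exponentiation by squaring can be sketched in Python as follows (function names are my own; $F_n$ is read off as the top-right entry of the matrix power):

```python
def mat_mul(A, B):
    # Multiply two 2x2 matrices given as nested lists.
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def mat_pow(A, n):
    # Compute A^n with exponentiation by squaring: O(log n) multiplications.
    result = [[1, 0], [0, 1]]        # identity matrix
    while n > 0:
        if n % 2 == 1:               # odd exponent: peel off one factor of A
            result = mat_mul(result, A)
        A = mat_mul(A, A)            # square the base
        n //= 2
    return result

def fibonacci(n):
    # F(n) is the top-right entry of [[1,1],[1,0]]^n.
    return mat_pow([[1, 1], [1, 0]], n)[0][1]
```

For example, `fibonacci(10)` returns 55.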
The following is a direct consequence of the matrix exponentiation identity that enables us to do the same thing with fewer computations.
Squaring both sides of the identity,

$$\begin{pmatrix} F_{n+1} & F_n \\ F_n & F_{n-1} \end{pmatrix}^2 = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}^{2n} = \begin{pmatrix} F_{2n+1} & F_{2n} \\ F_{2n} & F_{2n-1} \end{pmatrix}.$$

Hence, we have

$$F_{2n} = F_n\left(F_{n+1} + F_{n-1}\right), \qquad F_{2n+1} = F_{n+1}^2 + F_n^2.$$

The first identity can be simplified a bit: since $F_{n-1} = F_{n+1} - F_n$,

$$F_{2n} = F_n\left(2F_{n+1} - F_n\right).$$
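These two identities yield the fast doubling method: from the pair $(F_n, F_{n+1})$ we can jump directly to $(F_{2n}, F_{2n+1})$. A sketch in Python (function names are my own):

```python
def fib_pair(n):
    # Return (F(n), F(n+1)) using the fast doubling identities:
    #   F(2m)   = F(m) * (2*F(m+1) - F(m))
    #   F(2m+1) = F(m+1)^2 + F(m)^2
    if n == 0:
        return (0, 1)                # (F(0), F(1))
    a, b = fib_pair(n // 2)          # a = F(n//2), b = F(n//2 + 1)
    c = a * (2 * b - a)              # F(2 * (n//2))
    d = a * a + b * b                # F(2 * (n//2) + 1)
    if n % 2 == 0:
        return (c, d)
    return (d, c + d)                # shift by one for odd n

def fibonacci(n):
    return fib_pair(n)[0]
```

Only $O(\log n)$ recursive calls are made, each doing a constant number of multiplications, versus the eight multiplications per step of a naive 2x2 matrix squaring.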
We could extrapolate the above idea to linear recurrences of the form

$$a_n = c_1 a_{n-1} + c_2 a_{n-2} + \cdots + c_k a_{n-k}$$

by using the following matrix form:

$$\begin{pmatrix} a_n \\ a_{n-1} \\ \vdots \\ a_{n-k+1} \end{pmatrix} = \begin{pmatrix} c_1 & c_2 & \cdots & c_{k-1} & c_k \\ 1 & 0 & \cdots & 0 & 0 \\ 0 & 1 & \cdots & 0 & 0 \\ \vdots & & \ddots & & \vdots \\ 0 & 0 & \cdots & 1 & 0 \end{pmatrix} \begin{pmatrix} a_{n-1} \\ a_{n-2} \\ \vdots \\ a_{n-k} \end{pmatrix}.$$

Raising this $k \times k$ matrix to a power with exponentiation by squaring computes $a_n$ in $O(k^3 \log n)$ time.
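As a sketch (the interface and helper names are my own, assuming 1-indexed initial terms $a_1, \dots, a_k$), the same squaring trick applied to the $k \times k$ matrix:

```python
def mat_mul(A, B):
    # Multiply two square matrices given as nested lists.
    k = len(A)
    return [[sum(A[i][t] * B[t][j] for t in range(k)) for j in range(k)]
            for i in range(k)]

def mat_pow(A, n):
    # A^n by exponentiation by squaring.
    k = len(A)
    result = [[int(i == j) for j in range(k)] for i in range(k)]  # identity
    while n > 0:
        if n % 2 == 1:
            result = mat_mul(result, A)
        A = mat_mul(A, A)
        n //= 2
    return result

def linear_recurrence(coeffs, initial, n):
    # a_n = coeffs[0]*a_{n-1} + ... + coeffs[k-1]*a_{n-k},
    # with initial = [a_1, ..., a_k]. Runs in O(k^3 log n) time.
    k = len(coeffs)
    if n <= k:
        return initial[n - 1]
    # First row of the matrix holds the coefficients;
    # the subdiagonal of 1s shifts the state vector down.
    C = [coeffs] + [[int(j == i) for j in range(k)] for i in range(k - 1)]
    P = mat_pow(C, n - k)
    state = initial[::-1]            # state vector (a_k, a_{k-1}, ..., a_1)
    return sum(P[0][j] * state[j] for j in range(k))
```

With `coeffs = [1, 1]` and `initial = [1, 1]`, this reduces to the Fibonacci case; `coeffs = [1, 1, 1]` gives the tribonacci numbers, and so on.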