Suppose you're given the functional equation $f(x)e^{f(x)} = x$ and asked to solve it. At this point, it seems pretty obvious that this won't have an analytical solution. This is the step where most people decide it's time to turn to a series solution for help. Its inverse function, $g(y) = ye^y$, has a very simple Taylor series, which you can invert to find a Taylor series for $f$, but that series has a very small radius of convergence ($1/e$). You deserve better than that. We all deserve better than that.
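Here's a quick numerical sketch of that claim (assuming the functional equation $f(x)e^{f(x)} = x$, so the inverse is $g(y) = ye^y$; the function names are illustrative). Lagrange inversion of the inverse's series gives $f(x) = \sum_{n \ge 1} \frac{(-n)^{n-1}}{n!} x^n$, and the ratio test exposes the tiny radius of convergence:

```python
import math

# Assuming the functional equation f(x) * e^f(x) = x, the inverse g(y) = y*e^y
# has the Taylor series sum_{n>=0} y^(n+1)/n!, and inverting it (Lagrange
# inversion) gives f(x) = sum_{n>=1} (-n)^(n-1)/n! * x^n.
def f_series(x, terms=40):
    return sum((-n) ** (n - 1) / math.factorial(n) * x ** n
               for n in range(1, terms + 1))

# Ratio test: |a_n / a_(n+1)| = (n/(n+1))^(n-1), which tends to 1/e,
# so the series only converges for |x| < 1/e ~ 0.3679.
ratios = [(n / (n + 1)) ** (n - 1) for n in (10, 100, 1000)]
print(ratios, 1 / math.e)

# Inside that radius, the truncated series really does solve the equation:
f = f_series(0.2)
print(f * math.exp(f))  # ~0.2
```

Past the radius (say $x = 0.5$) the partial sums visibly diverge, which is why the series approach runs out of steam so quickly.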
What I'm about to do may seem very hand-wavy, but rest assured there is a rigorous theory behind it; covering too much of it would just lead me off on tangents and make this derivation too painful to read.
Let's start with $f(x)e^{f(x)} = x$. The approach here is to approximate $f$ at large values of $x$ and work our way down to smaller and smaller orders. This is exactly the opposite of the approach taken by Taylor series, where you start with a local approximation at a given point and move outwards. It'll become clear in a minute how we're going to do this. For now, take the natural logarithm of both sides:

$$f(x) + \ln f(x) = \ln x$$
This is where some theory comes in: for any term, we can assign a magnitude to it. Qualitatively, it denotes a function's growth at extremely large values. A function such as $x^2 + x$ has a magnitude $x^2$, and this can be written as $x^2 + x \asymp x^2$. This holds because $x^2$ will outgrow $x$ enough to make the latter term insignificant. Rigorously, you can say that two functions $f$ and $g$ have equal magnitudes if

$$\lim_{x \to \infty} \frac{f(x)}{g(x)} = c$$
for some nonzero constant $c$. Magnitudes are totally ordered, so we can determine whether a given term's magnitude is less than or greater than any other term's. We have special symbols to represent this: $\prec$ and $\succ$. As an example, $x^{100} \prec e^x$. It's a good exercise to check that for yourself. Notice that this is NOT the same as $<$, as both $x^{100} < e^x$ and $x^{100} > e^x$ are true for certain values of $x$, while $x^{100} \prec e^x$ is true no matter what.
Magnitudes tell us about the growth of a term at high values of $x$. Thus, $\ln x \prec x$ holds because polynomial terms will always outgrow logarithmic terms, even if it may take a while to do so (compare $\ln x$ with $x^{0.01}$: the latter will eventually reach a point where it overcomes the former in value). Let's get back on topic. We know that $\ln f(x) \prec f(x)$, so we can write

$$f(x) \approx \ln x$$
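To make the "it may take a while" point concrete, here's a small numeric check (the crossover value below is computed, not quoted from anywhere):

```python
import math

# ln(x) stays ahead of x**0.01 across most of the floating-point range...
for x in (1e3, 1e100, 1e200):
    assert math.log(x) > x ** 0.01
# ...but the power term does win eventually:
assert math.log(1e300) < (1e300) ** 0.01

# Locating the crossover: with x = e^t, ln(x) = x**0.01 becomes t = e^(0.01*t),
# i.e. t = 100*ln(t). Fixed-point iteration finds the larger root:
t = 600.0
for _ in range(100):
    t = 100 * math.log(t)
print(t)  # the crossover sits near x = e^t, i.e. around x ~ 10^281
```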
According to the exposition above, $f(x)$ will outgrow $\ln f(x)$, so at large values of $x$ the latter term may be ignored. However, we want to go deeper. In order to sharpen our asymptotic approximation, we introduce a new term $g(x)$ with a lower magnitude than $\ln x$, writing $f(x) = \ln x + g(x)$. This works because $\ln x$ is already the highest-magnitude term in $f(x)$, as told by the functional equation. To find $g(x)$, let's plug $f(x) = \ln x + g(x)$ back into the functional equation.
Taking the natural log of both sides,

$$\ln x + g(x) + \ln(\ln x + g(x)) = \ln x \implies g(x) = -\ln(\ln x + g(x)) \approx -\ln \ln x$$
The second step holds because $\ln x$ outgrows $g(x)$ (remember $g(x) \prec \ln x$). Putting this back into $f(x)$ gives

$$f(x) \approx \ln x - \ln \ln x$$
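To see how good $\ln x - \ln \ln x$ already is, we can solve the (assumed) equation $fe^f = x$ numerically and compare; `solve_f` is just an illustrative helper running Newton's method on the log form of the equation:

```python
import math

# Newton's method on f + ln(f) = ln(x), the log of the assumed functional
# equation f * e^f = x. Starting from f = ln(x) converges quickly.
def solve_f(x):
    f = math.log(x)
    for _ in range(50):
        f -= (f + math.log(f) - math.log(x)) / (1 + 1 / f)
    return f

for x in (1e2, 1e5, 1e10):
    exact = solve_f(x)
    one_term = math.log(x)
    two_term = math.log(x) - math.log(math.log(x))
    print(x, exact, abs(exact - one_term), abs(exact - two_term))
```

The two-term error is far smaller than the one-term error at every tested $x$.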
Again, to improve our approximation we need to find the next correction $h(x)$, writing $f(x) = \ln x - \ln \ln x + h(x)$ with $h(x) \prec \ln \ln x$. Plugging this back into the functional equation leads to:

$$h(x) = \ln \ln x - \ln(\ln x - \ln \ln x + h(x)) = -\ln\left(1 - \frac{\ln \ln x - h(x)}{\ln x}\right) \approx \frac{\ln \ln x}{\ln x}$$

where the last step uses $\ln(1 + u) \approx u$ for small $u$, and drops $h(x)$ against $\ln \ln x$.
Putting this back into $f(x)$...

$$f(x) \approx \ln x - \ln \ln x + \frac{\ln \ln x}{\ln x}$$
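Stacking the three terms up numerically shows each one buying extra accuracy (same assumed equation $fe^f = x$ and the same illustrative Newton solver as before, repeated so this sketch is self-contained):

```python
import math

# Newton's method on f + ln(f) = ln(x), the log of the assumed equation f*e^f = x.
def solve_f(x):
    f = math.log(x)
    for _ in range(50):
        f -= (f + math.log(f) - math.log(x)) / (1 + 1 / f)
    return f

x = 1e10
exact = solve_f(x)
l1, l2 = math.log(x), math.log(math.log(x))
# One-, two-, and three-term approximations; each error is smaller than the last.
for approx in (l1, l1 - l2, l1 - l2 + l2 / l1):
    print(approx, abs(exact - approx))
```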
This can be continued indefinitely, but I'll stop here. There is a very clear pattern appearing, however: each refinement nests one more logarithm. Should you decide to continue, you'll get the identity

$$f(x) = \ln x - \ln(\ln x - \ln(\ln x - \ln(\ln x - \cdots)))$$

which solves the functional equation as long as $x \ge e$.
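The identity also suggests a direct way to evaluate the solution: truncate the nesting at some depth, which is the same as iterating $f \mapsto \ln x - \ln f$ from a rough starting value (a sketch under the same assumed equation $fe^f = x$; `nested_log` is an illustrative name):

```python
import math

# Evaluate the nested-logarithm identity by fixed-point iteration:
# f <- ln(x) - ln(f), equivalently f <- ln(x / f). Near the fixed point the
# map's derivative has magnitude 1/f < 1 when x > e, so the iteration contracts.
def nested_log(x, depth=200):
    f = math.log(x)
    for _ in range(depth):
        f = math.log(x) - math.log(f)
    return f

x = 100.0
f = nested_log(x)
print(f, f * math.exp(f))  # residual check: f * e^f should come back as ~100
```

A fixed point of this map satisfies $f = \ln(x/f)$, i.e. $fe^f = x$, which is exactly the original functional equation.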
In a way, this can be seen as an "inverse" Taylor series. It's typical to approximate functions locally, within a (usually small) neighborhood of values around a known finite point of interest. This method takes the opposite approach, approximating a function globally around an "infinite" point where the function is well-behaved under the ordering I showed above.