
About 0!

I know that \(0! = 1\). If you don't, here, I show you.

We use \(!\) (or factorial) like this:

\(x! = x \times (x-1) \times (x-2) \times \dots \times 3 \times 2 \times 1\)

Here's an example:

\(5! = 5 \times 4 \times 3 \times 2 \times 1 = 120\)

Here's another way to compute a factorial, using the next one up. For example, \(4!\):

\(4! = \)\(\frac{5!}{5}\)\( = \frac{5 \times 4 \times 3 \times 2 \times 1}{5}\)\( = 24\)

Okay, now why is \(0! = 1\)? Let's work down from \(4!\) to \(0!\).

(we already did \(4!\))

\(3! = \frac{4!}{4} = \frac{4 \times 3 \times 2 \times 1}{4} = 6\)

\(2! = \frac{3!}{3} = \frac{3 \times 2 \times 1}{3} = 2\)

\(1! = \frac{2!}{2} = \frac{2 \times 1}{2} = 1\)

Okay, now... \(0!\)...

\(0! = \frac {1!}{1} = \frac {1}{1} = 1\)
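
This downward recurrence can be sketched in a few lines of Python (a minimal illustration; the function name is my own):

```python
def factorial(n):
    """n! computed as a product; the empty product gives 0! = 1."""
    result = 1
    for k in range(2, n + 1):
        result *= k
    return result

# Walk the recurrence downward: n! = (n+1)! / (n+1)
for n in range(4, -1, -1):
    assert factorial(n) == factorial(n + 1) // (n + 1)

print(factorial(0))  # 1
```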

Surprising, right? Okay, now you know.

What I want to ask is: why are we allowed to do this for \(0!\)? Because if we are, it seems to mean that \(0 = 1\), and that's wrong:

\(0! = 1 = 1!\)

\(0! = 1!\)

\(0 = 1\)

Can anyone explain this? I keep thinking about it.

Note by Adam Zaim
3 years, 9 months ago




You must be careful: for a given function \(f\), a unique inverse \(f^{-1}\) doesn't always exist, except on a restricted domain. In this case, we may assume \(x > 0\), or alternatively that \(x \ne 1\). Guillermo Angeris · 3 years, 9 months ago


\(1!\) is the number of ways of arranging 1 item, which is 1. \(0!\) is the number of ways of arranging 0 items. Since there is only 1 way to do NOTHING (that is, to not do it), we have \(0!=1\). So by comparing \(0!\) and \(1!\), we are comparing the number of ways of arranging 0 and 1 items respectively. Just because the result is the same doesn't mean that the number of items has to be the same. So \(0\neq1\). Bruce Wayne · 3 years, 9 months ago
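
Bruce's counting argument can be checked directly with Python's itertools (a small sketch of my own, not part of the original comment):

```python
from itertools import permutations

# The number of arrangements of n distinct items is n!
for items in ([], ['a'], ['a', 'b'], ['a', 'b', 'c']):
    print(len(list(permutations(items))))
# The empty list has exactly one arrangement: the empty one.
```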


@Bruce Wayne From this, we can also infer that \(x! = y! \Rightarrow x = y\) if \(x, y \neq 0, 1\). Bruce Wayne · 3 years, 9 months ago


@Bruce Wayne But isn't not doing it also a way of arranging 1 item? That would mean \(1! = 2\). Mirza Baig · 3 years, 9 months ago


Just because \(x! = y!\) does not always mean that \(x = y.\) By defining \(0! = 1,\) we have created an exception to that rule. Michael Tang · 3 years, 9 months ago


@Michael Tang Actually it's not an exception but a rule: it is defined so as to make the whole body of factorials make sense. Like everything else in maths, this is just made up by us, and we may tweak it as we like. Mirza Baig · 3 years, 9 months ago


I totally agree with Guillermo, as you see,

If \(f(x_1)=y\) and \(f(x_2)=y\), do we say \(x_1=x_2\)? No.

For example, \(3^2=9\) and \((-3)^2=9\), but \(-3\neq3\).

As for explaining why \(0!=1\), Bruce has the exact explanation. The concept of \(n!\) is the number of ways to arrange \(n\) distinct objects. Try to imagine an empty space with no objects: how many arrangements are there? The image in your mind, which is AN EMPTY SPACE, is the ONLY arrangement for 0 objects. Thus, \(0!=1\). Christopher Boo · 3 years, 9 months ago
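
The non-injectivity point can be demonstrated in a couple of lines of Python (my own sketch):

```python
import math

# f(x1) == f(x2) does not force x1 == x2 when f is not injective:
assert (-3) ** 2 == 3 ** 2 and -3 != 3

# The factorial behaves the same way at 0 and 1:
assert math.factorial(0) == math.factorial(1) and 0 != 1
```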


The factorial function is a special case of what is called the Gamma function \[ \Gamma(x) \; = \; \int_0^\infty t^{x-1}\,e^{-t}\,dt \qquad x > 0 \] (Actually, the Gamma function can be defined for all complex numbers except \(0,-1,-2,-3,\ldots\), but that is more complicated). Integration by parts shows that \[ \Gamma(x+1) = x\Gamma(x) \qquad x > 0\] and \(\Gamma(1) = 1\). It is easy to see from these properties that \(\Gamma(n+1) = n!\) for any nonnegative integer \(n\) (including \(n=0\)). Mark Hennings · 3 years, 9 months ago
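
The identity \(\Gamma(n+1) = n!\) can be spot-checked for small nonnegative integers with Python's math module (a quick sketch):

```python
import math

# Gamma(n+1) agrees with n! for nonnegative integers n, including n = 0
for n in range(6):
    assert math.gamma(n + 1) == math.factorial(n)

print(math.gamma(1))  # Gamma(1) = 0! = 1.0
```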


@Mark Hennings While this is true, the sticking point is that the gamma function was motivated by the definition of the factorial function on the positive integers, so to then define the factorial in terms of the gamma is pedagogically and historically inverted, even though it is perfectly valid from a mathematical standpoint. So, in order to motivate the value of \( 0! \) using the gamma function, we would need to discuss concepts like continuity, limits, and integrability, i.e., most of the fundamentals of calculus on the real line, much in the same way that to motivate the value of \( \Gamma(z) \) for complex \( z \), we'd need to talk about analytic continuation.

What the usual combinatorial and elementary definitions of factorial have going for them is that they can more intuitively justify \( 0! = 1 \), even though such reasoning may not be rigorously derived from established axioms. Hero P. · 3 years, 9 months ago


@Hero P. Firstly, we do not need to use analytic continuation to study the Gamma function for \(\mathfrak{R}z > 0\), and that is all it takes to handle \(n!\) for \(n \ge 0\).

As to your main point, yes and no. There are times where it is both necessary and useful to invert the historical development of a subject - just because something was done first does not mean it was done better that way.

For example, the "definition" of \(\pi\) as the ratio between the circumference and diameter of a circle is a bad definition of \(\pi\), since it can only be used to calculate \(\pi\) by a process of measurement, which is inherently inaccurate. The "definitions" of the trigonometric functions in terms of the sides of a right-angled triangle are similarly flawed. At some stage, you have to change your point of view and define the trigonometric functions in terms of their Maclaurin series; \(\pi\) then appears in terms of the periodicity of the trig functions, which can be established from the series definitions. You can then go on to prove that the circumference/diameter stuff, or the triangle ratio results, are all consequences of these definitions.

Similarly, at some stage you have to realise that the factorial function is only a portion of the truth and that, seen in the light of the Gamma function, the identity \(0!=1\) is not in the least mysterious, and does not have to be explained by linguistic sleight of hand of the "there is just one way to do nothing" variety, or by "believe me, it works, and it makes the formulae for the binomial coefficients neat" arguments. If you choose to make the argument that "the value of \(0!\) is chosen so that the identity \(n! \,=\, n \times (n-1)!\) is valid for \(n=1\)", then you are doing a little bit of analytic continuation yourself!

It is important pedagogically to offer both, and complement these utilitarian arguments with the observation (if only as a passing comment) that there is a better reason out there. Mark Hennings · 3 years, 9 months ago


By the way, it is defined to be so. The basic idea of combinatorics is reducing a large list to a recognisable pattern which doesn't have to be repeated time and again. So, when we observe something like this, we have to define it in such a way that it agrees with our recognised pattern. There is one way to take a single object (\(1!\)). There is one way not to take that object, which is also a possibility (\(0!\)). So, even though the numbers of ways are the same, it doesn't mean that the ways are the same! Varun Ramaprasad · 3 years, 9 months ago


\(0!\) is a convention; it is just like how you cannot define a point, or define infinity, in classical mathematics. It is an assumption that \(0! = 1\). And again, the factorial function is not one-to-one, as seen in the case of \(0!\) and \(1!\). So while proving that both equal 1, you would have to take the inverse, which doesn't always exist, and \(x! = y!\) doesn't imply \(x = y\). Aritro Aich · 3 years, 9 months ago


