
Exponentiation of differential operator and shift operator?

We know Taylor's theorem:

\[ f(x) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} (x-a)^k ...(1)\]

Taking \( x = a+h \), we get

\[ f(a+h) = \sum_{k=0}^{\infty} \frac{f^{(k)}(a)}{k!} h^k ...(2)\]

\[ = \sum_{k=0}^{\infty} \frac{h^k}{k!} \left( \left.\frac{d}{dx}\right|_a \right)^k f ...(3)\]

\[ = \sum_{k=0}^{\infty} \frac{1}{k!} \left( h \left.\frac{d}{dx}\right|_a \right)^k f ...(4)\]

\[ = e^{\left( h \left.\frac{d}{dx}\right|_a \right)} f ...(5)\]
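As a quick check of (5), take \( f(x) = x^2 \): only the derivatives up to second order are nonzero, so the series terminates and the exponentiated operator reproduces the shifted value exactly,

\[ e^{\left( h \left.\frac{d}{dx}\right|_a \right)} (x^2) = a^2 + h \cdot 2a + \frac{h^2}{2!} \cdot 2 = (a+h)^2 = f(a+h). \]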

This is what I just learned. Well, does exponentiating a differential operator make any sense? We always take the derivative of some function (the function acts as the operand). What is an operator without an operand? Separating the differential operator in (3) from its function seems unreasonable.

What do you think about \( e^! \), \( e^+ \), \( e^{\div} \)? I think it's garbage.

Raising a value to the power of an 'operator' seems like nonsense. Please explain this to me. How do I justify that what I have been taught is correct and not meaningless?

Also, tell me about the so-called shift operator: it can be obtained by removing \( f \) from expression (5). What does it do?

Note by Avinash Pandey
4 years, 6 months ago


Comments


This is a very short and incomplete answer to your question. Think of the exponential of an operator as another operator. Before addressing your specific question: in general, you can make sense of functions of an operator. Remember that an operator is defined by the way it acts on the underlying object.

To see how to make sense of functions of operators, let us consider something which we know well and use frequently: matrix algebra. Remember that a matrix \(A\) is a linear operator and is defined by the way it acts on a vector \(x\). Have you ever wondered what \(A^2\) means, i.e. what the square of a linear operator is? In general, have you ever wondered what the product of two matrices/linear operators means, and why we multiply matrices in the weird way that we do? Matrix multiplication was initially defined by Cayley in 1858, in order to reflect the effect of composition of linear transformations. See paragraph 3 at this link. That is, if we have the matrix/operator \(A\) that does the linear transformation \(x \to Ax\) and the matrix/operator \(B\) that does the linear transformation \(x \to Bx\), then \(BA\) denotes the operator that does the linear transformation \(x \to B(Ax)\).

In the above lines, we tried to make sense of the product of two linear operators/matrices. In general, you can use the above idea inductively to define what \(A^n\) and \(A^n x\) mean for \(n \in \mathbb{Z}^+\). Once you have this, you can make sense of matrix exponentiation, i.e. \(e^A\). It is the unique operator that acts on any vector \(x\) and outputs \(x+\displaystyle \sum_{n=1}^{\infty} \dfrac{A^n x}{n!}\). (Remember that for what we have written on the right side to make sense, you need to prove that this converges for all \(x\).)

To cut the long story short, \(e^{d/dx}\) is an operator that acts on a function \(f\) as follows: \[(e^{d/dx})f = f + \sum_{n=1}^{\infty}\dfrac1{n!} \left(\dfrac{d}{dx} \right)^n f = f + \sum_{n=1}^{\infty} \dfrac1{n!} \dfrac{d^n f(x)}{dx^n}\]

You can also make sense of the operator \(e^{!}\) as follows: \[e^{!}(m) = m + \sum_{n=1}^{\infty} \dfrac1{n!} (!)^n m\] where \((!)^n m = \underbrace{((((\cdots (m!)!)!)!) \cdots)}_{n \text{ factorials}}\). However, you will find that the series on the right converges/makes sense only for \(m = 0, 1\) and \(2\).

Marvis Narasakibma · 4 years, 6 months ago
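To make the operator identity above tangible, here is a minimal sketch in SymPy (my own library choice, not part of the comment) that applies the truncated series for \(e^{h\,d/dx}\) to a polynomial and checks that it reproduces the shift \(f(x) \to f(x+h)\):

```python
import sympy as sp

x, h = sp.symbols('x h')
f = x**3 - 2*x  # a polynomial, so the series of derivatives terminates

# Apply the truncated series for exp(h * d/dx): sum_n h^n/n! * d^n f/dx^n
shifted = sum(h**n / sp.factorial(n) * sp.diff(f, x, n) for n in range(10))

# The result agrees with f evaluated at x + h, i.e. the shift operator
print(sp.expand(shifted - f.subs(x, x + h)))  # prints 0
```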


@Marvis Narasakibma Good to see you here, Marvis! Great explanation as always (assuming familiarity with matrices). Calvin Lin Staff · 4 years, 6 months ago

