
How should one understand the "indefinite integral" notation $\int f(x)\,dx$ in calculus?

Mathematics Asked by user9464 on December 20, 2021

In calculus, it is said that

$$
\int f(x)\, dx=F(x)\quad\text{means}\quad F'(x)=f(x)\tag{1}
$$

where $F$ is a differentiable function on some open interval $I$. But the mean value theorem implies that any differentiable function $G:I\to \mathbb{R}$ with the property $G'(x)=f(x)$ on $I$ is determined only up to a constant. Since the object on the right of the first equality of (1) is not unique, we cannot use (1) as a definition for the symbol $\int f(x)\,dx$.

Formulas for antiderivatives are usually written in the form $\displaystyle \int f(x)\,dx=F(x)+C$. For example,
$$
\int \cos x\,dx = \sin x+C,\tag{2}
$$

where $C$ is some "arbitrary" constant.

One cannot define an object with an "arbitrary" constant. It is OK to think of (2) as a set identity:
$$
\int \cos x\, dx = \{g:\mathbb{R}\to\mathbb{R}\mid g(x)=\sin x+C,\; C\in\mathbb{R}\}. \tag{3}
$$

So sometimes people say that $\int f(x)\,dx$ really means a family of functions. But interpreting it this way, one runs into the trouble of writing something like
$$
\int (2x+\cos x)\, dx = \int 2x\,dx+\int \cos x\, dx = \{x^2+\sin x+C:C\in\mathbb{R}\},\tag{4}
$$

where one is basically doing the addition of two sets in the middle, which is not defined.

So how should one understand the "indefinite integral" notation $\int f(x)\,dx$? In particular, what kind of mathematical object is it?

5 Answers

Long story short: indefinite integrals should be thought of as families of functions, and the addition of such sets is indeed well-defined; you just define the addition as you have done.


First we fix some notation:

  • Let $U\subset \Bbb{R}$ be a non-empty open set (think of an open interval if you wish).
  • Let $D_{U,\Bbb{R}}$ be the set of all differentiable functions $F:U \to \Bbb{R}$.
  • Let $E_{U,\Bbb{R}}$ be the set of all "exact functions"; i.e. the set of $f:U \to \Bbb{R}$ such that there exists $F\in D_{U,\Bbb{R}}$ with $F' = f$ (said differently, $E_{U,\Bbb{R}}$ is the image of $D_{U,\Bbb{R}}$ under the derivative mapping $F\mapsto F'$).
  • Finally, let $Z_{U,\Bbb{R}}$ (Z for zero lol) be the set of all $F\in D_{U,\Bbb{R}}$ such that $F'=0$ (i.e. for every $x\in U$, $F'(x)=0$).

For the sake of simplicity, since I'm going to keep the open set $U$ fixed for most of this discussion, I'll just write $D,E,Z$ instead of $D_{U,\Bbb{R}},E_{U,\Bbb{R}},Z_{U,\Bbb{R}}$. Now, notice that $D,E,Z$ are all real vector spaces, and that $Z$ is a vector subspace of $D$. So, we can consider the quotient vector space $D/Z$.

With this in mind, formally, indefinite integration/anti-differentiation is a map $E \to D/Z$. So, given a function $f\in E$, when we write $\int f(x)\, dx$, what we mean is \begin{align} \int f(x)\, dx &:= \{G \in D \mid G' = f\} \end{align} (of course the letter $x$ appearing is a "dummy variable"; it has no real significance). And suppose we know that $F\in D$ is a particular function such that $F' = f$. Then, \begin{align} \int f(x)\, dx &= \{G \in D \mid G' = f\} = \{F + g \mid g \in Z\} \end{align}


For example, take $U = \Bbb{R}$, and let $f(x) = x^2$. So, when we write $\int x^2 \, dx$, what we mean is the family of functions $\{F \mid \text{for all $x\in \Bbb{R}$, $F'(x) = x^2$}\} = \{x \mapsto \frac{x^3}{3} + C \mid C \in \Bbb{R}\}$.
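As a side note, this "one representative per class" convention is exactly what computer algebra systems implement. Here is a small sanity check of the $\int x^2\,dx$ example using Python's sympy (my own illustration, not part of the original answer; it assumes sympy is installed):

```python
import sympy as sp

x = sp.symbols('x')

# sympy's integrate returns a single representative of the equivalence
# class, with the arbitrary constant C omitted
F = sp.integrate(x**2, x)          # the representative x**3/3
assert sp.simplify(sp.diff(F, x) - x**2) == 0  # indeed F' = x^2

# any other member of the family differs from F by a constant,
# so its derivative with respect to x is the same
G = F + 42
assert sp.diff(G - F, x) == 0
```

The point is that `integrate` hands you one element of the class $\{x \mapsto \frac{x^3}{3} + C \mid C \in \Bbb{R}\}$; the "+ C" bookkeeping is left to the reader.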

Next, if we have $f(x) = 2x + \cos x$, and $U = \Bbb{R}$ again, then we have (after proving linearity) \begin{align} \int 2x + \cos x \, dx &= \int 2x\, dx + \int \cos x \, dx \\ &= \{x \mapsto x^2 + C \mid C \in \Bbb{R}\} + \{x \mapsto \sin x + C \mid C\in \Bbb{R}\} \\ &:= \{x \mapsto x^2 + \sin x + C \mid C \in \Bbb{R}\} \end{align} The last equal sign is by definition of how addition is defined in the quotient space $D/Z$. We can rewrite this chain of equalities using the $[\cdot]$ notation for equivalence classes as follows: \begin{align} \int 2x + \cos x \, dx &= \int 2x \, dx + \int \cos x \, dx \\ &= [x\mapsto x^2] + [x \mapsto \sin x] \\ &= [x \mapsto x^2 + \sin x] \end{align}
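The linearity step can likewise be checked symbolically. A hedged sketch with sympy (my addition; since each `integrate` call returns the one representative that vanishes at $0$, the two sides agree on the nose, not merely modulo a constant):

```python
import sympy as sp

x = sp.symbols('x')

# integrate the sum directly, and term by term, as in the
# linearity step above
lhs = sp.integrate(2*x + sp.cos(x), x)
rhs = sp.integrate(2*x, x) + sp.integrate(sp.cos(x), x)

# both equal the representative x^2 + sin(x)
assert sp.simplify(lhs - rhs) == 0
# and that representative really is an antiderivative of 2x + cos(x)
assert sp.simplify(sp.diff(lhs, x) - (2*x + sp.cos(x))) == 0
```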

So, really, for any indefinite integral calculation you have to do, if you want to be super precise, just put $[\cdot]$ around everything to indicate that you're considering equivalence classes of functions; with this, all the equal signs appearing above are actual equalities of elements in the quotient space $D/Z$.


Just in case you're not comfortable with quotient spaces, here's a brief review: we can define a relation (which you can easily verify is an equivalence relation) $\sim_Z$ on $D$ by saying $F_1 \sim_Z F_2$ if and only if $F_1 - F_2 \in Z$ (in words, two functions are related if and only if the derivative of their difference is $0$; equivalently, $F_1\sim_Z F_2$ if and only if they have the same derivative, $F_1' = F_2'$). Then, we define $D/Z$ to be the set of all equivalence classes.

This means an element of $D/Z$ looks like $\{F + f \mid f \in Z\}$, where $F\in D$. Typically, we use the notation $[F]_Z$ or simply $[F]$ to denote the equivalence class containing $F$; i.e. $[F]= \{F + f \mid f \in Z\}$. Now, it is a standard linear algebra construction to see that a quotient of vector spaces can naturally be given a vector space structure, where we define addition and scalar multiplication by: for all $c\in \Bbb{R}$ and all $[F],[G] \in D/Z$, \begin{align} c\cdot[F] +[G] := [c\cdot F + G] \end{align} This is a well-defined operation. So, this is a way to define the addition of two sets, and the multiplication of a set by a scalar, all in the context of quotient vector spaces.
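If it helps, the quotient construction can be made concrete in code. Below is a minimal sketch (my own illustration, using sympy, and assuming the domain is an interval containing $0$) that represents a class $[F]$ by its unique member vanishing at $0$, which makes the set addition $[F]+[G]:=[F+G]$ literal:

```python
import sympy as sp

x = sp.symbols('x')

class AntiderivativeClass:
    """An element [F] of D/Z, represented canonically by the unique
    member with F(0) = 0. (Valid when the domain is an interval
    containing 0 -- an assumption of this sketch.)"""
    def __init__(self, expr):
        # normalize: subtract the constant so the representative vanishes at 0
        self.rep = sp.simplify(expr - expr.subs(x, 0))
    def __add__(self, other):
        # [F] + [G] := [F + G]; well defined because Z is a subspace
        return AntiderivativeClass(self.rep + other.rep)
    def __eq__(self, other):
        return sp.simplify(self.rep - other.rep) == 0

# [x^2 + 5] and [x^2 + 7] are the same element of D/Z:
assert AntiderivativeClass(x**2 + 5) == AntiderivativeClass(x**2 + 7)
# addition of classes: [x^2] + [sin x] = [x^2 + sin x]
assert (AntiderivativeClass(x**2) + AntiderivativeClass(sp.sin(x))
        == AntiderivativeClass(x**2 + sp.sin(x) + 3))
```

Here the "arbitrary constant" disappears into the choice of representative, which is precisely the content of the well-definedness claim above.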

Finally, I'm not sure how comfortable with linear algebra you are, but let me just add this in, and maybe you'll find it helpful in the future. Here's a very general construction and theorem:

Let $V,W$ be vector spaces over a field $\Bbb{F}$, and let $T:V \to W$ be a linear map. Then, this induces a well-defined map on the quotient space $\overline{T}: V/\ker(T) \to W$ by \begin{align} \overline{T}([v]) := T(v) \end{align} The first isomorphism theorem of linear algebra states that $V/\ker(T)$ is isomorphic to $\operatorname{image}(T)$, and that $\overline{T}: V/\ker(T) \to \operatorname{image}(T)$ is an isomorphism (i.e. linear and bijective, with linear inverse).

The reason I bring this up is that it relates very much to indefinite integration. For example, take $V = D_{U,\Bbb{R}}$ to be the space of all differentiable functions and $W = E_{U,\Bbb{R}}$, and consider the derivative mapping $T = \frac{d}{dx}$ going from $V$ to $W$. Now, the image of the differentiation map $\frac{d}{dx}$ is $W = E_{U,\Bbb{R}}$ by construction, and the kernel of this map is exactly $Z_{U,\Bbb{R}}$ (the set of functions whose derivative is $0$). So, by the general considerations above, this induces an isomorphism $\overline{T}:V/\ker(T) \to W$ (i.e. by plugging everything in, we have an isomorphism $\overline{\frac{d}{dx}}: D_{U,\Bbb{R}}/Z_{U,\Bbb{R}} \to E_{U,\Bbb{R}}$), and indefinite integration is defined as the inverse of this map: \begin{align} \int := \left(\overline{\frac{d}{dx}}\right)^{-1}: E_{U,\Bbb{R}} \to D_{U,\Bbb{R}}/Z_{U,\Bbb{R}} \end{align}

Answered by peek-a-boo on December 20, 2021

Let me also suggest the following view: $$\int f(x)\,dx = \{F: F'(x) = f(x),\ x \in A \} = \{F(x) +C\}$$ so an indefinite integral is (1) a set of functions whose derivative equals the integrand on (2) some set $A$. Many sources omit these details, possibly because they are a kind of tacit mathematical agreement.

So when, for example, we write $\int 2\cdot x\,dx = 2\cdot\int x\,dx$, we know that we have an equality between sets: $\{F: F'(x) = 2x\} = 2\cdot \{F: F'(x) = x \} = \{ x^2 +C\}$.

The second detail, the set on which the derivative equality holds, is more subtle. When we write $\int \frac{1}{x}\,dx = \ln x +C$, the integrand, in the case of real numbers, is defined on a wider set than the right-hand side, and again we silently understand the set on which the derivative equality makes sense. We can even write $\int \operatorname{sgn}(x)\, dx = |x| +C$, though the integrand is not even a continuous function and the right-hand side has no derivative at $0$, silently understanding the set $\mathbb{R} \setminus \{0\}$ for the derivative equality.
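For what it's worth, the $\operatorname{sgn}$ example can be sanity-checked symbolically. A small sketch with Python's sympy (my addition; note that sympy works formally and does not itself track the excluded point $x=0$):

```python
import sympy as sp

# declare x real so sympy can differentiate |x| symbolically
x = sp.symbols('x', real=True)

# away from 0, the derivative of |x| is sgn(x); the point x = 0,
# where |x| is not differentiable, is silently excluded
assert sp.simplify(sp.diff(sp.Abs(x), x) - sp.sign(x)) == 0
```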

Answered by zkutch on December 20, 2021

I cannot comment yet, not enough reputation.

Indefinite means there are no upper and lower bounds. Usually when no bound is indicated on the integral, it is integrated out to infinity, for example over the whole of $\mathbb R$.

When you write antiderivatives, it is usually done by adding the constant, and no bounds are given or mentioned at all. It is a bit confusing, yes. Sometimes initial conditions or other conditions are also given, and you can later determine those constants.

So, if the word indefinite is used, integration is done out to infinity and you do not need to add arbitrary constants.

If the word definite (not indefinite) is used and the interval of integration is left out, then add the arbitrary constants.

Answered by Mikael Helin on December 20, 2021

Unless the equal sign "=" in the first identity of (1) is not considered [the same] as the equal sign in "3+5=8" ...

This is precisely what is done.

When you move on to studying measure theory and consider $L^p$ spaces, two functions are considered "equal" if they only differ on a "small" set of points (where "small" has a precise measure-theoretic definition). Mathematicians are not computers, and know how to use the context of a statement to understand what version of equals is being used.

In the world of computing anti-derivatives, "=" means "differ by a constant", or more generally, "differ only by a constant on each connected component of their domains".

You can get into problems when you forget which version of "=" is intended, and think "=" means more than it does. (There are a few math brain-teasers out there based on that.) I think of it as the same problem as if you went into the teacher's lounge and asked for "the calculus teacher", as you were expecting Professor Liang, who is 6'4" tall and you wanted help getting something off a high shelf, but you didn't realize that Professor Smith, who is 4'11", also teaches calculus, and that's who shows up. You thought that specifying "calculus teacher" carried with it Prof. Liang's height, but that's not the case.

Answered by JonathanZ supports MonicaC on December 20, 2021

I think the problem is not only with antiderivatives; more generally, it has to do with the abuse of notation for multivalued functions.

Take for instance the complex logarithm $\ln(z)=\overbrace{\ln(r)+i\theta}^{\operatorname{Ln}(z)}+i2k\pi$.

You have to understand $$\ln(z_1z_2)\color{red}=\ln(z_1)+\ln(z_2)$$

As $$\exists (k_1,k_2,k_3)\in\mathbb Z^3\mid \operatorname{Ln}(z_1z_2)+i2k_3\pi=\operatorname{Ln}(z_1)+i2k_1\pi+\operatorname{Ln}(z_2)+i2k_2\pi$$

In the same way, the expression $$\int (f+g)\color{red}=\int f+\int g$$

Should be seen as

$$\exists (C_1,C_2,C_3)\in\mathbb R^3\mid H(x)+C_3=F(x)+C_1+G(x)+C_2$$ where $F$, $G$, $H$ denote antiderivatives of $f$, $g$, and $f+g$ respectively.

In all these instances, you can simply regroup all the constant terms on the RHS and write:

$$\int (f+g)\color{red}=F(x)+G(x)+C$$

It is the red equal sign $\color{red}=$ that is overloaded from the normal equal sign $=$; we give it extra properties (equality modulo a constant) when the context concerns multivalued functions, that's all.

Answered by zwim on December 20, 2021
