Wikipedia:Reference desk/Archives/Mathematics/2013 November 7

Mathematics desk
Welcome to the Wikipedia Mathematics Reference Desk Archives
The page you are currently viewing is an archive page. While you can leave answers for any questions shown below, please ask new questions on one of the current reference desk pages.


November 7

Single versus multiple integral signs

It seems like different authors (including Wikipedia editors) use different conventions for integrals of multivariate functions. For example, a surface integral might be written as $\iint_S \mathbf{F} \cdot d\mathbf{S}$ or simply as $\int_S \mathbf{F} \cdot d\mathbf{S}$. I personally prefer the convention with multiple integral signs in order to be clear about the dimensionality of the integral, but my physics textbook uses single integral signs for all multivariate and univariate integrals alike. Which one is considered most proper?--Jasper Deng (talk) 02:36, 7 November 2013 (UTC)

I think the single integral sign notation is more generally consistent. What will you do if you're integrating in a space of dimension 10? Or infinity, or fractional (I think this makes sense, not sure)?
It's similar to how you use multiple summation signs when explicitly listing the iterator bounds, but over a specified set of indexes you use a single sign. -- Meni Rosenfeld (talk) 05:51, 7 November 2013 (UTC)
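For instance (an illustrative contrast, not tied to any particular integral in this thread): with explicit bounds one writes
$\sum_{i=1}^{m} \sum_{j=1}^{n} a_{ij}$, but $\sum_{(i,j) \in I} a_{ij}$ with $I = \{1,\dots,m\} \times \{1,\dots,n\}$,
and the analogous integral notations are
$\int_{a}^{b} \int_{c}^{d} f(x,y)\,dy\,dx$, but $\int_{D} f\,dA$ with $D = [a,b] \times [c,d]$.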
Maybe for 50 dimensions, but for small natural numbers of dimensions, I find it clearest to use multiple signs. Before learning vector calculus, the use of single signs for multivariate integrals confused me, as I had perceived them as indefinite integrals (because the book simply listed integrals without domains, for example $\int \mathbf{E} \cdot d\mathbf{A}$ for electric flux). It seems like mathematics textbooks tend to use multiple signs while physics textbooks and articles tend to use single signs.--Jasper Deng (talk) 06:44, 7 November 2013 (UTC)
Our article Multiple integral uses multiple integral signs exclusively, including forms such as $\iint_D f(x,y)\,dx\,dy$ and $\iiint_D f(x,y,z)\,dx\,dy\,dz$.
Maybe the article should mention that a single integral sign is sometimes used instead (but, I think, only when $dx\,dy$ is replaced by something like $dA$). Duoduoduo (talk) 15:20, 7 November 2013 (UTC)
  • There is really no question of "proper" -- writers are allowed to use whatever notation they find most convenient, as long as it is well-defined and consistent. My personal view is that once the dimension goes above 3, multiple integral signs would be obnoxious, and below 3 it's a matter of personal choice. Looie496 (talk) 15:55, 7 November 2013 (UTC)
  • It's sometimes useful to analytically continue integration over d variables to the complex d-plane, so a single integration sign is better. Count Iblis (talk) 18:26, 7 November 2013 (UTC)
  • It's not just notation, unless I'm not comprehending the question. $\oint$ denotes a surface, but that surface is created by a line integral that starts and ends in the same place. I guess you could write it as two integrals to define the same surface, but I thought the circle notation had identities that were useful for solving them. Is that not related to the question? --DHeyward (talk) 10:26, 8 November 2013 (UTC)
    • Define "created by a line integral". When referring to a surface, the circle notation ($\oiint$) implies a surface that's closed, at least in the sense that a finite volume is enclosed by it. Line integrals over closed loops don't really differ from those over open curves by much, except for the fact that Green's theorem (and more generally, Stokes' theorem) is applicable to them, just like the divergence theorem is applicable to flux over a closed surface.--Jasper Deng (talk) 11:03, 8 November 2013 (UTC)
      • I always thought of $\oint$ as denoting the perimeter of the surface. So for a 2D surface such as a circle on a plane, the double integral of the curl of the field over the surface is equivalent to the integral of the vector field along the bounding contour (Stokes' theorem?). Similarly, it can be expanded to a 3D volume and a 2D surface, but the notation is \oiint, denoting a closed surface, not a volume (though they are obviously related). Gauss' law for magnetism, for example, states that the surface of a volume will have a net 0 vector field flowing through it (no magnetic monopoles). The surface encloses a volume, but it's an inherently 2-dimensional closed surface over which a vector field is being integrated. You can still do the volume integral of the divergence of the vector field. Even with identities that make the operations equivalent, it is more natural to express these as a closed line or a closed surface rather than an area or volume. Take, for example, inverse-square-law forces like gravity: it's the flux through an area that is more intuitive to understand. Whence it makes more intuitive sense to me to use a \oiint operator as opposed to a triple integral over a volume, even if the answers are the same. --DHeyward (talk) 07:51, 9 November 2013 (UTC)
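For reference, the identities being invoked, written with the circled signs for a closed curve and a closed surface:
$\oint_{\partial S} \mathbf{F} \cdot d\mathbf{r} = \iint_{S} (\nabla \times \mathbf{F}) \cdot d\mathbf{S}$ (Stokes' theorem, with $\partial S$ the boundary curve of the surface $S$),
$\oiint_{\partial V} \mathbf{F} \cdot d\mathbf{S} = \iiint_{V} (\nabla \cdot \mathbf{F}) \, dV$ (divergence theorem, with $\partial V$ the boundary surface of the volume $V$),
and Gauss' law for magnetism is the statement $\oiint_{\partial V} \mathbf{B} \cdot d\mathbf{S} = 0$.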

Fundamental theorem of calculus fallacy

For a while, I've been thinking about the following fallacy. The function f is differentiable and continuous everywhere, which should be more than enough for the fundamental theorem to be valid, but it obviously isn't in this case, as follows. Let $f(x) = x^2$ for $-1 < x < 1$ and $f(x) = \frac{x^4+1}{2}$ elsewhere. The derivative of f is $-2$ at $x = -1$ and $+2$ at $x = 1$, and f is continuous at $\pm 1$. Now here lies the problem. The following should be the same, but they are not:

$\int_{-3}^{4} f(x)\,dx = \left[\frac{x^3}{3}\right]_{-3}^{4} = \frac{64}{3} + 9 = \frac{91}{3} = \frac{455}{15}$

$\int_{-3}^{-1} x^2\,dx + \int_{-1}^{1} \frac{x^4+1}{2}\,dx + \int_{1}^{4} x^2\,dx = \frac{26}{3} + \frac{6}{5} + 21 = \frac{463}{15}$

My arithmetic may not be exactly right, but I think my point is made. These two are obviously not equal, despite f satisfying the conditions of the fundamental theorem of calculus. What's wrong with my reasoning?--Jasper Deng (talk) 22:39, 7 November 2013 (UTC)
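A quick symbolic check of the two computations (a sketch in Python/SymPy; the piecewise definition used here, $x^2$ outside $(-1,1)$ and $\frac{x^4+1}{2}$ inside, is an assumed reading that matches the constants quoted later in the thread):

    from sympy import symbols, integrate, Piecewise

    x = symbols('x')
    # Assumed reading of f: x**2 for |x| >= 1 and (x**4 + 1)/2 for -1 < x < 1
    # (Piecewise conditions are checked in order)
    f = Piecewise((x**2, x <= -1), ((x**4 + 1) / 2, x < 1), (x**2, True))

    # Splitting the integral at the break points gives the correct value:
    print(integrate(f, (x, -3, 4)))        # 463/15

    # Naive endpoint evaluation with F(x) = x**3/3 used across the whole interval:
    F = x**3 / 3
    print(F.subs(x, 4) - F.subs(x, -3))    # 91/3 = 455/15, short by 8/15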

Sorry, your first line (of displayed LaTeX, starting with $\int_{-3}^{4}$) is just wrong, and I don't understand why you think it should follow from the fundamental theorem (or anything else). What line of reasoning did you have in mind? --Trovatore (talk) 22:45, 7 November 2013 (UTC)
The fundamental theorem says nothing about whether the function is piecewise or not; it just says the function must be continuous in order to be able to use the theorem to integrate from -3 to 4 (or to integrate any piecewise function in general). The antiderivative is defined piecewise (the power-rule antiderivative of each piece, with the constants taken to be zero - although I will note that it does fail to be continuous at the endpoints!), but the theorem only says to evaluate the antiderivative at the endpoints.--Jasper Deng (talk) 22:52, 7 November 2013 (UTC)
No, sorry, that is not an antiderivative of f. That's your mistake. --Trovatore (talk) 23:03, 7 November 2013 (UTC)
Then what is? There must exist one by the fundamental theorem.--Jasper Deng (talk) 23:07, 7 November 2013 (UTC)
Any function F such that the derivative of F is defined and equals f, everywhere. Your piecewise function doesn't qualify, because it's not differentiable at ±1. --Trovatore (talk) 23:11, 7 November 2013 (UTC)
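Concretely, with each piece antidifferentiated by the power rule and both constants set to zero (same assumed reading of f as in the check above), the candidate F jumps at the break points, e.g.
$\lim_{x \to 1^-} \left(\frac{x^5}{10} + \frac{x}{2}\right) = \frac{3}{5}$ while $\lim_{x \to 1^+} \frac{x^3}{3} = \frac{1}{3}$,
so F is not even continuous at $x = 1$, let alone differentiable, and the fundamental theorem does not apply to it on $[-3, 4]$.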
I suspect you swapped intervals in the definition and meant to say: Let $f(x) = \frac{x^4+1}{2}$ for $-1 < x < 1$ and $f(x) = x^2$ elsewhere. That would give more meaning to some of your later expressions, but some things would still be wrong. An antiderivative must by definition be differentiable and therefore continuous, so a piecewise definition must adjust constants in the pieces to make it continuous where the pieces meet, in your case x=-1 and x=1. Here is a valid antiderivative: $\frac{x^3}{3} - \frac{4}{15}$ for x≤-1, $\frac{x^5}{10} + \frac{x}{2}$ for -1<x<1, $\frac{x^3}{3} + \frac{4}{15}$ for x≥1. All other antiderivatives are this expression plus a constant (the same constant must be used in all 3 pieces). Note the opposite signs in -4/15 and +4/15. They mean your first computation of the integral is off by 4/15 - (-4/15) = 8/15. With the correct antiderivative we get $\int_{-3}^{4} f(x)\,dx$ = (1/3×4^3 + 4/15) - (1/3×(-3)^3 - 4/15) = 463/15. PrimeHunter (talk) 00:58, 8 November 2013 (UTC)
The fallacy, while not clearly explained, is this: in a neighborhood of the points $-3$ and $4$, an antiderivative for the function f is given by $F(x) = \frac{x^3}{3}$. So by the fundamental theorem of calculus $\int_{-3}^{4} f(x)\,dx = F(4) - F(-3)$. The trouble with this argument is that F is not an antiderivative of f on the whole interval $[-3, 4]$, as required by the fundamental theorem. Another way to think of this is that $\frac{x^3}{3}$ in a neighborhood of each of the points $-3$ and $4$ corresponds to a different antiderivative of the function f, differing by a constant of integration. The fundamental theorem requires that you use the same antiderivative at both end points. Sławomir Biały (talk) 14:49, 8 November 2013 (UTC)
That might still be confusing. To be absolutely clear, if a bit tedious, I'd put it this way:
Each piece can be separately integrated using the basic rule $\int x^n\,dx = \frac{x^{n+1}}{n+1} + C$:

$F(x) = \frac{x^3}{3} + C_1$ for $x \le -1$, $\quad F(x) = \frac{x^5}{10} + \frac{x}{2} + C_2$ for $-1 < x < 1$, $\quad F(x) = \frac{x^3}{3} + C_3$ for $x \ge 1$.

To apply the second fundamental theorem of calculus to calculate

$\int_{-3}^{4} f(x)\,dx = F(4) - F(-3),$

we need $F'(x) = f(x)$ for all $x \in [-3, 4]$. The only places where this might not hold are $x = -1$ and $x = 1$. We need $F$ to be differentiable, and thus continuous, at these points, so we need to have

$\lim_{x \to -1^-} F(x) = \lim_{x \to -1^+} F(x)$ and $\lim_{x \to 1^-} F(x) = \lim_{x \to 1^+} F(x),$

which leads to $C_1 = C_2 - \frac{4}{15}$ and $C_3 = C_2 + \frac{4}{15}$. This yields

$\int_{-3}^{4} f(x)\,dx = F(4) - F(-3) = \left(\frac{64}{3} + C_3\right) - \left(-9 + C_1\right) = \frac{91}{3} + \frac{8}{15} = \frac{463}{15},$
which matches Jasper's other calculation. Indeed, the only thing different in these calculations is being careful about the constants; the original is OK except that it's missing the $\frac{8}{15}$ bit. When dealing with integration constants, it is generally not a good idea to assume that you can just pick one arbitrarily (e.g. C=0, as Jasper did originally), because other considerations may constrain your choices. Here, continuity constrains you: you can pick one of the constants arbitrarily, but not all three of them! Differential equations are another area where being careless with the constants will cause you grief. -- 212.149.196.26 (talk) 23:02, 8 November 2013 (UTC)
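A short SymPy sketch of the same bookkeeping (with the middle piece of F reconstructed as $\frac{x^5}{10} + \frac{x}{2}$, matching the constants above): continuity at the break points fixes two of the three constants, and the definite integral no longer depends on the remaining free one.

    from sympy import symbols, Eq, solve

    x, C1, C2, C3 = symbols('x C1 C2 C3')
    # Power-rule antiderivative of each piece, each with its own constant of integration
    F_left  = x**3/3 + C1           # x <= -1
    F_mid   = x**5/10 + x/2 + C2    # -1 < x < 1 (reconstructed middle piece)
    F_right = x**3/3 + C3           # x >= 1

    # Continuity of F at x = -1 and x = 1 determines C1 and C3 in terms of C2
    sol = solve([Eq(F_left.subs(x, -1), F_mid.subs(x, -1)),
                 Eq(F_right.subs(x, 1), F_mid.subs(x, 1))], [C1, C3])
    print(sol)  # {C1: C2 - 4/15, C3: C2 + 4/15}

    # F(4) - F(-3) is independent of the remaining free constant C2
    print((F_right.subs(x, 4) - F_left.subs(x, -3)).subs(sol))  # 463/15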