Talk:Quadratic variation

Untitled

Why can we use quadratic variation to test differentiability? From Advanced Stochastic Processes by David Gamarnik: "Even though Brownian motion is nowhere differentiable and has unbounded total variation, it turns out that it has bounded quadratic variation."

The theorem might be correct, but what is it useful for?

A differentiable function needn't have a continuous derivative:

—The preceding unsigned comment was added by Theowoll (talk · contribs).

I don't follow your argument. If a function is everywhere differentiable, then it has vanishing quadratic variation. If a function has non-vanishing quadratic variation, then it is not everywhere differentiable. The theorem says neither more nor less than that. The function you've exhibited has a continuous derivative everywhere and therefore has vanishing quadratic variation. There are plenty of functions, such as |x|, which have discontinuities in their derivatives and nonetheless have vanishing quadratic variation. (In fact, every continuous function with a piecewise continuous derivative has vanishing q.v., whereas any discontinuity has non-vanishing q.v., as do nowhere differentiable processes like Brownian motion.) I have removed the dubious tags, but I also removed the very imprecise statement from the intro. –Joke 21:39, 3 January 2007 (UTC)

In the meantime I learned that a differentiable function needn't be of bounded variation (like ). So the theorem is interesting indeed. The function I've exhibited above () is a counterexample to the following statement in the proof: "Notice that |f'(t)| is continuous". The example is differentiable everywhere but its derivative is not continuous. Theowoll 18:58, 13 January 2007 (UTC)
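
A minimal numerical sketch of the dichotomy discussed above, assuming Python with numpy and using sin(t), a unit step at t = 1/2, and one simulated Brownian path on [0, 1] as stand-ins: the sums of squared increments over uniform partitions vanish for the smooth function, lock onto the squared jump size for the step, and settle near t = 1 for the Brownian path as the mesh shrinks.

```python
# Sketch: sums of squared increments over uniform partitions of [0, 1]
# for a smooth function (sin), a unit step at t = 1/2, and one simulated
# Brownian path, evaluated at shrinking mesh sizes.
import numpy as np

rng = np.random.default_rng(0)
T = 1.0

# Simulate one Brownian path on a very fine grid, then sub-sample it, so that
# every coarser partition is evaluated along the same sample path.
N_fine = 2**20
fine_grid = np.linspace(0.0, T, N_fine + 1)
brownian = np.concatenate(([0.0], np.cumsum(rng.normal(0.0, np.sqrt(T / N_fine), N_fine))))

for k in (6, 10, 14, 18):
    n = 2**k                                   # number of subintervals; mesh = T / n
    idx = np.arange(0, N_fine + 1, N_fine // n)
    t = fine_grid[idx]

    qv_smooth = np.sum(np.diff(np.sin(t))**2)                    # -> 0 as the mesh shrinks
    qv_jump = np.sum(np.diff(np.where(t < 0.5, 0.0, 1.0))**2)    # stays at (jump size)^2 = 1
    qv_bm = np.sum(np.diff(brownian[idx])**2)                    # settles near T = 1

    print(f"mesh {T/n:.1e}:  smooth {qv_smooth:.2e}  jump {qv_jump:.1f}  Brownian {qv_bm:.4f}")
```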

definition vs. application

Most of the content in this article concerns applications of quadratic variation. However, its definition is not quite right. It has been said that quadratic variation should be defined as the sup over all partitions, in which case it is infinite for Brownian motion. See Richard Durrett (1996), page 7. Jackzhp (talk) 23:08, 18 May 2009 (UTC)

Evidently there are two definitions of quadratic variation: the quadratic variation of a single function (the sup over all partitions, called the "true" quadratic variation in Durrett), and the quadratic variation defined as the limit in probability over partitions as the mesh size goes to zero (the one used in this article). Apparently they are not equal, being infinity and t, respectively, for Brownian motion.
The latter concept can only be defined probabilistically; it has no meaning for individual sample paths, much like Itô integration itself.
I find this state of affairs strange but plausible, because the order of quantification is rearranged between the two, though I haven't quite got my head around it. In the first definition you fix a sample path and vary the partition; in the second you fix a partition and vary over the sample paths to get a random variable.
I wonder if there is an inequality between the two concepts, with the first definition being the larger.

2A02:1210:2642:4A00:1C7:9FF5:70A5:4795 (talk) 21:53, 11 October 2023 (UTC)
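
A rough numerical sketch of the difference between the two definitions, assuming Python with numpy: on a fine uniform grid the sum of squared increments of a simulated Brownian path sits near t, while a sub-partition chosen after seeing the path (at the local extrema of the sampled path, i.e. where the fine increments change sign) merges same-signed increments and therefore gives a larger sum, since (a + b)² ≥ a² + b² when a and b have the same sign. This does not by itself show that the pathwise sup is infinite, but it does show that path-dependent partitions can push the sum above the mesh-limit value.

```python
# Sketch: sum of squared increments of one simulated Brownian path on [0, 1],
# (a) over the uniform fine grid and (b) over a sub-partition chosen after
# seeing the path, at the local extrema of the sampled path (the points where
# the fine increments change sign). Merging same-signed increments can only
# increase the sum, since (a + b)^2 >= a^2 + b^2 when a and b share a sign.
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 2**20
increments = rng.normal(0.0, np.sqrt(T / n), n)
path = np.concatenate(([0.0], np.cumsum(increments)))

qv_uniform = np.sum(increments**2)                   # close to T = 1

# Indices of the sampled path where the increment changes sign, plus endpoints:
sign_change = np.nonzero(np.diff(np.sign(increments)))[0] + 1
extrema_idx = np.concatenate(([0], sign_change, [n]))
qv_adapted = np.sum(np.diff(path[extrema_idx])**2)   # noticeably larger

print(f"uniform fine grid   : {qv_uniform:.4f}")
print(f"path-adapted extrema: {qv_adapted:.4f}")
```

With these settings the path-adapted sum typically comes out around two to three times the uniform-grid value at this one resolution; the "true" (sup over all partitions) quadratic variation allows arbitrarily fine adversarial refinements, which is how it ends up infinite for Brownian motion, per the Durrett reference above.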

definition

The definition says: Its quadratic variation is the process, written as [X]_t, defined as

[X]_t = \lim_{\|P\| \to 0} \sum_{k=1}^{n} \left( X_{t_k} - X_{t_{k-1}} \right)^2,

where P = \{0 = t_0 < t_1 < \cdots < t_n = t\} ranges over partitions of the interval [0, t] and the norm \|P\| of the partition P is the mesh. This limit, if it exists, is defined using convergence in probability.

What does it mean? That for every epsilon > 0, the probability that the sum of squared differences over a partition differs from the limit by more than epsilon is bounded by some function of the mesh alone, uniformly over all partitions with that mesh? Is this true, or do we only have almost sure convergence along any particular sequence of partitions with mesh going to zero? Commentor (talk) 13:39, 9 July 2010 (UTC)
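
A small Monte Carlo sketch of what convergence in probability looks like here, assuming Python with numpy and taking Brownian motion on [0, 1] with uniform partitions: the estimated probability that the sum of squared increments deviates from t = 1 by more than a fixed epsilon shrinks as the mesh shrinks.

```python
# Sketch: Monte Carlo estimate, for Brownian motion on [0, 1], of
# P(|sum of squared increments - 1| > eps) over uniform partitions of
# decreasing mesh, using many independent simulated paths.
import numpy as np

rng = np.random.default_rng(2)
T, eps, n_paths = 1.0, 0.1, 1000

for n in (10, 100, 1000, 10000):
    # n_paths independent paths, each sampled on the uniform n-interval partition
    increments = rng.normal(0.0, np.sqrt(T / n), size=(n_paths, n))
    qv = np.sum(increments**2, axis=1)
    deviation_prob = np.mean(np.abs(qv - T) > eps)
    print(f"mesh {T/n:.0e}: estimated P(|sum - 1| > {eps}) = {deviation_prob:.3f}")
```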