Talk:Discrete Fourier transform/Archive 2


Hi. Are you aware of any possible advancement to the DFT in discrete frequency? I am particularly working on a topic which has reached restrictions on frequencies, so that they become an infinite set of discrete values rather than a continuous variable. My problem is to determine a correct inverse to an already derived Fourier transform. Any comment in this regard is appreciated.

Fourier Transform calculation

Frequently many computer modeling people use the DFT to calculate the actual Fourier transform. I think this page needs a small section on the relations between the continuous Fourier transform and the discrete Fourier transform. This would really help.--129.107.47.144 (talk) 20:44, 24 March 2009 (UTC)

Periodicity section

This section could do with a rewrite. The notation is different from the rest of the article - brackets, not subscripts. There is no need to refer to the DTFT article. It's easy to show straight from the DFT definition that it is periodic. Paul Matthews 09:44, 3 October 2006 (UTC). I also find it strange that nowhere does the article say that a DFT is a truncation of the Fourier series - well it does now!

I agree that the DTFT should not be relied on so heavily in discussing periodicity of the spectrum. The point that the author of that section (Bob K, I think) wanted to make was that the periodicity of the spectrum is an inherent consequence of the discrete sampling and doesn't rely on any other property of the DFT, really. But perhaps this point could be made in a different way. —Steven G. Johnson 16:29, 3 October 2006 (UTC)
Thanks for remembering that, and I agree that a rewrite could be helpful. I also hope we don't lose sight of the fact that the DFT can be viewed as samples of the DTFT of a windowed function. --Bob K 18:37, 3 October 2006 (UTC)
I would go farther and say: We shouldn't lose sight of the fact that the DFT can be viewed in many different ways. I'm almost inclined to suggest that there should be a section ("Approaches to the DFT" or something) briefly summarizing the most important starting points from which the DFT can be derived. The question, here, is whether the DTFT viewpoint should be so central to the periodicity section—another possibility would be to point to the DTFT as another, related example of the rule that sampling leads to aliasing (rather than the source of this rule). —Steven G. Johnson 21:48, 3 October 2006 (UTC)
I recommend we consult some books, and pick one or two to motivate a good treatment of this subject, not just make it up as we go along. Do you have an approach or book in mind already? Dicklyon 21:55, 3 October 2006 (UTC)
The trigonometric interpolation section is not the right place to talk about the relation to the Fourier series, I think. First, the truncation is only approximate (due to aliasing), and it's a bit tenuous to say that the interpolation "shows" it. The statement of the relationship to the Fourier series needs to be made more precise, and the level of precision required should go somewhere else. —Steven G. Johnson 16:29, 3 October 2006 (UTC)
That's what I was going to say. Wrong place, and over-simplified. The Fourier series is a function of a continuous-time function, and the DFT of a discrete-time function. Tying them together requires either a modulating comb of Dirac delta functions or a limit of the Fourier series of nascent delta functions. Nontrivial to get this right, but it might be worth doing to prevent the all-too-frequent attempts that do it wrong. Dicklyon 16:33, 3 October 2006 (UTC)
You don't need Dirac delta functions to tie them together. It is more common, and more useful for most applications (e.g. spectral methods), to simply view the DFT as a trapezoidal rule approximation for the Fourier-series integral. The error in this approximation depends on the smoothness of the function being sampled, and is closely related to aliasing. (Note that, because of periodicity, the endpoints aren't half-weighted unlike the usual trapezoidal rule. If you have a periodic function and no other information, then none of the fancier non-adaptive quadrature methods help you; by translational symmetry, equally weighted and equally spaced points are optimal.) But still, this introduces a whole new set of topics. —Steven G. Johnson 16:56, 3 October 2006 (UTC)
I'm not sure I follow. You refer to the trapezoidal rule, but also to approximation. How does that relate to the claim in question that "a DFT is a truncation of the Fourier series"? It is my belief and understanding that that's true only if the continuous-time function that the Fourier series is computed from corresponds to the Dirac impulse train, or a version of it filtered with a filter that is exactly flat at 1.0 up to half the sample rate. How does the trapezoidal rule fit into that? Dicklyon 21:54, 3 October 2006 (UTC)
Given a function f(x) with period L, the DFT of the sampled function f(nL/N) is precisely a trapezoidal-rule approximation for the integrals giving the first N Fourier series coefficients of f(x). Thus, the DFT is an approximate truncated Fourier series for f(x), in the same way that the DTFT of a non-periodic sampled function f(x) is a trapezoidal-rule approximation for the Fourier transform of the original function. (A better term might be a "collapsed" Fourier series, because the DFT is the sum of the aliased coefficients in the series, but this terminology is nonstandard.)
Note that here I'm talking about "functions" in the classic sense, not distributions. i.e. I'm excluding delta functions and the like, for which numerical integration is impossible (and unnecessary). Yes, it's also true that the Fourier transform of a periodic delta train is a DFT. However, the approximate viewpoint bears a more direct relationship to actual usage of the DFT — in reality, you rarely have trains of delta functions, but you often have samples of a continuous-time function whose Fourier transform/series you want to approximate. (Of course, there is a relationship between the two viewpoints: the DFT of the sampled function f(nL/N) is the Fourier transform of f(x) multiplied by a unit-impulse train.)
As you increase N for a fixed period L (decreasing Δx = L/N), the DFT rapidly approaches the exact Fourier series coefficients. How rapidly depends on the smoothness of f(x). If f(x) is k-times differentiable and the k-th derivative is piecewise continuous, then the error decreases as 1/N^(k+1); if f(x) is infinitely differentiable, then the error decreases at least exponentially fast. (Much faster than the trapezoidal rule for smooth non-periodic integrands, which gives only 1/N^2 convergence.)
Clearer, I hope? —Steven G. Johnson 22:26, 3 October 2006 (UTC)
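(A small numerical illustration of the trapezoidal-rule viewpoint, in Python with NumPy; the periodic test function and the coefficient index are arbitrary choices made for this sketch, not anything taken from the discussion above.)

    # Sketch: the DFT of N samples of a periodic f, divided by N, is the
    # equally-weighted quadrature estimate of the Fourier-series coefficients.
    import numpy as np

    def f(x):
        return 1.0 / (1.2 - np.cos(x))          # smooth 2*pi-periodic test function

    def c_exact(k, M=4096):
        # near-exact c_k = (1/2pi) * integral of f(x) e^{-ikx} dx, via a very fine grid
        x = 2 * np.pi * np.arange(M) / M
        return np.mean(f(x) * np.exp(-1j * k * x))

    k = 3
    for N in (8, 16, 32):
        x = 2 * np.pi * np.arange(N) / N        # N equally spaced samples of one period
        c_dft = np.fft.fft(f(x)) / N            # "trapezoidal rule" estimates of c_0..c_{N-1}
        print(N, abs(c_dft[k] - c_exact(k)))    # aliasing error shrinks rapidly with N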
Yes, that's all clear. But that's about approximating a Fourier series by a DFT, which brings up the whole big subject of methods of spectrum estimation, etc. Let's not let it dominate the discussion of what the DFT is, and its properties. As to the original statement in question, there are conditions under which it can be true, but it's complicated. Dicklyon 22:43, 3 October 2006 (UTC)
An energetic discussion. I'm with Steven J (not surprisingly - we both teach spectral methods). I see the DFT as truncated FS. There is no need to talk about delta fn combs or a 'sequence of impulses'. See steve's comments above. And for the IDFT, you just start from the complex FS, f(x) = sum c_n exp(i n pi x/L), (period is 2L) evaluate it at grid points x_j = 2 j L / N and you get f(x_j) = sum c_n exp(2 i j n pi/N) which is the IDFT when you truncate the sum. Sure, this is not very rigorous as you have the aliasing question, but I think this approach gives a much clearer picture of what the DFT is about, which is what an encyclopedia ought to be doing. Obviously there are at least 2 different communities (signal processing and spectral methods) that see things differently. Paul Matthews 09:27, 4 October 2006 (UTC)
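(A one-line version of the aliasing step Paul alludes to, written in his conventions; this is only a sketch of the standard grouping of Fourier-series indices, not part of anyone's comment above. Setting n = m + pN,

    f(x_j) = \sum_{n=-\infty}^{\infty} c_n e^{2\pi i j n / N}
           = \sum_{m=0}^{N-1} \Big( \sum_{p=-\infty}^{\infty} c_{m+pN} \Big) e^{2\pi i j m / N} ,

so the coefficients recovered from the samples are the aliased sums \sum_p c_{m+pN}, which reduce to the truncated series, i.e. to c_m alone, exactly when the neglected coefficients vanish.)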
I have feet in each community. To me it seems important to be very clear in distinguishing mathematical properties of the DFT that are exact from those properties that make it useful in approximate applications. The truncated Fourier series is then an approximation in your applications, yes? That is, it relates the DFT values of a discrete function only approximately to the FS of a continuous function that those discrete values may be samples of. Under certain conditions the relationship may be exact, but that's a long discussion, which is why we have other articles on such things. So, OK to mention any of this, but as applications, not the core of the DFT exposition itself. Dicklyon 16:35, 4 October 2006 (UTC)


I'm with Dick. I believe I know everything that I need to know about the DFT to use it successfully for spectral analysis. And I have never thought of it as a truncated Fourier series. The clear picture that I think an encyclopedia ought to be giving is this one:
1. Sampling a continuous-time signal changes the spectrum (into a DTFT).
2. Much to the great surprise and delight of most neophytes, the old spectrum is still largely intact, provided the sample-rate is sufficient and one knows where to look.
3. The DTFT, or if you wish the new spectrum, can only be numerically evaluated exactly if either:
3.1 the sequence is non-zero only within a finite interval, or
3.2 the sequence is periodic
4. In case 3.1, the DTFT is continuous, so it can only be evaluated (numerically) at discrete frequencies; i.e. "sampled". The DFT can do that task exactly and with arbitrarily good resolution.
5. In case 3.2, the DTFT is zero except at a discrete set of frequencies, which the DFT can evaluate exactly.
--Bob K 18:45, 4 October 2006 (UTC)
I'm not so sure you're with me. These things are all about spectra of continuous-time signals, and how to evaluate and approximate them. That's all fine, but the DFT itself has no necessary or explicit connection to anything continuous-time. It is a discrete transform on a finite set of values. All else is applications and approximations, yes? Dicklyon 18:55, 4 October 2006 (UTC)
Sorry. I was focussing (too much) on your statement:
"To me it seems important to be very clear in distinguishing mathematical properties of the DFT that are exact from those properties that make it useful in approximate applications."
I think it is important for encyclopedia readers to understand that under the conditions of 3.1 or 3.2, the approximation error can all be attributed to sampling (i.e. aliasing), and none to the DFT. The samples of the DTFT that it computes are exact. Describing it as a "truncated Fourier series" gives an impression of truncation error, which is misleading, at least in these two cases. If the truncation error we're talking about is the usual leakage that results from windowing an infinite sequence into a finite one (thereby achieving condition 3.1), I don't think that "truncated Fourier series" is the clearest way to describe that to an encyclopedia audience.
--Bob K 21:12, 4 October 2006 (UTC)
Bob, your blinkers are showing: you claim to know everything that [you] need to know about the DFT to use it successfully for spectral analysis, but there's much more to the DFT than signal processing. For example, in spectral methods for PDEs, the boundary conditions may well be a priori periodic, in which case there is zero error due to windowing. In such cases, it is useful to separate the error into two components: first, the difference between the DFT amplitudes and the Fourier-series amplitudes, which is due to aliasing; and second, the error due to the higher-frequency terms you are not computing. There are several reasons it is useful to separate these two errors—perhaps the simplest is that the truncation error is impossible to avoid, while the aliasing error can be avoided if you compute your Fourier series coefficients by an exact integral or quadrature (i.e. a Galerkin method, whereas the DFT corresponds to a collocation method).
In this context, it is also more fruitful to focus on the relationship between the aliasing and truncation error and the convergence rate of the Fourier series (which determines the magnitude of the high-frequency components that are truncated/aliased).
Yet another context in which it is important to view the DFT as an approximate integral is numerical quadrature. Here, the relationship between the integration error and the rate of convergence (via aliasing) is central to methods such as Clenshaw-Curtis quadrature. It also arises in Chebyshev approximation, where again the truncation error is conceptually separate from the error in the expansion coefficients.
I absolutely agree that the DFT is a wonderful beast in its own right that has lots of interesting properties that can be derived without reference to any other transform. However, its applications often (not always) involve some approximation for another transform. For signal processing, these approximations are due to aliasing (from sampling) and leakage (from windowing). For spectral PDE methods, it is more useful to describe the errors as I have above.
All of this has nothing to do with the periodicity section, of course. I agree with Dick that it should go under applications, etc. But don't belittle things you don't understand.
—Steven G. Johnson 00:02, 5 October 2006 (UTC)
I had no intention of belittling, which is why my statement was so carefully worded and narrow in scope. As Paul said, "there are at least 2 different communities (signal processing and spectral methods) that see things differently"... the usual intractable Wikipedia problem. So it shouldn't surprise you that we disagree. My intent was to challenge Paul's assertion that viewing the DFT as a truncated Fourier series "gives a much clearer picture of what the DFT is about". That blanket statement does not apply to all of us, and I seriously doubt that I am a minority of 1 or even a minority at all.
--Bob K 02:55, 5 October 2006 (UTC)

I just removed the whole section in question, because it makes no sense (to me). Here it is in case anyone wants to fix it up, remember it, or explain to me what it's trying to say:

Periodicity

It is shown in the Discrete-time Fourier transform (DTFT) article that the Fourier transform of a discrete time sequence is periodic. A finite length sequence is just a special case. I.e., it is an infinite sequence of zeros containing a region (aka window) in which non-zero values may occur. So the DTFT of the finite sequence is periodic. Not surprisingly, the DFT is periodic; e.g. X_{k+N} = X_k. Less obvious, perhaps, is that the inverse DFT is also periodic; e.g., x_{n+N} = x_n. It is a periodically extended version of the finite sequence.

The DTFT of the periodically extended sequence is zero-valued except at the discrete set of frequencies sampled by the DFT. I.e., it is effectively identical to the DFT. The DTFT of the finite sequence has other non-zero values, but it is still identical to the DFT at the frequencies sampled by the DFT. So the approximation error of the DFT, as an approximation to the DTFT of the finite sequence, lies in the missing non-zero values, not in the coefficients. In terms of the inverse DFT, that approximation error becomes the periodic extension of the finite sequence.

  • Commonly, the finite sequence is a modification of a longer, perhaps infinite, sequence, whose DTFT is only approximated by that of the finite sequence. In that case, of course, the DFT too is only an approximation to [samples of] the original DTFT.
  • The shift theorem, above, is also an expression of the implicit periodicity of the inverse DFT, because it shows that the DFT amplitudes are unaffected by a circular (periodic) shift of the inputs, which is simply a choice of origin and therefore only affects the phase. Periodic boundary conditions play an important role in many applications of the DFT. When solving differential equations they allow periodic boundary conditions to be automatically satisfied, and thus can be a useful property. See also the applications section below.

Here's what I don't get: what is meant in the first place by "the Fourier transform of a discrete time sequence"? Sequences don't have Fourier transforms. To get a periodic Fourier transform, aren't they assuming a modulated comb of delta functions? Do we really want that complexity introduced into an article on a simple discrete transform? And then, the periodicity stuff presumed definitions different from what we have in this article. I was looking at a book today that defined periodic extensions of x and X, which might then give something to say. But what does this article refer to when it says "it" in "It is a periodically extended version of the finite sequence"? It seems to be contradicting the definition that we have for the IDFT. And so on. This is all too fuzzy for me. If anyone can make it sensible, then put it back. Dicklyon 03:35, 5 October 2006 (UTC)

Yes, you've raised a lot of the things that led me to flag this section. OK, all that's needed, I think, is the one-line proof that

X_{k+N} = X_k.

This is a simple property that follows directly from the definition, so I hope it should not be controversial. I have put that in; please feel free to add to it. I have put it just before the shift theorems because they use periodicity. Periodicity is also very useful for understanding aliasing and the 'common mistake' problem argued about earlier - you can't distinguish between X_{N−1} and X_{−1}, and it makes more sense to call it X_{−1}. Paul Matthews 10:12, 5 October 2006 (UTC). Again, to me, the periodicity proof shows that the DFT is a form of FS.
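(For reference, with the article's definition X_k = \sum_{n=0}^{N-1} x_n e^{-2\pi i kn/N}, the one-line proof Paul refers to can be written as

    X_{k+N} = \sum_{n=0}^{N-1} x_n\, e^{-\frac{2\pi i}{N}(k+N)n}
            = \sum_{n=0}^{N-1} x_n\, e^{-\frac{2\pi i}{N}kn}\, e^{-2\pi i n}
            = X_k ,

since e^{-2\pi i n} = 1 for every integer n.)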
I fixed it to talk about periodic extensions, since the DFT and IDFT as defined are finite sequences. This follows a book I have (I'll have to look up which one if anyone cares; it introduces a specific notation for the periodic extensions so that they can be used for certain properties that benefit from the infinite sequence). Dicklyon 15:13, 5 October 2006 (UTC)
I would argue for a broader phrasing, something like:
Although the DFT is ostensibly just a transformation of N numbers into N numbers, it has a number of properties closely associated with the notion that both the input and the output are periodic sequences, with period N. And then give various examples, such as:
  • Defining the periodic extension just by evaluating the formula for all k (or n, for the IDFT).
  • The notion that periodicity in the transform arises whenever you have uniformly spaced discrete samples. (Relate to the DTFT and Fourier series for periodicity in k and n, respectively.)
  • The fact that the DFT can be derived from a Fourier transform of periodic sequences of delta functions (or from the DTFT of a periodic sequence, or...).
  • The (circular) shift theorem: under a cyclic shift, only the phase of the output changes, analogous to the usual shift formula for the Fourier transform and DTFT (see the numerical sketch below).
  • More abstractly, the relation to cyclic groups. (e.g. the DFT can be viewed as the projection operator onto the irreducible representations of the order-N cyclic group.)
  • Others...?
—Steven G. Johnson 17:25, 5 October 2006 (UTC)
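(A throwaway NumPy sketch of the circular-shift bullet above, with an arbitrary random input and shift; it is only meant to make the "phase only" statement concrete.)

    # Circular shift theorem: np.roll(x, m) shifts x cyclically; the DFT magnitudes
    # are unchanged and each X_k picks up the phase factor e^{-2*pi*i*k*m/N}.
    import numpy as np

    N, m = 8, 3                                   # arbitrary length and shift
    x = np.random.default_rng(0).standard_normal(N)
    X = np.fft.fft(x)
    X_shifted = np.fft.fft(np.roll(x, m))

    k = np.arange(N)
    print(np.allclose(np.abs(X_shifted), np.abs(X)))                    # True
    print(np.allclose(X_shifted, X * np.exp(-2j * np.pi * k * m / N)))  # True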

Yes, those might be good ideas for additions to the section.

I found my book: Peled & Liu's Digital Signal Processing: Theory, Design, and Implementation. I misremembered a bit. Their notation is not specific to the periodic extension, but rather is "taken to mean the periodic extensions when appropriate". That is, they use the same notation for the DFT and its periodic extension, essentially equivalent to treating them as circular, to make it easier to state certain properties, such as even and odd symmetries, reverse directions, shifts, etc. that involve negating or offsetting subscripts, to avoid introducing circularity via mod-N subscripts, I suppose.

However, the idea of periodicity arising from uniformly spaced discrete samples gets a lot more complicated. There is nothing in the DFT that cares about uniform spacing, so tying its periodicity to the periodicity of spectra of continuous-time signals will take a lot of work, which is already well covered, I think, in the DTFT article.

The circular convolution and shift properties are already covered, I believe. You don't need to drag periodicity into the explanation of circular.

Dicklyon 17:41, 5 October 2006 (UTC)

Well I would prefer to Keep It Simple, especially in the properties section: stick to things that follow straight from the definition, like the periodic extension. Reference to the DTFT and talk of sampling can come later in the applications section (and please, no representation theory here!). Remember, the article is already too long. Paul Matthews 09:03, 6 October 2006 (UTC)

Orthogonality

I just took a brief look at this article after a long time. Still, only the editors can read it. Fundamental problems remain unsolved from the very beginning of the article. Quote:

The vectors u_k = [e^{2πi kn/N}, n = 0, 1, ..., N−1] form an orthogonal basis over the set of N-dimensional complex vectors

The expression e^{2πi kn/N} is not for a basis but for a vector component, as it contains no information distinguishing the component address from the vector address. The matter was discussed in talk:function (mathematics)#standard notation and it was resolved that the notation for such a basis could be a family of vectors,

the k-th vector of which is u_k,

the n-th component of which is e^{2πi kn/N}.

Bo Jacoby 14:52, 7 February 2007 (UTC).

The statement you have excerpted does not have to stand on its own. My opinion is that the context you have omitted is sufficient. With due respect, if I had to clarify it, I think a more conventional notation is something like:
--Bob K 15:15, 7 February 2007 (UTC)
I think Bo Jacoby's point was clearly that e^{2πi kn/N} is not a vector but the n-th element of the vector u_k. Not respecting the logic behind vector notations leads to ambiguous and cumbersome expressions. I vote for Bo's notation. Alternatively, a notation analogous to set comprehension might be useful. HenningThielemann (talk) 18:48, 19 October 2010 (UTC)

There's another problem too. This article doesn't mention anywhere that the DFT construction is applicable to any orthogonal basis, not just the complex exponentials e^{2πi kn/N}. I'd like to see it generalized a bit more, and be less specific in its focus on sine waves. ~Amatulić (talk) 21:25, 21 October 2008 (UTC)

You seem to be implying that any unitary matrix is a "DFT". This is not conventional terminology. (Some generalizations of the DFT are described in the "Generalizations" section, however.) —Steven G. Johnson (talk) 01:33, 22 October 2008 (UTC)

Definition

The definition is incomprehensible to the uninitiated reader.

1. It is more important to tell the reader that e^{−2πi/N} is a primitive N'th root of unity than that "e is the base of the natural logarithm, i is the imaginary unit (i² = −1), and π is Pi".

2. A few simple examples might help.

Bo Jacoby 07:42, 8 February 2007 (UTC).

The examples didn't do anything for me, but I agree with you that the excerpt you quoted is too patronizing for my taste. I would also omit "primitive N'th root of unity" for the simple reason that I have been using the DFT for 40 years and I have never needed that particular description.
--Bob K 23:17, 8 February 2007 (UTC)

A person with long experience reacts differently than a person who has just heard the words 'discrete Fourier transform' and is now curious about them. WP ought to be helpful to our young friend. He/she is likely to stop reading at the very first formula. Is it possible to improve with this in mind? What problem is the DFT solving? Is there a simple example? Perhaps it should be even simpler.

Bo Jacoby 12:09, 9 February 2007 (UTC).

I don't agree with this sentence "The sequence of N complex numbers x0, ..., xN−1 is transformed into the sequence of N complex numbers X0, ..., XN−1", since the DFT doesn't necessarily transform N complex numbers into another N (that is only true for the Fast Fourier Transform). I think this definition should be more general, giving different cardinalities to the numbers of the 2 different domains (i.e.: transforming a sequence of N complex numbers into a sequence of K complex numbers).

79.18.198.247 (talk) 12:42, 25 January 2008 (UTC)

I have not seen any definitions of your form--DFT is always N to N. What are you using for a source? Dicklyon (talk) 17:07, 25 January 2008 (UTC)
Rabiner and Gold, Theory and Application of Digital Signal Processing, 1975, p51. They don't exactly commit to either position, and it is not necessary to, in my opinion. Both definitions are useful and valid. We don't all agree on the definition of Fourier transform either. Anyhow, they give the formula:
X(k) = Σ_{n=0}^{N−1} x(n) e^{−j2πnk/N}, and say, "it has been called the discrete Fourier transform (DFT)". And they also say, "it is clear that X(k) is periodic with period N samples".
--Bob K (talk) 18:46, 25 January 2008 (UTC)
I don't deny that it's sometimes useful to compute fewer than all N values of a DFT, but I don't think Larry and Ben meant to define it that way by omitting a specific mention of the range of values. I can't find my copy of that one right now, but in the Oppenheim and Schafer of the same year, they also don't seem to have a firm definition. But they do talk about various properties, as we do in this article, that only work if you take DFT to mean all N values. If we define it otherwise, we won't be able to claim invertibility, circular convolution, etc. Dicklyon (talk) 01:29, 26 January 2008 (UTC)
I'm not sure why you say "fewer than all N values of a DFT". I think the issue here is whether the transform domain is [0,N-1] (like an FFT) or unbounded. Larry and Ben certainly acknowledge the latter possibility by mentioning "X(k) is periodic with period N samples".
It's really just a specific form of the issue that arises with the DTFT:
Is the domain (-π, π] or [0, 2π) or unbounded? It can be any of those, depending on the point you are trying to make. All are valid. That is why Oppenheim and Schafer are reluctant to settle on just one.
The unbounded viewpoint, by the way, is particularly useful for insight into the nature of and mitigation strategies for aliasing. When we draw pictures like this one, we are depicting the unbounded DTFT.
When the input to a DTFT is periodic, the DFT can be used to compute the coefficients of the DTFT's modulated Dirac comb output, using just one period of the input. The coefficients are actually a Fourier series, which everyone seems to agree (hooray) is an unbounded number, even if the series is periodic.
--Bob K (talk) 13:36, 26 January 2008 (UTC)
I suppose I may have misinterpreted the idea behind the question above about computing K values from N. You're right that the DFT can be viewed as either N values or as an infinite periodic sequence of N, but since there are still only N distinct values there, I hadn't thought that was what he was asking about. And since the inverse transform is written in term of using only N values, you know... Dicklyon (talk) 17:06, 26 January 2008 (UTC)
Confession: I too may have misunderstood. I did not take "K of N" literally. I guess what he is actually talking about is this formula:
    X_k = Σ_{n=0}^{L−1} x_n e^{−i2πkn/N},   k = 0, ..., N−1     (from DTFT#Finite-length_sequences),
where L is the actual sequence length. When N > L, it can be computed directly from that "DFT" formula, whereas an FFT requires zero-padding. I guess that is his point. The question is whether or not we want to call that formula a "DFT". It's OK with me, but you might have a hard time finding that definition in a textbook.
--Bob K (talk) 20:59, 26 January 2008 (UTC)
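(A quick NumPy check of that reading, assuming the formula in question is the L-term sum evaluated at N > L equally spaced frequencies; np.fft.fft's second argument does the zero-padding.)

    # Direct evaluation of sum_{n=0}^{L-1} x_n e^{-2*pi*i*k*n/N} for k = 0..N-1
    # agrees with an FFT of the zero-padded sequence.
    import numpy as np

    L, N = 5, 16
    x = np.arange(1.0, L + 1)                     # arbitrary test data
    k = np.arange(N)[:, None]
    n = np.arange(L)[None, :]
    X_direct = (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)
    X_padded = np.fft.fft(x, n=N)                 # zero-pads x to length N
    print(np.allclose(X_direct, X_padded))        # True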
I'd say not OK unless we find it in a textbook. Dicklyon (talk) 21:13, 26 January 2008 (UTC)
According to Oppenheim-Schafer, they don't actually use a strict definition of the cardinalities; they actually point out that for the representation of a finite sequence of length N, or of a periodic repetition of a sequence of length N, it is enough to use N coefficients to represent them with a DFT without losing information. Anyway the basic concept behind the DFT is actually a transformation between 2 discrete domains (bounded or unbounded is inessential) and I think it should be more general than a "N-to-N transformation"; that's why both Oppenheim-Schafer and Rabiner-Gold don't settle this argument when they give the very first definition. (By the way, I'm the same person as the first message) 79.33.154.55 (talk) 17:51, 27 January 2008 (UTC)
It might be OK to mention that some sources use a more general definition. But you need to be careful and at least state a definition that makes the claimed properties true. Most sources (as far as I can see) do use a strict N-to-N definition; for example this one, this one, this one, etc. Dicklyon (talk) 18:01, 27 January 2008 (UTC)

Notation

The reader has 3 equivalent expressions to learn

  1. The name "Discrete Fourier Transformation"
  2. The abbreviation "DFT"
  3. The symbol ℱ

The article becomes easier to read if we get rid of the third one and write DFT rather than ℱ.

Bo Jacoby 08:03, 8 February 2007 (UTC).

But the single-symbol transform function with the big F is pretty conventional, isn't it? The reader really ought to learn it. This should probably be argued with respect to how different authoritative books present it. Dicklyon 15:33, 8 February 2007 (UTC)

If a reader wants to study an authoritative book, then he will learn the notation from that book, and then he does not need to read the Wikipedia article. The purpose of an encyclopedia article is to provide a comprehensible explanation to persons who do not already know the subject matter, and who do not necessarily intend to commence a deeper study. So the article should not be complicated by different ways of expressing the same thing. The big F is used not only for the discrete Fourier transform, but also for other transforms such as the continuous Fourier transform. Therefore it is easier to get rid of the big F here than to get rid of the abbreviation DFT, which specifically means Discrete Fourier Transform. (It would be nice to stop distinguishing between discrete and continuous Fourier transform and create a common presentation, but that is quite another story). Bo Jacoby 23:00, 8 February 2007 (UTC).

I know you are big on unique, unambiguous, standalone notation, and in my more idealistic years, I might have bought into that. But the place I'm at now is that notation is not enough to handle the complexities and multiple conventions of the real world. E.g., even within Wikipedia we are stuck with two different meanings of sinc() for engineers and physicists. There will always be a need for definition, context, and prose. Both the writers and the readers will need all those communication tools and skills. Having placed that stake in the ground, let me point out both an unmentioned advantage and another disadvantage of keeping ℱ here:
  1. The advantage is your stated desire to "stop distinguishing between discrete and continuous Fourier transform". Although I don't think the goal is realistic, it seems like a step in that direction is to proliferate the ℱ rather than restrict it.
  2. The disadvantage is that the ℱ does not distinguish between DFT and DTFT. But I'm not really worried about it, because I also have the other clues.
--Bob K 23:51, 8 February 2007 (UTC)

Finite Fourier transform

"Finite Fourier transform" redirected to "Discrete Fourier transform" here at Wiki. But I put up a disambiguation page for the "Finite" term, because alas it is used in a different sense by some people, ee.g.

http://www.dtc.army.mil/navigator/apnd_D03.html

http://library-dspace.larc.nasa.gov/dspace/jsp/bitstream/2002/10966/1/NASA-97-tm110340.pdf

LDH 11:07, 26 February 2007 (UTC)

I see you "fixed" it. I have disputed your fix; see talk:Finite Fourier transform. Dicklyon 21:08, 26 February 2007 (UTC)
And I'm disputing your dispute. =) I don't see anything wrong with LDH's fix...the references for those definitions seem easy to find. However, some of the references you point out give a third definition (grrr), in which "finite Fourier transform" is a synonym for the Fourier-series coefficients. —Steven G. Johnson 06:41, 27 February 2007 (UTC)
Thanks for adding refs and the other meaning, and getting a little closer to typical usage on the last one; I don't believe it's typically limited to x of finite support, though. I'll look for a good ref (such as the first one LDH gave above, maybe). Dicklyon 07:39, 27 February 2007 (UTC)
The difference seems to be just a matter of interpretation—does x(t) have compact support, or do we just happen to restrict the Fourier integral to compact region? It's just a question of whether you are looking at the function before or after windowing. (In any case, it would be nice to have a more authoritative reference; I hate to rely on web pages and technical reports.) —Steven G. Johnson 07:46, 27 February 2007 (UTC)

Division by 2 error?

Is there not an error in the denominator for the forward transform? (Or the compound forward/inverse scaling factor?) I get half the result when testing in a program and a spreadsheet. Apparently for all indices other than 0 and N/2, the scale factor (product) should be 1/(N/2), rather than 1/N. (It could be written 2/N.) See the link below; this value makes my program work, I believe.

http://www.dspguide.com/ch8/5.htm

63.76.53.47 17:35, 6 June 2007 (UTC)Gordon Elliott63.76.53.47 17:35, 6 June 2007 (UTC)

The sums on that dspguide page only go to N/2. The standard form (which works more generally, not just for real signals with symmetric transforms) shouldn't need the hacked factor. Dicklyon 18:00, 6 June 2007 (UTC)
There is no error that you asked about. Real-valued signals have symmetrical transforms. Since the information is redundant, you can do an inverse transform with just the positive frequency components, but then you have to compensate for the energy that you did not include in the reconstruction. And of course it is exactly half the total, thus the factor of 2.
--Bob K 18:34, 6 June 2007 (UTC)
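(To make the factor-of-2 point concrete, here is a small NumPy sketch of my own, not taken from the dspguide page: a real signal reconstructed from only its non-negative-frequency DFT bins, with weight 2/N everywhere except the k = 0 and k = N/2 bins.)

    # For real x, X_{N-k} = conj(X_k), so dropping the negative-frequency half means
    # each retained bin (except DC and Nyquist) must be counted twice.
    import numpy as np

    N = 8                                          # even length, so a Nyquist bin exists
    x = np.random.default_rng(1).standard_normal(N)
    X = np.fft.fft(x)
    n = np.arange(N)

    x_rec = np.real(X[0]) / N * np.ones(N)                                 # DC: weight 1/N
    for k in range(1, N // 2):
        x_rec += 2.0 / N * np.real(X[k] * np.exp(2j * np.pi * k * n / N))  # weight 2/N
    x_rec += np.real(X[N // 2] * np.exp(1j * np.pi * n)) / N               # Nyquist: weight 1/N
    print(np.allclose(x_rec, x))                                           # True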

DFT polynomial multiplication algorithm

a(x)=3x^2+9x+1

b(x)=8x+2
d=4, d>deg(a(x))+deg(b(x)), i.e. 4>3
a={3,9,1,0} // append trailing zero
b={8,2,0,0} // append trailing zeros

(3x^2+9x+1)*(8x+2)=24x^3+78x^2+26x+2

yet FFT^-1(FFT(a)*FFT(b))={42,78,8,2} which does not equal {24,78,26,2}

because
FFT({3,9,1,0}) = {13, 11, -6+i, -6-i}
FFT({8,2,0,0}) = {10, 10, 6, 6}
{13, 11, -6+i, -6-i} * {10, 10, 6, 6} = {130, 110, -36+6i, -36-6i}
FFT^-1({130, 110, -36+6i, -36-6i}) = {42, 78, 8, 2} ≠ {24, 78, 26, 2}

what am I doing wrong?

--ANONYMOUS COWARD0xC0DE 20:43, 25 June 2007 (UTC)

That's easy: you're treating a wikipedia article's talk page as a forum. Dicklyon 21:21, 25 June 2007 (UTC)


FFT({3,9,1,0}) = {13, 2-9i, -5, 2+9i}
FFT({8,2,0,0}) = {10, 8-2i, 6, 8+2i}
--Bob K 20:39, 26 June 2007 (UTC)
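(In case it helps later readers: a NumPy check confirming both the corrected transforms and the expected product, with the coefficients kept in the same descending order used above.)

    # Verify the product 24x^3 + 78x^2 + 26x + 2 via pointwise multiplication of FFTs.
    import numpy as np

    a = [3, 9, 1, 0]    # 3x^2 + 9x + 1, padded to length 4
    b = [8, 2, 0, 0]    # 8x + 2, padded to length 4
    A, B = np.fft.fft(a), np.fft.fft(b)
    print(np.round(A, 3))                         # [13.+0.j  2.-9.j -5.+0.j  2.+9.j]
    print(np.round(B, 3))                         # [10.+0.j  8.-2.j  6.+0.j  8.+2.j]
    print(np.round(np.fft.ifft(A * B).real, 3))   # [24. 78. 26.  2.]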

Too technical

Someone removed the {{technical}} I stuck up there because I didn't leave any comments about what I didn't understand. So, I put it back, and here's what I don't understand: just about all of it. It goes right into high-level mathematics without so much as an explanation of what you're even doing. Wikipedia's here to create an encyclopedia accessible to everyone, not just people who took 4+ years of math in college. So, in short, how about a little simplification? — SheeEttin {T/C} 02:16, 29 June 2007 (UTC)

OK, let's start with the lead. Is it understandable? If not, what words or phrases throw you? Next, in the definition, since it's a mathematical transform defined by a summation, do you want that spelled out more in words? Or a description of what that summation is doing? And is there any hope of presenting something that will make a reader happy if the reader doesn't have the background to understand the material? I'm willing to help, but it's not so easy for those of us who understand to understand what to do to satisfy those who don't. Help us out. Try this as a definition:

Definition

The DFT is a linear algebraic transform that takes as its input a set of numbers that can be thought of as a sampled time-domain signal that repeats, wrapping from the last sample back to the first forever. The transform produces as output a set of just as many numbers, which can be thought of as measurements of the amounts and phases of different frequencies present in the original periodic sequence. The numbers are complex in general; in particular, even if the input numbers are real, the output numbers are still usually complex. Each output number corresponds to a frequency of some number of cycles in the length of the original data; the first, at index 0, represents "DC", or the amount of unchanging signal in the input; the second, at index 1, represents the amount and phase of a sinusoid of one cycle in the input data (that is, one cycle per repetition in the infinite periodic repetition of the input); similarly, output values at higher indices quantify the amount and phase of a sinusoid of that many cycles in the input data. The amount and phase are the magnitude and phase of the output complex numbers, which can also be interpreted as real and imaginary parts specifying the amounts of cosine and sine waves in the input data. Each output value of the DFT is computed by summing the values of the input times samples of a complex exponential of the frequency corresponding to the output index. For the higher index values, above half the number of input points, the frequencies exceed a half cycle per input sample, and may alternatively be viewed as negative frequencies.

In algebraic notation, the sequence of N complex numbers x0, ..., xN−1 is transformed into the sequence of N complex numbers X0, ..., XN−1 by the DFT according to the formula:

    X_k = Σ_{n=0}^{N−1} x_n e^{−(2πi/N)kn},   k = 0, ..., N−1,
where e is the base of the natural logarithm, i is the imaginary unit (i² = −1), and π is pi. The transform is sometimes denoted by the symbol ℱ, as in X = ℱ{x} or ℱ(x) or ℱx.

The DFT for the case of real inputs can also be defined in terms of sums of purely real sines and cosines, for the real and imaginary part separately:

    Re(X_k) = Σ_{n=0}^{N−1} x_n cos(2πkn/N),   Im(X_k) = −Σ_{n=0}^{N−1} x_n sin(2πkn/N)
And so on...

This may be pretty unpalatable to someone who would prefer to use mathematical notation, but tell me if it helps with your request. It's not less technical, just a different notation for what is inherently a very technical topic.
Dicklyon 04:49, 29 June 2007 (UTC)
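(Not a proposal for the article, but for anyone checking the prose against the formula, here is a literal NumPy transcription of the definition above; the test input is arbitrary.)

    # X_k = sum_{n=0}^{N-1} x_n e^{-2*pi*i*k*n/N}, computed exactly as written.
    import numpy as np

    def dft(x):
        x = np.asarray(x, dtype=complex)
        N = len(x)
        n = np.arange(N)
        k = n[:, None]
        return (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1)

    x = [1.0, 2.0, 0.0, -1.0]
    print(np.allclose(dft(x), np.fft.fft(x)))     # True: agrees with the library FFT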
Hmm, I'm not entirely happy with your rephrasing. The input and outputs are not merely "sets" of numbers. How does adding the unexplained acronym "DC" help a completely nontechnical reader? And you can't interpret the real and imaginary parts as the amount of cosine and sine in general (not for complex inputs). For real data, it makes much more sense to write the outputs in polar form and express the input as a sum of sinusoids of the form A_k cos(2πkn/N + φ_k), rather than a sum of sine and cosine (since it is not completely obvious that a sum of sine and cosine is just a sinusoid of the same frequency with a different phase). Any of the frequency components may be viewed as negative frequencies, not just the upper half; aliasing applies to every component.
—Steven G. Johnson 20:10, 30 June 2007 (UTC)
Yes, I should have said ordered sets, or sequences, or something like that. And yes some of what I said only makes sense for real inputs. And you're right that any of the frequencies can be treated as any of their aliases; maybe this is best left out, but sometimes the "primary" frequency of interest is the one with lowest absolute value, which is what I was getting at. Feel free to work it over, but I don't think polar expressions with complex exponentials is likely to be a part of the solution of making it intelligible to the less technical readers. Dicklyon 22:05, 30 June 2007 (UTC)
I don't think we should try to write out the whole equation in English anyway; if one's mathematical background is too weak to make head or tails of the equation, writing it out "longhand" is very unlikely to be useful or illuminating either. For non-technical readers, I think it is sufficient to (a) have one sentence saying that the DFT, the discrete analogue of the Fourier transform, takes a finite sequence of numbers and expresses it as a sum of sinusoidal components and (b) after the equation, mention that the exp(2pi i nk/N) are the sinusoidal components and the X_k are the complex amplitudes which (in polar form) directly represent both the amplitude and phase of these components, and link Euler's identity. It is unreasonable to attempt to further explain the equation in detail without expecting the reader to know complex arithmetic and Euler's identity. This is not the article in which we should be explaining the relationship between complex arithmetic and trigonometry...the reader should go elsewhere for that. —Steven G. Johnson 06:04, 1 July 2007 (UTC)
I'm good with that approach. You want to take a stab at it? Dicklyon 16:50, 1 July 2007 (UTC)
Okay, will do. —Steven G. Johnson 23:08, 1 July 2007 (UTC)
Guess I should've checked back sooner. Anyway, it still makes no sense to me, but it's written much better. — SheeEttin {T/C} 14:50, 3 July 2007 (UTC)

Confusing

Also see below: Too simple. -Wikid77 19:42, 16 October 2007 (UTC)

I must say this article is pretty confusing. I've done some research about the DFT to try to understand and apply it in a program, but it looks like there are many variants of this... For example, on [1] it is said:

Each of the four Fourier Transforms can be subdivided into real and complex versions. The real version is the simplest, using ordinary numbers and algebra for the synthesis and decomposition. For instance, Fig. 8-1 is an example of the real DFT. The complex versions of the four Fourier transforms are immensely more complicated, requiring the use of complex numbers. These are numbers such as: 3+4j, where j is equal to sqrt(-1) (electrical engineers use the variable j, while mathematicians use the variable, i). Complex mathematics can quickly become overwhelming, even to those that specialize in DSP. In fact, a primary goal of this book is to present the fundamentals of DSP without the use of complex math, allowing the material to be understood by a wider range of scientists and engineers.

The formulae in that article are:

    Re X[k] = Σ_{n=0}^{N−1} x[n] cos(2πkn/N),   Im X[k] = −Σ_{n=0}^{N−1} x[n] sin(2πkn/N),   for k = 0, ..., N/2,

and they do not require complex numbers. Also, these formulae yield me N+2 values in the frequency domain (N being the number of samples). The formulae in the Wikipedia article give me 2*N values (N complex numbers). What's the difference between the two sets of formulae (if there is any)? Why aren't these versions written in this article? It seems to me that they are easier to use and are sufficient for most purposes. —Preceding unsigned comment added by 87.65.199.96 (talkcontribs)

Those are like the ones I proposed immediately above here as part of an attempt to make it more "simple" to understand. They are, however, limited to the case of real x; they can be expanded to work for complex x, but look more complicated then. These do NOT avoid complex numbers; they just compute complex numbers with their real and imaginary parts separately. It's not clear to me why you're confused, but it's clear that you are, as you've misstated the number of points that these yield. Dicklyon 14:27, 6 July 2007 (UTC)

I agree that the article is too technical and confusing. One year ago I suggested a more pedagogical approach; see Talk:Discrete Fourier transform/archive 1#Always begin with the simplest example. I gave up because some other editor was opposing rather than cooperating. Now, after a year, the problem still exists: the article is still too technical and confusing. The editor in question has had his chance and blew it. He must give room to other editors. Bo Jacoby 13:31, 27 July 2007 (UTC).

If the article is too technical and confusing, we should be looking for a standard pedagogical approach that helps to clarify it. I just reviewed what you proposed a year or so ago, and I think I agree with Stevenj that it was a bit too "original" for Wikipedia, not to mention long-winded and probably really not that easy for someone to follow if they can't follow formulas with summations and exponentials. And I would argue that starting off with a representation of the periodic extension of the bases, in the form of an infinite matrix, is quite beside the point, since the transform in question is explicitly finite. Perhaps something in a section on "elementary introduction" or something like that, but not in the main lead or intro, which is better targeted at people who actually have the capacity to appreciate the material that's on topic. Dicklyon 15:21, 27 July 2007 (UTC)
Bo, if I recall correctly your basic suggestion was to start with the N=2 case (equivalent to a Hadamard transform). Although the N=2 case is easy to understand, it is a very special case that sheds very little light on the properties and applications of the general-N case, unfortunately. Almost all of the practical applications of the DFT follow from its properties in the limit where N is large. There is no question that some people will find this material hard to follow, but that will be true in any presentation of an advanced-undergraduate-level topic; it doesn't necessarily indicate a fundamental failure of the current structure (although improvements can undoubtedly be made), and indeed the current structure is quite standard pedagogically if you compare to any of the popular and time-tested textbooks. —Steven G. Johnson 16:28, 27 July 2007 (UTC)

Steven, the idea of writing an encyclopedic article is not to summarize the "standard" approach of the "popular and time-tested textbooks", but rather to move the topic from the advanced level down to the elementary level. By writing that "some people will find this material hard to follow, but that will be true in any presentation of an advanced-undergraduate-level topic" you indicate that you do not believe that a good encyclopedic article can be written at all. An encyclopedic article should not describe every variation of notation and definition found in the literature, but rather explain the essence of the subject to the uninitiated reader, using the sufficient minimum of notation and definition. For example, the transcendental numbers e and π are no problem on the advanced-undergraduate level, but they are unnecessary and should be removed, because the discrete Fourier transform is pure algebra. Explaining a special case helps, when a reader does not understand the general case. From the easy case N = 2 one may proceed to the case N = 3, introducing a complex cube root of unity, then to the general finite case, introducing a complex, primitive Nth root of unity, and then to the limiting cases, namely the Fourier series and the Fourier integral transform. I do not suggest to stick to the case N = 2. Bo Jacoby 09:35, 28 July 2007 (UTC).

I prefer the undergrad level, including e and π, FWIW. But what I really want to add here is the question: Why do these things have to be mutually exclusive? The advantage Wikipedia has over a conventional encyclopedia is seemingly unlimited space. Why not create 2 articles, aimed at different segments of the reader population? Why not let the readers pick the article they want to read? The "pure algebra" point-of-view does nothing for me. I have gotten along fine without it for 40 years. But if other folks want to think about it that way, why should I care? To take it a step farther, I would like to see Wikipedia add a way for readers to rate the usefulness of the articles, as many other web pages do. A counter of times visited would also be interesting, if not useful.
--Bob K 11:09, 28 July 2007 (UTC)

Bob, after 40 years you have no problem with e and π and you have no need to express the discrete Fourier transform in terms of simple algebra. So you are not a member of the elementary segment of the reader population. Nor are you a member of the advanced segment, because you do not need the article at all. It tells you nothing new. You are neither complaining that the article is technical and confusing, nor are you praising it for providing you with just the insight you need. We could prefix the article with a note that the reader is supposed to have 40 years of mathematical experience in order to understand it, and in that case he has no benefit from reading it anyway. So we do not need two articles. We need one encyclopedic article! Your suggestions for improving the services of Wikipedia are nice, but beside the point I am trying to make. Bo Jacoby 12:47, 28 July 2007 (UTC).

First, an encyclopedia article on the DFT is not an axiomatic textbook, starting from the elementary definitions of real numbers and arithmetic. Nor can every encyclopedia article stand completely on its own. This is not the article in which to introduce complex arithmetic and complex roots of unity to the reader for the first time; we have other articles for that if need be. Even if in principle one can introduce roots of unity without mentioning π, pedagogically it is unreasonable since virtually no English-speaking students will encounter general complex roots of unity before encountering π, and roots of unity are much simpler in polar coordinates. The first rule of technical writing is to know your audience. (And this is why following approaches in standard introductory textbooks is helpful: popular textbooks have been tested against many audiences for the material in question.)
Second, hyperbole will get you nowhere—you don't need 40 years of experience to follow this article; many advanced freshmen would follow most of it, and in a few more years will have progressed well beyond it. On the other hand, there are aspects of DFTs (like many other math subjects) that require years of study to fully appreciate, including aspects (like the choice of eigenvectors) that even professional researchers do not understand completely. A good encyclopedia article should cover the spectrum, from a one-paragraph overview for a general audience, to an introduction to the basics assuming an appropriate background, to at least a hint of more advanced material that may inform even people who are quite familiar with DFTs (and which can eventually be expanded in additional articles on subtopics).
Third, it's simply a fact of life that not every article will be completely understood, from top to bottom, by every reader. This does not mean that the article is a failure, just that the reader does not have sufficient background to fully appreciate the topic, and a single article cannot compensate for the lack of years of study. As mentioned above, encyclopedia articles are not axiomatic textbooks, and do not exist in a vacuum independent of the rest of education or the encyclopedia. Of course, every article should begin with a brief introduction that conveys at least the gist of the subject to as broad an audience as possible. But demanding that we completely rewrite the article every time a single reader doesn't understand everything is absurd—the article will never converge, nor will rewrites for the benefit of one reader (especially a hypothetical reader who understands complex roots of unity but not π) necessarily improve the article for other readers.
—Steven G. Johnson 16:07, 28 July 2007 (UTC)

The article is too technical and confusing. The readers tell you so. How come you don't listen to what is said? Not every article in WP has that problem. You argue that it cannot be helped. I see no entry in this talk page where advanced freshmen testify that they follow most of it. How come you listen to what is not said? Bo Jacoby 16:29, 28 July 2007 (UTC).

I don't think either of us has argued that it cannot be helped. We did express the opinion, however, that your particular exposition might not be a great one, and that it smacks more of original research than of an established pedagogic approach. I do appreciate your attempt to help, but you also need to listen to the reactions of other editors. It's just us here. Dicklyon 16:56, 28 July 2007 (UTC)

Ok. The problem was there one year ago and it is still there today. Let's talk again in another year from now to see whether your established pedagogic approach has solved the problem by then. If any deviation from the approach of the advanced textbooks is considered forbidden original research, then the problem has got no solution. Good luck. Bo Jacoby 18:15, 28 July 2007 (UTC).

I have never suggested that the article could not be improved; in fact, I explicitly stated the opposite (although I doubt that a solution which will satisfy every reader exists). What I said is that your specific suggestions were unlikely to be improvements, and I explained why. Your argument is based on a logical fallacy: you seem to think that if some reader has a problem understanding the material covered by the article, a completely different approach is automatically an improvement. —Steven G. Johnson 23:29, 29 July 2007 (UTC)

Section "Orthogonality"

In section "Orthogonality" you have mentioned . What is the range of indexes k and k'? Is it possible to add a link to where it comes from? It`s not from article "orthogonal", the only somehow relevant link I can see. Arkadi kagan 13:01, 11 September 2007 (UTC)

Too simple

16-Oct-2007: When expanding the article with text for general readers, please isolate explanations to limited sections, or else the article will become "too simple" (or "too tedious") for experienced readers. Several readers have complained about articles being "baby-fied" with wording that over-simplifies the technical details they feel are needed. This is just a general concern: avoid making the technical writers afraid to use technical terms or formulae, or afraid to expand advanced concepts in the article.-Wikid77 19:42, 16 October 2007 (UTC)

Added basic explanation

16-Oct-2007: To handle concerns of "too technical" (or "too simple"), I have expanded the article with a new bottom section, Basic explanation. That section gives a simple explanation of a "discrete Fourier transform" (plus the applications and branches of mathematics used), without cluttering the main text of the article. I think such a bottom section is about the best way to explain many novice issues about the subject, without annoying the more advanced readers about the background basics, such as using "formulas in finite mathematics" and "algebra". At this point, I am removing the "{technical}" tag at the top. -Wikid77 23:42, 16 October 2007 (UTC)

I don't agree with the explanation of DFT in terms of numerical integration.
--Bob K 15:49, 17 October 2007 (UTC)
25-Oct-2007: In explaining the DFT formula, I am using several analogies, but I'm afraid that the overall article has been so complicated that almost no one understands it. If I didn't have degrees in both mathematics and computer science, I don't think I would have attempted an explanation. Whenever a formula is summing terms of the form "x * e^k" it is summing rectangular areas, although each is being skewed by an exponential factor, based on the counter/index value k. However, I have omitted the discussion of complex numbers to simplify the explanation. Numerical integration can produce similar formulas, as an approximation to an equation in integral calculus, by restricting the counter to discrete values, whereas in a calculus integral formula, the summation involves all possible values of the "counter" variable, usually x, as it ranges from low value to high, during the integration. For that reason, the formula for the DFT summation is similar to numerical integration. However, to reserve some differences of meaning, I have added the term "skewed curve" to the explanation, to not oversimplify the effect of the exponential terms in the DFT formula. Are there some other phrases that might be better to include in the explanation? -Wikid77 11:24, 25 October 2007 (UTC)
I'm with Bob here (which is why I removed that). The DFT has no relation to numerical integration. It is a discrete transform. If it can also be viewed as an approximation to a Fourier series of a continuous periodic function, that's fine, and can be discussed, but it's not what it is. Dicklyon 15:13, 25 October 2007 (UTC)


If the goal is to make DFT understandable/accessible to more people, I don't think it helps to describe the product of a function with an exponential basis function as a "skewed curve". What they really need to understand is that it works like a matched filter. The sinusoidal component of the function that matches the basis function (in shape, not amplitude) is effectively squared, every sample contributing a positive value to the summation. Or in complex (vector) terms, the sinusoidal component of the function that matches the basis function contributes vectors that all point in the same direction. The components that do not match the basis function, contribute values that alternate in sign or vectors that rotate in direction and tend to cancel out of the summation. The final value is therefore dominated by the component that matches the basis function.
Another problem is that you appear to be saying that the summation is a DFT. I don't think you make it clear that it is just one point on a whole function. The DFT is the whole function, not the point.
--Bob K 22:29, 25 October 2007 (UTC)
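A minimal NumPy sketch of that matched-filter picture (the length N = 64, the bin k = 5, and the test frequencies are arbitrary choices for illustration):

 import numpy as np

 N = 64
 n = np.arange(N)
 k = 5                                            # DFT bin whose basis function we "match"
 basis = np.exp(-2j * np.pi * k * n / N)

 matched   = np.cos(2 * np.pi * k * n / N)        # component at the basis frequency
 unmatched = np.cos(2 * np.pi * (k + 3) * n / N)  # component at a different frequency

 print(abs(np.sum(matched * basis)))    # ~ N/2 = 32: per-sample contributions add coherently
 print(abs(np.sum(unmatched * basis)))  # ~ 0: contributions rotate in phase and cancel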
I edited it to be more nearly correct. Please feel free to work on it more. Dicklyon 23:49, 25 October 2007 (UTC)

Needs images

This section needs LOTS OF IMAGES. —Preceding unsigned comment added by 18.243.2.30 (talk) 19:31, 14 November 2007 (UTC)

Why? What kind of images do you have in mind? Dicklyon 20:40, 14 November 2007 (UTC)
I just came here thinking there should be a new page devoted to Fourier transforms of images. Perhaps that's what Anonymous had in mind? I may start Fourier transform of images. I worry it verges on how-to, but there are particulars about it that aren't covered elsewhere right now. There's a lot of encyclopedic "what is it?" information that could be added on the topic. —Ben FrantzDale (talk) 01:50, 18 January 2008 (UTC)
I agree more images would be valuable, in particular graphs of DFTs of measured data, such as audio, and graphs of transform pairs. These will greatly aid those trying to develop intuition for the subject from a practical standpoint. This was standard in engineering textbooks when I was an undergraduate, and probably still is - see for example Analog and Digital Communication Systems by M.S. Roden, Prentice-Hall, 1985. Yaman32 (talk) 00:32, 24 October 2008 (UTC)

Too complicated

Yes, this and the entire set of Fourier series articles on Wikipedia are too complicated. This is Wikipedia, not a math course. I looked over the discussion page and I saw only cheap talk among "specialists". Please insert sections with basic explanations about Fourier, and then you can put whatever complex stuff you want. —Preceding unsigned comment added by 192.35.17.15 (talk) 12:39, 12 February 2008 (UTC)

I tend to agree. A layman's introduction to the Fourier transform would be useful. Integral transforms can seem complicated and unintuitive when they are really fairly simple and are elegant, powerful generalizations of a change of basis in finite-dimensional linear algebra. In an Introduction to Fourier analysis page, I would start with relating it to a change of basis, then show the basis in pictures, and discuss some intuition behind it. For example, that "high-frequency information" means something very specific that can be different from a human intuition for frequency. For example, that a square wave with a given frequency has more than just that frequency component in the Fourier domain. Such a page might also mention the convolution theorem as an important application. —Ben FrantzDale (talk) 12:05, 13 February 2008 (UTC)
I don't think anyone would disagree with more good-quality, well-organized information, since we are not space limited. At the risk of stating the obvious, I think the only "problem" is that formulas are easier/faster to create than pictures, once you get the hang of it. And of course they are more generally useful.
--Bob K (talk) 14:39, 13 February 2008 (UTC)

Take it from an occasional reader: the editors of this page seem to be way too locked into the way it is always done in math courses. People "who don't know" need a broader perspective, e.g. where the hell the core comes from, why it is the way it is (not "just because of its properties", but why it has them in the first place), how it relates to cross-correlation, etc, etc. Oh ya, the signature: 92.112.247.110 (talk) —Preceding undated comment was added at 15:20, 9 November 2008 (UTC).

One way to understand "where it comes from" is to start with the Fourier series, which led to the Fourier transform. Then came the analog-to-digital revolution and the discrete-time Fourier transform (DTFT), which is an approximation to the Fourier transform of the analog signal that was sampled. But the DTFT is still a function of continuous frequency. So the DFT is simply a way of sampling the DTFT at uniformly-spaced frequencies to reduce an infinite set to a finite set.
The problem, I think, is that other applications (besides spectral analysis) have been found for the DFT. Think of each one as a leg of a spider whose body is the DFT in its basic, mathematical, context-free formulation. The description above starts with one leg, instead of starting with the spider's body. When you try to do that here at Wikipedia, you risk objections from editors who aren't used to thinking of it in that way. Textbook writers have more freedom to do what you are suggesting.
--Bob K (talk) 18:13, 9 November 2008 (UTC)
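In symbols, for a finite-length sequence x[n], n = 0, ..., N-1 (treated as zero elsewhere), the sampling relationship described above reads (using the usual sign convention):

 X_{\mathrm{DTFT}}(\omega) = \sum_{n=0}^{N-1} x[n]\, e^{-i\omega n},
 \qquad
 X_k = X_{\mathrm{DTFT}}\!\left(\frac{2\pi k}{N}\right) = \sum_{n=0}^{N-1} x[n]\, e^{-i 2\pi k n / N},
 \qquad k = 0, 1, \dots, N-1.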

Anyone willing to "Do" something about the complexity of this article...

It's been noted, apparently for years, that this article is too complex (to which I concur in the extreme), but nothing has been done to correct the situation. As far as I can tell, the only people who can understand the article as it is currently written are people who already know the subject matter perfectly. So the current article serves NO ONE. The people who already understand don't need it, and the people who don't can't comprehend it. I would dive in and do something myself, but I'm another person coming to this topic for enlightenment, so I'm hardly qualified to rewrite it. I would strongly suggest someone look over Steven W. Smith's "The Scientist and Engineer's Guide to Digital Signal Processing" for some inspiration on how this topic could be explained so that a newcomer has some prayer of understanding what's going on. If no one else steps up in a reasonable time frame, I'll at least toss in a few graphs of what a signal looks like before and after a DFT has been applied to it. Also I'll throw in my best guess at a simple description of what the DFT is good for. However, I won't be too surprised if my edits mostly get tossed, since as I stated above, this subject is far from a strong point for me. I just don't want to sit by and do nothing when it has been pointed out for so long that this article isn't helping anyone. Please note that I'm not suggesting that the complex material be removed, just that some simple description be added so the mathematically challenged can gain some understanding. —Preceding unsigned comment added by Twikir (talkcontribs) 13:55, 20 June 2009 (UTC)

What assumptions are you planning to make about the readers' background?
--Bob K (talk) 17:19, 21 June 2009 (UTC)
That they understand basic algebra, but not much more. Certainly no understanding of higher math (since my own understanding of higher math is limited). As I said above, I would be much happier if someone with a good understanding would write something up, but no one seems to be coming forward. So I'm willing to look like a fool in order to move things forward so at least people might find something that will help them when they come here. Twikir (talk) 09:35, 24 June 2009 (UTC)twikir
Well I think I see your problem. I don't want to discourage your good intentions, but you are editing the wrong article. The DFT may look simple to you, but it is usually taught last in line behind Fourier series, Fourier transform, and DTFT. And they all involve integral calculus.
--Bob K (talk) 13:12, 24 June 2009 (UTC)

Finite Fourier Transform

I keep moving the sentence in the third paragraph of the introduction to a separate paragraph, and someone else keeps putting it back. I thought I would start some discussion about this small point. The sentence in question is this: "The terminology is further blurred by the synonym finite Fourier transform for the DFT, which apparently predates the term "fast Fourier transform" (Cooley et al., 1969) but has the same initialism." I have no problem with the sentence itself - only its placement. My thinking is that conflating "finite Fourier transform" and "DFT" happens nowhere near as often as conflating "FFT" and "DFT", and that it seems inappropriate to mention what seems to me to be a relatively obscure term ("finite Fourier transform") in such a prominent place in a long article. Also, the paragraph that it's currently in is otherwise about FFTs and the relationship between FFTs and DFTs. It seems to me that the sentence about finite Fourier transforms should either be split into a separate paragraph immediately following, as I had done, or, removed from the introduction and moved elsewhere. The new home for this sentence could be in an existing section of the article, a new section about terminology, or even a new article about DFT- and FFT-related terminology. -- ATBS 30Aug09

FWIW, I agree with you. I think this article is the only place I have encountered the term "finite Fourier transform". Interesting tidbit, but it certainly could stand to be moved to a less prominent position.
--Bob K (talk) 13:37, 31 August 2009 (UTC)
The suggested cures seem worse than the disease to me.
Splitting the sentence into a separate paragraph in the lede makes it more prominent, not less (and it makes logical sense to keep discussion of the terminology per se in one paragraph). Terminology in general, and the commonly confused FFT/DFT distinction in particular, needs to go in the lede (the primary purpose of the lede is to say what the article is about, and clarifying DFTs vs. FFTs is part of that). Given that FFTs vs. DFTs should be discussed in the lede, it doesn't make sense to move this one sentence into its own section somewhere, and there's no other place in the article where it seems to fit.
(A prominent example of someone who still uses the "finite" term is Cleve Moler of Matlab.) — Steven G. Johnson (talk) 17:00, 31 August 2009 (UTC)
PS. ATBS, if you intend to edit Wikipedia extensively, it is strongly recommended that you register for an editing account and username, which makes it much easier for other editors to communicate with you. — Steven G. Johnson (talk) 17:10, 31 August 2009 (UTC)
Steven J: I registered last night. My signature was not showing up, so I was adding it manually. I hope to have fixed this so that a full-fledged signature shows up.
As I had organized it, the main topic of the third paragraph was the intimate relationship between FFTs and DFTs. That is why the last sentence of the previous paragraph was also included in that paragraph. Someone saw fit to move that sentence to paragraph 2. It reads fine in that paragraph, too, but it does change the focus of the third paragraph. Paragraph 3 was not primarily about terminology confusion. Like Bob K, I also had never heard of the term "finite Fourier transform". If this term is indeed relatively obscure and unimportant compared to the other topics discussed in the introduction, then it is hard to see how it belongs in the introduction. As I said, I think there are other places that this kind of information can be added. Putting it in the introduction because there is no better place to put it is not a good reason to put it in the introduction. The introduction of a long and widely-read article such as this should contain only things vital to the discussion. Is this information actually vital? To find out, compare it with the information around it. If it turns out not to be of the same level of importance as the other information in the introduction, then probably it should be removed from the introduction. If there is no logical place for it in the remainder of the article, a new section, or a new article, then so be it - it can reside in the discussion pages only until a better place can be found. But I think a place could be found or created. -- ATBS —Preceding unsigned comment added by ATBS (talkcontribs) 22:25, 31 August 2009 (UTC)
Could the problem be solved by adjusting the first sentence to read: "In mathematics, the discrete Fourier transform (DFT) (occasionally referred to as the finite Fourier transform) ...", and removing the discussion in the 3rd paragraph? Oli Filth(talk|contribs) 22:31, 31 August 2009 (UTC)
No, it could not. The connection between the DFT and FFT is of vital importance. How does it make sense to remove the discussion of that? - ATBS —Preceding unsigned comment added by ATBS (talkcontribs) 22:35, 31 August 2009 (UTC)


Originally I misunderstood what you meant. I now think you were suggesting removing the discussion of "finite Fourier transform" from the third paragraph, not the entire third paragraph. As I said, I do not see how the term "finite Fourier transform" has anywhere near the same importance as the other things in the introduction. Why would it belong in the introduction? - ATBS
Yes, sorry, I meant only removing the discussion of "finite Fourier transform". After some brief web searches, it appears that the alternative term does still see some use. I was therefore going to suggest that Finite Fourier transform should redirect here, but it turns out that it already has its own (brief) article. That article implies that the scope of the term is somewhat wider than simply a synonym for DFT. Consequently, I think that perhaps it should be removed from the intro entirely, and perhaps just listed in the "See also" list. Oli Filth(talk|contribs) 18:43, 1 September 2009 (UTC)
That sounds good to me. Maybe when this change is made, the Finite Fourier transform article could be modified to mention potential confusion between finite Fourier transform and fast Fourier transform. 20:00, 1 September 2009 (UTC) —Preceding unsigned comment added by ATBS (talkcontribs)
It's been quite a while now - does anyone have reasoned objections to these changes being made? ATBS 19:37, 13 September 2009 (UTC)

By the way, here is an observation: the introduction contains a lot of parentheses. Adding information by tacking on parenthetical remarks may be convenient for the editor, but it often makes sentences harder to read. Perhaps someone would like to take a shot at rewriting the introduction so as to convey the same information without using so many parentheses. - ATBS —Preceding unsigned comment added by ATBS (talkcontribs) 22:40, 31 August 2009 (UTC)

Finite Fourier transform is used (by some authors) as a synonym for the DFT. It is also used by other authors for completely different meanings. Since it is an occasional synonym for DFT, it should be mentioned here — the standard practice in WP articles is for the article lede to mention all major synonyms. Since it has other meanings as well, however, finite Fourier transform needs a disambig article giving pointers to the different meanings, which is what finite Fourier transform is. Again, this is all standard WP structure. I don't see any cogent reason to change it here. Because it is an uncommon synonym these days (despite a few prominent adherents), I don't think it belongs in the first sentence of the lede, however. — Steven G. Johnson (talk) 03:38, 14 September 2009 (UTC)
As a totally separate topic, fast Fourier transforms are so crucially important for the practicality of using the DFT in applications that they must be mentioned in the lede, and hence the (often confused) relationship between FFTs and the DFT should be clearly delineated. I don't think it's a good idea to move this into another article. — Steven G. Johnson (talk) 03:42, 14 September 2009 (UTC)

Section "Spectral Analysis"

I recently tried to add the following sentence to the "Spectral Analysis" section: One more problem of spectral analysis by DFT is that the DFT shows different behaviour for deterministic signals and for noise; ... which Oli Filth reverted with the comment Reverted good faith edits by Hanspi ... Thanks for the "good faith" comment, but my edit was due to rather more than just good faith.

I guess you are aware that a sine will (ideally) give a peak in the DFT whose height is the RMS value of the sine, while for white noise it will be the noise integrated over one frequency bin. However, are you also aware that non-white noise -- noise with an autocorrelation function that is not a Dirac impulse -- can behave in quite unexpected ways? E.g., random-walk noise (the cumulative sum of white noise) comes out 3 dB higher than one would expect from the idea that the DFT just integrates the noise power in a single frequency bin. If you use, e.g., a Blackman-Harris window on random-walk noise, this will give you good precision at high frequencies but errors as big as 7 dB at low frequencies. Read the paper "Biases and Variances of Several FFT Spectral Estimators as a Function of Noise Type and Number of Samples" by Walls, Percival and Irelan, 43rd Annual Symposium on Frequency Control, 1989. I can send you Matlab files to play with, if you wish.

So while I think we could discuss whether this kind of information belongs on Wikipedia -- and I think not; one sentence plus one reference would be enough, and that's what I did -- I know two things for sure: first, I keep having to educate engineers and other scientists on how to read a spectrum made with a DFT and read an SNR out of it, because so few of them can do it; and second, my edit was not based on "good faith" but on "proficiency in application". Oli, I'd welcome your comments on this. Hanspi (talk) 18:55, 7 September 2009 (UTC)

Hi. From your edit, and from Section 1 of the reference you provided, it appeared that you were neglecting the fact that the power (or energy, depending on how you view it) of a signal or of noise is the sum of the magnitude-squared DFT over all frequency bins (Parseval's theorem), and not simply the RMS of its DFT. The fact that the average noise PSD drops by 20 dB when you increase the DFT length by a factor of 100 is exactly what one would expect.
If that's not what you were referring to when you wrote "the DFT shows different behaviour for deterministic signals and for noise", then I must have misunderstood you. I'm still not sure precisely what you are referring to; for instance, it's not clear what you mean by "the DFT just integrates the noise power in a single frequency bin". If it is that the DFT behaves in "unexpected" ways for various combinations of noise-type and window, that's not the same as stating that it "shows different behaviour", and is nothing to do with the signal being deterministic or random (AFAIK).
However, there may be a better way of stating this in the article, although I'm not sure what that would be yet, as it would need to be concise. Equally, the paper you referenced above would be a better reference than a non-published report (although I haven't read through the paper properly yet). Let me know your thoughts, Oli Filth(talk|contribs) 19:22, 7 September 2009 (UTC)
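A quick numerical check of that 20 dB figure (a minimal NumPy sketch; the sine amplitude, noise level, and sine frequency are arbitrary choices, and the frequency is picked to land exactly on a bin):

 import numpy as np

 rng = np.random.default_rng(0)
 A, sigma, f0 = 1.0, 0.1, 0.125     # sine amplitude, noise std, sine frequency (cycles/sample)

 for N in (1024, 102400):           # lengths differing by a factor of 100
     n = np.arange(N)
     x = A * np.sin(2 * np.pi * f0 * n) + sigma * rng.standard_normal(N)
     X = np.fft.rfft(x) / N         # normalized so the sine peak is independent of N
     psd = np.abs(X) ** 2
     peak_db  = 10 * np.log10(psd.max())       # stays near 10*log10((A/2)^2) = -6 dB
     floor_db = 10 * np.log10(np.median(psd))  # median as a robust per-bin noise level; drops ~20 dB
     print(N, round(peak_db, 1), round(floor_db, 1))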
Describing it as "the DFT shows different behavior" seems wrong to me. The transform is the same, its properties are the same, regardless of the inputs. Nor does the DFT "integrate" any output per se: it is the same finite matrix multiplied by a finite set of inputs, regardless of the inputs, and interpretations of "integration" are only imposed externally when you try to use the DFT of a finite window to spectrally analyze an infinite signal. However, if your input is more complicated (noise vs. a smooth periodic signal), and especially if the questions you are asking of the output are more complicated (spectral estimation of an infinite signal rather than the DFT of a finite set of points), then necessarily the way in which you use the DFT will be more complicated. The whole problem of spectral estimation of a noisy signal has been studied long before the paper you cite, however, and we have a whole article on the subject (currently rather short) that should probably be linked somewhere from here. — Steven G. Johnson (talk) 19:28, 7 September 2009 (UTC)
Dear Oli and Steven, I agree that it might be better to talk about the interpretation of a spectrum obtained using a DFT being non-trivial. I'm not happy about the paper I cited either, because the authors give a reference [3] as "in preparation" that I could not find. The problem remains, though: few people can read an SNR out of a spectrum obtained by windowed DFT. I will soon give a talk about this at a measurement symposium for industry engineers; I'll be able to discuss with the audience and may find out a way to phrase it properly and concisely.
Thanks for your interest in this discussion!Hanspi (talk) 04:27, 8 September 2009 (UTC)
Hanspi, you might want to check out a standard textbook like Discrete-Time Signal Processing by Oppenheim et al., which discusses the problem of spectral estimation from noisy signals at length and gives numerous references. — Steven G. Johnson (talk) 05:06, 8 September 2009 (UTC)
So I will look at it again; that book was my textbook in my post-graduate courses. Thanks! Hanspi (talk) 08:38, 8 September 2009 (UTC)

Please carefully explain all the terminology--all the variables. For example, explain n and k very carefully. These concepts are not entirely clear to the learner, especially those like myself who are self-taught!!!

Thanks. —Preceding unsigned comment added by 74.232.131.125 (talk) 20:33, 21 February 2010 (UTC)

I think this article is way over your head at this point in your development. This is not the place to teach basic mathematical notation. But I wouldn't be surprised if Wikipedia also has an article for that. Good luck to you.
--Bob K (talk) 02:31, 24 February 2010 (UTC)

Actual use in signal processing

It seems to be difficult to identify an actual use of the DFT in everyday consumer technology. — Preceding unsigned comment added by Jfgrcar (talkcontribs) 21:49, 12 December 2010 (UTC)

Your Wi-Fi card will use one.
Oli Filth(talk|contribs) 23:53, 12 December 2010 (UTC)


OFDM modulator/demodulator. Perhaps DTMF also. I don't recall offhand how to do it, but you can search US patents for things like DFT and FFT. That will probably turn up thousands of examples.
--Bob K (talk) 21:06, 15 December 2010 (UTC)

Application: Diagonalize a Matrix

Could someone add a subsection to the Application section about how the DFT can be used to diagonalize a Toeplitz matrix? Also, is the Toeplitz matrix the most general form of a matrix that the DFT can diagonalize? Bender2k14 (talk) 01:22, 21 April 2011 (UTC)
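As far as I know, the matrices that the DFT diagonalizes exactly are the circulant matrices, a special case of Toeplitz, so any such subsection would have to be phrased in those terms. A minimal NumPy check (the 4-element first column is an arbitrary example):

 import numpy as np

 c = np.array([4.0, 1.0, 2.0, 3.0])                      # first column of a circulant matrix
 N = len(c)
 C = np.column_stack([np.roll(c, k) for k in range(N)])  # C[j, k] = c[(j - k) mod N]

 F = np.fft.fft(np.eye(N))                               # (unnormalized) DFT matrix
 eigvals = np.fft.fft(c)                                 # eigenvalues of C = DFT of first column

 D = F @ C @ np.linalg.inv(F)                            # similarity transform by the DFT
 assert np.allclose(D, np.diag(eigvals))                 # diagonal, as claimed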

Small related comment: is there really no page on the displacement equation? Peter Stalin (talk) —Preceding undated comment added 13:18, 8 September 2011 (UTC).

Eigenvectors

From the article currently:

"The eigenvalues of the DFT matrix are simple and well-known, whereas the eigenvectors are complicated, not unique, and are the subject of ongoing research."

http://web.ebscohost.com/ehost/pdfviewer/pdfviewer?sid=e430c43e-55b7-45f2-9d82-cc3f7bbe8523%40sessionmgr15&vid=2&hid=24

Abstract of linked article:

"The eigenvalues and eigenvectors of the nxn unitary matrix of finite Fourier transform whose element is , is determined. In doing so, a multitude of identities, some of which are new, are encountered. A conjecture is advanced."

The eigenvectors are stated rather plainly in the second column of the first page.

I'm not quite sure who to believe here. I recall listening to a lecture by Gilbert Strang (back in 200X, whereas this paper was published in the '80s...) stating that the eigenvectors are currently unknown and undergoing research, and Wikipedia seems to be backing this up. However, the article I linked to seems fairly plausible.

For the moment I'll Be Bold, but leave this comment here in the discussion so someone else can double-check me.

Peter Stalin (talk) —Preceding undated comment added 13:15, 8 September 2011 (UTC).
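For what it's worth, the eigenvalue half of the quoted sentence is easy to check numerically; a minimal NumPy sketch (the size N = 8 is arbitrary) confirming that the eigenvalues of the unitary DFT matrix are fourth roots of unity:

 import numpy as np

 N = 8
 F = np.fft.fft(np.eye(N)) / np.sqrt(N)   # unitary DFT matrix, which satisfies F^4 = I
 eigvals = np.linalg.eigvals(F)

 print(np.round(eigvals, 6))              # values drawn from {1, -1, 1j, -1j}
 assert np.allclose(eigvals ** 4, 1.0)    # hence every eigenvalue is a fourth root of unity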

Domain of k?

It is not clear what the domain of k is in the definition. I suppose it is the integers 0<=k<n -- in fact I am almost certain -- but it is not clear from the outset (which to me is crucial in a definition) — Preceding unsigned comment added by 184.161.209.68 (talk) 03:41, 30 January 2013 (UTC)
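For reference, with the convention I believe the article (and most texts) use, where N is the transform length, the definition with the domain made explicit reads:

 X_k \;=\; \sum_{n=0}^{N-1} x_n \, e^{-2\pi i k n / N}, \qquad k = 0, 1, \ldots, N-1,

so k, like n, runs over the integers 0 through N-1; the same formula evaluated at other integer k simply repeats with period N.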