Talk:Hierarchical temporal memory


Similarity with Bayesian Models

This article currently describes HTMs as modeling different aspects of the neocortex as a Bayesian model (17-10-2007). Based on what I've read of HTMs, I think this is actually backwards. The model is similar to a Bayesian model, but differs in a number of important ways. Further, it's backwards because the model was built by starting with neuroscience and trying to fit a model to the data, rather than starting with a computer-science model and describing how it is similar to the brain. (See Bionics for more information on starting with nature.)

Does anyone have any further information on the overlap/differences with Bayesian networks? —Preceding unsigned comment added by Gokmop (talkcontribs) 17:58, 17 October 2007 (UTC)

Paste

This is obviously a paste of Numenta's marketing materials. Furthermore, it reads as an attempt to legitimize the product, probably on behalf of Numenta, rather than a contribution to education or reference.

That is not true - the article is not an attempt to legitimize the product. Moreover, the machine learning community is smart enough to know that only prediction scores, not worthless marketing materials, legitimize products. You are welcome to improve the article to enhance its NPOV. --Amit 21:40, 2 March 2007 (UTC)

Copyright

This article copies whole paragraphs out of Numenta's copyrighted white paper and therefore needs major editing! quota 09:16, 24 July 2006 (UTC)

  • A request was made on July 25, 2006 to Phillip B. Shoemaker, Director, Developer Services, Numenta, to relicense the HTM Concepts paper under the GFDL. No response was received. --Amit 08:45, 7 August 2006 (UTC)
I will be editing the article further. --Amit 08:45, 7 August 2006 (UTC)
We are waiting for the edit with bated breath! (Seriously.) quota (talk) 19:29, 30 January 2008 (UTC)

Possible POV statement

"may ultimately equal the importance of traditional programmable computers in terms of societal impact and financial opportunity." is a ridiculous claim for something that hasn't even been implemented yet!

The statement has now been removed from the article. --Amit 20:50, 20 September 2006 (UTC)

Out of date

The research release is out and the article at this time is seriously out of date. --Amit 04:16, 5 March 2007 (UTC)

Article

This article is completely POV. Hawkins is a rich guy, and no-one feels like telling him that his stuff is crap. He had a few smart people working for him at some point, but when they told him his ideas were half baked and not new, he just fired their asses.

Here is what many people in machine learning and computer vision think about Hawkins's stuff:

  • It's way, way behind what other people in vision and machine learning are doing. Several teams have biologically-inspired vision systems that can ACTUALLY LEARN TO RECOGNIZE 3D OBJECTS. Hawkins merely has a small hack that can recognize stick figures on 8x8 pixel binary images. Neural net people were doing much more impressive stuff 15 years ago.
  • Hawkins's ideas on how the brain learns are not new at all. Many scientists in machine learning, computer vision, and computational neuroscience have had general ideas similar to the ones described in Hawkins's book for a very long time. But scientists never talk about philosophical ideas without actual scientific evidence to support them. So instead of writing a popular book with half-baked conceptual ideas, they actually build theories and algorithms, they build models, and they apply them to real data to see how they work. Then they write a scientific paper about the results, but they rarely talk about the philosophy behind the results.

It's not unusual for someone to come up with an idea they think is brand new and will revolutionize the world. Then they try to turn those conceptual ideas into real science and practical technologies, and quickly realize that it's very hard (the things they thought of as mere details often turn out to be huge conceptual obstacles). Then, they realize that many people had the same ideas before, but encountered the same problems when trying to reduce them to practice (which is why you didn't hear about their/your ideas before). These people eventually scale back their ambitions and start working on ideas that are considerably less revolutionary, but considerably more likely to result in research grants, scientific publications, VC funding, or revenues.

Most people go through that "naive" phase (thinking they will revolutionize science) while they are grad students. A few of them become successful scientists. A tiny number of them actually manage to revolutionize science or create new trends. Hawkins quit grad school and never had a chance to go through that phase. Now that he is rich and famous, the only way he will understand the limits of his idea is by wasting lots of money (since he obviously doesn't care about such things as "peer review"). In fact, many reputable AI scientists have made wild claims about the future success of their latest new idea (Newell/Simon with the "General Problem Solver", Rosenblatt with the "Perceptron", Papert, who thought in the '60s that vision would be solved over the summer, Minsky with his "Society of Mind", etc.).

No scientist will tell Hawkins all this, because it would serve no purpose (other than pissing him off). And there is a tiny (but non-zero) probability that his stuff will actually advance the field. At any rate, he seems to have donated money to fund a university research group in California. He probably can't advance science, but his money certainly can. —Preceding unsigned comment added by 74.98.253.40 (talkcontribs)

It doesn't matter to me. Even if Hierarchical Temporal Memory is a rehash of old ideas, it's based on some very good ones. It's also an existing, operational implementation, which makes it very important in my eyes - otherwise Wired magazine wouldn't have written an article about it. You may be right that the article is either marketing or copy-and-paste from the Numenta homepage, but that just means we need to rewrite the article. Recognizing 8x8px images is actually impressive, because that happens at the lowest layer while the whole picture is recognized at higher layers - this is a normal approach. What you should also note is that it works with moving patterns. I find that very impressive, because it might be combined with other approaches, such as CBCL's (MIT) technique for doing the same with still images. Note that in the human brain there is a cortical area for still images and one for moving images, so these two techniques might be an interesting combination. There is also a project for improved optical sensors at the IOO of TU Berlin that mimics the human eye, which might be used in this application field too. Combining such techniques can lead to very exciting applications, so I wouldn't dismiss any of them.
MovGP0 00:37, 11 March 2007 (UTC)

There are really too many ad-hominem attacks in the above comments to take them too seriously. I doubt anyone else has demonstrated the ability to detect a 2D helicopter flying across a screen in a noisy environment with only 3 layers of such simple nodes. Hawkins does not claim that he has presented anything new. He is just pushing in a direction that combines various existing elements in a fairly specific way, based on his overall views of how the brain works. 65.81.139.105 (talk) 18:56, 31 January 2009 (UTC)

IEEE Spectrum magazine article

There's an article on it: [1] 41.241.197.221 19:01, 5 May 2007 (UTC)

Copyvio

Almost this entire article was a copyright violation from the texts concerning the subject. I have deleted the article and replaced it with a stub that contains all of the non-copyvio text I could salvage. Please do not re-add any of the copyvio text; short excerpts from copyrighted works are acceptable for the purpose of critical commentary, but anything else will be removed on sight. Thanks --Spike Wilbury 15:05, 17 May 2007 (UTC)

HTM vs neural networks

If anyone is familiar enough with this, could you throw in something on how HTM differs from traditional artificial neural networks? It sounds exactly the same, with the exception of the temporal aspect perhaps (which I'm sure has also been tacked onto existing nn applications), yet there is no mention of neural networks anywhere. —The preceding unsigned comment was added by 24.68.157.4 (talk) 00:55, August 22, 2007 (UTC)

Where's the beef?

Whatever the merits or otherwise of HTM, the article as it now stands contains almost no factual material. (I understand that it may have been removed due to copyright infringement, but paraphrasing is usually acceptable.) Without any substantive content, the article is unworthy of Wikipedia and as such should be withdrawn. 84.9.75.24 (talk) 21:14, 17 November 2007 (UTC)

Lowercase

This article starts off using lowercase hierarchical temporal memory yet the title is uppercase. I think this should be consistent, so is this a generic term or something that has been given a title? Tyciol (talk) 07:01, 27 September 2009 (UTC)

Too vague and possibly not notable

The article is too vague: it describes neither the mathematical details of the model nor the algorithms. References to experimental results are also missing.

Moreover, the claims that compare the HTM model to the neocortex are unsubstantiated. Overall, from reading this article, it appears that the HTM model is essentially the work of a single person (Jeff Hawkins) and his employees, with little recognition, if any, by independent researchers.

I wonder if the subject is notable enough to warrant a Wikipedia article. If it is, then the article must be expanded and its references should be improved. — Preceding unsigned comment added by 79.22.167.27 (talk) 06:32, 11 July 2011 (UTC)

How to represent an XOR gate with HTM?

Hierarchical temporal memory seems very promising, but I need to know that it can learn to XOR two inputs before choosing to implement it. How would it go about doing that? All implementations of an XOR gate using normal feed-forward neural networks I have seen so far have used synapses with negative weights (see the sketch below). But in HTM all synapses have weights that are either 0 or 1, and can't be negative. So how would you go about representing an XOR gate with an HTM-style neural network? —Kri (talk) 19:33, 15 February 2014 (UTC)
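
For concreteness, here is a minimal sketch of the kind of feed-forward XOR implementation the question refers to, with hand-set weights and thresholds (the particular values are illustrative, one of many valid choices, not from any source); the NAND-like hidden unit is where the negative weights come in:

  # Classic two-layer feed-forward XOR with hand-set weights (illustrative).
  def step(x):
      return 1 if x > 0 else 0

  def xor_mlp(a, b):
      h1 = step(a + b - 0.5)      # OR-like unit: fires if a or b is on
      h2 = step(-a - b + 1.5)     # NAND-like unit: needs negative weights
      return step(h1 + h2 - 1.5)  # AND of the two hidden units gives a XOR b

  for a in (0, 1):
      for b in (0, 1):
          print(a, b, "->", xor_mlp(a, b))  # outputs 0, 1, 1, 0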

AFAIK an XOR would be implemented by having a higher overlap for the joint representation of the two inputs (A and B together) than either individual representation would have on its own (an SDR for just A or just B). HTMs seem to be pretty good at forming partial and 'better-fitting' representations of an input, so you'd likely see a representation for A, for B, and for A+B. Representations take residence via overlap, i.e. how much of the input space they cover. The A+B representation is simply more active (more specific) than the individual A and B ones. There's also no real logic in the HTM/CLA as-is, so you're probably going to deal with the anomaly score, which is based on occurrence/coincidence: if you fire A together with B, and it hasn't happened that way before in sequence, it will look anomalous. A sketch of the overlap idea follows below. Colinrgodsey (talk) 19:45, 9 June 2015 (UTC)
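
A rough sketch of the overlap idea described above (illustrative toy code, not actual NuPIC/CLA; the SDRs and bit indices are made up for the example). Representations are modeled as sets of active bit indices, all "weights" are implicitly 0 or 1, and the joint A+B representation wins on overlap when both inputs are active:

  # Toy model of SDR overlap; not NuPIC code.
  def overlap(sdr, input_bits):
      # Number of active input bits this representation covers.
      return len(sdr & input_bits)

  sdr_a  = {1, 4, 9, 12}    # hypothetical learned representation for A
  sdr_b  = {2, 5, 9, 14}    # hypothetical learned representation for B
  sdr_ab = sdr_a | sdr_b    # representation for A and B occurring together

  input_bits = sdr_a | sdr_b  # both A and B are active in the input
  for name, sdr in (("A", sdr_a), ("B", sdr_b), ("A+B", sdr_ab)):
      print(name, overlap(sdr, input_bits))  # A+B: 7, A: 4, B: 4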