Talk:Artificial general intelligence/Archive 1


Genuine strong AI

"Genuine" strong AI is no different from strong AI. The piece, now removed, supporting "genuine" strong AI was a naked point of view statement by a proponent of strong AI put in under another heading to publicise their opinion. This is an encyclopedia, not a forum for particular viewpoints. Please read the section on the 'Philosophy of Strong AI' in this article before launching into diatribes that are just a statement of belief that the brain must be a digital computer. It will be found that the strong AI viewpoint is already covered in the context of philosophical discussion of this issue. Loxley 10:21, 18 Apr 2005 (UTC)

Move section

Shouldn't the section "Philosophy of strong AI and consciousness" be moved to Philosophy of artificial intelligence? --moxon 15:05, 22 October 2005 (UTC)

I wrote an article on strong AI, Strong Artificial Intelligence, not realising this one existed. Looking at it, it seems that this one is about the philosophy whilst the one I started is concerned with the technology involved, and a detailed summary of existing futurist predictions. I'll change the title to reflect this if someone shows me how. Crippled Sloth 15:11, 27 February 2006 (UTC)

It says I'm too new to move an article. Could someone please move it to Strong AI Technology? Cheers. Crippled Sloth 19:19, 27 February 2006 (UTC)

Done Crippled Sloth 12:37, 3 March 2006 (UTC)

What on earth is the notion of "Utopian Marxism" doing on this page? The notion does not have rigorous precedent; it reads as if someone had an idea, and decided to coin a phrase, applying two words (one would have been bad enough) with enough preloaded connotations and meanings to bury a legion of students under their weight. 3-14-06

Artificial Intelligence != Strong AI

This was recently added to the article:

In the true semantic sense, the term "Artificial Intelligence" is really the same thing as what we now call "strong AI", and weak AI is just a recreation of one small, narrow ability which itself helps create true intelligence.

But this is controversial, to say the least. Artificial intelligence encompasses many things besides Strong AI, so it's wrong to say they're "really the same". It's very POV to say that the "true meaning" of Artificial Intelligence is this one specific thing - maybe you can say that's its original meaning, but if it really meant that then I think the page on AI would say so. Also, to address the second part of that added sentence, weak AI doesn't necessarily recreate an ability, nor does it necessarily help create true intelligence. --Tifego 21:54, 15 March 2006 (UTC)

I added that section, and I added "In the true semantic sense" before "AI == Strong AI" to distinguish what researchers and experts in the field think AI means as opposed to what someone unfamiliar with the field is likely to take "intelligence" to mean. If you know zero about AI, you are unlikely to consider something like an 'expert system' or 'Bayesian networks' to be actual intelligence, even if you understand what they are. The paragraph is merely meant to fill in the uninformed that the Strong AI article is what they are looking for if they want to read about human-like intelligence and not the more obvious and intuitive plain AI article. It is not POV; just look up the dictionary definition of "intelligence" and compare it with the very first bit in this article describing strong AI: "In the philosophy of artificial intelligence, strong AI is the claim that some forms of artificial intelligence can truly reason and solve problems". Thus, in the semantic sense, artificial intelligence is what we now call Strong AI; however, it is not what people knowledgeable in the field understand plain AI to mean. Thus, I added the clarification paragraph.--Jake11 00:47, 16 March 2006 (UTC)
Well it's worded confusingly, then. "Semantic" means "meaning", so it sounds like you're saying that AI == Strong AI. You're saying that by "In the true semantic sense" you meant "To those unfamiliar with the field"? --Tifego 00:55, 16 March 2006 (UTC)
I will think about how to re-word it, but I think it needs to be stated that what experts in the AI field regard AI as is not what a layman would expect AI to mean, and more importantly is categorically NOT related to the word "intelligence" more than Strong AI is. In other words, going by the definition of "intelligence", strong AI should be the default meaning. Obviously this isn't the case, but it needs to be clarified.--Jake11 01:12, 16 March 2006 (UTC)
Also, your "clarification paragragh" replaced what appears to me to be a perfectly well-written paragraph with something that doesn't cover the same points (although it does cover some new ones) --Tifego 01:01, 16 March 2006 (UTC)
The paragraph I replaced was not accurate. It stated that the difference between weak AI and strong AI was that strong AI achieves consciousness, as if that were the sole difference. This is in contrast to the very first paragraph of the article, which states that Strong AI is distinguished from weak AI because it can achieve real problem-solving capabilities, whereas the paragraph I replaced said Weak AI is not necessarily weaker at problem solving than strong AI.--Jake11 01:17, 16 March 2006 (UTC)
That is only your opinion that it was inaccurate. It's my opinion that it was accurate the way it was before, albeit somewhat incomplete, and that you're ignoring other relevant statements of the first paragraph of the article in your justification of why it was inaccurate. Anyway, I'll see if I can combine the two in a way that covers both sides... --Tifego 02:13, 16 March 2006 (UTC)
Hold on a sec. It's not my opinion that it was inaccurate. The replaced paragraph directly contradicts the first two paragraphs of this article, which are partially defined by the man who actually coined the term Strong AI. I will post here a side-by-side comparison.--Jake11 02:31, 16 March 2006 (UTC)
Regardless, I've made my attempt to combine the two sides. Sorry to change the wording so much from your last edit, but tell me if you have any specific objections to what it says now. --Tifego 02:33, 16 March 2006 (UTC)
Here is the first paragraph after your revision: "In the philosophy of artificial intelligence, strong AI is the claim that some forms of artificial intelligence can truly reason and solve problems;". Now here is a highlight from the Weak AI paragraph: "The distinction is philosophical and does not mean that devices that demonstrate weak AI are necessarily weaker or less good at solving problems than devices that demonstrate strong AI.". Note the two bold parts. If the first paragraph's claim is that Strong AI is what it is because it can problem-solve and reason, what is the point in later on saying that Weak AI can be just as good at reasoning and problem solving as strong AI? Where is the distinction?
The distinction is that Weak AI, while it can be just as good at problem solving, cannot solve a wide variety of problems. It is a fact that Weak AI can be as good or better at solving certain specific problems than any human intelligence is or ever has been at solving those problems. --Tifego 02:54, 16 March 2006 (UTC)
I added 3 more words to the Weak AI section to make the distinction more clear. --Tifego 03:02, 16 March 2006 (UTC)
Also, I don't get the following quote:
"What distinguishes strong from weak AI is that in strong AI, the computer becomes capable of true understanding (in the same way that a conscious human mind is), whereas in weak AI, the computer is merely an (arguably) intelligent, problem-solving device. The distinction is philosophical and does not mean that devices that demonstrate weak AI are necessarily weaker or less good at solving problems than devices that demonstrate strong AI."
What is the distinction between human "true understanding" and an intelligent program which can also understand and solve problems? I am going to make a minor change to the AI == strong AI paragraph to reflect my original meaning --Jake11 02:45, 16 March 2006 (UTC)
OK. That sentence was a little wordy (said "literal" twice, had extra words like "actually") so I trimmed it down, but hopefully still kept your meaning. --Tifego 03:05, 16 March 2006 (UTC)

I modified the original text in the Weak AI section a bit, and also removed the part about the strong/weak AI distinction being philosophical, as I see no indication that that is the case. —Preceding unsigned comment added by Jake11 (talkcontribs)

What is the justification for removing the part about studying the "behavioristic and pragmatic view of intelligence"? "Behavioristic" and "pragmatic" are both very appropriate descriptions of the typical nature (or goal in creating) of weak AI programs, and I don't see a reason for removing so much emphasis from the studying of such AI. --Tifego 03:46, 16 March 2006 (UTC)
Those terms are far, far too broad and non-specific to use as a primary description of Weak AI. 'Behavioristic'? In what sense, in what context? And how can that not equally apply to Strong AI? As well, pragmatic: 'dealing or concerned with facts or actual occurrences; practical'... This is so general that I wonder whether someone didn't mean it for another article. Again, this is too weak to use as a primary description for Weak AI; the one up now suits the purpose much better.--Jake11 03:51, 16 March 2006 (UTC)
Pragmatic in this context means "dealing with problems in a practical way", i.e. most Weak AI programs are created for the practical reason of dealing with specific problems rather than with the intent of contributing to an understanding of true intelligence. Anyway, I agree the new description is better, although I don't like the way "study" appears to be applying only to the "tasks", because weak AI is also used in an attempt to study Intelligence in general. --Tifego 04:07, 16 March 2006 (UTC)
I don't object, but I think a way of wording needs to be found that doesn't put too much emphasis on using Weak AI to study strong AI, as I don't think it happens much intentionally. Perhaps it's too ambiguous to state either way. --Jake11 04:14, 16 March 2006 (UTC)
It happens intentionally at educational institutions, if nowhere else. But I think the statement in question is fine, it's not overly misleading and you might be right that the studying of it doesn't need to be emphasized any further. --Tifego 05:02, 16 March 2006 (UTC)
I re-deleted the "true human understanding" bit from Weak AI. That terminology also does not appropriately distinguish between Weak and Strong AI. —Preceding unsigned comment added by Jake11 (talkcontribs)
Edit war! No, I just touched it up a bit, realized that the part being argued over was off-topic in its section anyway, and removed it, so hopefully it's fine now. --Tifego 05:05, 16 March 2006 (UTC)

Coulda, Shoulda, Woulda

Re: Text deleted by reason of being speculative fiction:

Utopian marxism
Strong AI, coupled with improved robotics, could potentially outperform humans in any kind of work. Once affordable, such machines would mean that no man would ever have an incentive to exploit another. Strong AI could theoretically run the entire economy without human assistance (complete mechanisation). The profit motive would become unnecessary and the fruits of machine labour could be distributed according to need, eliminating both work and poverty.
See Iain Banks' The Culture, a fictional society set in many of his science fiction novels, for fiction based on this outcome.

JA: WP is not the place for this brand of creative writing. Best to discuss authors and their novels at the articles dedicated to them. Jon Awbrey 02:48, 20 March 2006 (UTC)

I don't understand the logic of this decision. ALL of the applications section is speculation, as an actual implementation of Strong AI is virtually non-existent in the world. What is more speculative about the Utopian Marxism than, say, the section about "The Arts"?--Berger 02:55, 8 April 2006 (UTC)
I agree. What is the difference between the potential social impacts of strong AI vs. the impact on the Arts? It's speculation, but strong AI itself is speculation. I think this statement should be replaced.

Parallelism vs speed

I think it may be important to note that the idea of experiencing time "slower" relative to human perception is debatable.

The mind in question may be able to perform more calculations within a given time period, but that does not necessarily imply that it will experience time more slowly. Yes, time is a relative measurement, but isn't it a construct of perception?

In other words, time isn't necessarily measured relative to your cognitive thought processes, but rather, relative to your perceptive processes.

So, given that human eyes, ears, nerve endings, etc., are limited to specific amounts of input per second, does that not imply that they will, at least in part, define perception of time?

Therefore, an artificial mind would have to define time based on the limitations of its input devices, correct?

I'd be willing to accept that time is defined jointly by both cognitive and perceptive thought processes, but I truly think both are required. Without an input device to which the mind can compare the speed of cognitive processes, there truly is no perception of time.

Furthermore, if time were defined solely by cognitive processes, then an artificial mind would not be a "mind" when it is not performing calculations. There would be no time. For instance, when a person sleeps, the mind shuts down (mostly) both cognitive and perceptive processes. It appears that time doesn't pass.

Time is a good article that brings up interesting questions about the subject. This excerpt may be of interest:

"Time is currently one of the few fundamental quantities These are quantities which can not be defined via other quantities because there is nothing more fundamental known at present. Thus, similar to definition of other fundamental quantities (like space and mass), time is defined via measurement. Currently, the standard time interval (called conventional second, or simply second) is defined as 9 192 631 770 oscillations of a hyperfine transition in the 133caesium (Cs) atom."

--Cwiddofer 23:02, 3 May 2006 (UTC)

(The following is my belief:) All of this is true. However, the distinction between "cognitive and perceptive thought processes" does not exist. A cognitive process is anything that goes on in the mind, including perception. No doubt certain cognitive processes affect perception and some don't, and certain cognition affects perception more than other cognition. In any case I don't believe input speed determines maximum or minimum perception speed, because the input only relays information to the artificial brain; the brain can still be made to process very little or very much for each input signal. Therefore I think the maximum and minimum perception speed would be based on how fast processing is done within the brain itself --Jake11 13:29, 14 May 2006 (UTC)
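To make this point concrete, here is a toy model (entirely my own illustrative framing, with made-up numbers, not anything from the discussion): if subjective time is counted in "ticks" of processing and the number of ticks spent per input frame can vary, then the input rate alone does not pin down perceived time.

  # Toy model, not from the discussion: subjective duration counted in
  # processing "ticks". All numbers are made up for illustration.
  def subjective_seconds(external_seconds, input_hz, ticks_per_frame,
                         baseline_ticks_per_second=60.0):
      # Total processing ticks spent during the external interval,
      # expressed relative to a baseline perceiver (a ~60 Hz guess).
      ticks = external_seconds * input_hz * ticks_per_frame
      return ticks / baseline_ticks_per_second

  print(subjective_seconds(1.0, input_hz=60.0, ticks_per_frame=1))     # 1.0: human-like
  print(subjective_seconds(1.0, input_hz=60.0, ticks_per_frame=1000))  # 1000.0: the world seems slow

On this toy reading, the conclusion above corresponds to varying ticks_per_frame while holding input_hz fixed.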

godlike powers

"Arguably the most important thing is to equip the Seed AI (which is likely to gain godlike powers) with only the desire to serve mankind"

Ontological proof, anyone?

remove? --Froth 02:04, 5 May 2006 (UTC)


I certainly wouldn't say "godlike" because that would require a redefinition of the concept of god. I wouldn't even say "powers". It would be better just to say something like "unimaginable intelligence".
I say either change or remove it. --Cwiddofer 04:37, 5 May 2006 (UTC)

"random spashes of paint"

"The Arts

A strong AI may be able to produce original works of music, art, literature and philosophy. However, no clear standard exists for what qualifies as art in these fields, indeed many works of modern art are just random splashes of paint, thus a random creation process could be developed even on conventional computer systems."

I think this section shows an opinion about modern art and is not appropriate for an encyclopedia. The artistic value of anything is debatable, and not everyone would agree that some modern art pieces are just random splashes of paint. They still represent a level of human expression and communication that could not be replicated by a "random creation process". Kikkoman 01:22, 14 May 2006 (UTC)

"Computer simulating human brain model ... Futurist Ray Kurzweil estimates 1 million MIPS"

This definitely looks like a very low figure, orders of magnitude too low -- lower than most reasonable estimates of the brain's computing power (and to simulate a processor in hardware transistor by transistor you'd need a lot more power; the same, to some extent, applies to other simulations).

1 million MIPS = a million million operations per second = 10^12, that is, 1 trillion operations per second. Blue Gene does over 280 TFLOPS = 280×10^12 floating-point operations per second. By this estimate, it should be able to simulate about 280 human brains.

I just noticed that this futurist's estimate is 10^16 operations per second, which makes more sense.

I'm not sure about the 2030 claim, though.

Did the poster mean 1 petaflop? Tdewey 03:43, 30 October 2006 (UTC)
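Spelling out the arithmetic in this thread (the figures are the ones quoted above, nothing more authoritative):

  # Back-of-the-envelope check of the figures quoted in this thread.
  BLUE_GENE_FLOPS = 280e12   # ~280 TFLOPS, as quoted above

  low_estimate = 1e12        # "1 million MIPS" read as 10^12 operations/s
  high_estimate = 1e16       # the corrected Kurzweil figure, 10^16 operations/s

  print(BLUE_GENE_FLOPS / low_estimate)   # 280.0 -> ~280 brains (implausibly many)
  print(BLUE_GENE_FLOPS / high_estimate)  # 0.028 -> ~3% of one brain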

Pound sign.

The price for an artificially intelligent computer by 2020 is listed in British pounds. Isn't the American dollar the standard world currency? Pure inuyasha 00:02, 11 July 2006 (UTC)

Human mind

"Firstly it relies on the assumption that the human mind is the central nervous system and is governed by physical laws."

Come on. Surely this doesn't need to be said. Anyone with a reasonable grip on reality surely dismisses mind/body dualism. Only the most fervent bible thumper would feel the need to add this. Of course, with the worryingly massive distribution of religious fools in the world, it wouldn't surprise me if a weak and stupid member of this fraternity of self delusion managed to infiltrate our temple to knowledge. Idiots like that don't deserve to be literate, as any sense they read will be dismissed, and anything they write will be a pollution. 84.64.168.185 01:25, 7 August 2006 (UTC)

I disagree. I'm not a "Bible-thumper" or even a Christian, and I'd like to think that I have a reasonable grip on reality. But I don't want to give a knee-jerk dismissal of the idea that some aspects of the mind are nonphysical (metaphysical?). Software doesn't have a physical existence. You can describe how it works, but it essentially exists solely in the world of Ideas. The same may be true of the mind and therefore any form of strong AI. I would however argue that the end should be changed to "...is governed ONLY by physical laws." Surely the brain (the hardware of the mind) is governed by physical laws. But the mind (the software of the brain) can be governed by a completely different emergent set of laws.
Think of it this way: a computer is governed by physical laws. The electrons behave exactly as real physics predicts. But is MS Word or Half-Life or Sim City governed directly by the same physical laws? No (though some may try to emulate physical laws); they are governed by the laws of the operating system, the OS is governed by the hardware, and the hardware by physics. In the same way there could be aspects of the mind that are indirectly influenced by physical laws, yet exist on a different plane of rules, just as software obeys different rules than hardware. Even if we could make an exact atom-for-atom copy of a human brain, would we be able to put memories and a personality into this brain? I don't think so. These are a system built on the brain, not the brain itself. One could have a non-Christian, non-dualistic interpretation of this statement. I think it should stay. afinlayv 5:37 8-10-06
According to The Harris Poll, 70% of Americans believe in survival of the soul after death. -- Schaefer (Talk) 12:52, 11 August 2006 (UTC)
I would propose something along the lines of the following:
Firstly it presumes that the human mind is created only by the aggregate functioning of the central nervous system whose parts are governed by known physical laws.
I think this covers both the soul issue as well as the we-might-not-fully-understand-the-physical-processes-that-create-the-mind issue (e.g. neurons may do more than we think they do). With respect to the original poster, as one of the 70% in the poll, I consider the soul problem the biggest issue in AI research (and I'm not the only one [1]), but I consider the unknown-physical-law problem almost as serious. We're making predictions on the development of strong AI based on only a partial understanding of the mind. Note that I don't consider the lack of a soul a showstopper. For example, Frederick Pohl suggested in Omni all the way back in 1993 that human intelligences might be transferable to computers -- imbuing them with souls. [2] Greg Bear's fiction has similar suggestions. Now, as I'm also a fan of Tracy Kidder, I don't think computers with their own souls are entirely out of the realm of possibility Tdewey 15:43, 1 November 2006 (UTC)


Need a discussion on emergent behavior here as an alternative method for "creating" strong AI. Tdewey 03:45, 30 October 2006 (UTC)

Convention for capitalization of S/strong AI and W/weak AI?

I capitalized a "strong AI" term to "Strong AI" midway in the article, but then noticed the term being spelled without capitalization earlier. Is there a convention for the proper spelling here? 61.222.31.123 14:09, 30 August 2006 (UTC)

A quick bout of Google shows that the lower case is more common -- capitalization outside of titles and beginnings of sentences happens on only about 20% of pages. Personally, capitalizing it seems like a real error to me, since I would much prefer this page being about strong and weak AI in the technical sense rather than the quasi-religious. (Why is the web page about the technological singularity so much better than any of the real AI pages? :-( --Jaibe 13:07, 31 August 2006 (UTC)
Not that strong AI wouldn't have significant social and philosophical consequences, but I'd be a lot happier if extensive discussions of these had their own pages.--Jaibe 13:11, 31 August 2006 (UTC)

Human Map

It's my theory that the human mind is an infinite number of layered matrices containing possible decisions that are determined through random probability. As little as I know on the subject, I am familiar with Anthropology, and further, liminality. Each time the human mind is confronted with certain conflict, he decides how to react by randomly choosing a decision available to him. The decision itself is also influenced by its favorability, which relies on the individual. Of course, all matrices contribute decisions, thoughts, conflicts, and the alteration of other matrices. These as well are also influenced by external conditions.

I'm curious to know how such a thing might be represented in the form of actual data. If my idea is actually original, and if it is even fathomable in this day and age, how long might it take to even come up with the foundation of this program? Of course, a child, before birth, starts out with nothing, and by reacting to the influences around its shell it begins to grow a mind. As soon as it is born the child will evolve even further. Such as when a child learns to walk: he does so through experimentation and discovery, much like a random selection of "this works" and "feet really help". If it's possible to create this data, or at least simulate it in such a way that the program may develop itself, it might be a breakthrough.

Again, I'm not sure if this idea is original or not, so please let me know. Since it's my first contribution, then I'd like to know how I'm doing.

Look, Wikipedia isn't really the place for original research, speculations, and thoughts; Wikipedia is about noting the present state of established knowledge about a topic.
That said, your idea, if I understand it well enough, isn't totally dissimilar to the idea of neural networks. Neural nets work well for some applications, but are limited in the maximum size they can have before the "training time" it takes for the net to learn the right behaviour becomes very, very large. --Robert Merkel 07:01, 1 September 2006 (UTC)
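For readers unfamiliar with the neural networks mentioned above, here is a minimal sketch in Python/numpy (the 2-4-1 architecture, learning rate, and iteration count are arbitrary illustrative choices, not anything from the discussion): a tiny network trained by gradient descent to learn XOR. The "training time" Robert Merkel refers to is the loop below, and it grows quickly as networks get larger.

  # Minimal feed-forward neural network trained on XOR by backpropagation.
  # A toy illustration only; all sizes and constants are arbitrary.
  import numpy as np

  rng = np.random.default_rng(0)
  X = np.array([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
  y = np.array([[0.], [1.], [1.], [0.]])

  W1, b1 = rng.normal(0, 1, (2, 4)), np.zeros(4)   # input -> hidden
  W2, b2 = rng.normal(0, 1, (4, 1)), np.zeros(1)   # hidden -> output

  def sigmoid(z):
      return 1.0 / (1.0 + np.exp(-z))

  for step in range(20000):                # the "training time" lives here
      h = sigmoid(X @ W1 + b1)             # hidden activations
      out = sigmoid(h @ W2 + b2)           # network output
      d_out = (out - y) * out * (1 - out)  # backpropagated error terms
      d_h = (d_out @ W2.T) * h * (1 - h)
      W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
      W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)

  print(out.round(2))  # should approach [[0], [1], [1], [0]] for most seeds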

Weak AI != Strong AI

Why on earth does Weak AI redirect to Strong AI, wherein it has a subsection?

This is as if White redirected to Black, where White had a subsection.

Surely they should be separate, and refer to each other...--Kaz 20:04, 7 September 2006 (UTC)

'Vastly superior'

Quoting from the article: "... social interaction or perhaps even deception, two behaviours at which humans are vastly superior to other animals."

I have to agree that humans are superior to animals when it comes to deception. (This is not the same as agreeing that deception is a "superior" trait; quite the contrary.)
But the idea that human social interaction is superior ... let alone "vastly superior" ... to that of animals is a mistake along the lines of "the sun orbits the earth". We seem to be the only animals with greatly developed egos, and this exaggerated development not only puts us at a dangerous distance from the natural order (which we are often eager to violate in our haste to 'get ours') but also leads us to underestimate the wisdom and antiquity of the natural order ... which we are only just -beginning- to appreciate.
So I'm uncomfortable (but not surprised) to see that this rather 19th-century concept of "human" superiority to "animals" and other "savages" is still propagating itself in an article devoted to AI. The original attitude about AI ... "easy peasy" ... was founded in the same hubris ... and I'd bet that AI will not be uncovered (it already exists) until the blindness created by that hubris abates. Sad that we may have to be faced with an imminent destruction we've created before we can see that. -- Twang 9/7/06

While I'm familiar with the sort of affected humbleness you're expressing, the criteria under which humans are not superior to other animals are very rare and trivial. If the measure of superiority were size, then many animals are superior to humans...and yet humans can see smaller things than any animal, can utilize smaller spaces, et cetera. Humans, thanks to their minds, are faster than any other living (known) thing (of terrestrial origin). They can travel farther. They can, with sufficient warning, survive more than any other. Given time, humans are already on the path to spread through and "rule" the universe (though it's unlikely that they'll discover it free of existing rulers), where any other species would have to depart radically from its current course in order to ever evolve such abilities. Humans can even live better with "nature" than any other creature, they just choose not to.
No, humans are superior by almost any worthwhile measure.
On the other hand, speaking with certainty of AI as secretly already existing is a very crackpot sort of thing to do. The odds are that you do not have firsthand knowledge of the existence of such an AI, but are simply guessing based on the same clues anyone else can see. And to leap from guessing to certainty is irrational, to the point of being a bit out of touch with reality. But even if you had special knowledge which confirmed this, the fact that people will not realize you have special knowledge will pretty much guarantee them seeing you as a crackpot, which means that you are a crackpot, insofar as you're blurting out things any socially rational person would understand to seem crazy. --Kaz 20:22, 8 September 2006 (UTC)


Prospective applications

The listed prospective applications seem almost trivial to me. Art? Sure, that's a possible application, but I am sure there are more ground-breaking applications (as any futuristic sci-fi movie about robots will tell you) 72.130.177.246

Made a few edits to Computer simulating human brain model

I made some edits to the section on computer simulations of human brains, since the section didn't really seem to explain what the idea is or how it might work. I didn't try to edit the existing content much (e.g. references to Kurzweil), but it did seem to me that the section (and indeed, the entire article) as it stands has a lot of highly speculative pondering and relatively little about concrete subjects. For example, the whole question of "how fast must a computer be to simulate a human brain" has been the subject of much debate for a long time. Surely it deserves more than an offhand reference to Kurzweil? Nadalle 19:34, 16 October 2006 (UTC)

Synthetic Intelligence

This section regarding deletion/merger of the SI page has been moved here from the SI discussion page. It holds arguments also relevant to the definition of strong AI --moxon 15:34, 7 December 2006 (UTC) SI is the engineering term; artificial intelligence is the scientific term. Both terms refer to the study and development of intelligent devices or mechanisms, but with different objectives. Science has the main objective of analysis, and so artificial intelligence work seeks to understand and duplicate naturally existing intelligence. Engineering has the main objective of synthesis to solve problems, and so synthetic intelligence duplicates and develops aspects of intelligent behaviour as a means to solve problems; this may generate different systems with unique behaviour not found in naturally existing intelligence.

Does this mean that the terminology should change as we move between analysis and application? As for the ruby: the distinction between 'artificial' and 'synthetic' as applied to intelligence is purely philosophical since there is no categorical definition of 'intelligence'. Therefore these discussions belong with Strong AI.--moxon 10:37, 30 November 2006 (UTC)

SI deletion proposal

Synthetic Intelligence is the evolution of Artificial Intelligence: when machines gain the ability to ask questions, learn, and modify what they learn into new ideas, rather than just following a set of preprogrammed scenarios as with AI.--Ssmith2k 13:07, 22 August 2006 (UTC)

AI isn't restricted to following preprogrammed scenarios. These articles go downhill quickly when people start inserting their own notions, especially when those are uninformed. That's why all material should be sourced. Editors are just that -- editors, not essayists. -- Jibal 10:31, 17 August 2007 (UTC)

If I may: Synthetic intelligence seems to deal with the creation/manufacture of intelligence, as opposed to artificial intelligence, which may more accurately reflect the simulation of intelligence.--voss

Regardless of dismissing the term as a neologism, it is a term in use and thus deserves some manner of definition/discussion of the argument for its use. Changing deletion request to suggestion for merge. Darker Dreams

I can't find any evidence of this term being used. Can you provide some? —Ruud 16:38, 2 May 2006 (UTC)
I quit digging around that sort of thing when I left Computer Science for Pols/Law. I think I originally ran across it in science fiction. I don't recall how I ended up on this topic; however, I rarely flip to random entries. However, I will present the fact that someone else started the article as evidence that this is not something I randomly generated. More importantly, simply because you (or I) cannot immediately point to a place it originates does not invalidate the argument that it is an existent term, unless you are proposing that you can prove it is not. In short: the fact the article exists and has information is proof that there is such a term, which is not outweighed by your lack of proof to substantiate it. If you could show evidence, even if that evidence is not sufficient to constitute conclusive proof, that the term is solely a neologism with no other contextual, linguistic, or argumentative value, then I'd put the deletion flag back myself. Meanwhile, this entry definitely needs citation, which would probably be why it already had a "needs cleanup" flag. Darker Dreams 18:19, 2 May 2006 (UTC)
Google search: Results ... about 5,710,000 for synthetic intelligence. (0.17 seconds)
including;
Darker Dreams 18:28, 2 May 2006 (UTC)
You need to be more careful when doing a Google search. I can find only 616 unique hits. Also, please read WP:V. Everything on Wikipedia needs to be verifiable, so absence of evidence can safely be taken as evidence of absence. —Ruud 22:05, 2 May 2006 (UTC)
If you're going to just throw policy at me, WP:AGF. I have shown that there is some notability per WP:N, which seems to be what you actually meant with your neologism tag. Whether or not it is sufficient notability is something I will leave to those more experienced with Wikipedia than myself. Given that there has now been more discussion than your original delete prod allowed, I'll suggest that if you still feel it should be deleted, AfD it.
No one challenged your good faith, and even if they had, someone else's violation of policy is no defense of your own -- see tu quoque. And please sign your comments. -- Jibal 10:27, 17 August 2007 (UTC)

SI reverted "Merge"

Given that nothing was added to the target page except that the term exists (not even the reasons for the term), I can't help but feel that even with the massive number of warning/concern headers this article has, it doesn't do more to explain the term/its use/etc. than the article it was merged to. A more appropriate linkage would be to reference Computational intelligence, or Artificial intelligence, as a source for broader definition, information, and discussion of the importance and use of such programs. However, simply adding the clause "sometimes called Synthetic Intelligence" to an article does not constitute a merge of anything more than the title of the article. In short: if that merge is appropriate, then when I search for Synthetic Intelligence, simply being redirected to Computational intelligence should tell me something about the term for which the actual search was used. Darker Dreams 22:45, 31 October 2006 (UTC)

That kind-of makes sense to me. I'd suggest a redirect to Artificial intelligence, which is usually considered to be the broad category that contains all approaches to the creation of non-natural intelligence. But whichever page the redirect goes to, I think a single sentence could be written that contains everything useful from here. Perhaps something like:
"Some researchers prefer the term synthetic intelligence, which avoids the possible misunderstanding of 'artificial' as 'not real'.[some reference to somebody talking about the terminology here]"
It needs work, but I think it would be an improvement over a rambling page like this, which isn't really useful to anyone. JulesH 22:52, 19 November 2006 (UTC)
While I fully agree that the article is rambling and unhelpful I think that any attempt to condense it into a single sentence is a disservice or, at the least, probably more difficult than simply cleaning up a multi-sentence article or sub-section of another article...Darker Dreams 02:43, 20 November 2006 (UTC)

Article name change

Being as this article is about both strong and weak AI (the article Weak AI redirects to Strong AI) does anyone object to changing the article title to "Strong and weak AI"? -- Schaefer (talk) 17:28, 20 November 2006 (UTC)

How about "Strong AI vs. Weak AI" --moxon 15:34, 7 December 2006 (UTC)

Proposal for Split

It seems that this page has several subjects lumped in on the basis of comparing/contrasting them with Strong AI. While I do not doubt that these (mostly contrasts) are useful- making them the central and default source of information on Wikipedia about the subject seems weaker than providing them with their own pages- even if those pages are stubs that need cleanup and expansion. In addition to the Strong AI article this page includes;

  • Weak AI
  • Synthetic Intelligence

Consequently, the sub-sections on General AI and Strong AI have become mini-articles on those subjects instead of being part of a consistent whole. The General AI section should be renamed to indicate that it is not giving a definition of AI but providing a contrast, and having a Strong AI section in the Strong AI article is highly redundant- the entire article should be about Strong AI. Information not on that subject should be in the appropriate, and separate, article- not dumped here. I can perform this separation, but I assumed that others who were working here might have some more specific ideas. Darker Dreams 17:47, 13 January 2007 (UTC)

The problem with weak AI is that it only signifies the denial of the quote in the first paragraph. It doesn't add up to much more than arguments against strong AI. It would be like an article on 'black' describing the absence of light. How about renaming this article to include both and listing properties and arguments for and against the emergence of 'mind'? --moxon 08:58, 16 January 2007 (UTC)
Note that Weak AI should not be confused with GOFAI. --moxon 09:21, 16 January 2007 (UTC)
The problem with the term SI is that it is very rarely used, it holds no technical specification, it is merely a philosophical claim, has nothing to add, and is in full agreement with the Strong AI philosophy. This is evident from the quote in the first paragraph. I agree that one line can do the trick. --moxon 08:15, 16 January 2007 (UTC)
By renaming this article to "Strong AI vs. Weak AI", we leave space for pure articles on 'Strong AI' and 'Weak AI' (if the need should occur) and justify the material on the current page as argument in the debate, similar to the Neats vs. scruffies article. --moxon 09:21, 16 January 2007 (UTC)
My main concern is that this single article reads like a series of short articles about subtopics and related topics. Even if Weak AI is nothing more than a construct used to contrast with "Strong AI", the way the article has been approached makes it seem like a stub that has been added to a list of stubs instead of part of a coherent whole. As for the SI section, which contributes to this feeling of discontinuity: while I do not disagree that SI is a philosophical claim, I take significant issue with the idea that differences in philosophy are insufficiently important to have distinct articles. Darker Dreams 18:58, 16 January 2007 (UTC)
Is there any objection then to renaming this article as suggested? That would give the different philosophies an opportunity for a fresh start in distinct articles, while this one can be structured to focus on aspects of disagreement. --moxon 10:14, 17 January 2007 (UTC)
Renaming the article is fine, and probably more accurately represents the direction in which this article is headed. However, it seems like it would predicate the removal of the sections on General artificial intelligence (or Artificial general intelligence- the terminology needs to be clarified there) and Synthetic intelligence- which really fall outside an X v Y schema. There is also a significant amount of discussion here that strongly, and really only, applies to Strong AI as opposed to Weak AI (Comparison of computers to the human brain, Methods of production). Thus, I would still suggest that some splitting happen...
Frankly, reading this article, it feels like people were unhappy with several stubs, and instead of filling out each and cross-linking, they were aggregated into the most developed article.
What I would prefer to see is each of these proposed articles be made- Strong AI, Weak AI, Strong AI v Weak AI, Artificial general intelligence, Synthetic intelligence... even if each is a stub, each can then be expanded along the lines that it deserves without interfering with the others, using cross-referencing/links to establish their proper relationships.
In short- no objection, per se, I simply don't think it addresses the whole issue. Darker Dreams 17:20, 17 January 2007 (UTC)
Perhaps a more on-point distribution would be Strong v Weak AI, Synthetic v Artificial intelligence, and General v Specific AI. However, there is in this article enough for a (short) article on Strong AI by itself. Darker Dreams 19:56, 17 January 2007 (UTC)

Article renamed

This name change is a first step towards unravelling this topic. Some restructuring is foreseen. --moxon 13:19, 19 January 2007 (UTC)

Trying to figure out if this comment is a statement that you have it in hand, or that other hands are specifically invited. Darker Dreams 05:06, 23 January 2007 (UTC)
Please feel free to modify, thanks. I have no specific plan... --moxon 10:45, 24 January 2007 (UTC)

Sapience

Sapience != sentience. Sapience is "the ability to act with judgement", as stated through the link. Sentience is being self-aware. This should be fixed (or simply included for clarity). —The preceding unsigned comment was added by AeoniosHaplo (talkcontribs) 11:52, 9 April 2007 (UTC).

Computer implementation of brain

A few points worth adding

(1) The parallel vs. speed issue is a red herring, because computers can be designed to operate in parallel, in the same way as the brain. Given a sufficiently large number of transistors, one could create a simulation of a brain which simulated all neurons in parallel, and operated incredibly fast compared to a human brain. (The number of transistors would be vast, however.) A toy sketch of this kind of all-neurons-at-once update follows after these points.

(2) If one accepts that it is possible to simulate the operation of a single human cell to a high degree of accuracy, then one is forced to accept that it is possible in principle to create a strong AI via simulation of a human from conception, at a cellular level.

(3) Though the computing power required for such a simulation would be huge, it is likely that a highly detailed model may not be required, and it would need to be done only once, since the resulting artificial person could be copied/probed/optimized as required. This makes the possibility somewhat more feasible. It might take a million-processor supercomputer 20 years to generate a simulated brain, but it might then be possible to reduce the complexity required by a factor of a million.

(4) Using a distributed.net/SETI@home-style architecture, a million-CPU grid supercomputer isn't as unlikely as it might seem.

Pog 15:00, 25 July 2007 (UTC)
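To illustrate point (1) above concretely, here is a toy leaky integrate-and-fire network in Python/numpy (the neuron model and every constant are arbitrary choices of mine, not a serious brain model): one vectorised line advances all N model neurons within the same simulated time step, which is what "simulating all neurons in parallel" amounts to in software.

  # Toy leaky integrate-and-fire network: every neuron is updated at once
  # per time step. All constants are arbitrary; this is not a brain model.
  import numpy as np

  rng = np.random.default_rng(1)
  N = 1000                              # number of model neurons (toy scale)
  v = rng.random(N)                     # membrane potentials, random start
  # sparse random synapses: ~1% connectivity, small random weights
  w = rng.normal(0.0, 0.1, (N, N)) * (rng.random((N, N)) < 0.01)

  for t in range(100):                  # 100 simulated time steps
      spikes = v > 1.0                  # which neurons fire this step
      v[spikes] = 0.0                   # reset the neurons that fired
      # decay + constant drive + synaptic input, for all N neurons at once:
      v = 0.95 * v + 0.08 + w @ spikes.astype(float)
      if t % 10 == 0:
          print(t, int(spikes.sum()))   # step, number of spiking neurons

The serial machine still performs the arithmetic one multiply at a time under the hood, but nothing in the simulated dynamics depends on that ordering, which is the point being made.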

Metaphysics

This article is hopelessly confused. The difference between weak and strong AI has nothing to do with what the system can do; it is strictly a metaphysical distinction. Consider Searle's Chinese Room. It really does converse in Chinese -- it's defined that way. But Searle argues that it doesn't "really" understand Chinese, despite being able to speak it. The distinction has nothing to do with capability, only with nature or essence or intrinsic qualities or some such. Of course the confusion comes because such distinctions are meaningless, incoherent, and/or uninteresting, and so people tend to misunderstand Searle to be saying something more sensible. Searle's notion of "understanding" is, despite his protestations, dualistic -- a ghost in the machine. Any debate about whether something "really" possesses mental properties is dualistic, since it distinguishes the property from its observable effects. When people aren't doing bad philosophy, they readily recognize that anyone or anything that can converse in Chinese understands Chinese -- understanding is no more than a capacity to perform.

Unfortunately, other than the one quote from Drew McDermott about airplanes flying, the view that the weak/strong AI distinction is bogus isn't represented -- despite that being the dominant view among cognitive scientists, AI researchers, and philosophers. It's a curiosity of memetics that this distinction, which Searle introduced specifically to support his own metaphysical position, is widely accepted in the lay community without even understanding it and without awareness that it has been broadly and repeatedly refuted by some of the best minds. -- Jibal 10:17, 17 August 2007 (UTC)

I agree -- see next section ---- CharlesGillingham 21:39, 29 August 2007 (UTC)

Not in citation Tag

I tagged a sentence in the introduction. Either I'm uninformed or the author is confused.

  1. I think that Strong AI and Weak AI are the names of philosophical positions -- not classifications of AI systems or approaches or subfields. Am I wrong?
  2. If I'm right about that, then the remainder of the paragraph is off subject -- renaming the field has no relevance to the debates about strong AI (machines can be minds) or weak AI (machines can act like minds).
  3. In either case, I don't believe the first sentence is correct; I don't believe that the term "artificial intelligence" is considered by most people to mean the same thing as "Strong AI". If I'm wrong then I'd like to see a source. (And I could be -- maybe there's a whole community using the terms this way that I'm unaware of. A source would begin to clear this up.) ---- CharlesGillingham 21:39, 29 August 2007 (UTC)

Searle's Definition of "Strong AI" vs the current usage

Further research has shown I was wrong, above. I took several classes from John Searle in the early eighties and was mostly familiar with the original meaning. The term "Strong AI" is being used all over the internet in a way that doesn't reflect the original meaning.

  • Original meaning: Strong AI -- the philosophical position that "it is possible to build a machine with a mind".
  • Current meaning 1: Strong AI -- a computer system that has a mind.
  • Current meaning 2: Strong AI -- the research project of giving a computer system a mind.

The term "Weak AI" has completely changed meanings and the new meaning has no relation to the original meaning. Now it is often used in contrast to Strong AI, as a kind of classification of AI systems (or research).

  • Original meaning: Weak AI -- the weaker philosophical position that "it is only possible to make a machine that acts like it has a mind." (Note that if you believe in Strong AI, you must also believe in Weak AI -- these positions are not opposed to each other. Strong AI is, well, stronger.)
  • Current meaning 1: Weak AI -- a computer system that uses artificial intelligence technology but doesn't have "Strong AI"
  • Current meaning 2: Weak AI -- research in the broader areas of artificial intelligence that isn't directly aimed at creating "Strong AI'"

Anyway, I will continue to research this and eventually will be able to prepare something that makes this more clear and include it in the introduction. The ideal thing to find would be some reference by someone who has been around awhile and has noticed this change in meaning, most ideally Searle himself. ---- CharlesGillingham 17:24, 31 August 2007 (UTC)

I think using the word "mind" is a bad idea, especially since the idea of what a mind is, is core to the weak/strong debate. Do you mean human-like intelligence, self-reasoning and awareness? Consciousness of thought? A concept of the mindful self? Or all of the above? It gets all too confusing. It is easier to describe the approaches taken by people that identify with "strong" or "weak" labels (which the article does), rather than argue over semantics. A reference showing historically how these terms were used, and how they are now, would be useful though! MattOates (Ulti) 17:40, 31 August 2007 (UTC)
What we think about the word "mind" is irrelevant. These terms were invented by John Searle, and he likes the word "mind". The paper that introduced these terms is called Minds, Brains and Programs. The difference between accepting the "strong AI" position and only accepting the "weak AI" position actually hinges on some semantics, as most philosophical arguments do. It is confusing, as most philosophical arguments are. In fact, the questions that you are asking are the beginning of your own rebuttal of Searle. (And, in point of fact, I agree with you. I think Searle is hopelessly confused. But as I say, what I think is irrelevant.)
My point is this: there are two things that could be addressed by an article with the title Strong vs. Weak AI:
  1. A notable debate in the philosophy of AI, started by philosopher John Searle. (Which is, like most philosophical debates, confusing and semantic.)
  2. A way that people currently classify AI systems and work.
To my mind, these are both notable and both need articles. I'm not as sure they belong in the same article. ---- CharlesGillingham 19:16, 31 August 2007 (UTC)

This article has serious problems: expert attention flag

This article does not correctly use the terms Strong AI and Weak AI. I have marked this article as needing expert attention (to warn readers) and have a proposal to fix it below. As noted above, the term Strong AI has two related but very different uses. I have done some research, and here's what I've found:


References where Strong AI refers to the argument that "an appropriately powerful AI system would have a real conscious mind"

  • John Searle, "Minds, Brains and Programs" in Behavioral and Brain Sciences, vol. 3. Cambridge University Press 1980. (The paper that introduced the term)

I find it useful to distinguish what I will call "strong" AI from "weak" or "cautious" AI (artificial intelligence). According to weak AI, the principal value of the computer in the study of the mind is that it gives us a very powerful tool. For example, it enables us to formulate and test hypotheses in a more rigorous and precise fashion. But according to strong AI, the computer is not merely a tool in the study of the mind; rather, the appropriately programmed computer really is a mind, in the sense that computers given the right programs can be literally said to understand and have other cognitive states.

  • S. Russell, P. Norvig, "Artificial Intelligence. A Modern Approach.", 2003. (The standard AI textbook)

First, some terminology: the assertion that machines could possibly act intelligently (or, perhaps better, act as if they were intelligent) is called the weak AI hypothesis by philosophers, and the assertion that machines that do so are actually thinking (as opposed to simulating thinking) is called the strong AI hypothesis.

(note also that Norvig and Russell define Weak AI differently than Searle does)

Strong AI An interpretation of artificial intelligence according to which all thinking is computation, from which it follows that conscious thought can be explained in terms of computational principles, and that feelings of conscious awareness are evoked merely by certain computations carried out by the brain or (in principle at least) by a computer. A debatable implication of this view is that a computer that can pass the Turing test must be acknowledged to be conscious.

Weak AI An interpretation of artificial intelligence according to which conscious awareness is a property of certain brain processes, and whereas any physical behaviour can, in principle at least, be simulated by a computer using purely computational procedures, computational simulation cannot in itself evoke conscious awareness.

Strong AI is defined as the view that an appropriately programmed digital computer with the right inputs and outputs, one that satisfies the Turing test, would necessarily have a mind


References where Strong AI refers to an AI system with human level intelligence (or the study and design of systems with human level intelligence)


References which use Strong AI in an unusual way:

  • Strong AI postulate@Everything.com (Here Strong AI refers to the argument that an AI system with human level intelligence is possible. Note that this is what Norvig & Russell call Weak AI.)
  • Everything2 (Here Strong AI refers to an AI system based on reverse engineering human intelligence.)

Rewrite proposal

It seems there should be two articles:

  1. Strong AI vs. Weak AI which would describe the original definitions of the terms, as used by academic researchers, philosophers and cognitive scientists.
  2. Strong AI which would describe artificial general intelligence, as imagined by futurists, science fiction writers and forward thinking researchers.

The first article needs to be written and will begin as a stub. The second article would contain most of the material from the current article. The term Strong AI would need a disambiguation page as well and links into this page would need to be re-pointed to the appropriate articles.

If no one objects, I will make this change early next week. ---- CharlesGillingham 07:09, 18 September 2007 (UTC)

I have revised the title of the new article above. ---- CharlesGillingham 17:27, 27 September 2007 (UTC)

In light of the problems described (in way too much detail) above, I am implementing this fix:

  1. I am (starting to) fix the philosophy of artificial intelligence article so that it covers the philosophical and academic use of the term "strong AI" and "weak AI"
  2. I will remove from this article any discussion of these issues, moving (what I can) over to the philosophy of artificial intelligence, and replacing them with appropriate "See also" and some disambiguation discussion.
  3. I will look through the "What points here" list to redirect anything that intended to point to the philosophical questions to the philosophy of AI.
  4. When all this is done, I will request that the title of this article be changed to simply "Strong AI" or "artificial general intelligence" or something.

Any objections? Is anybody out there? I feel like I'm talking to myself here. ---- CharlesGillingham 01:26, 28 September 2007 (UTC)

I have half-way completed (1) and (2) above, by adding new text here and in philosophy of artificial intelligence. The next step is to start removing and moving things. Please help if you are interested. I've marked the text that I think has to be removed as being "Off-topic" or "dubious" ---- CharlesGillingham 19:04, 29 September 2007 (UTC)

I have removed the off-topic text and placed it at Talk:Strong AI vs. Weak AI/Removed text. I will integrate as much as I can into the philosophy of artificial intelligence ---- CharlesGillingham 11:20, 3 October 2007 (UTC)

Requested move

The following discussion is an archived discussion of the proposal. Please do not modify it. Subsequent comments should be made in a new section on the talk page. No further edits should be made to this section.

The result of the proposal was move Duja 12:34, 11 October 2007 (UTC)


There are two current uses of the term Strong AI and this article has confused them. (One used by futurists like Ray Kurzweil (see Advanced Human Intelligence) and one defined by John Searle (see Minds, Brains and Programs). I have collected (above) quite a few sources that document these two uses of the term.) Nearly every link that points here intends to point to Kurzweil's meaning, so I changed this article to focus on his meaning, and have defined Searle's meaning in the philosophy of artificial intelligence#Strong AI vs. weak AI, where it belongs.

The upshot is that this article is now about just Strong AI, not Searle's distinction of Strong AI vs. Weak AI, so the title is incorrect. If you look at the "What links here" list, you will see that the vast majority of links redirect here from Strong AI. Nothing points here through the redirect at "Weak AI".

This article should be moved to Strong AI ---- CharlesGillingham 10:00, 3 October 2007 (UTC)

The above discussion is preserved as an archive of the proposal. Please do not modify it. Subsequent comments should be made in a new section on this talk page. No further edits should be made to this section.

Removed from Wikiproject philosophy

The philosophical issue is now covered in the article philosophy of artificial intelligence. This article no longer discusses philosophy. ---- CharlesGillingham 23:59, 11 October 2007 (UTC)