Posted by:
Henry Bemis
Date: July 20, 2023 01:46PM
From the linked article:
"We are being duped into believing these AI tools are far more intelligent than they really are. A tool like ChatGPT has no understanding or knowledge. It merely collates bits of words together based on statistical probabilities to produce useful texts. It is an incredibly helpful assistant."
COMMENT: It depends on what you mean by "intelligent." AI systems in general are composed of sophisticated algorithms designed to address and solve problems, and to generate results, well beyond the scope of any human capacity. The fact that they have "no understanding" -- as the philosopher John Searle repeatedly emphasized -- does not mean they are not intelligent; it just means that (presumably) they are not conscious agents, as humans are. ChatGPT is much, much more than a hodgepodge of "statistical probabilities." It incorporates subtle rules of grammar and the complex semantics of natural language, as well as logical operators and algorithms, applied over an immense data set. There is no question that this encompasses what can legitimately be called non-conscious 'critical thinking.' It operates on the same principle as a common calculator, only over a much wider domain.
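For readers curious what "collating words by statistical probability" means in the simplest possible terms, here is a toy sketch. This is NOT how ChatGPT is implemented -- a real large language model learns billions of parameters rather than a hand-written lookup table, and the vocabulary and probabilities below are invented purely for illustration -- but the core move, choosing each next word from a probability distribution conditioned on what came before, is the same in spirit:

```python
import random

# Invented bigram table: for each word, the probability of each next word.
# Purely illustrative; a real LLM conditions on far more than one word.
NEXT_WORD_PROBS = {
    "the": {"cat": 0.5, "dog": 0.3, "idea": 0.2},
    "cat": {"sat": 0.7, "ran": 0.3},
    "dog": {"sat": 0.4, "ran": 0.6},
    "sat": {"down": 1.0},
    "ran": {"away": 1.0},
}

def generate(start: str, max_words: int = 5, seed: int = 1) -> str:
    """Walk the table, sampling each next word by its listed probability."""
    rng = random.Random(seed)  # fixed seed so the walk is repeatable
    words = [start]
    for _ in range(max_words - 1):
        choices = NEXT_WORD_PROBS.get(words[-1])
        if not choices:  # dead end: no continuation known for this word
            break
        tokens, probs = zip(*choices.items())
        words.append(rng.choices(tokens, weights=probs, k=1)[0])
    return " ".join(words)

print(generate("the"))
```

The point of the sketch is that nothing in it "understands" cats or dogs; it only follows frequencies. Whether that deserves the word "intelligent" is exactly the question the article and this comment dispute.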
__________________________________
"But it is not knowledgable, or wise. It has no concept of how any of the words it produces relate to the real world. The fact that it can pass so many forms of assessment merely reflects that those assessments were not designed to test knowledge and understanding but rather to test whether people had collected and memorised information."
COMMENT: When considering and comparing the cognitive capacities of a human being with machine intelligence, what should be emphasized (as the linked article suggests) is that, notwithstanding the computational superiority of computers in highly complex calculations, human beings are NOT machines, and their brains are NOT computers. (Otherwise, in principle, AI would eventually be able to mimic -- and go well beyond -- all human capacities, including creativity and 'wisdom.') The capacity that sets human beings apart from machines, and arguably makes them 'superior' to a computer, is the capacity to engage the physical world as a cognitive agent in "real time" and to solve unique, unexpected, and unprogrammed real-world problems through the exercise of conscious will. They do this by abstracting (consciously or unconsciously) highly complex data structures relevant to a wide variety of problems and contexts, and by retaining such experiences in memory. This is accomplished through the unique cognitive capacities of the human mind, which have no analog in a digital computer or computational neural network. (This is essentially the so-called 'frame problem' in AI.) The fact that humans can also consciously 'think' about such problems and choose an alternative action not computationally prescribed is likewise something computer intelligence cannot do.
Regarding teaching, I believe that students should be taught the above cognitive distinctions at an early educational level, in specific scientific terms, in order to displace the popular AI and cognitive-neuroscience notion that humans are just cognitive machines, operating as software on brain hardware. As long as students think they are just machines, it will be natural for them to look to a better machine to solve their problems. Computers, Google, Wikipedia, and now ChatGPT will outgrow their place as tools and become a substitute for genuine human critical thinking, the kind that encompasses personal and social values.
I worry about teachers at all levels who themselves do not know or appreciate the difference between (1) personal human critical thinking, that is, applying basic facts, logic, and argument to social problems and issues, and (2) adopting the latest politically correct 'critical thinking' paradigm. Their students do not learn how to really think, but only how to extrapolate and apply to their worldview what others have determined to be the result of critical thinking. This is precisely what computer programs like ChatGPT now encourage.
Moreover, one can be sure that it is impossible for a program like ChatGPT to be effective without incorporating personal and social values at some level. One can imagine what will happen when ChatGPT's programmers subtly build their preferred social values into the 'logical' results of their algorithms. So, for example, when it is asked, "Describe the latest decision of the US Supreme Court on LGBTQ rights," it produces a long narrative describing the opinion and ends with, "Such a decision was clearly inconsistent and discriminatory." "Thanks," the user responds, "that's all I need to know."