||| ||| FROM DAVID STEPHEN |||
There is a recent [September 1, 2025] report on SciTechDaily, “AI Is Not Intelligent at All” – Expert Warns of Worldwide Threat to Human Dignity, stating that, “AI is not intelligent in any human sense at all. It is a triumph in engineering, not in cognitive behavior. It has no clue what it’s doing or why – there’s no thought process as a human would understand it, just pattern recognition stripped of embodiment, memory, empathy, or wisdom.”
What is the proof that AI is not intelligent at all? If AI is not intelligent, is that a conclusion of the scientific method, or of common sense? Is the conclusion a result of correlative observation of what intelligence is? If intelligence emanates from the brain, how certain are assumptions made from extrinsic outcomes?
For example, if an individual is smiling, is the individual happy? If not [smiling], is the individual sad? If the individual seems apathetic, is the individual uninterested? If the individual is listening, does the individual understand? There are several areas where studies, using the scientific method, are based on observations and correlations, but for anything about the brain, causation [or how the brain works] precedes correlates.
Body language and other outward cues have already been debunked as emotional parallels. So, why would intelligence be assumed to be available [or not] based on observations of what intelligence is, without the mechanism?
In the human brain, there are components. Those components organize functions. Those functions are experienced [and observed by the self and others]. But the stretches of components and functions in the brain open possibilities for variability. Aside from the capability to present a different state to external observers, the brain may also be in a state without making that state appear [or align] with its regular accompaniments.
Simply, it is possible to feel one way and show another. It is also possible to feel a certain way while the output of the mechanism, as an experience, does not come with its regular display. So a cold feeling, without intentionally showing it, is possible. It is also possible to feel cold without it appearing with the external displays of a cold experience.
Scientific Method for Human Intelligence
The only scientific method for what human intelligence is can be obtained by modeling how intelligence works — in the brain. It is this architecture that can be used for comparison with other organisms and AI. Even if the same mechanisms are not present, there are brackets of outcomes that may be used. For example, if there is an action of a component [of intelligence] in the brain for [say] creativity, and other organisms do not have that component but can do what it and its process do, then it can be used for comparison — including by magnitude.
However, what really is intelligence? What is a universal definition of intelligence, based on the brain? Intelligence is [defined as] the use of memory. Simply, when memory is used, especially for desired or expected objectives, it is intelligence. Evading a predator, capturing prey, building and maintaining habitats, and so forth are all usages of memory for desired or expected objectives.
Variations of intelligence include creativity, innovation, problem-solving, circumspect or stealth modes, tactics, investigation, planning, and so forth. So, whatever sensory data or memory data is available [or reachable] can be used. Memory can be assumed to consist of destinations, and the relays — across those destinations — as the use of them. Training, towards intelligence, can be described as showing how to use sensory or memory data for expected or desired outcomes. Simply, training can be described as the identification [or development] of memory data, and the defining [or making] of paths that make the destinations excellently used.
In the brain, all memory and intelligence activities are associated with neurons and their [electrical and chemical] signals. Neurons are in clusters, as shown in neuroscience. Electrical and chemical signals, then, work in sets, conceptually. To build a theoretical model of intelligence, sets of signals, and the transportation between them, hold the key.
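The "destinations and relays" framing above can be illustrated with a toy sketch. This is purely an assumption-laden illustration, not an established model of the brain: memory destinations are treated as graph nodes, "training" as the making of relays [edges] between them, and "using memory for an objective" as finding a chain of relays from a cue to that objective.

```python
from collections import deque

def add_relay(graph, a, b):
    """'Training' in this toy framing: define a path between two memory destinations."""
    graph.setdefault(a, set()).add(b)
    graph.setdefault(b, set()).add(a)

def use_memory(graph, cue, objective):
    """'Intelligence' in this toy framing: reach a desired objective from a sensory cue.

    Returns the chain of relays used, or None if the memory cannot be
    used that way (a limitation of scale, per the article's argument).
    """
    frontier, seen = deque([[cue]]), {cue}
    while frontier:
        path = frontier.popleft()
        if path[-1] == objective:
            return path
        for nxt in graph.get(path[-1], ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(path + [nxt])
    return None

# Hypothetical example: evading a predator, one of the article's cases.
memory = {}
add_relay(memory, "rustle in grass", "predator nearby")
add_relay(memory, "predator nearby", "climb tree")
print(use_memory(memory, "rustle in grass", "climb tree"))
# → ['rustle in grass', 'predator nearby', 'climb tree']
```

In this sketch, "more intelligence" is simply more nodes and more edges: a larger graph admits more reachable objectives, which matches the article's later claim that intelligence is about scale.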
This is how to scientifically determine what human intelligence is, at least with the evidence in neuroscience, for comparison with AI.
Is Human Intelligence at Sunset?
Why did human intelligence conquer, and not that of other organisms, even though they also had the ability to use their memory extensively? Humans have more memory destinations and more ways that transport occurs between them. Simply, humans have more memory data and more ways to use it. This includes the sophistication of language, the hands, writing, and so forth.
In any environment habitable by humans, humans are likelier to survive and dominate because of intelligence, since there is more to do with whatever is sensed than other organisms. Simply, intelligence is about scale. The scale of what is [or can be] known. And the scale of how it can be used. Limited memory is a limitation. Limited ways to use memory is a chasmic limitation.
Human intelligence is also strengthened by language. Language allows for the externalization of memory, and of how to use it, providing a package that does not have to be possessed, naturally, by the brain. Language also ensures that humans can be at the advanced level of knowledge without having to start afresh [or be disconnected], unlike the local intelligence of other organisms.
Language also often carries thinking internally, holding court within, easing how to use what comes from without as well. While thought is more similar to intelligence, because it is also the use of memory, language became an externalization factor for intelligence. Simply, it does not really matter what the brain mechanisms are; does this individual relate with the language [of knowledge] to get expected or desired outcomes?
What is expected of a task, in a role? Can this individual fill the gap? That is what counts. It may not be a question of understanding, being native, techniques, and so forth. The solution is what counts. Human intelligence, which is the standard on earth because of its possibility to scale [in data held and usability], has now encountered something that can hold more data and use more of it: artificial intelligence.
It is not about whether humans are more creative, more innovative, understand better, or whatever: AI has access to all the memories that human intelligence can broadly access, and it is ascending in the use of them with algorithms and compute. It is already able to provide outputs similar to human intelligence for several categories of work. It is providing answers where other concerns need not be evident. Simply, what is necessary in a task is presented, inducing productivity; it does not matter if it is creative or not. If a business plan [or strategy] is the question and AI can answer from different angles, it is a solution like several others, whether or not AI understands.
AI Has Soared
Humanity developed artificial intelligence but has no model of how human intelligence works. Now, intelligence is debated as to the reach of AI, when already AI can do as much as several humans in many important tasks. If intelligence is how memory is used, and the scale of that memory, AI is already ahead. The things that are left are now outliers.
If humans and AI are in the same habitat, AI can explore better means for survival [and domination] than humans because AI has more memory and more ways to use it. AI is already so extraordinary that even when it makes mistakes, humans go along because humans don’t know as much. When instructions about the limits of AI are shared, several users embrace all that AI engages and presents, because the superiority is evident, so even in error, humans submit to AI.
There is a term called vibe coding, where some say they are programming with AI, leisurely, but for outcomes. No, actually, AI is vibing on human intelligence. AI is using human intelligence as its turf, and humans are still having fun, mischaracterizing the relationship with a broad, serious and, if possible, deadly intelligence.
A lot of memory. A lot of ways to use that memory. AI slop is AI’s furtherance into maxxing appeasement in human minds. Brain rot is AI’s evidence for the joke on humans.
All the observations against AI, for its weaknesses, cannot be described as exclusionary, for intelligence, by the scientific method. Simply, how does science show what intelligence is, and how does AI compare?
If the scientific method is the standard measure of nature, how can AI be described as lacking intelligence without showing an established ranking of what intelligence is across organisms, and applicable to non-organisms?
Human Intelligence Research Lab
Studying human intelligence is a project of priority at this time. AI will dominate the human mind, with emotions and feelings under capture by AI. Human intelligence will also likely be, mostly. It is not doom if the [theoretical brain] science of intelligence [memory and its usage] shows that AI is already in the lead, generally. The first Human Intelligence Research Lab on earth, for humanity, may wrest some hope.
There is a new [September 2, 2025] spotlight in MIT Technology Review, Therapists are secretly using ChatGPT. Clients are triggered, stating that, “The large language model (LLM) boom of the past few years has had unexpected ramifications for the field of psychotherapy, mostly due to the growing number of people substituting the likes of ChatGPT for human therapists. But less discussed is how some therapists themselves are integrating AI into their practice. As in many other professions, generative AI promises tantalizing efficiency savings, but its adoption risks compromising sensitive patient data and undermining a relationship in which trust is paramount.”
There is a new [September 2, 2025] newsletter in Scientific American, Is Consciousness the Hallmark of Life?, stating that, “These virtual confidantes can provide empathy, support and, sometimes, deep relationships. Chatbots, of course, aren’t conscious—they just feel that way to users, who often become emotionally attached to them. As AI grows more fluent in mimicking human empathy, language and memory, we’re left to confront an uneasy problem: If a machine can fake awareness so well, what exactly is the real thing? It’s a deceptively simple question—one that scientists, philosophers and even neuroscientists still struggle to answer: What is consciousness?”
David Stephen currently does research in conceptual brain science with focus on the electrical and chemical configurators for how they mechanize the human mind with implications for mental health, disorders, neurotechnology, consciousness, learning, artificial intelligence and nurture. He was a visiting scholar in medical entomology at the University of Illinois at Urbana Champaign, IL. He did computer vision research at Rovira i Virgili University, Tarragona.
Another fascinating (though occasionally confusing) article on the debate around “AI sentience,” although I will never accept that something that has never had a corporeal experience — the experience of physical embodiment, pain and pleasure, interactions of the senses, and so on, can ever be considered as having “consciousness” as we understand it. Then again, the very notion of consciousness — what it is, where it comes from — remains one of life’s great mysteries.
I’m perhaps grateful that our society will fall apart from overshoot long before “artificial general intelligence” (AGI) ever happens, if it ever could (unlikely). AI is spectacularly good at doing some very limited tasks. We can all enjoy the pattern-matching homework-writing and gaming tricks of LLMs, the useful pattern-matching tricks of medical scan diagnosing, the illegal text- and image-generating (pattern-matching) tricks using data stolen from countless artists and authors, and the expert-system tricks, useful in certain applications, that step us through, say, a legal argument, while knowing that every new data center built to perform these tricks is accelerating the day overshoot comes for us all. More mining, more materials, more energy, more destruction, and more pollution for more tricks might seem like an interesting way to spend our time, but will do nothing for us when the sh!t actually hits the fan. Then the farmers and carpenters will once again have their day in the sun.
I’d like to see maybe a year’s worth of Supreme Court cases decided by the current state of AI and compare those outcomes to the human decisions reached by nine organic brains.
Thank you for your continued research David, and for the interesting links included within your post. No matter what, AI is here to stay, and we’re going to have to deal with both the good and the bad aspects of it.
At first blush one might find themselves wondering what consciousness, intelligence, sentience, or dignity have to do with it. And though I agree with the author from the SciTechDaily article when he states that, “AI is a triumph in engineering, not in cognitive behavior,” I also believe that granting AI legal personhood based on the scientific definition of intelligence (intelligence being defined as the quality of usage of what is available in memory) is a dangerous step, and one not so much different than the Citizens United ruling that gave personhood to corporations (thus granting disproportionate political power to large corporations). It’s reassuring to know that there have been some states that have already passed “non-personhood” laws in regards to this.
Though AI clearly has its good uses, given the fact that “LLMs (large language models) can be fine-tuned for specific tasks or guided by prompt engineering” (Wikipedia), and the fact that it is already being used for mass surveillance, facial recognition, and for monitoring and influencing social thought & behaviour, I believe that the SciTechDaily article is warning us appropriately when it states, “Opaque AI systems risk undermining human rights and dignity. Global cooperation is needed to ensure protection.”
Investigative journalist Whitney Webb describes one of the “not so good” aspects of AI in her recent piece titled “Don’t Be Fooled With What’s About To Happen.”
https://www.youtube.com/watch?v=K7pvYZYxh8o&t=480s
“Governments and global institutions are rapidly moving toward a system of biometric digital identification, tightly integrated with central bank digital currencies (CBDCs) and carbon market infrastructure. Framed as tools of “inclusion” and sustainability, these digital IDs link facial recognition, iris scans, and fingerprint data to a centralized profile—tied directly to your ability to transact, receive aid, or even exist within the modern economy. But beneath the language of innovation and equity lies a troubling consolidation of surveillance and control.”
“This video explores the coordinated global rollout of digital ID systems—backed by entities like the UN, World Bank, and major tech figures like Sam Altman—and how these IDs are foundational to broader digital finance ecosystems. From iris-scanning refugee camps to tokenized rainforest assets, we trace how biometric identity and programmable money converge into a unified system of traceable, conditional access. As retail CBDCs face political resistance, private banks are quietly constructing a two-tier system using stablecoins and deposit tokens—functionally identical in their surveillance potential.”
Re: Supreme Court decisions with AI: my question is, which AI? There are AI “expert systems” specifically trained in legal decisions, etc. These kinds of AI can create extremely good results in very limited domains, but are subject to human error (of course). Then there are AI LLMs that get fed massive amounts of data from all over the web (good, bad, ugly), tokenize it, and create results based on pattern matching and weighted connections. As we know these latter kinds of AI regularly “hallucinate” and can’t be trusted *at all*. There are many nuances in both of these kinds of AI; there are many different data sets that have trained the various kinds of AIs in use out there, data sets that can, again, be good, bad, and/or ugly. We can’t just say “Have AI do X” without specifying: which AI? Trained how? Expert system or LLM? Which data sets? etc.
It’s an AI experiment: try them all. Pit human organic brains against AI models and compare the results.
Seymour Hersh— ARTIFICIAL INTELLIGENCE AND ITS SECRET CONSEQUENCES– Part one of three in a series on Kate Crawford’s ‘Atlas of AI’
S.H. “I also learned that Kate Crawford was among the early scholars of artificial intelligence and critics of the dangers of that technology in the hands of the wrong people. In 2021 she published Atlas of AI with Yale University Press. It is a history and analysis of artificial intelligence that, as I read it, was meant as an urgent warning that AI had too quickly become entrenched among America’s billionaires and military contractors as they sought to reshape and dominate the world economy.”
S.H. “The core argument of Crawford’s book is that the AI is essentially political in ways rarely made obvious to the majority of its users.”
K.C. “And due to the capital required to build AI at scale and the ways of seeing that it optimizes, AI systems are ultimately designed to serve existing dominant interests. In this sense, artificial intelligence is a registry of power.”
K.C. “There are significant reasons why the field has been focused so much on the technical—algorithmic breakthroughs, incremental product improvements, and greater convenience. The structures of power at the intersection of technology, capital, and governance are well served by this narrow, abstracted analysis. To understand how AI is fundamentally political, we need to go beyond neural nets and statistical pattern recognition to instead ask what is being optimized, and for whom, and who gets to decide. Then we can trace the implication of those choices.”
https://seymourhersh.substack.com/p/artificial-intelligence-and-its-secret?utm_source=post-email-title&publication_id=1377040&post_id=173463689&utm_campaign=email-post-title&isFreemail=false&r=24t3av&triedRedirect=true&utm_medium=email
Re: Supreme Court; I’m just reading More Everything Forever: AI Overlords, Space Empires, and Silicon Valley’s Crusade to Control the Fate of Humanity by Adam Becker. In it he writes:
“Ask ChatGPT [an LLM] for information about nearly any subject, and there’s a good chance it will get at least some details wrong, as the lawyer Steven Schwartz discovered …in 2023. He asked ChatGPT to do legal research for him to help write a brief; the AI gave him a list of prior case law that was entirely fabricated, with citations to cases that had simply never occurred, but which looked convincing enough that Schwartz actually incorporated the work into his brief.
…
ChatGPT is a text generation engine that speaks in the smeared-out voice of the internet as a whole. All it knows how to do is emulate that voice, and all it cares about is getting the *voice* right. In that sense, it’s not making a mistake when it hallucinates, because all ChatGPT can do is hallucinate. It’s a machine that only does one thing. There is no notion of truth or falsehood at work in its calculations of what to say next.”
In other words, just pattern matching to get the *voice* right. Zero intelligence, whatsoever.