
This artwork accompanied a technical review — “Stochastic Parrots: How Natural Language Processing Research (NLPR) Has Gotten Too Big for Our Own Good.” (E. Sanchez & M. Gasser, scienceforthepeople.org)
“More recently, the law of unintended consequences has come to be used as an adage or idiomatic warning that an intervention in a complex system tends to create unanticipated and often undesirable consequences.” (Wikipedia, “Unintended Consequences”)
“Victor Frankenstein builds the creature … after discovering a scientific principle which allows him to create life from non-living matter.” (Wikipedia, “Frankenstein’s Monster”)
“It is all of us humans who harbor the mysterious but ancient urge to reproduce ourselves in some essential but extraordinary way. Artificial intelligence comes blessed with one of the richest and most diverting histories in science because it addresses itself to something so profound and pervasive in the human spirit.” (P. McCorduck, “History of Artificial Intelligence”)
“Rather than automating jobs that are repetitive and dangerous, there is now the prospect that the first jobs that are disrupted by A.I. will be more analytic and involve more writing and communication.” (Ethan Mollick, “The Mechanical Professor”)
“But this new crop of generative A.I. technologies seems to possess qualities that are more indelibly human. Call it creative synthesis — the uncanny ability to channel ideas, information and artistic influences to produce original work.”(Derek Thompson, “Your Creativity Won’t Save Your Job from AI”)
“A ‘bot’ — short for robot — is a software program that performs automated, repetitive, pre-defined tasks. Bots typically imitate or replace human user behavior.” (Kaspersky Total Security)
“Contrary to how it may seem when we observe its output, an LM (language model) is a system for haphazardly stitching together sequences of linguistic forms it has observed in its vast training data, according to probabilistic information about how they combine, but without any reference to meaning: a stochastic parrot.” (Emily Bender et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”)
“Automated decision-making systems implemented in public life are typically standardized. No algorithmic decision-making system can replace thousands of human deciders. Each of the humans so replaced had her own decision-making criteria; some good, some bad, and some arbitrary. Is such arbitrariness of moral concern?” (Kathleen Creel, Deborah Hellman, “The Algorithmic Leviathan: Arbitrariness, Fairness and Opportunity in Algorithmic Decision-Making Systems”)
“A philosophical zombie, or p-zombie argument, is a thought experiment in philosophy of mind that imagines a hypothetical being that is physically identical to and indistinguishable from a normal person but does not have conscious experience, qualia (which are instances of subjective, conscious experience) or sentience.” (Wikipedia, “Philosophical Zombie”)
There’s an old saying:
“Dogs think they’re human; Cats think they’re God”
It seems lately that the newest crop of generative A.I. technologies may be “thinking” (and acting) like they’re human — perhaps on the way to thinking that they’re God? God help us! A.I., or Artificial Intelligence: What do most of us even know about it? I guess the first thought that many will have is that it has to do with computers. Yes, but not just the regular, old computers that have long since become a substantial part of the fabric of our everyday life. No, we’re talking about computers that can actually think, and that appear to think somewhat rationally and appropriately on their own.
As a matter of fact, there was a seminal book on this very topic by Pamela McCorduck, first published back in 1979: “Machines Who Think.” The book was considered to be the first modern history of artificial intelligence. McCorduck was a British-born American journalist and author. A.I. was of great interest to her, even as a young humanities student in the 1960s. At some point, she was fortunate enough to meet and interview some of the “founding fathers” of A.I. They had convened a two-month work session at Dartmouth College in 1956: an assemblage of ten scholars with varying backgrounds in mathematics, neurology and computer science who spent that summer sharing information and ideas about a potential new area of technology.
This group — with representation from Harvard, MIT, Dartmouth, IBM, the Rand Corporation, Bell Labs and Carnegie Tech — had received a Rockefeller Foundation grant of $7,500. They began their meetings with “the conjecture that every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it.” (McCorduck, “History…”) It was the first time the term “artificial intelligence” had been used officially. Through the years, McCorduck began to see artificial intelligence as the “scientific apotheosis of mankind’s endless fascination with artifacts that think.”
To be sure, since the earliest times, humans have always had a fascination with “the creation of man-made, rather than begotten intelligence.” (McCorduck, “History”) Greek mythology is full of references to artificial intelligence, or “automatons,” put together to carry out some task that the gods themselves found burdensome. Chinese and Egyptian engineers also built automatons.
In the 13th century, Ramon Lull, a Spanish mystic, set out to convert Muslims to Catholicism. He didn’t succeed at that, but the time he spent with this group led him to become interested in an Arabic thinking machine called the “zairja”. He then attempted to construct a similar machine of his own. He called it the Ars Magna (the “Great Art”). The aim of the Ars Magna was:
“to bring reason to bear on all subjects, and thereby arrive at truth without the trouble of thinking. Be that as it may, Lull’s scheme seems to me remarkable not for its grandiose claims, but because without hesitation it presupposes that human thought could be mechanized.” (McCorduck, “History”)
Charles Babbage, an English polymath, had vast knowledge in a number of fields: he was a mathematician, philosopher, inventor and mechanical engineer. He is considered by many to be the “father of the computer,” having designed the Difference Engine, an early mechanical calculating machine. This led him to draw up plans for his Analytical Engine. And, though he died before the complete and successful engineering of his machines, many of his ideas laid the groundwork for modern computing.
Again, throughout human history, there has always been an interest in the creation of an artificial “human.” Be it the golem of Hebrew folklore that began as a lump of clay, which was then formed into a figure and brought to life by means of a charm or sacred word. Or, the mythological character Prometheus who fashioned humans out of clay and gave them fire. Or, in imaginative literature, the story of Dr. Frankenstein building a creature with artificial intelligence in his laboratory.
Alan Turing, the English mathematician, computer scientist, logician and cryptanalyst, was a champion of machine intelligence and was highly influential in the development of theoretical computer science. He had also proposed the Turing Machine in 1937, which can be considered a model of a general-purpose computer. (Of course, Turing’s impressive intelligence was brought out in the 2014 movie, “The Imitation Game,” starring Benedict Cumberbatch, about the cracking of Nazi codes during WWII.)
Modern A.I. research began in the mid-1950s, and the very first generation of A.I. researchers were convinced that “artificial general intelligence” (the ability of an intelligent agent to understand or learn any intellectual task that a human being can) was possible and that it would exist in just a few decades. A.I. pioneer Herbert A. Simon even wrote in 1965: “Machines will be capable within twenty years of doing any work a man can do.” (“Artificial General Intelligence,” Wikipedia)
Predictions such as this were the inspiration for Stanley Kubrick and Arthur C. Clarke’s character HAL 9000, who embodied what A.I. researchers believed they could create by 2001. In 2001: A Space Odyssey, HAL 9000 was made as realistic as possible according to the consensus predictions of the time. A.I. pioneer Marvin Minsky, who worked as a consultant on the movie, had predicted in 1967 that: “Within a generation…the problem of creating artificial intelligence will substantially be resolved.” Unfortunately, Minsky and other researchers grossly underestimated the difficulty of attaining that end, as became clear in both the early ’70s and again in the early ’90s, when the promise of greater things did not materialize. (“Artificial General Intelligence,” Wikipedia)
The A.I. field started out with:
“grand dreams of human-level artificial intelligence and, during the last half-century, enthusiasm for these grand A.I. dreams — both within the A.I. profession and in society at large — has risen and fallen repeatedly; each time with a similar pattern of high hopes and media hype followed by overall disappointment.” (Ben Goertzel, “Human-level artificial general intelligence and the possibility of a technological singularity,” ScienceDirect)
Essentially, hopes for achieving the original vision of “human-level A.I.” became diminished over the years, due to:
* “overoptimistic promises by early AI researchers, followed by failures to deliver on these promises;
* “a deeper understanding of the underlying computational and conceptual difficulties involved in various mental operations that humans, in everyday life, consider trivial and simple.” (Goertzel, “Human-level artificial…”)
So why, you might reasonably ask, am I bringing up artificial intelligence at this particular moment? Well, because since November 30, 2022, when a company called OpenAI (an artificial intelligence lab) introduced an updated version of their language model (LM) — ChatGPT — artificial intelligence is once again being talked about. OpenAI even invited users to work with the program and provide them with feedback. And, after only one week, more than a million people had played around with it.
We’ve been exposed to virtual assistants like Siri and Alexa, which also use A.I., for some time now. But they weren’t particularly helpful and were often made fun of. However, we’ve reached a turning point with artificial intelligence, and perhaps now is the time to make sure that we use these tools both safely and ethically.
Basically, this new system, commonly referred to as GPT-3 (or Generative Pre-trained Transformer), had spent many months:
“learning the ins and outs of natural language by analyzing thousands of digital books, the length and breadth of Wikipedia, and nearly a trillion words posted to blogs, social media and the rest of the internet.” (Cade Metz, “Meet GPT-3. It Has Learned to Code — and Blog and Argue,” NYT, 11/24/2020)
In fact, housed in a building complex in suburban Iowa:
“lies a wonder of modern technology: 285,000 CPU cores yoked together in one giant supercomputer, powered by solar arrays and cooled by industrial fans. The machines never sleep: Every second of every day, they churn through innumerable calculations, using state-of-the-art techniques in machine intelligence that go by names like ‘stochastic gradient descent’ and ‘convolutional neural networks.’ The whole system is believed to be one of the most powerful supercomputers on the planet.” (S. Johnson, “A.I. is Mastering…”)
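If “stochastic gradient descent” sounds like pure jargon, the idea underneath is fairly simple: look at one example at a time, measure how wrong the model is, and nudge its parameters slightly in the direction that would have made it less wrong. Here is a minimal, purely illustrative sketch on a one-parameter toy problem of my own invention; it is nothing like GPT-3’s actual training code, just the same basic recipe in miniature:

```python
import random

# A toy "dataset": pairs (x, y) generated from the rule y = 3x, plus a little noise.
data = [(x, 3.0 * x + random.uniform(-0.5, 0.5)) for x in range(1, 21)]

w = 0.0               # the single parameter the "model" is trying to learn
learning_rate = 0.001

# Stochastic gradient descent: pick ONE example at random, see how wrong the
# current guess is, and nudge the parameter a little to reduce that error.
for step in range(10_000):
    x, y = random.choice(data)
    error = w * x - y
    gradient = 2 * error * x          # derivative of (w*x - y)**2 with respect to w
    w -= learning_rate * gradient

print(f"learned w is roughly {w:.2f} (the true value was 3.0)")
```

GPT-3 does, conceptually, the same kind of nudging, just spread across roughly 175 billion parameters, with the “error” being how badly it predicted the next word in its training text.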
GPT-3 belongs to a category of deep learning known as a large language model. It consists of:
“a complex neural net that has been trained on a titanic data set of text — roughly 700 gigabytes of data drawn from across the web, including Wikipedia, supplemented with a large collection of text from digitized books.” (S. Johnson, “A.I. is Mastering Language…”)
The key new development for GPT-3 is that enhanced computational power and new mathematical techniques have enabled it to ingest far larger data sets than its predecessors.
ChatGPT, built on top of the GPT-3 family of models, interacts in a “conversational way. The dialogue format makes it possible for ChatGPT to answer follow-up questions, admit its mistakes, challenge incorrect premises and reject inappropriate requests.” The interface is quite simple: you write questions and statements to ChatGPT, and it spits out coherent, though occasionally hilariously incorrect, answers. (OpenAI, “ChatGPT: Optimizing Language Models for Dialogue”)
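For the curious, the same back-and-forth can be reached programmatically rather than through the web page. Here is a minimal sketch, assuming the openai Python package (its pre-1.0 interface) and the “gpt-3.5-turbo” model name; both are assumptions on my part and may have changed by the time you read this, so treat it as illustrative rather than authoritative:

```python
import openai

openai.api_key = "YOUR_API_KEY"  # placeholder; substitute your own key

# The "dialogue format": the conversation is just a list of messages,
# and the model produces the next assistant turn.
messages = [
    {"role": "system", "content": "You are a helpful assistant."},
    {"role": "user", "content": "Explain the law of unintended consequences in one sentence."},
]

response = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",   # assumed model name; check OpenAI's current docs
    messages=messages,
)

print(response["choices"][0]["message"]["content"])
```

Append the model’s reply and your next question to the messages list, and you get the follow-up-question behavior the OpenAI blog describes.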
And, with all the data to which it has immediate access, output from this language model, in most cases, sounds very much like it’s coming from a real human. This ability could obviously have a profound impact in many areas. For instance, if you apply its capabilities to academics, students may begin to use ChatGPT to generate term papers and other assignments.
Ben Thompson, a business, technology and media analyst, notes:
“The obvious analogy to what ChatGPT means for homework is the calculator: instead of doing the tedious math calculations students could simply punch in the relevant numbers and get the right answer, every time; teachers adjusted by making the students show their work.” (Ben Thompson, “AI Homework,” 12/5/22, Stratechery)
But, there is a difference when it comes to language. Calculators are deterministic devices, which will always give you the same answer. On the other hand, A.I. output is probabilistic: “ChatGPT doesn’t have an internal record of right or wrong, but rather a statistical model about what bits of language go together under different contexts.” (Thompson, “AI Homework.”)
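To make the calculator-versus-chatbot contrast concrete, here is a deliberately tiny sketch of my own (it is not how ChatGPT actually works, just an illustration of the difference in kind): the calculator function returns the same answer every time, while the toy “language model” samples its next word from a probability table, so two runs can disagree.

```python
import random

def calculator(a, b):
    # Deterministic: the same inputs always give the same answer.
    return a + b

# A toy "statistical model about what bits of language go together":
# given the previous word, each candidate next word has a probability.
next_word_probs = {
    "the": {"cat": 0.5, "dog": 0.3, "weather": 0.2},
    "cat": {"sat": 0.6, "slept": 0.4},
    "dog": {"barked": 0.7, "slept": 0.3},
    "weather": {"changed": 1.0},
}

def babble(start="the", length=3):
    # Probabilistic: each word is *sampled*, so the output varies run to run.
    words = [start]
    while len(words) < length and words[-1] in next_word_probs:
        options = next_word_probs[words[-1]]
        words.append(random.choices(list(options), weights=list(options.values()))[0])
    return " ".join(words)

print(calculator(2, 2), calculator(2, 2))   # always: 4 4
print(babble(), "|", babble())              # e.g. "the dog barked | the cat slept"
```

Neither of the babbled sentences is “right” or “wrong” to the toy model; it only knows which word sequences are more or less probable, which is exactly Thompson’s point.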
The New York Times decided to do a test of ChatGPT, to better understand what it could do and to see if people could tell the difference between the bot’s writing and a child’s. Using real essay prompts from the National Assessment of Educational Progress, they asked the bot to produce essays based on those prompts — telling it to write like a student of the appropriate age. Then, what the bot wrote was put side-by-side with sample answers written by real children.
The Times then approached some experts on children’s writing to take their variation of the Turing test (which is a test of a machine’s ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human):
“They were a fourth-grade teacher; a professional writing tutor; a Stanford education professor; and Judy Blume, the beloved children’s author. None of them could tell every time whether a child or a bot wrote the essay.”(Claire Miller, et al. “Did a Fourth Grader Write This? Or the New Chatbot?”, NYT, 12/26/22)
According to Hedreich Nichols, an educational consultant, with apps like ChatGPT, “our measures of literary understanding or even writing and mathematical ability no longer mean what they once did.” (H. Nichols, “Making New Tech Tools Work for Your Classroom,” Edutopia, 1/13/23)
For some, the approach to this seemingly inevitable problem would be one of trying to identify assignments generated by artificial intelligence tools such as ChatGPT. In fact, Edward Tian, a senior at Princeton, created GPTZero, “a program that promises to quickly detect A.I.-generated text.” (Huang)
While some educators may think that this would be the time to put stricter bans on phones in classrooms or to have IT block the sites altogether (this has been the case in NYC and Seattle), Nichols thinks that we must ask:
“How can we embrace technological innovation and use these tools to amplify student voice and enhance learning?” Nichols goes on to say that:
“Computers, phones and apps are here to stay; so, have honest conversations with students about time management, focus and integrity. Teach students how to use their phones and computers to connect with a plethora of learning resources available online. Having an encyclopedia in your hand is a beautiful thing! …Allow students to learn the self-management necessary to navigate our digitally centered world.” (Nichols)
In fact, across the country, many university professors, department chairs and administrators are already:
“starting to overhaul classrooms in response to ChatGPT, prompting a potentially huge shift in teaching and learning.
Some professors are redesigning their courses entirely, making changes that include more oral exams, group work and handwritten assessments in lieu of typed ones.” (Huang)
Antony Aumann, a professor at Northern Michigan University:
“decided to transform essay writing for his courses this semester. He plans to require students to write first drafts in the classroom, using browsers that monitor and restrict computer activity. In later drafts, students must explain each revision. Mr. Aumann, who may forego essays in subsequent semesters, also plans to weave ChatGPT into lessons by asking students to evaluate the chatbot’s responses.” (Huang)
In addition to the effect that artificial intelligence can have on education, there are some other things to worry about. So, while the ChatGPT system is:
“theoretically designed not to cross some moral red lines — it’s adamant that Hitler was bad — it’s not difficult to trick A.I. into sharing advice on how to engage in all sorts of evil and nefarious actions. …The system, like other A.I. models, can also say biased and offensive things. …an earlier version of GPT generated extremely Islamophobic content, and produced some pretty concerning talking points about the treatment of Uyghur Muslims in China.” (Rebecca Heilweil, “AI is Finally Good at Stuff, and That’s a Problem,” Vox, 12/7/22)
When GPT-3 was first introduced in 2020, there were critics who felt that it was “unsafe, pointing to sexist, racist and otherwise toxic language when asked to discuss women, Black people, Jews and the Holocaust.” (Cade Metz, “Meet GPT-3. It Has Learned to Code — and Blog and Argue,” NYT, 11/24/20)
Essentially, because the GPT language model culls information from thousands of digital books and nearly a trillion words from the internet, it has picked up everyday language that is inherently biased and often hateful, particularly the language of the internet. And so,
“because GPT-3 learns from such language, it, too, can show bias and hate. And because it learns from internet texts that associate atheism with the words ‘cool’ and ‘correct’ and that pair Islam with ‘terrorism,’ GPT-3 does the same thing.” (Metz, “Meet GPT-3…”)
So, while there are concerns with the current capabilities of GPT-3 as far as the inherent bias that exists in this A.I. model — as well as worries about how education will be affected by it, and what jobs in the workplace will be lost to it — I feel that there has already been a lot of thinking by A.I. experts about how to mitigate most of this. Indeed, there were many (including Oxford University researchers) who wondered whether occupations such as telemarketing, hand sewing, brokerage clerking and other jobs that involve repetitive and unimaginative work would be adversely affected. In contrast, jobs deemed most resilient to disruption were thought to be those in the artistic professions, such as illustrating and writing.
Now, to the contrary, current A.I. products do precisely what the Oxford researchers considered nearly impossible:
“They mimic creativity. Language-learning models such as GPT-3 now answer questions and write articles with astonishingly humanlike precision and flair. Image-generators such as DALL-E 2 transform text prompts into gorgeous — or, if you’d prefer, hideously tacky — images.” (Derek Thompson, “Your Creativity Won’t Save your Job from AI,” The Atlantic, 12/22/22)
Well, the one thing we can see so far is that, like humans, A.I. products have their strengths, their weaknesses, and are far from perfect. Even OpenAI, the creator of ChatGPT, knows that it hasn’t solved all the glitches in its language model, and that’s why it encourages users to get back to it with any concerns:
“We know that many limitations remain as discussed above and we plan to make regular model updates to improve in such areas. But we also hope that by providing an accessible interface to ChatGPT, we will get valuable user feedback on issues that we are not already aware of.” (OpenAI Blog, 11/30/22)
So, we know that language models (like ChatGPT) can do many things that humans do: generate tweets, write poetry, make jokes, summarize emails, write computer software, generate structured code, answer trivia questions, translate languages, produce sophisticated legal documents, etc.
But — and here’s the ultimate question for me — just how human is A.I., at the moment, and how human can it become?
Some critics argue that the GPT-3 software is only capable of:
“blind mimicry —it’s imitating the syntactic patterns of human language but is incapable of generating its own ideas or making complex decisions, a fundamental limitation that will keep the L.L.M. approach from ever maturing into anything resembling human intelligence.” (Steven Johnson, “AI is Mastering Language. Should We Trust What it Says?”, NYT, 4/15/22)
Steven Johnson, a popular science author, has written that “A.I. has a long history of creating the illusion of intelligence or understanding without actually delivering the goods.” Some critics, he notes, see “large language models as just ‘stochastic parrots’ — that is, the software using randomization to merely remix human-authored sentences.”
Now, if you’re not quite familiar with the term — and I certainly wasn’t before my eldest son tossed it out recently during a conversation about this very essay you are reading — essentially, we’re talking about “stochastic” as something “randomly determined and having a random probability distribution or pattern that may be analyzed statistically but may not be predicted precisely.” (Oxford Languages) And parrots, of course, are vocal learners who grasp sounds and imitate them.
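In that spirit, here is a tiny stochastic parrot of my own (a toy bigram model, nothing like a real large language model in scale or sophistication): it counts which word follows which in a scrap of “training data,” then stitches new sequences together purely from those observed statistics, with no reference to meaning at all.

```python
import random
from collections import defaultdict

training_text = (
    "the parrot repeats what the parrot hears and "
    "the parrot knows nothing of what the words mean"
)

# Record, for each word, every word observed to follow it.
follows = defaultdict(list)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current].append(nxt)

def parrot(start="the", length=10):
    """Stitch a sequence together using only observed word-to-word statistics."""
    out = [start]
    while len(out) < length and out[-1] in follows:
        out.append(random.choice(follows[out[-1]]))  # haphazard, frequency-weighted pick
    return " ".join(out)

print(parrot())  # e.g. "the parrot knows nothing of what the parrot hears and"
```

Scale that idea up to billions of parameters and nearly a trillion words of text, and you have, very roughly, the critics’ picture of what a large language model is doing.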
Critics of large language models like ChatGPT argue that there must be a curating and careful documentation of datasets rather than ingesting everything on the web. The wisdom of “there’s no data like good data” is being replaced by “there’s no data like more data.” Not only is there more likelihood of bias and other harmful ideologies being distributed and reinforced, “but there are also realistic concerns regarding the environmental and financial costs of unleashing ever-larger amounts of data without initially considering all of its impact on society.” (Emily Bender, et al., “On the Dangers of Stochastic Parrots: Can Language Models Be Too Big?”)
But, let’s get back to the feasibility of A.I. achieving sentience — that is, the ability to have feelings, to experience sensations and emotions. Erik J. Larson, a computer scientist and tech entrepreneur, published “The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do” in 2021. He argues that: “A.I. hype is both bad science and bad for science; … a culture of invention thrives on exploring unknowns, not overselling existing methods. … Inductive A.I. will continue to improve at narrow tasks, but if we want to make real progress, we will need to start by more fully appreciating the only true intelligence we know — our own.” (E. Larson, “The Myth of Artificial Intelligence: Why Computers Can’t Think the Way We Do”)
A review of Larson’s book put it in these terms:
“A tech entrepreneur and pioneering research scientist working at the forefront of natural language processing, Erik Larson takes us on a tour of the landscape to show how far we are from superintelligence, and what it would take to get there. Ever since Alan Turing, A.I. enthusiasts have equated artificial intelligence with human intelligence. This is a profound mistake. A.I. works on inductive reasoning, crunching data sets to predict outcomes. But humans don’t correlate data sets; we make conjectures informed by context and experience. Human intelligence is a web of best guesses, given what we know about the world. We haven’t a clue how to program this kind of intuitive reasoning, known as abduction. Yet it is the heart of common sense. That’s why Alexa can’t understand what you are asking, and why A.I. can only take us so far.” (Amazon.com: Books)
There was an example in June of last year when Blake Lemoine, an A.I. researcher at Google, claimed in an interview that the company’s version of artificial intelligence was “sentient.” And further, he said he had evidence that Google and its technology engaged in “religious discrimination.” Google responded, stating that while “its systems imitated conversational exchanges and could riff on different topics, it did not have consciousness.” Lemoine was fired for his actions. (N. Grant and C. Metz, “Google Sidelines Engineer Who Claims Its A.I. is Sentient,” NYT, 6/12/22)
For his part, Lemoine told the Washington Post, in the interview that caused him to lose his job: “I know a person when I talk to it. It doesn’t matter whether they have a brain made of meat in their head. Or, if they have a billion lines of code.” (Grant/Metz)
It seems that the consensus among most knowledgeable A.I. experts and researchers is that, currently, artificial intelligence has not achieved sentience. Dr. Alison Gopnik, a psychology professor who is part of the A.I. research group at the University of California, Berkeley, stated that:
“The computational capacities of current A.I., like the large language models, don’t make it any more likely that they are sentient than rocks or other machines are.” She also went on to say: “We call it ‘artificial intelligence,’ but a better name might be ‘extracting statistical patterns from large data sets.’” (C. Metz, “A.I. Is Not Sentient. Why Do People Say It Is?” NYT, 8/11/22)
Finally, Gopnik — who specializes in child development — concludes with: “These things are not even in the same ballpark as the mind of the average two-year-old. In terms of at least some kinds of intelligence, they are probably somewhere between a slime mold and my two-year-old grandson.” (C. Metz, “A.I. Is Not Sentient…”)
Well, I guess we know where she stands on the issue of A.I. currently possessing human sentience.
And Professor Colin Allen at the University of Pittsburgh, who explores cognitive skills in both animals and machines, supports these critical opinions by offering: “The dialogue generated by large language models does not provide evidence of the kind of sentience that even very primitive animals likely possess.” (C. Metz, “A.I. Is Not Sentient…”)
Steven Johnson, the popular science author, says that at this point in the evolution of artificial intelligence, it’s important to note that with software such as GPT-3, it is not yet:
“self-aware or sentient. L.L.M.s are not conscious — there’s no internal ‘theater of the mind,’ where the software experiences thinking in the way sentient organisms like humans do.” (S. Johnson, “A.I. is Mastering Language…”)
Melanie Mitchell, a Santa Fe Institute scientist, in a column last year, went to the heart of the matter:
“The crux of the problem, in my view, is that understanding language requires understanding the world, and a machine exposed only to language cannot gain such an understanding.” (S. Johnson, “A.I. is Mastering…”)
When asked if humans will ever create “sentient artificial intelligence,” computer scientist Yejin Choi says that she is skeptical. (Choi, by the way, is a 2022 recipient of the prestigious MacArthur “genius” grant and has been doing groundbreaking research on developing common sense and ethical reasoning in A.I.) Given her current standing in the A.I. community, perhaps her opinions should be taken very seriously. While A.I. is very good at processing lots of data and can, say, beat most people in a game of chess, she notes that “what’s easy for machines can be hard for humans, and vice versa.” She opines that “You’d be surprised how A.I. struggles with common sense.” (D. Marchese, “An A.I. Pioneer on What We Should Really Fear,” NYT, 12/21/22)
Choi goes on to explain what she means by “common sense” in this context:
“A way of describing it is that common sense is the dark matter of intelligence. Normal matter is what we see, what we can interact with. We thought for a long time that that’s what there is in the physical world — and just that. It turns out that’s only 5 percent of the universe. Ninety-five percent is dark matter and dark energy, but it’s invisible and not directly measurable.” (D. Marchese, “An A.I. Pioneer…”)
So, as far as A.I. currently possessing self-awareness, consciousness and sentience akin to the level of human intelligence, we’re not there yet. Nor does the present technology possess “qualia” (instances of subjective conscious experience). And there is no “internal theater of the mind” either. Essentially, when it comes to the sentience that defines human beings, for the current A.I. there is no there there.
Listen, I’m all for genuine progress, technological or otherwise. Constant innovation, discovery and the evolution of knowledge are how we have not only progressed, but survived, as human beings. I guess what concerns me at this point is whether we (or those truly in the know) can be certain about what this new technology will unleash. By setting loose the powerful and far-reaching new capabilities of large language models like ChatGPT, will we be opening a Pandora’s box or awakening a sleeping giant? Perhaps letting the genie out of the bottle, with no chance of putting it back? What I’m really saying is that until we can ascertain or reasonably predict what all of this will bring, maybe A.I. researchers might consider putting into place a plan B, a back-up, a fail-safe. (And, unbeknownst to the general public, perhaps they have.) At any rate, it would seem wise to proceed with a bit more caution and to tread a tad more lightly. Ah, the law of unintended consequences.
We’ve all heard the phrase about “imitation being the sincerest form of flattery.” Well, to that I’ll add the words of Edmund Burke, the Anglo-Irish statesman, who wrote: “Flattery corrupts both the receiver and the giver.” I’ve thought about what that statement might mean in the context of an ever-more sophisticated A.I. It seems to me that technologists are attempting to create something that will ultimately be a very close imitation of a human being. So, if A.I. scientists seek to bring something into existence with the sentience of actual humans, are they playing god? And, should they?
Around 850 BC, Homer told us about Hephaestus, the god of fire and the divine smith, who was crippled and had to fashion “attendants” to help him walk and assist him in his forge. Homer described those so-called attendants:
“These are golden in appearance like living young women. There is intelligence in their hearts, and there is speech in them and strength, and from the immortal gods, they have learned how to do things.” (McCorduck)
Pamela McCorduck feels that:
“for humans to behave like gods — because godlike it is to imbue the inanimate with animation — is hubris indeed.” (McCorduck)
If religion is to be considered at all in these calculations, revisiting the Ten Commandments, particularly the Second Commandment, would yield this admonition:
“Thou shalt not make unto thee any graven image or any likeness of any thing that is in heaven above or that is in the earth beneath, or that is in the water under the earth. Thou shalt not bow down thyself to them nor serve them, for I the Lord thy God am a jealous God.” (McCorduck)
So, here’s a question for you: There is science, say in the form of A.I. technology, and there is religion. Do they, can they, coexist? Well, they actually do coexist, as two different ways to look at the world. Victor Stenger, Emeritus Professor of Physics at the University of Hawaii, tells us that:
“Science is based on observation and reasoning from observation. Religion assumes that human beings can access a deeper level of information that is not available by either observation or reason.” (Ineos, R. Longden, publisher)
Sometimes, when we talk about religion, I feel that we are really referring to that which is spiritual. So, in my view, it might have more meaning to contrast science not with God, or with religion per se, but with spirituality.
To me, spirituality involves an inward journey, a solitary experience, an attempt to find a sense of both peace and purpose. Above all, it is personal and not institutional. And, I think it relates to the dark matter of intelligence that A.I. computer scientist Choi mentioned. Spirituality seems to speak to the intuitional side of humans — essentially what makes a human human; and that is an intelligence characterized by emotion, consciousness, unspoken and implicit knowledge of things — and a hint of mystery. Not always easy to describe, but you’ll know it when you experience it.
As Choi explained in further detail regarding dark matter:
“We know it exists, because if it doesn’t, then the normal matter doesn’t make sense. So, we know it’s there, and we know there’s a lot of it. We’re coming to that realization with common sense. It’s the unspoken, implicit knowledge that you and I have. It’s so obvious that we often don’t even talk about it. For example, how many eyes does a horse have? Two. We don’t talk about it, but everyone knows it. We don’t know the exact fraction of knowledge that you and I have that we didn’t talk about — but still know — but my speculation is that there’s a lot.” (D. Marchese)
In a research paper on artificial intelligence from 2007, the author stated that there were several reasons why expectations for achieving “human-level A.I.” had been reduced; one of them concerned what was missing in the then-current A.I. research:
“A deeper understanding of the underlying computational and conceptual difficulties involved in various mental operations that humans, in everyday life, consider trivial and simple.” (Ben Goertzel, “Human-level artificial general intelligence and the possibility of a technological singularity,” ScienceDirect, 2007)
I’ll finish with this: In addition to introducing the latest version of ChatGPT, OpenAI has also recently provided a new tool to try to determine whether a hunk of text that you provide is A.I.-generated or not. This is what happened with an initial test:
“In one example, given the first lines of the Book of Genesis, the software concluded that it was likely to be A.I.-generated. God, the first A.I.” (Ian Bogost, “ChatGPT is About to Dump More Work on Everyone,” The Atlantic, 2/3/23)
Thinking back to Robert Frost’s “Mending Wall,” I’m reminded of the phrase: “Something there is that doesn’t love a wall.” I’ve always thought what the great poet meant is that there was some unidentified force, some power, that not only saw no need for a wall, but would do everything it could to prevent one from surviving and remaining intact. Some mysterious element behind the scenes that intercedes to alter the dynamics of a situation, just enough. Do I think it’s God, or Nature? Yes, probably, take your pick, don’t know for sure. But, whatever the unknown force is that keeps knocking down walls, I feel there is a similar force engaged to keep scientists from ever achieving human sentience with artificial intelligence. Perhaps, regarding the latest advances in artificial intelligence — and what’s to come in the near future — maybe something there is that doesn’t love A.I.…