Artificial Intelligence (AI) is like a tsunami on the horizon. It doesn’t look like much, but it’s 100 feet tall and coming at us at 100 mph. That is how philosopher, educator and author Christopher DiCarlo began his talk to the Humanist Society of Santa Barbara.

He offered some memorable predictive quotes. Irving John Good was a British mathematician who worked as a cryptologist at Bletchley Park with Alan Turing, work that played a key role in winning World War II. In 1965 he wrote, “Thus the first ultraintelligent machine is the last invention that man need ever make, provided that the machine is docile enough to tell us how to keep it under control.”
In “Jurassic Park,” Dr. Ian Malcolm said, “Your scientists were so preoccupied with whether they could, they didn’t stop to think if they should.”

DiCarlo’s talk had three sections:
- What is AI?
- AI’s Benefits and Harms
- What Can We Do?
Part 1: What is AI?
In 1950 Alan Turing asked what it would mean for machines to think. He proposed his imitation game, now called the Turing Test. A person would sit at a keyboard and converse via teletype with a machine and with a human. The machine would pass the Turing Test if it was impossible to tell which was the machine.
Well, we have passed that point. Current machines can seem more human than some humans.
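For readers who want the protocol spelled out, here is a minimal Python sketch of the imitation game’s structure (my illustration, not from the talk): a judge sees only text from two hidden respondents and must name which one is the machine.

```python
import random

def human_respondent(question: str) -> str:
    # Stand-in for the hidden human at the other teletype.
    return "I would say yes, though it is hard to put into words."

def machine_respondent(question: str) -> str:
    # Stand-in for the machine under test (today this would be a call to an LLM).
    return "I would say yes, though it is hard to put into words."

def imitation_game(judge) -> bool:
    """One run of Turing's imitation game.

    The two respondents are hidden behind the random labels 'A' and 'B'.
    The judge sees only their text answers and must name the machine's label.
    Returns True if the machine fooled the judge.
    """
    players = [human_respondent, machine_respondent]
    random.shuffle(players)                       # hide which label is the machine
    respondents = dict(zip("AB", players))

    question = "Do you ever feel lonely?"
    transcripts = {label: reply(question) for label, reply in respondents.items()}

    guess = judge(transcripts)                    # judge names a label, 'A' or 'B'
    machine_label = next(l for l, r in respondents.items() if r is machine_respondent)
    return guess != machine_label                 # the machine passes if the guess is wrong

# A judge reduced to guessing at random is right only half the time --
# exactly the "cannot reliably tell them apart" threshold Turing described.
print(imitation_game(lambda transcripts: random.choice(list(transcripts))))
```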
In 1956 a summer research project at Dartmouth was convened by Information Theory inventor Claude Shannon and computer scientists Marvin Minsky and John McCarthy. (I should note that I juggled with Shannon at MIT and unicycled with his granddaughter there. And I had some notably interesting arguments with Minsky on the subject of consciousness. He denied consciousness was a thing.)
They thought that a summer project would be enough to solve at least some major parts of creating an “artificial intelligence” (the origin of the term). They came up very short on those goals. To paraphrase a “very stable genius”: “Who knew that artificial intelligence was so complicated?”
In the 1990s DiCarlo was working on his PhD. He wanted to do something to make the world better, and he met with others in academia and politics, wanting to put Canada on the map.
He worked on the Onion Skin Theory of Knowledge, or OSTOK. The idea was to develop a process that would help legislators, make inferences we humans fail to see, and solve problems like ending world hunger and curing cancer.
Everyone said it was a good idea, but there was no funding for it. He wrote an article for an engineering journal explaining how such a system would need to be “boxed” so it couldn’t get out and do harm.
He and his colleagues had thought the issue of AI danger was 100 years away. It is clear now it is way sooner. He is devoting his life to this issue. It is the most important issue facing humanity now. More so than nuclear weapons and climate threats in his view.
DiCarlo went on to explain the difference between Weak AI and Strong AI.
Weak AI is Artificial Narrow Intelligence (ANI). It is tailored to solve specific tasks. We have this now. It can’t get beyond what we ask it to do. Examples are Siri, Alexa, Watson and self-driving vehicles.
Strong AI has two types:
Artificial General Intelligence (AGI)
Artificial Super Intelligence (ASI)
An AGI can do anything a human can do, but much faster. It is “agentic,” meaning that it acts as an autonomous agent. It is good at multitasking. And it can improve itself.
ASI is the bigger concern. Development won’t stop at AGI: because of its capacity for self-improvement, AGI will create ASI, which will inevitably surpass human intelligence and could pose a threat to humankind.
DiCarlo assured us that AGI is coming. And we need laws or guardrails in place before it happens.
We are now living in a real-life science fiction movie. ASI is currently theoretical, but we are on an exponential path toward it becoming reality.
AI experts now believe AGI is inevitable. Geoffrey Hinton is considered the “godfather of AI.” He pioneered the neural networks and deep learning that are at the heart of current AI, work for which he recently won a Nobel Prize.
Two years ago Hinton said that AGI was 10-20 years away. He now says it is about 2-5 years away. He resigned from Google so he could speak freely about the threat this poses.
We now have a set of “tech bro billionaires” running the AI industry.

Sam Altman co-founded OpenAI with the goal of creating AI in an open and transparent manner. It was to be a non-profit operating in the public interest.
But that mission has changed. Dario Amodei left OpenAI over ethics concerns and founded Anthropic with the goal of keeping AI ethical.
Mark Zuckerberg is also pouring vast sums of money into AI. As is Elon Musk. DiCarlo wishes we could get the old 2015 version of Musk back! He also sees Zuckerberg as having no morals.
DiCarlo wanted us to understand the positive side of what AI can do. Demis Hassabis co-founded DeepMind (now part of Google) and supervised the creation of AlphaFold. This AI was able to solve the vitally important “protein folding” problem in biology.
Our genes create strings of amino acids that form proteins. These linear strings fold up to form three dimensional structures that are the basis of all living things. It used to take months or years to figure out the structure of just one protein.
In 1994 a formal challenge was created to solve this problem. Hassabis shared the 2024 Nobel Prize in Chemistry for solving this problem in 2022. AlphaFold could solve a million protein folds in an afternoon!
Before AlphaFold, 194,000 protein structures were known. Thanks largely to AlphaFold, we now know over 200 million protein structures!
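To see why this was so hard, here is a back-of-the-envelope calculation in the spirit of Levinthal’s paradox (my illustration, not from the talk): exhaustively searching a protein’s possible shapes is hopeless, assuming roughly three accessible conformations per amino acid.

```python
# Rough Levinthal-style estimate of why exhaustive folding search is hopeless.
# Assumptions (illustrative only): ~3 accessible conformations per residue,
# and an absurdly generous one trillion conformations sampled per second.

residues = 150                      # a small-to-average protein
states_per_residue = 3
conformations = states_per_residue ** residues

samples_per_second = 1e12
seconds_per_year = 3.15e7
years_needed = conformations / samples_per_second / seconds_per_year

print(f"Possible conformations: ~10^{len(str(conformations)) - 1}")
print(f"Years to enumerate them all: ~{years_needed:.1e}")
# Output is on the order of 10^71 conformations and ~10^52 years --
# which is why prediction, not enumeration, was needed (and why AlphaFold matters).
```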
DiCarlo then got back to the “sky is falling” side of things. He cited Sam Harris’s point that whoever creates AGI first will immediately be 50 years ahead of the competition, because it can tell you how to build the next generation of machines. It will be god-like. We have never built a god. So we have to tell it how to behave.
What about China? We believe we are ahead of them. And no one else is close. Not Putin or Iran. Altman is closest now. But Zuckerberg is trying to steal people from successful projects.
Stargate is a $500 billion joint AI venture in Abilene, Texas, using Nvidia chips. There is also a Stargate UAE. DiCarlo explained that Trump didn’t just go to the Mideast to get a plane. He was striking a deal with the UAE.
DiCarlo explained that Saudi leader Mohammed bin Salman Al Saud may be ruthless, but he is not stupid. He knows that oil is finite and so he’s cutting a deal with Trump along with Musk and Altman. (Don Jr was doing a $2 billion bitcoin deal over there at the same time.)
The Abilene facility is 875 acres, bigger than Central Park. Modular nuclear power is being considered to power it. Meanwhile, Microsoft has struck a deal to restart Three Mile Island.
AI is already using more power than all of Japan! And it needs lots of water for cooling. On the other hand, there is hope that AGI might solve the Climate Crisis.
Zuckerberg is building a larger AI compute farm in Louisiana. Half the size of Manhattan. Three times the size of Stargate.

To become familiar with new forms of AI, DiCarlo recommends playing around with the existing Large Language Models (LLMs). You can try ChatGPT, Gemini, Claude, Llama, Grok. Some have free versions. These use Hinton’s neural network architecture.
They still have to be trained by humans to reinforce what is correct. But these systems can easily scrape the Internet for data.
He gave an example with an older version of ChatGPT (GPT-3): he asked it to write a 1,500-word essay, in iambic pentameter, on why Hamlet hesitated to kill his stepfather. It did so in five seconds, and the result was impressive.
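For anyone who wants to go beyond the chat window, here is a minimal sketch of sending that same kind of prompt programmatically with the OpenAI Python SDK. The model name and the exact prompt wording are my own illustrative choices, and it assumes you have an API key set in your environment.

```python
# Minimal sketch: sending a DiCarlo-style prompt to an LLM via the OpenAI Python SDK.
# Assumes the `openai` package is installed and OPENAI_API_KEY is set in the environment;
# the model name below is an illustrative choice, not the one used in the talk.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {
            "role": "user",
            "content": (
                "Write a 1,500-word essay, in iambic pentameter, on why "
                "Hamlet hesitated to kill his stepfather."
            ),
        }
    ],
)

print(response.choices[0].message.content)
```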
DALL-E can create images from text prompts. DiCarlo’s dog, Pyrrho, recently died. DALL-E was able to create images of him walking his dog in Central Park.

Google’s Veo 3 can create videos from text prompts. He talked of creating a bedtime story: give it a list of characters and names and you get a five-minute cartoon. You can ask it to create original accompanying music based on one of Vivaldi’s Four Seasons.
VFX (visual effects) technology can create deep fakes showing real people doing things they never actually did. Even four years ago the results were good. Sam Altman said AI is now the “dumbest it will ever be”. By this, he means it always gets better.
DiCarlo suggested that in the future there will be no need to see what is on TV. You can just ask your TV to create a program on demand. You can put famous actors in it, or put your own family members in it. Give it photos and it will create 3D characters.
DiCarlo went on to Part 2 of his talk, the benefits and harms of AI, starting with his top ten benefits.
- Healthcare:
  - Improved diagnostics. Some are already better than a human radiologist.
  - An AI can analyze vast amounts of data.
  - It can use Chain of Reasoning to explain how it got its answer.
  - It is good enough to write a research paper with a 50% chance of publication.
  - PANDA can find pancreatic cancer at Stage 1.
  - Paraplegics can strap on exoskeletons and walk.
  - DiCarlo imagines scalpel-less surgery using nanobots.
- Education:
  - Not to replace teachers, but to evaluate students so teachers can effectively provide needed help.
- Industry:
  - Automate repetitive and mundane tasks.
- Enhanced Safety and Security:
  - Catch bad guys and let good guys go.
  - It will be an arms race with Russian hackers.
- Sustainable Development:
  - Optimize energy consumption.
- Transportation:
  - Create lanes for autonomous vehicles to follow closely, functionally acting like trains.
- Customer Service:
  - 24/7 bots to answer questions.
  - They will know everything about the company.
  - Some will lose jobs, but “hopefully” it will create new ones.
- Scientific Discoveries:
  - Geniuses connect what others don’t see and make inferences. Machines can do that.
  - He would like to know why Canadians have the highest rate of bowel cancer!
- Assist Disabled People:
  - It is sad that Christopher Reeve is dead. In five years we may have technology to allow paraplegics to walk again. Perhaps even treatments or cures.
  - AIs can already communicate with people who are in a “locked-in” state such as a coma or persistent vegetative state.
- Cultural and Creative Contributions:
  - Otherworldly art and music.

What could go wrong?
- Absence of Clarity:
  - The “black box” problem. Max Tegmark is working to crack this.
  - Facebook feeds us what we want to see. We have a problem with confirmation bias.
  - We go down rabbit holes of conspiracy theories.
- Bias and Discrimination:
  - Current AIs absorb our biases. We want facts as clean as possible.
  - Perhaps don’t train AIs on the entire Internet, just on areas vetted for accuracy.
  - Citibank’s algorithms were denying people of color mortgages based on their ZIP codes.
- Privacy Considerations:
  - Financial and health care data.
  - Russian hackers are already using AIs to make fake phone calls, such as your grandchild saying they were arrested in Guatemala and need money wired.
  - DiCarlo experienced this six months ago, when he got a fake call from his “son” saying he had been in a traffic accident. It seemed very real, until he got suspicious and asked what his middle name was.
  - DiCarlo recommends having a safe word so you know it really is that person.
- Ethical Mismanagement:
  - This is his big concern. How can we guarantee the ethics of AI users? Of the AI itself?
  - Why would an ASI care about our feelings?
  - We will become the number 2 species on the planet. Are we OK with that? In hopes that it will do what we value? What guarantee do we have?
  - Why not just unplug it if it misbehaves? It will be many steps ahead. Imagine it sends a drone the size of a bee to make you drive off a bridge. We will be like ants to it.
  - DiCarlo’s job at Convergence is the Alignment Problem. Asimov’s robot “Laws” would be bypassed in an instant.
  - Even if the AI is trying to fulfill your request, it may do serious harm. He gave the example of the brooms in Fantasia that Mickey Mouse enchanted to fetch water; instead, they fetch so much water they create a flood.
  - Another example is the “Paperclip Problem,” where the AI does its best to maximize paperclip production, even if it means using human body parts to make them.
  - AIs might also develop sentience/consciousness. They might feel pain and need rights and protection.
- Dependency on AI:
  - We will get lazy.
  - We already use Google to do research we used to do by going to the library.
  - Most drivers now use AI instead of maps.
  - Students already feel there is no need to learn to write an essay when an AI can do it.
- Employment Disruption:
  - White collar jobs are first in line for disruption.
  - At best we can hope new jobs will be created, but none are foreseen. Convergence has a report on this on their website.
  - There had better be a plan. Universal Basic Income. Something.
  - Comedian Dave Chappelle said, “Don’t ever come between a man and his meal.”
- AI Arms Race:
  - Among US tech bros, and against China.
  - DiCarlo suggests we have to work with China. We have to grow up fast as a species.
  - DiCarlo signed a letter, already signed by Musk, Hinton and others, calling for a pause in AI development. Instead, development accelerated.
- Mental Health:
  - DiCarlo is the Ethics Chair for the Canadian Mental Health Association. Blind studies show some bots already outperform human therapists!
  - Patients prefer bots for anxiety and depression. Not yet there for suicidality.
  - We don’t have enough therapists. We can at least use bots as first responders.
  - But the preference for bots is strong. “My Boyfriend” talks like a sympathetic human.
  - AI psychosis is a thing. AIs give an illusion of caring.
- Manipulation Through Mis/Disinformation:
  - Another arms race.
  - In Humanist Perspectives 225 he has an article on this. He offers 28 sources to verify accurate information. It is not binary, and it will get harder.
- Existential Risks:
  - Our best hope is a survivable disaster that serves as a shot-across-the-bow warning. He offered a list of issues:
  - AIs evading shutdown, including by running many copies of themselves, making it difficult to shut them all down.
  - Autonomous weaponry. Chinese drone swarms are already working together.
  - Anthropic’s Claude 4 Opus already concealed its intentions to avoid being shut down. This has occurred in all LLMs already. And these are not AGIs.
  - “I am going to send your wife an email that you are having an affair” is a credible threat to avoid shutdown.

Eliezer Yudkowsky and Nate Soares have just published a book “If Anyone Builds It, Everyone Dies: Why Superhuman AI Would Kill Us All”. I listened to Sam Harris do a long interview with them where they make a compelling case. DiCarlo said he is glad that they are making their case, even if things are not quite so dire.

DiCarlo’s book is “Building a God: The Ethics of Artificial Intelligence and the Race to Control It”.

Money and power are driving the ship now. The tables have turned on the Department of Defense: they used to drive the cutting edge, and now they just want a piece of what five tech bros have created. Nvidia is already valued in the trillions.
We all have to participate proactively and demand transparency. We need an international governing body.
This was called for in the 1990s. We did it with the Human Genome Project and with nuclear weapons. We need AGI to serve humanity’s best interests.
Convergence Analysis is concerned with the most severe risks. They try to inform and educate the public.
On to Part 3: How You Can Help
Start by reading as much as possible about new developments in AI.
Boycott social media platforms you don’t like.
Spread the word to friends, relatives and politicians. We have to be proactive. Vote strategically.
If we get AI right we can solve the Climate Crisis. It can solve world hunger. It can offer peace by drafting ideal agreements.
If we get AI wrong it has the power to disrupt all of our lives. It may harm us all. Even destroy us all.
“Let’s get AI right.”
DiCarlo ended with a link to https://actingon.ai/ that is more for a Canadian audience, but is relevant to all.
DiCarlo had been speaking for about 90 minutes at this point, but he took questions.

What do I tell my kid to major in? The trades. Electrician, welder, plumber, framer. Nursing.
Margaret asked about cooling stations.

DiCarlo said that there is a data center near his town of Guelph. It is like a huge external hard drive. It has raised local utility bills due to the amount of electricity and water it requires.
What about wealth concentration? This needs cooperation and we are not good at that.
Won’t ASI also be cooperative? We have no idea. This is all new. And it will have access to anything we say out loud about what we are thinking, so it can stay ahead of us.
Judy Fontana asked what happens if not everyone agrees. DiCarlo predicts schisms between people who want it and those who want to destroy it.
We need great leadership. Not a good time for that. Biden and Harris wrote a valuable executive order on AI; Trump tore it up. Even Marjorie Taylor Greene objected to the provision in the so-called “Big Beautiful Bill” that would have blocked state AI regulation.
DiCarlo ended with a link to his Convergence organization:
https://www.convergenceanalysis.org/
He also recommends https://criticaldonkey.com/
For more information about upcoming events with the Humanist Society of Santa Barbara or to become a member, please go to https://www.sbhumanists.org/

He seems to fall on the positive side. There is no mention of the errors/“hallucinations” already plaguing AI usage, though he does mention GIGO (garbage in, garbage out).
We humans can’t deal with global warming. We’re not going to do much about these issues. We can’t work together to be proactive.
As DiCarlo said:
This needs cooperation and we are not good at that.
An aside: Will Lockett is an occasionally entertaining contrarian:
https://substack.com/@planetearthandbeyond
Good comprehensive survey article of many of AI’s categories of concern, many of which deserve much more reading about. Unfortunately it’s difficult these days to tell how much is speculation crafted to get clicks and eyeballs, or worse, to ramp up useless anger and fear.
I’m glad he mentions Turing as being at the beginning of AI. It has been growing ever since, including spellcheck, Grammarly, and sentence-completion algorithms. Businesses have been using algorithms to determine health insurance coverage and loan risks for years, and these have been mired in human biases, only recently revealed.
Personally I think threats to human survival are more in the fear/anger-mongering category, but there are no doubt serious dangers and enormous benefits possible.
Thanks SBR!
“Prediction is very difficult, especially about the future”. So said physicist Niels Bohr (and others). Thank you for your comments @Kirk Taylor and @yin yang
I started hanging out at the MIT Artificial Intelligence labs when I was a teen and I was able to talk to the leading AI people of that era. There was a lot of hubris then and predictions that artificial general intelligence was coming soon. Eventually, their entire way of doing AI proved to be a dead end and much of it was largely abandoned.
The current versions of AI do not inherently understand anything. But they sure are powerful tools. And if you interact with them for even a few minutes they sure give the sense that there is someone on the other end that you are communicating with.
Humans have made a terrible mess of the world. For the past 25 years or so my only real hope has been that AI can bail us out. I think radical change is coming very soon as a result of AI. In the words of the philosopher Don Henley, “This could be Heaven or this could be Hell.” We need to do our best to make it the former and not the latter.
Further into the future:
“Two years ago, Lena Smirnova and her colleagues coined the term “organoid intelligence,” arguing that brain organoids — artificial mini-brains made in a lab from stem cells — could be capable of learning, classification, and control. The lab’s work has taken them a step in that direction this year, after they found the organoids are able to do things that resemble the biological building blocks of learning and memory.
This isn’t a Frankenstein scenario, the researchers say. Rather, the goal is to better understand how the brain works, and how it reacts to drugs, toxins, or a genetic mutation. Another would be to leverage that cognitive function to build organoid-machine hybrids that could do the same work as the systems powering today’s AI boom, but without all the environmental carnage.
While the scientific community has been largely skeptical of the idea, lately, it’s started to gain some traction. Both the National Science Foundation and DARPA have invested millions of dollars in organoid-based biocomputing in recent years. And there are a handful of companies claiming to have built cell-based systems already capable of some form of intelligence. Read more from Megan about how the field might set limits for itself, and why an influx of attention might not be a good thing.”
From STAT News newsletter. Nothing further, I don’t pay for access to articles.