Artificial intelligence is getting smarter, quickly. The technology’s recent breakthroughs, manifesting in the impressive capabilities of applications like ChatGPT, have sparked fear that A.I. may soon take over humanity—and not in a good way. Last year, a Google engineer claimed the company’s A.I. chatbot LaMDA was so intelligent that it had become “sentient.” This year, alarmed by the potential dangers of A.I., a group of more than 1,000 tech entrepreneurs and academics, including Elon Musk, called in March for a six-month pause on training A.I. systems more advanced than OpenAI’s GPT-4, the newest language model powering ChatGPT.
While large language model (LLM) applications, such as ChatGPT and Google’s Bard, have shown the potential to outperform humans at many tasks and to replace jobs, they are by no means the same as the human brain, because the underlying learning mechanisms are different, David Ferrucci, a computer scientist and early pioneer of commercial A.I. applications, told Observer.
Ferrucci is best known as the creator of IBM Watson. Developed in the late 2000s to answer questions on the television quiz show Jeopardy!, the computer system went on to beat the show’s human champions in 2011.
“When the Jeopardy! challenge was proposed in early 2007, I was the only one in IBM Research, even in the academic community, who thought it could be done and basically signed up to pursue it,” Ferrucci told Observer in an interview.
At its core, IBM Watson is a machine learning-based system that learned how to answer Jeopardy! questions by digesting large amounts of data from previous shows. It came out at a time when deep learning, a subset of machine learning, was starting to take hold. Before that, computer systems relied heavily on human programming and supervision.
In 2012, shortly after Watson’s blockbuster success, Ferrucci left IBM after 18 years to lead A.I. research for Bridgewater Associates, the world’s largest hedge fund. For most of the past decade, Ferrucci’s work has focused on developing hybrid A.I., which seeks to combine data-driven machine learning with logical reasoning—in other words, to train algorithms to “think” more like humans.
In 2015, Bridgewater seed-funded an internal project led by Ferrucci that eventually spun off as an independent company called Elemental Cognition. Elemental Cognition’s hybrid A.I. applications can be used in investment management, logistics planning, and drug discovery, according to its website. In February, the startup signed on Bridgewater as a client.
In an interview with Observer earlier this month, Ferrucci discussed the different learning processes of ChatGPT and the human brain, the necessity of hybrid A.I., and why he thinks the proposal for a six-month A.I. pause is more symbolic than practical.
The following transcript has been edited for clarity.
What exactly is hybrid A.I.?
Hybrid A.I. combines a data-driven, inductive process with a logic-driven process. Machine learning is a data-driven process; it only gets better as more and more training data becomes available. But to communicate with humans, you also need logic and reasoning.
Human cognition sort of works the same way, as explained in Daniel Kahneman’s book Thinking, Fast and Slow. The human brain works by thinking fast and slow at the same time. To achieve precise, reliable decision-making, you need the best of both worlds.
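To make that combination concrete, here is a minimal, hypothetical sketch in Python of how a data-driven guess might be paired with an explicit logic check. The model, rules, and threshold below are invented for illustration; they are not Elemental Cognition’s actual system.

```python
# Toy illustration of hybrid A.I.: a data-driven guess ("fast" thinking) is
# accepted only if it also passes explicit, human-readable rules ("slow"
# thinking). Every name and rule here is made up for illustration.

def statistical_guess(features):
    """Stand-in for a machine-learned model: returns (answer, confidence)."""
    # Pretend the model learned that high term overlap implies a good match.
    overlap = len(features["question_terms"] & features["candidate_terms"])
    confidence = overlap / max(1, len(features["question_terms"]))
    return features["candidate"], min(1.0, confidence)

def passes_logic_rules(answer, facts):
    """Stand-in for a reasoning layer: checks explicit rules against known facts."""
    rules = [
        lambda a, f: a in f["allowed_answers"],   # answer must be of the expected type
        lambda a, f: a not in f["contradicted"],  # answer must not contradict known facts
    ]
    return all(rule(answer, facts) for rule in rules)

def hybrid_answer(features, facts, threshold=0.6):
    """Combine both: answer only when the data-driven guess and the logic agree."""
    answer, confidence = statistical_guess(features)
    if confidence >= threshold and passes_logic_rules(answer, facts):
        return answer
    return None  # abstain rather than return a fluent but unjustified answer

features = {
    "candidate": "Toronto",
    "question_terms": {"largest", "city", "canada"},
    "candidate_terms": {"toronto", "largest", "city", "canada"},
}
facts = {"allowed_answers": {"Toronto", "Montreal"}, "contradicted": set()}
print(hybrid_answer(features, facts))  # -> Toronto
```

The point is structural: the statistical component proposes an answer and the explicit rules vet it, which is one simple way to read “the best of both worlds.”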
How is fast thinking different from slow thinking? Why do we need both?
Fast thinking is extrapolating from our experience, or data, and then generalizing. Generalization can be wrong, however, because it’s based on superficial features that might correlate in the data but aren’t really causative—this is the foundation of prejudicial thinking.
Slow thinking is formulating a model of how I think things work: What are my values? What are my assumptions? What are my inference rules? And what is my logic for drawing a conclusion?
When we talk about A.I. today, we tend to automatically think of machine learning, which, as you said, is a data-driven process. Are there any real-world examples of purely logic-driven A.I.?
Yes, logic-driven A.I. has been assimilated into many real-world applications. Formal representations of problem-solving logic, like rule-based systems or constraint-solving and optimization systems, are being used for resource management, scheduling, planning, control and execution applications.
But we don’t think of them as A.I. anymore, largely because, with the big data and machine learning revolution, A.I. became strongly associated with machine learning systems.
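As a hedged illustration of that older, logic-driven style, here is a tiny constraint-satisfaction sketch in Python. The tasks, slots, and rules are invented for the example; real scheduling and planning systems use far more capable solvers.

```python
# Minimal constraint-solving sketch: assign tasks to time slots so that every
# explicitly stated rule is satisfied. No training data is involved; all of
# the "knowledge" lives in the rules. The tasks and rules are invented.

from itertools import permutations

tasks = ["load_truck", "drive_route", "unload_truck"]
slots = [1, 2, 3]

# Constraints expressed as logic over a candidate assignment {task: slot}.
constraints = [
    lambda a: a["load_truck"] < a["drive_route"],    # load before driving
    lambda a: a["drive_route"] < a["unload_truck"],  # drive before unloading
    lambda a: a["load_truck"] == 1,                  # loading must happen first
]

def solve():
    """Search assignments exhaustively and return one that satisfies all rules."""
    for perm in permutations(slots, len(tasks)):
        assignment = dict(zip(tasks, perm))
        if all(rule(assignment) for rule in constraints):
            return assignment
    return None

print(solve())  # -> {'load_truck': 1, 'drive_route': 2, 'unload_truck': 3}
```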
Where do LLMs like GPT and LaMDA stand on the fast/slow thinking spectrum? Are they really close to human intelligence, as a Google engineer claimed last year?
LLMs produce large data structures that capture the statistical probabilities of certain sequences of words following other sequences of words. What ChatGPT does is make statistical predictions based on the superficial features of language. With enough training data and really powerful machine learning techniques, these models can mimic fluent-sounding language.
That’s not logical reasoning. It is hard to argue that a big table of probabilities is sentient; I would say it is not. However, one interesting thing about human cognition is that we conflate coherent-sounding text with facts. We are like, that sounds really good, it must be true. But truth requires deeper understanding and analysis beyond the superficial features of language.
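One crude way to see what “statistical probabilities of word sequences” means is a bigram counter, sketched below in Python. Real LLMs use neural networks over far longer contexts, so this is only a toy analogy, and the tiny corpus is made up.

```python
# Toy next-word prediction: count which word tends to follow which, then
# predict the most probable continuation. Real LLMs are vastly more powerful,
# but their output is still, at bottom, a probability distribution over the
# next token given the preceding text.

from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Build a bigram table: how often does each word follow each other word?
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most likely next word and its estimated probability."""
    counts = following[word]
    best, n = counts.most_common(1)[0]
    return best, n / sum(counts.values())

print(predict_next("the"))  # e.g. ('cat', 0.5)
```

Fluent continuations can emerge from pattern statistics alone, which is why fluency by itself is not evidence of reasoning.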
Are you nervous about A.I. eventually outsmarting humans?
A.I. can perform certain tasks better than humans. That has been true for years. Today, as data and training techniques improve, it’s getting easier and easier to train A.I. systems to do more human tasks. I think that is very significant. But I don’t think A.I. is going to take over. There’s no independent entity that wants to conquer you. However, A.I. can be easily abused. I think that’s a real concern.
Elemental Cognition recently signed Bridgewater, which is also an early investor in your company, as a client. How can hybrid A.I. help investment managers better understand the economy and markets?
Understanding the economy comes in two forms: identifying patterns in data and interpreting those patterns in order to understand what’s going on.
In investment management, the ultimate goal is to make accurate predictions by looking at economic indicators, such as interest rates and stock prices. Data has a lot to tell you. If you could see patterns in the data, that’s really powerful. And if you can interpret the patterns and make sense of what’s going on in the economy, then you have another perspective. It’s almost like you could do checks and balances: here are the correlations shown in the data, and here’s my understanding of how things work. Do they agree or not?
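A stripped-down, hypothetical version of that “checks and balances” idea appears below in Python: measure a correlation in the data, then compare its sign with an explicitly stated belief about how the economy works. The numbers and the rule are fabricated for illustration and are not Bridgewater’s or Elemental Cognition’s models.

```python
# Toy "checks and balances": compare a pattern found in data (a correlation)
# with an explicitly stated expectation about how things work. All numbers
# below are fabricated purely for illustration; Python 3.10+ is assumed.

from statistics import correlation

interest_rate_changes = [0.25, 0.50, -0.25, 0.75, 0.00, -0.50]
stock_price_changes = [-1.2, -2.0, 0.8, -2.5, 0.1, 1.5]

# Fast thinking: what pattern does the data show?
observed = correlation(interest_rate_changes, stock_price_changes)

# Slow thinking: what does my stated model of the world expect?
# Illustrative rule: rising rates should pressure stock prices downward.
expected_sign = -1

agree = (observed < 0) == (expected_sign < 0)
print(f"observed correlation: {observed:.2f}; agrees with stated model: {agree}")
```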
What do you think of the proposal to pause A.I. training for six months?
I don’t think that’s practical to start with, because large language models are not a secret. There will always be companies working on them. We are going to continue to see a lot of experiments. I don’t think it makes sense to stop that experimentation.
But I do think it makes sense to take a step back and think hard about this. Policymakers need to start thinking about how to regulate A.I. because it can be abused in a number of ways. We are likely to see regulation being developed and applied.