Whether it’s being theorized or, just maybe, actualized, artificial general intelligence, or AGI, has become a frequent topic of conversation in a world where people now routinely talk with machines. But there’s an inherent problem with the term AGI, one rooted in perception. For starters, assigning “intelligence” to a system instantly anthropomorphizes it, suggesting there’s a semblance of a human mind operating behind the scenes. And the notion of a mind implies a single entity doing all of this human-grade thinking.
This problematic perception is compounded by the fact that large language models (LLMs) like ChatGPT, Bard, Claude and others make a mockery of the Turing test. They seem very human indeed, and it’s not surprising that people have turned to LLMs as therapists, friends and lovers (sometimes with disastrous results). Does the humanness of their predictive abilities amount to some kind of general intelligence?
By some estimates, the critical aspects of AGI have already been achieved by the LLMs mentioned above. A recent article in Noema by Blaise Agüera y Arcas (vice president and fellow at Google Research) and Peter Norvig (a computer scientist at the Stanford Institute for Human-Centered A.I.) argues that “today’s frontier models perform competently even on novel tasks they were not trained for, crossing a threshold that previous generations of A.I. and supervised deep learning systems never managed. Decades from now, they will be recognized as the first true examples of AGI.”
For others, including OpenAI, AGI still lies ahead of us. “We believe our research will eventually lead to artificial general intelligence,” its research page proclaims, “a system that can solve human-level problems.”
Whether nascent forms of AGI are already here or still a few years away, businesses attempting to harness these powerful technologies may well create miniature versions of AGI. Businesses need technology ecosystems that can mimic human intelligence with the cognitive flexibility to solve increasingly complex problems. Such an ecosystem needs to orchestrate existing software, understand routine tasks, contextualize massive amounts of data, learn new skills, and work across a wide range of domains. LLMs on their own can perform only a fraction of this work; they seem most useful as part of a conversational interface that lets people talk to technology ecosystems. Leading enterprise companies are already using strategies to move in this direction, toward something we might call organizational AGI.
Organizational AGI? (eyeroll)
There are legitimate reasons to be wary of yet another unsolicited tidbit in the A.I. terms slush pile. Regardless of what we choose to call the eventual outcome of these activities, there are currently organizations using LLMs as an interface layer. They are creating ecosystems where users can converse with software through channels like rich web chat (RWC), obscuring the machinations happening behind the scenes. This is difficult work, but the payoff is huge: rather than pogo-sticking between apps to get something done on a computer, customers and employees can ask technology to run tasks for them. There’s the immediate and tangible benefit of people eliminating tedious tasks from their lives. Then there’s the long-term benefit of a burgeoning ecosystem where employees and customers interact with digital teammates that can perform automations leveraging all forms of data across an organization. This is an ecosystem that starts to take the form of a digital twin.
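To make the interface-layer idea concrete, here is a minimal, purely hypothetical sketch in Python. Every task and name below is invented; in a real ecosystem an LLM would parse the utterance into an intent and arguments, so a trivial keyword router stands in for it here.

```python
# Hypothetical sketch (invented names throughout): one conversational
# entry point routes a request to the right automation behind the scenes,
# instead of the user hopping between apps.

def book_time_off(days: int) -> str:
    return f"Booked {days} days of PTO."

def reset_password(username: str) -> str:
    return f"Password reset link sent to {username}."

# In a real ecosystem an LLM would turn the utterance into an intent
# and arguments; a trivial keyword router stands in for it here.
def route(utterance: str) -> str:
    text = utterance.lower()
    if "time off" in text or "pto" in text:
        return book_time_off(days=3)  # argument extraction elided
    if "password" in text:
        return reset_password(username="jdoe")
    return "Sorry, I can't help with that yet."

print(route("I need to reset my password"))  # -> Password reset link sent to jdoe.
```

Multiply this routing across thousands of tasks and data sources, and the ecosystem starts to resemble the digital twin described below.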
McKinsey describes a digital twin as “a virtual replica of a physical object, person, or process that can be used to simulate its behavior to better understand how it works in real life.” They elaborate to say that a digital twin within an ecosystem similar to what I’ve described can become an enterprise metaverse, “a digital and often immersive environment that replicates and connects every aspect of an organization to optimize simulations, scenario planning and decision making.”
With respect to what I said earlier about anthropomorphizing technology, the digital teammates within this kind of ecosystem are an abstraction, but I think of them as intelligent digital workers, or IDWs. IDWs are best understood as collections of skills. These skills come from shared libraries and can be adapted and reused in countless ways. Skills can take advantage of all the information piled up inside the organization, with LLMs mining unstructured data, like emails and recorded calls.
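As an illustration of the abstraction (not any vendor’s actual design), here is a minimal Python sketch of IDWs as bundles of skills drawn from a shared library. Every class and name is hypothetical.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

# Hypothetical sketch: a "skill" is a named, reusable unit of work,
# and an IDW is little more than a bundle of skills drawn from a
# shared library. None of these names come from a real product.

@dataclass
class Skill:
    name: str
    run: Callable[[dict], dict]  # takes a context, returns a result

class SkillLibrary:
    def __init__(self) -> None:
        self._skills: Dict[str, Skill] = {}

    def register(self, skill: Skill) -> None:
        self._skills[skill.name] = skill

    def get(self, name: str) -> Skill:
        return self._skills[name]

@dataclass
class IntelligentDigitalWorker:
    name: str
    skills: Dict[str, Skill] = field(default_factory=dict)

    def learn(self, skill: Skill) -> None:
        # "Learning" here is just adopting a skill from the shared library.
        self.skills[skill.name] = skill

    def perform(self, skill_name: str, context: dict) -> dict:
        return self.skills[skill_name].run(context)

# Usage: two IDWs reuse the same skill from one library.
library = SkillLibrary()
library.register(Skill("summarize_email", lambda ctx: {"summary": ctx["email"][:80]}))

billing_idw = IntelligentDigitalWorker("billing")
support_idw = IntelligentDigitalWorker("support")
for idw in (billing_idw, support_idw):
    idw.learn(library.get("summarize_email"))

print(support_idw.perform("summarize_email", {"email": "Hi team, the Q3 invoice..."}))
```

The point of the shared library is reuse: a skill refined for one IDW immediately benefits every other IDW that adopts it.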
All of this mined data becomes more meaningful thanks to graph technology, which is adept at indexing skills, systems and data sources. A graph goes beyond mere listing to capture how these elements relate to and interact with one another, and that ability to represent and analyze relationships is one of its core strengths. For a network of IDWs, understanding how different components are interlinked is crucial for efficient orchestration and data flow.
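Here is a small sketch of what such an index might look like, using the open-source networkx library; the nodes and relations are invented for illustration.

```python
import networkx as nx  # open-source graph library, used here for illustration

# Hypothetical index of skills, systems and data sources as a directed graph.
# Edge labels capture *how* elements relate, not just that they do.
G = nx.DiGraph()
G.add_edge("skill:reset_password", "system:identity_provider", relation="calls")
G.add_edge("skill:reset_password", "data:employee_directory", relation="reads")
G.add_edge("skill:summarize_ticket", "data:support_tickets", relation="reads")
G.add_edge("system:identity_provider", "data:employee_directory", relation="syncs_with")

# Orchestration question: everything a skill touches, directly or indirectly.
dependencies = nx.descendants(G, "skill:reset_password")
print(sorted(dependencies))
# ['data:employee_directory', 'system:identity_provider']

# Impact analysis: which skills would a data-source outage affect?
affected = [n for n in G.nodes
            if n.startswith("skill:")
            and "data:employee_directory" in nx.descendants(G, n)]
print(affected)  # ['skill:reset_password']
```

Queries like these, which ask what depends on what, are exactly where graphs outperform flat lists or tables.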
Generative tools like LLMs and graph technology can work in tandem to propel the journey toward digital twinhood, or organizational AGI. Twins can encompass all aspects of the business, including events, data, assets, locations, personnel and customers. Digital twins are likely to be low-fidelity at first, offering a limited view of the organization. As more interactions and processes take place within the organization, however, the twin’s fidelity increases. The technology ecosystem then not only understands the current state of the organization but can also adapt and respond to new challenges autonomously.
In this sense every part of an organization represents an intelligent awareness that comes together around common goals. In my mind, it mirrors the nervous system of a cephalopod. As Peter Godfrey-Smith writes in his book, Other Minds (2016, Farrar, Straus and Giroux), “in an octopus, the majority of neurons are in the arms themselves—nearly twice as many in total as in the central brain. The arms have their own sensors and controllers. They have not only the sense of touch but also the capacity to sense chemicals—to smell or taste. Each sucker on an octopus’s arm may have 10,000 neurons to handle taste and touch. Even an arm that has been surgically removed can perform various basic motions, such as reaching and grasping.”
Does this sound messy?
A world teeming with self-aware brands would be quite hectic. According to Gartner, by 2025, generative A.I. will be a workforce partner within 90 percent of companies worldwide. This doesn’t mean that all of these companies will be surging toward organizational AGI, however. Generative A.I., and LLMs in particular, can’t meet an organization’s automation needs on its own. Giving an entire workforce access to GPTs or Copilot won’t move the needle much in terms of efficiency. It might help people write better emails faster, but it takes a great deal of work to make LLMs reliable resources for user queries.
Their hallucinations have been well documented, and training them to provide trustworthy information is a herculean effort. Jeff McMillan, chief analytics and data officer at Morgan Stanley (MS), told me it took his team nine months to train GPT-4 on more than 100,000 internal documents. This work began before the launch of ChatGPT, and Morgan Stanley had the advantage of working directly with people at OpenAI. The result was a personal assistant that the investment bank’s advisors can chat with, tapping into a large portion of its collective knowledge. “Now you’re talking about wiring it up to every system,” he said, with regard to creating the kinds of ecosystems required for organizational AGI. “I don’t know if that’s five years or three years or 20 years, but what I’m confident of is that that is where this is going.”
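For a sense of why this kind of work takes months, here is a toy Python sketch of one common pattern for grounding an LLM in internal documents: retrieve vetted passages first, then pin them into the prompt. This is not Morgan Stanley’s actual pipeline; the scoring function and documents are invented, and production systems use embeddings, vector indexes and a real model call rather than keyword overlap.

```python
# Toy sketch of retrieval-augmented generation (RAG): ground the model's
# answer in retrieved internal documents instead of letting it guess.
# All documents and function names here are invented for illustration.

def score(query: str, doc: str) -> int:
    # Toy relevance score: keyword overlap. Production systems use
    # embeddings and a vector index instead.
    q = set(query.lower().split())
    return len(q & set(doc.lower().split()))

def build_prompt(query: str, documents: list[str], top_k: int = 2) -> str:
    ranked = sorted(documents, key=lambda d: score(query, d), reverse=True)
    context = "\n".join(ranked[:top_k])
    # The retrieved passages are pinned into the prompt so the model
    # answers from vetted internal knowledge rather than from memory.
    return f"Answer using ONLY this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our 401k matching policy changed in January.",
    "Cafeteria hours are 8am to 3pm on weekdays.",
]
print(build_prompt("What are the cafeteria hours?", docs))
```

Even this toy version hints at where the real effort goes: curating the documents, deciding what counts as relevant and verifying that answers stay grounded.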
Companies like Morgan Stanley that are already laying the groundwork for so-called organizational AGI have a massive advantage over competitors that are still trying to decide how to integrate LLMs and adjacent technologies into their operations. So rather than a world awash in self-aware organizations, there will likely be a few market leaders in each industry.
This relates to broader AGI in the sense that these intelligent organizations will have to interact with other intelligent organizations. It’s hard to envision exactly what depth of information sharing will occur between these elite orgs, but over time, these interactions might play a role in bringing about AGI, or the singularity, as it’s sometimes called.
Ben Goertzel, the founder of SingularityNET and the person often credited with coining the term, makes a compelling case that AGI should be decentralized, relying on open-source development as well as decentralized hosting and mechanisms for interconnected A.I. systems to learn from and teach one another.
SingularityNET’s DeAGI Manifesto states, “There is a broad desire for AGI to be ethical and beneficial for all humanity; the most straightforward way to achieve this seems to be for AGI to ‘grow up’ in the context of serving and being guided by all humanity, or as good an approximation as can be mustered.”
Having AGI manifest in part from the aggressive activities of for-profit enterprises is dicey. As Goertzel pointed out, “You get into questions [about] who owns and controls these potentially spooky and configurable human-like robot assistants … and to what extent is their fundamental motivation to help people as opposed to sell people stuff or brainwash people into some corporate government media advertising order.”
There’s a strong case to be made that an allegiance to profit will be the undoing of the promise these technologies hold for humanity at large. Weirdly, the Skynet scenario in Terminator, where a system becomes self-aware, determines humanity is a grave threat and exterminates all life, assumes that the system, isolated within a single company, has been programmed to have a survival instinct. It would have to be told that survival at all costs is its bottom line, which suggests we should be extra cautious about developing these systems in environments where profit above all else is the dictum.
Maybe the most important thing is keeping this technology in human hands and pushing forward the idea that the myriad technologies associated with A.I. should only be used in ways that benefit humanity as a whole, don’t exploit marginalized groups, and don’t propagate synthesized bias at scale.
Whatever it is, it’s ultimately about humans
When I broached some of these ideas about organizational AGI with Jaron Lanier, co-creator of VR technology as we know it and Microsoft’s Octopus (Office of the Chief Technology Officer Prime Unifying Scientist), he told me my vocabulary was nonsensical and that my thinking wasn’t compatible with his perception of technology. Regardless, it felt like we agreed on core aspects of these technologies.
“I don’t think of A.I. as creating new entities. I think of it as a collaboration between people,” Lanier said. “That’s the only way to think about using it well…to me it’s all a form of collaboration. The sooner we see that, the sooner we can design useful systems…to me there’s only people.”
In that sense, AGI is yet another tool, way down the spectrum from the rocks our ancestors used to smash tree nuts. It’s a manifestation of our ingenuity and our desires. Are we going to use it to smash every tree nut on the face of the earth, or are we going to use it to find ways to grow enough tree nuts for everyone to enjoy? The trajectories we set in these early moments are of grave importance.
“We’re in the Anthropocene. We’re in an era where our actions are affecting everything in our biological environment,” Blaise Agüera y Arcas, co-author of the Noema article, told me. “The Earth is finite and without the kind of solidarity where we start to think about the whole thing as our body, as it were, we’re kind of screwed.”
Josh Tyson is the co-author of Age of Invisible Machines, a book about conversational A.I., and Director of Creative Content at OneReach.ai. He co-hosts two podcasts: Invisible Machines and N9K.