When it comes to making bold predictions that pay off, there is probably no bigger whale in tech circles than Deepwater Asset Management’s Gene Munster. A regular fixture on Bloomberg, CNBC and at Silicon Valley power confabs, Munster first made a name for himself as the tech analyst who predicted, years and hundreds of billions of dollars of market cap in advance, that Apple would become the world’s first publicly traded trillion-dollar company, a threshold it finally crossed in August 2018. Since then, Munster’s bold calls on companies such as Tesla, Alphabet, Meta (META) and other members of the so-called “Magnificent Seven” have become a lodestar for everyone from institutional players on Wall Street to day traders on Main Street looking to cash in on the tech boom.
Of late, the “Alien Founder” investor has been sounding off on the massive disruption and upheaval coming in the wake of the artificial intelligence (A.I.) revolution. Observer sat down with Munster for an expansive interview in which we talked not only about the euphoria that the emergence of A.I. is stoking in equity markets but also about how the transformative impact of this new technology will reverberate across society: negatively affecting our mental health and decimating certain professions (like Munster’s own job as a stock picker) while making the work of creatives like writers and editors immensely more valuable.
The following conversation has been edited for length and clarity.
Observer: Gene, obviously A.I. has dominated headlines for the last couple of years. But just before we get into it—it’s said that there’s no such thing as a ‘dumb question,’ but there may indeed be such a thing as a ‘dumb analogy,’ and maybe I’m about to make one. (Laugh) Let’s go back to, say, 1995 or 1996, and we’re on our computers, using AltaVista to search, and almost no one had any idea that there was going to be a Facebook or an eBay or any of these other things that would eventually be so transformative in terms of the future of the Internet. And I feel like maybe that is where we are with A.I. right now—like we’re just in the foothills and we have no idea about what’s to come.
Gene Munster: I think you nailed it. We are in 1996. It’s not 1993 when the talk of the Internet was just kind of bubbling up; we’re a little bit further along now with A.I. There are products out there like we had in 1996. You mentioned AltaVista and there was the Netscape browser. There was a kind of shimmering of what the Internet could potentially be. And we’re seeing this ‘shimmering’ with ChatGPT today; I think that your analogy is accurate. And if I kind of play that forward from how equity markets are going to view this, I think we’re still in front of a three- to five-year bull market that’s going to end in a just spectacular bursting of a bubble—but we’re still a few years away from that.
Are you linking this ‘bubble bursting’ with A.I. or are you saying that there will be a big bust in general?
The bust is going to be A.I.-driven. We are still in the foothills like you talked about, but as soon as A.I. becomes more tangible for everyday people in a year or two—that’s when I think that we reach a state of A.I. euphoria. And while there is a lot of hype related to A.I., there’s also a lot of skepticism about whether A.I. is going to be as truly transformative as people think it will be. And it’s that level of skepticism that is still priced in, which gives me some confidence that the market will go much higher from here.
So, when are you predicting that the A.I.-driven tech bubble will finally burst?
I think tech stocks have the potential to double or triple over the next three to five years, but eventually that bubble will burst—which is really exciting and also scary because you don’t know when it will happen.
If we look at the evolution of tech cycles over the last 50 years, we saw the move from supercomputers to personal computing. And then came the Internet. And now we’re in another cycle dominated by A.I., and there seems to be a recurring pattern in which incumbent players are caught off guard; they struggle to adapt to emerging technologies or trends, and eventually they’re supplanted by new incoming forces. But I think there is a sense that with A.I. it’s a bit different this time around, and that some of the incumbents, such as Alphabet/Google (GOOGL) and Meta/Facebook, are actually leaning into A.I. It seems that they’ve learned the lessons of past tech waves, and they’re really positioning themselves to be players in the A.I. revolution.
That’s well put. I think we saw a changing of the guard in 2000, and that was in part a wake-up call to tech companies more broadly. Around 2000, Microsoft (MSFT) was a company that got kind of left behind, in part because it was slow to embrace the Internet and slow to move to mobile. But the bigger issue is that these big tech companies face a very simple problem around growth. When a company has hundreds of billions of dollars in revenue, like Apple, it’s just mathematically harder to grow. You have to really anchor the company on paradigm shifts.
Look, we can debate how much of an impact A.I. is going to have in the next 10, 50, 100 years, but there is one piece that I don’t think can be debated, which is that the biggest tech companies have identified A.I. as the biggest shift in tech over the last 50 years.
Is there going to be a tech company that’s a prominent name that we all know—a household name today in tech—that’s going to go the way of BlackBerry?
The most at-risk among the so-called “Magnificent Seven” is Tesla if they don’t make the pivot to autonomy. (Editor’s Note: The ‘Magnificent Seven’ refers to Alphabet, Meta, Amazon, Microsoft, Tesla, Apple and Nvidia.)
But as far as A.I.-related risk goes, the second most at-risk company is Microsoft, because its whole A.I. initiative depends on a relationship with OpenAI in which it has 49 percent ownership of a revenue stream up to a certain point—it’s not an equity stake, a point the press often gets wrong. Microsoft’s future is in the hands of OpenAI. It’s just that simple.
As far as A.I. risk goes, I’d also add Apple, as it seems that the early stages of what they’re going to be doing in A.I. will be similar to Microsoft, depending on third parties like Google and OpenAI. I suspect that soon there will be an announcement in which Apple agrees to license its A.I. from one of those two companies doing foundational A.I., although I think it’s going to be increasingly important that Apple eventually build its own models to have some independence in the A.I. arms race.
And what about xAI, Elon Musk’s A.I. venture?
So, the one waiting in the wings is Elon’s company, xAI. He’s also building a foundation model, just like OpenAI and Anthropic. And then there is Llama, Meta’s open-source model, but there’s room for really only one or two more foundational A.I. plays—the layer that all the rest of A.I. is going to be powered on top of—and these are going to be massive moneymakers.
And the growth of these new foundational A.I. plays has been phenomenal.
The speed of OpenAI’s growth, for example, has been breathtaking. About a year ago, they were doing a run rate of $100 million in revenue. Twelve months later, they had a $2 billion run rate.
What could go wrong? Is there any way A.I. doesn’t live up to the hype?
The power requirements for A.I. are going to be an increasingly important topic. We talk about this three- to five-year window, but I think within three years the power requirements for A.I. might hit a ceiling. Power is the one piece that gives me some pause about everything I’ve talked about and the pace of innovation. If you look at where power consumption is now in the U.S. and the rate of growth projected for A.I., we’re at about a three-year window until there’s going to be some metering back. This is a major deal. These A.I. machines take a lot of energy.
Switching gears, how will A.I. fundamentally change our day-to-day lives as this all evolves?
It’s moving fast. It’s as if an infant is becoming a teenager in a year. And this is another reason why I think that we’re going to hit this euphoric phase—it’s this ability of a machine to think like a human. We’re going to see dramatic changes. We’ll see breakthroughs when it comes to science and finding cures to diseases that have perplexed scientists for 100 years. We’ll see asset managers like myself potentially being pushed out of our jobs. You’re going to see a massive disruption.
Now some of this is just my own speculation, but there’s one piece that I don’t think can be debated or speculated in terms of the impact of A.I. on our lives—and that’s the dark side of A.I. which is going to make it much more difficult for us to step away from our tech. It’s going to drive an increase in mental illness. There will be more tribalism. I think that, because of A.I. deepfakes, it’s going to be harder for people to trust one another.
Obviously, my editors in New York are going to want to know: what’s going to happen to journalists, columnists and writers like me? (Laugh)
I think that the three things humans can do that machines will never be able to do are foster creativity, community and empathy. The creativity piece can be debated, as machines can exhibit a form of creativity today, but it needs a certain “spark” from a human. When it comes to empathy, part of being empathetic is knowing that there’s another human behind the page that you’re reading.
There’s something about people wanting to connect with something that has another beating heart. It’s the same reason why robotic pets have never taken off globally, although oddly they’re popular in Japan.
And I think that, when it comes to empathy, there is this concept of a machine that can fake empathy. But it doesn’t work. To really feel empathy, it needs to come, almost by definition, from another human.
But when it comes to journalists—there’s something about the art of writing in that people want to know that there’s a human being behind those words. Of course, how journalists write is going to become different as they will rely increasingly on generative tools to write more quickly. But for the reader, just knowing that there’s a human behind each story—that someone looked at it, crafted it, and then used their creativity to tell that story—all of that involves empathy. And that process builds community. And the A.I. world is going to need more of this type of skill set.
The job of a writer or editor is going to be very different, but it’s still going to be very relevant in an A.I.-dominant world, because these people bring the three things that machines will never do: creativity, community and empathy.