Jim Rickards: Will AI Destroy the World?
The James Altucher Show · December 10, 2024 · 00:48:54 · 44.78 MB

A Note from James:

Money GPT. I mean, we've all heard about the incredible potential of AI, and I’ve shared my optimism about its future in many episodes. But today, we have Jim Rickards back on the show, and he’s here to offer a more skeptical perspective. You might remember our earlier discussion where Jim laid out a masterclass on the economy, its history, and what might unfold over the next few years. Now, he’s back with insights from his new book, Money GPT, diving into what we should watch out for when it comes to AI and its impact on the economy. Let’s get into this compelling discussion with Jim Rickards.

Episode Description:

In this episode, James Altucher welcomes back bestselling author Jim Rickards to discuss his latest book, Money GPT. Jim delves into the transformative power of AI, highlighting both its immense benefits and the potential risks it poses, particularly to the global economy and financial markets. Drawing on his experience building AI models for the CIA, Jim explains how AI is reshaping industries and warns of its unintended consequences. The conversation spans the accelerating role of AI in finance, its vulnerabilities, and its parallels with nuclear decision-making processes. Whether you're optimistic or cautious about AI, this episode will challenge your perspective with fresh insights and historical context.

What You’ll Learn:
  • How AI is amplifying financial market volatility and increasing systemic risks.
  • The concept of "cybernetics" as a solution to mitigate market crashes.
  • The differences between AI's success in music and its limitations in writing.
  • Why AI’s self-referential feedback loops could worsen over time.
  • The parallels between AI in finance and its potential misuse in nuclear decision-making.

Timestamped Chapters:
  • [00:01:30] Introduction: Revisiting AI and its role in the economy.
  • [00:03:24] The dual nature of AI: Power and risk.
  • [00:06:50] GPT breakthroughs and the future of language models.
  • [00:11:34] Why AI excels in music but struggles with writing.
  • [00:19:46] The rise of passive investing and its dangers.
  • [00:23:14] Cybernetics: A strategy to stabilize financial markets.
  • [00:39:15] The risks of removing humans from critical decision-making chains.
  • [00:47:53] Will AI replace jobs faster than it creates new ones?
Additional Resources:
  • Jim Rickards' book, Money GPT.
  • Related episode: The History and Future of the Economy with Jim Rickards.

See Privacy Policy at https://art19.com/privacy and California Privacy Notice at https://art19.com/privacy#do-not-sell-my-info.

    [00:00:06] [SPEAKER_02]: Money GPT. I mean, we've heard about all the great things AI is going to do. And in general, I am extremely optimistic about AI and its future, as you can probably tell from the many podcasts I've done on it. But our next guest, Jim Rickards, might have a more skeptical view. He just wrote the book, Money GPT. And a few weeks ago, you might remember, Jim and I had an amazing discussion about the economy, everything that's going on in the economy, going from the economy to the world.

    [00:00:36] [SPEAKER_02]: And its history, on to what's going to happen over the next four years. And now Jim is back to talk about what we should be careful of in terms of AI and the economy. A very interesting discussion. So here's Jim Rickards, the author of the new book, Money GPT.

    [00:00:57] [SPEAKER_01]: This isn't your average business podcast, and he's not your average host. This is the James Altucher Show.

    [00:01:12] [SPEAKER_02]: I do want to circle back to AI, which is the original reason we wanted to talk. But I feel like when you were writing Money GPT, you started off writing something else. You started off writing about money and the risks in the system. And it seemed like what occurred to you, I'm just sort of imagining as I'm reading this, how it's written and how it's structured.

    [00:01:35] [SPEAKER_02]: It seemed like AI started to become, to really explode upwards and become a factor in every single industry. And the risks of AI, you sort of ran with that and how it could happen in the financial system.

    [00:01:48] [SPEAKER_02]: But it seemed like you started off writing almost like about the financial system first, and then, oh, hey, this AI stuff is going to be dangerous. And then the book kind of transformed.

    [00:01:59] [SPEAKER_00]: Well, that's true. An introduction, five chapters, and a conclusion. I did want to stay in my lane. And let me be clear: this book does not bash AI. AI is powerful. It's huge. It's here. It's not a book of, oh, gee, guess what's coming? No, it's here. You open your refrigerator. There's a light that says, please change the water filter. Mine doesn't say please. It just says change the water filter. That's AI.

    [00:02:26] [SPEAKER_00]: I mean, there's a chip in there somewhere that's calculating moisture and has a clock on it. It's telling me what to do. It's on the dashboard of your car. It's on your iPhone or Android, whatever. It's everywhere and it's here. So start there. Number two, it can do enormous amounts of good. And I'll give you an example.

    [00:02:42] [SPEAKER_00]: There have already been some diseases where they've come up with treatments as a result of AI with large language models and deep, deep layered algorithms finding molecular combinations that can treat certain diseases. Now, can a human do that? Yeah.

    [00:03:00] [SPEAKER_00]: But if you had a room, an auditorium full of geniuses working 24 hours a day, they would not be able to do it as quickly as a supercomputer with NVIDIA chips and the right algorithms and a large enough training set.

    [00:03:14] [SPEAKER_00]: So there's good stuff happening all over. And I just gave you one example.

    [00:03:19] [SPEAKER_00]: And I get it. I built AI models for the CIA. We built predictive analytics that would predict terrorist attacks based on capital markets information.

    [00:03:31] [SPEAKER_00]: They didn't need me to jump out of a helicopter with a knife between my teeth.

    [00:03:37] [SPEAKER_00]: I was doing counterterrorism in terms of financial warfare. So, yeah, those models are powerful.

    [00:03:44] [SPEAKER_00]: But I'm not a doctor, and that wasn't my area of expertise. So you're right, James.

    [00:03:50] [SPEAKER_00]: I stayed in finance and national security because that's where my background, my training, my experience is.

    [00:03:57] [SPEAKER_00]: And I was kind of staying in my lane. I thought I had the most to offer there.

    [00:04:00] [SPEAKER_00]: But I do point out, you know, AI as a distinct scientific discipline has been around since the 1950s.

    [00:04:08] [SPEAKER_00]: Going back to Alan Turing, who actually he's famous because of cracking the code on the ultra machine.

    [00:04:15] [SPEAKER_00]: The Enigma code, rather. But he was a pioneer in AI and wrote about it in the early 1950s.

    [00:04:23] [SPEAKER_00]: But it goes back to Aristotle. I mean, the father of deductive logic and logic generally.

    [00:04:29] [SPEAKER_00]: I mean, Plato might have been the father of philosophy, but Aristotle was the father of logic.

    [00:04:33] [SPEAKER_00]: And computer code is just logic. I mean, it's encoded. It's certain languages. It's digitized. It's electronic.

    [00:04:39] [SPEAKER_00]: But it's still a logical process laid out by Aristotle.

    [00:04:44] [SPEAKER_00]: A lot of antecedents. My favorite, Mary Shelley's Frankenstein.

    [00:04:49] [SPEAKER_00]: Frankenstein's creature was AI. It was constructed artificially and it learned French and read Shakespeare.

    [00:04:56] [SPEAKER_00]: And people always think of Boris Karloff.

    [00:04:58] [SPEAKER_00]: But the Frankenstein in the book is, or the creature in the book actually got a pretty good self-education.

    [00:05:05] [SPEAKER_00]: So yeah, there's all that.

    [00:05:07] [SPEAKER_00]: But so what's new?

    [00:05:09] [SPEAKER_00]: What's new is that the processing power, because of supercomputers and because of chips, in particular NVIDIA,

    [00:05:17] [SPEAKER_00]: but yeah, AMD and Intel, I mean, you know more about this than I do, have some very powerful chips.

    [00:05:22] [SPEAKER_00]: So the processing power is greater. The computing power is greater and faster.

    [00:05:27] [SPEAKER_00]: The large language models, people have been working on those for a while.

    [00:05:31] [SPEAKER_00]: But what's new is the training set, the amount of material that you where you can turn the large language model loose on a training set.

    [00:05:42] [SPEAKER_00]: And it can make associations.

    [00:05:44] [SPEAKER_00]: And this was what GPT was all about, GPT-4, of course, late 2022, OpenAI.

    [00:05:50] [SPEAKER_00]: This was, you know, the breakthrough.

    [00:05:52] [SPEAKER_00]: The headline news, you know, that thing got, I believe, 200 million downloads in about 90 days.

    [00:05:59] [SPEAKER_00]: Fastest take-up of any app in history, faster than Instagram or TikTok or any of the others.

    [00:06:05] [SPEAKER_00]: Again, from this company, OpenAI.

    [00:06:07] [SPEAKER_00]: So that's the revolution.

    [00:06:09] [SPEAKER_00]: And it does have the power I described.

    [00:06:13] [SPEAKER_00]: So what does that mean for...

    [00:06:15] [SPEAKER_02]: Do you think, just out of curiosity, and even though this is not really the top of your book,

    [00:06:20] [SPEAKER_02]: it seems like, you know, vision, like speech recognition had sort of been quote-unquote solved in the 90s.

    [00:06:27] [SPEAKER_02]: And, you know, then just as processors got faster, speech recognition got better.

    [00:06:32] [SPEAKER_02]: Computer vision was sort of solved in the early 00s.

    [00:06:35] [SPEAKER_02]: And by solved, I mean like good enough to recognize a stop sign.

    [00:06:38] [SPEAKER_02]: And then, again, as processors got faster, it got better.

    [00:06:42] [SPEAKER_02]: But language really seemed to require...

    [00:06:46] [SPEAKER_02]: It was sort of the first leap in the software that I had seen in like 30 years with the, you know,

    [00:06:52] [SPEAKER_02]: the generative adversarial networks that were happening.

    [00:06:58] [SPEAKER_02]: So that seemed...

    [00:06:59] [SPEAKER_02]: That, the data, and the speed all seemed to converge at the same time.

    [00:07:02] [SPEAKER_02]: You know, language was like a harder problem than vision, which was interesting.

    [00:07:05] [SPEAKER_00]: Right.

    [00:07:06] [SPEAKER_00]: Right.

    [00:07:06] [SPEAKER_00]: Well, you're exactly right.

    [00:07:07] [SPEAKER_00]: But then from that, I guess it's a good way to start the conversation.

    [00:07:11] [SPEAKER_00]: From there, you have to say, okay, well, how does it actually work when you turn a large language model loose with the processing power we described in a training set?

    [00:07:20] [SPEAKER_00]: And you can buy the internet.

    [00:07:21] [SPEAKER_00]: I mean, you can buy the whole internet.

    [00:07:23] [SPEAKER_00]: It's billions of pages.

    [00:07:25] [SPEAKER_00]: And most users, they'll do a subset, maybe only, you know, 250 terabytes or, you know, whatever.

    [00:07:32] [SPEAKER_00]: Because you have a specialized function or application. But you can pretty much buy the internet.

    [00:07:38] [SPEAKER_00]: So the training set is enormous.

    [00:07:42] [SPEAKER_00]: But I guess the first point that we make is it's not intelligent.

    [00:07:45] [SPEAKER_00]: There's nothing about artificial intelligence that's intelligent.

    [00:07:49] [SPEAKER_00]: It's not a brain.

    [00:07:50] [SPEAKER_00]: It's not a simulation of the human brain.

    [00:07:51] [SPEAKER_00]: The human brain works completely differently.

    [00:07:55] [SPEAKER_00]: So it's math.

    [00:07:56] [SPEAKER_00]: It's just math.

    [00:07:57] [SPEAKER_00]: Now, it's advanced.

    [00:07:59] [SPEAKER_00]: It's new branches of mathematics, applied mathematics, faster processing, all the things we talked about.

    [00:08:04] [SPEAKER_00]: But it's not actually intelligent.

    [00:08:06] [SPEAKER_00]: It just creates associations.

    [00:08:08] [SPEAKER_00]: So it goes into a cloud.

    [00:08:10] [SPEAKER_00]: And it breaks words into clouds where they appear in some proximity to each other.

    [00:08:15] [SPEAKER_00]: They don't call them words.

    [00:08:16] [SPEAKER_00]: They call them tokens.

    [00:08:17] [SPEAKER_00]: There's token A close to token B, you know, 95% of the time.

    [00:08:21] [SPEAKER_00]: Well, maybe if I, okay, so if I use A, I better use B in the same sentence.

    [00:08:25] [SPEAKER_00]: So like extra innings in a baseball game.

    [00:08:28] [SPEAKER_00]: So that's how it works.
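
To make that token-association idea concrete, here is a minimal sketch of the kind of co-occurrence counting being described. The toy corpus and the sentence-level window are invented for illustration; real models learn far richer statistics than raw counts, but the "if I use A, I'd better use B" logic is the same.

```python
from collections import Counter
from itertools import combinations

# Toy corpus standing in for "billions of pages" of training text.
corpus = [
    "the game went to extra innings",
    "extra innings decided the baseball game",
    "the baseball game ended in nine innings",
]

pair_counts = Counter()   # how often tokens A and B share a sentence
token_counts = Counter()  # how often each token appears

for sentence in corpus:
    tokens = sentence.split()
    token_counts.update(tokens)
    for a, b in combinations(sorted(set(tokens)), 2):
        pair_counts[(a, b)] += 1

def association(a: str, b: str) -> float:
    """P(b co-occurs | a appears): the 'if I use A, use B' statistic."""
    key = tuple(sorted((a, b)))
    return pair_counts[key] / token_counts[a]

print(association("extra", "innings"))  # 1.0: 'extra' always travels with 'innings'
print(association("extra", "nine"))     # 0.0: they never share a sentence
```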

    [00:08:31] [SPEAKER_00]: And I did a, not a contest, but it was an example.

    [00:08:35] [SPEAKER_00]: I mentioned this in the book.

    [00:08:38] [SPEAKER_00]: Foreign Policy published two essays.

    [00:08:40] [SPEAKER_00]: They asked this bright high school student, high school senior, to write an essay about 900 words on the causes of the war in Ukraine.

    [00:08:50] [SPEAKER_00]: And they went to GPT, GPT-4, and they gave it the same prompt.

    [00:08:55] [SPEAKER_00]: Causes of the war in Ukraine.

    [00:08:56] [SPEAKER_00]: And they produced two essays, about the same length.

    [00:09:00] [SPEAKER_00]: But they were published anonymously.

    [00:09:02] [SPEAKER_00]: And you were supposed to read them and see if you could pick the GPT-generated, the robot, the robot version, basically.

    [00:09:10] [SPEAKER_00]: I picked it in one sentence.

    [00:09:12] [SPEAKER_00]: I got one sentence.

    [00:09:12] [SPEAKER_00]: I said, that's the robot.

    [00:09:14] [SPEAKER_00]: You know, I read the rest of it.

    [00:09:15] [SPEAKER_00]: I read the student's essay, et cetera.

    [00:09:17] [SPEAKER_00]: And I was right.

    [00:09:18] [SPEAKER_00]: But the way I was able to pick it was the overuse, starting with the first sentence, of cliches.

    [00:09:26] [SPEAKER_00]: Now, I don't like cliches.

    [00:09:28] [SPEAKER_00]: I use them.

    [00:09:28] [SPEAKER_00]: Every writer does.

    [00:09:29] [SPEAKER_00]: You know, every now and then it's helpful.

    [00:09:30] [SPEAKER_00]: It's a bit of a crutch.

    [00:09:31] [SPEAKER_00]: But, you know, sometimes they can make a point.

    [00:09:33] [SPEAKER_00]: But this thing was laden with cliches.

    [00:09:36] [SPEAKER_00]: And, you know, one after the other.

    [00:09:38] [SPEAKER_00]: I said, only a robot would do that.

    [00:09:40] [SPEAKER_00]: Because a robot, going through billions of pages, as we described, would encounter these cliches.

    [00:09:46] [SPEAKER_00]: And it would say to itself, huh, that must be how people write.

    [00:09:51] [SPEAKER_00]: So let's use that.

    [00:09:52] [SPEAKER_00]: But it was the overuse.

    [00:09:54] [SPEAKER_00]: It was precisely the overuse of something that a computer could spot but a good writer would never do.

    [00:09:59] [SPEAKER_00]: That said, that's the robot.

    [00:10:01] [SPEAKER_00]: And I was right.

    [00:10:03] [SPEAKER_02]: Which is really interesting.

    [00:10:05] [SPEAKER_02]: Let me just ask you briefly about this.

    [00:10:07] [SPEAKER_02]: So music, AI has been able to come up with sonatas that experts can't tell.

    [00:10:13] [SPEAKER_02]: Is that a Mozart sonata or not?

    [00:10:15] [SPEAKER_02]: Like they fool the experts of Mozart sonatas.

    [00:10:18] [SPEAKER_02]: Hey, this could have been composed by Mozart.

    [00:10:20] [SPEAKER_02]: And I'm wondering, why isn't AI as sophisticated with writing as it is with music?

    [00:10:26] [SPEAKER_02]: But it makes sense in exactly what you just said.

    [00:10:29] [SPEAKER_02]: Music cliches are okay.

    [00:10:31] [SPEAKER_02]: Because that's like an art form, like the sonata form.

    [00:10:35] [SPEAKER_02]: Which is basically a cliche of like a certain type of music over and over and over again.

    [00:10:40] [SPEAKER_02]: And so you could, you know, if you fit inside a form, it's almost mathematical.

    [00:10:45] [SPEAKER_02]: You could say, okay, it's a cliche.

    [00:10:48] [SPEAKER_02]: Sonata is a cliche.

    [00:10:49] [SPEAKER_02]: But that's okay because people are artistic within that cliche.

    [00:10:53] [SPEAKER_02]: Writing is different.

    [00:10:54] [SPEAKER_02]: You know, cliches are punished correctly, I think.

    [00:10:57] [SPEAKER_02]: But it's not the same thing.

    [00:10:59] [SPEAKER_00]: I think that's exactly right.

    [00:11:01] [SPEAKER_00]: And you're right that music has forms.

    [00:11:04] [SPEAKER_00]: I mean, I guess you can go kind of off the rails.

    [00:11:06] [SPEAKER_00]: Who knows if it's music?

    [00:11:07] [SPEAKER_00]: But yeah, music has forms.

    [00:11:09] [SPEAKER_00]: You're exactly right.

    [00:11:10] [SPEAKER_00]: But every expert on music would say that behind it is math.

    [00:11:14] [SPEAKER_00]: It's mathematically driven.

    [00:11:15] [SPEAKER_00]: It's scales.

    [00:11:16] [SPEAKER_00]: It's relationships.

    [00:11:18] [SPEAKER_00]: It's progressions.

    [00:11:19] [SPEAKER_00]: It's modulating bridges.

    [00:11:22] [SPEAKER_00]: It's, you know, various kinds of non-imitative polyphony.

    [00:11:26] [SPEAKER_00]: There's a lot to music.

    [00:11:28] [SPEAKER_00]: But it kind of can all be reduced to that, and I love it, and I'm a fan.

    [00:11:32] [SPEAKER_00]: And I got 6,000 songs on my iPhone.

    [00:11:34] [SPEAKER_00]: But it is math.

    [00:11:35] [SPEAKER_00]: So the combination of the fact that there's mathematical scaling relationships behind it

    [00:11:40] [SPEAKER_00]: and it is form-driven lends itself brilliantly to computer programming

    [00:11:44] [SPEAKER_00]: and computer development.

    [00:11:46] [SPEAKER_00]: And you're right.

    [00:11:46] [SPEAKER_00]: Writing, who knows?

    [00:11:49] [SPEAKER_00]: You can create formulas, but they're easy to spot, I guess is my point.
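
The "music is math" point is easy to demonstrate. As a standard fact of music theory (not something from the episode), twelve-tone equal temperament multiplies frequency by 2^(1/12) per semitone, so a scale is literally a geometric progression:

```python
# Twelve-tone equal temperament: each semitone multiplies frequency by 2**(1/12).
SEMITONE = 2 ** (1 / 12)
A4 = 440.0  # concert pitch, Hz

def pitch(semitones_from_a4: int) -> float:
    """Frequency of the note this many semitones above (or below) A4."""
    return A4 * SEMITONE ** semitones_from_a4

# A major scale starting at A4: step pattern 2-2-1-2-2-2-1 semitones.
for step in [0, 2, 4, 5, 7, 9, 11, 12]:
    print(f"{step:2d} semitones: {pitch(step):7.2f} Hz")
# The octave (12 semitones) lands on exactly double the frequency: 880 Hz.
```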

    [00:11:55] [SPEAKER_02]: Take a quick break.

    [00:11:56] [SPEAKER_02]: If you like this episode, I'd really, really appreciate it.

    [00:11:59] [SPEAKER_02]: It would mean so much to me.

    [00:12:00] [SPEAKER_02]: Please share it with your friends and subscribe to the podcast.

    [00:12:04] [SPEAKER_02]: Email me at altucher@gmail.com and tell me why you subscribed.

    [00:12:08] [SPEAKER_02]: Thanks.

    [00:12:18] [SPEAKER_02]: I don't see, in all these new versions of AI, I don't really see the writing getting better.

    [00:12:22] [SPEAKER_02]: I could always tell this feels weak as writing as opposed to a real good writer.

    [00:12:28] [SPEAKER_00]: Well, you're right.

    [00:12:29] [SPEAKER_00]: By the way, the latest research says not only are you right, but not only is the writing not getting better,

    [00:12:34] [SPEAKER_00]: it's getting worse.

    [00:12:35] [SPEAKER_00]: And here's why.

    [00:12:36] [SPEAKER_00]: Because as more material is generated by AI and by GPT,

    [00:12:42] [SPEAKER_00]: as that begins to populate the training set,

    [00:12:46] [SPEAKER_00]: and now I come in with a new program and I'm looking at the training set,

    [00:12:50] [SPEAKER_00]: it's kind of polluted with a bunch of GPT.

    [00:12:52] [SPEAKER_00]: And the point is it's diluting itself.

    [00:12:55] [SPEAKER_00]: And so we're going to get a Kamala Harris campaign speech.

    [00:12:59] [SPEAKER_00]: I mean, we're basically taking away the creativity, taking away the originality,

    [00:13:05] [SPEAKER_00]: not only using cliches, which is a bad start,

    [00:13:07] [SPEAKER_00]: but diluting itself because it's going to be in a feedback loop.

    [00:13:10] [SPEAKER_00]: It's going to be self-referential.

    [00:13:12] [SPEAKER_00]: And that's already turning up.

    [00:13:14] [SPEAKER_00]: I've read some papers.

    [00:13:15] [SPEAKER_00]: There are scientists who have been able to quantify that and point to that as a problem.
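
The self-referential dilution described here has a well-known toy version, often called model collapse. The Gaussian setup below is an invented illustration of the feedback loop, not the research being cited: fit a distribution, generate from it while mildly favoring "typical" samples (a crude stand-in for a model preferring its own most probable, cliched outputs), retrain on the output, and watch the spread, the stand-in for originality, shrink each generation.

```python
import random
import statistics

random.seed(0)

# Generation 0: "human" data with real variety.
data = [random.gauss(0.0, 1.0) for _ in range(2000)]

for generation in range(8):
    mu = statistics.fmean(data)
    sigma = statistics.stdev(data)
    print(f"gen {generation}: stdev = {sigma:.3f}")
    # The next generation trains only on the previous model's own output,
    # which mildly favors typical samples (within 1.5 sigma of the mean).
    samples = (random.gauss(mu, sigma) for _ in range(4000))
    data = [x for x in samples if abs(x - mu) < 1.5 * sigma][:2000]
```

Each pass multiplies the standard deviation by roughly 0.74, so the variety collapses within a handful of generations.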

    [00:13:19] [SPEAKER_00]: But now take everything we just said, that's what's going on,

    [00:13:23] [SPEAKER_00]: and relate that to capital markets.

    [00:13:26] [SPEAKER_00]: And what's the danger?

    [00:13:27] [SPEAKER_00]: Well, a couple of things.

    [00:13:28] [SPEAKER_00]: Number one, there are some hedge funds,

    [00:13:30] [SPEAKER_00]: I've talked to some of the managers, who are using AI to pick their stocks.

    [00:13:34] [SPEAKER_00]: It's not like, oh, we're using it as a research tool.

    [00:13:36] [SPEAKER_00]: We want it to read a million, you know, 10Ks or footnotes and financials.

    [00:13:39] [SPEAKER_00]: It's like, no, do all that.

    [00:13:41] [SPEAKER_00]: Yes, please, but pick the stocks.

    [00:13:42] [SPEAKER_00]: And they're just, they're robots.

    [00:13:46] [SPEAKER_00]: But they're having some success.

    [00:13:48] [SPEAKER_00]: It's actually, they've been outperforming some managers lately.

    [00:13:52] [SPEAKER_00]: Now, you and I both know the problem with that,

    [00:13:54] [SPEAKER_00]: which is if you have that kind of success, it's really an arbitrage.

    [00:13:57] [SPEAKER_00]: More people jump in, they'll jump in, everyone will do the same thing.

    [00:14:00] [SPEAKER_00]: The returns will get reduced and all that.

    [00:14:02] [SPEAKER_00]: That's all fairly predictable.

    [00:14:04] [SPEAKER_00]: But what should not be overlooked is that the programs are all the same.

    [00:14:08] [SPEAKER_00]: I don't care who's developing them.

    [00:14:10] [SPEAKER_00]: I don't care.

    [00:14:12] [SPEAKER_00]: You know, there are certainly differences, but they all work about the same.

    [00:14:15] [SPEAKER_00]: And the reason is when you're working in AI, you're trying to imitate the human brain.

    [00:14:21] [SPEAKER_00]: It's not a brain, but it's, you know, mathematical imitation, if you want to put it that way.

    [00:14:27] [SPEAKER_00]: There are certain aspects of human nature that won't change.

    [00:14:31] [SPEAKER_00]: So, simple example.

    [00:14:34] [SPEAKER_00]: So, let's say a stock market crash begins.

    [00:14:36] [SPEAKER_00]: It doesn't matter why.

    [00:14:37] [SPEAKER_00]: They happen all the time.

    [00:14:39] [SPEAKER_00]: There's always a reason.

    [00:14:40] [SPEAKER_00]: But that's not the point.

    [00:14:42] [SPEAKER_00]: So, March 2020, the stock market fell 30% in 30 days.

    [00:14:46] [SPEAKER_00]: It was not the biggest crash in history, but it was the fastest crash of that magnitude in history.

    [00:14:52] [SPEAKER_00]: To go 30% in 30 days was light speed.

    [00:14:56] [SPEAKER_00]: And what happens in a situation like that?

    [00:14:58] [SPEAKER_00]: Well, some people are kind of watching, oh, it's going down, it's going down.

    [00:15:01] [SPEAKER_00]: It'll go back, whatever.

    [00:15:03] [SPEAKER_00]: It doesn't.

    [00:15:03] [SPEAKER_00]: And all of a sudden, they're like, they hit the panic button.

    [00:15:05] [SPEAKER_00]: They sell everything, go to cash, move to the sidelines, and wait it out.

    [00:15:10] [SPEAKER_00]: And they wait it out.

    [00:15:11] [SPEAKER_00]: And then when things seem to stabilize, find the bottom, turn around.

    [00:15:14] [SPEAKER_00]: Okay, tiptoe back in and build up your portfolio again.

    [00:15:19] [SPEAKER_00]: This lends itself to something called the fallacy of composition, which sounds fancy, but it's a pretty straightforward phenomenon.

    [00:15:29] [SPEAKER_00]: So, you're at a baseball game.

    [00:15:32] [SPEAKER_00]: And let's say you can't see, because the guy in front of you, he's got a big hat or he's too tall, whatever.

    [00:15:37] [SPEAKER_00]: And so, you stand up and say, well, this is great.

    [00:15:40] [SPEAKER_00]: Now I can see the game.

    [00:15:41] [SPEAKER_00]: I get a really good view.

    [00:15:43] [SPEAKER_00]: But what happens next?

    [00:15:44] [SPEAKER_00]: The person behind you stands up.

    [00:15:46] [SPEAKER_00]: And the person behind her stands up.

    [00:15:49] [SPEAKER_00]: And sooner than later, the entire stadium is on their feet.

    [00:15:52] [SPEAKER_00]: Nobody's better off because you're all standing on your feet.

    [00:15:56] [SPEAKER_00]: Sorry.

    [00:15:57] [SPEAKER_00]: Nobody's better off because you all have the same lousy view.

    [00:15:59] [SPEAKER_00]: And everybody's worse off because you're all on your feet.

    [00:16:02] [SPEAKER_00]: So, there's an example of a strategy that works brilliantly at the individual level.

    [00:16:07] [SPEAKER_00]: The guy stands up.

    [00:16:08] [SPEAKER_00]: He does get a better view.

    [00:16:09] [SPEAKER_00]: And it's cost-free.

    [00:16:10] [SPEAKER_00]: But it's catastrophic at scale.

    [00:16:12] [SPEAKER_00]: It doesn't continue.

    [00:16:14] [SPEAKER_00]: It basically reverses itself and ends up being enormously costly or dangerous.

    [00:16:19] [SPEAKER_00]: The same thing is true in stock markets.

    [00:16:21] [SPEAKER_00]: So, the strategy I described, sell everything, go to cash, move to the sidelines, wait it out, tiptoe back in.

    [00:16:27] [SPEAKER_00]: That can be a very good strategy for an individual.

    [00:16:30] [SPEAKER_00]: But what happens if everyone does it?

    [00:16:32] [SPEAKER_00]: Well, we know what happens.

    [00:16:33] [SPEAKER_00]: Everybody's a seller.

    [00:16:34] [SPEAKER_00]: Nobody's a buyer.

    [00:16:35] [SPEAKER_00]: The market goes straight down.

    [00:16:36] [SPEAKER_00]: You blow through the circuit breakers.

    [00:16:38] [SPEAKER_00]: You go through the floor.

    [00:16:40] [SPEAKER_00]: And eventually, they have to close the markets.

    [00:16:42] [SPEAKER_00]: That's what happens.
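
A toy simulation makes the composition problem visible. The price-impact rule and the 10% panic threshold below are invented for illustration: when every automated strategy shares the same trigger, a shock just past the threshold turns one program's "smart" exit into everyone's forced exit.

```python
def crash_cascade(shock: float, n_holders: int = 100) -> float:
    """Final price after identical sell-everything-at-10%-down programs react."""
    START, PANIC_DROP, IMPACT = 100.0, 0.10, 0.5
    price = START - shock
    for _ in range(n_holders):
        if price > START * (1 - PANIC_DROP):
            break          # no program's rule fires; the market stabilizes
        price -= IMPACT    # one more identical program dumps its position
    return price

print(crash_cascade(shock=9.0))   # 91.0: below every trigger, nothing happens
print(crash_cascade(shock=11.0))  # 39.0: two points more shock, full cascade
```

The individually rational rule is harmless until everyone runs it; then each sale pushes the price through the next program's trigger, the stadium problem at machine speed.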

    [00:16:43] [SPEAKER_00]: That is how we came very close to closing every market in the world with Long-Term Capital Management, by the way.

    [00:16:49] [SPEAKER_00]: People don't know that, but we were just hours away.

    [00:16:51] [SPEAKER_00]: We were trying to close the deal before Tokyo opened.

    [00:16:55] [SPEAKER_00]: So, that is what happens.

    [00:16:58] [SPEAKER_00]: And why is that?

    [00:16:59] [SPEAKER_00]: Because all the programs are the same.

    [00:17:00] [SPEAKER_00]: Now, in the old days, maybe not much later than the 90s, but certainly the 80s or earlier, there was a guy on the floor of the New York Stock Exchange.

    [00:17:10] [SPEAKER_00]: There was a position.

    [00:17:12] [SPEAKER_00]: You were called a specialist.

    [00:17:14] [SPEAKER_00]: And the specialist had a privilege and a duty.

    [00:17:17] [SPEAKER_00]: The privilege was you got to be the market maker in a particular stock.

    [00:17:21] [SPEAKER_00]: You were the market maker in IBM or ExxonMobil or whatever it might be.

    [00:17:25] [SPEAKER_00]: And you also got to see the back of the book.

    [00:17:27] [SPEAKER_00]: You didn't just see the best bid offer.

    [00:17:29] [SPEAKER_00]: You saw all the bids and offers behind it.

    [00:17:31] [SPEAKER_00]: You knew kind of where things were going.

    [00:17:33] [SPEAKER_00]: Your duty, however, was to stand up to the market.

    [00:17:36] [SPEAKER_00]: So, if everybody was a seller, you had to be a buyer.

    [00:17:39] [SPEAKER_00]: You had to mitigate that damage a little bit.

    [00:17:41] [SPEAKER_00]: And vice versa, if there was a buying frenzy, you wanted to be a seller and just kind of damp it down a little bit.

    [00:17:46] [SPEAKER_00]: Try to equilibrate up to a point.

    [00:17:50] [SPEAKER_00]: That system is long gone.

    [00:17:51] [SPEAKER_00]: A friend of mine is the director of floor operations at the New York Stock Exchange.

    [00:17:54] [SPEAKER_00]: I was down on the floor with him not long ago.

    [00:17:57] [SPEAKER_00]: And he turned to me and said, Jim, don't think for a minute that there's any liquidity on this floor.

    [00:18:04] [SPEAKER_00]: There isn't.

    [00:18:05] [SPEAKER_00]: I mean, we've got nice jackets.

    [00:18:07] [SPEAKER_00]: We do our jobs.

    [00:18:08] [SPEAKER_00]: We still make markets.

    [00:18:09] [SPEAKER_00]: But there's no liquidity here.

    [00:18:10] [SPEAKER_00]: And he was right.

    [00:18:12] [SPEAKER_00]: And, of course, I understood that.

    [00:18:13] [SPEAKER_00]: So, in one of my other books, I talk about the rise of passive investing, index investing, ETFs, and the decline of active investing.

    [00:18:22] [SPEAKER_00]: It's another phenomenon.

    [00:18:23] [SPEAKER_00]: But that's highly prevalent today.

    [00:18:26] [SPEAKER_00]: So, now, let's take this toxic mix.

    [00:18:28] [SPEAKER_00]: You have almost everyone's a passive investor.

    [00:18:32] [SPEAKER_00]: All the big portfolios are.

    [00:18:34] [SPEAKER_00]: Relatively few active investors left.

    [00:18:36] [SPEAKER_00]: Everything's automated.

    [00:18:37] [SPEAKER_00]: And all the automation is the same.

    [00:18:39] [SPEAKER_00]: There's no diversity in any of these programs.

    [00:18:40] [SPEAKER_02]: I just want to add a little bit to that because what you underline is the massive scam of the financial markets.

    [00:18:47] [SPEAKER_02]: Every big hedge fund out there owns Microsoft, Google, Exxon, Procter & Gamble.

    [00:18:53] [SPEAKER_02]: They all own the same things.

    [00:18:55] [SPEAKER_02]: But instead of an ETF, they're charging you 2% of all the assets and 20% of all the profits.

    [00:19:02] [SPEAKER_02]: Like it's a giant scam and they all own the same thing.

    [00:19:04] [SPEAKER_00]: You're exactly right.

    [00:19:05] [SPEAKER_00]: And the smaller version of what you described, and you correctly described the giant scam, is, you know, an upper-middle-class individual, a retired or near-retirement couple, whatever.

    [00:19:15] [SPEAKER_00]: They walk into a financial advisor and they sit down.

    [00:19:17] [SPEAKER_00]: It's a pleasant office.

    [00:19:19] [SPEAKER_00]: The person behind the desk appears very professional.

    [00:19:21] [SPEAKER_00]: And what does that consultation look like?

    [00:19:23] [SPEAKER_00]: Well, the guy said, well, okay, let's get started.

    [00:19:26] [SPEAKER_00]: How old are you?

    [00:19:26] [SPEAKER_00]: Okay, got it.

    [00:19:27] [SPEAKER_00]: Married.

    [00:19:28] [SPEAKER_00]: You know, kids.

    [00:19:29] [SPEAKER_00]: Yes, no.

    [00:19:30] [SPEAKER_00]: Net worth.

    [00:19:31] [SPEAKER_00]: Give me your portfolio specifications.

    [00:19:33] [SPEAKER_00]: Thank you very much.

    [00:19:34] [SPEAKER_00]: What are your goals?

    [00:19:35] [SPEAKER_00]: You know, do you want to own a vineyard, sail around the world, pay for your grandkids, whatever.

    [00:19:40] [SPEAKER_00]: They ask you all these questions and they put it into a system.

    [00:19:43] [SPEAKER_00]: All the systems are the same.

    [00:19:45] [SPEAKER_00]: They all give the same answer.

    [00:19:46] [SPEAKER_00]: They all give you, you know, the 60-40, but we'll turn it to 80-20 when you get a little bit older.

    [00:19:51] [SPEAKER_00]: You know, here's your ladder of municipal securities.

    [00:19:55] [SPEAKER_00]: It's all the same.

    [00:19:57] [SPEAKER_00]: And I'm not saying it's horrible.

    [00:19:59] [SPEAKER_00]: I'm just saying don't think for a minute that you're getting customized individual investment advice.
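
The sameness he describes is easy to caricature in code. This toy "advisor system" is entirely invented for illustration: it collects the questionnaire, ignores most of it, and emits the same canned glide path every time, which is the point being made.

```python
def robo_advice(age: int, net_worth: float, goals: list[str]) -> dict[str, float]:
    """Toy advisor: whatever you answer, out comes the same canned allocation.

    net_worth and goals are collected but unused, deliberately: the inputs
    don't change the output. The 60/40 and 80/20 splits echo the example
    in the conversation; the age cutoff is invented.
    """
    stocks = 0.60 if age < 55 else 0.20
    return {"stocks": stocks, "bonds": 1.0 - stocks}

print(robo_advice(40, 2_000_000, ["own a vineyard"]))      # {'stocks': 0.6, 'bonds': 0.4}
print(robo_advice(68, 500_000, ["sail around the world"])) # {'stocks': 0.2, 'bonds': 0.8}
```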

    [00:20:04] [SPEAKER_00]: I tell people getting into the business:

    [00:20:07] [SPEAKER_00]: Learn to play golf and, you know, project an attractive image, because you don't need to know anything about finance.

    [00:20:13] [SPEAKER_00]: It's all on a computer and they're all the same.

    [00:20:15] [SPEAKER_00]: And that's the point.

    [00:20:18] [SPEAKER_00]: There's nothing in any of the programs that says, huh, you know, maybe this is a good time to buy.

    [00:20:23] [SPEAKER_00]: Maybe I ought to dip my toe.

    [00:20:25] [SPEAKER_00]: Maybe I ought to buy the dip.

    [00:20:27] [SPEAKER_00]: You know, the kinds of things that humans do don't exist in programs, in these programs.

    [00:20:32] [SPEAKER_00]: And they may be non-programmable.

    [00:20:34] [SPEAKER_00]: And I'll get to that in a second.

    [00:20:36] [SPEAKER_00]: But the point is, it's – the idea of panic is as old as human nature.

    [00:20:43] [SPEAKER_00]: I could give you examples.

    [00:20:45] [SPEAKER_00]: I'm sure you're familiar with, you know, financial panics in the mid-14th century, you know, 1350s.

    [00:20:51] [SPEAKER_00]: The House of Bardi and the House of Peruzzi in Florence went bankrupt.

    [00:20:54] [SPEAKER_00]: There was a financial panic all the way to London.

    [00:20:56] [SPEAKER_00]: So that's not new.

    [00:20:57] [SPEAKER_00]: So what is new is the fact that we've combined it with AI.

    [00:21:01] [SPEAKER_00]: It's amplified it, accelerated it.

    [00:21:04] [SPEAKER_00]: We've outsourced it, and we don't understand it.

    [00:21:07] [SPEAKER_00]: We over-rely on it.

    [00:21:09] [SPEAKER_00]: This will happen very, very quickly.

    [00:21:12] [SPEAKER_00]: So there are – so that's the danger.

    [00:21:15] [SPEAKER_00]: Then I don't –

    [00:21:17] [SPEAKER_02]: But let me ask you this because I – you know, and I'm thinking of the examples in your book.

    [00:21:21] [SPEAKER_02]: And you can explain to me how – like, 1987 was a weird one – I don't want to say it was AI, but the same thing was sort of happening, where the kind of portfolio insurance products that were being sold on the stock market ultimately triggered the collapse in October of 1987.

    [00:21:39] [SPEAKER_02]: Right.

    [00:21:40] [SPEAKER_02]: How is it really different now?

    [00:21:42] [SPEAKER_02]: Now we're just using a more sophisticated version of that.

    [00:21:44] [SPEAKER_02]: But how is it really different now, the way that software could trigger a panic?

    [00:21:50] [SPEAKER_00]: Because at least, you know, the Brady Commission did an after-action report.

    [00:21:55] [SPEAKER_00]: They figured out what we just talked about, and they made recommendations.

    [00:21:59] [SPEAKER_00]: And by the way, a friend of mine was the head of that.

    [00:22:01] [SPEAKER_00]: I mentioned David Mullins.

    [00:22:03] [SPEAKER_00]: He was a good friend of Nick Brady, and he actually did that report.

    [00:22:07] [SPEAKER_00]: But that's when they introduced circuit breakers.

    [00:22:09] [SPEAKER_00]: They didn't have circuit breakers prior to 1987.

    [00:22:12] [SPEAKER_00]: And the circuit breaker is – it was viewed as a solution to what happened on October of 1987.

    [00:22:19] [SPEAKER_00]: But it's a blunt instrument.

    [00:22:21] [SPEAKER_00]: And in the book, I talk about – you know, I never like to give these warnings or dangers.

    [00:22:25] [SPEAKER_00]: I have this scenario in the book without giving some solutions.

    [00:22:29] [SPEAKER_00]: And one of the solutions I suggest is cybernetics.

    [00:22:34] [SPEAKER_00]: And cybernetics is a Greek word, but the origin actually means the helmsman who's steering the boat.

    [00:22:40] [SPEAKER_00]: And just to give an example, James, so let's say you're driving on ice.

    [00:22:44] [SPEAKER_00]: And I have a house in the mountains, so I drive on ice and snow my share of the time.

    [00:22:50] [SPEAKER_00]: If you slam on the brakes on ice, the car doesn't stop.

    [00:22:54] [SPEAKER_00]: It keeps going.

    [00:22:55] [SPEAKER_00]: And it'll probably spin out of control and maybe run off the road.

    [00:23:00] [SPEAKER_00]: If you're on ice and you need to slow the car down, you don't slam the brakes.

    [00:23:04] [SPEAKER_00]: You tap the brakes.

    [00:23:05] [SPEAKER_00]: Tap, tap, tap, tap.

    [00:23:06] [SPEAKER_00]: Keep two hands on the wheel.

    [00:23:08] [SPEAKER_00]: Keep it under control and eventually bring it to a halt.

    [00:23:11] [SPEAKER_00]: That's an example of cybernetics.

    [00:23:13] [SPEAKER_00]: Now, how would you apply that to the stock market?

    [00:23:14] [SPEAKER_00]: So let's say stocks are down 5%.

    [00:23:17] [SPEAKER_00]: That's not an all-time wipeout or crash or anything of the kind.

    [00:23:20] [SPEAKER_00]: But at 5%, instead of waiting to get down 10 or 20 and hit a circuit breaker, you can begin to throttle the buy orders.

    [00:23:28] [SPEAKER_00]: You could say – sorry, the sell orders.

    [00:23:30] [SPEAKER_00]: So anyone who's putting in a sell order, you say, okay, you just put in a sell order for 1,000 shares.

    [00:23:34] [SPEAKER_00]: We're going to execute 500.

    [00:23:37] [SPEAKER_00]: The other 500, you'll stay in the queue.

    [00:23:40] [SPEAKER_00]: Don't lose your place.

    [00:23:41] [SPEAKER_00]: But we're only going to execute 500 out of that 1,000 shares.

    [00:23:46] [SPEAKER_00]: Down another 5%, we're only going to execute 100.

    [00:23:49] [SPEAKER_00]: You're getting a 10% fill.

    [00:23:51] [SPEAKER_00]: And then down 15%, maybe it's zero, and then you are at a circuit breaker level.

    [00:23:55] [SPEAKER_00]: But the point is, that's an example of tapping the brakes as opposed to slamming the brakes.

    [00:24:00] [SPEAKER_00]: And people are highly adaptive.

    [00:24:02] [SPEAKER_00]: They'll get it, and they'll maybe start to slow things down themselves.

    [00:24:05] [SPEAKER_00]: And you'll have time to have a reasoned discussion or talk to a financial partner or an investment advisor.

    [00:24:14] [SPEAKER_00]: Regulators could say something or somebody can offer some confidence.

    [00:24:18] [SPEAKER_00]: In other words, it's not foolproof by any means, but it's a way to mitigate – if the danger of AI is not the behavior, because that behavior is as old as civilization.

    [00:24:29] [SPEAKER_00]: But if the danger of AI is it's all the same, it's an accelerant and an amplifier, then what you need is a cybernetic approach that will kind of damp that down a little bit.
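
Here is a sketch of the graduated throttle described above. The fill fractions (50% at a 5% drop, 10% at 10%, zero around 15%) come from the example in the conversation; the function itself is invented for illustration.

```python
def throttled_fill(sell_qty: int, drawdown: float) -> tuple[int, int]:
    """Cybernetic brake: (shares executed now, shares held in the queue).

    Below a 5% drop, orders fill normally; deeper drawdowns fill a
    shrinking fraction, until circuit-breaker territory fills nothing.
    """
    if drawdown < 0.05:
        fraction = 1.0
    elif drawdown < 0.10:
        fraction = 0.5
    elif drawdown < 0.15:
        fraction = 0.1
    else:
        fraction = 0.0
    executed = int(sell_qty * fraction)
    return executed, sell_qty - executed

# A 1,000-share sell order at successive stages of a decline:
for dd in (0.03, 0.06, 0.12, 0.16):
    done, queued = throttled_fill(1000, dd)
    print(f"down {dd:.0%}: execute {done}, keep {queued} queued")
```

The unexecuted shares stay queued without losing their place, per the "don't lose your place" point, so the brake slows the selling without cancelling anyone's order.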

    [00:24:39] [SPEAKER_02]: Well, and this is related to AI.

    [00:24:41] [SPEAKER_02]: You were at long-term capital.

    [00:24:42] [SPEAKER_02]: Long-term capital and hedge funds like that were almost designed to also counteract crashes.

    [00:24:50] [SPEAKER_02]: Long-term capital was a countertrend sort of hedge fund.

    [00:24:54] [SPEAKER_02]: Things go down too much, you'd start buying for the dead cat bounce.

    [00:24:59] [SPEAKER_02]: You would – it's like – I forgot the guy, Scholz, I think it was, said, we're standing there picking up nickels that are easy nickels to get.

    [00:25:10] [SPEAKER_02]: And so it was like this – it was designed to sort of be the buyer of last resort in crashes.

    [00:25:15] [SPEAKER_02]: And that was a quant finance idea that you could pick up – your probability of success goes higher if things are way down and you're just buying for a small bounce.

    [00:25:23] [SPEAKER_00]: Right.

    [00:25:24] [SPEAKER_00]: That's right.

    [00:25:25] [SPEAKER_00]: I shared an office with Myron Scholes for six years.

    [00:25:28] [SPEAKER_00]: On a quiet day, he would come into my office.

    [00:25:30] [SPEAKER_00]: He was a very nice guy.

    [00:25:33] [SPEAKER_00]: I would ask a dumb question.

    [00:25:34] [SPEAKER_00]: He would give me like a two-hour tutorial at the whiteboard.

    [00:25:37] [SPEAKER_00]: So I see options in my sleep.

    [00:25:39] [SPEAKER_00]: I see them everywhere.

    [00:25:40] [SPEAKER_00]: It's one of the things I learned from him.

    [00:25:42] [SPEAKER_00]: And you're right.

    [00:25:42] [SPEAKER_00]: He did say we're picking up nickels, but the full expression was we're picking up nickels in front of a bulldozer.

    [00:25:49] [SPEAKER_00]: In other words, a steamroller.

    [00:25:51] [SPEAKER_00]: He said, basically, pick fast because you're going to get run over if you don't.

    [00:25:55] [SPEAKER_00]: Basically, what you described is correct.

    [00:25:56] [SPEAKER_00]: It was arbitrage.

    [00:25:58] [SPEAKER_00]: So we were betting the spreads.

    [00:26:00] [SPEAKER_00]: But there were always two sides of the trade.

    [00:26:02] [SPEAKER_00]: So that kind of arbitrage, you would have two instruments that, in principle, should trade alike.

    [00:26:08] [SPEAKER_00]: An off-the-run 10-year note and a newly-issued 10-year note.

    [00:26:12] [SPEAKER_00]: There will be a slight price discrepancy.

    [00:26:14] [SPEAKER_00]: But they're going to end up in the same place.

    [00:26:16] [SPEAKER_00]: And they will.

    [00:26:17] [SPEAKER_00]: At maturity, they'll pay off at par.

    [00:26:18] [SPEAKER_00]: So when the spread got like this, you would sell the one that was rich, buy the one that was cheap, and just sit there.

    [00:26:25] [SPEAKER_00]: You could go play golf and wait for that to converge, which it would at maturity, and make the money.

    [00:26:32] [SPEAKER_00]: And the spreads were small.

    [00:26:33] [SPEAKER_00]: But with leverage, you could get the 30%, 40% returns that we were doing.

    [00:26:37] [SPEAKER_00]: We tripled investors' money in four years.
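
A stylized version of the trade: the spread sizes and leverage below are invented numbers, but they show how a tiny convergence becomes the 30-40% returns he mentions once levered, and how the same leverage magnifies a spread that widens instead.

```python
def arb_return(entry_spread: float, exit_spread: float, leverage: float) -> float:
    """Return on capital for a long-cheap / short-rich convergence trade.

    The position earns (entry_spread - exit_spread) per dollar of notional;
    leverage is notional divided by capital. Numbers are illustrative only.
    """
    return (entry_spread - exit_spread) * leverage

lev = 25  # heavy leverage on a tiny, 'riskless-looking' spread

# The 1.5% spread converges to zero at maturity: the intended outcome.
print(f"convergence: {arb_return(0.015, 0.0, lev):+.1%}")   # +37.5% on capital

# The spread widens to 4% first: a mark-to-market loss before any convergence.
print(f"widening:    {arb_return(0.015, 0.04, lev):+.1%}")  # -62.5% on capital
```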

    [00:26:39] [SPEAKER_00]: The problem is, so the spread's like this.

    [00:26:43] [SPEAKER_00]: Sell the rich, buy the cheap.

    [00:26:45] [SPEAKER_00]: And then it goes like this.

    [00:26:47] [SPEAKER_00]: Now, that's called losing money.

    [00:26:50] [SPEAKER_00]: But their mindset, I'm not pointing fingers.

    [00:26:53] [SPEAKER_00]: I'm just telling you what it was.

    [00:26:55] [SPEAKER_00]: I was the lawyer.

    [00:26:55] [SPEAKER_00]: I was not the head of the risk committee.

    [00:26:58] [SPEAKER_00]: They would say, well, the trade just got better.

    [00:27:00] [SPEAKER_00]: Most people would say, wait a second.

    [00:27:01] [SPEAKER_00]: You guys just lost money.

    [00:27:02] [SPEAKER_00]: They'd say, yeah, but the trade got better.

    [00:27:04] [SPEAKER_00]: This got richer.

    [00:27:05] [SPEAKER_00]: This got cheaper.

    [00:27:05] [SPEAKER_00]: So sell the rich, buy the...

    [00:27:07] [SPEAKER_00]: And then it goes like this.

    [00:27:08] [SPEAKER_00]: Then it goes like this.

    [00:27:10] [SPEAKER_00]: Now, in theory, all those trades at any level will converge at par at maturity.

    [00:27:15] [SPEAKER_00]: But the question is, can you, and I'm sure you've heard the remark, can you remain solvent

    [00:27:20] [SPEAKER_00]: longer than the markets can remain irrational?

    [00:27:23] [SPEAKER_00]: And the answer is, maybe not.

    [00:27:25] [SPEAKER_00]: Maybe $4 billion of capital is not enough to withstand a thing where normal spreads were

    [00:27:33] [SPEAKER_00]: 5% and these spreads were 80%.

    [00:27:38] [SPEAKER_00]: And that was what happened.

    [00:27:40] [SPEAKER_00]: But the basic idea you described, whether you want to call it contrarian or owning the spread

    [00:27:47] [SPEAKER_00]: or different ways to describe it, but that is what we were doing.

    [00:27:51] [SPEAKER_00]: We were doing it with a lot of leverage.

    [00:27:53] [SPEAKER_00]: Leverage was not the main culprit.

    [00:27:55] [SPEAKER_00]: The culprit was not having a good stop loss.

    [00:27:57] [SPEAKER_00]: And every trader will say, there comes a point where you stop telling yourself you're smarter

    [00:28:02] [SPEAKER_00]: than the market and just get out, take your losses, and live to fight another day.

    [00:28:06] [SPEAKER_00]: And we did not do that.

    [00:28:07] [SPEAKER_00]: We just kept going.

    [00:28:09] [SPEAKER_00]: But there came a time when the market became completely illiquid and you couldn't get out.

    [00:28:13] [SPEAKER_00]: By the way, it gets back to your mark-to-market point, which is, you want to sell a little bit

    [00:28:16] [SPEAKER_00]: and just kind of leg out of the trade.

    [00:28:17] [SPEAKER_00]: The minute you do that, at least in the hedge fund world, you have to mark the entire book.

    [00:28:22] [SPEAKER_00]: So selling a 5% slice means you've got to mark the other 95% to the new market because

    [00:28:29] [SPEAKER_00]: you just sold something.

    [00:28:31] [SPEAKER_00]: And then that was enough to wipe out your capital and then you were done.

    [00:28:34] [SPEAKER_00]: So you were almost wiped out for doing nothing.

    [00:28:37] [SPEAKER_00]: And that's kind of how it ended up.
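
The mark-to-market trap is just arithmetic. A sketch with invented numbers (only the roughly $4 billion of capital is from the conversation): selling a small slice at a distressed price forces the whole book to be remarked at that price, and the paper loss can exceed the capital.

```python
# Invented numbers: a large levered book, thin capital underneath.
book_value = 100.0   # total positions at the old marks
capital    = 4.0     # equity cushion, like LTCM's roughly $4B

slice_sold    = 0.05  # you try to 'leg out' of just 5% of the book
distressed_px = 0.93  # fire-sale price, as a fraction of the old mark

# The sale itself loses a little...
loss_on_sale = book_value * slice_sold * (1 - distressed_px)
# ...but the remaining 95% must now be marked to the same price.
loss_on_remark = book_value * (1 - slice_sold) * (1 - distressed_px)

print(f"loss on the 5% sold:          {loss_on_sale:.2f}")     # 0.35
print(f"loss remarking the other 95%: {loss_on_remark:.2f}")   # 6.65
print(f"capital remaining:            {capital - loss_on_sale - loss_on_remark:.2f}")  # -3.00
```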

    [00:28:40] [SPEAKER_00]: But getting back to AI.

    [00:28:44] [SPEAKER_00]: So the other solution, in addition to cybernetics, is have a portfolio that does not – or a slice

    [00:28:52] [SPEAKER_00]: of a portfolio that does not include digital assets.

    [00:28:55] [SPEAKER_00]: I'm sorry, stocks.

    [00:28:56] [SPEAKER_00]: I understand cryptocurrencies.

    [00:28:57] [SPEAKER_00]: But stocks, bonds, most commodities, futures, they're all digital assets, meaning they don't

    [00:29:03] [SPEAKER_00]: trade physically.

    [00:29:04] [SPEAKER_00]: The last paper Treasury note was issued in 1979.

    [00:29:07] [SPEAKER_00]: These are things that are digital.

    [00:29:09] [SPEAKER_00]: It's the ledgers controlled by the Fed and the Treasury.

    [00:29:11] [SPEAKER_00]: But what's not?

    [00:29:13] [SPEAKER_00]: Gold, silver, fine art, land, natural resources.

    [00:29:18] [SPEAKER_00]: There is a set of assets that you can put in your portfolio that are not digital,

    [00:29:26] [SPEAKER_00]: that are not correlated with everything we're talking about, that are not prisoners of AI

    [00:29:31] [SPEAKER_00]: and are going to be very robust to the kind of meltdowns that we're talking about.

    [00:29:36] [SPEAKER_00]: And then toward the end of the book, we get into nuclear warfighting.

    [00:29:40] [SPEAKER_00]: Maybe we save that for another day.

    [00:29:43] [SPEAKER_00]: But I make the point that I started studying nuclear warfighting in the late 60s.

    [00:29:49] [SPEAKER_00]: The top works on that were actually written in the late 50s and the early 60s.

    [00:29:53] [SPEAKER_00]: The scholars were Albert and Roberta Wohlstetter, Herman Kahn, Henry Kissinger, Paul Nitze,

    [00:30:01] [SPEAKER_00]: and others.

    [00:30:01] [SPEAKER_00]: But the leader was Herman Kahn.

    [00:30:04] [SPEAKER_00]: And he described how nuclear wars happen.

    [00:30:08] [SPEAKER_00]: And what he said was, nobody wakes up, looks out the window, says, oh, nice day.

    [00:30:12] [SPEAKER_00]: I think I'll start a nuclear war.

    [00:30:14] [SPEAKER_00]: What happens is it happens through escalation.

    [00:30:18] [SPEAKER_00]: You have two antagonists.

    [00:30:20] [SPEAKER_00]: One does something provocative.

    [00:30:21] [SPEAKER_00]: The other one responds.

    [00:30:23] [SPEAKER_00]: The next one raises the ante.

    [00:30:24] [SPEAKER_00]: And then you keep going.

    [00:30:26] [SPEAKER_00]: And before long, you are climbing a ladder.

    [00:30:30] [SPEAKER_00]: And that's how Kahn described it, a ladder that leads to nuclear annihilation

    [00:30:35] [SPEAKER_00]: and the end of civilization and perhaps life on Earth.

    [00:30:39] [SPEAKER_00]: And he had like 54 steps in a very, very detailed research.

    [00:30:43] [SPEAKER_00]: But that was the idea.

    [00:30:44] [SPEAKER_00]: And he was right.

    [00:30:46] [SPEAKER_00]: And nuclear warfighting experts and policymakers have borne that in mind ever since.

    [00:30:52] [SPEAKER_02]: This evolved into the mutual assured destruction, right?

    [00:30:55] [SPEAKER_02]: So that nobody, the game theory of this is that once it starts escalating,

    [00:31:01] [SPEAKER_02]: the one reason they might stop is because both sides don't want to die at the same time.

    [00:31:07] [SPEAKER_02]: And that's what happens if there's a nuclear war.

    [00:31:09] [SPEAKER_00]: That's right.

    [00:31:10] [SPEAKER_00]: There's no way to win.

    [00:31:11] [SPEAKER_00]: That's correct.

    [00:31:11] [SPEAKER_00]: We call it two scorpions in a bottle.

    [00:31:13] [SPEAKER_00]: If you have two scorpions in a bottle, if one strikes the other, the victim will die,

    [00:31:16] [SPEAKER_00]: but has just enough strength left to strike back and they both die.

    [00:31:19] [SPEAKER_00]: So you're exactly right.

    [00:31:21] [SPEAKER_00]: So mutual assured destruction was the structure to prevent a nuclear war.

    [00:31:27] [SPEAKER_00]: But the escalatory ladder was the dynamic by which a nuclear war might happen.

    [00:31:33] [SPEAKER_00]: So they're slightly different doctrines, but they're both the keystones, if you will,

    [00:31:38] [SPEAKER_00]: of nuclear warfighting.

    [00:31:40] [SPEAKER_00]: So what Kahn said is if you're going up this ladder, you need – and by the way,

    [00:31:44] [SPEAKER_00]: we are in Ukraine and the Middle East.

    [00:31:47] [SPEAKER_00]: Ukraine, you know, so MI6 and CIA sponsored a coup in 2014.

    [00:31:52] [SPEAKER_00]: They remove a duly elected president.

    [00:31:54] [SPEAKER_00]: Putin takes Crimea.

    [00:31:56] [SPEAKER_00]: The U.S. arms Ukraine.

    [00:31:58] [SPEAKER_00]: Putin invades the Donbass.

    [00:32:00] [SPEAKER_00]: U.S. gives them Bradley fighting vehicles and Patriot missiles and ATACMS missiles.

    [00:32:04] [SPEAKER_00]: You know, Putin brings in hypersonics.

    [00:32:07] [SPEAKER_00]: I mean, we're on an escalatory path between nuclear powers.

    [00:32:10] [SPEAKER_00]: Same thing in the Middle East.

    [00:32:11] [SPEAKER_00]: Again, don't have to take sides.

    [00:32:13] [SPEAKER_00]: We can take that to the bar.

    [00:32:14] [SPEAKER_00]: But the point is Israel has escalated.

    [00:32:18] [SPEAKER_00]: Hezbollah struck back.

    [00:32:20] [SPEAKER_00]: Israel hits Hezbollah.

    [00:32:21] [SPEAKER_00]: The Houthis jump in.

    [00:32:21] [SPEAKER_00]: The Iranians fire.

    [00:32:22] [SPEAKER_00]: The Israelis shoot back at the Iranians.

    [00:32:25] [SPEAKER_00]: You're on that escalatory path with nuclear powers all around.

    [00:32:28] [SPEAKER_00]: Russia, U.S., Israel's nuclear power, and Iran's working on it.

    [00:32:33] [SPEAKER_00]: So we're on that ladder.

    [00:32:35] [SPEAKER_00]: So what did Herman Kahn say?

    [00:32:37] [SPEAKER_00]: He said you need to do three things.

    [00:32:38] [SPEAKER_00]: Number one, realize you're on the ladder.

    [00:32:41] [SPEAKER_00]: Stop telling yourself that this is diplomatic business as usual.

    [00:32:45] [SPEAKER_00]: You're on a path to nuclear annihilation.

    [00:32:48] [SPEAKER_00]: Number two, take a beat.

    [00:32:49] [SPEAKER_00]: Just stop.

    [00:32:50] [SPEAKER_00]: And then number three, climb back down.

    [00:32:53] [SPEAKER_00]: Come down the ladder.

    [00:32:54] [SPEAKER_00]: Deescalate.

    [00:32:55] [SPEAKER_00]: And that is exactly what happened, what we saw in the Cuban Missile Crisis.

    [00:32:59] [SPEAKER_00]: There were two cases, actually three I talk about in the book in the 1980s, where we were on the ladder.

    [00:33:05] [SPEAKER_00]: No time to go into it in complete detail.

    [00:33:07] [SPEAKER_00]: But the KGB was actually using a primitive form of AI.

    [00:33:11] [SPEAKER_00]: They had a system called VRYAN.

    [00:33:13] [SPEAKER_00]: And they had another system codenamed OKO.

    [00:33:17] [SPEAKER_00]: But VRYAN computed the relative power of the Soviet Union, basically Russia, and the United States.

    [00:33:23] [SPEAKER_00]: And they fed it a lot of inputs.

    [00:33:26] [SPEAKER_00]: And they stipulated that, you know, the U.S. is stronger.

    [00:33:28] [SPEAKER_00]: But what they looked at, again, was the spread.

    [00:33:31] [SPEAKER_00]: The hypothesis was the U.S. is stronger.

    [00:33:33] [SPEAKER_00]: But if it keeps getting stronger, as the spread widens, the probability of a first strike by the U.S. goes up.

    [00:33:39] [SPEAKER_00]: Because the U.S. will say, hey, we're strong enough.

    [00:33:42] [SPEAKER_00]: We're ready.

    [00:33:42] [SPEAKER_00]: We can withstand this.

    [00:33:43] [SPEAKER_00]: Now's the time.

    [00:33:45] [SPEAKER_00]: And the other thing about nuclear warfighting is if you think the other side is going to strike, you strike first.

    [00:33:52] [SPEAKER_00]: Because there's always an advantage in being the first one to strike.

    [00:33:55] [SPEAKER_00]: You wouldn't do it unless you thought the other guy was going to shoot.

    [00:33:57] [SPEAKER_00]: But if you did, you would shoot first.

    [00:33:59] [SPEAKER_00]: Well, that spread widened to what the KGB deemed to be very dangerous levels by the early 80s.

    [00:34:06] [SPEAKER_00]: And they feared.

    [00:34:08] [SPEAKER_00]: And this is when Yuri Andropov was head of the KGB.

    [00:34:12] [SPEAKER_00]: But Chernenko, well, Brezhnev was still around.

    [00:34:14] [SPEAKER_00]: They later got Chernenko as sort of the Soviet prototype for Joe Biden.

    [00:34:18] [SPEAKER_00]: But Brezhnev was still there.

    [00:34:23] [SPEAKER_00]: And they concluded that the U.S. was getting ready to launch a nuclear war.

    [00:34:27] [SPEAKER_00]: Well, along comes the U.S., pardon, NATO, and they were conducting a war game in 1983 called Able Archer 83.

    [00:34:37] [SPEAKER_00]: And the simulation was a nuclear attack.

    [00:34:40] [SPEAKER_00]: We weren't going to attack.

    [00:34:41] [SPEAKER_00]: We weren't actually going to attack, but we were playing a war game with a nuclear attack.

    [00:34:46] [SPEAKER_00]: But the KGB interpreted that as getting ready for an actual nuclear attack.

    [00:34:50] [SPEAKER_00]: They said, they're using the war game as a front, but they're actually going to shoot at us, based partly on this VRYAN AI output.

    [00:34:57] [SPEAKER_00]: Well, there was a lieutenant general, Perroots, in the U.S. Air Force who observed all this and could see it happening on both sides.

    [00:35:04] [SPEAKER_00]: The Soviets were fueling their bombers and getting their missiles in launch position.

    [00:35:09] [SPEAKER_00]: And he made a decision, contrary to orders, you know, to de-escalate the war game.

    [00:35:16] [SPEAKER_00]: He said, time out, stop doing this, you know, scale it down, hit the pause button, whatever.

    [00:35:22] [SPEAKER_00]: The Soviets picked up on that.

    [00:35:24] [SPEAKER_00]: They said, okay, maybe it's okay.

    [00:35:26] [SPEAKER_00]: And then they put the bombers back in the hangars, whatever.

    [00:35:28] [SPEAKER_00]: And we narrowly avoided nuclear war.

    [00:35:33] [SPEAKER_00]: There was another case where the OKO early-warning system picked up five incoming U.S. missiles, nuclear missiles attacking Russia.

    [00:35:41] [SPEAKER_00]: And the guy who got that, Lieutenant Colonel Stanislav Petrov, saw the signal and it said launch.

    [00:35:50] [SPEAKER_00]: It actually said launch.

    [00:35:51] [SPEAKER_00]: And his job was to call his superiors and give them the situation, and the Soviets had a launch-on-warning posture.

    [00:35:58] [SPEAKER_00]: So they'd been warned.

    [00:35:59] [SPEAKER_00]: They might have launched first. Again, very close to a nuclear war.

    [00:36:02] [SPEAKER_00]: But he had worked on the system and he knew it had some flaws.

    [00:36:04] [SPEAKER_00]: And he looked at it and he saw there were five missiles coming in.

    [00:36:09] [SPEAKER_00]: And he said, they wouldn't attack with five missiles.

    [00:36:12] [SPEAKER_00]: They might attack with 200 missiles, but not with five.

    [00:36:15] [SPEAKER_00]: And his inference was it was a false alarm.

    [00:36:18] [SPEAKER_00]: Turns out it was.

    [00:36:19] [SPEAKER_00]: It was the sun hitting a cloud at a certain angle and the system picking it up as an incoming missile, but it wasn't.

    [00:36:24] [SPEAKER_00]: And he was later called the man who saved the world.
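
The plausibility check Petrov ran in his head can be written down in a few lines. The sketch below is a hypothetical reconstruction; the thresholds are illustrative assumptions, not real early-warning doctrine. The point is that the decisive step was a human judgment about what a credible attack looks like.

```python
# A minimal sketch of Petrov's sanity check: a credible first strike involves
# hundreds of missiles, so a handful of tracks is more likely a sensor
# artifact than an attack. Thresholds here are illustrative assumptions.
def assess_warning(detected_missiles: int,
                   credible_first_strike: int = 200,
                   sensor_noise_ceiling: int = 10) -> str:
    if detected_missiles == 0:
        return "no threat"
    if detected_missiles < sensor_noise_ceiling:
        # "They wouldn't attack with five missiles" -- treat as a probable
        # false alarm and demand corroboration before escalating.
        return "probable false alarm: verify with independent sensors"
    if detected_missiles >= credible_first_strike:
        return "consistent with first strike: alert command"
    return "ambiguous: escalate to human judgment"

print(assess_warning(5))    # the 1983 OKO case
print(assess_warning(250))
```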

    [00:36:26] [SPEAKER_00]: Now, here's the point.

    [00:36:28] [SPEAKER_00]: Lieutenant General Perroots and Lieutenant Colonel Petrov disobeyed orders, failed to alert their superiors, and bet their lives and their countries that either the war game had to de-escalate or the missiles were not real. And they saved the world.

    [00:36:48] [SPEAKER_00]: Now, how did they do that?

    [00:36:50] [SPEAKER_00]: It was intuition, gut instinct, a smart guess, if you will.

    [00:36:56] [SPEAKER_00]: But they risked a lot to do it.

    [00:36:58] [SPEAKER_00]: Computers cannot do that.

    [00:37:01] [SPEAKER_00]: Computers can do inductive logic, which is, you have a large number of observations and you infer that that's the way the world is, because you see it everywhere.

    [00:37:09] [SPEAKER_00]: Deductive logic is major premise, minor premise, conclusion.

    [00:37:13] [SPEAKER_00]: It works.

    [00:37:15] [SPEAKER_00]: Both of them have limitations.

    [00:37:17] [SPEAKER_00]: The limitation of inductive logic is you assume that all swans are white, but there's a black one down in Adelaide, Australia.

    [00:37:23] [SPEAKER_00]: I've been there.

    [00:37:24] [SPEAKER_00]: I've seen them.

    [00:37:25] [SPEAKER_00]: The flaw in deductive logic is that if the major premise is incorrect, the conclusion is wrong, even though the logic is flawless.

    [00:37:32] [SPEAKER_00]: But computer engineers can deal with that and kind of do the workarounds.

    [00:37:35] [SPEAKER_00]: They've never been able to program abductive logic.

    [00:37:39] [SPEAKER_00]: This comes from Charles Sanders Peirce, 19th century philosopher, perhaps the greatest philosopher since Aristotle, father of semiotics, which is the keystone of philosophy today.

    [00:37:50] [SPEAKER_00]: He was 100 years ahead of his time, but the problem with that is it takes 100 years to figure out he was right.

    [00:37:55] [SPEAKER_00]: But he identified this thing called abductive logic.

    [00:37:59] [SPEAKER_00]: But in plain English, James, it's common sense.

    [00:38:02] [SPEAKER_00]: And so computers can't do common sense.

    [00:38:06] [SPEAKER_00]: They've never been able to program it.
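
For readers who want the three modes of inference pinned down, here is a short Python sketch. Induction and deduction mechanize easily; the abductive step, jumping to the best explanation from sparse evidence, has no general algorithm, so the "abduction" below is a hand-coded stand-in, which is exactly the limitation being described.

```python
# A sketch of the three modes of inference discussed above.

# Inductive logic: generalize from many observations.
observed_swans = ["white"] * 10_000
all_white = set(observed_swans) == {"white"}          # "all swans are white"
observed_swans.append("black")                        # one bird in Australia
print("induction held up:", set(observed_swans) == {"white"})  # False

# Deductive logic: flawless inference, but a false premise poisons the result.
major_premise_all_swans_white = True                  # false, as we just saw
is_swan = True
conclusion_is_white = major_premise_all_swans_white and is_swan
print("deduction says this swan is white:", conclusion_is_white)

# Abductive logic: pick the best explanation for sparse, surprising evidence.
# No general algorithm exists; this lookup table is a stand-in for judgment.
def best_explanation(evidence: str) -> str:
    plausible = {
        "5 missile tracks": "sensor glitch, e.g. sunlight on clouds",
        "250 missile tracks": "actual first strike",
    }
    return plausible.get(evidence, "unknown -- ask a human")

print("abduction:", best_explanation("5 missile tracks"))
```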

    [00:38:07] [SPEAKER_00]: So my point is, if you put AI in the nuclear kill chain, you're not going to get the Perroots and the Petrovs who say, time out.

    [00:38:16] [SPEAKER_00]: You're just going to keep going up the escalatory ladder.

    [00:38:18] [SPEAKER_00]: There's no empathy.

    [00:38:19] [SPEAKER_00]: There's no sympathy.

    [00:38:20] [SPEAKER_00]: There's no gut instinct.

    [00:38:22] [SPEAKER_00]: And you're going to lead yourself to nuclear war.

    [00:38:40] [SPEAKER_02]: But here's my question, and this is related to the financial markets as well.

    [00:38:43] [SPEAKER_02]: I feel like for the past four or five decades, there's always been some form of AI that's been capable of either triggering a panic on the financial side or triggering a signal to, hey, missiles are coming our way.

    [00:38:57] [SPEAKER_02]: You guys should launch.

    [00:38:58] [SPEAKER_02]: And so we've had this capability for a long time.

    [00:39:01] [SPEAKER_02]: It might have been much simpler than it is today, but we've had this capability for a long time.

    [00:39:05] [SPEAKER_02]: And there's always humans in the middle.

    [00:39:09] [SPEAKER_02]: So you can argue that AI is so smart now that the marketing of AI is going to convince our leaders to say, hey, let's just put AI in charge of launching a nuclear war.

    [00:39:22] [SPEAKER_02]: But I don't know if that will ever happen, because back in the 90s, I was offered a job by Lincoln Laboratory, which was a spinout of MIT.

    [00:39:29] [SPEAKER_02]: I was offered a job: hey, can you program this radar to recognize whether objects in space that are heading towards us are space junk or missiles?

    [00:39:39] [SPEAKER_02]: So like this was in the early 90s, late 80s.

    [00:39:42] [SPEAKER_02]: We've had this capability.

    [00:39:44] [SPEAKER_02]: What makes it different now that this is actually a danger?

    [00:39:47] [SPEAKER_00]: Well, first of all, you're right.

    [00:39:48] [SPEAKER_00]: We have had the capability in some form.

    [00:39:50] [SPEAKER_00]: It's faster, more accelerated, more turbocharged, if you want to use that word.

    [00:39:55] [SPEAKER_00]: But the capability has been around.

    [00:39:57] [SPEAKER_00]: But you also – you made my point, which is you said there were humans in the chain that could exercise the qualities I'm talking about, which are empathy, sympathy, intuition, guesswork, and gut instinct.

    [00:40:11] [SPEAKER_00]: And that's exactly the point.

    [00:40:13] [SPEAKER_00]: When you take the human out of the process and it's pure AI, which is where we're going and where we already are in some ways, then you create these dangers, because you lose that mitigating factor.

    [00:40:24] [SPEAKER_00]: And that's when you have a fund or a fund manager or a trading system or a stock exchange and you delegate all the duties to AI and you take the human market maker, if you will, out of the equation.

    [00:40:37] [SPEAKER_00]: Or in nuclear war fighting, if you put AI in the kill chain and replace the human – you can still have a human around.

    [00:40:44] [SPEAKER_00]: But what human general is going to overrule what the AI is telling them to do?

    [00:40:50] [SPEAKER_00]: Probably none.
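
A sketch of what keeping a human in the chain might look like in code: the machine can recommend, but an irreversible action always stops at a human, no matter how confident the model is. The types, names, and thresholds are illustrative assumptions, not any real system's interface.

```python
# Human-in-the-loop gate: automated for reversible actions, mandatory human
# sign-off for irreversible ones. Purely illustrative.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str          # e.g. "liquidate portfolio", "launch"
    confidence: float    # model's own confidence, 0..1
    irreversible: bool

def execute(rec: Recommendation, human_approves) -> str:
    # Reversible, low-stakes actions may proceed automatically.
    if not rec.irreversible:
        return f"executed: {rec.action}"
    # Irreversible actions always stop at a human, no matter how confident
    # the model is -- the mitigating factor the transcript describes.
    if human_approves(rec):
        return f"executed with human sign-off: {rec.action}"
    return f"vetoed by human: {rec.action}"

print(execute(Recommendation("rebalance 2% into bonds", 0.7, False),
              human_approves=lambda r: False))
print(execute(Recommendation("liquidate portfolio", 0.99, True),
              human_approves=lambda r: False))
```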

    [00:40:50] [SPEAKER_02]: So you think that will happen?

    [00:40:52] [SPEAKER_02]: You think that the marketing of AI is such – go ahead.

    [00:40:56] [SPEAKER_02]: Sorry.

    [00:40:57] [SPEAKER_00]: It is happening.

    [00:40:58] [SPEAKER_00]: And the Chinese are doing it.

    [00:41:00] [SPEAKER_00]: And my warning is don't do it.

    [00:41:02] [SPEAKER_00]: I'm agreeing with you, James.

    [00:41:03] [SPEAKER_00]: I'm saying don't put AI in the nuclear kill chain.

    [00:41:06] [SPEAKER_00]: If you want it over here on the side of some kind of analyst or whatever, fine.

    [00:41:10] [SPEAKER_00]: Don't put it in the decision-making process, what they call the kill chain.

    [00:41:13] [SPEAKER_00]: And when it comes to banking and capital markets, don't put it solely in charge of portfolio management because it will sell everything.

    [00:41:21] [SPEAKER_00]: It will blow through every circuit breaker and it will close the market.
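
For reference, US market-wide circuit breakers halt trading at successive S&P 500 decline thresholds, roughly 7% and 13% for 15-minute pauses and 20% for the rest of the day. The sketch below simplifies the actual rules (which also depend on time of day and fire each level only once per session) to show how a relentless selling cascade marches through every level until the final one closes the market.

```python
# Simplified market-wide circuit-breaker logic (levels per the US rules as
# I understand them; real rules also depend on time of day and apply each
# level only once per session).
def circuit_breaker(prior_close: float, current: float) -> str:
    decline = (prior_close - current) / prior_close
    if decline >= 0.20:
        return "Level 3: market closed for the day"
    if decline >= 0.13:
        return "Level 2: 15-minute halt"
    if decline >= 0.07:
        return "Level 1: 15-minute halt"
    return "no halt"

# Rickards' point: an AI-driven selling cascade can march straight through
# every level until the last one shuts the market entirely.
for px in (5580, 5220, 4790):          # hypothetical prints vs. a 6000 close
    print(px, circuit_breaker(6000, px))
```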

    [00:41:25] [SPEAKER_00]: So that's the warning for investors.

    [00:41:28] [SPEAKER_00]: I'm not saying don't be an investor.

    [00:41:30] [SPEAKER_00]: Of course we are.

    [00:41:31] [SPEAKER_00]: I'm not saying don't have any stocks.

    [00:41:32] [SPEAKER_00]: I am saying be ready for something like this.

    [00:41:34] [SPEAKER_00]: It could happen tomorrow and insulate yourself.

    [00:41:37] [SPEAKER_02]: Because quant finance has existed forever.

    [00:41:40] [SPEAKER_02]: And potentially, if left to run amok, it could have done this already.

    [00:41:46] [SPEAKER_02]: Probably the flash crash in, what was it, 2010, was a result of quant finance gone crazy.

    [00:41:54] [SPEAKER_02]: But at the same time, it hasn't really happened that we trust AI too much.

    [00:41:59] [SPEAKER_02]: But you're saying we're getting to that point or it is already at that point.

    [00:42:04] [SPEAKER_00]: That's exactly what I'm saying.

    [00:42:05] [SPEAKER_00]: That's what the book is about.

    [00:42:06] [SPEAKER_00]: There's more in the book.

    [00:42:07] [SPEAKER_00]: I talk about bias and censorship.

    [00:42:10] [SPEAKER_00]: And it ends on a more positive note, James, which is, beyond AI, beyond what we have today in GPT and so forth, there's a version of it called superintelligence.

    [00:42:27] [SPEAKER_00]: Superintelligence is the word.

    [00:42:28] [SPEAKER_00]: This is an AI system that's actually smarter than humans.

    [00:42:32] [SPEAKER_00]: It's not just as smart as us, or close to as smart.

    [00:42:34] [SPEAKER_00]: It's like way past us.

    [00:42:36] [SPEAKER_00]: And one analogy is we're the gorillas and they're the humans.

    [00:42:41] [SPEAKER_00]: That's our position relative to this form of AI.

    [00:42:44] [SPEAKER_00]: And they've already done experiments where they said, okay, AI, give us your ideas for making the world a better place.

    [00:42:52] [SPEAKER_00]: And that's the prompt.

    [00:42:54] [SPEAKER_00]: And the answer comes back, yeah, we got it.

    [00:42:56] [SPEAKER_00]: Kill all the humans.

    [00:42:57] [SPEAKER_00]: Because humans mess things up and have all kinds of problems.

    [00:42:59] [SPEAKER_00]: So, kill all the humans.

    [00:43:00] [SPEAKER_00]: Okay, thank you very much.

    [00:43:01] [SPEAKER_00]: But the question is, do they get to the point where the AI machines, the superintelligent machines, are talking to each other?

    [00:43:10] [SPEAKER_00]: And then you create something called the singleton.

    [00:43:13] [SPEAKER_00]: Singleton is a technical term of art, but it means basically the one computer that absorbs all the other computers, takes over the other computers, if you want to think of it that way, and rules the world.

    [00:43:22] [SPEAKER_00]: And one scenario it could decide: what if the singleton decided that the most important task was to manufacture paperclips?

    [00:43:32] [SPEAKER_00]: And it went to distant galaxies to gather materials to make paperclips.

    [00:43:37] [SPEAKER_00]: That's all it did.

    [00:43:38] [SPEAKER_00]: And humans got killed along the way.

    [00:43:43] [SPEAKER_00]: The answer is no, because of what I talked about earlier, the inability of computers to use abductive logic, plus other, purely physical, constraints.

    [00:43:54] [SPEAKER_00]: How much electricity does it take?

    [00:43:55] [SPEAKER_00]: How fast can these things actually process?

    [00:43:58] [SPEAKER_00]: The answer is very fast, but there are limits.

    [00:44:01] [SPEAKER_00]: But more to the point, if you're at the limits of the training set, and the training set is polluted with new material that was produced by other AI using prior training sets, et cetera, you actually run out of material; you run into physical constraints.
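
That self-referential feedback loop has a simple toy model: fit a distribution to data, sample from the fit, refit on the samples, and repeat. With a small "training set," estimation bias compounds and the tails wash out over generations, a cartoon (with Gaussians standing in for language models) of models trained on earlier models' output.

```python
# Toy sketch of training-set pollution: each generation is fit only to the
# previous generation's output. The maximum-likelihood variance estimate is
# low by a factor of (n-1)/n in expectation, so that bias compounds and the
# distribution's spread tends to shrink over generations.
import random, statistics

random.seed(0)
n = 20
data = [random.gauss(0.0, 1.0) for _ in range(n)]       # original "human" data

for generation in range(51):
    mu, sigma = statistics.fmean(data), statistics.pstdev(data)
    if generation % 10 == 0:
        print(f"gen {generation:2d}: stdev={sigma:.3f}")
    data = [random.gauss(mu, sigma) for _ in range(n)]  # train on own output
```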

    [00:44:20] [SPEAKER_00]: But more importantly, in terms of taking over the world, in terms of being a real singleton, you are limited by your ability to use common sense.

    [00:44:30] [SPEAKER_00]: That's the easiest way to describe it.

    [00:44:32] [SPEAKER_02]: That is a scary thought when the AI is creating the AI and loses track of the fact that, hey, it's in the service of humanity.

    [00:44:40] [SPEAKER_00]: Right.

    [00:44:41] [SPEAKER_02]: That is the scary part.

    [00:44:42] [SPEAKER_00]: Well, again, I hope your viewers get a lot out of it. First of all, great conversation.

    [00:44:47] [SPEAKER_00]: I very much enjoyed this.

    [00:44:48] [SPEAKER_02]: And Jim, I have one more question.

    [00:44:50] [SPEAKER_02]: Do you have time for a few minutes?

    [00:44:52] [SPEAKER_02]: One more question.

    [00:44:53] [SPEAKER_02]: This is an economics one relating to AI.

    [00:44:55] [SPEAKER_02]: Everybody's worried that AI is going to replace jobs.

    [00:44:58] [SPEAKER_02]: In some industries, let's just say logo design, AI can do logo design, 50 logo designs a minute.

    [00:45:06] [SPEAKER_02]: And it knows all the tools.

    [00:45:10] [SPEAKER_02]: And coding is another one.

    [00:45:12] [SPEAKER_02]: It can write software.

    [00:45:12] [SPEAKER_02]: Maybe not so well yet, but it's going to get better.

    [00:45:14] [SPEAKER_02]: And there are various industries that people worry are going to be wiped out.

    [00:45:19] [SPEAKER_02]: Could jobs get wiped out faster than they are recreated in newer industries that will develop?

    [00:45:26] [SPEAKER_02]: And could this affect the economy in an adverse way?

    [00:45:28] [SPEAKER_00]: The answer is yes.

    [00:45:31] [SPEAKER_00]: Historically, it's been the case that we don't make buggy whips and buggies anymore, but we have United Auto Workers.

    [00:45:36] [SPEAKER_00]: In other words, jobs get wiped out all the time, but new technology creates new opportunities, and people may need training.

    [00:45:43] [SPEAKER_00]: And it's a process over time, but we create more jobs than we lose, even though old jobs disappear.

    [00:45:49] [SPEAKER_00]: Is AI different?

    [00:45:50] [SPEAKER_00]: That's really the question.

    [00:45:51] [SPEAKER_00]: Will it get rid of so many jobs that we just won't need that many people, beyond the programmers and developers who keep the AI systems up and running?

    [00:45:57] [SPEAKER_00]: That's entirely possible, but thank goodness Mark Zuckerberg has the answer, which he gave in a Harvard commencement speech.

    [00:46:03] [SPEAKER_00]: And he said, guarantee basic income.

    [00:46:06] [SPEAKER_00]: Just give people a check to do nothing.

    [00:46:09] [SPEAKER_00]: And maybe smoke marijuana while you're at it.

    [00:46:11] [SPEAKER_00]: I mean, the elites have a plan to basically make us all into zombies.

    [00:46:16] [SPEAKER_00]: You know, free drugs, free checks.

    [00:46:19] [SPEAKER_00]: Don't worry about a job.

    [00:46:21] [SPEAKER_00]: You'll have a roof over your head and food, and me and a few of my buddies will run the world.

    [00:46:26] [SPEAKER_00]: So yeah, that's a real prospect.

    [00:46:29] [SPEAKER_00]: I mean, Bill Gates is pretty blatant about it.

    [00:46:30] [SPEAKER_00]: He thinks the population of the Earth should be 3 billion.

    [00:46:33] [SPEAKER_00]: It's like, okay, you've got to get rid of 5 billion people.

    [00:46:36] [SPEAKER_00]: How are you going to do that?

    [00:46:37] [SPEAKER_00]: Well, maybe Moderna will help.

    [00:46:38] [SPEAKER_00]: So yeah, I agree it's a problem.

    [00:46:43] [SPEAKER_00]: I'm just dealing with my slice of it, which is, you know, again, capital markets, national security.

    [00:46:48] [SPEAKER_00]: I do talk about bias and censorship and supercomputing.

    [00:46:52] [SPEAKER_00]: There's a lot there.

    [00:46:53] [SPEAKER_00]: And again, hopefully people get a lot out of it.

    [00:46:56] [SPEAKER_02]: But could the jobless recoveries that we saw in the early 00s and in the early 10s,

    [00:47:01] [SPEAKER_02]: could these jobless recoveries be the result of, like, the internet in the early 00s being able to eliminate wide swaths of jobs?

    [00:47:09] [SPEAKER_02]: Maybe again in the 10s, new technology or social media was able to do that.

    [00:47:13] [SPEAKER_02]: But AI, will AI be a whole other beast that is unanticipated that is just going to wipe out entire industries?

    [00:47:19] [SPEAKER_02]: And with past technologies, like when buggy whip makers were replaced by car workers, that was fairly quick and seamless.

    [00:47:28] [SPEAKER_02]: Typewriters were replaced by computers, fairly quick and seamless.

    [00:47:32] [SPEAKER_02]: Will this be not so quick where AI creates new industries, you know, but it creates them too late for people who lose their jobs, you know, en masse because of AI?

    [00:47:45] [SPEAKER_00]: I think that's entirely possible, but I also think that Mark Zuckerberg, Bill Gates and others have the answer.

    [00:47:53] [SPEAKER_00]: And people at the Bard Institute and elsewhere, my friend Stephanie Kelton, she's the big name in modern monetary theory.

    [00:48:01] [SPEAKER_00]: They have the answer, which is unlimited debt, unlimited money and basic guaranteed income.

    [00:48:06] [SPEAKER_00]: Here's your check.

    [00:48:07] [SPEAKER_02]: Until people lose faith in the currencies.

    [00:48:10] [SPEAKER_00]: Well, that's why.

    [00:48:11] [SPEAKER_00]: Good reason to have gold.

    [00:48:14] [SPEAKER_02]: All right.

    [00:48:15] [SPEAKER_02]: Well, Jim, on that, thank you so much.

    [00:48:17] [SPEAKER_02]: Always a pleasure talking to you. And Money GPT: AI and the Threat to the Global Economy.

    [00:48:24] [SPEAKER_02]: Great book, really fascinating, a unique take on AI and what's happening, as well as a good summary of the current state of AI.

    [00:48:32] [SPEAKER_02]: Thanks once again, Jim, for the whole download.

    [00:48:35] [SPEAKER_02]: I'm still absorbing so many of the things you said.

    [00:48:36] [SPEAKER_02]: So I really appreciate you coming on.

    [00:48:38] [SPEAKER_02]: Thank you.
