Here’s my conversation with Nate Silver — election analyst, author, poker player and all-round statistical wizard. We explore everything from polling accuracy and the importance of prediction markets like Polymarket to solutions for political polarization and, of course, his latest predictions on Trump vs Harris. We also explore his new book “On the Edge”, which examines the different attitudes to risk across various communities. Check out Nate’s blog here on Substack, Silver Bulletin.
Watch on YouTube. Listen on Apple Podcasts, Spotify, or any other podcast platform. Read the full transcript below, and follow me on Twitter for updates on future episodes.
Chapters:
00:01:34 - Predicting Elections
00:06:49 - Trump Assassination Attempt
00:11:45 - Prediction Markets
00:21:13 - Nate’s New Book on Risk
00:25:53 - Institutional screwups on COVID
00:33:30 - Why are People so Averse to Probabilities?
00:35:27 - Silicon Valley’s Blind Spots
00:40:43 - Excessively Risky Behavior
00:44:29 - Regulations
00:48:55 - Finding Common Ground Between Village and River
01:00:18 - Alternative Voting Structures
01:06:50 - Signal vs Noise in the AI age
01:12:15 - Advice for Sharpening Your Models
01:22:25 - Nate’s Maxims For A Win-Win Future
Transcript:
Nate: [00:00:00]
I think we've kind of reached an inflection point where prediction markets are going to be an important part of news coverage. When Joe Biden had that awful debate, I was watching with a, you know, well-informed group of people. And we were all kind of looking at each other, like, we know this is really bad for Biden. But then you see Polymarket and others shifting right away, right,
kind of before the conventional wisdom had even sunk in in the media. Traders were like, yeah, this is really bad. I know we don't have any data yet, but this is actionable intelligence.
Liv: Hello, friends, and welcome to the Win Win podcast. Now, I try and keep this show as apolitical and timeless as possible.
Sometimes you just got to talk about the current thing. So what better time than to speak to Nate Silver. Nate is one of the most successful election analysts of modern times. So, as well as talking about politics, we also talk about ideological differences between the gambling community and the non gambling establishment.
Nate: We bet on things, right? There's an actionable way to know who's right and who's wrong. It's not about your credentials. It's like, do you have the right [00:01:00] answer? And can you make money on it?
Liv: A little bit of game theory.
Nate: The lesson of the prisoner's dilemma is that if we don't trust one another, we get worse outcomes collectively.
Liv: And of course, lots of discussions on win-win solutions to ongoing societal problems.
Nate: Have government behave in ways that are more entrepreneurial. If you're really good at your job, you should be able to make a lot of money for the government if you increase expected value for taxpayers. Let's dig in.
MUSIC
Liv: Nate, welcome to the Win Win podcast.
Nate: Thank you for having me.
Liv: So to get started, obviously the big thing everyone is thinking about right now, certainly here in America, is the election, which is nearly, I mean, we're nearly two months away from it now. And it seems like this is probably the most hectic election cycle thus far.
You know, there's been incumbents dropping out, [00:02:00] an assassination attempt. So I'd love to understand a bit how you are accounting for all this, like, what seems like excessive volatility, and especially, like, given the 2016 election, the polls seemed to drastically underrepresent the actual likelihood of Trump winning.
So do you think that same phenomenon is happening again in 2024 or is there something else going on?
Nate: Yeah, so several good questions there. One thing I'd say is that in elections, I mean, we're used to doing things like poker where you get to a larger sample size relatively quickly, and even in poker,
you know, we did some simulations in the book over a 10-year period for a live tournament poker player, and there's still a lot of variance, which might only even out after like 10 years. Um, so our view is that Trump overperformed his polls in 2016 and 2020. Um, but we don't make any particular adjustment for that in 2024.
Um, the reason being that it's N equals two. I [00:03:00] mean, this audience will understand what that means. Um, and also that in general, the market tends to be a little bit self-correcting: pollsters are aware that Trump did way better than the polls, they were embarrassed by that, they changed their methodologies.
And so you kind of count on the market to correct itself. However, we also account for the fact that the polls could be wrong in either direction. So basically you take a forecast and have like pretty wide error bars around it. We think, if anything, our assumptions are a little bit conservative about that.
It's based on data all the way back to the 1930s, back when polling was very old-fashioned and, like, you'd have to mail somebody a ballot and things like that. So, yeah, the short answer is people should definitely be worried about the chance that the polls are wrong, but they could be wrong in either direction.
In 2022, Democrats in some of the key Senate races did better than their polls and not worse, for example. Um, in terms of the volatility, I mean, one good thing from a model standpoint is that Americans are so polarized and so partisan that 90 percent of people are going to vote the same way, no matter what.
Right. [00:04:00] So you don't have as big swings as you would have in the UK, for example, which is less polarized. You'd never have like a landslide election, like Labour won, for example. Um, in the U.S. it might top out at like an eight or nine point win. Obama won by seven points in 2008, for example. Um, you know, I mean, there are always challenges for when a candidate drops out.
How do you adjust your model? In general, I believe in, like, kind of minimally invasive procedures when you have a model. Um, you know, if you've built a system that's robust and it's trained on data going back, you know, 50 years, you have funky circumstances in the past, like Ross Perot running and dropping out, or races where the candidate was chosen very late.
So you try to not make too many adjustments, because then you introduce more subjectivity. And of course, inherently a model does involve some degree of subjectivity in terms of design choices that you make; at every node in your decision tree, you can often play it either way. Um, but once you kind of commit to a solution, I think it's important [00:05:00] as a modeler to try to stick with it, because you can get emotionally invested in your forecast or emotionally invested in the campaign potentially.
Um, and making major changes in the middle of a cycle kind of defeats the purpose in some ways of having like a model to begin with, I think.
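A minimal sketch of the idea Nate describes above, using invented numbers rather than anything from his actual model: a modest polling lead combined with wide, symmetric error bars is what turns into a win probability rather than a call.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative inputs (not Nate's numbers): candidate A leads the polling
# average by 1.5 points, and historical polling errors are assumed to be
# roughly normal with a standard deviation of about 4 points.
poll_margin = 1.5      # A minus B, in percentage points
error_sd = 4.0         # wide error bars, direction unknown

# Simulate many elections: the true margin is the polled margin plus an error
# drawn symmetrically, i.e. the polls can miss in either direction.
sims = poll_margin + rng.normal(0.0, error_sd, size=100_000)
win_prob = (sims > 0).mean()

print(f"Candidate A win probability: {win_prob:.0%}")  # roughly 65% with these inputs
```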
Liv: Mm. Do you feel personally that you are more or less emotionally invested in this cycle than the last one?
Nate: I think a little bit less. I mean, I don't know. I used to be very afraid of what happens if the model is 70 30 and the 30 percent wins, which is what happened in 2016.
And now I kind of don't care. You have a, you know, you have top set against a flush draw and the flush draw is going to come through sometimes.
Liv: And I can't imagine people being any more unreasonably angry at you than they were last time, because it was insane how people reacted. Your model was the one that gave Trump the highest chance, so you did the best job out of anyone, and somehow you got the...
Nate: ...market. We were long Trump. So if you had bet on the forecast, you could have gotten six-to-one odds on, you know, a two-and-a-half-to-one shot. And so that would have been a pretty good [00:06:00] bet to make. Um, but you know, one thing is that with the book, the model is now at my newsletter.
Um, I thought I would have like a smaller audience. It still has a really big audience, which is good, I guess. Uh, but like I'm framing it in a way where, yeah, the goal of this is to be probabilistic, right? We're not trying to tell you who to vote for. We used to be at ABC News, which is like the largest media organization in the U.S. and very, very mainstream, right? Um, it's a lot of like 70-year-old grandmas watching ABC News and things like that. And like, this is more of a medium-sized niche product, and it's, you know, from a former professional gambler, that's the mentality that we take. Um, the way we describe it in the newsletter posts is often
pretty technical. I mean, we take care to write it well. Um, but I think it's framed in a way that we'll have a more understanding audience if that happens.
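To put rough numbers on the odds comparison above: fractional odds of 2.5-to-1 against imply about a 29 percent chance, so a 6-to-1 price on the same outcome is hugely favorable. The stake and payout below are purely for illustration.

```python
# Expected-value check on "six-to-one odds on roughly a two-and-a-half-to-one
# shot." Odds of 2.5-to-1 against imply a win probability of 1 / (2.5 + 1).
model_win_prob = 1 / (2.5 + 1)          # about 28.6%

# The market was offering 6-to-1: a $1 stake returns $6 profit on a win and
# loses the $1 stake otherwise.
payout, stake = 6.0, 1.0
ev = model_win_prob * payout - (1 - model_win_prob) * stake

print(f"Model win probability: {model_win_prob:.1%}")  # ~28.6%
print(f"EV per $1 staked: ${ev:+.2f}")                 # about +$1.00, i.e. ~100% ROI
```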
Liv: Why is it that the polls barely seemed to move when Trump's assassination attempt happened? Because I think Polymarket [00:07:00] went from, was it, 60 to 70 percent, but then a week later returned, and the polls barely moved at all.
What's going on there?
Nate: This is where I will throw a little bit of shade on competing products. Um, I mean, there are two answers. Well, one is that people are so polarized that even Trump getting shot, or even Trump being convicted of various felony counts, or even Biden being 82 years old, only moves a handful of voters.
However, one thing we spent a lot of time on in the model is how aggressive you should be when you encounter new data. Um, and we think some of the other models are not tuned that well, where they're actually too conservative. They're using old data when it's clearly time to update your priors a little bit faster.
So it's a technical answer, but, like, we did begin to see, um, polling after the assassination attempt and after the Republican convention, where Biden was down by about four points on average, which doesn't sound like a lot, um, but Biden won by four and a half points last time. So you have a net eight, nine [00:08:00] point swing, which in the UK would not be a big deal.
But in the U.S. it is. I mean, Trump is not a popular candidate, um, and he's lost the popular vote twice. The fact that he was winning by this pretty chunky margin meant that people were moved by that.
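A toy illustration of the tuning question Nate raises about how quickly to absorb new data. This is just an exponentially weighted average with two decay rates and made-up poll numbers, not his actual method.

```python
def ewma(values, alpha):
    """Exponentially weighted average; higher alpha puts more weight on new data."""
    avg = values[0]
    for v in values[1:]:
        avg = alpha * v + (1 - alpha) * avg
    return avg

# Hypothetical daily margins (candidate A minus B) before and after a shock
# that shifts the race by roughly four points.
polls = [1.0, 1.5, 0.5, 1.0, -3.0, -3.5, -2.5, -3.0]

print(f"Conservative model (alpha=0.1): {ewma(polls, 0.1):+.1f}")  # still near the old level
print(f"Aggressive model (alpha=0.5):   {ewma(polls, 0.5):+.1f}")  # much closer to the new level
```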
Igor: So you mentioned how the models would adjust to this event, for example, but, um, as I understand it, with polling, they can choose whether they get a representative sample or not.
But otherwise it's still mostly just whatever people say they would vote for, right? And now the new 538 also just does polling, right? It's not really a model, or...
Nate: Yeah, well, so, um, the, so the old fashioned way to do it in the golden age of polling is that everybody has like a landline phone.
And they're really excited when there's a phone call. They're like, Oh my gosh, a phone call. I have something to do now. And they'll pick up the phone. And so you'll like randomly dial people in the phone book. Of course, it's like no longer a realistic assumption. Some people don't have landline phones at all.
In fact, most people don't use their landlines. Um, [00:09:00] you know, people screen their calls a lot and have caller ID. So pollsters have to basically turn this flawed data into models. So every poll is kind of like a model unto itself. And we're kind of like a meta-model that aggregates these models and puts some uncertainty estimates around it.
Um, because, like, old white people really like answering polls still, and people who are really enthusiastic about their candidates. Like right now, if you call today, Democrats have their convention, Kamala is already moving up in the polls, so they're more likely to answer the phone, and that could create bias in the polls.
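A minimal sketch of that "meta-model" idea: each poll is a noisy estimate of the true margin, and the aggregator weights the polls and keeps uncertainty around the combined average. The polls, weights, and the extra error term below are all invented for illustration.

```python
import numpy as np

# Invented polls: each reports a margin (candidate A minus B, in points)
# and a sample size.
polls = [
    {"margin": 2.0, "n": 800},
    {"margin": -1.0, "n": 1200},
    {"margin": 3.0, "n": 600},
]

margins = np.array([p["margin"] for p in polls])
weights = np.array([p["n"] for p in polls], dtype=float)

avg = np.average(margins, weights=weights)

# Pure sampling error on a margin near 50/50 is roughly 100 / sqrt(n) points.
# That alone understates real-world uncertainty, so add an assumed extra term
# for shared polling error (nonresponse bias, weighting choices, etc.).
sampling_se = 100.0 / np.sqrt(weights.sum())
systematic_se = 3.0  # assumed, in points
total_se = np.hypot(sampling_se, systematic_se)

print(f"Weighted average margin: {avg:+.1f} pts, +/- {2 * total_se:.1f} pts (95%-ish)")
```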
Igor: I wasn't aware that the polls already include a little bit of adjustment, basically, and quite a bit, actually. So, yeah.
Nate: Um, and in fact, I think one lesson they've learned from 2016 and 2020 is that the old-fashioned gold-standard way doesn't work anymore when, uh, you have severe selection bias in who responds to surveys.
Igor: What do you make of the explanations that, uh, were given back then in 2016 about why the polls seemed to [00:10:00] underrepresent the chance of Trump, or whether it was just a normal event? One of those was that maybe, on a poll, people weren't as willing to answer at the time because it came with stigma to say that you're voting for Trump.
Do you think that was a legitimate explanation, or is that just something people made up?
Nate: I mean, it's a logical theory. I think the evidence mostly points against that. Um, for one thing, it's kind of like how a Democrat might think of a Republican, but Republicans are very proud of Trump, or at least Trump voters are.
They wear the MAGA hats and they, and they have lots of yard signs and things like that. Um, You know, also Americans tend to be pretty honest about sharing their opinions with strangers, which is kind of unusual in other cultures. Um, for a consulting project, I did a little bit of work on the Indian election.
And in India, the polling is just a disaster. There are so many languages and so many ethnic groups, but there are also so many, like, social classes in India. And so, you know, some classes don't feel comfortable giving their honest opinion about politics when, like, a pollster [00:11:00] calls. Um, in the U.S. we don't have that as much.
It's more that people don't answer the polls to begin with. It's like, you know, "I'm calling from the New York Times. Would you like to take a poll?" If you're a Trump supporter, then you probably hang up the phone, right?
Igor: That makes sense. So that's why the model adjustments would be rather minor, and you would expect the market to have kind of adjusted for it.
Nate: Yeah. And again, I mean, it's worth worrying about. Although there's also other data. I mean, you see, for example, Kamala Harris's fundraising increasing a lot. Um, the fact that she's a few points ahead. I mean, Democrats have usually won the popular vote; the Electoral College is closer. So in some ways it's a very non-dramatic, non-provocative forecast to say, yeah, it's 50-50 with a slight edge for Harris, which is what our model says now, basically.
Liv: You recently joined Polymarket as an advisor. Can you explain for the audience, some of whom might not be that familiar, what prediction markets actually are, why they're a sort of novel technology, and what it is they're actually doing? [00:12:00]
Nate: So prediction markets are kind of what they sound like.
They're like a stock market, essentially, where you can make bets based on whether events will occur. Um, so for example, at the Democratic National Convention, there were rumors that there would be a special guest like Beyonce. It turned out they were false. But of course there were lots of degens betting on Polymarket and other markets about this.
So it's an idea that's been around for a long time. And like you said, I am an advisor. I think now there's been a critical mass where you have actual liquidity in the market. Um, so you have like real price discovery. There's actually quite a bit of money that's being bet on these markets. Um, and they're very useful at times when there's not a good way to quantify something with a model and you need kind of people's intuition instead.
Um, when Joe Biden had that awful debate, I was watching with a, you know, well-informed group of people. Um, and we were all kind of looking at each other, like, we know this is like really bad for Biden.
Um, but then you see Polymarket and [00:13:00] others shifting right away, right, kind of before the conventional wisdom had even sunk in in the media. Traders were like, yeah, this is really bad. I know we don't have any data yet, but this is actionable intelligence. And it was funny if you look at, like, um, things like Bitcoin, for example, because Trump is thought of as being good for Bitcoin.
So rises for Trump in the polls tend to produce, with a lot of noise, increases in Bitcoin. Um, at some point the markets, both the crypto markets and the prediction markets, thought, oh no, this is going so badly now for Biden that they might actually have to withdraw his candidacy, which could hurt Trump, because then you get a better nominee instead.
And so you got a big bounce for Trump, and it kind of flattened out actually by the end of that night. But like, that's an example of where it's much faster, um, to collect that data, and the intuitions that people have are usually pretty good. I mean, it's the whole notion of, um, price discovery and the whole notion of consensus estimates.
Um, it won't be perfect. I mean, if it were perfect, you'd have like no incentive to trade, but I think we've kind of [00:14:00] reached an inflection point where prediction markets are going to be an important part of news coverage. If I'm writing a blog post in a newsletter, and I want to make sure nothing major happened in the campaign that would cause me to, like, update my headline,
I'll just go to Polymarket and be like, oh yeah, this number's the same, nothing huge happened in the past five minutes, we can go ahead and publish, etc.
Liv: So it sounds like they are essentially capitalizing on the wisdom of the crowds.
Nate: For sure. Yeah.
Liv: As a, as a form of truth seeking.
Nate: Truth seeking, or, I mean, you know, in our gambling-adjacent world sometimes, um, having a financial incentive gets you closer to the truth, right?
Skin in the game makes some people uncomfortable. Look, I think you have like pretty smart people. I mean, one of the problems before with, like, political forecasting is that, unlike sports, where you can kind of make a living, although it's really hard as a sports bettor, there was not a professional class of politics forecasters. But now you have the prediction markets on the one hand, and also, like, you know, every investment bank and every [00:15:00] hedge fund trying to price macro risk.
And so they're more interested, too, in, you know, what effect would Donald Trump versus Harris have on interest rates or different sectors of the economy and so forth.
Liv: How much of an issue is it, though, that, you know, let's say you're looking at prediction markets to get an estimate, you know, see what the latest probability of Trump versus Kamala is?
I would expect the majority of users of a prediction market, people who are actually betting on this stuff, to be more likely very risk-tolerant gamblers who maybe swing more Trump. So how can that be factored in, essentially, the selection effect going on with the users?
Nate: I mean, in principle, when a market's robust, right, if there are a bunch of Trump-biased bettors in the market, then I stand to have a very high expected value by being a well-calibrated bettor. Um, yeah, look, in practice, um, especially for the less liquid stuff, they're probably a little [00:16:00] bit Trump-leaning.
They're probably also a little bit kind of third-party, crypto, independent-minded, contrarian, libertarian-leaning on all these things, right? They might overrate the importance of RFK Jr., for example. Um, and it's funny, there's another market which is free, but also pretty good, called Manifold; they have a conference called Manifest. Um, and it's like a little bit more kind of crunchy, granola, effective-altruist-adjacent, kind of left-leaning. And so there's often a delta between Polymarket and Manifold, um, where Manifold's like a little bit more Democratic. Um, the Polymarket folks like me will say, well, it's because it's not real money, so it's less reliable, but there's probably a little bit of, you know, cultural differences on both sides.
Igor: There's also a multiple-point difference between Polymarket and PredictIt currently, which are the more right-leaning and left-leaning ones. I think over time you would expect that the accurate bettor is the one that will drive things, so the price will still be discovered, basically, [00:17:00] in the end. But initially, um, the right-leaning ones might actually kind of bias a whole side a little bit, right?
And then after many iterations, in the limit, you would expect that those who have been more accurate over time have the money. But this over-time thing just needs to set in.
Nate: I mean, even in sports betting markets, when you have a lot of public money, then the market can actually have an exploitable bias, because there's not enough smart money to eat up all the dumb money, basically. Um, in the Super Bowl, you know, if you fade the public, meaning bet on the unpopular side of what the average Joe thinks, then it's probably plus-EV enough to clear the house's rake, um, but probably not in an everyday Major League Baseball or NBA game, only in like the big special events where there's an uncanny amount of public interest. But an election would qualify there.
I mean, I think these markets can be off, because, like, there are a lot of people with opinions about Trump or Harris. Um, some of them have money, maybe more of the Republican-leaning ones. [00:18:00] And so, you know, there still probably is expected value to be gained from those markets, I think.
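A rough illustration of what "clearing the house's rake" means, using standard American -110 pricing; the 54 percent win rate below is an invented edge, not a measured one.

```python
# At -110 you risk $110 to win $100, so the break-even win rate is 110 / 210.
risk, to_win = 110.0, 100.0
break_even = risk / (risk + to_win)

# Fading the public only pays if the bias you're exploiting pushes your true
# win rate above that threshold. The 54% figure is purely hypothetical.
assumed_win_rate = 0.54
ev_per_bet = assumed_win_rate * to_win - (1 - assumed_win_rate) * risk

print(f"Break-even win rate at -110: {break_even:.1%}")              # ~52.4%
print(f"EV per $110 risked at a 54% win rate: ${ev_per_bet:+.2f}")   # about +$3.40
```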
Liv: Yeah, people who are opposed to prediction markets on elections, their sort of core argument is that, well, if someone has money on it, then it's going to influence the way they vote, and then this is going to mess with democracy. Is that just nonsense?
Nate: I mean, there isn't a lot of money in elections anyway, right? Each campaign is going to raise, you know, a billion dollars. And it's kind of a schoolmarmish objection, right? I mean, you know, there are some concerns about inside information. If you knew, for example, um, that Beyonce wasn't going to come to the DNC, then you could, like, make a lot of money on that, for instance.
Igor: But at the same time, the prediction markets benefit from inside information, right? Like, insofar as a prediction market's value is providing information to the outside world, rather than only for the bettors, you kind of want the insiders.
Nate: Yeah. There's some radical transparency view that, why do we want to rely on the New York Times, the Washington Post to report the news?
Instead, we can kind of learn that through prediction markets. [00:19:00] You can sometimes have this funky thing where people think there's inside information when there isn't; it's kind of like a self-fulfilling doom loop. Um, but the way I put it is, you already have people making bets on these outcomes anyway,
um, in the stock market and in, you know, bond markets and things like that. Um, people are making big bets, but it's inefficient because, to get the bucket of things that you want to trade, you're fading a lot of other risk, for example. Um, so it's very noisy. So, like, why not let people price political risk directly,
I would think.
Liv: I wonder if it's a little bit that people hate the idea of someone benefiting.
Nate: Yeah.
Liv: You know, they're all, "This person's making money from this insider knowledge they have." Okay, yes, it happens to make the market more accurate, but... which is a very anti-win-win mindset. Like, actually, okay, but they're improving the information ecosystem for others by making money from this insider knowledge.
I don't know. I can see it's a little [00:20:00] bit of a moral quandary, but...
Nate: Yeah, but look, I mean, again, I think it's very hard to separate out any objections you have about political prediction markets from the stock market, for example. And there is insider trading. Maybe you could say that people in the prediction markets community should be more hawkish
and crack down more on insider trading there. And so there's this kind of cavalier idea, which I find attractive, Igor, but, you know, you can understand that it can be objectionable. But, by the way, also in sports betting, um, you're allowed to bet, for example, on who will be the first pick in the NFL draft.
Um, if you are a beat reporter for ESPN, you 100 percent for sure have extremely actionable inside knowledge about that, based on talking to teams and things like that. And so, you know, why does politics get separated out? I think people are a little bit precious about that. Um, but in general, I think markets are good.
I mean, probably the core neoliberal proposition is that, on balance, markets work, because people are flawed and experts are flawed too. [00:21:00] And so putting money on the line is kind of what creates an actionable consensus.
Liv: I mean, it ties in quite nicely with this concept that you talk about in your new book.
Here it is. Um, On the Edge, which is basically all about risk. Frankly, I've never read a book that is more like my own personal life story than this. Cause it starts off around poker, like, that's our whole thing, me and Igor, uh, for 10 years, and then it goes into, um, investing, uh, tech and Silicon Valley, which we've sort of been adjacent to in many ways, and then effective altruism, rationality, all of this stuff. And it was a very fun read.
Uh, so its core concept is that there are these kind of two categories of people in terms of their risk tolerance: you've got the river and the village. Can you explain what those are?
Nate: Yeah, so the river is people like you guys, people like [00:22:00] us, I guess. It's people who combine two qualities.
One of which is they're very analytical. They know all these concepts like expected value and game theory and Bayes' theorem, all that stuff. We kind of speak the same language. Um, plus they're really competitive. They want to win. Um, they can be a little bit contrarian sometimes, tend to be skeptical of the conventional wisdom.
Um, they're also, maybe with the exception of the effective altruists, very individualistic, right? Um, kind of classic liberalism emerging from the Enlightenment, where we believe in individual rights and we believe in free markets and we believe in democracy and things like that, and kind of decentralized solutions in some ways.
Um, whereas the village is a community of people who kind of produce ideas for a living, but with a slant toward government, academia, and media. Um, so more concentrated on the US East Coast. You know, Harvard and the New York Times and Yale are classic village [00:23:00] institutions. And the village is more about the collective, right?
What's the collective good? Um, it's maybe, I think, less truth seeking, although in theory academia and journalism are supposed to be truth seeking, but it's kind of maybe moved toward being a little bit partisan. It's kind of, you know, the credentialed expert class, I guess you'd say.
Liv: Yeah, kind of like when people talk about the establishment.
Nate: The establishment's another, um, entirely appropriate term for it. Yeah.
Liv: Yeah. So it's interesting you mentioned centralization and decentralization for the river, because I think if you had to distill what the essence of the two things is, it's that Riverians very much, you know, their religion is the idea of letting the decentralized intelligence do its thing.
Like, you know, markets emerge and that's a way of truth seeking, whereas the village is more: expertise is important, some things require more centralized structures. It's a little bit more control-oriented in that way. So they're almost like different mechanisms [00:24:00] of power aggregation, something like that.
Nate: For sure. Yeah. I mean, you know, if you lose a little bit of faith in central planning, it probably makes you more riverian, as I call it. I should say, these are both groups of elites, um, the 1 percent versus the 1 percent, and the 98 percent is somewhere or nowhere, or maybe they encounter the river in a casino in Las Vegas occasionally and things like that.
Um, so they're competing for power and influence. When you had, um, Bill Ackman, the hedge fund manager, uh, trying to get the presidents of Harvard and MIT and Penn ousted, that's a very river-versus-village confrontation, for example. Or, um, the New York Times suing OpenAI is an interesting one, although the New York Times is, like, unusually entrepreneurial for a village institution.
So that's a little bit more understandable, I guess, but you do see this kind of clash explicitly. You also see some parts of the river in Silicon Valley having become more Trump-pilled. Obviously [00:25:00] Elon Musk, for example. Peter Thiel kind of got the Trump religion in 2016, and they're both big figures in the book.
That's probably not typical. I mean, I think if you kind of had a survey at, like, some, you know, elite World Poker Tour tournament, I think you'd have more Harris voters than Trump voters? I don't know.
Liv: Yeah, I think it would skew... it'd be close, but 60-40, something like that.
Nate: With a lot of third party votes of different kinds, probably too.
Um, yeah, but you have had, like, a permission structure in the river to vote for Trump that I don't think you had as much before. Um, you know, you go to, like, these finance poker games in New York, and, yeah, people are like, yeah, I might vote for Trump, and, like, just in my experience, that was less likely before.
Maybe they were actually voting for him and not talking about it. Um, but that was less likely to occur four or eight years ago, I think.
Liv: How much do you think the distribution of people sort of leaving the village for the river or vice versa changed, uh, over the last few [00:26:00] years? Because it seems like certainly for me, uh, the entire COVID experience was just like, Oh wow, our institutions suck.
People call it getting red pilled or whatever. I don't, I don't particularly like that term because it, it's, it makes it too much about Democrats and Republicans and it's, it's not so much that. It's, it's more about being like, oh, these centralized institutions, even though I think they're incredibly important and valuable because they provide like stability and continuity.
We can't be too chaotic and decentralized but um, at the same time they're clearly like woefully uh, ill equipped for the modern day challenges. What direction are people flowing in more?
Nate: Look, if you look at where influence has been moving, it's toward the river for sure. I mean, you know, academia... you know, the most important technology development today, many people would say, is AI, which is being done almost entirely in the private sector.
And OpenAI and Anthropic and Google will poach people from academia. So you have [00:27:00] like a little bit of a talent drain or brain drain. That's one issue. Um, and then, yeah, you have a loss of faith in, like, every institution, from the Catholic Church to the media; especially higher education in the U.S. is declining in trust. Um, so they've been losing status. Um, and in the river, at least, we bet on things, right? Maybe we're right, maybe we're wrong, but there's an actionable way to know who's right and who's wrong. And it's not about your credentials. It's like, do you have the right answer?
And can you make money on it, basically? Which is a very capitalistic attitude, to be sure, but, um, it seems to be prevailing in this world where there's a loss of trust. And then there are some particular things. I mean, yeah, for me, I think I have very moderate views on COVID, but, like, it was a loss of faith in these institutions to be able to think about uncertainty, calculate cost-benefit analyses, but also be truly impartial.
Um, it seemed like there was a thumb on the scale where, okay, everyone's supposed to, um, stay inside and socially distance, but [00:28:00] then there's a set of political protests, which have, like, a left-leaning cause, and then it's okay to go out. You know, which, I think it was fine. I think it was good to go out and be politically active.
It was probably also okay to, like, go and visit your, like, dying grandma.
Liv: It was the hypocrisy. That was what frustrated me so much. It was like, no, no one can go outside. This guy's getting arrested for wakeboarding by himself, or, you know, he was on a paddleboard out in the middle of the ocean and he's getting arrested.
But then a few weeks later, yeah, please go out and protest for George Floyd. It's like...
Nate: You can't have both. And the fact that they were so un-self-aware of how hypocritical it looked, you know? Um, it's also like when Fauci said at first, don't wear masks, they don't really do anything,
and then they flip-flopped and said, actually, we really need masks, they're really important, you might want to double-mask, for example. Like, people notice that. And both sides, but especially the village, tend to treat other people as non-player characters, meaning people that have, like, no agency or intelligence and [00:29:00] can't understand that it's, to use a game theory term, like a repeated game where credibility matters,
and, like, we adjust our strategy based on what you did last time. This is why I think some people in the village were so uncomfortable, for example, when Elon Musk took over Twitter, which has had, to be clear, positive and negative effects. Um, but that was kind of their safe space to, like, enforce the conventional wisdom.
Um, and it was actually important. I mean, the kind of pandemic version of Twitter was really important for enforcing discourse norms and for punishing people like me who are like, yeah, maybe we should think about this a little bit more, right? And like just asking questions became stigmatized as a right leaning or conservative leaning behavior, which I don't think it should be.
I mean, I think liberalism always involves a degree of skepticism of institutions. Um, but yeah, so we've seen a shift, for sure.
Igor: You mentioned that the village sometimes tends to treat people like non-player characters, NPCs. It also, yeah, comes together with this, like, basically gaslighting around some of these [00:30:00] issues, right, where it doesn't even seem like a good strategy, from the outside, for them to employ.
So it seems if Fauci or the Surgeon General at the time had said, hey, yeah, we kind of said that masks weren't necessary because we wanted to keep some for the, like, critical care workers. Sorry about that. Now they actually are more useful. I feel like a lot of people would have treated that better than if they just pretend it never happened and constantly kind of try to change what they have said in the past or why they did that.
Why do you think this strategy still prevails, though?
Nate: I think they maybe have never had to put anything of actual value on the line, um, which I know sounds harsh, but, like, um, you know, in the poker world or in venture capital or whatever else, everyone's had losing bets where they were embarrassed by that and lost money and had to correct for that,
and hopefully improve their process next time. If there's no self-improving mechanism... I mean, in principle, [00:31:00] the scientific method's supposed to be a self-improving mechanism within, um, the village. But, like, that's also pretty broken in the era of the replication crisis and the politicization of certain branches of science.
Um, yeah, so they don't really pay a consequence for being wrong, and therefore they have no incentive to be right.
Igor: Right. They have insufficient skin in the game. Like, they would say that they have some skin in the game in terms of, oh, well, I'm pursuing this job and I want to keep the job. But actually the institutions are designed such that, with tenure, for example, the bottom can't really fall out. The bottom is pretty high. So the skin in the game is kind of not the same as really putting your...
Nate: Yeah, or they'd be rational to lie to themselves or lie to others. Um, you know, especially in the village, where consensus and community are really important, there's a price to pay for bucking the conventional wisdom on COVID or things like that, right?
Um, you'll be ostracized or even canceled, potentially. And so, you know, I [00:32:00] think they're kind of narrowly rational, but have a different function that they're trying to solve for.
Igor: I think that's, by the way, also an example where prediction markets are particularly useful, uh, which would have been another methodology to get to true answers, outside of Twitter being bought and people now actually discussing it. Like, if, at the time, you had a prediction market on the question of whether Fauci and the Surgeon General were just lying to keep the masks for critical care workers, and will we later find out about it, probably the prediction market would have very strongly showed that it's true.
Nate: I know a guy, I think he's talked about this publicly, named Alex Washburn, um, who is a scientist who, you know, studied COVID dynamics carefully. And there was a big wave in South Africa, I think it was the Delta variant or some variant a couple of years ago, where, um, cases were increasing very, very fast, so people were getting worried again, and financial indicators that, um, traded based on this were going down, right?
Um, he realized that, like, actually, yes, cases are going up really fast. [00:33:00] However, hospitalizations are not and deaths are not. Um, so it's actually good news that we now have this, like, new dominant strain of COVID that's potentially less lethal. So he made a bunch of contrarian bets in the market and won a lot of money, um, by betting that this is actually going to be good in terms of, like, medium-term COVID forecasting, um, when all the public health people were like, oh, you have to be really careful now, and we're going to have, you know, Omicron 3.0, whatever.
Um, so yeah, there are sometimes opportunities to profit from those people being wrong.
Liv: I mean, one argument for why, strategically, the village doesn't incorporate probabilities and predictions, you know, grayscale thinking, into their sort of speech and their general approach to life is because it's hard to rally people around your cause.
Nate: Yeah.
Liv: And get group cohesion. If you're saying, well, this is only 80 percent likely to be true, which actually is very, you know, that's high [00:34:00] confidence. But if you use any kind of granularity and seeming uncertainty, then you might lose people. So, I mean, I don't know, based upon the game they're in, it's sort of a game theory optimal strategy to do that, right?
Nate: Yeah, this is kind of the classic pundit problem, where if I go on, you know, NBC News, like, "Nate, what does your latest forecast say?" and I say Kamala Harris is going to win, that gets a lot more hits than saying, oh, it's 52 percent to 48 percent. So there's that incentive. Um, also there's a lot of sloppy heuristics, like, um, what's it called, the precautionary principle?
Like, oh, do no harm. It's like, well, you're going to do harm either way. In COVID, either people are going to die, or all school children in America are out of school for a year, right? These are two pretty bad harms, and we have to have some way of calibrating them and weighing them against one another.
So, um, and you have this kind of strategic bargaining, where it is game theory. Like, on dating sites, [00:35:00] um, you know, every man under six foot will add, like, an inch to his height or something, or maybe two inches, because everybody else does anyway, right? So therefore you're coming across as effectively shorter to the dating pool unless you lie.
And so the equilibrium involves, like, a degree of fudging, up to the point where if your date encounters you, they're not like, you're not six foot three, you're five foot eight. You can't do that. That, like, is, you know, negative EV, but you can lie within certain boundaries.
Liv: Where does the river go wrong?
Like what are their blind spots or weaknesses?
Nate: I think there are, you know, quite a few actually, um, you know, one is that they tend to be a little bit cocky and overconfident because kind of by definition, especially people we talked about in the book where, you know, there is a bias when you're writing kind of a narrative book toward speaking with the success stories and not writing about the failure stories as much.
Um, but if you're somebody who has, you know, You know, made a couple of really good investments and you're kind of a winner or a [00:36:00] poker player who early in your career, you run really good and win a couple of big tournaments or make several big final tables. Um, you're probably due for some mean reversion and, you know, and you may be a little bit of an asshole until that happens until you've kind of had the shit kicked out of you a couple of times.
You're probably a little bit cocky and overconfident. Um, you know, look, there are certainly anti-capitalist critiques of the river. It is making a lot of money. Um, it's very individualistic. So if you're from the left, there are good critiques you can make. Um, you know, the river, I think, has issues with gender diversity and racial diversity to some extent.
Um, poker is very male. We can talk about the reasons for that. VC is very male. It's, you know, white and/or Asian for the most part, most of these communities. So that's, I think, a valid line of critique. Um, I think the river often has poor political instincts, uh, you know, with candidates who have been chosen... like, Michael Bloomberg was, like, a riverian running in the village.
Um, although he runs a media [00:37:00] empire, which is like, I guess, a little bit more ambiguous. Um, but he was a terrible candidate. He ran and spent a billion dollars on the presidential race and won one territory, American Samoa. And that was it. So why was he a terrible candidate? Because when you're trying to win elections, the grayscale stuff doesn't work, right?
You want big ideas that you can frame around that are unambiguous, you know, democracy and honor and truth and freedom and things like that. And it's just soundbites, basically. You want soundbites, basically. And also, like, most people aren't paying a lot of attention to politics. You need simple heuristics that can sell to people.
Liv: Yeah, I think a thing that I feel particularly Silicon Valley has a little bit of a... I don't want to use a word as strong as hypocrisy, but just, like, a little bit of, um, disingenuousness around, or at least a blind spot around, is that, you know, they talk about, we're all about decentralization, and, you know, um, [00:38:00] yeah, not having any kind of top-down control, that these bottom-up things emerge through the market.
And yet they make all these strategic plays such that the net result is that they end up accumulating a huge amount of power.
Nate: Oh yeah, and even as someone who... you know, I'm not Elizabeth Warren or somebody, um, but I think it's concerning when an Elon Musk, or when, uh, Andreessen Horowitz, if they're growing at 20 percent per year, and the evidence is that their IRRs, their rates of return, are actually quite high... I mean, you know, 20 percent per year compounding makes you extremely rich and extremely powerful.
I mean, the 10 richest people in the world are twice as wealthy now as they were 10 years ago. Um, so even if you're like, yeah, you're going to have a few huge winners, and maybe in some moral sense Jeff Bezos doesn't deserve it, and it's anti-utilitarian, you'd rather redistribute his income somehow...
You know, I mean, it's classic capitalist thinking: on net, capitalism lifts people out of poverty and it's been good, and so [00:39:00] therefore we live with that. Um, but when you have people who have so much money and also are very interested in political power personally, you know, maybe going back to more of an arm's-length distance and saying, hey, look, our job is to make great products,
right? Um, and of course, we'll preserve our self-interest when we're debating regulation in Congress, but we're not going to try to influence political outcomes as much, right? We're the gray tribe, we're aloof, we're a little libertarian, maybe we're a little anti-Trump because of threats he presents to institutions, but we're not going to be big sticks in the mud about it.
Like that's an attitude that I wish there were more of, I guess. And instead people get very, they do get pilled, I think, by politics.
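The compounding arithmetic behind Nate's point a moment ago is easy to check: 20 percent annual growth for a decade is roughly a 6x increase, and doubling over ten years corresponds to about 7 percent a year. Purely illustrative.

```python
growth_20pct_10yr = 1.20 ** 10                  # 20% per year, compounded for 10 years
implied_rate_for_doubling = 2 ** (1 / 10) - 1   # annual rate that doubles wealth in 10 years

print(f"20% per year for 10 years: {growth_20pct_10yr:.1f}x")                    # ~6.2x
print(f"Rate implied by doubling in 10 years: {implied_rate_for_doubling:.1%}")  # ~7.2%
```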
Liv: I think one of the interesting things going on in Silicon Valley as well, around where the river is perhaps not being as responsible with its risk-taking as it could be, is, for example, with VC, they kind of have an asymmetric payout thing going on where, with their gambles, [00:40:00] they get to kind of internalize all the benefits, but their downsides are sort of capped out at just, okay, well, I lost my investment here.
They can also externalize the actual, you know, negative effects onto the public. You know, they've got this very much move-fast-and-break-things mentality. It's like, okay, fine, that makes sense if the externalities don't exist, or, like, say they can internalize all the harms of whatever technology they are building, but that's not actually the case.
They're often running experiments on the general public, like, for example, social media algorithms. They're just putting them out there and, like, oh shit, turns out that made people unhappy or bad. Oh, whoops. What incentive structures could be designed such that those harms are more internalized?
Nate: Yeah, I mean, maybe so. For example, Sequoia Capital was sued by some investors over their investment in FTX. Um, as far as I know, I don't think that's [00:41:00] going very far. But you'll hear over and over again the thing you were saying before, right? They'll say, well, you can make a thousand X your investment, but you only lose one X.
So it's a really, really good business, right? We are encountering more technologies now that have maybe more downside than the classic kind of semiconductor and computer industry that was once the heart of Silicon Valley. Um, social media, I think the net utilitarian effects on society are deeply ambiguous, maybe skewing negative, for example. Um, and with AI,
then even, like, Sam Altman will say, yeah, this could actually, um, eat all the universe and turn it into a paperclip factory, but the benefits outweigh the harms. Um, but it's a little asymmetric, because he stands to benefit and OpenAI stands to benefit in the world where it turns out well, but we all die anyway in the paperclip world.
And so, yeah, I don't know. I mean, certainly the EU takes an attitude toward having greater regulation. Um, but, you know, if you have more tolerance in the U.S. and China and other places, and regulation can be very hard to enforce, you kind of have, like, an arms [00:42:00] race or prisoner's dilemma dynamic. Um, but no, it's a valid question.
Um, you know, look, one takeaway of the book is that we need these two communities to communicate and understand one another better. I mean, we need to have smart regulators monitoring AI. I don't have any policy recommendations in the book, but, like, this is a technology that has the power, according to the people who are actually creating it, to destroy the world.
Um, nuclear weapons when they were developed were developed by governments, right? If you had like Raytheon developing and testing nuclear bombs, you'd probably want to have a lot of regulations about that. So we should think about that with AI stuff as well.
Liv: Right, and it's, it's a little frustrating to see that, for example, Sam Altman has previously said yes, we, we need to have some kind of regulations.
because this is a potentially very powerful and dangerous technology. And there's now an incredibly watered down bill, the SB1047, right? Um, which has, as far as I can tell, incorporated a lot of like the big people's complaints. [00:43:00] There's still very strong, uh, quite organized, um, lobbying against it coming from VC.
And, uh, OpenAI and some of the other companies, I think Anthropic have now said they're okay with it, but it's like, if not that, then what would they ever allow? They've been talking a big game of allowing regulation, but if they won't even allow this, then it's like, I don't think it's going to happen.
Nate: You can't trust them to self-regulate, you know, because look, in some ways, I think, if you look at the three major labs, Google, Anthropic, and OpenAI, you have leaders that are relatively safety-conscious compared to just a random grab bag of, you know, qualified people from Silicon Valley. But we saw with, like, the failed OpenAI board coup that the engineers and the capital and everyone else are going to flow to the providers that are more aggressive, right? And now we're lucky that there are only three and a half, depending on how you count Meta, right, um, you know, important [00:44:00] players in the space. Well, actually, I don't know if that's true. You might think there's more proliferation, or if models get cheaper, then you can have more people who are capable of building dangerous models kind of in their garage. I think it's probably not true, but, like, I'm not quite sure where the thresholds are for where compute gives you X amount of power. But, um, yeah, I mean, we need to get smart regulators on this problem right now. I think there's kind of been a little bit of a plateau in, like, LLM capabilities, which is good, but, like, now is a good time to start having adult conversations.
Igor: Which specific regulations are the right regulations is a hard question. I think it's, uh, interesting to note that the assumption for why the methodologies of people in the river work is also that they have skin in the game. It's one of the benefits, right? But skin in the game doesn't quite work at the point we discussed: you lose one X whether you're creating a pandemic because you did some pathogen research or whether the company just fails.
Probably the CEO [00:45:00] of the company, they actually have faced liability, but investors not that frequently. And Sequoia, despite the lawsuits that are probably going nowhere, um, they're still the most reputable investor that exists, even though they, like, went down on their knees in front of SBF as much as I've seen anyone ever do.
Liv: They gave him the best blowjob of his life. Yeah, yeah, yeah. For sure. The full hawk tuah to SBF.
Igor: And frankly, others as well, right? Oh yeah, no, it wasn't just EA. Like, everybody was really down on their knees in front of SBF, but so were the VCs. But the thing is, EA has taken a massive reputational hit.
In part obviously deserved, and, uh, Sequoia, not so much, which is kind of interesting, because, like...
Nate: Yeah, "we'd invest in it again," which in some ways is a refreshingly honest response, I suppose.
Igor: But it also makes sense if, as an LP in Sequoia, I'm looking at it, or as a potential future investor into Sequoia, even though I don't even know if they manage outside funds, actually, [00:46:00] but, uh, you're like, well, you've had this miss, but if you had, like, ten 100X hits, I'll just keep going.
Like, the way investors are measured is not by their failures, it's by their successes.
Nate: Yeah. Maybe in principle, if you had, like, a once-a-decade scandal, like, uh, FTX or Enron, maybe there's some threshold in principle where you could have some extra degree of liability, like a penumbra of liability that extends beyond the principals in the company to the early investors.
Igor: I don't know. You mean it as a suggestion? Because currently it doesn't look to be the case.
Nate: No, as a, as a conceptual idea. I don't know, I'm not an expert on regulation by any means at all, right? I'm just saying, like, if you had some super, super high threshold for the extraordinary frauds and things like that, the Bernie Madoff-type, SBF-type frauds, that there is some expanded liability for people who enabled them,
just to create a little bit of a deterrent, right? So it's not purely 100X versus [00:47:00] one X.
Igor: Yeah. I'm also kind of, uh, on the side, similar to the people who are anti-regulation, that, man, if we can find a non-government-intervention solution to this, I'd prefer that for sure.
And that could be just, hey, can we, through journalism or, like, kind of, uh, reputational damage, make sure that the reputational impact actually kind of, um, occurs on the basis of people seeing what happened, right? Because Sequoia got to just take down the blog, um, yeah, which is fine, they should be able to, but, uh...
Liv: I think the reputational damage effect, though, has just become so capped, because, as you said, the internet has such a short memory. Like, everyone just has amnesia these days, because there's just so much going on.
Nate: You can create a little cult and bubble around yourself, right? And people like comeback stories and reinventing yourself. So yeah, no, I mean, "capped" is a good term we're borrowing from poker here, I think. But yeah.
Igor: Well, frankly, probably if they did [00:48:00] another extremely bad thing like FTX, then maybe they would suffer more. Um, yeah, it's hard to judge, actually, how much reputational damage they've faced, right?
Liv: I wonder if you can leverage reputational damage particularly within the river,
so the other Riverians are like, okay, well, that was just bad, you played bad poker there, essentially, you know, you played the equivalent of going all in every hand, just blindly and dumbly. That could be some kind of error-correcting mechanism.
Nate: Maybe because Riverians are, like, hesitant to be as judgmental as the village, they err too far on the side of tolerating bad behavior, right?
So in the poker world, to me, people who are known scammers or known cheaters, they still come and play. They just turn up. There's kind of, like, this six-month probationary period and then they come back and play, and, like, that seems quite irrational in some ways, right?
Liv: So what do you think are the most promising paths [00:49:00] to building a nice rainbow bridge between the Riverians and the village so that they actually do work together?
Because there's real wisdom and value in both camps. And part of the problem is that, like, as you described, the world is becoming sort of more bifurcated in its risk tolerance. You've got people who are willing to, like, double-or-nothing the light cone of the universe on a slight plus-EV edge, and then you've also got people who won't walk down the street without a COVID mask on.
How can we bring the two communities together such that there's, like, a sort of more win-win-y, less rivalrous dynamic?
Nate: I think one thing that might seem like a minor point, but, um: have government behave in ways that are more entrepreneurial in terms of the staffing, right? So, like, if you're really good at your job, you should be able to make a lot of money for the government if you increase expected value for taxpayers, basically, right?
And you should have less bureaucracy in government, for instance. Some of the smaller countries, the Scandinavian countries or Singapore or [00:50:00] whatnot, um, have this, and they have higher kind of state capacity as a result. I do think you do see some self-correction, though, in the village. The fact that, for example, colleges are bringing back standardized tests, I think, shows some ability to self-correct.
Um, I don't know that I want to talk too much about wokeness, quote unquote, and that cluster of topics, but you've seen a retreat from that. I thought it was a particularly village way of looking at the world that's now, I think, in some degree of decline. Um, and you've seen, I think the, you know, I think you've seen the river kind of get arrogant and make certain types of mistakes recently.
And so you do have some kind of rebalancing. I hope the book in some small way can help to facilitate that.
Liv: Are there like any promising technological solutions?
Nate: You know, I mean, prediction markets are a good way. And I think normalizing them is probably helpful, because people have no problem with, like, the weather forecast, right?
If there's a 30 percent chance of rain, then you bring an umbrella, or maybe you don't, if it's a short trip that you're taking. Um, [00:51:00] so to normalize that as a way to kind of tempt people into grayscale thinking a little bit, where it just seems natural, I think that would be helpful.
Liv: Yeah.
I mean, I went on a whole campaign like five years ago talking about, let's try and quantify our uncertainty, like telling the story of when JFK was being briefed by his chief of staff prior to the Bay of Pigs invasion. Apparently the military advisors had told his chief of staff that it was only about a 25 percent chance of success.
Right. Right. But then the chief of staff told JFK, uh, Oh, it's a fair chance of success.
Nate: Oh, yeah.
Liv: And he interpreted that because a fair chance is such a subjective phrase. He presumably interpreted that as something much better than 25 percent because we know he went ahead with the invasion and then that Um, created essentially the Cuban Missile Crisis, right?
And you actually give some estimates on the likelihood that the Cuban Missile Crisis could have ended in [00:52:00] a nuclear war.
Nate: So in the Toby Ord book, The Precipice, which does try to be very quantitative about these different catastrophic and existential risks, JFK thought there was like a one-third to one-half chance it could provoke a nuclear war. So in the range of 33 to 50 percent. Um, you know, post facto, maybe we have some information now, and I don't know, I haven't really studied that particular case in detail. But yeah, look, I think the fact is that probability is a number which is bounded between zero and one.
Um, so yeah, sometimes when you put a number on things, people take it a little bit too seriously. Um, it's important to distinguish between, hey, this is just kind of my back-of-the-envelope estimate, versus, this is from a very reliable model, right? I kind of think people should use different fonts when they communicate online, right?
Like Comic Sans, a goofy font, for "I just pulled this out of my ass, I'm giving you a number," versus Times New Roman for a proper, actual...
Liv: Very formal. Yes. Rigorous. I like that idea. It's just like another [00:53:00] dimension of information you're giving, essentially.
Igor: That's a bit of a peeve of mine as well, when people say, oh yeah, I'm just guessing it's 27.6 percent. It's like, well, now you're communicating something entirely different. You shouldn't be allowed to go into false precision when you're very uncertain, basically. Or you need to somehow highlight that with Comic Sans, I suppose.
Nate: It's funny because when you're verbally communicating, you have additional tools available.
You can say "20 percent" one way versus "20 percent" another way. So, you know, you lose those tools sometimes in flattened internet communication. Yeah.
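The "different fonts" idea could be mocked up in a few lines. This is purely illustrative, a sketch of the concept rather than anything Nate actually uses; the labels and the rounding rule are invented, the point being that a gut guess shouldn't get to carry false precision:

```python
# Hypothetical illustration of the "different fonts" idea from the conversation:
# tag a probability with its provenance, and round rough guesses so they
# can't smuggle in false precision. Names and thresholds are made up.

def report(prob: float, source: str) -> str:
    """Format a probability differently for a gut guess vs. a vetted model."""
    if source == "back_of_envelope":
        # Round to the nearest 5 points: "roughly 30%", not "27.6%".
        return f"roughly {5 * round(prob * 100 / 5)}% (gut estimate)"
    elif source == "model":
        return f"{prob:.1%} (model output)"
    raise ValueError(f"unknown source: {source}")

print(report(0.276, "back_of_envelope"))  # roughly 30% (gut estimate)
print(report(0.276, "model"))             # 27.6% (model output)
```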
Igor: It also seems that some people just stay away from probabilities because they're kind of dirty or something, where on some topics people don't want to look at it as being a probabilistic event, but rather as something that was meant to be. Um, is that a reason? Like, do you think that the public sees probabilities as, kind of, I don't know, taking away meaning or something like that?
Nate: I [00:54:00] mean, you know, Peter Thiel, who had a kind of religious upbringing, um, and I think if you pressed him, he'd probably be maybe a determinist, uh, he had that objection when I talked to him, right?
And he had, um, a version of that, a more sophisticated version of it that you can find in the book. Um, and he's like, yeah, I think kind of the quants have gone too far and they've arbitraged all possible value out of it. And now it's all about kind of vibes, right? Where if you read the vibes well, then that's actually where you gain a market advantage, potentially.
That's a different question though. Yeah, no, I think, I think people find it taboo, um, to speculate or bet on certain things, right? It's, it's kind of like, I mean, we have these in society all the time, right? Like, um, if you go over to a friend's house for dinner, then in 99 percent of communities, if the friend said, Oh, you know, these tomatoes I bought were really expensive at the farmer's market.
I'm going to charge you your share. Give me seven bucks for the ingredients for preparing this dinner and my labor time; actually, let me give you a discount because I'm your friend. Like, that would seem kind of icky. And even for me, when I go to, um, [00:55:00] like the prediction markets conferences, I'm like, okay, some of this is a little bit too much, like people betting on their, um, on their sex lives, right.
Or bet on whether they'll still be religious 20 years from now. And I think, I think probably having some degree of taboo is probably worthwhile or it's just human nature, but yeah.
Liv: Yeah. I mean, we, our last episode we just recorded was, uh, about our relationship and we were making new 10 year predictions on the likelihood that we'll be together in another 10 years time.
Uh, I wonder. Some of the feedback I've gotten when I've told people that we've done this is, why would you be trying to quantify your relationship? Isn't it like, either you're soulmates or you're not? Um, so yeah, it sort of falls into that realm.
Igor: And there is, in part, also a reason for it.
For us, it works because we don't take it to mean that the uncertainty one feels is inherently bad or a bad sign about the relationship. But there is also the counter-effect, where you do increase the [00:56:00] probability if you are just a full believer in the thing, right? Like, that's why, for people who are entrepreneurial, for example, and start a startup, it's like, is it maybe even better if they just believe, not that it has a 10 percent chance of success, but just that it can only succeed?
Failure is not an option.
Nate: Yeah. I think many, if not most, irrational behaviors are rational on some higher meta level of rationality, right? Um, there are reasons to be strategically optimistic or pessimistic. Um, you know, for me, I was trying to estimate how many subscribers a newsletter would have if I did that instead of signing with a big company, for example, and I kind of way underestimated it, which is good for me.
Um, but it could have led to mistakes, and I took other options. But I think it was probably some rational risk aversion, right? At this point, you're encountering some diminishing returns, you have reasons to be risk averse. So if I had been paid to make the forecast, and paid for that accuracy, it might have been better, right?
Um, but to kind of [00:57:00] implicitly give yourself like a 20th percentile outcome when you have reasons to be cautious and an 80th percentile outcome when there are reasons to be optimistic and you have less to lose, then that might be not the worst trait.
Liv: It's interesting, the value of delusion. Because again, you have these people, um, someone like Phil Hellmuth, right, in poker.
For those who don't know him, he's the most successful World Series of Poker player by a huge margin. It's truly incredible just how frequently he has managed to wade through these huge lotteries, essentially, these big-field tournaments, and end up winning. And yet, on paper, if you ask a lot of top poker players, they don't think he's a good poker player, because he seems to play much more intuitively, and he doesn't sit and work with these solvers that everyone works with to find game theory optimal solutions.
Yet he just finds a way to keep on [00:58:00] winning. And I've never met anyone with more self-belief than him. Yeah. To the point of delusion. You're like, okay, it's bordering on the realms of madness. And there are other people, I think, these extreme outliers who fall into this category, like an Elon.
People like that, who almost, it's like their force of will changes probabilities, or they are operating on a level outside of normal probabilistic reality. Um, have you ever dabbled with that yourself? Like, thought about, can I just try and trick myself? Did we try to be irrational?
Or I guess by trying to be irrational, you're not really being irrational anymore, because you're aware of it. And so it doesn't work.
Nate: I mean, I think in poker tournaments, visualizing success probably is helpful when you get to the elimination rounds, when you get deep in the money in poker, right?
To envision yourself, I'm going to be at a final table playing really well. I'm going to win the tournament. They'll write a nice post about me at poker news and [00:59:00] things like that. I think, I think that's probably better than the other way around because, you know, um, having made some deep runs in the course of writing this book, um, you see people totally lose their shit on day three, day four, day five of the main event or something like that.
Um, and it can be self-sabotage, and it can be, I think, imposter syndrome. Or, you know, at the World Series, where everyone starts leaving and all your friends are out of the tournament, you feel survivorship guilt, I think, sometimes.
Um, maintaining your optionality, I think, is often very helpful. Um, which is correlated with open-mindedness, I think, in general, and extroversion and things like that. Um, I mean, I think with Phil, I think there are some people who are on some level deeply insecure or awkward or introverted, and they kind of wear a mask, and before long the mask becomes the person, right?
Um, because it's kind of funny. I mean, I had a long interview with Phil in Palo Alto, and he's telling stories about how he's [01:00:00] one-upping literally Michael Jordan and Tiger Woods and people like that, and talking about Obama: I know, I'm too busy to see Obama.
And it's kind of great. I mean, he's like a Seinfeld character, basically. Um, and it's kind of amusing. And I know him well enough to know he actually is a kind-hearted person, but it's kind of charming, if you're kind of in on the joke a little bit.
Liv: One thing I'd love to pick your brains on is alternative structures, um, for voting systems.
Because, I don't know, it seems very suboptimal to me that the US is sort of stuck in this very inadequate equilibrium that is this two-party system, which seems like an inevitable result of using first-past-the-post as a form of voting. Um, and you know, there are all these other alternatives. You've got proportional representation, approval voting, ranked choice.
Um, so are there any of those that you think are just an obvious thing that the U.S. should transition to, to fix this? Or is it not a [01:01:00] problem in the first place?
Nate: I mean, look, the U. S. has diverged from Europe in all types of outcomes over the past couple of decades, right? Um, our economy is actually doing quite a bit better than the EU, but like life expectancy, particularly, particularly for American men, has stagnated, right?
That seems worrisome, because the whole neoliberal capitalist thing was always like, well, yeah, some people make a lot of money, but people are better off, as you can see in tangible improvements in both quantity and quality of life, right? And if life expectancy begins to stagnate in the U.S., it's like a, you know, a four- or five-year gap.
If you ran a regression of, you know, GDP versus life expectancy, we're like four or five years in the U.S. below where we should be. Um, it's because of, you know, opioids and cars and guns and a very unequal healthcare system, and lots of things like that. You know, it's kind of about risk-taking behaviors. So, yeah.
So, yeah. Um, I mean, ranked choice voting, I think most wonks, including me tend to like it. The one flaw is that it puts a bigger burden. On [01:02:00] the voter where now you have to rank five candidates. And what happens in practice is that, um, you know, if you're like a super well informed political nerd, then you have your whole spreadsheet.
Um, if you're not, if you're someone who takes a minute to look at it, then people undervote the ballot, they'll just make their first choice. So kind of like it gives people who are more invested in politics, essentially more votes de facto, which I think is probably. Okay. But like, you can argue if you're from an equity perspective is a little bit
Igor: strange.
Well, if it, if it ends up being practically that, but everyone had the same choice to get themselves the position as well, then yeah, I guess
Nate: your choice to do the research. Yeah. I mean, it's
Liv: I mean, it's definitely, again, it comes down to the whole equality versus equity thing. It's like, do you want equal outcomes or equal opportunity?
And I don't know. I mean, I think one of the biggest critiques of democracy is that it gives everyone the opportunity to have the same influence, but in practicality, those who actually do the most work and study it the most [01:03:00] have more sway, so it incorporates a form of meritocracy in with it. Um, it's also just more information into the system, right?
Nate: Which is usually a good thing. I think on balance, I like ranked choice voting for sure. In some states, you know: in New York, we do mayoral elections by ranked choice, and the state of Maine and Alaska have ranked choice for most of their elections now. Um, so it's gaining ground. Um, it helps, in general, more moderate candidates to get elected.
It also helps the formation of different third parties. So I'm in favor. But I think it's worth noting, and I agree, the thing about the U.S. is that it's still a fairly meritocratic place, right?
Um, you know, like, I don't think people who are intelligent are lacking for having lots of ways to make money and have high social status and enjoy their lives. These days it might be too much, almost, you know, where if you're like a person of very high agency, [01:04:00] then the world's great, right? Better than ever.
Um, and if you're not of high agency, then it's hard to navigate and you're under duress a lot of the time.
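To make the ranked-choice mechanics and the undervoting point concrete, here is a minimal sketch of instant-runoff counting with invented ballots, two of which are truncated to a single choice the way an undervoted ballot would be. Candidate names and ballots are hypothetical, not from any real election:

```python
from collections import Counter

def instant_runoff(ballots):
    """Count ranked-choice ballots: repeatedly eliminate the last-place
    candidate and transfer each ballot to its next surviving choice.
    Truncated ("undervoted") ballots simply exhaust once all their
    listed choices are eliminated."""
    remaining = {c for ballot in ballots for c in ballot}
    while True:
        tallies = Counter()
        for ballot in ballots:
            for choice in ballot:
                if choice in remaining:
                    tallies[choice] += 1
                    break  # only the top surviving choice counts this round
        total = sum(tallies.values())
        leader, votes = tallies.most_common(1)[0]
        if votes * 2 > total or len(remaining) == 1:
            return leader
        remaining.remove(min(tallies, key=tallies.get))

# Hypothetical ballots: two full rankings, two undervoted (first choice only).
ballots = [
    ["Moderate", "Left", "Right"],
    ["Right", "Moderate", "Left"],
    ["Left"],
    ["Moderate"],
]
print(instant_runoff(ballots))  # Moderate, after Right is eliminated and transfers
```

The ballot that ranked Right first still counts in the second round because it listed a backup; the single-choice ballots just stop counting if their pick is eliminated, which is the de facto "fewer votes" effect Nate describes.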
Igor: What do you think of approval voting? Some people prefer it because it has the simplicity, at least, right? You're just marking everyone you like, and if you mark one, that's fine; if you mark three, it's fine.
Nate: I think it might be a good compromise for sure. Yeah
Igor: And then you would get the moderate candidate, is the theory at least, because they would appeal to more people. Similarly, actually, first-past-the-post was meant to push, or at least a good version of it pushes, both parties to fight for the median voter.
It doesn't seem to be what's actually happening currently, though. We seem to have instead fallen into a more polarized version of it, which in part also obviously comes from other reasons, like the media, et cetera.
Nate: Yeah, interesting. I mean, um, my suspicion is that the parties are probably pretty close to GTO, to game theory optimal, given the fact that, you know, both parties win elections about half the time, and the coalitions are [01:05:00] kind of shockingly close to 50-50, right? If Harris pulls away and wins 54-46, by an eight-point margin, that would be considered an amazing historic landslide. 54-46 is still approximately 50-50, right?
Igor: Um, I mean, a counter to that would be, though, that if it's the case that 90 percent of people just have predetermined votes, and that has come maybe in part due to past polarization or the history of how they have gotten to those opinions, and only those 10 percentage points in the middle are actually in play, then 54-46 is a landslide victory.
Nate: Among those, no, for sure. I mean, you're getting like eight out of 10 of the persuadable people in the short run.
In the long run, I mean, look, um, parties have incentives; they want to make some trade between their ability to pass legislation, or accomplish their ideological goals, and their political goals. That's one constraint.
Get pissed off. Right. But also, you know, also these dissonant voices are often very passionate and have a lot of, um, a lot of influence in the media. Right. So I don't want to talk too much about like. You know, Gaza war stuff, but there are lots of people. Who have very moderate positions on what's happening and just like this is really tragic and I, you know, it's really bad situation and are not polarized, but they're not, they just shut up about it.
Like I never blog about Gaza stuff, right? Um, whereas the people who are more extreme will, you know, will protest or whatever else.
Liv: In your book, The Signal and the Noise, you talk about the importance of finding ways to distinguish meaningful information from the noise in the system, whether it's random [01:07:00] fluctuations or actual misinformation, in order to make more accurate predictions.
You wrote that, now, like 10 years ago. And it feels like since then, if anything, that ratio has gotten worse, because we've got generative AI, and, um, also polarization itself has increased, which is incentivizing people to create more noise about their opponents.
So how has that affected your models? Like presumably it's made it harder.
Nate: I don't know. I mean, the models are in some ways pretty robust. We're just looking at the polls, and all the other stuff filters through the polls. And then you can ask a question: are the polls getting more accurate or less accurate?
Um, you know, look, I'm sure it contributes to polarization, where people can cherry-pick which information they get. Um, it is kind of funny that somehow large language models take all this crappy content on the internet and turn it into something somewhat useful, with a whole ton of computing power, I suppose.
[01:08:00] Um, but yeah, look, I think it places a premium on the few credible sources of reliably high-quality information, right? Like, in this media environment, who's winning right now? It's basically the New York Times, which is the single most influential organization in the United States.
And, like, Substack, where it's individual authenticity, because you're touching grass and it's very granular and reliable, right? And the whole middle ground is kind of losing credibility, I think.
Igor: So generally, uh, more data would lead to better predictions, one would think, or at least it's a factor that can lead to it.
But then the ratio also matters of signal to noise.
Nate: The ratio matters, and also, you know, people have a tendency to overcomplicate models. More data leads to more overfitting sometimes, and more wacky assumptions, or more, you know, partisan assumptions embedded.
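The overfitting point is easy to demonstrate. Here is a toy sketch on invented data, nothing to do with Nate's actual election models: a flexible high-degree polynomial fits the noise in the observed points better than a straight line, then does far worse on the held-out later points.

```python
# Toy illustration of "more complexity can mean more overfitting":
# the true signal is a straight line; a high-degree polynomial chases the
# noise in the fitted points and predicts the held-out points much worse.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 30)
y = 2 * x + rng.normal(0, 0.3, size=x.size)   # linear signal plus noise
fit, held_out = slice(0, 20), slice(20, 30)   # fit on early points, test on later ones

for degree in (1, 9):
    coeffs = np.polyfit(x[fit], y[fit], degree)
    fit_mse = np.mean((np.polyval(coeffs, x[fit]) - y[fit]) ** 2)
    test_mse = np.mean((np.polyval(coeffs, x[held_out]) - y[held_out]) ** 2)
    print(f"degree {degree}: in-sample MSE {fit_mse:.3f}, held-out MSE {test_mse:.3f}")
```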
Igor: Do you think the ratio of signal to noise has changed, uh, over the last ten years?[01:09:00]
Nate: I mean, in principle it gets worse, right? The amount of useful information is not increasing as much as the total amount of, like, terabytes of data in the world.
Igor: The thing I wonder about is whether that actually made predictions harder or easier. So if you were to spend, um, 100 hours on a question to build a prediction in the year 2012, or you spend 100 hours on the same question in 2022, when do you think would you have gotten to a more accurate prediction?
Nate: I mean, just for reasons of faster software and more tools available, if you're skilled, I think it kind of leads to more winner-take-all dynamics, right? Um, if you're skilled, you have more tools than ever before, including AI tools. And if you're not skilled, then you're more likely to trick yourself, basically.
Igor: So, like, the Gini coefficient of prediction-making has worsened, or something, as well.
Nate: I think something like that. You know, I mean, when you have AI models that kind of create a rising tide, where it's [01:10:00] 100 IQ or 120 IQ, then maybe the difference between a 118 IQ and an 87 no longer matters as much anymore.
But, like, this varies a lot for different tasks, I think.
Igor: That's one of the things I hope for a lot, for sure. Gemini, for example, already has a million-token context window, and I sometimes just feed hundreds of pages of text into it and then get a summary. And, like, yeah, the compression that you can get in the future from just looking through a bunch of data, as long as there is not some consistent bias in the compression that they're doing for you, then you could get really useful information for everybody.
Nate: For the book, I mean, ChatGPT probably made me maybe like 1.2x more productive, um, for certain types of things. I don't like how it writes; I don't like its prose style. I'm kind of a language nerd. Um, but to vet certain discussions where it has expert-level expertise about some technical topics, about AI, [01:11:00] about crypto. And I talked to 200 people for the book, right, but to say, okay, as a first draft, let me vet this with a reasonably intelligent person who can check my terminology on some AI term that I think I understand, but I want another verification of that, right?
Igor: Yeah, in particular, the thing I love using it for is, it can put on the hat of pretty much anybody.
So you can ask it, how would a libertarian respond to this? Or, and, and just like go back and forth constantly. And that's, that's pretty great.
Nate: Yeah. I mean, you know, uh, it helps to have this kind of understanding of these different vectors or kind of like color palettes that you're laying or laying on top of one another and things that have that, like, like layered texture.
It's really good at, I mean, even now you see with some of the vocal stuff, what you can ask. It's like, give me, you know, read Mary Had a Little Lamb in a cockney accent, right? That's a very legible question for an AI, where it knows the poem and it has some notion of a cockney accent in [01:12:00] this.
And you create that vector. Um, so to just like develop an intuition for like, this is a good AI question and this isn't, or a good machine learning question, I guess we should say, um, like that's a valuable skill to have, I think.
Liv: What's some advice you can give to listeners who are trying to improve their own prediction making in terms of like mindset or certain tactics?
Cause you talk about this idea of foxes and hedgehogs and, you know, a certain short list of personality traits that you think people should be trying to cultivate more.
Igor: Especially something that's like, um, good bang for the buck in the sense of, yeah, outside of just really reforming their entire personality, just like a little.
Nate: I mean, poker, I do think poker is like this er, kind of singular activity where it teaches both the decision making skills and some of the people skills as well.
Igor: What are the main things you think that you've personally learned from poker?
Nate: So for me, I probably already had this kind of very [01:13:00] Riverian, mathy brain.
I mean, something I think I've gotten: I played a lot of poker for the book, um, kind of half for research purposes and half for practice, and maybe to make a little bit of theoretical EV. Um, I think what I've learned from poker is having a lot more mental focus, um, and mental stamina, right?
Where, you know, the first time, in 2022, I played basically five weeks straight at the World Series, and they're 12-hour days often, right? You'll go and enter some tournament, and you bust out and play the 2 p.m. tournament. And so, you know, when you're kind of solving math problems for 12 hours a day for five weeks in a row, then, I think, that improved some types of processing for me somewhat permanently, right?
I mean, I think the power to be really observant in poker is worthwhile. I think the most important thing is being in high-pressure situations, [01:14:00] where naturally, once in a while in a tournament, you make a day three, a day four of a big event, and you have actual money on the line, or maybe you're on TV or something like that.
Um, you know, once in a while I'll play a high-stakes cash game where it's real money that would sting to lose. Um, and so you learn to operate with this stress, and that is helpful in other situations, right? If I'm on national TV during an interview, you're having a stress response; your heart rate will probably go up. But if you're used to operating in that zone, that's, I think, a quite valuable skill.
Igor: So put up 10 percent of your net worth in a single poker game and, uh, learn to feel that. I wouldn't strictly recommend that.
Nate: It's hard to have simulated high stakes moments, but I do think, I mean, look, I think a lot of your EV in life comes from a handful of decisions that you make in times of great stress, right?
When you have a great opportunity, or, you know, when you see some tragedy unfolding and you have to make a decision, do I flee or remain here, and things like that. Um, so [01:15:00] to kind of manufacture those in the form of poker, where there's enough on the line that it actually stings if you lose, I think is probably good for some people.
Liv: Okay. So poker is one example. Anything else in terms of mindset shift? Is there anything that you have changed yourself personally from, you know, when you were 20 years old to today that has been most valuable?
Nate: I mean, if you take those personality tests, like the Myers Briggs, right?
Um, I'm very strong on the N and the T, but I've gotten more from INTP to ENTJ. Okay.
Liv: Give us some cluster of adjectives that describe those two things.
Nate: Okay. So it basically means I've gotten a little bit more extroverted in my external life and, um, a little bit more willing to trust my intuition.
Um, and the reason for the latter is that, and there's a lot about this in the book, your intuition can be very valuable when you have a lot of experience and you have [01:16:00] implicit data that is hard to quantify in a model, right? If you've played thousands of hands of poker, then you can just tell sometimes when somebody has it or they don't, right?
Um, you could probably quantify it and say, okay, well, when you see the heart beating here, or you see the way they put chips in the pot, but it's usually more subtle than that. It's kind of like a semantic cluster of variables. So when you have a lot of experience, and, you know, I'm 46 now, if you're still very intellectually and mentally active and still exploring life when you're 46, then I think life experience becomes more important.
Liv: It seems right now that certainly a lot of the media, and anyone involved in politics, and arguably, extending that out to the wider world, everyone's on tilt. I mean, okay, yes, the election's coming up and so on. But for those who don't know, tilt is the word poker players use to describe an emotionally heightened state where you're typically making bad decisions, because everything's just so intense and you're either angry or you're upset about something, and you're not thinking [01:17:00] clearly or rationally.
Uh, so yeah, I think it's pretty accurate to say that the media is on tilt. Do you have any tips, or messages you would like to give to those who you think have a lot of influence over the media, um, on how to reduce that, from your own experience?
Nate: Yeah. I just say like, don't let the haters get you down.
Um, especially when we're kind of going back to more of a Substackian, content-creator model, where being an independent voice can, um, be both creatively fulfilling and often a good business model. Um, there are a lot of people that got yelled at by pandemic-era Twitter that have survived and done fairly well.
I do think, by the way, the pandemic itself is part of why we had this epidemic of tilt. Um, if you'd asked me to predict what'll happen if you tell people to be really afraid of being around other people for a year, and all these institutions that create, you know, social [01:18:00] enjoyment are shut down,
you'd think people might just have some long-term emotional damage from that and overcompensate. And so, I mean, you know, when you saw Vegas bursting back to life, once people felt, you know, not guilty about traveling again, that seemed like an important thing. Um, so I think COVID is going to have some long-term consequences, especially for people that were, like, age 14 to age 20 when COVID peaked, when it's right in your formative coming-of-age years. You know, unfortunately, we'll probably have some cohort data that will let us quantify that down the road.
But I think it's going to be tough on that generation.
Liv: Because I remember when I was 17, all I wanted to do was be hanging out with my friends and socializing. You're in search mode, right? Search and explore. That's what young adults want to do. They want to expand their horizons and meet new friends and go out and have novel experiences, and if for two years you can't do any of that during that very formative time, I don't know, I would have gone insane.
So, [01:19:00] yeah, I have a lot of sympathy for the kids that had to miss out on that. I guess, again, it's the bifurcation. I expect some will be trying to massively overcompensate for that and try and make up for it, and then others just became very, very fearful.
Igor: Like as a poker player, usually how you get off tilt is also by, um, remembering what you've learned with higher certainty, focusing on the process, not, not worry about the current result, et cetera.
Right. So, that goes in line with the, don't focus on the haters, like, just like, um, rely on the things that you know to work. Maybe in journalism, it's like good old journalistic practices.
Nate: Yeah, so this year I played the World Series, but there was a lot of political news, right? And so I just don't have my full bandwidth when I'm at the table.
And in some cases, it's probably plus-EV for me to totally punt and go write a blog post; you know, if you get X subscribers, it outweighs the expected value of your stack, and things like that. Um, but when you're not [01:20:00] at your best, then remember kind of the ABCs, right?
So I'll actually play more like my impression of what GTO play would be when I'm tired, which means I'll randomize more. I'm like, I don't trust myself in my current fatigued state to read somebody well, where I could have any advantage from that. So I'm going to look at the clock, and, oh, this is a fold this time, right? And maybe you get shown a bluff or whatever, and, like, whatever.
Um, you know, in the book I talk to people who are, like, astronauts and mountain climbers, who take physical risks as well, and they're like, yeah, just don't try to be a hero when you're under duress. If you can remember your training and do the ABCs, you'll be ahead of 95 percent of the population.
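The "look at the clock" trick is just a makeshift random number generator for playing a mixed strategy at a fixed frequency instead of trusting a tired read. Here is a toy sketch of that idea; the 70 percent fold frequency is invented for illustration, not a figure Nate gives:

```python
import time

def clock_randomized_action(fold_frequency: float) -> str:
    """Use the current second hand as a crude randomizer: fold if it falls
    inside the first `fold_frequency` slice of the minute, otherwise call.
    (Players do this at the table so the mix isn't driven by mood or fatigue.)"""
    second = time.localtime().tm_sec          # 0..59, effectively arbitrary
    return "fold" if second < fold_frequency * 60 else "call"

# Suppose the pre-computed strategy says this spot is a fold 70% of the time.
print(clock_randomized_action(0.70))
```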
Igor: Yeah. I feel like, uh, similarly, it also relates to what we previously talked about with Fauci, et cetera, where using the gaslighting basically seems to me like an exploitative strategy rather than the very straightforward one. And you shouldn't be exploitative when you're not performing well, if the [01:21:00] landscape is on tilt.
Exploit less. Just go back to the ABCs, I think is a pretty good message.
Liv: Define what you mean by exploit less.
Igor: So exploit in poker means, basically, notice what the error is that someone else is making and try to adjust your strategy such that you gain from that error a bit more. But whenever you do that, you open yourself up to being noticed, to having adjusted away from the kind of optimal strategy, and then you are yourself now prone to be exploited. And then it becomes this cat-and-mouse game, basically, of who thinks one level ahead. But the problem is you can't really play the cat-and-mouse game well if you're not at your sharpest in the moment, which is what we discussed with the tilt question.
Nate: For people to understand that would go a long way. Because, I mean, that's a harder concept. Like, in the book, there's a lot about expected value, and that you can kind of explain on a mainstream podcast, right? Um, the game theory stuff is a little bit harder, even though in some ways it's kind of more important to the book.
But the notion that, like, [01:22:00] if I'm kind of cheating the system and taking advantage of people, I have a little bit of fear in the back of my head that I'll get taken advantage of myself and the system will adjust and adapt. That, I think, is actually a pretty good heuristic that travels well to lots of things in life.
But people don't, very few people like actually internalize that unless they like actually have like played poker.
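The cat-and-mouse dynamic Igor describes can be shown with something as small as rock-paper-scissors. This is a hedged sketch, my own illustration rather than anything from the book: a deviation that exploits an opponent who over-plays rock is itself crushed by the obvious counter, while the equilibrium mix neither gains nor loses. All frequencies are invented.

```python
import random

BEATS = {"rock": "paper", "paper": "scissors", "scissors": "rock"}

def payoff(mine: str, theirs: str) -> int:
    """+1 if my move beats theirs, -1 if it loses, 0 on a tie."""
    if mine == theirs:
        return 0
    return 1 if BEATS[theirs] == mine else -1

def play(my_weights, their_weights, rounds=100_000):
    """Average payoff per round when both sides play fixed mixed strategies."""
    moves = list(BEATS)                       # ["rock", "paper", "scissors"]
    rng = random.Random(0)
    total = 0
    for _ in range(rounds):
        total += payoff(rng.choices(moves, my_weights)[0],
                        rng.choices(moves, their_weights)[0])
    return total / rounds

equilibrium = [1/3, 1/3, 1/3]          # the unexploitable mix
rock_lover  = [0.6, 0.2, 0.2]          # opponent error: too much rock
exploit     = [0.0, 1.0, 0.0]          # always paper, to punish that error
counter     = [0.0, 0.0, 1.0]          # always scissors punishes the exploiter

print(play(equilibrium, rock_lover))   # ~0: equilibrium doesn't profit, but can't lose
print(play(exploit, rock_lover))       # positive: the exploit gains
print(play(exploit, counter))          # negative: the exploiter gets exploited
```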
Liv: You end the book talking about three maxims that you believe the world needs to incorporate more, or at least move towards. Um, and I love them, because they're so incredibly win-win-y.
You have that people should be looking to maximize agency, so their choices; then plurality; and then reciprocity. Can you explain what those words essentially mean to you and why you chose them?
Nate: So agency, to me, kind of comes out of the French Revolution slogan, Liberté, Égalité, Fraternité.
Um, you know, the book goes through this whole thing, how if AI is the equivalent to the industrial revolution, [01:23:00] we also had the French revolution and the American revolution then, and the enlightenment, a change in philosophy. Um, so the analog to liberty is agency, um, which means having robust choices that you can make, right?
I mean, a lot of times online, you're filling out some form and there's some opt-out, but it's really hard to find, right? Technically you have the choice, but do you have agency there? And, kind of more broadly, do you have the power, in a non-coercive way, to shape your own life and make rational choices?
Um, because I worry, like I said before, that the super capable people, who are healthy mentally and physically, and have some means, and have the right amount of cynicism and skepticism but are not pilled, they do really well, but maybe it's bad for, like, 95 percent of people. So I want to make sure that we think about, you know, just having nominal choices is not sufficient anymore.
Liv: Right. And also technology is getting better and better at removing people's agency. For sure. Not all technologies, but some. [01:24:00] As algorithms, for example, become more personalized, and therefore better at achieving their objective function, which in social media, for example, is keeping people scrolling, then as it learns more about you, it encroaches on even the most high-agency people. If you rank everyone on a sort of spectrum from the lowest-agency to the highest-agency people, that watermark line, of who is able to keep their head above the addiction parapet, essentially, or above the tide, is going up and up.
Um, so it sort of goes both ways. People themselves need to work on finding ways to improve their own agency, but also those who are in power, for example the Riverians who are building technologies, need to not build stuff that removes people's agency.
Nate: Yeah, I think to be pretty strongly against [01:25:00] paternalism. Um, you know, when Google Gemini had the phase where it was drawing, like, multiracial Nazis, for example. I mean, obviously Google was embarrassed by that. Um, but the notion that, oh, we're going to coercively insert some particular notion of social justice into information that's supposed to be, you know, don't be evil, don't manipulate people. Like, that seems quite bad.
Liv: Right. That's kind of what the whole woke thing is, right? It's about like, Oh, these things, it's about deeming what is safe and what isn't safe. And this is harmful and this isn't, and being terrified of anything that could be interpreted as harmful, particularly to a protected class. But to the point where it just like, it's overfitting for everything.
Nate: Yeah, and it's risk-averse, right? And neurotic. And, you know, let people make choices, but make sure they're well-informed choices. So, what's plurality? That's probably the most straightforward one. I don't think we want any one faction to dominate. [01:26:00] Um, this comes out of an idea from Nick Bostrom, who is the EA-adjacent philosopher at Oxford.
Um, who has an idea for, like, a moral parliament. So he's accused of being a utilitarian, right? And he said, actually, I'm not; I would want to have a moral parliament where the utilitarians have some seats, and maybe the libertarians have some seats, and, you know, classical conservatives or liberals have a say too. I thought it was a really kind of beautiful idea.
Um, but also in general, I mean, you know, if you're a Riverian, you should like pluralistic decision-making mechanisms, like the market economy, for example, or democracy, I think. Um, I worry, with or without AI, either way: what if somebody just gets, like, 50 percent plus one of the power and the say in the world?
Um, like, what if in China the government just gets really good at permanently suppressing AI? It might not be in its political interest to do that, but let's say it just gets really good at it: [01:27:00] actually, we are now good enough to win this battle. Um, and you have this kind of dystopia, not that China's dystopian, but like Russia or something, right? It might be kind of dystopian.
Then that seems potentially really bad. And also, a lesson any gambler, especially in sports betting, knows is that you want consensus when you're making a bet. Um, if one model says one thing and the other says the other, and you never come together... If several sources point in the same direction, or you kick the tires on an idea in different ways and it's robust to that, then that's very valuable.
Liv: It's trying to incorporate the idea that you want to keep a system as complex as possible, and diverse in the true meaning of diversity, not in the woke meaning but actual diverse opinions and groups, and make the system as interesting as possible and not, you know, collapse down into a simplified monolith.
Nate: No, and by the way, I do think that Silicon Valley could do a lot better on classical, capital-D diversity, and that would probably contribute to intellectual diversity too, for what it's worth. But yeah.
Liv: [01:28:00] Uh, okay. And then the third one is reciprocity.
Nate: Which is kind of the most nerdy one of all. It comes directly from, from game theory, the notion that like not treating other people like NPCs, assuming they can adapt and be intelligent, but also like treat others, how, how you would want to be treated.
Right. So with dignity and respect. Although reciprocity also means being able to reciprocate at times. You know, I think it's important on the internet, if people have repeatedly demonstrated that they're behaving in bad faith, that disengaging from them, or punching back as a deterrent, I think are important concepts. But yeah, it kind of means that we're all in this together.
That's kind of where the fraternité part comes from: that we're all in this together in some ways. You know, the lesson of the prisoner's dilemma is that if we don't trust one another, we get worse outcomes collectively. There's a lot more about that in the book. When you have a loss of trust, the world becomes much more zero-sum.
Um, and that's a big problem.
Igor: Yeah. At the same time, if you just were to press cooperate every time in the prisoner's dilemma, probably you'll be taken advantage of. So it's kind of the mixture of, uh, tit for tat, but start with cooperation.
Nate: Yeah. People don't understand that if you give people the benefit of the doubt too much, then you will be exploited. In a world of 8 billion people, people will find ways to take the expertise in your field and exploit it for their own project, or for their own influence or political gain.
I mean, it will happen, um, if you don't have, like, adequate, you know,
Liv: It's acknowledging the fact that competitive dynamics exist. You know, it's not so pie in the sky that, Oh no, everyone, if we're just perfectly tolerant of everyone, then everything will be nicely nice. It's like, no, there are psychopaths out there.
There are people who try to exploit others. So you need to have some degree of, incentive mechanisms such that you can punish them so that they don't take advantage of you, while at the same time rewarding those who do cooperate and, or do use competition in a healthy way.
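The "tit for tat, but start with cooperation" idea is easy to make concrete. Here is a toy iterated prisoner's dilemma using the standard textbook payoffs (these numbers are the usual teaching values, not figures from the book): the unconditional cooperator gets farmed, while tit-for-tat pays one sucker payoff and then deters, and two tit-for-tat players cooperate the whole way.

```python
# Toy iterated prisoner's dilemma with standard textbook payoffs:
# T=5 temptation, R=3 mutual cooperation, P=1 mutual defection, S=0 sucker.
PAYOFFS = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
           ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(history):
    return "C" if not history else history[-1]   # copy the opponent's last move

def always_cooperate(history):
    return "C"

def always_defect(history):
    return "D"

def match(strat_a, strat_b, rounds=20):
    hist_a, hist_b, score_a, score_b = [], [], 0, 0
    for _ in range(rounds):
        a, b = strat_a(hist_b), strat_b(hist_a)  # each sees the other's history
        pa, pb = PAYOFFS[(a, b)]
        score_a, score_b = score_a + pa, score_b + pb
        hist_a.append(a)
        hist_b.append(b)
    return score_a, score_b

print(match(always_cooperate, always_defect))  # (0, 100): the pushover gets farmed
print(match(tit_for_tat, always_defect))       # (19, 24): one sucker payoff, then deterrence
print(match(tit_for_tat, tit_for_tat))         # (60, 60): mutual cooperation throughout
```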
Nate: Yeah, in some ways it's actually kind of [01:30:00] quite Kantian, the categorical imperative: if you universalize this behavior, then what's the equilibrium that results, right?
Um, you know, Tyler Cowen, the economist, is always saying "solve for the equilibrium"; that's his phrase, which I think is a really, really good heuristic, right? Am I in a short-term exploitative situation where the equilibrium doesn't matter, or in a repeated game where it does? And then, what's the equilibrium that results?
I mean, even things like, um, you know, a relationship with a partner, or with children, or with your parents, right? You have to make decisions a lot of the time about, am I going to let this behavior that I don't like in my partner go or not? And you have to sometimes, I think, be willing for there to be consequences, you know what I mean?
Like, no, you said this mean thing to me, and so I'm not going to go out to dinner with you; I'm going to cancel my plan. I mean, probably in some sense people don't like to think about that, but I think we kind of do it naturally [01:31:00] anyway. But I think it's always worth thinking about.
Igor: Because I'm somewhat not a fan of Kant's original formulation, I just wanted to pick on that. I think saying "what's the equilibrium that results" gives Kant more credit. His original writing was more about, is it logically possible for there to be a world with that particular moral framework?
And if nature doesn't permit it, then he was against it. It was more about the kind of existence possibility of it, where, like, with everyone wanting to be first, it's not possible for everyone to be first in something, so Kant would be against that desire, for example, as well. But I think the current notion that we have of the categorical imperative is actually much better, and the version that you described is how we nowadays treat it.
Nate: Cool. Yeah, I did not know that. Thank you.
Liv: final question. [01:32:00] What's one thing that you think is obviously win win that the world is missing out on and needs more of?
Nate: I mean, maybe it's a very generic answer that's not novel at all, right?
But I, I, I, I wish people knew how much technological growth has improved the human condition. Um, like people don't know how many more people were living in poverty in India 50 years ago, and they don't know that until this idea of liberalism, um, not meaning left, but meaning like the liberal movement that emerged as out of the enlightenment, um, how we didn't have like progress or growth for the entirety of human civilization until about, you know, the late 19th century.
You know, 1776, 1800 or so, right? And then ever since then you have an inflection point and you have progress in life expectancy, progress in curing disease, progress in expanding freedom ever since. Um, so that is one lesson I hope doesn't get lost [01:33:00] in the book.
Igor: The idea that because someone else won, and maybe has even gotten a bunch of spoils from creating a technology, doesn't mean they've taken it away from someone else. They may have actually created...
Nate: Increased the size of the pie. Right. Or you increase the size enough where, even if there's some unfairness... And by the way, part of what made liberalism work is the idea of human dignity, which initially was only properly extended to white men who owned property, which is a gigantic problem, obviously, but the franchise has expanded gradually. Um, and also a social safety net, where, um, you know, for the optimal level of risk-taking, if you want people to take risks, especially in terms of the market economy,
then you don't want them to permanently be out of the market and bust their bankroll the first time they make a mistake or a product doesn't work out. And so having a welfare state is actually quite optimal, even from a capitalist perspective, as well as from kind of a humanistic perspective.
Liv: I'd love [01:33:00] to just end, given you are Mr. Predictions, with a series of rapid-fire predictions.
Nate: Do I have to give it as a percentage?
Liv: It doesn't have to be. I mean, try and be as system-one intuitive as you can be.
Nate: Okay.
Liv: Okay. Likelihood that Kamala wins the election.
Nate: 57%.
Liv: Likelihood that neither Trump nor Kamala win the election.
Nate: 1%
Liv: Likelihood that TikTok gets removed from the app store in the US by the end of 2025.
Nate: 25%
Liv: Likelihood that COVID was a lab leak.
Nate: 76%
Liv: Spicy. Likelihood that China invades Taiwan by the end of 2026.
Nate: 11% [01:35:00]
Liv: Likelihood that Epstein killed himself.
Nate: Oh, that he did kill himself? Uh, 60%.
Liv: Nate, thank you so much for coming on.
Nate: Thank you, Liv. Thank you, Igor.