
Escaping The Moloch Trap | Kristian Rönn

Reporting live from Black Rock City...

Are “Reputation Markets” the solution to Moloch Traps? This week’s Win-Win guest Kristian Rönn thinks so: he believes prediction markets could be used as decentralized mechanisms to create reputation scores for powerful corporations and institutions, incentivizing them to adopt more pro-social, long-term beneficial behaviors.

Kristian is the CEO and co-founder of the carbon accounting engine Normative and author of the excellent new book “The Darwinian Trap”, and in this special episode we forgo the usual purple chairs to sit down in the dust of Black Rock City and explore solutions to the world’s deadliest game-theoretic demon.

Watch on YouTube above, or listen on Apple Podcasts, Spotify, or any other podcast platform. Full transcript below, and follow me on Twitter for updates on future episodes.

Chapters

0:00 - Intro

1:42 - The Darwinian Trap

3:43 - Why Is Coordination So Hard?

7:19 - Unstable Equilibria

13:55 - Natural Selection and Capitalism

20:16 - How Can A Market Model Anything At All?

22:10 - Betting On Our Values

27:29 - The Power of Reputation Markets

32:56 - Centralized Mechanisms as Solutions

35:16 - The Dangers of Over-Centralization

43:00 - Mixed Economies

45:55 - Cancer & Defection

50:59 - The Miracle of Evolutionary Success

1:00:00 - Why Burning Man?

Transcript

Kristian: Societies of multicellular organisms that managed to coordinate and cooperate won over those that didn't. The most powerful states today are amazing coordination machines where we're all working together through rule of law. The fact is that those societies outcompeted those that didn't coordinate at all.

So history can give a false picture: look around, coordination is everywhere. But what that picture doesn't take into account is survivorship bias.

Liv: Welcome to the Win-Win Podcast. As you can see, I'm in a bit of a different setting. No purple chairs. In fact, no chairs at all, because I am here at Burning Man, in the middle of the northern Nevada desert, the Black Rock Desert. And the reason I wanted to record here in particular is that my guest today, Kristian Rönn, is also attending, and Kristian just wrote a book, called The Darwinian Trap, about the topic of Moloch.

We will be talking about not only the nature of these coordination problems (Moloch traps, as I like to call them, or, as Kristian calls them in his book, Darwinian traps), but also solutions to these types of problems, and the different ways that we can design incentives and essentially redesign the competitive games we play in society, such that those games are actually good for us.

Let's dive in.

Kristian, why did you decide to write a book on the Darwinian trap? 

Kristian: First of all, thanks for having me. I'm such a big fan of your podcast. What made me write the book in the first place is basically that everything we hold dear depends on us surviving for the next couple of decades. And if you look at the things that really threaten us, everything from climate change to potential nuclear war to AI gone rogue, they are all different facets of global coordination problems.

We essentially behave in ways we know are bad, stockpiling more and more nukes to the extent that a nuclear war would kill something like 5 billion people. It's obviously bad. We all know it's bad. But we're still doing it because of the overall incentive structures: to survive as a nation state, you need to have more power than your neighboring nation states, if you're all competing for the same resources.

And same thing when it comes to climate change, right? In order to survive as a business, you need to be more profitable than all other businesses, so you get more investment in the stock market, you can grow more, et cetera. So unless we solve these global coordination problems, we are not going to be able to solve all of the other global catastrophic risks that threaten us.

That's what compelled me to write this book: to really go into the root causes of why we are here and why this type of global competition is bad, but also the evolutionary origins of this type of competition.

Liv: I think what would be interesting to define for people is: why is it a problem of coordination in the first place?

Because a lot of people say, if we could just all coordinate, then we would solve all our problems, right? So why is it hard to coordinate?

Kristian: Yeah, so I think the best way to explain why it's hard to coordinate is to look at various examples from game theory. There's this very traditional example called the tragedy of the commons, and that sort of explains why we're all trapped in these arms races. The way the tragedy of the commons is usually formulated is: let's imagine that you have some sort of commons. It might be a fishery. So imagine a fishing village with a bunch of fishermen, and a limited amount of fish.

Every year you want the fishing stock to grow back again, so you have something to fish, because if you fish too much, you're going to kill all of the fish and you're not going to have any business left. But what tends to happen, if you look at this from a game-theoretical perspective, is that one fisherman might think: okay, if I just fish a little bit more, I will make more profit than everyone else.

Liv: Yeah, it gives them a little bit of an advantage.

Kristian: A little bit of an advantage, like fishing a little bit more. But then all of the other fishermen say, okay, you're fishing a little bit more, so I need to do the same thing. And then all of a sudden everyone is fishing too much, the fish stocks get depleted, and whoever says, okay, I'm not going to play this game, I'm going to try to be sustainable, they're going to lose to everyone else. So that's what is happening. And I just want to bring in another example of this, which is related to Burning Man. On my way here to the podcast, there are these communal bikes called Green Bikes. It's a bike that anyone can pick up from the community to ride somewhere.

And if you look at Burning Man, overall resources are abundant: you can go anywhere and find camps that are handing out food, or alcohol, things to drink, and experiences. Everything is abundant. But bikes are not abundant; they're truly scarce resources. So on my way here, I noticed a very interesting phenomenon. I was looking for these communal bikes, and I saw five of them on my way to the podcast, but they were all upside down, with one tire deflated. And what is happening here? What I think is happening is that once you find one of those bikes, you don't want to let go of it. You want to keep it. So you deflate the tire, put it upside down, and then you can pick it up whenever you want to, inflate the tire, and make sure that nobody else, such as myself, takes the bike and rides away.

Because then it would take you extra time to fetch another bike. So in a perfect equilibrium, everyone would coordinate and say: we're not going to deflate the bike tires, the bikes are for everyone. But as soon as someone starts using that strategy, all of a sudden bikes get even more scarce.

So when you get hold of a bike, you think, oh, maybe I need to deflate this bike myself in order to keep it. And it creates this negative equilibrium.
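
Kristian's fishery maps neatly onto a toy simulation. The sketch below is purely illustrative (the logistic regrowth rate, carrying capacity, and per-fisher catches are all made-up numbers), but it captures the dynamic he describes: a modest total catch leaves the stock at a healthy level indefinitely, while everyone copying the defector's larger catch collapses the commons.

```python
def simulate(seasons, catches, stock=1000.0, regrowth=0.3, capacity=1000.0):
    """Toy fishery: each season the fishers take their catch, then the
    stock regrows logistically toward the carrying capacity."""
    history = []
    for _ in range(seasons):
        take = min(sum(catches), stock)  # you can't catch fish that aren't there
        stock -= take
        stock = min(capacity, stock + regrowth * stock * (1 - stock / capacity))
        history.append(stock)
    return history

# Five fishers taking a sustainable 12 fish each: the stock stays healthy.
sustainable = simulate(30, [12] * 5)

# Everyone copies the defector and takes 25: demand outstrips regrowth.
depleted = simulate(30, [25] * 5)

print(f"after 30 seasons, sustainable catch: {sustainable[-1]:.0f} fish left")
print(f"after 30 seasons, defecting catch:   {depleted[-1]:.0f} fish left")
```

With these parameters the stock can regrow at most 75 fish per season, so a total catch of 60 persists indefinitely while 125 drives the fishery to zero well before the 30 seasons are up: no individual fisher did anything dramatic, but the equilibrium collapsed.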

Liv: That seems to be the crux of it: there are certain types of competitive situations, certain types of games (in this case, getting a bike when you need it) that have these vulnerabilities to certain strategies, where once one person adopts the strategy, it forces everyone else to as well. There's an analogy I also like. Let's say you're at a stadium watching a rock concert or something.

And it's one of those normal stadiums where if everyone is sitting down, you can all have a view. But a few people near the front decide they want an even better view by standing up, and now that little action creates this cascade effect which forces everyone else to stand up too. Otherwise you can't see past the people standing in front of you. So now everyone is stuck standing up. And because it's so loud, you can't easily coordinate to say, hey everyone, this is stupid, why don't we just sit back down again? Now no one's got a comparative advantage over anyone else; everyone's in the same relative position, but we're standing when we could have been sitting. Certain situations are very vulnerable like that. The equilibrium they're in, instead of being stable, like a ball at the bottom of a hill that can't go anywhere, is perched on top of a hill.

And any little nudge from just one new strategy will force it to fall and leave everyone in a worse state. That brings up the question: there are so many of these different types of scenarios, and a lot of civilization feels like it is perched. It feels unstable. It works, it's awesome, and it's evolving, but it doesn't feel robust to these situations, say with environmental degradation, especially when some new technology comes along that gives someone a new advantage that has negative externalities.

How do we design competitive situations where we can have competition, but it's not an unstable situation that's going to fall into a bad outcome?

Kristian: I think there are various ways of doing this, and I think you're really getting to the crux of the problem here. One strategy that we have pursued for the past 200 years is very much a capitalist strategy, where these problems disappear if we just have an abundance of growth and resources, essentially.

And the current capitalist system has lifted billions of people out of poverty, but I think that's not going to be able to go on forever. I think that's because we have essentially leveraged certain technological innovations, and those innovations have an S-curve where eventually we can't scale them up anymore. We discovered relativity, we discovered Maxwell's equations, we managed to build the internet, electricity, fast travel, all of those things, based on fundamental scientific discoveries that led to a lot of growth. But that can't go on forever, because eventually you reach a point where it's actually really hard to innovate.

And that's why we see this phenomenon of: okay, the U.S. and the Western world were first with this innovation, so we saw double-digit growth for a long time, but then it leveled off. And then countries like China are catching up, because these innovations are basically free for them to adopt, and they achieve double-digit growth. But eventually you reach a point where innovating is so incredibly hard, and then you get back to a position of scarcity.

So I think we have the wrong analysis of the capitalist system. A common analysis is that this competition is great, that it is the engine of growth. I think that is true to some extent, but it dismisses the fact that we have had a lot of public spending on fundamental R&D: ARPANET, the predecessor of the Internet; NASA; and funding for universities to do fundamental research. That is what unlocked capitalism's ability to leverage all of those innovations to their fullest extent.

So if you look at the rest of history, where you have proto-capitalism without these technological innovations, you actually see very ruthless competition. Imagine societies back in the day that basically didn't have many rules. Most people lived their lives without too much involvement from the state. You might have a feudal lord, but a lot of people didn't even have a feudal lord; they just lived very free lives. But they did so within a context of very constrained resources, which meant that the best strategy in order to survive might be to mobilize a lot of men, pillage the village nearby, and kill a bunch of people.

And I think we see the same type of equilibria in nature as well, where it's a dominant strategy to be a predator and chase and kill prey, or to be a virus and replicate really quickly, or a parasite. So the overall sum of just natural selection, unconstrained, with limited resources, leads to arguably a world that is not worth living in for most creatures.

And I think we can see this in nature with antelopes and so on; there is just so much fear and suffering out there. And I think we're really living in an era of exceptionalism, because we have all of these technological innovations that we've been able to leverage. So capitalism is one model, but it's not sustainable in the long run.

So I think what we need is other modes of cooperation where we are actually incentivized to be more altruistic. 

Liv: I think one of the main critiques of capitalism is that it is, by its nature, designed to reduce everything down to a singular metric, which is monetary value: dollar value, pound value, whatever currency you want to use.

But that's a crude quantification of intrinsic worth. And when you try to do that for something like a tree, it breaks down. A tree has different types of worth, right? It has its lumber price, which is a fairly easy thing to measure: you cut it down, you turn it into planks of wood that you can sell for a price on the market.

That's the way capitalism values a tree. But a tree also has a bunch of other values, such as stabilizing the soil, producing oxygen, or providing habitat for creatures, insects and such, which we then rely upon for maintaining our ecosystem and food supply and so on. And it fails to account for those values.

And I've not met anyone who is a capitalist who has had a decent answer to that. And I call myself a capitalist: I still think it's the best system we have, and I don't want to switch it out for anything else until we have a better solution. But that is where I think the inherent problem lies.

So really, it's a problem of information. The system is doing its best under its current capacity for gathering information about the true value of things. And by not having sufficient information about the true value of something, and condensing it all down into this one singular metric, it leaves the system ripe for exploitation, and for lots of externalities in the future. That's really what people mean when they talk about being in an open-loop resource extraction system: by definition, it means palming off a bunch of costs. It's only measuring the immediate costs, but it's blind to the long-term costs of extracting faster than the ecosystem can replenish. So yeah, it's really an information thing. So I wonder: what do you talk about in the book about ways to try and improve the quality of information?

Because, coming back to Moloch: when you talk about some of these examples, like climate change, companies polluting. They don't want to degrade the atmosphere, or pollute it with too much CO2 or methane or whichever greenhouse gas you want to talk about. It's not like they're thrilled about doing this; it's just that, informationally, it's a little bit muddy.

You could argue we've now got enough information to know that stuff is bad, but certainly at the time when they first started their business models, it probably wasn't that clear. And then, of course, the incentive is to keep going. So how can we better improve information flow, so that people can actually see the long-term effects of their actions?

Kristian: What a great question. I talk about that quite a bit in the book. To take a step back, when you describe capitalism and Moloch: in the book I define a concept that I call Darwinian demons, which is similar to Moloch, and I define it in evolutionary terms. A Darwinian demon is a selection pressure where it's adaptive for an agent (it can be a human, an animal, a state, a government, or a corporation) to adopt a strategy that negatively affects everyone else. So I define Moloch, or Darwinian demons, as this sort of selection pressure. So, to answer your question: how do you change the system?

In order to answer that, we need to dive deep into what the problem with the current system is. I think natural selection, fundamentally, is an optimization algorithm that can't predict consequences in the far future. It can only know, from one generation to another, what is an adaptive behavior that maximizes survival right here, right now. It doesn't predict what will happen many generations down the line; it's fundamentally incapable of doing so. Capitalism works through natural selection; our entire evolutionary history, in fact, works through natural selection. So I think we need to solve two things here. We need to build a system where we're actually incentivized to predict future consequences and future externalities in a holistic way.

That goes beyond very narrow success metrics like profit. So that's the first thing: we need the prediction component. But secondly, we also need the incentive component. Let's say we predict that your actions might have negative consequences X, Y, and Z. We need to create the right sort of mechanism design where you're actually incentivized to behave in a way that maximizes what we intrinsically value. And what we intrinsically value is probably not profit: we want to use that money to buy things that will make us happier and help us live better lives overall, or we might intrinsically value nature just in and of itself, for instance.

So I think you need to look at those two components and in the book, I talk about a concept called reputational markets, which is essentially based on the idea of prediction markets. 

Liv: Explain briefly what a prediction market is. 

Kristian: So let's imagine a normal financial market. I might bet money on NVIDIA stock, for instance, because I predict that in the future, due to the growth of AI, demand for their GPUs will go up, and I will make a lot of money from that prediction, right? So normal stock markets have an element of prediction in them.

Prediction markets, instead, are purely focused on the prediction in and of itself. One prediction could be: who is going to win the Euro World Cup, for instance. Another could be: who is going to win the presidential election.

Or another prediction could be, right before Russia invaded Ukraine: is Russia going to invade Ukraine in the first place? It operates a little bit like a sports betting market: if you are correct in your prediction, you're incentivized, because you earn more money than anyone else.

And the thing is, it's strategy-agnostic. That's the beautiful thing about it, because different types of predictions might need different types of modeling. Say you want to predict the outcome of a soccer game: maybe the best way is to look at who is on the team and what the weather parameters are, and the people who are experts build really detailed models of that.

But if you predict whether Russia is going to war, you might have different types of input parameters. So it's fundamentally agnostic as to whether you used mathematical modeling, superforecasters, or AI. Whatever is the best strategy to predict will win at the end of the day.
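
Mechanically, many prediction markets don't match bettors directly; they quote prices from an automated market maker, and the standing price of each outcome doubles as the market's probability estimate. Here is a minimal sketch of one common mechanism, Hanson's logarithmic market scoring rule (LMSR); the liquidity parameter `b` and the trade size are arbitrary illustration values:

```python
import math

def lmsr_cost(q, b=100.0):
    """LMSR cost function: C(q) = b * ln(sum_i exp(q_i / b))."""
    return b * math.log(sum(math.exp(qi / b) for qi in q))

def lmsr_prices(q, b=100.0):
    """Instantaneous prices; these double as the market's probabilities."""
    z = sum(math.exp(qi / b) for qi in q)
    return [math.exp(qi / b) / z for qi in q]

def buy(q, outcome, shares, b=100.0):
    """Cost a trader pays to buy `shares` of `outcome`; returns (cost, new q)."""
    new_q = list(q)
    new_q[outcome] += shares
    return lmsr_cost(new_q, b) - lmsr_cost(q, b), new_q

# A binary market, e.g. "will X happen by next year?": shares of [yes, no]
q = [0.0, 0.0]
print(lmsr_prices(q))      # a fresh market starts at 50/50

cost, q = buy(q, 0, 50)    # a trader backs "yes" with 50 shares
print(f"trade cost: {cost:.2f}")
print(lmsr_prices(q))      # "yes" probability rises above 0.5
```

Whoever moves the price toward the true outcome profits when the market resolves, which is what makes the mechanism strategy-agnostic: the market maker doesn't care whether your edge came from weather data, expert judgment, or an AI model.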

Liv: So how does that tie in with reputational markets? 

Kristian: Yeah. So a reputational market is essentially a prediction market with a few tweaks. First of all, a lot of prediction markets are full of purely useless stuff, if you ask me. The first step of setting up a reputational market is to actually vote on what our intrinsic values are in the first place. So if I register on this hypothetical reputational market, I will say: okay, I actually value happiness and well-being, for myself and for others. And I might have a few such intrinsic values. Then the next step is to look at different entities in the world and ask: how do we think they will affect the things we intrinsically value, in terms of a specific outcome or metric?

So it could be, let's say I value nature: then I might be interested in, okay, will BP have another oil spill in 2025? Or let's say I value well-being overall and care about AI existential risk: then one bet might be, will OpenAI's GPT-5 enable a major terrorist attack in 2025? Because with these AI models, it has been shown that you can jailbreak them to give instructions on how to produce things like bombs and bioweapons and all sorts of things. So you place all of those bets related to what we intrinsically value, and then you create an overall score for any type of entity, in real time.

So that's a reputational score for that entity. 
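
The aggregation step can be pictured as a simple weighted average. To be clear, everything below is a hypothetical sketch rather than a mechanism specified in the book: each bet carries the market's probability that a bad outcome occurs, plus a weight from the community's vote on how much the underlying value matters, and the score is the weighted chance that the entity avoids those outcomes.

```python
# Each bet: (description, market probability of the bad outcome,
#            weight from the community's vote on the underlying value)
bets = [
    ("major oil spill in 2025",           0.12, 0.8),  # value: nature
    ("frontier model enables an attack",  0.05, 1.0),  # value: safety
    ("mass layoffs without severance",    0.30, 0.5),  # value: well-being
]

def reputation_score(bets):
    """Weighted expectation that the entity avoids the bad outcomes, on 0-100."""
    total_weight = sum(weight for _, _, weight in bets)
    avoided = sum(weight * (1.0 - prob) for _, prob, weight in bets)
    return 100.0 * avoided / total_weight

score = reputation_score(bets)
print(f"reputation score: {score:.1f}/100")
```

Because the probabilities come from live markets, the score updates the moment traders move a price, say on news of safety researchers resigning, with no central rating agency in the loop.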

Liv: That’s interesting, as it's actually offering a way around this problem we talked about, by filling that gap in information around these more nebulous values that the current capitalist system struggles to actually evaluate and weigh appropriately.

Kristian: Exactly. And moreover, I think it offers a decentralized way of doing so. You're not saying that this bureaucrat here in the government is the arbiter of truth about what is right for human beings. It is very dynamic, and markets can adapt to whatever is happening. So if it turns out that a bunch of safety researchers are leaving OpenAI, the market can say: I think that should affect the prediction of them enabling a major terrorism attack in one or two years. The market can adapt. And one more thing that I think is important to say about reputational markets: why would anyone care about their score in the first place?

You have these reputational markets saying: okay, this company has a good reputation, that company has a bad reputation. Why should anyone care? Why not just ignore your score? And I think the beautiful thing is that when you actually look at power in the world, it is more decentralized than you think.

The entire world economy is super decentralized. If you take something like building an AI system, you don't just need talented engineers. You need GPUs from NVIDIA; NVIDIA needs the foundries of TSMC; TSMC needs lithography machines from ASML. And all of them need critical minerals from different mines in China, the DRC, and South Africa.

So you have this incredibly interdependent system with several more or less hard bottlenecks, and if you remove a certain resource from that value chain, the value chain will more or less collapse. So let's imagine a system where, if you are NVIDIA, you might think: OpenAI just got a very low rating because they're not taking safety seriously; maybe we won't deliver chips to OpenAI. But then the question is, why would NVIDIA care? The mechanism here is: what if your score depends on you sanctioning someone with a low score? So let's say you're NVIDIA and you choose not to sanction OpenAI.

Then that might affect your score, and you might be afraid that with a lower score, you won't get access to the chips from TSMC. And TSMC might be afraid that if their score gets downgraded because they don't sanction, they won't get access to the machinery from ASML, or to investments from pension funds.

So I think you can create a new equilibrium where everyone cares about their overall reputation, because you will be sanctioned by someone else in the value chain if you don't.
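
That sanctioning cascade can be sketched as a toy supply chain. The firms, scores, and threshold below are entirely hypothetical; the point is the fixed-point dynamic Kristian describes, where a single low score either contaminates everyone upstream who keeps supplying, or is contained by one firm's sanction.

```python
THRESHOLD = 50  # scores below this trigger sanctions

def settle_scores(chain, scores, sanctions=frozenset()):
    """chain[i] supplies chain[i+1]. A supplier that keeps supplying a
    below-threshold customer (i.e. does not sanction) gets its own score
    docked below the threshold. Iterate until scores stop changing."""
    scores = dict(scores)
    changed = True
    while changed:
        changed = False
        for supplier, customer in zip(chain, chain[1:]):
            if (supplier, customer) in sanctions:
                continue  # supply cut off: the supplier's score is safe
            if scores[customer] < THRESHOLD and scores[supplier] >= THRESHOLD:
                scores[supplier] = THRESHOLD - 1  # punished for not sanctioning
                changed = True
    return scores

chain = ["pension fund", "ASML", "TSMC", "NVIDIA", "AI lab"]
initial = {"pension fund": 90, "ASML": 80, "TSMC": 80, "NVIDIA": 80, "AI lab": 20}

# Nobody sanctions: the lab's low score contaminates every firm upstream.
contaminated = settle_scores(chain, initial)

# NVIDIA cuts the lab off: everyone upstream keeps their score.
protected = settle_scores(chain, initial, sanctions={("NVIDIA", "AI lab")})

print(contaminated)
print(protected)
```

In the first run every firm ends up below the threshold; in the second, one sanction at the right link protects the whole chain, which is exactly the equilibrium where caring about your own score forces you to enforce everyone else's.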

Liv: What stage is this at in development? I love the idea; it seems like a very elegant solution, or at least a partial solution, to this issue.

But my inner cynic feels like it's going to be so hard to get it to catch on, given that the incentive for companies currently engaged in a lot of corner-cutting, harm-externalizing practices is to not have this. If they can get away with misbehavior without reputational costs, why would they want such a reputational measuring system to exist in the first place? How do we get this established?

Kristian: Yeah, I think you need an injection from a powerful player in the global value chain network that actually, genuinely cares about someone's score, because otherwise it's almost like an infinite Russian nesting doll: OpenAI would care because maybe NVIDIA cares, but NVIDIA would only care if TSMC cares, and so on; if nobody cares, you have no injection of caring into the system in the first place. And I think the most likely injection of caring would come through something like pension funds.

So pension funds, at least the big sovereign wealth funds, like the Norwegian or Swedish ones (and to some extent a lot of other sovereign wealth funds as well), think about things in terms of: how can I maximize the values of the pension holders, the people whose interests it is my fiduciary duty to protect?

And it's interesting: if you look at the whole movement around ESG, there exist these ESG ratings in the world, and most of them are complete bullshit. But the reason they were created in the first place is that some pension funds, like the Norwegian oil fund, said: okay, we're going to care about these ratings. And their caring matters, because pension funds overall are responsible for 53 percent of global investments. So if they start to care, the private equity firms they invest in need to care, the portfolio companies of those firms need to care, and their suppliers need to care; you see these network effects. And in fact, with my company, Normative, which I've been running for the past 10 years, we have been seeing those network effects: I need to care about my carbon emissions, because my buyer cares about the carbon emissions, and the investor cares about carbon emissions.

Liv: So why are the ESG metrics bullshit? And how do we ensure that these reputational markets don't fall into the same category?

Kristian: Great question again. They're bullshit because they're too narrow and, essentially, too easy to game. Take, for instance, Shell, which for a long time has had a higher ESG rating than Tesla, which is absurd. It's absolutely absurd. And how have they managed that?

Because at the end of the day, the way these ratings measure success is: do you have a certain number of policies in place? So you can essentially game the whole system just by creating paperwork. Oh, I have a human rights policy; I have an equity, diversity and inclusion policy; I have a reduction target. You can just produce all of these papers, and, oh, fantastic. But that's much easier than fundamentally changing your business model to, I don't know, electric cars or low-emissions products and so on. So they're way too easy to game. But then there are also the incentives of the ESG rating agencies themselves.

They want to tell a story to big investors and private equity firms and even pension funds. They want to tell a story where, if you use our ratings, you're going to get a higher return on investment. So what they end up doing is tweaking the parameters of the model so that their rating selects the most profitable companies.

Liv: Right. So they're not actually optimizing for the environment; they're optimizing for making money for the people who use their ratings.

Kristian: Yeah, exactly. 

Liv: And it's usually slightly at odds. 

Kristian: Yeah. And again, this is the problem of one central entity being the sole arbiter of truth about what is sustainable and what is not.

And it doesn't matter if it's a monopoly of private enterprises with ESG ratings, or a monopoly of the government declaring what's good or bad. Either is way easier to game than a completely decentralized system, where everyone has a say and where the market can instantly react and have a proper debate about what is actually happening and what we are actually valuing here.

Liv: So reputational markets are one possible solution to essentially creating a new set of incentives. 

Kristian: Yep. 

Liv: Incentives that reward the long-term things that matter, and don't just optimize for the short term. What other possible solutions are there?

Kristian: Other possible solutions involve more centralized types of mechanisms.

For instance, we might reform the UN. The problem right now with the UN is that it can't enforce global laws. It doesn't matter that Putin is on the ICC wanted list, or whatever the International Criminal Court says about arresting him; there's nothing they can do, because they have no global police to enforce laws. And the thing that makes centralized mechanisms so hard is that it's against the incentives of a nation state to give up its power in the first place.

But there are some exceptions, and I think the European Union is an excellent one, because with the European Union, something like 14 percent of nation states have willingly given up their sovereignty, and they have been steadily giving up more sovereignty over time. That's unlike the UN, which was formed right after a world war, out of "okay, we need to prevent this from ever happening again." It was formed in response to catastrophe. The European Union was also formed after the Second World War, to be fair, but it has constantly expanded, because for a lot of countries the incentives to join are actually higher than the incentives to stay out. And the key here is the free market: for a lot of countries, being part of the European free market is so beneficial that you're willing to give up certain parts of your judiciary and self-governance in order to join.

The thing is, I think over time we might get something close to a world government through a union of unions, but the timescale at which that operates is just so incredibly slow. And I think one could argue that it's not going to happen at all.

Liv: I agree. That said, it's not strictly obvious to me that while there are benefits that come from, for example, having, this very centralized, powerful, global essentially adjudicator of the world, including the inability to police stuff that is taken away, like that opens up the realm of potential tyranny.

Yeah. Extremely. And that is arguably as terrifying in many ways to me, because we've seen like over, oversteps, like when because no centralized agency is perfectly good at what it's meant to do. And in fact, often they're like the more centralized and powerful they become, the less error correction mechanisms there are within them and thus the more likely they are to go awry.

You see that with the WHO, which did about as bad a job with COVID as it possibly could. It managed both to fail to contain the disease and to completely destroy public trust in important disease-mitigation measures. So we're arguably in a worse state should the next pandemic come along, which may be much worse than COVID.

I think we're going to be even less equipped to deal with it, because so many people are quite rightfully distrustful of and pissed off with centralized agencies around public health. That's just one example. The long-term solution we need in that realm, I think, is this: yes, having more powerful centralized agencies will help with certain things, like resolving the tragedy of the commons, but it opens up a mirror-image risk. If Moloch is the god of decentralized game theory gone wrong, then there is also a god of centralized bureaucracy gone wrong.

Kristian: Yeah. 

Liv: I like to call it this thing called Norm. Basically, what's the vibe? Moloch's vibe is "I must win this war, and then screw the consequences." What's the vibe of the god of bureaucracy? It's this stagnant paper-pushing: "Have you ticked this box?"

"Oh, you haven't." They're very much rule sticklers, and inflexible. And, oh, there's a little dust devil coming in, there's a little dust.

Kristian: Yeah, there we go. Got some in my eye. 

Liv: Yeah, I call it Norm. Anyway: how do we design a system that takes the best bits of centralization, but also the best bits of decentralization?

Yeah. Such that it's like a third attractor state, as Daniel Schmachtenberger likes to call it.

Kristian: Yeah. 

Liv: One which minimizes the risk of game-theory-driven chaos and death by bureaucracy, and the risk of totalitarianism forever.

Kristian: Yeah, what's the third option? First of all, totalitarianism really worries me.

Death by bureaucracy and stagnation is one aspect, but I think about another aspect: Goodhart's law, which essentially says that when you turn a particular metric into a target, it ceases to be a good metric, because you can hack that metric. And bureaucracy does that a lot, right?

For instance, in the United States, it might seem super reasonable for a police department to have some sort of ticketing quota, to make sure that police are doing their jobs and bringing in revenue to the department and the county. But that creates an incentive structure where the best way to hit the quota is just to be super aggressive in your policing, and that erodes trust in local communities, and so on.
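The ticketing-quota example can be sketched as a toy Goodhart's-law model. This is purely illustrative: the functions and numbers below are invented, not from the episode, and only show the shape of the problem, where optimizing a proxy metric drives the true goal to its worst value.

```python
# Toy Goodhart's-law model (all numbers invented): an officer rewarded on the
# proxy metric (tickets issued) picks maximum aggressiveness, even though
# community trust -- the thing the department actually cares about -- collapses.

def tickets(aggressiveness: float) -> float:
    """Proxy metric: tickets issued rise with how aggressively you police."""
    return 100.0 * aggressiveness

def trust(aggressiveness: float) -> float:
    """True goal: community trust erodes as policing gets more aggressive."""
    return 1.0 - 0.9 * aggressiveness

def best_policy(reward) -> float:
    """Choose the aggressiveness level in {0.0, 0.1, ..., 1.0} maximizing `reward`."""
    return max((i / 10 for i in range(11)), key=reward)

quota_driven = best_policy(tickets)   # optimize the proxy -> full aggression
trust_driven = best_policy(trust)     # optimize the true goal -> no aggression
print(quota_driven, round(trust(quota_driven), 2))
print(trust_driven, round(trust(trust_driven), 2))
```

Rewarding the proxy picks the policy that minimizes the thing you actually wanted, which is exactly the "hackable metric" failure Kristian describes.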

And I think on a global scale it could be much worse. Let's say your only success metric is "we want to reduce existential risk at all costs." Then we get something like what Nick Bostrom calls a high-tech panopticon, where you build a surveillance state in which everyone is surveilled at all times to make sure they're not doing something risky, and you just have this global tyranny, which would be absolutely horrible.

Liv: So I think a place like this wouldn't be allowed to exist. It's very laissez-faire here, like the phrase "safety third."

Kristian: Yeah, exactly. You are way too free here. 

Liv: This place is where I feel the most alive. If you want to climb that big structure, to go 40 feet in the air without any safety measures, go for it.

Fall at your own risk. But that is the inverse of this sort of safetyist culture, which many people who are very pro-centralization want.

Kristian: Yeah. 

Liv: And that is, yeah, extremely terrifying. It varies from person to person: depending on your risk tolerance, some people find the idea of decentralized chaos more scary, and others find centralized tyranny more scary.

I find both terrifying; I want neither. But I just interviewed Nate Silver, who just wrote a book on, basically, the different appetites for risk.

Kristian: Yeah. 

Liv: And these different communities, he calls them the River and the Village. The Village is the centralized establishment, which tends to be very risk-averse and wants more of a safety culture.

And the River, as he calls it, is this mixture of gamblers and VCs and techies, Silicon Valley people, who love calculated risk, or even uncalculated risk. They're much more risk-tolerant, and they're much more into decentralization. So again, it's this tension. A lot of the culture war we're seeing is a tension between those who want more centralization and those who want more decentralization.

So what is the winning answer for both sets? Because both are right about some things and wise about some things. So how do we find a win between both?

Kristian: I completely agree, first of all. And I think another two communities that are fighting, mostly online, but in a fight that will have real ramifications in the next decade or so, are the effective accelerationists. That's very much the VC community, Marc Andreessen: "if we just have bureaucracy and centralization, look at what innovations it will prevent us from creating."

We might not use AI to cure cancer or go out and colonize the galaxy or do amazing things. We might not be able to 

Liv: have reputational markets. 

Kristian: Yeah, exactly. We might not be able to have that either, right? And then in the other camp you have, I think, the more risk-averse sort of effective altruists who are like, "We need to stop this right now, at any cost," more or less.

Liv: To be fair, I don't think that's most of them. A lot of effective altruists are not wanting to stop AI. But it's true that there is a subset within who are, like, very much for just stopping.

Kristian: Exactly. I think the average effective altruist is very pragmatic and you'll see both sides, but then you have the very extremes of effective altruism, right?

And I think you need to just be pragmatic about it. Centralized mechanisms are better for some things; decentralized mechanisms are better for other things. And if you look at the global economy and nation states, the states that perform best are the ones that have a mixed economy.

You take a very pragmatic approach: where do we need centralization? We might need it for healthcare, basic infrastructure, and R&D. Where do we not need it? We don't need it for five-year plans like the Soviet Union's, because that stifles innovation and just creates perverse incentives through Goodhart's law.

So more concretely, how can you combine the two? I think AI presents such a good opportunity to combine both of them, because if you look at the global system of artificial intelligence and the value chain of AI, it is actually highly decentralized. What I mean by that is that different nation states are dependent on each other, and especially China and the U.S. are dependent on each other in a way that is not talked about enough.

For instance, you have U.S. export controls. The U.S. doesn't want to give China access to high-end GPUs, because they might use them to create more and more capable AI agents that can be used for military purposes and a whole host of other things. So China is frozen out from getting access to the high-end chips. The interesting part here is that TSMC and NVIDIA, and the companies under the U.S. sphere of influence generally, are completely dependent on rare earth minerals and mineral inputs into the semiconductor industry. And when you look at that, China and its allies in Sub-Saharan Africa have complete control over it.

So that level of decentralization could create the right conditions where eventually China could say, "Okay, we're going to have export controls against you. We're not going to give you access to all of these rare earth minerals." There might be a trade war for a while, and then both sides might realize that actually, we need each other to create AGI.

And we might do so through some sort of somewhat decentralized, multinational AGI consortium, where we agree: okay, let's build an AI together, because clearly we need each other throughout the value chain and supply chain. It might be a little bit of an optimistic picture, and there might be flaws to that argument.

Liv: It sounds nice, but that seems like an unlikely outcome. Even if it were true that they do actually need each other, it feels like the incentives to defect are still so strong. How would such a situation be immune to the same Moloch problem?

Kristian: I think because there is this clear interdependency, similar to how the cells in our body need to coordinate. You can defect, and that's called cancer. But if you do, you kill the entire organism.

Liv: But then cancer continues to exist. It still happens, right? Because it's not smart enough to see what the long-term effects of its behavior are.

Kristian: Ooh, I actually want to take this in an interesting direction, because one sort of objection that I hear a lot is that global coordination and cooperation is more or less evolutionarily inevitable.

Look at every major evolutionary transition we have had: each one is more coordination. Molecules learned how to create the first metabolism; then RNA and lipid cell membranes coordinated into the first protocells; cells coordinated into the big clusters we call multicellular organisms.

Societies of multicellular organisms that managed to coordinate and cooperate won over those that didn't. The most powerful states today are amazing coordination machinery, where we're all working together through rule of law, et cetera. And the fact is that those societies outcompeted the ones that didn't coordinate at all.

So history can give a false picture: look, coordination is everywhere. But I think what that doesn't take into account is survivorship bias. To explain survivorship bias, there's this World War II analogy. During World War II you had a lot of planes that went out on bombing raids, and when they came back, statisticians looked at all of the bullet holes in the airplanes to figure out: okay, where do we need to put extra armor in order for the planes to survive?

And it turns out that all of those bullet holes were around the wings of the planes. So you might conclude, okay, we must put more armor on the wings, because no bullet holes were around the engines. But actually, the planes that were hit in the engine crashed, and those pilots died. So they never came back.

And I think we are committing this sampling error globally, because planet Earth is just one single data point. We don't know what it looks like on other worlds. There might be other planets where multicellular life in fact forms really easily; Nick Lane and other biochemists seem to think that, and I think they have strong arguments.
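The plane story is easy to reproduce in a tiny simulation. The numbers here are invented for illustration: bullets strike wings and engines equally often, but any engine hit downs the plane, so the sample of holes visible on returning planes is completely biased toward wings.

```python
import random

# A toy survivorship-bias simulation echoing the WWII analogy (numbers
# invented): hits land on wings and engines with equal probability, but every
# engine hit downs the plane, so the holes observed on *returning* planes are
# all on the wings -- exactly where the armor is needed least.

random.seed(0)

def fly_mission():
    """One sortie: return the plane's hit locations, or None if it was downed."""
    hits = [random.choice(["wing", "engine"]) for _ in range(random.randint(0, 4))]
    return None if "engine" in hits else hits

missions = [fly_mission() for _ in range(10_000)]
returned = [hits for hits in missions if hits is not None]
observed = [loc for hits in returned for loc in hits]

print("planes back:", len(returned), "hole locations seen:", set(observed))
```

Even though hits were distributed evenly, every hole the statisticians can inspect is on a wing, which is the same sampling error as inferring "coordination always wins" from the one surviving biosphere we can observe.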

It's basically just chemistry to get abiogenesis going, say in thermal vents on the ocean floor. But then maybe viruses evolve really early and kill all of life. Or maybe a super-predator evolves really early, before life can properly diversify, and kills everything.

So there might be these defectors throughout evolutionary history that kill all of life. The question is: do we see the equivalent of the bullet holes in our plane, in our evolutionary history? And the fact is that we do, and I can't believe this is not talked about more: arguably, five out of six major mass extinctions were caused by life itself.

Essentially, bacteria or algae get a mutation that incentivizes them to just grow and multiply really rapidly, but that multiplication creates runaway climate change, or acidification of the oceans, that almost kills all of life. And that has happened again and again. So I actually think that life under natural selection is less stable than we think.

And Bostrom has this analogy about technology. We invent new technologies, and it's almost like picking balls from an urn. Some balls are white, clearly beneficial technologies, but others might have a dual use that can kill us. It might be AI, it might be nuclear weapons, or something else. And eventually there's a black ball that can kill us. I think the same thing exists in biology itself, in mutation throughout evolutionary history.
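The urn model has a simple quantitative core. In this hedged sketch (the probability is invented purely for illustration), each draw, whether a new technology or a consequential mutation, carries a small chance p of being a fatal black ball, so the odds of never drawing one decay exponentially with the number of draws.

```python
# A minimal sketch of the "balls from an urn" model (probabilities invented):
# if each draw has even a tiny chance p of being a fatal "black ball", the
# probability of surviving n draws is (1 - p) ** n, which shrinks
# exponentially -- the intuition behind one long, continuous filter.

def survival_probability(p_black: float, draws: int) -> float:
    """Chance of making `draws` draws from the urn without hitting a black ball."""
    return (1.0 - p_black) ** draws

for n in (10, 100, 1_000):
    print(n, round(survival_probability(0.01, n), 4))
```

With p at just 1 percent, survival odds fall from roughly 90 percent after 10 draws to a few thousandths of a percent after 1,000, which is why even rare black balls dominate over deep time.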

Liv: But then isn't that just a question of timescale? Yes, we've had these six mass extinctions and so on, but if you zoom out far enough, life has still prevailed. And you could argue that it's still on this trend of increasing complexity, even though it has these sudden mass die-offs. If you zoom out enough, it seems like a one-step-backwards, three-steps-forwards type of situation. But I guess your point is that we can only be here to say that in the worlds where we happened to be lucky and survive.

Yeah. I don't know whether it's a true fact of life that coordination always wins out, or whether we just happen to be extremely lucky on the one planet where life has managed to survive its own self-destructiveness.

Kristian: Exactly. We will always have a biased sample.

Liv: Our mere existence of being here asking the question. 

Kristian: Exactly. And so I think the best we can do is look at our evolutionary history, but also ask: where are all the aliens? We don't seem to find aliens anywhere, and there are different solutions to the so-called Fermi Paradox, the puzzle of why we don't see life everywhere. If you look at population curves, they tend to go up, and cosmically that should be the case as well. It should be an adaptive strategy to go out there and colonize the universe. In fact, you could colonize the entire Milky Way galaxy within a couple of million years, which is nothing on cosmic timescales. So we should see them everywhere.

And one explanation might be: oh, abiogenesis is super hard, life rarely forms. Or it might be: oh, life always kills itself with dangerous technologies. But I actually think that Moloch, or Darwinian demons and multipolar traps, is just one continuous long filter, because we're constantly picking those balls through mutation and natural selection.

Liv: Of just, basically, a cancerous type of species coming along and doing the cancer thing.

Kristian: Exactly, and I think it's actually worse than picking the balls at random, because I think we're incentivized to pick darker and darker shades of gray. If you plot the evolution of civilizations: some civilization back in Mesopotamia, modern-day Iraq, invented iron and could produce better swords, so they built an empire around that. The Mongol Empire realized, oh, we can actually put archers on horses, and then we will overpower everyone else, and they built an empire.

So you have these selection pressures for more and more dangerous weapons over time, from swords, to Gatling guns and machine guns, to nukes, and now potentially rogue AI or bioweapons. You have to keep innovating worse and more dangerous things, and there's a selection pressure to do that.

Liv: I buy the logical arguments, but it's just such a gloomy and seemingly hopeless situation if it were true. I don't even want to subscribe to it. Intuitively it just doesn't feel like that's the case: that it's built into nature, that the odds are overwhelmingly against any one planet persistently making it.

Do you discuss solutions to that, if it were actually true? 

Kristian: Yeah, and I have to admit that paints a super gloomy picture. I first got the idea back in high school, when I wanted to become a better Python programmer, so I built evolutionary simulations: agent-based simulations with mutations, predator-prey dynamics, virus-host dynamics. And I was like, whoa, life dies all the time in these simulations. Why is the same thing not happening on planet Earth? And as I dug deeper, I thought, whoa, maybe the same thing actually is happening, but we're just extremely biased in the way we look at our evolutionary history.
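A stripped-down version of that kind of simulation is easy to write down. This sketch is deterministic and every parameter is invented for illustration (it is not Kristian's original code): replicators share a regenerating resource pool, and a single "greedy" mutant that overconsumes but replicates faster appears partway through.

```python
# Minimal commons-collapse simulation (all parameters invented): cooperators
# share a pool that regenerates 10 units per generation; at generation 100 one
# "greedy" defector appears that consumes 3x but replicates 2x when fed.

def simulate(generations: int = 300, mutant_at: int = 100):
    coop, greedy = 20.0, 0.0   # population sizes (continuous approximation)
    resources = 0.0
    history = []
    for t in range(generations):
        if t == mutant_at:
            coop, greedy = coop - 1.0, greedy + 1.0    # one defector appears
        resources = min(200.0, resources + 10.0)       # the commons regenerates
        demand = coop * 1.0 + greedy * 3.0             # greedy agents overconsume
        share = min(1.0, resources / demand) if demand else 0.0
        resources -= demand * share
        coop = coop * share                # unfed agents die off
        greedy = greedy * share * 2.0      # ...but fed greedy agents replicate 2x
        history.append((coop, greedy))
    return history

history = simulate()
before = history[99]   # stable cooperator population, just before the mutant
after = history[-1]    # cooperators wiped out, total population crashed
print("before:", before, "after:", after)
```

Before the mutation, cooperators settle at a sustainable population; afterwards, the greedy lineage sweeps the population while driving the total well below its old level, the in-silico version of "life dies all the time."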

But what gives me hope here is the decentralization of things. I think we live in a system where everyone, in some respect, is dependent on everyone else. I have this example in the book. Imagine you want to create a chicken sandwich from scratch. It sounds like the easiest thing ever, a chicken sandwich.

But say you can't buy any tools or raw materials from anyone else; you're basically back to stone tools. You need to grow the grain, figure out how to mill it, raise the chicken. There was actually a guy on the internet who tried to do that. It took him six months and cost him $1,500 to create a simple chicken sandwich.

So that's a testament to how extremely interconnected power actually is. It seems like power is not interconnected, but that's because we don't have a global view of what everyone else is doing. And I think something like a reputational market could offer that global view, where if you deliver a component to someone, you know the reputation of that person, or the reputation of the next company in the supply chain, and so on. So that interconnectivity gives me a great deal of hope.

Liv: What can people watching do to either find out more about reputational markets or to help build them and design them? 

Kristian: First of all, you can pre-order The Darwinian Trap on Amazon and read the book.

But secondly, I think we need to spread awareness. There is this narrative where everything wrong in the world is due to a couple of bad apples: okay, social media created a bunch of fake news, and everything is Mark Zuckerberg's fault. But the fact of the matter is that if Facebook hadn't invented the feed and the pay-per-click model of the social media feed, maybe Myspace or someone else would have done it and been the winner.

Maybe Myspace or someone else would have done it, and they would have been the winner. So it's not like we need to stop blaming individuals. It's don't hate the player, hate the game. As a famous hip hop artist said. I 

Liv: like to change it to, don't hate the players, change the game. 

Kristian: Yes, oh yeah, I love that even more.

Yeah. We all need to recognize that there is a game here, move away from scapegoating individuals, and think about what system of incentives is creating this in the first place. So you need to vote for politicians who are not attached to this simple narrative of "you are good and you are evil."

So I think that's one thing everyone can do. But then you might be in a position of power. You might be, I don't know, running a pension fund, in which case you have a great deal of power in terms of who you invest in or don't invest in. You might run an enterprise.

You might have power over your suppliers, your employees, and everyone you're connected to. As a politician you might have an even greater deal of power. So everyone has a little bit of power. But I think the first thing we need to do is make this concept of Moloch, the multipolar trap, or whatever we call it, widely understood.

And probably we need to coordinate everyone who's talking about it to use the same name. As a side note, I actually feel it's a little bit of a coordination failure on my part, calling it "Darwinian demons," because when I wrote the book there were not so many people talking about Moloch. And then your TED Talk came out, which is brilliant, by the way, and I feel like there's been a lot more popularization of it as a concept.

But we need to spread that concept, to make people realize that this is actually the generator function behind a lot of the problems we're seeing.

Liv: Are there any specific resources that people can go to now? Is there a particular group already developing these reputational markets through prediction markets?

Is there a website people can go to? 

Kristian: No, not at the moment. But my hope is that by presenting the idea it will spread, and I've talked to a bunch of people who are interested in developing it further.

Liv: One of you should make a subreddit or something. 

Kristian: Oh yeah. I think that would be super cool.

Liv: You should maybe just make a subreddit, r/reputationalmarkets. You should just make that.

Kristian: Yeah, I love that idea.

Liv: Anyone listening: Kristian's going to make it, so go join. I'll join it too. And then let the dialogue start, and maybe over time it can become a Discord server or whatever. Talk about decentralized solutions: that's a decentralized way to build it. As you say, it needs a kind of hive mind, a community to help make it, and then it would itself be a form of hive mind.

So, to finish up, last question: why Burning Man? Why are you here in the first place? What is it that's so special about it?

Because to me, it is a perfect example of a decentralized system. It's got the tiniest amount of centralized organization: the organization marks out the streets, provides a few porta-potties, and there's a medical center and such if you need it. But otherwise it's: there you go, there's the infrastructure.

Then people come in, and there are a few basic principles. I don't want to say rules, but it's: don't drop trash; be inclusive; don't build something that's only for a certain special group of people, let anyone access and enjoy whatever you build; and radical self-expression, don't become a monoculture.

You want to be into hip hop, metal, EDM; there are different flavors of music and different styles of aesthetic. And yet, just through these basic 10 principles and this basic bit of infrastructure, this very decentralized thing does its thing, and so much cool stuff emerges.

I've never seen anywhere with such a high density of what I call "what-the-fucks per second." I'm like, what the fuck is that? There's a guy in a T-Rex costume; there's a dog pushing a lawnmower. It's just endless moments of surprise.

It's so fun. So what does it mean to you? Why is it a win-win-y thing?

Kristian: Yeah. So I've been a burner for the past 12 years, and it has had such a great influence on my life. I think what it really highlights is that humans are pro-social creatures: we want to be loved, we want to love one another, and we want to do cool stuff for each other.

In the ideal of circumstances, we just want to do good. But I also think there are failure modes where Burning Man gets glorified. What I mean by that is that this situation is so unique. You have people organizing camps for a year; as soon as they leave Burning Man, they start thinking about how to build the next thing. And there are just so many resources here.

There is an abundance of resources pooled into this one week. You essentially have so much that there is zero competition for resources, with the exception of the bikes I mentioned earlier, where you have a clear coordination failure. So I think what it shows is that under ideal circumstances, where we have resource abundance, we tend to cooperate.

And I think Burning Man ties into this old debate between Hobbes and Rousseau. Rousseau argued that the natural state of humans is to live in a tribe and be pro-social, and Hobbes argued that in the natural state we would kill one another. That debate was renewed recently with Steven Pinker releasing The Better Angels of Our Nature.

He essentially argues that our natural state is actually not that good: there was a lot of suffering, really horrible punishments and wars, people clubbing each other down in the street. And then you have someone like Noam Chomsky defending Rousseau, saying actually anarchy is good.

We are all pro-social. But I think both are right at the same time, and it all depends on the constraints of the system itself. I think Rousseau is 100 percent right if we have resource abundance, which to some extent we had pre-agriculture, with smaller societies living off the jungle and the land without any problem, without rapid population growth.

But with the advent of civilization, we moved more into the Hobbesian, or Pinker, world, if you will. So I think Burning Man is just a testament that we humans are multifaceted, and it all depends on the incentive structures humans are embedded in at any point in time.

Liv: Awesome. Thank you so much.

Kristian: Thank you. This was great. 
