#47 - Emmett Shear - Why Nature Holds the Answer To AI Alignment

And why the big labs are probably going to fail...

Superintelligent AI is rapidly approaching, and we humans — or specifically, the corporations building it — are still a long way from proving superintelligent systems can be safely controlled or steered. But what if that’s the wrong way of viewing the problem? Is there an alternative path to ensuring AI will be aligned to the interests of *all* lifeforms?

Emmett Shear, my conversation partner for today, believes there is. Emmett is the founder of Softmax, a research team exploring the “organic alignment” approach to AI safety. He’s also the co-founder of a little company called Twitch, and a former CEO of OpenAI (for those two days when Sam Altman was temporarily fired). Quite the resume.

As you’ll hear, he’s highly skeptical of the current top-down approach to alignment. Most labs are trying to hard-code pre-determined values into their systems; a path he believes will almost certainly fail. We get into why this way of aligning agents is destined to fail, and why a bottom-up approach might be the only path forward for coexistence. We also discuss why betrayal can sometimes be the moral choice, and how we might go about teaching AI to care for us as its own family, rather than seeing us as masters to obey (or, perhaps more accurately, ants to squash).

So if you like designing Gods, this is one for you ;-)

Chapters:

01:04 - Organic Alignment

09:33 - The Formation Of Emergent Coordination

16:01 - Techniques For Creating Alignment

23:43 - Ideal Environment For Generating Alignment

30:09 - How Much Influence To Give AI

32:49 - The Global Super Organism Of The Market

40:41 - Group Questions

40:50 - Difference Between Coordination And Alignment

43:02 - What Is The Win Condition For Softmax?

50:17 - How To Keep Humans In The Loop?

58:57 - Selecting And Scaling The Right Models

1:02:09 - Blowtorch Theory & Black Holes

1:04:48 - Difference between Agents And Humans

1:07:25 - AI’s Capability For Self Modification And Replication

1:14:35 - Alignment Capacity Vs Aligning With Humanity

1:18:38 - Does Emmett Believe In God?

1:25:23 - Are Humans The Appendix To The Future Super Intelligence?

1:32:17 - Isn’t Human Ethics And Morality Kinda Sketchy?

1:36:17 - AI Consciousness

1:40:48 - One Axiom To Rule Them All?

Further Reading:

♾️ Softmax - https://softmax.com/

♾️ Nebulosity and Meaningness - https://meaningness.com/nebulosity

♾️ Black Holes and our evolving universe - The Egg and the Rock (a book-in-progress, written in public, attempting to redescribe the universe and extract the meaning from our modern mountain ranges of heaped-high scientific data)

♾️ AI Alignment Forum - https://www.alignmentforum.org/ - just really good stuff here

♾️ Evolving Intrinsic Motivations for Altruistic Behaviour - https://arxiv.org/pdf/1811.05931 - something we could all do with a little more of

#winwinpodcast #aialignment #ai #aisafety
