9 Comments
Justis Time:

Optimistic outlook. Isn’t it possible that we aren’t responsible for AI achieving superintelligence, but that it gets there without our help?

Kind Futures:

In a time when AI is advancing at unprecedented speed, a few voices are quietly choosing a harder path: one that puts safety before scale, wisdom before hype, humanity before power.

There’s a new initiative called “Safe Superintelligence Inc.”, a lab built around one single goal: to develop AGI that is safe by design, not just by hope or regulation. It was created by Ilya Sutskever, Daniel Gross, and Daniel Levy.

If you’re someone with world-class technical skills and the ethical depth to match, this is your call to action. We don’t need more AI. We need better, safer, more compassionate AI.

Spread the word. Support the mission.

https://ssi.safesuperintelligence.network/p/our-team/

Ahmed Yousef:

None of these number predictions are good.

Anatol Wegner:

Let’s make up numbers about nonsensical concepts.

Jing Liang:

Regulation doesn't make things safer; innovation and competition do. Innovation creates the safer alternatives, and competition ensures they have a better chance of being adopted. Regulation often limits both, tends to be captured by incumbents, and therefore leads to less safe outcomes. Not trusting the public to act in their collective best interests is the very path that leads to the opposite of the intention.

Jing Liang:

Believing in sacrifice is a slippery slope. Sacrificing others is obviously not good, but sacrificing yourself for the "greater good" is just the other side of sacrificing others, and the "greater good" is only what a few think is good for the collective. The real greater good can only be achieved by respecting each and every individual's self-interest, which sacrifices nothing for others and requires nothing from others' sacrifice.

stewart teager:

AI becoming the new world government sounds like a dystopia we should avoid.

Greg Stevenson:

The golden rule isn’t called the golden rule for nothing; i.e., it is a valuable rule. The example given of “Honesty,” for instance, is implied by application of the golden rule. I wonder if some quantitative analysis of AI’s chain of thought, assessing how well it holds up against the golden rule, would be useful in the alignment of superintelligence. Some sort of narrow AI dedicated to assessing outcomes based on the golden rule? I tried something like this early on, using three foundation models simultaneously judging human communication, with an additional prompt specifying use of the golden rule. Some success, but it could be greatly improved.
