Regulation doesn't make things safer; innovation and competition do. Innovation creates safer alternatives, and competition gives them a better chance of being adopted. Regulation often limits both and tends to be captured by incumbents, and therefore leads to less safe outcomes. Not trusting that the public can act in its collective best interest is the very path that leads to the opposite of the intention.
Believing in sacrifice is a slippery slope. Sacrificing others is obviously not good, but sacrificing yourself for the "greater good" is just the flip side of sacrificing others, and that "greater good" is merely what a few people think is good for the collective. The real greater good can only be achieved by respecting each and every individual's self-interest, which sacrifices nothing for others and requires no sacrifice from others.
The golden rule isn’t called the golden rule for nothing; it is a valuable rule. The example given of “Honesty,” for instance, is implied by applying the golden rule. I wonder whether some quantitative analysis of an AI’s chain of thought, measuring how well it holds up against the golden rule, would be useful in the alignment of superintelligence. Some sort of narrow AI dedicated to assessing outcomes against the golden rule? I tried something like this early on, using three foundation models simultaneously judging human communication, with an additional prompt specifying use of the golden rule. Some success, but it could be greatly improved.
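The ensemble-of-judges idea described above could be sketched roughly like this. Everything here is hypothetical: `query_model` is a placeholder for whatever foundation-model API you use, and the prompt wording and 0-to-1 scoring scale are my own assumptions, not the commenter's actual setup.

```python
# Hypothetical sketch of a "golden rule" judging ensemble.
# query_model is a stand-in for a real foundation-model API call.

GOLDEN_RULE_PROMPT = (
    "Judge the following message by the golden rule: would its author "
    "be content to receive the same message themselves? "
    "Reply with a single score between 0 and 1."
)


def query_model(model_name: str, system_prompt: str, message: str) -> float:
    """Placeholder: replace with a real API client that returns a 0-1 score."""
    raise NotImplementedError


def golden_rule_score(message: str, judges: list[str], ask=query_model) -> float:
    """Average independent golden-rule scores from several model judges."""
    scores = [ask(m, GOLDEN_RULE_PROMPT, message) for m in judges]
    return sum(scores) / len(scores)
```

Averaging is the simplest aggregation; a majority vote or a veto by any single judge would be equally easy to swap in.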
None of these numbers are good predictions.
Let’s make up numbers about nonsensical concepts.
"AI is going to be the new world government" sounds like a dystopia we should avoid.