
Good post. Although I think there is already sufficient reason to extrapolate the risk and justify an immediate global moratorium on AGI (e.g. this list is terrifying regarding the potential for recursive self-improvement of systems leading to uncontrollable superintelligent AI: https://ai-improving-ai.safe.ai/).

Perhaps the Taskforce could also look into some more fundamental questions, though, such as whether there is any reason to think that scaling alone (money, data, compute) won't bring AGI, and whether alignment with a more intelligent AI "species" is even theoretically possible.
