The UK Foundation Model Taskforce has recently been launched, backed by the UK government and led by Ian Hogarth, an AI investor and expert concerned about extreme risks from AI.
Good post. Although I think there is already sufficient reason to extrapolate the risk and justify an immediate global moratorium on AGI (e.g. this list is terrifying re the potential for recursive self-improvement of systems -> uncontrollable superintelligent AI: https://ai-improving-ai.safe.ai/).
Perhaps the Taskforce could also look into some more fundamental questions, though - such as whether there is any reason to think that scaling alone (money, data, compute) won't produce AGI, and whether alignment with a more intelligent AI "species" is even theoretically possible.