FOL produced a letter this week that is worth a read.
Their point is straightforward: the development of AI systems with human-competitive intelligence could pose significant risks to society and humanity, yet there is little planning or management in this area. FOL argues that AI labs are engaged in a dangerous race to develop and deploy ever more powerful digital minds, which can flood information channels with propaganda and untruth and automate away jobs, including the fulfilling ones. Powerful AI systems, the letter holds, should be developed only once we are confident that their effects will be positive and their risks manageable. FOL therefore asks AI labs to immediately pause, for at least six months, the training of AI systems more powerful than GPT-4, and to use this time to develop and implement shared safety protocols for advanced AI design and development, overseen by independent outside experts.
In other words, Time Out.
Which sounds a bit quaint. To anyone who has been part of a technology race, or even a pickup street hockey game, timeouts are only as good as the players' willingness to honor them. AI is like water: if we dam it up for six months, it will just flow somewhere else. We would effectively be handing a six-month advantage to the rest of the world, and the rest of the world would likely never thank us.
The focus should be on making today’s powerful, state-of-the-art systems more accurate, safe, interpretable, transparent, robust, aligned, trustworthy, and loyal, and working with policymakers to dramatically accelerate development of robust AI governance systems. These governance systems should include new and capable regulatory authorities dedicated to AI, oversight and tracking of highly capable AI systems and large pools of computational capability, provenance and watermarking systems to track model leaks, and a robust auditing and certification ecosystem.
But there will be no "AI Summer" pause. To jump to another analogy, we have to build this plane while it's flying.