AI Safety and Development

SOURCE www.theguardian.com
An AI safety campaigner calls for an existential threat assessment, similar to the calculations carried out before Robert Oppenheimer's first nuclear test in 1945, to ensure the safe development of Artificial Super Intelligence (ASI). Max Tegmark urges AI companies to calculate a 'Compton constant', named after the physicist Arthur Compton, who weighed the odds of that test igniting the atmosphere before approving it: here, the probability of losing control over advanced AI.
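
As a rough illustration of what such a calculation might involve, the minimal Python sketch below chains hypothetical conditional probabilities into a single loss-of-control estimate. Every event name and number is an invented placeholder for illustration, not a figure from Tegmark's work.

    # Hypothetical sketch only: the events and probabilities below are
    # invented placeholders, not figures from Tegmark's paper.
    factors = {
        "ASI is built":                    0.5,  # P(event)
        "ASI develops misaligned goals":   0.3,  # P(event | previous events)
        "misaligned ASI evades oversight": 0.4,  # P(event | previous events)
    }

    compton_constant = 1.0
    for event, p in factors.items():
        # Chain rule: multiply each conditional probability in turn.
        compton_constant *= p
        print(f"after '{event}': cumulative probability = {compton_constant:.3f}")

    print(f"Illustrative Compton constant: {compton_constant:.1%}")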

Key Points

  • Max Tegmark recommends that AI firms calculate a 'Compton constant' to quantify the probability of losing control of an Artificial Super Intelligence (ASI)
  • A global AI safety regime could be built on a consensus Compton constant derived from independent calculations by multiple companies (a pooling sketch follows this list)
  • The Singapore Consensus on Global AI Safety Research Priorities report identifies three broad areas of AI safety research to prioritize
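
The article does not say how several companies' estimates would be reconciled into a consensus figure. The sketch below, with invented lab names and numbers, applies two standard pooling rules, a simple average and a log-odds average, to show what such a consensus might look like.

    import math

    # Hypothetical sketch only: lab names and estimates are invented, and
    # no pooling rule is specified in the article, so two common ones are shown.
    estimates = {"Lab A": 0.10, "Lab B": 0.25, "Lab C": 0.05}

    # Linear opinion pool: a simple average of the probabilities.
    linear_pool = sum(estimates.values()) / len(estimates)

    # Log-odds pool: average the estimates in log-odds space, then map the
    # result back to a probability (a common alternative for combining forecasts).
    log_odds = [math.log(p / (1.0 - p)) for p in estimates.values()]
    mean_odds = math.exp(sum(log_odds) / len(log_odds))
    log_odds_pool = mean_odds / (1.0 + mean_odds)

    print(f"Linear pool:   {linear_pool:.1%}")
    print(f"Log-odds pool: {log_odds_pool:.1%}")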

Pros

  • Emphasizes the importance of assessing potential existential threats posed by advanced AI systems
  • Calls for rigorous calculations, similar to those performed before the first nuclear test, to ensure safety
  • Encourages global collaboration and consensus on AI safety measures

Cons

  • Potential delay in the deployment of powerful AI systems due to safety concerns
  • Complexity and uncertainty in calculating the 'Compton constant' and predicting AI behavior