The European Commission recently proposed a stringent set of rules to regulate AI, citing an urgent need. With the global race to regulate AI officially on, the EU's detailed proposal explicitly bans uses of AI that threaten people's rights and safety and defines which applications it considers "high-risk."
We can all agree with the sentiment of Margrethe Vestager, the European Commission executive vice president, when she said that when it comes to “artificial intelligence, trust is a must, not a nice to have,” but is regulation the most effective and efficient way to secure this reality?
The Commission's takeaways are extensive, but the ones that make the most sense to me are those stressing that regulated AI should aim to increase human well-being. At the same time, regulation should not overly constrain experimentation with and development of AI systems.
High-risk AI systems should always have unalterable built-in human oversight…