
Governments, Artificial Intelligence & Regulations

Post by R&D Team
Explainable AI is crucial for governments that need to audit AI algorithms for bias and discrimination

Global Governance

As AI continues to dominate the technological frontier and become essential to corporations, governments worldwide need means of regulating this dynamic industry. Within the last couple of years, we have seen countries like the U.S., U.K., and Japan commit to unleashing AI innovation while ensuring transparency and fairness. Google CEO Sundar Pichai says, “There is no question in my mind that artificial intelligence needs to be regulated. It is too important not to.” Countries around the world have agreed on the need for global governance bodies to oversee AI development, and the United Nations’ establishment of the UNICRI (United Nations Interregional Crime and Justice Research Institute) Centre for AI and Robotics shows the global interest in regulation. Most recently, in February 2020, the European Union published a white paper on promoting and regulating AI.


Individual Countries’ AI Strategy Initiatives

Many governments worldwide have launched AI strategy initiatives aimed at a variety of goals, from rapid technological development to achieving social good, and regulation is being pursued differently across the globe:

- The United States favors flexible policy environments that promote innovation and competition over heavy-handed regulation. On January 7, 2020, the White House’s Office of Science and Technology Policy released a draft Guidance for Regulation of Artificial Intelligence Applications, which includes ten principles for U.S. agencies deciding whether and how to regulate AI, while the National Security Commission on AI handles all security-related AI regulation.
- India is similarly cautious of over-regulation and focuses on funding academic and commercial research in its “AI for All” approach to achieving social good.
- The EU distinguishes between high-risk and non-high-risk AI applications; only high-risk applications fall under its proposed regulatory framework, which sets requirements for training data, robustness and accuracy, and human oversight.
- China relies on state control of Chinese companies and valuable data, along with mandatory use of the People’s Republic of China’s national standards for AI.
- Singapore takes a “human-centric” approach built on explainability, transparency, and fairness, stressing the need for public trust in AI.
- New Zealand announced in March 2020 that it will progressively incorporate AI controls into existing regulations as they are updated, rather than creating a dedicated AI regulation. This is meant to ensure that new technologies are lawful and safe and do not violate human rights that apply offline. It resembles the United States’ approach: there are general guidelines on ethical practices and privacy laws, but no regulations that might hinder the effective growth of AI.


Goals of Regulation

Countries across the world have recognized AI’s importance in increasing GDP and promoting business efficiency, but they have also recognized the black-box nature of many algorithms and models. As mentioned previously, countries do not want to over-regulate AI to the point where the technology cannot be used to its full capability, which is why trust and transparency are at the epicenter of all regulatory initiatives. Governments understand that AI is a highly technical field and that many models will not be comprehensible to the general public; transparency means that, when needed, people can trace how an algorithm reached its conclusion and where a prediction originated. Most countries agree that another goal of AI regulation is to ensure that AI practices do not violate basic human and ethical rights. For example, in February 2020 the U.S. Department of Defense adopted five principles of AI ethics, which aim to ensure that AI capabilities are responsible, equitable, traceable, reliable, and governable. These broad guidelines leave room for innovative AI development while protecting human rights and keeping individuals accountable when issues arise.
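To make “tracing a prediction” concrete, here is a minimal sketch of the kind of transparency regulators ask for, using an inherently interpretable model. The benefits-eligibility scenario, feature names, and data below are hypothetical, invented purely for illustration:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical benefits-eligibility data: age, employed (0/1), income ($1,000s).
X = np.array([
    [35, 1, 52.0],
    [52, 0, 31.0],
    [29, 1, 47.0],
    [61, 0, 28.0],
])
y = np.array([1, 0, 1, 0])  # 1 = benefit granted, 0 = denied
feature_names = ["age", "employed", "income"]

model = LogisticRegression().fit(X, y)

# With a linear model, each feature's contribution to the decision score
# (log-odds) is just coefficient * value, so one applicant's decision can
# be traced back to the inputs that drove it.
applicant = X[0]
for name, contribution in zip(feature_names, model.coef_[0] * applicant):
    print(f"{name}: {contribution:+.3f}")
print(f"intercept: {model.intercept_[0]:+.3f}")
```

With more opaque models, the same per-feature accounting has to come from post-hoc explainability techniques, which is where the tooling discussed below comes in.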


What next?

One thing is certain: AI will continue to lead innovation in the near future. Projections indicate that the AI market will grow at an astounding compound annual growth rate (CAGR) of 42.2% through 2025. As long as the balance between regulation and innovation holds, we can expect to see AI used in essentially every industry. Because the current flexible regulatory approach to AI has been successful so far, it will most likely remain the strategy of choice for governments around the world; the next phase will be for governments to roll out formal regulatory laws as AI adoption increases.
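As a back-of-the-envelope check on what that rate implies (the projection’s baseline market size is not given here, so this computes only the growth multiple):

```python
# Compound growth multiple implied by a 42.2% CAGR over five years.
cagr = 0.422
years = 5
multiple = (1 + cagr) ** years
print(f"{multiple:.1f}x over {years} years")  # ~5.8x
```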

Governments can also partner with private startups or companies focused solely on explainable AI and, together, build frameworks that align with their regulatory missions and guidelines.

ExplainX + Governments

ExplainX.ai is building an open-source explainable AI library that combines state-of-the-art explainability techniques to help data scientists explain, debug, and validate any black-box machine learning model. By partnering with ExplainX, governments can fuse technical expertise with a human-centric approach to regulating black-box AI models and build a framework that works for all stakeholders involved.
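As a rough illustration of what such explainability tooling does, here is a minimal sketch using scikit-learn’s permutation importance as a stand-in for the model-agnostic techniques a library like ExplainX bundles. This is not ExplainX’s own API, and the synthetic dataset is purely illustrative:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic stand-in data and a model we will treat as a black box.
X, y = make_classification(n_samples=500, n_features=6, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle one feature at a time and measure the drop in accuracy: the
# features whose shuffling hurts most are the ones the model actually
# relies on -- a regulator-friendly check for, e.g., hidden dependence
# on sensitive attributes.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"(+/- {result.importances_std[i]:.3f})")
```

Checks like this make a black-box model’s behavior auditable without access to its internals, which is exactly the property that transparency-based regulatory frameworks depend on.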


To get started with explainable AI, visit our website and start using our open-source explainable AI platform in just a few clicks. Connect with us on our LinkedIn and Slack channels to join a community of like-minded professionals passionate about building trustworthy and unbiased AI systems.



