International AI governance is an emerging field. With AI in use across every global industry, and exciting advances like AI supercomputing, the world needs global AI governance that can maximise the gains and minimise the harms from the use of AI.
The most significant obstacles to global AI governance are competition and poor communication between countries. But what are the main threats posed by AI, and how is the British government currently modelling global governance of this new technological frontier?
The Role of the British Government in AI Governance
The UK has global research excellence, a highly skilled workforce, and a regulatory environment that sets expectations not only in the UK but across the world. The National AI Strategy builds on these strengths and sets them out more fully.
The UK’s Industrial Strategy (2017) first laid the foundation for AI innovation. Then, in April 2018, the UK government and the country’s AI ecosystem agreed a deal of almost £1 billion to upskill the workforce and improve the UK’s global standing in AI development, paving the way for the National AI Strategy.
The National AI Strategy’s Three Core Pillars:
- Investing in the long-term needs of the AI ecosystem: The UK plans to provide long-term support to its AI ecosystem in order to create a sustainable growth path for all.
- Ensuring AI benefits all sectors and regions: AI should be available to all: in all parts of the country, all types of business and all types of industry – to encourage benefits for the economy as a whole.
- Governing AI effectively: Striking the balance between innovation and legitimate interests.
The UK uses an agile, risk-based and context-sensitive regulatory approach that recognises the benefits of AI as the technology emerges while effectively mitigating its risks.
The AI and Digital Hub is a multi-agency online advisory service being piloted by the government to support innovation and to act as a coordination platform between government departments and the private sector.
The UK’s AI Standards Hub
The AI Standards Hub, the first of its kind, is led by The Alan Turing Institute and was founded by the Department for Digital, Culture, Media and Sport with support from the UK Research and Innovation Global Challenges Research Fund. It seeks to shape standards for AI design on a global basis and to ensure that these standards address wider issues of ethics and safety.
The Hub produces practical toolkits for organisations, provides a space for national collaboration in AI through its web portal, and develops educational resources.
Collaborations and Impact
The UK collaborates with international initiatives, including:
- Global Partnership on AI (GPAI): Spearheaded by 15 founding countries to develop ‘an AI-driven future that works for people and society’.
- Trade and Technology Council: Created to ‘harmonise EU-US cooperation and coordination on trade and technology matters, including AI’.
- UN High-Level Advisory Body on AI: Tasked with advancing recommendations for international AI governance.
- AI Safety Institute: Established by the UK to promote the internationalisation of knowledge on advanced AI.
Ethical Considerations in AI
As artificial intelligence becomes increasingly central to our lives, ethical rules will play an ever greater role in shaping new technologies. The responsible development and use of AI is a crucial ingredient in building trust in the technology.
Frameworks for Guiding Ethical AI
Establishing ethical building blocks for the responsible delivery of AI projects involves cultivating a culture of responsible AI innovation and applying a governance architecture that brings the values and principles of ethical, fair, and safe AI to life.
Teams should ensure an AI project implements these four principles:
- Ethically permissible: consider the impacts it may have on the wellbeing of affected stakeholders and communities.
- Fair and non-discriminatory: consider its potential to have discriminatory effects on individuals and social groups, mitigate biases which may influence your model’s outcomes, and be aware of fairness issues throughout the design and implementation lifecycle (a simple fairness check is sketched after this list).
- Worthy of public trust: guarantee as much as possible the safety, accuracy, reliability, security, and robustness of its product.
- Justifiable: prioritise the transparency of how the model is designed and implemented, and the justification and interpretability of its decisions and behaviours.
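To make the fairness principle concrete, the sketch below shows one simple check a team might run: comparing a model’s positive-outcome rates across a protected group. It is a minimal illustration using synthetic data and an arbitrary tolerance threshold, not an official method from UK guidance.

```python
# Minimal sketch: checking a model's outcomes for group-level disparities.
# The data, features, and tolerance below are illustrative assumptions only.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic data: two features plus a binary "group" attribute
# (standing in for a protected characteristic).
X = rng.normal(size=(1000, 2))
group = rng.integers(0, 2, size=1000)
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

model = LogisticRegression().fit(X, y)
pred = model.predict(X)

# Demographic parity difference: gap in positive-outcome rates between groups.
rate_a = pred[group == 0].mean()
rate_b = pred[group == 1].mean()
disparity = abs(rate_a - rate_b)

print(f"Positive rate (group 0): {rate_a:.2f}")
print(f"Positive rate (group 1): {rate_b:.2f}")
print(f"Demographic parity difference: {disparity:.2f}")

# A team might flag the model for review if the gap exceeds an agreed tolerance.
TOLERANCE = 0.05  # illustrative value only
if disparity > TOLERANCE:
    print("Disparity exceeds tolerance - review features and training data for bias.")
```

In practice, teams would apply checks like this, alongside richer fairness metrics, to real data at several points in the design and implementation lifecycle.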
To make sure these principles are fully incorporated into AI projects, a governance architecture should be created that includes:
Framework of ethical values
These ethical values, known as the ‘SUM’ values, support the responsible design and use of AI:
- Respect the dignity of individuals
- Connect with each other sincerely, openly, and inclusively
- Care for the wellbeing of all
- Protect the priorities of social values, justice, and public interest
Drawing on the FAST Track Principles published by The Alan Turing Institute, the guidelines also recommend implementing a set of actionable principles covering:
- Fairness: Addressing biases and promoting equitable outcomes.
- Accountability: Holding stakeholders responsible for AI decisions.
- Sustainability: Ensuring long-term viability and positive impact.
- Transparency: The ability to explain how and why an AI model performed the way it did in a specific context, and to justify the ethics of its outcome and the processes in use.
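As one concrete example of the transparency principle, the sketch below uses permutation importance to show which inputs a trained model relies on most. The dataset and model choices are illustrative assumptions, not part of the FAST Track guidance itself.

```python
# Minimal sketch of one transparency practice: measuring which inputs drive a
# model's predictions using permutation importance.
# The dataset and model here are illustrative assumptions only.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure how much accuracy drops:
# a large drop means the model relies heavily on that feature.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)

# Report the five most influential features.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda item: item[1], reverse=True)
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Documenting outputs like these alongside design and deployment records is one way a team can explain and justify a model’s behaviour to affected stakeholders.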
Challenges and Opportunities
In the last few years, several global AI governance frameworks have been published. These include principles from organisations like the OECD, the EU, and UNESCO. However, there are numerous challenges in co-ordinating a global approach to AI governance.
Geopolitical factors heavily influence global AI governance, and the competing interests of different countries pose a significant barrier to achieving a global agreement on frameworks and approaches. Different countries have different cultural norms, laws, and regulatory priorities. There is no global government, and tensions between different regimes can make cooperation difficult. Even when cooperation is achieved, it is difficult to harmonise universal standards while respecting diversity. Some of these problems can be overcome by international institutions like the OECD, which can encourage cooperation and facilitate communication.
The UN has established a high-level advisory body on AI, which recently released an interim report. After seeking feedback, the panel aims to finalise this report in mid-2024, incorporating diverse perspectives from various countries. The interim report emphasises global coordination on AI and advocates for universal buy-in. While this report represents an initial step toward global AI governance, other national and regional efforts, such as Africa’s AI strategy, also contribute to shaping the conversation around global AI governance.
UK’s Leadership Opportunities
The UK is already stepping up to lead the way in AI governance. The National AI Strategy is a ten-year strategy for making the UK a global AI superpower, and it includes a National AI Research and Innovation Programme and a white paper on AI regulation.
From Alan Turing to DeepMind, the UK is home to a great AI heritage: it ranks third globally in private venture capital investment into AI companies, behind only the US and China. The UK’s context-specific, risk-based and innovative regulation of AI also demonstrates a new opportunity for global leadership.
Overall, the future of global AI governance depends on collective action, and the UK’s recognised abilities, historical achievements and ambitious forward-looking strategies leave it well placed to take the lead in defining the future of AI.
Impact on Businesses and Innovation
Successive governments have described the UK’s approach to AI regulation as ‘pro-innovation’. This interim regulatory regime relies on cross-sectoral principles applied by existing regulators to enforce the rules for AI systems, including safety, transparency, fairness and accountability. These principles drive the ethically responsible use of AI and can deliver a competitive advantage for businesses.
Regulators in the UK are turning the 2023 AI Regulation White Paper into reality, one step at a time. They are trying to strike a balance between the promise of AI and the risks associated with it, alongside improving capabilities and encouraging responsible AI use.
AI development is driving businesses forward across multiple industries and leading business innovation at an unprecedented level.
HP ZBook laptops are ideal devices for businesses interested in AI development and research.
HP Elite PCs are perfectly suited to business professionals who may use AI in their work.
Future Outlook
The UK has been proactive in positioning itself as a leader in the ethical and responsible development of AI. The future looks promising for the UK to continue to lead global AI governance, driven by strategic initiatives like the Centre for Data Ethics and Innovation.
The UK is set to continue refining its regulatory frameworks, balancing AI innovation with ethical considerations and positioning itself as a global thought leader in AI governance. Alongside active participation in international forums and heavy investment in AI research and development, the UK looks set to retain its position at the forefront of the ever-evolving landscape of AI technology and governance.
Conclusion
Despite challenges in global AI governance, including geopolitical tensions and diverse regulatory priorities, the UK's rich AI heritage and strategic initiatives like the National AI Strategy make it well-positioned to lead in defining the future of AI. As the UK continues to refine its regulatory frameworks and invest in AI research, it remains poised to maintain its position at the forefront of AI technology and governance, shaping the global landscape for years to come.