Navigating the AI Revolution: Global and Regional Strategies for Responsible Governance.


An overview of the UN, the UAE and Regional Initiatives.

By George Salama.

Humanity is witnessing a revolutionary race at an unprecedented pace— a race driven by artificial intelligence (AI) applications that are rapidly transforming nearly every aspect of our lives. While AI research and applications have been evolving for many years, the key difference today is their widespread availability to the public. As AI moves from the confines of research labs into the hands of everyday users, the focus on responsible use becomes sharper. The responsible use of AI is a global social challenge that requires a coordinated societal approach, incorporating governance models, public policies, regulatory frameworks, security measures, and safety aspects. Additionally, the socio-economic impact of AI necessitates collaboration among global and regional stakeholders. In response, several governments worldwide and specifically in the MENA region are developing comprehensive AI strategies, policies, and regulations to balance AI’s potential with its associated risks.

As AI continues to revolutionize industries worldwide, its economic impact is becoming increasingly significant. The global AI market, valued at approximately $150 billion in 2023, is projected to grow exponentially in the coming years. According to recent reports, the AI market is expected to exceed $1.8 trillion by 2030, driven by advances in machine learning, automation, and the deployment of AI across sectors like healthcare, climate, finance, sports, and manufacturing.

This rapid growth underscores the urgent need for responsible AI governance and international collaboration to harness AI’s potential for economic development while managing the associated risks. As countries and organizations continue to invest heavily in AI, the economic opportunities are vast, but they must be balanced with ethical and regulatory considerations to ensure sustainable growth.

From Internet To AI Governance – The UN Approach:

Technological advancements frequently outpace the evolution of governance and policy development processes. Over two decades ago, as the Internet revolution surged, the United Nations responded by establishing a global governance framework through the creation of the Internet Governance Forum (IGF). This initiative, a key outcome of the World Summit on the Information Society (WSIS) held in Geneva (2003) and Tunis (2005), aimed to provide a structured, multi-stakeholder platform for addressing critical Internet-related public policy issues. Since its inception in 2006, the IGF has played a pivotal role in facilitating global dialogue and collaboration on Internet governance. However, significant challenges remain unresolved, particularly in the areas of Internet openness, digital rights, and multilingualism—issues that continue to present pressing concerns for users, especially in the MENA region.

From Internet to AI governance: in October 2023, recognizing the urgent need for global AI governance, the United Nations launched a High-Level Advisory Body on Artificial Intelligence. This globally coordinated body, whose members include experts from Egypt, Saudi Arabia, and the UAE, is tasked with maximizing AI’s potential for the greater good while managing the risks and uncertainties posed by AI technologies, including AI-driven services, algorithms, and increasing computational power. Following extensive global consultations, the advisory body published its final report on September 20, 2024, titled “Governing AI for Humanity”. The report highlighted that hundreds of AI guides, frameworks, and principles have been adopted by governments, companies, consortia, and regional and international organizations. Yet none of them is truly global in reach and comprehensive in coverage.

It is critical not to leave AI ungoverned; accordingly, the UN report calls for a global approach to AI governance that begins with a shared understanding of its capabilities, opportunities, risks, and uncertainties. To achieve this, it is essential to provide timely, unbiased, and reliable scientific knowledge about AI, enabling UN Member States to establish a common foundation for informed decision-making. This approach will help bridge the information gap between nations and large corporations with costly AI research labs, ensuring that all stakeholders—governments, businesses, start-ups and the global community—are equipped to participate in shaping the future of AI on an equal footing for the benefit of humanity.

The report outlines seven key recommendations designed to address critical gaps in AI governance and promote robust international cooperation, ensuring AI is developed and deployed responsibly on a global scale:

  1. An international scientific panel on AI.
  2. Policy dialogue on AI governance.
  3. AI standards exchange.
  4. Capacity development network.
  5. Global fund for AI.
  6. Global AI data framework.
  7. AI office within the Secretariat.

The report concluded by reaffirming the UN’s optimistic view of AI’s future and its transformative potential. However, this optimism must be grounded in a realistic understanding of the risks and the insufficiency of existing structures and incentives. AI is too critical, and the stakes are too high, to rely solely on market forces or a fragmented mix of national and multilateral efforts. A coordinated, comprehensive approach is essential to fully realize AI’s benefits while effectively managing its challenges.

From Global to Regional/National AI Strategy – Case of UAE:

The outcomes of the UN High-Level Advisory Body on Artificial Intelligence, such as the call for a bi-annual “Global AI Governance Forum”, represent the global layer of this effort. Still, regional and national initiatives remain essential to secure an inclusive model of AI adoption and innovation that brings policymakers, technology experts, and startups together on an equal footing.

In 2017, the United Arab Emirates (UAE) launched the “National Strategy for Artificial Intelligence 2031” to make AI an integral part of the national business development agenda, enhancing people’s lives and government services. The strategy was accompanied by the appointment of the world’s first Minister for Artificial Intelligence, His Excellency Omar Al Olama. The UAE’s efforts to commercialize AI are part of a larger agenda to diversify the economy, build a digital ecosystem, and grow knowledge-based sectors. The strategy includes eight strategic objectives aimed at employing artificial intelligence in vital areas such as education, government services, and community wellbeing. The objectives are:

  1. Build a reputation as an AI destination.
  2. Increase the UAE competitive assets in priority sectors through deployment of AI.
  3. Develop a fertile ecosystem for AI.
  4. Adopt AI across customer services to improve lives and government.
  5. Attract and train talent for future jobs enabled by AI.
  6. Bring world-leading research capability to work with target industries.
  7. Provide the data and supporting infrastructure essential to become an AI test bed.
  8. Ensure strong governance and effective regulation.

In parallel with the government’s vision reaffirming the UAE’s position as a global hub for AI came the establishment, in 2018, of G42, the leading UAE-based artificial intelligence (AI) technology holding company, headquartered in Abu Dhabi with a global footprint. With a team of 20,000 people and a motto of unleashing the full potential of AI to invent a better everyday, G42 has since built unprecedented partnerships with global US tech leaders such as Microsoft and NVIDIA.

In November 2023, Microsoft announced the availability of G42’s Jais Arabic Large Language Model on the new Azure AI Cloud Model-as-a-Service offering. Jais is a cutting-edge Arabic Large Language Model (LLM) developed through a collaboration between G42’s Inception, a subsidiary focused on AI and machine learning, the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), and Cerebras Systems, a company specializing in high-performance AI hardware. Named after Jebel Jais, the UAE’s highest mountain, the model has been trained on a mixture of 116 billion tokens in Arabic and English, sourced from a wide array of datasets, including books, academic papers, websites, and social media.
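To give a concrete sense of how a model such as Jais can be used, the brief sketch below loads an open Jais checkpoint through the Hugging Face transformers library and generates a short Arabic completion. This is a minimal sketch under stated assumptions: the model identifier, prompt, and generation settings are illustrative and are not details taken from the Microsoft announcement above, which concerns the Azure AI Model-as-a-Service offering.

    # Minimal sketch: querying an Arabic LLM with Hugging Face transformers.
    # The model identifier below is an assumption for illustration; the
    # production service described above is offered via Azure AI
    # Model-as-a-Service rather than through this library.
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "core42/jais-13b-chat"  # assumed open checkpoint of Jais

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

    # Ask a simple question in Arabic: "What is the capital of the UAE?"
    prompt = "ما هي عاصمة الإمارات العربية المتحدة؟"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=64)

    print(tokenizer.decode(outputs[0], skip_special_tokens=True))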

In April 2024, Microsoft invested $1.5 billion in Abu Dhabi’s G42 to accelerate AI development and global expansion. H.H. Sheikh Tahnoon bin Zayed Al Nahyan, Chairman of G42, said: “Microsoft’s investment in G42 marks a pivotal moment in our company’s journey of growth and innovation, signifying a strategic alignment of vision and execution between the two organizations. This partnership is a testament to the shared values and aspirations for progress, fostering greater cooperation and synergy globally.” The partnership will also support the development of a skilled and diverse AI workforce and talent pool to drive innovation and competitiveness for the UAE and the broader region, backed by a $1 billion investment in a fund for developers. “Our two companies will work together not only in the UAE, but to bring AI and digital infrastructure and services to underserved nations,” said Brad Smith, Microsoft Vice Chair and President. “We will combine world-class technology with world-leading standards for safe, trusted, and responsible AI, in close coordination with the governments of both the UAE and the United States.”

In September 2024, G42 announced its partnership with NVIDIA to advance climate technology, with a focus on developing AI solutions aimed at dramatically enhancing the accuracy of weather forecasting globally. By bringing together G42’s AI expertise and NVIDIA’s powerful computing resources, the partnership aims to create effective climate solutions that blend scientific accuracy with real-world use.

On the AI governance front, G42 partnered with the POLITICO Research & Analysis Division to launch the report “Sovereign AI Ecosystems: Navigating Global AI Infrastructure & Data Governance.” The report highlights the challenges and opportunities in creating sovereign AI ecosystems, driven by the need for robust data governance frameworks that balance security, innovation, and compliance across different jurisdictions. It also examines the strategic importance of data centers, supercomputers, and the physical infrastructure supporting AI.

In recognition of the critical role startups play in building a vibrant AI ecosystem, the UAE aims to be more than just an incubator of AI technology domestically—it seeks to position itself as a global thought leader, shaping the evolving policy debate on AI. A strong startup ecosystem not only drives the development and adoption of AI technologies but also bolsters the UAE’s standing in the global innovation landscape. Several key incubators in the UAE are dedicated to supporting AI startups and attracting investors, including:

  • Dubai Future Accelerators (DFA): As part of the Dubai Future Foundation, DFA fosters collaboration between startups, private companies, and government entities, with a focus on emerging technologies such as AI.
  • Hub71: Based in Abu Dhabi, Hub71 is a global tech ecosystem that supports AI-focused startups. It has formed strategic partnerships, such as a 2021 agreement with AIQ, aimed at accelerating AI solutions in the energy industry.

In conclusion, the UAE has firmly established itself as a global leader in artificial intelligence (AI) by adopting a holistic approach that combines government initiatives, business innovation, and a dynamic startup ecosystem. Through forward-looking policies, ethical AI frameworks, and a strong emphasis on talent development and research, the UAE has positioned itself as a center for responsible AI advancement. By fostering collaboration between the public and private sectors as well as supporting startups, the UAE is not only driving AI progress within its borders but also actively influencing the global AI landscape, setting new standards for innovation, governance, and sustainable digital transformation.

Regional Initiatives Towards A Responsible AI:

Salama – The Policy & Business Intellectual Group is a forward-thinking organization focused on shaping the intersection of policy and business, particularly in the areas of artificial intelligence (AI) and emerging technologies. The group plays an active role in fostering dialogue and collaboration among global stakeholders, with a focus on creating responsible and innovative frameworks for the development and governance of AI.

In March 2024, Salama – The Policy & Business Intellectual Group launched a pioneering initiative from the MENA region to the global stage with the publication of “The AI for All” Open Letter.

Published in three languages – English, French, and Arabic – the letter has garnered signatures from AI stakeholders across 16 countries spanning three continents (Africa, Asia, and Europe). It serves as a call to action for the development, testing, adoption, and governance of AI grounded in three core principles: Innovation, Safety, and Responsibility. Endorsing the “AI for All” open letter means:

  1. Collaborative Force: Join hands with like-minded stakeholders to explore AI opportunities and advocate for shared goals.
  2. Accelerated Progress: Drive impactful change faster by uniting efforts towards addressing common challenges.
  3. Awareness and Commitment: Amplify awareness of crucial AI issues and demonstrate unwavering commitment to the “AI for All” cause.
  4. Humanity-Centric AI: Signal your dedication to shaping AI in a manner that prioritizes the well-being and advancement of humanity.

On a related note, with the growing intersection of AI applications and social media, the group has expressed concern that irresponsible AI adoption could introduce several emerging risks, potentially leading to biased public discourse. To address these risks, it is essential to develop and enforce ethical AI guidelines, enhance transparency, and promote diversity within AI systems. By taking proactive steps, both tech companies and policymakers can reduce AI’s negative impact on public dialogue and contribute to fostering a more informed, balanced, and inclusive society.

These risks stem from flaws in AI design, deployment, and regulation:

  1. Data Bias: AI systems trained on biased datasets can reinforce and amplify existing societal biases. For example, an AI model trained on historical data that reflects gender or racial biases can produce biased outcomes; it is therefore important to ensure diversity in training datasets (a minimal illustration follows this list).
  2. Echo Chambers: AI algorithms designed to maximize user engagement can create echo chambers by recommending content that aligns with users’ existing beliefs, thus reinforcing biased perspectives.
  3. Algorithmic Influence: AI-driven recommendation systems can highlight certain viewpoints, shaping public opinion by giving higher visibility to particular narratives.
  4. Manipulation: Bad actors can exploit AI algorithms to spread misinformation and propaganda, using bots and deepfakes to sway public opinion.
  5. Deepfakes: AI-generated deepfakes can create realistic but false videos and audio, undermining trust in media and public figures by making it difficult to distinguish between genuine and fabricated content.
  6. Lack of Regulation: Insufficient regulation and oversight of AI technologies can result in their irresponsible use, with little recourse for addressing bias and discrimination.
  7. Resistance to Innovation: Public fear and skepticism about AI can hinder its adoption and the potential benefits it can bring, stalling technological and societal progress.
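As a minimal illustration of the data-bias point above, the sketch below compares positive-label rates across groups in a toy dataset, a rough proxy for checking whether training data already encodes a skew that a model could learn and amplify. The column names, data, and threshold of concern are illustrative assumptions and are not drawn from any initiative described in this article.

    # Minimal sketch of a simple training-data bias check: compare the share of
    # positive labels across groups (a demographic-parity style measure).
    # Column names "group" and "label", and the toy data, are assumptions.
    import pandas as pd

    def positive_rate_by_group(df: pd.DataFrame,
                               group_col: str = "group",
                               label_col: str = "label") -> pd.Series:
        """Return the share of positive labels for each group in the dataset."""
        return df.groupby(group_col)[label_col].mean()

    # Toy dataset: a large gap between groups would suggest the raw data
    # encodes a skew that a model trained on it could reproduce.
    toy = pd.DataFrame({
        "group": ["A", "A", "A", "B", "B", "B"],
        "label": [1, 1, 0, 0, 0, 1],
    })

    rates = positive_rate_by_group(toy)
    print(rates)
    print("gap between highest and lowest group:", rates.max() - rates.min())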

The Salama group’s initiatives serve as the foundation for bold, expansive future projects focused on creating a global, independent, multi-stakeholder dialogue: one that will spearhead AI governance and policy discussions, engaging with international organizations, policymakers, regulators, AI technology companies, and startups to drive responsible AI development and implementation worldwide.

Critical Analysis: Evaluating AI Governance Initiatives:

While global and regional AI governance frameworks represent significant progress, a closer examination reveals several areas where these initiatives face critical challenges.

  1. Fragmented Governance Approaches: Despite the growing number of AI strategies and governance models, a global, unified approach remains elusive. Many of these frameworks are developed independently with minimal coordination. This fragmentation raises concerns about the consistency and enforceability of AI governance across borders, especially as AI technologies often operate in a global context. In addition, the lack of binding agreements or enforcement mechanisms remains a major challenge.
  2. Over-reliance on Market Forces: Leaving AI governance largely in the hands of private-sector players can be problematic. Commercial interests and market dynamics do not always align with the public interest, particularly when it comes to ethical AI practices and long-term societal benefits. Current market-driven solutions do not sufficiently address these risks, raising the question of whether regulatory intervention is needed to mitigate harm from AI-driven misinformation, bias, and societal fragmentation.
  3. Data Sovereignty and Privacy Concerns: Many of the governance frameworks discussed focus on the responsible deployment of AI without fully addressing data sovereignty. As AI systems are increasingly reliant on large datasets, ensuring data privacy and security across international borders becomes a pressing concern. The challenge lies in striking a balance between fostering innovation and ensuring that data privacy and sovereignty are not compromised. A more rigorous approach to data governance frameworks is necessary, especially as AI continues to evolve and expand into new domains, such as healthcare and public services.
  4. Challenges in Addressing Bias and Discrimination: While frameworks and principles exist to promote fairness and inclusivity in AI, translating these principles into real-world applications has proven difficult. The lack of diverse datasets, particularly in regions with limited access to data collection resources, exacerbates this issue. Moreover, regulatory frameworks often lag behind the rapid development of AI technologies, resulting in insufficient oversight of biased algorithms. A more proactive approach, including independent audits of AI systems and the incorporation of more diverse data sources, is critical to addressing these challenges.
  5. The Role of Public Trust: Public trust is a foundational element of any AI governance framework. While the world is witnessing significant strides in building AI ecosystems, public skepticism about AI remains a significant barrier. Current governance frameworks do not adequately address the need for public education and transparent communication to build trust in AI systems. Future governance models must prioritize public engagement and education to ensure that AI adoption is both inclusive and transparent. The complexities of regulating AI require a more cohesive, transparent, and inclusive approach. A global, multi-stakeholder framework that addresses the socio-economic, ethical, and technical dimensions of AI governance is crucial to unlocking AI’s full potential while safeguarding society from its risks.