Technological Innovation: A Tool Or Another Global Problem?
Is technological innovation a tool, or another global problem? While the benefits of AI fascinate the public, its implementation carries risks that must be addressed through regulation and accompanied by equally swift progress in international law, so that concerns related to human rights and sustainable development are effectively addressed.
Traditional Scope Of International Law
Public International Law
Traditionally, public international law has focused mainly—though not exclusively—on the obligations of states, the interactions between international organizations, and the recognition of individuals as holders of rights.
Private International Law
In contrast, private international law has generally dealt with market-related matters, including contractual transactions, jurisdictional questions, and commercial rights.
Until recently, the development, commercialization, and use of algorithmic systems largely fell within this latter domain, being governed by private law principles and legal frameworks such as:
- Tort Law
- Copyright Law
- Contract Law
AI At The Global Level
AI gained attention at the global level primarily through its rapid, disruptive integration into education, culture, and science, which raised critical ethical, human rights, and equity concerns.
As a UN agency focused on education and human development, UNESCO felt compelled to step in to ensure that AI technologies, such as generative AI, are developed and deployed in a “human-centered” manner rather than exacerbating existing digital divides and inequalities.
International Organizations Addressing AI Risks
Several international organizations and intergovernmental bodies have addressed the risks of artificial intelligence (AI) by developing governance policies and safety standards.
UN Approach To AI Governance
The UN approach to AI governance centers on human rights, safety, and security. The UN Secretary-General has cautioned about AI’s implications for peace and security, its risks to democracy, and its potential to undermine science and public institutions, while urging that AI development proceed in a way that protects all human rights.
Key Organizations And Initiatives
| Organization / Initiative | Key Contribution |
|---|---|
| UNESCO (United Nations Educational, Scientific and Cultural Organization) | Adopted the first-ever global standard, the Recommendation on the Ethics of Artificial Intelligence, in November 2021, focusing on human rights, fairness, and transparency. |
| ITU (International Telecommunication Union) | Co-leads the UN Inter-agency Working Group on AI and hosts “AI for Good,” a platform for promoting safe, trusted, and inclusive AI development. |
| UNICRI (United Nations Interregional Crime and Justice Research Institute) | Focuses on the malicious use of AI for terrorism, cybercrime, and threats to security. |
| UNICEF | Launched the Generation AI initiative to ensure AI systems respect children’s rights. |
| UNHCR | Uses AI for humanitarian response but focuses on ethical data use and risk mitigation. |
| OECD AI Principles | Adopted in 2019, these are the first intergovernmental standards for AI, promoting trustworthy, innovative AI that respects human rights. |
| OECD AI Incidents Monitor (AIM) | A framework that documents AI incidents and hazards to provide evidence-based risk assessment for policymakers. |
| AI Governance Tools (OECD) | The OECD provides a Catalogue of Tools and Metrics to help operationalize responsible AI, including risk management frameworks. |
| Hiroshima AI Process | The G7 developed guiding principles and a code of conduct to address the risks of advanced AI systems, including generative AI, and to promote safety, transparency, and accountability globally. |
Governing AI For Humanity Report (2024)
The United Nations Secretary-General’s High-Level Advisory Body on Artificial Intelligence released its final report, Governing AI for Humanity, on September 19, 2024. The report argues that a global approach to governing AI begins with a shared understanding of its capabilities, opportunities, risks, and uncertainties.
There is a need for timely, impartial, and reliable scientific knowledge about AI so that Member States can develop a common foundational understanding worldwide. This will also help reduce information asymmetries between companies that operate costly AI laboratories and the rest of the world, including through greater information sharing between AI companies and the broader AI community.
Public Perception Of AI: Global Survey Findings
A survey by Ipsos for the World Economic Forum has found that 60% of adults around the world expect that products and services using AI will profoundly change their daily life in the next 3-5 years.
- 60% agree AI will make their life easier.
- Just half say AI has more benefits than drawbacks.
- Only 50% say they trust companies that use AI as much as they trust other companies.
The survey also uncovered notable differences between high-income and emerging economies. Ipsos reports that citizens of emerging countries are significantly more likely than those of higher-income countries to report being knowledgeable about AI, to trust companies using AI, and to have a positive outlook on the impact of AI-powered products and services on their lives.
Challenges In Binding International Conventions
A key challenge in developing fully binding international conventions is that leading AI-developing countries are often reluctant to support frameworks that could slow innovation.
At the same time, their domestic policies on AI continue to evolve and remain complex, meaning that progress toward comprehensive international regulation is likely to occur gradually.
As a result, AI governance in the near future will largely remain under national or local authorities, which can be strengthened by aligning their approaches with international guidelines.