With tech industry players rolling out shiny new AI investments on the banks of the Hudson River, international leaders gathered across Manhattan on the East River for September’s annual UN General Assembly session, a global forum on big issues like sustainable development, ending armed conflict, and, amid it all, artificial intelligence.
Also present was Neil Sahota, CEO of AI research firm ACSILabs, a longstanding UN AI advisor, and an early AI R&D specialist. Twenty years ago, Sahota found himself in the midst of a “business intelligence” investment boom, and was eventually brought onto IBM’s secret team behind its Jeopardy!-playing AI, Watson. A founder of the UN’s AI for Good initiative, he has watched the rise of global AI tools, and the concerns that accompany them, in real time, at times shepherding them along. And for almost a decade, Sahota has been on call with the international body as it devised a “tactical” response to AI.
“It was a bit of a brave new world,” said Sahota. Since then, the UN has invested in hundreds of AI projects and programs, with different bodies taking a stab at AI guidance that reflects the needs of the global population. But with the acceleration of national AI investments, one unanswered question has loomed: How should it be regulated?
Despite the complexity of the task, advocates like Sahota believe the international body is the world’s best bet for steering the impact of AI. “The UN is one, if not the only, globally trusted organization that has the credibility to actually lead this effort,” he explained. “It can become a leader, to help member nations — help the people, help the industry — understand and create a new mindset around AI.”
But it might be too late. “People are realizing that we’re running out of time, or maybe we’ve already run out of time, to figure these things out,” Sahota told Mashable. “We live in a time of hyper-change, experiencing 100 years’ worth of change in the next 10 years. We don’t have time to react to things anymore.”
The UN steps into the AI arms race
Globally, nation states are rushing into AI investment at an accelerating pace, attempting to beat each other to the technological punch. It’s what the AI Now Institute refers to as the “AI arms race.” The race has fostered the rise of what experts have coined “AI nationalism”: the transformation of AI into a core industrial concern and national industrial resource, the institute explains. Claims to technological sovereignty among the nation states leading the charge (mainly the U.S. and China) have grown alongside it.
Other governments and international bodies have spent the last few years formulating responses to the increasingly political nature of AI development. The UN has discussed the impact of AI in regulatory conversations since at least 2017. In March, the General Assembly adopted a resolution on “steering AI use for global good” amid “existential” concerns. U.S. representatives introduced the landmark statement of intent, saying the international community must “govern this technology rather than have it govern us.”
The UN’s current working group, the High-level Advisory Body on Artificial Intelligence, was formed in 2023, after several years of suggestions from advisors like Sahota.
Birthed from this year’s convening is a new “Governing AI for Humanity” report, which at times reads as a sobering list of risks and at others as an optimistic guide to co-investment, amid AI’s burgeoning “opportunity envelope.” It recommends the creation of a new, independent scientific panel to survey AI “capabilities, opportunities, risks, and uncertainties”; it encourages “AI standards sharing” and sets out plans for a kind of AI governance network; and it pushes for a global AI fund to foster more “equitable” investment.
“Fast, opaque and autonomous AI systems challenge traditional regulatory systems, while ever-more-powerful systems could upend the world of work. Autonomous weapons and public security uses of AI raise serious legal, security, and humanitarian questions,” the report warns. “There is, today, a global governance deficit with respect to AI. Despite much discussion of ethics and principles, the patchwork of norms and institutions is still nascent and full of gaps. Accountability is often notable for its absence, including for deploying non-explainable AI systems that impact others. Compliance often rests on voluntarism; practice belies rhetoric.”
Sahota provided input on the report, but didn’t sit on the committee. He explained that the report was in development for years (at one point, it was slated as the possible culmination of the body’s AI for Good summit), but it needed unanimous input from all 193 member nations for it to have any credence.
Having observed the political give-and-take of formalizing an AI report of this size, Sahota noted the expected “mellowing out” of certain regulatory suggestions and the “beefing up” of others. Sahota has championed a separate UN arm dedicated to AI and technological oversight for years, and the new report recommends the creation of an “independent international scientific panel” and an AI office in the UN Secretariat. But there’s a long journey ahead before that body has any kind of formal influence.
An office of that kind, Sahota argues, is crucial: it would act as a focal point to coalesce working groups, committees, and projects, and to provide visibility into international regulation efforts.
The report notes a surfeit of “documents and dialogues” that have been adopted by governments, companies, consortiums, and international organizations that focus on AI governance. But, the UN argues, “none of them can be truly global in reach and comprehensive in coverage. This leads to problems of representation, coordination, and implementation.” The less-than-ideal future of AI governance involves “disconnected and incompatible AI governance regimes,” the UN says, prompting the need for coordination.
The call seems urgent, but it’s long overdue.
“In the digital age, there are no boundaries,” said Sahota. “Someone develops an AI technology, or any kind of technology, and there’s really no way to stop its spread or use anywhere in the world.” The omnipresence of AI has worried many, and its impact on the global majority, on formerly colonized nations, is an issue that will warrant international collaboration. In many ways, it raises the same complicated questions as the worsening climate crisis. And national policy is already making similar concessions.
AI sneaks past the long arm of the law
A variety of regulatory and standards-building efforts have been proffered by nations and political blocs. In May, the European Union signed into law a first-of-its-kind AI Act, intended to protect its citizens from “high-risk” AI. Canada has proposed a similar legally enforceable standard, the Artificial Intelligence and Data Act, though it has yet to be enacted.
But for the most part, AI’s regulatory oversight has been piecemeal, reliant on soft law principles. UNESCO has led a widespread international effort to create a human rights framework around AI, including its AI Ethics Recommendations, a Global AI Ethics and Governance Observatory, and an AI Readiness Assessment Methodology (RAM), designed to help member states gauge how prepared they are to implement AI. “In no other field is the ethical compass more relevant than in artificial intelligence,” writes Gabriela Ramos, UNESCO assistant director-general for Social and Human Sciences.
The Organisation for Economic Co-operation and Development, or OECD, is a huge player, too, establishing international frameworks for possible intergovernmental cooperation and creating methodologies for ethical evaluation. The OECD Recommendation on Artificial Intelligence, the first set of intergovernmental principles for trustworthy AI, emphasized “interoperability” in AI policy. Notably, the OECD’s biggest players, the nations signing onto its work, are wealthier, “industrialized” countries; similar nation-led efforts include Japan’s Hiroshima AI Process Friends Group, the Global Partnership on Artificial Intelligence (GPAI), the UK’s Bletchley Declaration, and China’s Interim Measures on generative AI.
The U.S. has introduced dozens of AI regulation bills, with states focusing on the regulation of synthetic digital forgeries, or deepfakes.
But the slow legislative efforts of nation states have allowed for a proliferation of bad use cases for generative AI, and for the growth of private interests in its development and implementation.
The UN’s report suggests that, if extreme risk arose with the development of AI, the tech could be treated along the lines of a biological weapon or even nuclear energy: science that has been limited and regulated by participating member nations for the greater good of humanity. The International Atomic Energy Agency (IAEA), for example, drew lines around nuclear science, permitting its use in energy and medicine while barring further weaponization. (The irony that nuclear energy may be the next path forward for meeting AI’s demands on the energy grid is not lost.)
But the analogy is limited. “Nuclear energy involves a well-defined set of processes related to specific materials that are unevenly distributed, and much of the materials and infrastructure needed to create nuclear capability are controlled by nation states,” the report outlines. “AI is an amorphous term; its applications are extremely wide and its most powerful capabilities span industry and states.”
The UN’s ‘lead by example’ strategy
The diffuse nature of AI means collaboration and forethought are key. “One of my concerns is that we’re working on things that we don’t fully understand. As technologists, we build towards the outcome — we just need to measure the outcome we’re looking for,” Sahota explained. “We don’t think about other uses or misuses. We’re not thinking about these other ancillary impacts, these indirect impacts, the ripple effects.”
Even with the international body’s fraught history, and the ongoing issue of cooperation-avoidant nation states, Sahota doesn’t believe there’s a better international forum for regulating AI. “We have to define what right and ethical use means. There’s just no way around that. And who is going to lead that? It’s tailor-made for a body like the UN.”
Could it be, then, that AI’s existence as a broad, cross-sector technology, one that countries are eagerly seeking and that isn’t, on the surface, pegged to historically contentious issues, offers the first opportunity for multilateral agreement?
The UN, Sahota argues, can act as an international standards-setting body that nation states look to as a foundation for AI investment and regulation. Rather than just planning for the potential negative impacts, Sahota says, the UN should model the appropriate use cases of AI technology. “Policy and regulation shouldn’t just be to clean up the guardrails and limit negative risk or legal liability; there’s also a possibility to create good.”
That might be the only path forward, too, as the UN’s recent AI governance recommendations are less a regulatory framework and more a plan for co-investment. They require buy-in from international powers at large: those who will agree to things like a shared data trust, a global AI investment fund, or a “development network” to convene experts and resources. And while the UN’s new report makes an ethical argument similar to Sahota’s, he says its failure to demonstrate member state backing, which would prove that many are already on board with the “lead-by-example” plan, is a misstep.
“This AI fund could be a way to create that nudge, to create incentives for people to think about the impact these [technologies] may have,” he explained. “But it would have been nice to see the next steps laid out, to be able to see at least some of the buy-in, and for it to be a motivator or to lend credibility. It would show that this can be more than just talking heads. It’s more than pieces of paper that collect dust.”
The publication of the UN’s report, and the time its high-level meetings are devoting to ethical AI discussions, amount to a monumental feat amid rising AI nationalism. But technology moves faster than people and processes, Sahota explained, and political bodies need to speed things up. “There are more and more people that see that this window is rapidly closing,” said Sahota. “It is now a people challenge. Can you imagine if everyone became a proactive thinker? How profound of a change that would be? You can tell people that there’s a couple years to figure this out, and they think that is a long time. Two weeks can feel like an eternity, but we only have as much time as we think we do.”