AI
With the increasingly wide adoption of AI, TSG recognised early on that the lack of international standards around AI safety and regulation posed serious threats, including the erosion of social cohesion and the spread of misinformation.
Key to developing global frameworks will be agreement between North American and Chinese AI groups. Accordingly, TSG convened two meetings in Geneva, Switzerland, in July and October 2023, attended by representatives of OpenAI, Anthropic, Cohere, Tsinghua University and other Chinese state-backed institutions.
“We saw a rare opportunity to bring together key US and Chinese actors working on AI – and it resulted in the first Track II dialogue process of its kind”
The talks allowed those present to discuss crucial areas of possible technical cooperation freely, and resulted in concrete policy proposals that were raised in discussions around the UN Security Council meeting on AI in July 2023 and at the UK AI Safety Summit the following November.
“We saw an opportunity to bring together key US and Chinese actors working on AI. Our principal aim was to underscore the vulnerabilities, risks and opportunities attendant with the wide deployment of AI models that are shared across the globe,” our CEO, Salman Shaikh, told the Financial Times for its report on the meetings published in January 2024. “Recognising this fact can, in our view, become the bedrock for collaborative scientific work, ultimately leading to global standards around the safety of AI models.”
The meetings were arranged with the knowledge of the US, UK and Chinese governments and, although not made public at the time, it was subsequently agreed that they could be confirmed as having taken place. The FT stated they were “a rare sign of Sino-US cooperation amid a race for supremacy between the two major powers in the area of cutting-edge technologies such as AI and quantum computing.”
Given the critical importance of further collaboration in this area, TSG plans to build on these discussions in the future.