AI ripe for US-China cooperation

One of the key issues of the 21st century is whether artificial intelligence (AI) is a force for good or the ruin of humanity. With AI at the center of US-China tech rivalry, whether common ethical norms and standards can be adopted, or whether US-China zero-sum competition will lead to a race to the bottom, is a loaded question.

AI, usefully viewed as an interconnected set of big data, IoT and robotics technologies, promises to be a game changer in areas from economics to the automation of the battlefield. AI is more an enabling force, like electricity, than a discrete technology, and like apps today, it is already being applied across most industries and services.

By 2030, AI algorithms will be embedded in every imaginable app and pervasive in robots, reshaping industries from healthcare and education to finance, transportation and military organization. AI has already entered military management, logistics and target acquisition.

The urgent challenge for the coming decade is to develop global ethical principles, standards and norms to govern the development and use of AI.

The good news is that an assessment of four major international statements on AI governance since 2017 - from the US, the EU, the OECD and China's 2018 white paper on artificial intelligence standardization - reveals large areas of commonality. While there are different points of emphasis, there appears to be substantial overlap on essential ethics and principles. Based on a review of all four statements, this outline captures the core shared principles:

Human agency and benefit: Research and deployment of AI should augment human well-being and autonomy; have human oversight to choose how and whether to delegate decisions to AI systems; be sustainable, environmentally friendly, and compatible with human values and dignity.

Safety and responsibility: AI systems should be technically robust, based on agreed standards, and verifiably safe, including resilience to attack, security, reliability and reproducibility.

Transparency in failure: If an AI system fails, causes harm or otherwise malfunctions, it should be possible to explain why and how it made its decision - algorithmic accountability.

Avoiding arms races: An arms race in lethal autonomous weapons should be avoided. Decisions on lethal use of force should be human in origin.

Periodic review: Ethics and principles should be periodically reviewed to reflect new technological developments, particularly in general deep learning AI.

How such ethics are translated into operational social, economic, legal and national security policies - and enforced - however, is an entirely different question. These ethical issues already confront business and government decision-makers. Yet neither has demonstrated comprehensive policy decisions on implementing them, suggesting that establishing governance is likely to be an incremental, trial-and-error process.

Standards and liability rules for autonomous vehicles, data privacy and algorithmic accountability will almost certainly be very difficult to agree on. Moreover, as AI becomes smarter, the ability of humans to understand how it reached its decisions is likely to diminish.

Importantly, however, both a White House executive order on AI standards and China's white paper on AI standardization call for actively engaging in international standards development, and both stress the importance of achieving international AI standards.

Even though AI may be a top arena of US-China tech competition, the risks of catastrophic failure and the desire for global markets should push both governments and industry toward such norms. The need for human responsibility and accountability for AI decisions, and the downside risks of unsafe AI and of lacking the transparency to understand failure, are shared dangers.

One recent example of the US, China and other competitors cooperating is the creation of standards and technical protocols for 5G. A coalition of global private-sector telecom/IT firms, known as the 3rd Generation Partnership Project (3GPP), working with the International Telecommunication Union, a key UN standard-setting institution, has so far successfully agreed on a host of technical standards for 5G technology.

While some in the US complained of Chinese assertiveness in pursuing standards that tend to favor Chinese tech, Beijing played by the rules and, like any other stakeholder, sought to shape global standards. US firms also had the opportunity to push for their preferred standards; they simply have not matched Chinese efforts. The point is that markets are global, and all stakeholders, competitors or not, have an interest in tech standards that reflect that.

Given the enormous stakes in getting AI governance right, "strategic competitors" or not, the US and China have a mutual interest in adopting safe, secure and accountable rules for AI applications.

What's more, as the two leading AI powers, to the degree that the US and China can reach an accord through bilateral dialogue, the outcome would likely shape global efforts to achieve consensus in international standard-setting institutions.

A reasonable next step would be for a representative global forum such as the G20 to formalize the common views on ethical principles, codify them in a UN Security Council resolution, and then press international institutions to set new standards and norms based on that UN mandate.
