Artificial intelligence (AI) is drawing attention across many fields and has become a central topic at conferences examining its impact on different areas of life. One domain where AI matters particularly is the military: it is now a major indicator of a country's strength, especially for great powers shifting from traditional arms races toward the development of AI technologies. The U.S. military AI market, for example, is projected to grow by 13.1% annually from 2022 to 2029, reflecting the country's strong interest in military applications of AI. Moreover, in many conflict zones worldwide, AI already plays a leading role and is even seen as a factor prolonging these conflicts.

While AI is becoming central to national security, especially for small and medium-sized countries that rely on technology to compensate for smaller armies, the challenge is to use it responsibly. Countries must balance the benefits of AI against adherence to legal and ethical standards. There have been efforts to regulate AI, but, as with many other complex issues, global agreement remains far off.

One international attempt was a 2023 summit in the Netherlands, but it produced no clear plan for responsible AI use. A second summit, held in Seoul in September 2024, gathered more than 90 countries yet exposed differences among major powers such as the U.S. and China, which favored moderate approaches without binding commitments. Despite these differences, there are two positive signs: 1) the discussions build on the convention on certain conventional weapons, in force since 1983, which provides the framework for debating lethal autonomous weapons; and 2) in 2023, the U.S. issued a declaration on the responsible military use of AI, which 55 countries have since endorsed.

Although no binding agreement has been reached, international concern about the misuse of AI weapons is growing. Countries are working to ensure human control over military AI, to prevent AI from being used to spread weapons of mass destruction, and to guarantee human involvement in any decision to use nuclear weapons. These efforts reflect how seriously countries, including the major powers, view the dangers of AI being used irresponsibly.

In my view, small and medium-sized countries need to participate actively in these discussions to help shape international principles that could lead to a broader agreement, for three main reasons: 1) rapid AI developments threaten state sovereignty, especially for smaller countries whose vital facilities may be targeted by non-state groups; 2) many countries, including those in the Gulf, rely heavily on technology for development, making it essential to govern AI technologies, above all through law; and 3) ongoing regional tensions, in which non-state groups increasingly employ technologies such as drones, make regulating AI all the more critical.

Note: This article has been automatically translated.

Source: Akhbar Al Khaleej

Dr. Ashraf Keshk, Senior Research Fellow