AI continues to lead innovation in defense, shaping industries and gaining popularity worldwide. As countries race to adapt their military capabilities, contractors, organizations, and agencies must weigh the risks AI presents while preserving national security. Recently, DeepSeek, a Chinese AI application, raised cybersecurity questions as its popularity grew among U.S. consumers. Because the Chinese government can access and control user data, the app prompted national security concerns over possible data collection, influence operations, and vulnerability to cyber-attacks.
A National Defense article highlighted concerns about AI implementation, noting that although the technology is beneficial, organizations and military units that lack the proper tools and knowledge can fall victim to its pitfalls. AI-enhanced systems present a range of challenges, including, but not limited to, deceptive uses of AI, ethical and moral dilemmas, and over-reliance on automation.
AI Can Be a Deceptive Ally in Threat Detection
AI has been a transformative tool for detecting, assessing, and mitigating adversary threats, but it can also deceive. In combatting cyber-attacks, military units have begun using AI to identify potential risks to national security quickly. As these systems grow more sophisticated, however, they can introduce challenges of their own: missed or misclassified threats, communication breakdowns, and confusion.
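To illustrate the general idea behind AI-assisted threat detection, the sketch below flags anomalous network activity with an unsupervised model. It is a minimal example assuming hypothetical flow features (bytes sent, duration, failed logins); it is not a depiction of any actual defense system, and real deployments are far more elaborate.

```python
# Minimal sketch of AI-assisted threat detection: flag anomalous
# network flows with an unsupervised model. Feature names and data
# are hypothetical placeholders, not any specific defense system.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=0)

# Simulated flow features: [bytes_sent, duration_s, failed_logins]
normal_traffic = rng.normal(loc=[500, 2.0, 0.1],
                            scale=[100, 0.5, 0.3],
                            size=(1000, 3))

# Train only on traffic presumed benign.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_traffic)

# Score new observations; the model returns -1 for suspected anomalies.
new_flows = np.array([
    [520, 2.1, 0.0],      # looks routine
    [50_000, 0.2, 9.0],   # large burst plus failed logins: suspicious
])
for flow, label in zip(new_flows, model.predict(new_flows)):
    verdict = "ANOMALY - escalate to analyst" if label == -1 else "normal"
    print(flow, verdict)
```

The caveat in the section above applies directly here: an adversary who understands what the model treats as "normal" can shape traffic to slip past it, which is why detections are escalated to an analyst rather than acted on automatically.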
Walking the Ethical and Moral Line with Autonomous Systems
AI stands on the frontlines as a means of defending nations against adversaries. North Korea's military is developing and testing AI-enabled suicide drones designed to detonate upon reaching their targets. With nations placing AI at the forefront of warfare, ethical and moral responsibility must be considered. Establishing accountability for faulty AI-driven decisions, and deploying enhancements effectively without endangering civilian lives, can help draw clearer ethical lines around autonomous systems.
Relying Solely on AI Can Leave Units Vulnerable
As more nations modernize, it is critical not to remove human intervention entirely. Enhanced systems are becoming more complex and powerful, meaning they could evolve to operate fully autonomously. In 2020, the Kargu-2 made its battlefield debut as an autonomous drone reportedly capable of engaging targets with no human interaction at all. Relying solely on such systems can erode human oversight in critical areas where regulation is needed, leaving teams vulnerable to attacks, system failures, cyber warfare, and misinformation that can distort vital decisions.
In January 2023, the Department of Defense (DOD) updated Directive 3000.09, which requires that autonomous weapon systems allow commanders and operators to exercise appropriate levels of human judgment over the use of force, keeping the final action with humans. By creating parameters around the use of AI, agencies can reduce the risk of AI-enhanced systems operating outside their control.
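To make the human-in-the-loop pattern concrete, here is a minimal sketch of a gate in which an AI system may only recommend an action and a human operator must approve it before anything executes. Every name in it (the recommendation fields, the approval prompt) is illustrative, assumed for this example rather than drawn from Directive 3000.09 or any real DOD system.

```python
# Minimal sketch of a human-in-the-loop gate: the AI recommends,
# but only a human operator can authorize execution.
# All names are illustrative, not from any real defense system.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    confidence: float
    rationale: str

def ai_recommend(sensor_summary: str) -> Recommendation:
    # Placeholder standing in for a model inference call.
    return Recommendation(
        action="intercept",
        confidence=0.87,
        rationale=f"Pattern match on: {sensor_summary}",
    )

def human_approves(rec: Recommendation) -> bool:
    # Present the decision package and block on operator input.
    print(f"AI recommends '{rec.action}' "
          f"(confidence {rec.confidence:.0%}): {rec.rationale}")
    answer = input("Operator, approve? [y/N] ")
    return answer.strip().lower() == "y"

def execute(action: str) -> None:
    print(f"Executing approved action: {action}")

rec = ai_recommend("unidentified inbound track, sector 7")
if human_approves(rec):   # the human holds the final action
    execute(rec.action)
else:
    print("Action withheld; decision logged for audit.")
```

The design choice the directive implies is visible in the control flow: the default path is refusal, and no code path reaches `execute` without an explicit human "yes."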
The Best Ways to Mitigate Complications Stemming from AI Integration
The use of AI in defense is only beginning as nations worldwide continue to adapt to emerging threats. ISO/IEC 42001, published in 2023, is the first established international standard for AI management systems, and with the growing implementation of automated systems, more regulations may be on the horizon.