Breaking the black box: Chinese scientists solve a ‘big tough challenge’ to the US Air Force’s AI project
- Researchers are creating a smart air combat system that can explain the decisions it makes during battles to humans
- It overcomes the ‘black box’ issue that has been a hurdle for both US and Chinese militaries amid an AI arms race
The smart air combat system can explain the decisions it makes during intense battles, and share the motives behind these moves with humans.
The United States began testing the application of AI in air combat earlier than China. While China was still engaging in real-sky combat between human-controlled and AI-controlled drones, US test pilots had already taken their dogfighting AI to the skies for trials.
Prevailing AI technologies, such as deep reinforcement learning and large language models, operate like a black box: tasks enter one end and results emerge from the other, while humans are left in the dark about the inner workings.
But air combat is a matter of life and death. In the near future, pilots will need to work closely with AI, sometimes even entrusting their lives to these intelligent machines. The “black box” issue not only undermines people’s trust in machines but also impedes deep communication between them.
Developed by a team led by Zhang Dong, an associate professor with the school of aeronautics at Northwestern Polytechnical University, the new AI combat system can explain each instruction it sends to the flight controller using words, data and even charts.
This AI can also explain each directive’s significance for the current combat situation, the specific flight manoeuvres involved and the tactical intentions behind them.
Zhang’s team found that this technology opens a new window for human pilots to interact with AI.
This kind of AI, which can communicate with humans “from the heart”, achieved a nearly 100 per cent win rate after only about 20,000 rounds of combat training. In contrast, a conventional “black box” AI reached only a 90 per cent win rate after 50,000 rounds and struggled to improve further.
So far, Zhang’s team has applied the technology only to ground simulators, but it would in future be “extended to more realistic air combat environments”, they wrote in a peer-reviewed paper published in the Chinese academic journal Acta Aeronautica et Astronautica Sinica on April 12.
In the US, the “black box” issue has long been cited as a problem for pilots.
“The big tough challenge that I’m trying to address in my efforts here at DARPA is how to build and maintain the custody of trust in these systems that are traditionally thought of as black boxes that are unexplainable,” Colonel Dan Javorsek, a programme manager at DARPA’s Strategic Technology Office, said in an interview with National Defense Magazine in 2021.
DARPA has adopted two strategies to assist pilots in overcoming their “black box” apprehension. One approach allows AI to initially handle simpler, lower-level tasks, such as automatically selecting the most suitable weapon based on the locked target’s attributes, enabling pilots to launch with a single press of a button.
The other method involves high-ranking officers personally boarding AI-driven fighter jets to demonstrate their confidence and resolve.
“It’s a security risk not to have it. At this point, we have to have it,” US Air Force Secretary Frank Kendall told AP after flying in an AI-controlled fighter jet.
But according to the paper by Zhang’s team, the Chinese military enforces rigorous safety and reliability assessments for AI, insisting that AI be integrated into fighter jets only after cracking the “black box” enigma.
Deep reinforcement learning models often produce decisions that are opaque to humans yet show superior combat effectiveness in practice. Humans struggle to comprehend or reconstruct this decision-making process from prior experience.
“It poses a trust issue with AI’s decisions,” Zhang and his colleagues wrote.
“Decoding the ‘black box model’ to enable humans to discern the strategic decision-making process, grasp the drone’s manoeuvre intentions, and place trust in the manoeuvre decisions, stands as the pivot of AI technology’s engineering application in air combat. This also underscores the prime objective of our research advancement,” they said.
Zhang’s team showed the prowess of this AI through multiple examples in their study. For instance, in a losing scenario, the AI initially intended to climb and execute a cobra manoeuvre, followed by a sequence of combat turns, aileron rolls and loops to engage the enemy aircraft, culminating in evasion manoeuvres like diving and levelling out.
But a seasoned pilot could swiftly discern the flaws in this radical manoeuvre combination. The AI’s consecutive climbs, combat turns, aileron rolls and dives led to the drone’s speed plummeting during the engagement, eventually failing to shake off the enemy.
The human instruction given to the AI in response, as written in the paper, read: “The reduced speed resulting from consecutive radical manoeuvres is the culprit behind this air battle loss, and such decisions must be avoided in the future.”
In another round, where a human pilot would typically rely on methods such as side-winding attacks to find an effective position to destroy the enemy aircraft, the AI took a different path. It used large manoeuvres to lure the enemy, entered the side-winding phase early and flew level in the final stage to mislead its opponent, before delivering a critical winning strike with sudden large manoeuvres.
After analysing the AI’s intentions, researchers uncovered a subtle manoeuvre that proved pivotal during the deadlock.
The AI “adopted a levelling out and circling tactic, preserving its speed and altitude while luring the enemy into executing radical direction changes, depleting their residual kinetic energy and paving the way for subsequent loop manoeuvres to deliver a counter-attack,” Zhang’s team wrote.
But US sanctions appear to have had no obvious impact on exchanges between Zhang’s team and their international counterparts. The team has drawn on novel algorithms shared by American scientists at global conferences, and in turn disclosed its own algorithms and frameworks in the paper.
Some military experts believe the Chinese military has a stronger interest in establishing guanxi – connection – between AI and human fighters than its US counterpart does.
For instance, China’s stealth fighter, the J-20, boasts a two-seat variant, with one pilot dedicated to interacting with AI-controlled unmanned wingmen, a capability currently absent in the US F-22 and F-35 fighters.
But a Beijing-based physicist who requested not to be named due to the sensitivity of the issue said that the new technology could blur the line between humans and machines.
“It could open Pandora’s box,” he said.