Science
Chinese researchers are creating an AI-powered air combat system that can explain to a human how it made its decisions. Photo: EPA-EFE/Xinhua

Breaking the black box: Chinese scientists solve a ‘big tough challenge’ to the US Air Force’s AI project

  • Researchers are creating a smart air combat system that can explain the decisions it makes during battles to humans
  • It overcomes the ‘black box’ issue that has been a hurdle for both US and Chinese militaries amid an AI arms race
In Xian, an ancient city in northwestern China that was the seat of some of the country’s most powerful dynasties, a new power is emerging as scientists create a form of artificial intelligence (AI) for the military that has not been seen before.

The smart air combat system can explain the decisions it makes during intense battles, and share the motives behind these moves with humans.

This technological breakthrough means China has overcome a hurdle that has baffled militaries for years. It also signifies growing intensity in the AI arms race between Washington and Beijing.

The United States began testing the application of AI in air combat earlier than China. While China was still engaging in real-sky combat between human-controlled and AI-controlled drones, US test pilots had already taken their dogfighting AI to the skies for trials.

It is unclear whether America’s new AI-powered F-16 fighter jet has overcome the same hurdle that China says it has, but the groundbreaking work by the Chinese scientists is certain to change the face of air battles in the future.

Prevailing AI technologies, such as deep reinforcement learning and large language models, operate like a black box: tasks enter one end and results emerge from the other, while humans are left in the dark about the inner workings.

But air combat is a matter of life and death. In the near future, pilots will need to work closely with AI, sometimes even entrusting their lives to these intelligent machines. The “black box” issue not only undermines people’s trust in machines but also impedes deep communication between them.

Developed by a team led by Zhang Dong, an associate professor with the school of aeronautics at Northwestern Polytechnical University, the new AI combat system can explain each instruction it sends to the flight controller using words, data and even charts.

This AI can also articulate the significance of each directive regarding the current combat situation, the specific flight manoeuvres involved and the tactical intentions behind them.


Zhang’s team found that this technology opens a new window for human pilots to interact with AI.

For instance, during a review session after a simulated skirmish, a seasoned pilot can discern the clues that led to failure in the AI’s self-presentation. An efficient feedback mechanism then allows the AI to comprehend the suggestions of human teammates and sidestep similar pitfalls in subsequent battles.

Zhang’s team found that this kind of AI, which can communicate with humans “from the heart,” can achieve a nearly 100 per cent win rate with only about 20,000 rounds of combat training. In contrast, the conventional “black box” AI can only achieve a 90 per cent win rate after 50,000 rounds and struggles to improve further.

Currently, Zhang’s team has only applied the technology to ground simulators, but future applications would be “extended to more realistic air combat environments,” they wrote in a peer-reviewed paper published in the Chinese academic journal, Acta Aeronautica et Astronautica Sinica, on April 12.

In the US, the “black box” issue has previously been cited as a problem for pilots.

America’s dogfighting trials are being run jointly by the air force and the Defence Advanced Research Projects Agency (DARPA). A senior DARPA officer has acknowledged that not all air force pilots welcome the idea because of the “black box” issue.

“The big tough challenge that I’m trying to address in my efforts here at DARPA is how to build and maintain the custody of trust in these systems that are traditionally thought of as black boxes that are unexplainable,” Colonel Dan Javorsek, a programme manager at DARPA’s Strategic Technology Office, said in an interview with the National Defence Magazine in 2021.

DARPA has adopted two strategies to assist pilots in overcoming their “black box” apprehension. One approach allows AI to initially handle simpler, lower-level tasks, such as automatically selecting the most suitable weapon based on the locked target’s attributes, enabling pilots to launch with a single press of a button.

The other method involves high-ranking officers personally boarding AI-driven fighter jets to demonstrate their confidence and resolve.

China’s J-20 stealth fighter has a two-seat variant, with one pilot dedicated to interacting with AI-controlled unmanned wingmen. Photo: China Daily via Reuters
Earlier this month, Air Force Secretary Frank Kendall took an hour-long flight on an F-16 controlled by artificial intelligence at Edwards Air Force Base. On landing, he told the Associated Press that he had seen enough during his flight to trust this “still-learning” AI with the ability to decide whether to launch weapons in war.

“It’s a security risk not to have it. At this point, we have to have it,” Kendall told AP.

The security risk is China. The US Air Force told AP that AI offers it a chance to prevail against the increasingly formidable Chinese air force in the future. At the time, the report said that while China had AI, there was no indication it had found a way to conduct tests beyond simulators.

But according to the paper by Zhang’s team, the Chinese military enforces rigorous safety and reliability assessments for AI, insisting that AI be integrated into fighter jets only after cracking the “black box” enigma.

Deep reinforcement learning models often churn out decisions that are enigmatic to humans yet exhibit superior combat effectiveness in real-world applications. It is difficult for humans to comprehend, or reconstruct from prior experience, the framework behind those decisions.

“It poses a trust issue with AI’s decisions,” Zhang and his colleagues wrote.

“Decoding the ‘black box model’ to enable humans to discern the strategic decision-making process, grasp the drone’s manoeuvre intentions, and place trust in the manoeuvre decisions, stands as the pivot of AI technology’s engineering application in air combat. This also underscores the prime objective of our research advancement,” they said.

Zhang’s team showed the prowess of this AI through multiple examples in their study. For instance, in a losing scenario, the AI initially intended to climb and execute a cobra manoeuvre, followed by a sequence of combat turns, aileron rolls and loops to engage the enemy aircraft, culminating in evasion manoeuvres like diving and levelling out.


But a seasoned pilot could swiftly discern the flaws in this radical manoeuvre combination. The AI’s consecutive climbs, combat turns, aileron rolls and dives led to the drone’s speed plummeting during the engagement, eventually failing to shake off the enemy.

And here’s the human instruction to the AI, as written in the paper: “The reduced speed resulting from consecutive radical manoeuvres is the culprit behind this air battle loss, and such decisions must be avoided in the future.”

In another round, where a human pilot would typically rely on methods such as side-winding attacks to find an effective position to destroy the enemy aircraft, the AI instead used large manoeuvres to lure the enemy, entered the side-winding phase early and flew level in the final stage to mislead its opponent, before delivering a critical winning strike with sudden large manoeuvres.

After analysing the AI’s intentions, researchers uncovered a subtle manoeuvre that proved pivotal during the deadlock.

The AI “adopted a levelling out and circling tactic, preserving its speed and altitude while luring the enemy into executing radical direction changes, depleting their residual kinetic energy and paving the way for subsequent loop manoeuvres to deliver a counter-attack,” Zhang’s team wrote.

Northwestern Polytechnical University is one of China’s most important military technology research bases. The US government has imposed strict sanctions on it and made repeated attempts to infiltrate its network system, eliciting strong protests from the Chinese government.

But it seems the US sanctions have had no obvious impact on exchanges between Zhang’s team and their international counterparts. They have leveraged novel algorithms shared by American scientists at global conferences, and have also disclosed their own innovative algorithms and frameworks in their paper.

Some military experts believe the Chinese military has a stronger interest than its US counterparts in establishing guanxi – connection – between AI and human fighters.

For instance, China’s stealth fighter, the J-20, boasts a two-seat variant, with one pilot dedicated to interacting with AI-controlled unmanned wingmen, a capability currently absent in the US F-22 and F-35 fighters.

But a Beijing-based physicist who requested not to be named due to the sensitivity of the issue said that the new technology could blur the line between humans and machines.

“It could open Pandora’s box,” he said.
