"The civil aviation sector is resilient to cyber-attacks and remains safe and trusted globally, whilst continuing to innovate and grow."
That is the vision set forth in ICAO's Aviation Cybersecurity Strategy, approved and published under the authority of Secretary General Dr. Fang Liu and reaffirmed in the Muscat Declaration of December 2024. It is an aspirational statement, and a necessary one. It is also, as of this writing, more aspiration than reality.
In 2023, I wrote about the challenge of protecting aviation in an increasingly interconnected world. The core argument was straightforward: as our industry embraces digital transformation across navigation, communication, and operational systems, we must simultaneously build the cybersecurity defenses to match. That argument stands. However, the landscape has shifted in ways that demand a sharper and more urgent response. Connectivity has grown faster than the defenses around it. Artificial intelligence has arrived in both the attacker's and the defender's toolkit, raising the stakes to a level the sector has not yet fully reckoned with. The window of opportunity to close critical gaps is narrowing, and we must act with the same decisiveness that has defined aviation's approach to safety for decades.
This article identifies where we have progressed, where we remain exposed, and what must now be done—with specific attention to the role of AI and the persistent absence of cockpit-level cyber awareness tools for pilots.
The Shifting Landscape
Progress has been made since 2023. ICAO's Aviation Cybersecurity Strategy and its seven-pillar Action Plan established the global framework, covering international cooperation, governance, legislation, cybersecurity policy, information sharing, incident management, and capacity building. EASA and FAA have adopted DO-326A/ED-202A and DO-356A/ED-203A for airworthiness certification, embedding cybersecurity considerations into the design assurance process for new and modified aircraft systems. The FAA Reauthorization Act of 2024 directed systematic screening of software-based aviation systems for unauthorized access and designated a Cybersecurity Lead within the agency. Airlines and airports are formalizing Chief Information Security Officer roles, and industry cybersecurity spending has reached new highs. These are meaningful steps that reflect an industry treating the threat with the seriousness it deserves.
But the attack surface for aviation has expanded dramatically. Modern E-enabled aircraft depend on multiple external data connections—Wi-Fi, SATCOM, cellular, ACARS, and remote diagnostics—that were not part of earlier threat models. Low-Earth Orbit satellite constellations are being deployed for passenger connectivity and, increasingly, for operational data, creating new pathways into systems that carry safety-critical information. The proliferation of Electronic Flight Bags, cloud-based operations platforms, and off-board data pipelines means that an airline's cybersecurity perimeter now extends well beyond the airport and the aircraft themselves. Third-party vendors, integrators, and software suppliers represent a serious attack vector, yet visibility into sub-tier supply chains is limited. Cyber-attacks against aviation and critical infrastructure have surged, with some reports indicating a 600 percent increase in recent years. The joint IFALPA/IFATCA position on cyber threats, updated in February 2026, reaffirmed that most legacy aviation systems were never designed with cybersecurity in mind and remain unencrypted and unauthenticated.
The result is a paradox that we must confront directly. The aviation sector has never been more aware of cyber risk and has never been more exposed to it. Regulation has advanced, but it remains focused on design assurance and organizational governance. The operational front line, where pilots, controllers, and dispatchers must detect and respond to cyber anomalies in real time, has received far less attention.
AI as a Double-Edged Sword
Artificial intelligence is reshaping the cybersecurity equation in aviation, and it cuts both ways. On the defensive side, machine learning models can now detect anomalies in ADS-B data—spoofing, replay attacks, ghost aircraft—with libraries published for real-time integration into monitoring systems. AI-driven Security Operations Centers and endpoint detection tools learn normal behavior patterns across airline networks and flag deviations, compressing response times from hours to seconds. Predictive maintenance analytics using deep learning identify patterns that precede both mechanical and cyber-related failures, improving both safety and operational margins. In each case, AI augments human defenders who simply cannot match the velocity and volume of modern threats on their own.
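The kind of ADS-B anomaly detection described above need not be exotic to be useful. As an illustration only—not any particular published library or operational system—the sketch below applies a simple kinematic plausibility check: if two consecutive position reports from the same aircraft imply a physically impossible ground speed, the track is flagged as potentially spoofed or replayed. Real detectors layer statistical and learned models on top of checks like this; the 700-knot threshold and data structures here are assumptions chosen for clarity.

```python
from dataclasses import dataclass
from math import radians, sin, cos, asin, sqrt

@dataclass
class AdsbReport:
    icao: str    # 24-bit aircraft address, hex string
    lat: float   # latitude, degrees
    lon: float   # longitude, degrees
    t: float     # report time, epoch seconds

def haversine_nm(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points in nautical miles."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3440.065 * asin(sqrt(a))  # mean Earth radius ~ 3440 NM

def flag_impossible_jumps(reports, max_kts=700.0):
    """Flag consecutive reports from the same aircraft whose implied
    ground speed exceeds max_kts -- a crude indicator of spoofed or
    replayed position data. Threshold is illustrative, not certified."""
    anomalies = []
    last = {}
    for r in sorted(reports, key=lambda r: r.t):
        prev = last.get(r.icao)
        if prev is not None and r.t > prev.t:
            dt_h = (r.t - prev.t) / 3600.0
            speed = haversine_nm(prev.lat, prev.lon, r.lat, r.lon) / dt_h
            if speed > max_kts:
                anomalies.append((r.icao, r.t, round(speed)))
        last[r.icao] = r
    return anomalies
```

A track that "teleports" two degrees of latitude in ten seconds implies tens of thousands of knots and is flagged immediately, while an ordinary airliner track passes untouched. Checks of this kind are cheap enough to run in real time on a full surveillance feed, which is why they often form the first filtering layer beneath heavier machine-learning detectors.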

The same capabilities that strengthen defense are available to adversaries, and in some cases are already being exploited. AI-generated spear-phishing and business email compromise campaigns can now target airline, ANSP, and airport staff with unprecedented personalization and success rates. Deepfake audio and video of executives, pilots, or regulators can be used to manipulate personnel into changing configurations, altering operational plans, or revealing credentials. AI-driven malware can adapt to defenses in real time, probe hybrid IT/OT/avionics environments for weak links, and automate lateral movement through interconnected systems. At the signal level, AI can generate fake aircraft transmissions, spoof ADS-B or navigation data, or manipulate operational data feeds to ATC and airline operations centers at a scale and sophistication that manual defenses cannot match.
Equally concerning is the governance dimension of AI in aviation cybersecurity. AI is being deployed for both operations and cyber defense faster than governance, assurance, and certification frameworks can keep pace. The FAA's AI Safety Assurance Roadmap and equivalent EASA and ICAO efforts are steps in the right direction, but they do not yet address AI-based cyber defense and anomaly detection tools used in safety-relevant roles. Requirements for robustness against adversarial machine learning, data poisoning, and model tampering remain nascent. Any AI system whose outputs influence safety-critical decisions must be held to the same rigor we have always applied to every other safety-critical function in aviation. We cannot afford to embed tools we do not fully understand into systems on which lives depend.
The Cockpit-Level Cyber Awareness Gap
Perhaps the most consequential gap in today's aviation cybersecurity posture is the absence of standardized, pilot-facing cyber detection and alerting. Regulators have tightened requirements around design assurance, continuous integrity screening, and organizational information security management systems. Existing flightcrew alerting rules, such as FAA's 14 CFR 25.1322, provide a framework under which cyber alerts could be implemented. Yet there is no current mandate—from FAA, EASA, or ICAO—that aircraft implement cockpit-level cyber-attack warning systems or standardized cyber alerting logic for pilots. Flight crews remain without dedicated tools or required training to recognize and respond to suspected cyber compromise in real time, despite being the professionals on the front line when something goes wrong at altitude.
The technical challenges are substantial and should be acknowledged honestly. Many cyber effects—corrupted sensor data, intermittent bus errors, GPS spoofing—resemble ordinary hardware faults, making reliable distinction non-trivial. Avionics platforms have limited spare computational capacity for real-time anomaly detection. AI and machine learning detectors that show promise in laboratory settings are non-deterministic and difficult to certify under existing airworthiness standards such as DO-178C and DO-330. Any new class of cockpit alert must be integrated into existing procedures without increasing cognitive load or generating nuisance alerts that crews learn to ignore. These are real engineering and certification problems, and they suggest why the industry has concentrated effort on preventive and architectural measures—segmentation, secure development, SATCOM encryption—rather than on a dedicated cyber warning system for the flight deck.
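The difficulty of distinguishing a cyber effect from an ordinary fault can be made concrete. The toy classifier below—a deliberately simplified sketch, not an avionics implementation, with thresholds and labels invented for illustration—looks at the running disagreement between GPS and inertial (INS) position. A slowly growing error is consistent with normal sensor drift; an abrupt step change in the disagreement is more suggestive of GPS spoofing. Even this crude heuristic shows why the problem is subtle: the raw signal is identical in both cases, and only its time profile differs.

```python
def classify_divergence(gps_ins_error_nm, window=5, jump_threshold_nm=0.5):
    """Toy classifier for GPS/INS position disagreement (values in NM,
    oldest first). Thresholds are illustrative, not certified values.

    Returns one of: "nominal", "sensor-drift", "possible-spoofing",
    "insufficient-data".
    """
    if len(gps_ins_error_nm) < 2:
        return "insufficient-data"
    # Per-sample change in disagreement between consecutive readings.
    deltas = [b - a for a, b in zip(gps_ins_error_nm, gps_ins_error_nm[1:])]
    # An abrupt step in the recent window suggests an injected position,
    # since real drift accumulates gradually.
    if max(deltas[-window:]) > jump_threshold_nm:
        return "possible-spoofing"
    # Large but gradual disagreement looks like ordinary sensor drift.
    if gps_ins_error_nm[-1] > 2.0:
        return "sensor-drift"
    return "nominal"
```

In practice a detector like this would feed an alerting function only after extensive tuning against nuisance-alert rates, which is exactly the human-factors and certification work the paragraph above describes.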
However, these barriers do not justify inaction. Aviation has confronted comparable challenges before. TCAS and EGPWS were not simple to develop, certify, or integrate into crew procedures, yet we built them because the safety case was compelling and the consequences of inaction were unacceptable. The same logic applies here. What is needed is a coordinated program of work: standardized requirements for cyber anomaly detection functions that feed flight-deck alerts, clear and harmonized crew procedures and training for suspected cyber compromise scenarios, and integration of AI-driven anomaly detectors into avionics and Electronic Flight Bags with human-factors-validated alerting. The pilot must be treated as a first-class element in cyber risk management, not an afterthought behind organizational processes and back-office security operations centers.
A Call to Action
Regulators—ICAO, FAA, EASA, and national authorities—must now move from high-level strategies to explicit requirements. That means mandating flight-deck cyber monitoring and alerting capabilities with associated crew procedures, establishing governance and assurance standards for AI systems used in safety-relevant and cybersecurity roles, and setting minimum outcome-based SATCOM security baselines for aviation use. AI-related cyber risks must be integrated into existing safety risk management frameworks rather than treated as a separate domain that can be deferred.
Industry—OEMs, airlines, ANSPs, airports, and SATCOM providers—must treat pilots and operational personnel as a primary control in cyber risk management. That requires investment in tooling, training, and procedures at the operational level, not only in back-office security operations. AI should be adopted for defense deliberately, with priority given to interpretable anomaly detection and human-in-the-loop response for any function that touches safety. Cross-sector information sharing must be strengthened, especially on AI-enabled attack patterns and defenses, building on ICAO's cyber information-sharing guidance and the Traffic Light Protocol framework that enables trusted exchange among stakeholders.
Finally, professional associations and labor groups must continue to advocate for pilot-centric cyber safety: standardized training, involvement in rulemaking, and clear operational guidance for cyber compromise scenarios. They must engage in AI governance debates to ensure that human factors and front-line operational realities are reflected in how AI tools are specified and certified for aviation use.
ICAO's vision of a civil aviation sector that is "resilient to cyber-attacks and remains safe and trusted globally" is the right aspiration. Achieving it will require the same shared commitment, rigorous learning from risk, and global coordination that built aviation's unmatched safety record. Cybersecurity and artificial intelligence must be treated with exactly that rigor. The operating assumption must be that attacks will occur. The design principle must be resilience. And the professionals on the front line must be informed, equipped, and empowered to act when systems are compromised. This is not an aspiration. It is an obligation.