Image: Heartbleed Patch Needed, Creative Commons, Public Domain
First, some quick facts about Heartbleed: Heartbleed is not a malicious attack. It is a programming flaw (a bug) that has left the encryption protecting some of our online transactions vulnerable to exploitation since 2011.
That bug is a security gap in the Heartbeat extension used by OpenSSL (the widely deployed open-source implementation of the Secure Sockets Layer and Transport Layer Security protocols) to manage the encrypted transfer of sensitive information. The flaw may already have been exploited countless times. We can’t know; exploitation leaves no trace.
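The flaw itself is simple to describe: the Heartbeat handler trusted the length field in an incoming request and echoed back that many bytes, even when the actual payload was shorter, leaking adjacent memory. A minimal Python sketch (illustrative only; the real bug lives in OpenSSL’s C code, and these names are hypothetical) shows the missing bounds check and the fix:

```python
# Illustrative sketch of the Heartbleed flaw -- NOT OpenSSL's actual code.
# A heartbeat request carries a payload plus a claimed payload length;
# the vulnerable handler trusted the claimed length without checking it.

SERVER_MEMORY = bytearray(b"secret-session-key-0123456789")  # adjacent private data

def heartbeat_vulnerable(payload: bytes, claimed_len: int) -> bytes:
    # BUG: echoes claimed_len bytes even if the payload is shorter,
    # returning whatever happens to sit next to it in memory.
    buffer = bytes(payload) + bytes(SERVER_MEMORY)
    return buffer[:claimed_len]

def heartbeat_patched(payload: bytes, claimed_len: int) -> bytes:
    # FIX: reject any request whose claimed length exceeds the real payload.
    if claimed_len > len(payload):
        return b""  # silently drop the malformed request
    return payload[:claimed_len]

leak = heartbeat_vulnerable(b"hi", 20)  # 2 real bytes plus 18 leaked bytes
safe = heartbeat_patched(b"hi", 20)     # malformed request dropped
```

The point of the sketch is how small the defect is: a single missing comparison, invisible to a casual reader, exposed private memory to anyone who asked for it.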
The sensitive nature of both transport systems, one the physical transport of people and the other a cyber-transport of identity information, led me to think about how one sector can learn from the other, and how both can benefit from the lessons of recent events.
It bears asking: Have those who form the infrastructure of aviation (its supply chain) secured their information systems against Heartbleed? What regulatory response is in place to check for potential leaks of vital technical and operational information? The answers to those questions would show how prepared we are as an industry to continue advancing our technology. Because these questions need answers, I have reached out to the FAA for comment and am awaiting a reply.
A risk unseen is the greatest risk of all.
Aviation learned its hardest lessons on hidden risks in 2001. The industry, always focused on safety and security, still had gaping holes in its processes which turned into an international nightmare scenario. The events of 9/11 not only changed aviation, but world history.
Since the 9/11 attacks, aviation has implemented a number of measures to prevent the recurrence of such a catastrophic event. Some have been more popular than others, but all have had a degree of effectiveness.
Aviation must be ever-vigilant, but it is no longer naïve.
Tech and IT have not been naïve so much as deficient in self-governance.
System risks have been apparent from the beginning of tech development, and the damage from systems exploitation has been considerable. Yet we want to feel safe, we pretend to feel safe, and we smile as we drink the Kool-Aid.
In 2007, sustained cyber attacks crippled the banks, media, and government services of Estonia. Yet national and international governance of cybersecurity remains reactive and ineffectively collaborative.
Industry best practices and coding standardization for IT Frameworks remain voluntary.
To quote Executive Order 13636, Improving Critical Infrastructure Cybersecurity, issued by the President of the United States in 2013:
“The Cybersecurity Framework shall incorporate voluntary consensus standards and industry best practices to the fullest extent possible. The Cybersecurity Framework shall be consistent with voluntary international standards when such international standards will advance the objectives of this order…” [emphasis is mine].
A central Crisis Management Team is essential to contain and recover from critical events.
The NTSB does a good job in this role for the US, and even internationally when asked for help. Equivalents to the NTSB in other nations have also proven effective.
The delays and some of the confusion surrounding the search for MH370 highlight that aviation can do better on centralization and coordination of efforts, as well as on information sharing and communication.
Until we find debris or the elusive black boxes, we cannot begin to calculate the timeline for identifying systems flaws and drafting corrective actions.
Further, aviation has not capitalized on technology available to prevent circumstances like MH370. Funding for NextGen systems is deficient, and a rethink on established flight tracking practices is required.
That said, Heartbleed has highlighted how nascent the crisis management systems for information technology are. Whereas responses to aviation accidents or critical events are, for the most part, centralized, Heartbleed has proven that IT has no platform for centralized communications. There is no single authoritative source for information and updates on risks or on the reliability of fixes. User confusion is considerable. In a few words, no one knows how bad this is, and no body exists to define the risk fully or advise users on the best defense.
The sound advice from various experts in this field is for users to change all passwords.
However, if strong fixes are not carried out across the board, then password changes may not be enough defense. Further, we must question what other similar bugs might yet reside, unidentified, in other system-critical code.
A control body in place to evaluate processes, identify best practices, and agree on universal checks before deployment might well have prevented the Heartbleed bug.
Were such a system in place for code development, especially code which deals with cybersecurity, someone other than Codenomicon and Neel Mehta of Google Security might have identified the bug back in 2011.
Admittedly, code is massive, intricate, and complex. It is also intellectual property. The need for quick deployment would make excessive regulation of this sector a nightmare.
However, IT might learn from aviation’s destructive testing practices. A centralized, regulated unit of experts, assigned to break code to ensure its stability and security, could prove beneficial.
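As a sketch of what such destructive testing might look like in software, a simple fuzzing harness hammers a parser with random, often malformed inputs and verifies it never returns data it was not given. (This is a minimal illustration under my own assumptions; the parser and its one-byte length prefix are hypothetical.)

```python
import random

def parse_record(data: bytes) -> bytes:
    # Hypothetical parser under test: the first byte declares the payload length.
    if not data:
        return b""
    declared_len = data[0]
    payload = data[1:]
    # Defensive bounds check -- exactly the kind of check Heartbleed lacked.
    if declared_len > len(payload):
        raise ValueError("declared length exceeds actual payload")
    return payload[:declared_len]

def fuzz(rounds: int = 10_000) -> None:
    # Destructive testing: generate hostile length/payload combinations
    # and assert the parser either rejects them or returns only real data.
    rng = random.Random(42)  # fixed seed so failures are reproducible
    for _ in range(rounds):
        payload = bytes(rng.randrange(256) for _ in range(rng.randrange(8)))
        declared = rng.randrange(256)
        try:
            out = parse_record(bytes([declared]) + payload)
        except ValueError:
            continue  # rejecting malformed input is correct behavior
        assert out == payload[:declared], "parser leaked or corrupted data"

fuzz()
```

The analogy to aviation’s practice is direct: just as airframes are stressed to failure before certification, code that guards sensitive data could be routinely attacked by a dedicated team before deployment.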
Aviation infrastructure is no less massive, with its compendium of parts and technology integration, and it too consists of much intellectual property, all protected. However, aviation’s system of controls is often far too sluggish a process for technology to adopt anything similar.
Aviation could learn from technology to streamline testing and approval processes.
Heartbleed might help aviation better appreciate its checks-and-balances processes, but it also supports the argument some make that retaining some analogue control of aircraft is beneficial. Certainly, for pilot training we’ve learned that’s true. Asiana proved that.
But the advancement of technology is here to stay, and can help aviation in important ways. Preventing a recurrence of a situation like the loss of MH370 is one important example.
To do this, software to manage the communications channels for the data tracked on aircraft will be necessary. As we consider these infrastructures, we must also consider their vulnerabilities and plan against them. This is not to say that we should write off advanced technology as too dangerous for aviation. Quite the opposite. We must simply ensure that we set a high standard of security for the technology we implement.
As aviation adopts new systems, it must learn from the Heartbleed event and apply the same critical checks to software as it does to hardware. A bug in an aircraft’s control systems could lead to catastrophic situations for aviation. Code vulnerability needs strong consideration in the evaluation of avionics, communications-channel management software, and the very traffic management systems we so desperately need for NextGen infrastructure.
Heartbleed has been to IT what MH370 has been to aviation. No one wants a recurrence of either event. This combination of crises presents a limited window of opportunity for improvement in both sectors. We shouldn’t close that window without letting in some fresh air.
Feature Image: Binary One Null Crash Administrator Attack, Geralt, Public Domain CC0