AI in the cockpit: Efficiency soars in aviation, but can law keep up?

AI is transforming the aviation industry, but ethical and legal concerns over privacy and bias remain.

Aviation is a highly regulated industry, and AI has a significant impact on its operations. Artificial intelligence enhances operational technology, optimises flight hold and cancellation decisions, and helps retrieve and match data across multiple antiquated IT systems. According to market.us, the artificial intelligence market in aviation is set to grow from $1.6 billion in 2023 to $40.4 billion by 2033. However, like any new technology, artificial intelligence brings its own set of legal considerations, ranging from security and safety to bias, privacy, and intellectual property.

Consent for data collection, automated profiling, and legitimate interest are ongoing privacy issues with artificial intelligence. It is essential that artificial intelligence systems relying on personal data are transparent and accountable, so that they do not produce unfair or biased decisions. Comprehensive data protection laws constrain how artificial intelligence can use personal data, and the European Union’s General Data Protection Regulation (EU GDPR) requires organisations engaged in automated data processing to take sufficient measures to ensure fairness. Artificial intelligence systems should align with human values. Amazon, for example, had to shut down an artificial intelligence tool used in its hiring process because it discriminated against women.

Challenges facing AI

In most jurisdictions, work created by artificial intelligence cannot be protected under existing copyright laws, and artificial intelligence cannot be considered the owner of intellectual property, as owners must be natural persons or legal entities. Lawsuits against artificial intelligence companies, such as those brought by writers and artists for copyright infringement, highlight the legal challenges. Nonfiction writers Nicholas Basbanes and Nicholas Gage sued OpenAI and Microsoft, claiming their content was stolen.

While the ‘legitimate use’ of generative artificial intelligence hangs in legal limbo, creators worry about their work or style being used to train artificial intelligence generators without permission or compensation. Getty Images, for instance, filed a lawsuit against the creators of Stable Diffusion, alleging improper use of its photos in violation of copyright and trademark rights. The UK, by contrast, is one of the few countries offering copyright protection for works generated solely by computers.

A key decision in the AI industry, including aviation, is whether to build artificial intelligence in-house or partner with third parties. Building artificial intelligence in-house requires access to talent, exclusivity, and first-offer rights for new technology. Using third-party artificial intelligence tools raises considerations such as third-party validation, access to distribution channels, compliance with licences and guidelines, potential claims or violations of intellectual property rights, and the type of data used to train the artificial intelligence model along with its associated risks.

Global Comparisons

There is a geopolitical race among the EU, US, and Asian countries to establish artificial intelligence laws. President Biden issued an executive order on artificial intelligence on October 30, 2023, signalling the US’s preemptive steps to harness artificial intelligence’s potential while managing its risks. The EU AI Act, the world’s first comprehensive artificial intelligence law, was passed on March 13, 2024. In the US, the private sector is regulated largely through state laws, while India lacks specific laws directly addressing generative artificial intelligence.

In India, the Ministry of Electronics and Information Technology (MeitY) issued an advisory on December 26, 2023, directing compliance with existing IT rules to address misinformation concerns arising from deepfakes. Various provisions under the Information Technology Act, 2000, offer civil and criminal remedies for deepfake crimes involving privacy violations.

Copyright infringement exceptions exist in all these jurisdictions. US laws allow fair use of copyright-protected works, the EU permits text and data mining, and fair dealing is an exception in the UK and India. The EU AI Act classifies artificial intelligence systems by risk level, creating harmonised rules for artificial intelligence in the EU market. High-risk artificial intelligence includes robot-assisted surgery and critical infrastructure, while minimal risk includes artificial intelligence-enabled video games.

To address fears of artificial intelligence misuse, regulatory measures are already in place: the US Equal Employment Opportunity Commission (EEOC) and the Federal Trade Commission enforce anti-discrimination laws against harmful uses of artificial intelligence. The National Institute of Standards and Technology (NIST) has developed a voluntary artificial intelligence risk management framework for designing and managing trustworthy artificial intelligence. NIST’s role at the intersection of government, science, technology, and commerce is crucial as big data and artificial intelligence advance.

Ensuring ethical artificial intelligence development is imperative for the aviation industry. This involves establishing robust guidelines and frameworks to govern artificial intelligence usage, ensuring that artificial intelligence systems are transparent, fair, and accountable. Collaboration between industry stakeholders, regulators, and ethicists is crucial to address ethical concerns and develop artificial intelligence technologies that align with societal values. The focus should be on creating artificial intelligence systems that respect human rights, avoid biases, and promote inclusivity. This ethical approach will build trust in artificial intelligence technologies and foster their responsible adoption across the aviation sector.

If US artificial intelligence regulation continues at the local and state levels, the potential impact of federal data security and privacy legislation on the aviation industry will need careful assessment. Solving privacy challenges is essential for the long-term success of artificial intelligence. Balancing technological innovation with privacy considerations will promote the development of socially responsible artificial intelligence, ultimately creating public value.

(Anubha Agarwal is a Delhi-based lawyer with over 15 years of experience as a legal professional.)