Threat of AI bias: US tech startup Sanas has developed software capable of altering a person’s accent. The demo on the company’s website explicitly states that Sanas is tuned for Indian and Filipino speakers of English. Clicking the ‘With Sanas’ slider after recording the voice gives a somewhat robotic, yet distinctly white American accent.
Sanas’s objective is to make call centre workers sound like white Americans, regardless of their country of origin. The company asserts that this transformation will enhance the relatability of call centre workers to American customers, resulting in improved customer service. However, it is evident that Sanas’s software exhibits bias by perpetuating the notion that white American accents are superior to others, notably Indian and Filipino ones.
If voice accent bias appears trivial, consider this: In 2019, a study by the US National Institute of Standards and Technology (NIST) found that 189 facial recognition algorithms, developed by 99 different entities, including Microsoft, Cognitec, and Megvii, misidentified African American and Asian faces 10 to 100 times more often than Caucasian faces. The same study found that Native American faces had the highest false-positive rates, tested against a database of over 18 million photos of approximately 8.5 million individuals.
This example highlights the consequences of AI bias, including false positives, racial profiling, discrimination, exclusion, and even unjust incarceration. These consequences will become increasingly widespread as AI is integrated into various critical business applications, spanning customer service, marketing, sales, product development, operations, risk management, fraud detection, cybersecurity, human resources, finance, healthcare, manufacturing, transportation, and energy.
For instance, Amazon employs AI to personalise product recommendations, optimise its logistics network, and detect fraud. Google harnesses AI to enhance its search engine, create new products and services, and automate tasks. Microsoft leverages AI for developing productivity tools, enhancing its cloud computing platform, and safeguarding customers against cyber threats. Netflix employs AI to suggest movies and TV shows to its subscribers, while Walmart utilises AI to refine its supply chain, manage inventory, and optimise pricing.
Nevertheless, as with any transformative technology, AI is not without its flaws, with one of the most pressing issues being bias, which can result in unjust and discriminatory outcomes.
The nature of AI bias
AI bias is not merely a technical challenge; it also encompasses social and ethical biases, which can yield a range of adverse consequences. Given the increasing role of AI systems in making pivotal decisions about individuals’ lives, such as employment opportunities, loan approvals, and access to healthcare, biased AI systems have the potential to perpetuate unfairness, disproportionately harming people of colour, women, and marginalised groups.
While there is no easy solution to the complex problem of AI bias, organisations can take several steps to mitigate these risks:
Implement debiasing algorithms: Employ algorithms designed to identify and reduce bias within AI systems. These algorithms can adjust the weighting of different features in an AI model to minimise bias against specific groups.
Regularly audit AI systems: Conduct regular bias audits by testing AI systems with diverse inputs and scrutinising output patterns.
Provide training: Offer training programmes for employees to recognise and address bias in AI systems, covering both technical and ethical aspects.
Foster diversity: Involve a diverse range of stakeholders in the development and utilisation of AI systems to ensure fairness and equity in their design and deployment.
Embrace transparency: Be transparent about the use of AI and efforts to mitigate bias by publishing documentation on AI systems and making data available for public review.
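The first step above, adjusting the weighting of training examples so a model does not learn a spurious link between group membership and outcome, can be sketched in a few lines. This is a minimal illustration of the well-known "reweighing" debiasing technique (Kamiran and Calders), not a production implementation; the toy data is invented for the example.

```python
# A minimal sketch of the "reweighing" debiasing technique: each training
# example is given a weight so that group membership and the outcome label
# become statistically independent in the weighted data.
from collections import Counter

def reweigh(groups, labels):
    """Return one weight per example: w = P(group) * P(label) / P(group, label)."""
    n = len(labels)
    p_group = Counter(groups)            # counts per group
    p_label = Counter(labels)            # counts per label
    p_joint = Counter(zip(groups, labels))  # counts per (group, label) pair
    return [
        (p_group[g] / n) * (p_label[y] / n) / (p_joint[(g, y)] / n)
        for g, y in zip(groups, labels)
    ]

# Toy data: group "a" mostly receives the positive label, group "b" mostly not.
groups = ["a", "a", "a", "b", "b", "b"]
labels = [1, 1, 0, 0, 0, 1]
weights = reweigh(groups, labels)
# Over-represented (group, label) pairs get weights below 1;
# under-represented pairs get weights above 1, rebalancing the data.
```

Training a model on these per-example weights (most libraries accept a `sample_weight` argument) reduces the chance that the model simply memorises which group tends to get which outcome.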
Enhancing transparency and explainability
In addition to these measures, organisations must strive to create AI systems that are more transparent and explainable. Researchers are actively developing techniques that make it easier to identify and understand how bias arises inside AI models.
To build public trust in AI, organisations should be forthcoming about their AI use and bias mitigation strategies. This includes publishing documentation on AI systems, providing access to data for public scrutiny, and addressing inquiries from the public regarding AI utilisation. Furthermore, organisations must commit to fairness and equity, fostering a culture where individuals are encouraged to report bias and where everyone feels valued and respected.
Positive steps by Big Tech
Several large technology companies are taking measures to address AI bias:
- Google has developed tools and resources, such as TensorFlow Fairness Indicators and the Google AI Principles, to assist developers and organisations in mitigating AI bias.
- Microsoft has published white papers on AI bias, including Fairness in Machine Learning and The Ethics of Artificial Intelligence.
- Amazon offers tools and services, such as Amazon SageMaker Clarify, to help developers and organisations detect and mitigate AI bias.
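At their core, fairness-audit tools like the ones above compare how a model treats different groups. The kind of group-wise comparison they report can be sketched in plain Python: the selection rate per group, and the "disparate impact" ratio between the lowest and highest rates (with 0.8 a common rule-of-thumb threshold). The data below is invented purely for illustration.

```python
# A plain-Python sketch of the group-wise metrics fairness-audit tools report:
# the positive-prediction rate within each group, and the ratio between the
# lowest and highest rates (disparate impact).

def selection_rates(groups, predictions):
    """Fraction of positive predictions within each group."""
    totals, positives = {}, {}
    for g, p in zip(groups, predictions):
        totals[g] = totals.get(g, 0) + 1
        positives[g] = positives.get(g, 0) + int(p)
    return {g: positives[g] / totals[g] for g in totals}

def disparate_impact(rates):
    """Ratio of the lowest to the highest group selection rate."""
    return min(rates.values()) / max(rates.values())

# Invented example: group "a" is approved 80% of the time, group "b" only 40%.
groups = ["a"] * 10 + ["b"] * 10
preds = [1] * 8 + [0] * 2 + [1] * 4 + [0] * 6
rates = selection_rates(groups, preds)
ratio = disparate_impact(rates)  # 0.4 / 0.8 = 0.5, well below the 0.8 threshold
```

A ratio this far below 0.8 is exactly the kind of output pattern a regular bias audit is meant to surface, prompting investigation of the training data and model before deployment.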
As AI’s prevalence continues to grow, it is essential for all organisations to adopt mindful approaches, ensuring that their AI systems are fair, equitable, transparent, and explainable. AI bias is a huge problem, but it is not insurmountable; it is a persistent threat, but one that can be managed. By taking proactive steps to mitigate these risks, organisations can help ensure that AI benefits all, without discrimination against marginalised communities.
(D Chandrasekhar is a senior fellow at Centre for Innovation in Public Policy, a think tank based in Gurgaon.)