AI regulation in India: As artificial intelligence continues to evolve rapidly, regulators worldwide are seeking ways to both manage and leverage this technology. The Competition Commission of India (CCI) has announced its intention to commission a study on AI’s impact on competition in key user industries, covering competition-related aspects within AI ecosystems. To this end, the CCI has issued a request for proposal to engage an agency or institution to undertake this research.
The CCI recognises that AI, while offering significant benefits for competition, could also confer unfair advantages to certain firms. This study is designed as a knowledge-building exercise to deepen understanding of how AI might affect competition. This initiative follows the regulator’s investigation into certain fintech companies, exploring whether their technology use impacts competition amid ongoing regulatory challenges within the sector.
Concurrently, the government is drafting new legislation aimed at establishing a regulatory framework for major digital economy firms. Insights from the study are expected to inform this new law, deepening understanding of the interplay between AI and competition in India. The study, part of the CCI’s advocacy efforts, is anticipated to serve as a valuable resource for industry stakeholders and policymakers, with technology firms, investors, startups, industry associations, independent developers, and customer firms all contributing to the exchange of insights.
This is not the CCI’s first attempt to institute such studies. The agency has previously conducted research across sectors such as mining, e-commerce, and film distribution, identifying key market trends and anti-competitive practices that subsequently informed policy recommendations.
Evolving AI regulation
Artificial Intelligence is the result of a series of technological advancements that enable machines to demonstrate increased intelligence and mimic human abilities. AI technologies allow machines to perceive their environment, process information in human-like ways, and respond accordingly. AI is becoming an essential factor of production, potentially enhancing traditional factors such as labour and capital, and driving innovation and technological change as part of total factor productivity.
Artificial Intelligence is poised to transform market efficiencies by streamlining operations, reducing costs, and enhancing service delivery across various sectors. By automating complex processes and optimising supply chains, AI technologies can facilitate more competitive markets and improve consumer experiences. However, this efficiency must be balanced with considerations for market fairness and equity, ensuring that AI does not inadvertently create monopolistic conditions or reduce market transparency.
Regulating AI presents unique challenges due to its complex and rapidly evolving nature. Traditional regulatory frameworks may not adequately address the implications of AI technologies, such as ethical considerations, privacy concerns, and the potential for unintended biases. Developing agile and adaptive regulatory systems that can respond to the pace of AI development while ensuring safety, fairness, and transparency is crucial for fostering an environment where AI can thrive without causing harm to society or the economy.
Best practices in AI regulation
A significant 70% of American experts surveyed by the Initiative on Global Markets (IGM) forum said that AI could lead to societal challenges, such as job displacement, political instability, data privacy issues, and new types of crime and warfare. The technology is also being misused in business practices, such as dynamically adjusting prices based on customers’ perceived willingness to pay. These issues are complex and unpredictable, necessitating proactive government measures. Authorities must adjust cyber regulations to tackle these emerging AI-related threats.
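To make the pricing concern concrete, the sketch below illustrates the general idea of personalised dynamic pricing: a seller quotes each customer a price tied to a model’s estimate of that customer’s willingness to pay rather than a uniform list price. All names, numbers, and the pricing rule are illustrative assumptions, not drawn from any specific firm’s system.

```python
# Illustrative (hypothetical) personalised-pricing rule: quote a price
# between a floor and the customer's estimated willingness to pay (WTP),
# capturing part of the consumer surplus. The 90% markup factor and the
# floor are arbitrary assumptions for demonstration.

BASE_PRICE = 100.0  # uniform list price everyone would otherwise pay


def personalised_price(estimated_wtp: float,
                       floor: float = BASE_PRICE * 0.8) -> float:
    """Return 90% of the customer's estimated WTP, never below the floor."""
    return max(floor, 0.9 * estimated_wtp)


# Two customers buying the same product can be quoted different prices:
print(personalised_price(150.0))  # high estimated WTP -> 135.0
print(personalised_price(90.0))   # low estimated WTP  -> 81.0
```

The point of the sketch is that the same good is sold at different prices depending on an inference about the buyer, which is the practice competition regulators flag as potentially unfair or opaque.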
The CCI is seeking to understand the current and evolving regulatory and legal frameworks governing AI systems and applications in India and other major jurisdictions. Notably, the US and the European Union are leading regulatory efforts. The US National Institute of Standards and Technology’s AI Risk Management Framework offers a standardised approach to mitigating AI risks, while the EU’s proposed AI Act focuses on technical standards, certifications, and conformity assessments to ensure responsible AI development. The US and EU are also collaborating through the Trade and Technology Council to standardise AI terminology and coordinate AI standard development.
Although India currently lacks a specific policy framework to govern AI, there are growing concerns about its potential risks and harms. The Ministry of Electronics and Information Technology has issued advisories to intermediaries and AI platforms to manage AI risks and ensure that biases in AI models do not adversely impact Indian users.
Since 2018, discussions around AI have been part of governmental agendas, starting with the NITI Aayog’s National Strategy for Artificial Intelligence, which emphasises data management and the importance of sector-specific data protection frameworks. According to a PwC report, China could see a 26% increase in GDP from AI by 2030. It is crucial for India to prioritise education and training in AI technologies to harness the economic potential of AI and capitalise on its demographic dividend.