
The era of AI and legislative measures to protect consumers’ interests  

Artificial intelligence, or AI, is technology that enables computers and machines to simulate human intelligence and problem-solving capabilities (IBM). As AI takes the world by storm, the theme to mark this year’s World Consumer Rights Day was ‘Fair and Responsible AI for Consumers’.

When ChatGPT first came into our lives, we knew they would never be the same again, but how it would change the information landscape was an open question. What started as a trickle two years ago has now turned into a torrent: a proliferation of generative AI (Artificial Intelligence) technologies puts realistic photos, audio, and video just a few clicks away. While ChatGPT took the world by storm and opened our eyes to the magic of AI, artificial intelligence existed long before the launch of these marvels. It all started with our reliance on machines, and then with those machines’ reliance on AI. Thanks to AI, we now have smart lighting, smart security cameras, smart televisions… the list is long.

Broadly, AI can be divided into three categories based on capabilities:

  1. Narrow AI: Designed to complete very specific tasks (the kind used by Siri, Alexa, and even ChatGPT).
  2. General AI: Designed to learn, reason, and think across domains in a similar way to humans; this remains a research goal rather than a deployed technology.
  3. Super AI: A hypothetical form of AI capable of exceeding human knowledge and abilities.

Generative AI, a subset of narrow AI, has become the talk of the town for its ability to generate high-quality text, images, and other content from simple prompts. Several of these tools are free at some level, allowing anyone with an internet connection to generate a range of content with very little effort, content that can then be used on education platforms, websites, social media platforms, and so on. Much of this will strike the casual observer as a purely positive addition to how content is created.

In fact, the emergence and widespread adoption of generative AI technology has the potential to automate laborious and time-consuming tasks that were once carried out manually. For us as consumers, it also has the potential to cut costs significantly. For example, legal information available on the internet has become easier to sift through and understand, thanks to AI, and all of this at no cost to the consumer. But as with all good things, AI also comes with its own set of concerns, some of them quite serious.

Challenges of generative artificial intelligence:

  1. Bias and inaccuracies: AI learning (or machine learning) works by training a network on large amounts of data, and the data available at the training stage determines the content the machine generates. A network trained on biased data will therefore generate biased, racist, or sexist content. Generative AI can also be used to create authentic-sounding fake news; when believed and shared by real humans, this can amount to propaganda, with significant repercussions. Generative AI systems are also being adopted by various institutions and companies to perform public-facing tasks, such as providing financial advice. These can have potentially serious consequences for consumers if the advice given by the machine is misleading.
  2. Security concerns: While AI has immediate applications in enhancing cybersecurity, its ability to manipulate and shape human action and emotion gives it equal potential to do the opposite. Deepfake videos, voices, and verbal content can be used to phish and scam, adding a new layer of threat to the already difficult-to-manage area of cybersecurity.
  3. Monopoly: While using AI requires no qualification, and is therefore universal, building and promoting AI systems remains in the hands of an elite few. As AI increasingly determines what we read, buy, and believe, there are serious concerns that this concentrates power over the masses in the hands of a few companies and individuals.
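The first of these challenges, bias inherited from training data, can be illustrated with a deliberately tiny sketch. The "model" below is not a real AI system, just a next-word frequency counter, and the corpus is invented for demonstration; but it shows the core mechanism: a model trained on skewed data reproduces the skew in its output.

```python
# Toy illustration (not a real AI system): a naive next-word model
# trained on a skewed corpus simply reproduces that skew in its output.
# The corpus and examples here are invented for demonstration.
from collections import Counter, defaultdict

def train(corpus):
    """Count which word follows which across the training sentences."""
    follows = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for prev, nxt in zip(words, words[1:]):
            follows[prev][nxt] += 1
    return follows

def most_likely_next(model, word):
    """Return the continuation seen most often in training."""
    return model[word.lower()].most_common(1)[0][0]

# Deliberately skewed data: "said" is followed by "he" twice, "she" once.
skewed_corpus = [
    "the doctor said he would help",
    "the doctor said he was busy",
    "the doctor said she would help",
]

model = train(skewed_corpus)
print(most_likely_next(model, "said"))  # "he" — the skew in the data, echoed back
```

Real generative models are vastly more sophisticated, but the principle scales: whatever imbalance exists in the training data tends to surface in the generated content.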

Legislative Measures: 

While India currently lacks a dedicated regulation governing AI, recent years have seen a range of advisories, strategies, and policy frameworks meant to offer legal oversight. The government is currently working on a draft regulatory framework for artificial intelligence, which it aims to release in June or July 2024.

NITI Aayog, the apex public policy think-tank of the Government of India, launched the first national AI strategy in 2018. The strategy focused on identifying critical sectors, including healthcare, education, agriculture, smart cities, and transportation, and the role of AI within them. In 2021, NITI Aayog drafted the Principles for Responsible AI as an addition to the original framework. As reported, the draft principles examine all aspects of AI implementation and categorize them into system and societal considerations. System considerations primarily address principles related to decision-making, fair inclusion of beneficiaries, and accountability, while societal considerations focus on automation’s impact on job creation and employment.

The Digital Personal Data Protection Act, 2023, which received presidential assent in August 2023, covers the use of personal data as input for AI, thus addressing some of the privacy issues raised by the use of AI.

Deepfakes, a problem unique to AI, are currently not addressed by any specific regulation in India. However, a consumer who falls prey to a deepfake scam can file a case under the Information Technology Act, 2000: Section 66E (deepfake crimes involving privacy violations), Section 66D (malicious use of communication devices or computer resources), and Sections 67, 67A, and 67B (publishing or transmitting obscene deepfakes).

The Bureau of Indian Standards, the national standards body of India, has also established a 30-member committee on AI, comprising various stakeholders including Amazon India, Google India Private Limited, IBM India Limited, and Microsoft Corporation (India). The committee is drafting Indian standards for AI and is on the verge of finalising them on the basis of the consultative paper published by the Telecom Regulatory Authority of India (TRAI) in August 2022. The paper compiles information on AI and documents the risks and challenges involved in its use.

At a global level, the European Union has passed the world’s first comprehensive artificial intelligence law, the EU AI Act, whose provisions come into effect in phases from 2025. The Act aims to regulate artificial intelligence (AI) by analysing the level of risk posed by its use, sorting applications into four categories: unacceptable, high, limited, and low risk. The goal is to provide clear requirements for the use of AI while minimising the administrative and financial burden on businesses. Practices involving unacceptable levels of risk are prohibited outright, reducing the time spent analysing individual systems. The Act also sets clear guidelines and obligations for systems likely to be high-risk applications, requiring a conformity assessment before they can be deployed. It further specifies that a system must remain trustworthy even after being placed on the market, requiring ongoing quality and risk management by its providers.
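The Act’s tiered structure can be pictured as a simple lookup from use case to obligation. The sketch below is a toy illustration only: the tier names follow the Act’s four categories, but the example use cases and their assignments are the author’s loose approximations, not legal classifications — the Act defines these tiers legally, not programmatically.

```python
# Toy illustration only: the EU AI Act's four risk tiers modeled as an
# enum, with an invented rule-of-thumb mapping for a few example uses.
# The actual Act defines these categories legally, not programmatically.
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited outright"
    HIGH = "conformity assessment required before deployment"
    LIMITED = "transparency obligations apply"
    LOW = "minimal obligations"

# Hypothetical example assignments, for illustration only.
EXAMPLE_USES = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "ai-assisted credit scoring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filtering": RiskTier.LOW,
}

def obligations(use_case):
    """Look up the toy tier for a use case (defaulting to LOW)."""
    tier = EXAMPLE_USES.get(use_case.lower(), RiskTier.LOW)
    return f"{tier.name}: {tier.value}"

print(obligations("AI-assisted credit scoring"))
# HIGH: conformity assessment required before deployment
```

The design point the Act makes, mirrored here, is that blanket rules per tier replace case-by-case analysis: once a use falls in a tier, its obligations follow automatically.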

The future of responsible AI

India has recognized the growing importance of AI and the need for appropriate regulation. There have been discussions about the formulation of a national AI strategy and the introduction of AI-specific laws to address emerging challenges. It's essential to monitor developments in Indian legislation and regulatory frameworks as the landscape of AI governance continues to evolve globally. Stakeholders, including policymakers, industry leaders, researchers, and civil society organizations, play a crucial role in shaping responsible and inclusive AI policies that align with India's societal values and priorities.
