
India's AI strides run into privacy law headwinds

A host of companies, including information technology firms, banks and cloud storage providers, are seeking legal advice amid apprehensions that their use of generative artificial intelligence (GenAI) could run afoul of the provisions of the data law, industry executives said.

Many companies are building proprietary GenAI models without adequate transparency about how personal data is processed for training, experts said, adding that this could go against the principles of lawful consent, fairness and transparency prescribed in the Digital Personal Data Protection (DPDP) Act.

The legislation, passed by Parliament in August 2023, provides for the protection of individuals’ personal data while allowing the processing of such data for lawful purposes. With privacy being a fundamental right, companies are worried about the legal liabilities that could arise from non-compliance, experts said.


“Ideally, using publicly available data for GenAI training without appropriate consent stands in conflict with DPDP or copyright laws,” said Joebin Devassy, senior partner at Desai & Diwanji.


GenAI models generate new output, learn and reason by themselves and adapt to new information, he said, adding, “In such a flow, establishing breach of consent becomes challenging. AI is a complex animal in the court of law.”

Companies are consulting lawyers on issues such as how to define the scope of their privacy policies to seek appropriate user consent, the contractual obligations needed for data processors offering AI-as-a-service, and the global laws and regulations that apply to cross-border data exchange.

“The DPDP Act also mandates the principles of purpose limitation and data minimisation, whereas models trained on the same data are being used for multiple applications and there is uncertainty whether the personal data being processed is limited to what is necessary. Further, under the Act, data fiduciaries cannot bundle all processing activities under a blanket consent,” said Akshayy S Nanda, partner (competition law and data privacy practice) at Delhi-based law firm Saraf & Partners.

He further said, “Can the model delete select parts of its memory? Or does it need retraining? Are companies ready to bear that cost? These are some of the pressing questions we hear.”
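Nanda’s purpose-limitation point can be made concrete with a short sketch. The snippet below is illustrative only: the ConsentRecord type, field names and purpose labels are assumptions made for this example, not anything the DPDP Act prescribes. It shows the difference between a blanket consent and consent gated on each specific processing purpose, such as model training.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: the DPDP Act does not prescribe this schema.
# The idea is purpose-limited consent: each processing purpose
# (e.g. "model_training") must be consented to individually rather than
# bundled under one blanket consent.

@dataclass
class ConsentRecord:
    user_id: str
    # Purposes the user has explicitly agreed to, e.g. {"service_delivery"}
    consented_purposes: set = field(default_factory=set)

def may_process(record: ConsentRecord, purpose: str) -> bool:
    """Allow processing only if this specific purpose was consented to."""
    return purpose in record.consented_purposes

# A record consented only for service delivery cannot be silently
# reused to train a GenAI model.
rec = ConsentRecord("user-42", {"service_delivery"})
assert may_process(rec, "service_delivery")
assert not may_process(rec, "model_training")  # needs fresh, specific consent
```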

While scores of copyright infringement lawsuits are pending in courts around the world, with no strong precedent yet on whether GenAI violates citizens’ rights, Indian companies want to future-proof themselves against legal shocks, industry executives said.

Tata Consultancy Services (TCS), the world’s second most valued IT services company, said it is continuously seeking to understand and navigate the evolving legal landscape through proactive risk management and adherence to regulatory standards.

“This includes ensuring compliance with data privacy laws, like the EU’s GDPR (General Data Protection Regulation) or India’s DPDP Act,” said Siva Ganesan, global head of the AI.Cloud business unit at TCS. “By building robust governance frameworks and mechanisms for consent management and data retention, organisations can future-proof their business practices and continuously monitor global regulatory trends on IP, transparency and fairness.”

It is not just personal or publicly available data that could be misused, according to experts. Inferences drawn about an individual are also considered personal data, they said.

“Inaccuracy and bias are the most critical concerns for companies who are experimenting with GenAI applications in marketing, hiring, digital lending, insurance claims, etc.,” said Aadya Misra, counsel at Bengaluru-based Spice Route Legal. “Who is responsible if the model hallucinates or collapses? Is it the data fiduciary? Or developer companies such as OpenAI?”

Legal Guardrails

However, experts said, doors must not be shut on large language models (LLMs) for fear of future legal setbacks.

AI companies such as OpenAI and Google have started to indemnify their customers against lawsuits they may face over their use of the companies’ LLMs, said Paramdeep Singh, co-founder of Shorthills AI, which provides model training solutions.

However, this may involve legal complexities, he said, adding, “AI applications are currently treated as experimental, and we, as data processors, do not hold responsibility for inaccuracies and hallucinations. And so, our customers (data fiduciaries) do not force this as a contractual obligation.”

“Organisations do understand that AI deployments will always have some element of risk,” said Vijay Navaluri, co-founder, Supervity.ai, which builds AI agents for clients including Daikin, Mondelez and Ultratech. “To address this, Supervity follows the PACT (privacy, accuracy, cost and time) framework, which helps companies to structurally think through what weightage needs to be assigned to which areas.”

For instance, for highly sensitive data such as that in banking, financial services and insurance, and healthcare, companies prefer private LLMs with strict access controls and techniques such as data masking and data anonymisation. For applications in finance and accounting, which demand the highest level of accuracy, all transactions are approved only after human review, said Navaluri.
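As a rough illustration of the masking step Navaluri describes, the sketch below replaces obvious personally identifiable fields with placeholder tokens before a prompt reaches an LLM. The regex patterns and placeholder labels are assumptions for this example; production systems typically rely on dedicated PII-detection tools rather than regexes alone.

```python
import re

# Minimal data-masking sketch: substitute detected PII with typed
# placeholders before text is sent to an LLM. Real deployments use
# dedicated PII-detection tooling, not simple regexes.

PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\b\d{10}\b"),              # 10-digit mobile numbers
    "PAN":   re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),  # Indian PAN format
}

def mask_pii(text: str) -> str:
    """Replace each detected PII span with a typed placeholder."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Customer Asha (asha@example.com, 9876543210, PAN ABCDE1234F) disputes a charge."
print(mask_pii(prompt))
# Customer Asha ([EMAIL], [PHONE], PAN [PAN]) disputes a charge.
```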

These technologies are evolving at breakneck speed, faster than laws can catch up, said Spice Route’s Misra. “More than AI law or regulation, we need AI ethics and principles. Self-regulation by companies or technology industry bodies is the need of the hour,” Misra said.