Regulating Digital Deception: India’s Approach to Dark Patterns and Deepfakes
In today’s digital landscape, the convergence of advanced technologies and sophisticated design strategies has led to the emergence of deceptive practices that compromise user autonomy and trust. Notably, “Dark Patterns” (manipulative design elements in user interfaces) and “Deepfakes” (Artificial Intelligence (“AI”)-generated synthetic media) have become prevalent tools for misinformation and user manipulation. This article delves into the intricacies of these deceptive practices, examining their mechanisms, potential consequences, and the current regulatory frameworks addressing them.
Dark Patterns: The Silent Manipulator
The Central Consumer Protection Authority (“CCPA”) notified the Guidelines for Prevention and Regulation of Dark Patterns, 2023 (“Dark Pattern Guidelines”), under Section 18 of the Consumer Protection Act, 2019 (“CPA 2019”), on November 30, 2023. The Dark Pattern Guidelines define ‘Dark Patterns’ as “practices or deceptive design patterns using user interface (UI) or user experience (UX) interactions on any platform that is designed to mislead or trick users into doing something they originally did not intend or want to do, by subverting or impairing the consumer autonomy, decision making or choice, amounting to misleading advertisement or unfair trade practice or violation of consumer rights.”
Annexure I to the Dark Pattern Guidelines provides a non-exhaustive list of what might constitute dark patterns:
1. False Urgency: False urgency refers to misleading tactics that create a sense of scarcity or time pressure to influence user decisions. This includes exaggerating product popularity or falsely limiting availability, such as stating ‘Only 2 rooms left! 30 others are looking at this right now’ without proper context. Another example is framing a sale as an ‘exclusive’ or time-sensitive offer to push users into making a purchase.
2. Basket Sneaking: Basket sneaking is the inclusion of additional items, such as products, services, or payments to a charity or donation, in the cart at the time of checkout without the user’s consent; for instance, pre-selected gift wrapping that the user did not choose to purchase (a simple sketch of this pattern appears after this list).
3. Confirm Shaming: Confirm shaming manipulates users by instilling fear, shame, or guilt to coerce purchases or subscription renewals. For example, a flight booking platform might warn “I will stay unsecured” if insurance isn’t added, or a shopping site might add a charity donation with a dismissive phrase like “charity is for rich, I don’t care”. This tactic pressures users into desired actions by exploiting their emotional vulnerabilities.
4. Forced Action: Forced action occurs when a user is compelled to purchase additional goods, subscribe to unrelated services, or sign up for things they did not originally intend, in order to access the initially desired product or service. This can include blocking access to a paid service unless an upgrade is purchased, requiring a newsletter subscription to buy a product, or forcing the download of an unrelated app to access advertised features. Essentially, it manipulates the user’s intended path by making additional purchases or actions mandatory.
5. Subscription Trap: A subscription trap intentionally obstructs the cancellation of a paid subscription by making the process impossible, complex, hidden, or confusing. This can include requiring payment information for free trials, employing ambiguous cancellation instructions, or simply making the cancellation option difficult to find. Ultimately, it aims to keep users paying for a service they no longer want.
6. Interface Interference: Interface interference manipulates user interfaces by highlighting specific information while obscuring other relevant details, ultimately misdirecting users from their intended actions. This can involve tactics like making “no” options less prominent in pop-ups or using deceptive icons that trigger unwanted actions instead of closing windows. Essentially, it’s a design strategy that prioritizes influencing user choices over clear and honest presentation of options.
7. Bait and Switch: Bait and switch is a deceptive tactic where a desirable offer is advertised to attract customers, but then an alternative, often less appealing or more expensive option is presented instead. For instance, a seller might advertise a high-quality product at a low price, only to claim it’s unavailable when the customer is ready to buy, pushing a similar but pricier item. This practice lures customers in with a tempting offer, then exploits their interest by substituting it with something else.
8. Drip Pricing: Drip pricing is a manipulative practice where price elements are hidden or revealed gradually, often increasing the final cost beyond what was initially presented. This includes charging a higher amount post-purchase than the price shown at checkout, advertising a product as “free” without disclosing limitations or required in-app purchases, or preventing access to a paid service without additional purchases. Essentially, it obscures the true cost to make the initial offer seem more attractive.
9. Disguised Advertisement: Disguised advertisement refers to the practice of concealing ads by presenting them as something else, like user-generated content, news articles, or other non-advertising formats. Sellers and advertisers are responsible for disclosing when their platform content is actually an advertisement.
10. Nagging: Nagging is a dark pattern that overwhelms users with excessive requests, information, options, or interruptions unrelated to their intended transaction, disrupting the process. This can manifest as constant prompts to download an app, requests for phone numbers under false pretences of security, or persistent notification requests without a clear “no” option. Essentially, it’s using repetitive and intrusive tactics to pressure users into actions they may not want to take.
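To make these mechanics concrete, the following TypeScript sketch (hypothetical, and not drawn from the Dark Pattern Guidelines themselves) contrasts a checkout that silently bills pre-selected add-ons, the essence of Basket Sneaking and one driver of Drip Pricing, with a compliant checkout that bills only what the user actively chose. All item names and prices are illustrative.

```typescript
// Hypothetical sketch of "basket sneaking": add-ons the user never
// chose are pre-selected at checkout, silently inflating the total.

interface LineItem {
  name: string;
  price: number;         // price in rupees
  userSelected: boolean; // true only if the user explicitly added the item
}

// Dark-pattern checkout: every item in the cart is billed,
// including pre-checked add-ons the user never asked for.
function darkCheckoutTotal(items: LineItem[]): number {
  return items.reduce((sum, item) => sum + item.price, 0);
}

// Compliant checkout: only items the user actively chose are billed.
function fairCheckoutTotal(items: LineItem[]): number {
  return items
    .filter((item) => item.userSelected)
    .reduce((sum, item) => sum + item.price, 0);
}

const cart: LineItem[] = [
  { name: "Wireless headphones", price: 2999, userSelected: true },
  { name: "Gift wrapping",       price: 149,  userSelected: false }, // sneaked in
  { name: "Charity donation",    price: 50,   userSelected: false }, // sneaked in
];

console.log(darkCheckoutTotal(cart)); // 3198 -- includes unchosen add-ons
console.log(fairCheckoutTotal(cart)); // 2999 -- only what the user picked
```

On this sketch’s assumptions, the first approach would amount to basket sneaking, since the gift wrapping and donation were never selected by the user; the compliant version preserves consumer choice by default.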
Further, to combat deceptive practices like Drip Pricing and Disguised Advertisements, the CCPA and the Department of Consumer Affairs have also introduced other comprehensive guidelines. The Guidelines for Prevention and Regulation of Greenwashing or Misleading Environmental Claims, 2024, ensure transparency and accuracy in advertisements relating to environmental sustainability; they specifically address misleading environmental claims that falsely exaggerate the eco-friendliness of a product or service, whether made through online or offline channels. Additionally, the Guidelines for Prevention of Misleading Advertisements and Endorsements for Misleading Advertisements, 2022, set clear standards for truthful advertising, prohibiting false claims and requiring that all mandatory fees be disclosed upfront. Recognizing the influence of celebrities and social media influencers, the Additional Influencer Guidelines for Health and Wellness Celebrities, Influencers and Virtual Influencers, 2023, mandate that celebrities, influencers, and virtual influencers who present themselves as health experts or medical practitioners must provide clear disclaimers when sharing information, promoting products or services, or making any health-related claims, ensuring the audience understands that such endorsements are not a substitute for professional medical advice, diagnosis, or treatment. These initiatives collectively aim to curb deceptive practices and strengthen consumer protection.
Analysis
At first glance, the Dark Pattern Guidelines appear to be a step in the right direction. However, the disclaimer in Annexure I, which describes the list as merely illustrative, sits uneasily with the operative provisions of the Dark Pattern Guidelines, under which anyone engaging in the listed activities is considered to be using Dark Patterns. This inconsistency creates ambiguity in interpretation and weakens the enforceability of the Guidelines. The widespread use of Dark Patterns has significantly eroded consumer autonomy, leaving consumers more vulnerable to deceptive practices. Addressing this effectively requires a clear and well-structured regulatory framework, one that ensures businesses offer their services transparently rather than imposing them on consumers.
Globally, the regulation of dark patterns is still in its early stages. So far, the responsibility for curbing such deceptive practices has largely fallen on data protection authorities and consumer protection agencies, since dark patterns frequently manipulate users into compromising their own privacy. India needs to develop a robust enforcement mechanism while simultaneously crafting innovative solutions tailored to its own legal and digital landscape.
Deepfakes
Deepfakes, much like dark patterns, distort reality to manipulate user perception and decision-making. Both exploit cognitive biases, whether through misleading endorsements, fabricated media, or deceptive interfaces, to deceive individuals. As AI-generated synthetic media continues to blur the line between truth and deception, Deepfakes have become a growing global concern, including in India. These hyper-realistic forgeries not only mislead individuals but also manipulate public opinion and disrupt financial markets.
Understanding Deepfakes
Deepfakes are AI-generated media that alter or fabricate videos, images, or audio recordings to make individuals appear to say or do things they never did. Using techniques such as face reenactment, generation, swapping, and speech synthesis, Deepfakes convincingly manipulate a person’s face, voice, or likeness. While deepfake technology has positive applications in entertainment and accessibility, its misuse in areas like politics, finance, and media raises serious ethical and legal concerns.
India witnessed a concerning Deepfake incident last year when a morphed video featuring the chief executives of the Bombay Stock Exchange (BSE) and National Stock Exchange (NSE) surfaced online. The video falsely depicted them giving stock recommendations to the public, causing panic and confusion among investors. In response, both stock exchanges swiftly issued cautionary statements warning investors not to trust such misleading content. This incident highlights the growing risks posed by Deepfakes, particularly in financial markets.
Recognizing the growing concerns over misinformation, the Union Government on November 7, 2023, issued an advisory under the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, urging social media platforms to detect and remove Deepfake content within 36 hours of receiving complaints. Another advisory, dated December 26, 2023, emphasized the need for intermediaries to clearly outline prohibited content, such as impersonation, in their user policies and agreements. While these measures are a step toward addressing the issue, they reflect India’s current reliance on advisories, rather than a comprehensive legal framework specifically targeting Deepfakes.
Analysis
Though the term ‘Deepfake’ is not explicitly defined under Indian law, several existing laws, including the Information Technology Act, 2000, Bharatiya Nyaya Sanhita, 2023, and the Digital Personal Data Protection Act, 2023, may apply. However, there are gaps in the legal framework that need attention. The rapid spread of Deepfakes can severely impact public perception, reputation, and privacy, especially in cases of defamation or exploitation. Court actions often fail to provide timely relief, which makes it difficult to mitigate the damage effectively. Moreover, identifying perpetrators can be difficult, and the delay in legal proceedings may make remedies ineffective if the Deepfake has gone viral. Social media platforms must take a more proactive role in detecting and removing such content to minimize the harm.
As the nature of AI continues to evolve, an effective regulatory framework is crucial for tackling Deepfakes. The European Union has made significant progress by enacting the world’s first comprehensive ‘Artificial Intelligence Act,’ which classifies and regulates AI systems according to the risks they pose. While the AI Act does not ban AI outright, it prohibits certain high-risk uses in specific scenarios and imposes strict compliance requirements on AI developers, including transparency regarding training data and adherence to copyright laws.
In contrast, India has thus far taken a more cautious approach, focusing on leveraging AI’s potential while urging social media platforms to take proactive steps in monitoring and removing harmful content. The current legal framework, based on the intermediary responsibility under the IT Rules, 2021, remains limited in scope, and the extent of intermediary liability is still evolving. As these platforms gain more technological capabilities, their obligations under Indian law are likely to become stricter. However, to address the growing challenges posed by Deepfakes, India would benefit from establishing more specific, well-defined guidelines that directly tackle this emerging threat.
To strengthen the regulatory response to Deepfakes, India could draw inspiration from other countries, such as the United States. While India has relied on advisories under the IT Rules, 2021, the US has taken a more legislative approach, introducing and in some cases enacting measures such as the Malicious Deep Fake Prohibition Act, 2019, the Identifying Outputs of Generative Adversarial Networks (IOGAN) Act, 2020, and the Deepfakes Accountability Act, 2023. These measures focus on criminalizing malicious Deepfakes, studying the technology, and establishing accountability for creators and distributors. India could consider adopting similar legislative measures to complement its current regulatory framework, thereby strengthening its response to the dangers posed by Deepfakes.
Conclusion
India has taken significant steps to regulate digital deception through consumer protection guidelines and intermediary rules. While the Dark Pattern Guidelines provide a foundation for addressing manipulative design practices, their ambiguity may hinder enforcement. Similarly, the government’s response to Deepfakes, though proactive, remains reliant on advisories rather than a dedicated legal framework. Given the rapid evolution of AI-driven deception, a more robust regulatory approach, drawing from global best practices, is necessary. Strengthening enforcement mechanisms, enhancing platform accountability, and introducing comprehensive legislation will be crucial to safeguarding consumer rights and information integrity in the digital age.
This article was first published in Lexology.