Ethics and Regulations in Digital Marketing
Data, Data, Data
The Digital Trail and the Granularity of Data
Digital marketing is fundamentally distinct from traditional communications because it relies on the extensive “digital trail” or footprint that consumers leave behind as they interact with online platforms.
Unlike the broad segmentation of the past, modern marketers have access to granular data at the individual level, which Hanlon (2020) categorizes into three specific types: demographic identifiers (e.g., age, location), psychographic information (e.g., interests, political views), and webographic data (e.g., pages visited, likes, downloads). While this data enables highly targeted customer experiences such as “behavioral microtargeting,” the underlying data collection often occurs without meaningful consent because users rarely read the lengthy terms and conditions that authorize it. This creates a significant ethical tension: the “digital footprint” exposes consumers to privacy risks and “data deception” on a global scale, as seen in incidents like the Cambridge Analytica scandal (Westby 2019).
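To make this categorization concrete, the three data types might be represented as a single typed profile. This is an illustrative sketch only; the field names are hypothetical and are not drawn from Hanlon (2020).

```typescript
// Illustrative only: hypothetical fields for the three data categories.
interface DemographicData {
  age: number;
  location: string;
}

interface PsychographicData {
  interests: string[];
  politicalViews?: string;
}

interface WebographicData {
  pagesVisited: string[];
  likes: string[];
  downloads: string[];
}

// A single consumer’s “digital trail,” combined at the individual level.
interface ConsumerProfile {
  demographic: DemographicData;
  psychographic: PsychographicData;
  webographic: WebographicData;
}
```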
Companies systematically harvest and monetize vast amounts of personal data—everything from search histories and location trails to clicks and likes—turning intimate details of our lives into a commodity. Zuboff (2023) calls this model “surveillance capitalism”: firms collect behavioral information, use it to build predictive profiles, and sell those insights to advertisers and other buyers, often without fully informed consent. That drive to monitor, profile, and influence people at scale creates powerful incentives that threaten privacy, individual autonomy, and democratic decision-making.
The Privacy Dilemma: Personalization vs. Exclusion
The central ethical dilemma in digital marketing is not a simple choice between “good” privacy and “bad” data use; rather, it involves complex trade-offs between consumer protection and economic utility.
A key tension is Privacy vs. Personalization. Dubé et al. (2025) argue that data-based personalization is not automatically harmful and can be a “win-win” that helps consumers find niche products and enables smaller companies to compete.
Paradoxically, strict privacy regulations may lead to algorithmic exclusion, where disadvantaged groups (who often generate less data) become “invisible” to the marketplace, effectively denying them access to beneficial offers or credit.
Furthermore, the demand for “transparency” highlights a failure in current governance: “notice and consent” regimes often produce “consent fatigue,” in which consumers are bombarded with requests they cannot realistically process, so the notices fail to provide true protection.
Consequences and the Regulatory Balance
The consequences of these ethical failures are significant for both consumers and organizations. Beyond the immediate erosion of trust, the regulatory responses to these issues, such as GDPR or Apple’s App Tracking Transparency (ATT), carry unintended economic costs. While intended to protect autonomy, these regulations can inadvertently stifle innovation by small entrepreneurs who rely on third-party data to reach audiences, thereby tilting the competitive landscape in favor of large technology incumbents who already possess vast amounts of first-party data. Consequently, the “dilemma” for governance is finding a balance that prevents manipulative practices and “online shaming” without destroying the economic value that digital inclusion and personalization provide. Effective ethical governance requires moving beyond simple restrictions toward “boundary regulation,” which supports consumers in sharing information when it benefits them and restricting it when it does not.
The Shift to User Rights and Global Divergence
The global regulatory landscape has shifted fundamentally from “company ownership” of data to “user rights,” spearheaded by the EU’s General Data Protection Regulation (GDPR) and ePrivacy Directive. These frameworks mandate that consent must be active, granular, and easily withdrawable, effectively banning “just in case” data collection and establishing the “right to be forgotten.” This contrasts with the US approach under the California Consumer Privacy Act (CCPA), which operates largely on an “opt-out” model (e.g., “Do Not Sell My Info”) and focuses on transparency regarding tracking pixels and data sharing. Meanwhile, the UK is diverging slightly with the Data (Use and Access) Act (DUAA), which aims to remove consent requirements for “low-risk” analytics cookies, although the PECR still enforces strict rules on electronic marketing. Those rules resemble Australia’s Spam Act, which requires explicit consent, sender identification, and functional unsubscribe mechanisms for all commercial messages.
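A minimal sketch of what “active, granular, and easily withdrawable” consent could look like as a stored record, assuming a hypothetical in-house consent store; the purposes and field names are illustrative, not taken from the GDPR text or any specific vendor.

```typescript
// Hypothetical consent record: one flag per purpose (granular), every purpose
// defaults to false (active opt-in), and withdrawal is a one-step update.
interface ConsentRecord {
  userId: string;
  purposes: {
    analytics: boolean;
    personalisation: boolean;
    emailMarketing: boolean;
  };
  grantedAt?: Date;
  withdrawnAt?: Date;
}

function newConsentRecord(userId: string): ConsentRecord {
  // GDPR-style default: nothing is collected “just in case.”
  return {
    userId,
    purposes: { analytics: false, personalisation: false, emailMarketing: false },
  };
}

function withdrawAll(record: ConsentRecord): ConsentRecord {
  // Withdrawal must be as easy as granting consent in the first place.
  return {
    ...record,
    purposes: { analytics: false, personalisation: false, emailMarketing: false },
    withdrawnAt: new Date(),
  };
}
```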
Localization and Vulnerable Audiences Beyond the West
Regulations are becoming increasingly localized and prescriptive to protect national interests and vulnerable groups. China’s PIPL and Algorithm Law enforce strict data localization (keeping data within mainland China), explicitly ban algorithmic price discrimination (“Big Data Killing”), and mandate “turn off” switches for personalized feeds. Similarly, India’s DPDP emphasizes inclusivity by requiring privacy notices in 22 designated languages and introducing “Consent Managers” to centralize user permissions. Finally, protecting children has become a distinct priority; the US COPPA and Australia’s Online Privacy Code mandate verifiable parental consent and age verification, with Australia proposing strict limits on social media access for those under 16. For marketers, this fragmentation means that a single global strategy is no longer viable; tactics like centralized data storage or universal retargeting pixels now carry significant legal risks depending on the specific jurisdiction.
Reflecting on These Digital Marketing Strategies
Tactic A (The Tracker): putting a Meta Pixel on the homepage for everyone.
Risky Region: European Union.
Reason: Under the ePrivacy Directive and GDPR, placing non-essential trackers (like retargeting pixels) requires prior, informed, active consent (opt-in). You cannot just load it and offer an opt-out later.
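As a sketch of what opt-in gating can look like in practice, the snippet below only injects the pixel script after the user has actively granted marketing consent. The `onConsentGranted` callback and consent categories are hypothetical stand-ins for whatever consent banner a site actually uses; this is illustrative, not an official Meta integration.

```typescript
// Hypothetical consent categories exposed by a consent banner.
type ConsentCategories = { strictlyNecessary: boolean; marketing: boolean };

function loadMetaPixel(pixelId: string): void {
  // The tracker is injected only here, never on initial page load.
  const script = document.createElement("script");
  script.src = "https://connect.facebook.net/en_US/fbevents.js";
  script.async = true;
  document.head.appendChild(script);
  // Pixel initialisation calls would follow once the script has loaded.
}

function onConsentGranted(consent: ConsentCategories): void {
  // ePrivacy/GDPR: non-essential trackers require prior, informed opt-in,
  // so nothing fires when consent.marketing is false.
  if (consent.marketing) {
    loadMetaPixel("YOUR_PIXEL_ID");
  }
}
```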
Tactic B (The Cold Email): buying a list to send cold sales pitches.
Risky Region: Australia (or China/EU).
Reason: The Spam Act 2003 in Australia prohibits sending commercial electronic messages without consent (which cannot be inferred simply from a bought list). China’s MIIT regulations also require strict opt-in.
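A rough pre-send check, assuming hypothetical `Recipient` and `CommercialMessage` types, showing how the Spam Act’s three requirements (consent, sender identification, a working unsubscribe facility) might be enforced before anything is sent; this is a sketch, not legal advice or a real email API.

```typescript
interface Recipient {
  email: string;
  consent: "express" | "inferred" | "none"; // a bought list maps to "none"
}

interface CommercialMessage {
  body: string;
  senderIdentification: string; // who is sending and how to contact them
  unsubscribeLink: string;      // must actually work when clicked
}

// Spam Act 2003: consent cannot be inferred merely from having bought a list.
function consentedRecipients(list: Recipient[]): Recipient[] {
  return list.filter((r) => r.consent === "express");
}

function isCompliant(message: CommercialMessage): boolean {
  return (
    message.senderIdentification.trim().length > 0 &&
    message.unsubscribeLink.trim().length > 0
  );
}
```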
Tactic C (The Data Lake): storing all global customer data on a single server in California.
Risky Region: China.
Reason: The PIPL generally requires that personal information generated within China be stored domestically (Data Localization). Transferring it to the US without passing a security assessment and obtaining separate consent is illegal.
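A simplified routing sketch, assuming hypothetical regional storage endpoints, of how data residency can be handled by keeping records in the region where they were generated instead of pooling everything in one US data lake.

```typescript
// Hypothetical endpoints; a real deployment would use region-specific
// infrastructure plus a documented cross-border transfer process.
type Region = "CN" | "EU" | "US";

const storageEndpoints: Record<Region, string> = {
  CN: "https://storage.example.cn", // PIPL: China-generated data stays onshore
  EU: "https://storage.example.eu",
  US: "https://storage.example.com",
};

function storageEndpointFor(originRegion: Region): string {
  // Data generated in mainland China is written to the CN endpoint unless a
  // security assessment and separate consent authorize an export.
  return storageEndpoints[originRegion];
}
```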
Platform Governance and Algorithmic Fairness
Digital platforms like Meta, Google, and TikTok act as “private regulators,” creating and enforcing guidelines that often supersede local laws to manage ethical risks and user safety. A critical area of these guidelines focuses on preventing discriminatory targeting and addressing “data deserts” that disproportionately affect marginalized groups. For instance, data broker files such as Experian’s were found to be 50% less likely to contain information about Hispanic and Asian individuals than about White individuals, effectively excluding these demographics from credit and housing offers dependent on such data. Furthermore, platform algorithms can inadvertently perpetuate bias even without explicit instructions; a study showed that Facebook’s bidding algorithm delivered STEM career advertisements to significantly more men than women, despite the advertiser not specifying a gender preference. To combat these issues, platforms have had to implement rigid standards, such as Meta’s removal of targeting options for housing, employment, and credit ads following lawsuits alleging discrimination against protected classes.
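One practical response is to audit delivery after the fact. The sketch below, assuming a hypothetical impression log, computes each audience segment’s share of delivered impressions so that a large, unexplained gap (for example, STEM ads reaching far more men than women) can be flagged even when no segment was targeted explicitly.

```typescript
interface Impression {
  campaignId: string;
  audienceSegment: string; // e.g. a self-reported or modelled demographic group
}

// Share of delivered impressions per segment for one campaign.
function deliveryShares(log: Impression[], campaignId: string): Map<string, number> {
  const relevant = log.filter((i) => i.campaignId === campaignId);
  const shares = new Map<string, number>();
  if (relevant.length === 0) return shares;

  // Count impressions per segment, then convert counts to shares.
  for (const imp of relevant) {
    shares.set(imp.audienceSegment, (shares.get(imp.audienceSegment) ?? 0) + 1);
  }
  for (const [segment, count] of shares) {
    shares.set(segment, count / relevant.length);
  }
  return shares;
}
```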
Content Safety, Disclosure, and “Code is Law”
In addition to fairness, platform policies are increasingly focused on physical safety and transparency, enforcing rules through code that prevents non-compliant content from ever being published (a concept known as “Code is Law”). The need for safety guidelines is underscored by the rise of dangerous trends like “risky selfies” taken to achieve social media fame, which have been linked to 120 to 250 fatalities, prompting platforms to ban content that encourages dangerous behavior. Regarding transparency, platforms now mandate strict “Paid Partnership” disclosures to distinguish organic content from sponsored material. This shift is partly a response to massive breaches of trust, such as the Cambridge Analytica scandal where 87 million Facebook profiles were harvested without consent. To further protect users, new policies also require mandatory labeling of AI-generated content (e.g., “Made with AI”) to prevent deception, ensuring that the “digital trail” consumers leave is not exploited by misleading synthetic media.
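A small sketch of the “Code is Law” idea: a hypothetical pre-publish gate that simply refuses to publish content missing the required disclosure or AI label, so non-compliant posts never appear at all. The field names are illustrative and do not correspond to any platform’s real API.

```typescript
interface Post {
  isSponsored: boolean;
  hasPaidPartnershipLabel: boolean;
  isAiGenerated: boolean;
  hasAiLabel: boolean; // e.g. a “Made with AI” tag
}

function canPublish(post: Post): boolean {
  // Sponsored content must carry a Paid Partnership disclosure.
  if (post.isSponsored && !post.hasPaidPartnershipLabel) return false;
  // Synthetic media must be labelled before it can go live.
  if (post.isAiGenerated && !post.hasAiLabel) return false;
  return true;
}
```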
The Duality of Transparency in Building Trust
Within the P.A.C.T. framework, the ‘Trust’ pillar is foundational, yet its success relies heavily on the nuanced execution of transparency.
Transparency is not merely a legal compliance box to check; it is the primary driver of “Cognitive Trust,” which is established when users clearly understand the “how” and “why” behind AI-driven decisions and automated recommendations.
However, this concept presents a challenging duality for marketers. On one hand, adopting “enlightened data principles” — where organizations are transparent and educate users — transforms data privacy from a burden into a relationship-building opportunity that empowers consumers. On the other hand, marketers must navigate the “privacy paradox,” where consumers explicitly claim to value privacy and transparency but often exhibit contradictory behavior by ignoring disclosures due to “consent fatigue” or a desire for convenience.
Consequently, effective transparency must go beyond dense legal jargon to become a “humanized” interaction that respects consumer autonomy, ensuring that the transparency provided is both meaningful enough to build emotional trust and simple enough to avoid overwhelming the user.