An Analysis of the Banning Surveillance Advertising Act: Pros, Cons, and Potential Loopholes

2023 Legislation Competition Winner

By Charlotte Kahan

November 21, 2023

Introduction

Introduced in January 2022 by Representatives Eshoo of California and Schakowsky of Illinois, H.R. 6416 is a legislative effort to protect personal privacy in the face of ever-increasing data collection by technology companies such as Alphabet and Meta. Congress should pass H.R. 6416 because the benefits of the bill significantly outweigh the concerns of its critics. Before it is passed, however, two seemingly minor changes should be made to the original text; those changes would close potential loopholes and improve the likelihood that the bill achieves its intended ends.

Analysis of H.R. 6416: Strengths and Weaknesses

H.R. 6416, called the “Banning Surveillance Advertising Act,” would prohibit the use of personal data in targeted advertising. Technology companies, or “advertising facilitators,” can extrapolate a great deal of sensitive information about users, such as race, national origin, gender identity, sexual orientation, and other characteristics, based on users’ online behaviors. Those companies can sell that data to advertisers, who in turn target their ads to specific consumers. The ways in which users engage with platforms such as Facebook and Instagram are deeply personal — the posts they like, articles they read, and videos they watch are all indications of what is uniquely salient to those users.1 The bill aims to prevent advertisers from purchasing this sensitive data and using it to strategically target ads, particularly towards users with protected-class statuses.

In a 2021 Brookings study, Jinyan Zang concluded that “discrimination by race and ethnicity on Facebook’s platforms is… a violation of the existing civil rights laws that protect marginalized consumers against advertising harms and discrimination by race and ethnicity, especially in the areas of housing, employment, and credit.” Those violations are real and significant. A lawsuit filed in 2018, for example, alleged that Facebook’s advertising platform “enable[s] landlords and real estate brokers to bar families with children, women and others from receiving rental and sales ads for housing.” ProPublica found that, even after Facebook publicly vowed to redouble its efforts to root out discriminatory advertisements, housing ads excluding African Americans, Jews, Spanish speakers, and other groups could still be posted to the site. If technology platforms cannot or will not police themselves, then the FTC, state attorneys general, and members of the public should be able to do so. This is what H.R. 6416 provides.

Furthermore, evidence suggests that targeted advertising contributes to ideological echo chambers that expose users only to viewpoints aligned with their existing beliefs — a kind of algorithm-imposed political segregation that is deeply harmful to American democracy and civil discourse. Targeted advertisements hasten the polarization of the electorate by “reinforc[ing] existing group opinions and, by extension, shift[ing] the entire group’s ideology to the extreme.” They also keep people siloed in ways that are not merely ideological. For example, the ACLU explains that “[w]hen employers in male-dominated fields advertise their jobs only to men, it prevents women from breaking into those fields… Because no women saw these ads, they were shut out” of learning about particular job opportunities. The ubiquity of targeted advertisements, and the pernicious sorting they encourage, threaten to segregate not just forums of civic debate but also the places where people live and work.

The drawback of H.R. 6416, at least from the viewpoint of advertising facilitators, is that the elimination of targeted ads would pose a threat to their business models. However, contextual advertising (ads based on content a user is engaging with at a given moment in time) would still be permissible under H.R. 6416, and advertising facilitators could continue to target ads based on broad location data such as a user’s municipality. For these reasons, the economic concerns of industry actors may not be entirely justified. A 2019 study found that “when the user’s cookie is available” — that is, when advertisers can target ads — the “publisher’s revenue increases by about 4%.” The authors of the study note that, while the “increase is significant from a statistical perspective,” from “an economic perspective, the increase corresponds to an average increment of just $0.00008 per advertisement.”2 Targeted advertisements may confer greater profits on industry in other, more difficult-to-measure ways, but early empirical work suggests that the economic impact of H.R. 6416 on advertising facilitators like Facebook or Twitter may not be as significant as those platforms fear. The fact that targeted ads “based on individual user data didn’t even really exist until the past decade” — that is, advertising facilitators historically made their money by selling non-targeted ads — is itself an indication that the business model can succeed without the invidious use of personal data.
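To put those figures in perspective, consider a rough back-of-the-envelope illustration derived solely from the two numbers the study reports (a 4% revenue increase corresponding to an increment of $0.00008 per advertisement); the resulting baseline figure is an inference of scale, not a finding of the study itself.

```latex
% Illustrative estimate only (not a figure reported by the study):
% if a 4% revenue increase corresponds to an increment of $0.00008 per ad,
% the implied baseline revenue per advertisement is
\[
  \text{implied baseline revenue per ad} \approx \frac{\$0.00008}{0.04} = \$0.002
\]
% i.e., roughly two-tenths of a cent per advertisement, of which cookie-based
% targeting accounts for less than one-hundredth of a cent.
```

On that rough estimate, behavioral targeting adds less than a hundredth of a cent to a per-ad revenue figure of about a fifth of a cent, underscoring how small the sums at stake are on a per-advertisement basis.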

Nevertheless, there is a risk that Facebook and other platforms could respond to the passage of legislation like H.R. 6416 by erecting paywalls or designating certain services as “premium” content available only for paying users, in order to recapture a potential loss in profits. Such a result would disadvantage low-income users and therefore harm some of the vulnerable groups that the legislation itself was designed to protect.

Proposed Changes to H.R. 6416

“Close Proximity”

Section 2(a)(2)(i) of H.R. 6416 provides that advertisers may disseminate “contextual advertisements,” which are ads that are based on information “(I) that the individual is viewing or with which the individual is otherwise engaging; or (II) for which the individual searched.” The next clause, labeled (ii), introduces potentially problematic language: a permissible contextual advertisement must be “displayed or otherwise disseminated in close proximity to information described in clause (i)” (emphasis added).

The phrase “close proximity” could raise questions of statutory interpretation. First, there is the question of what kind of proximity is meant. It could mean closeness in time: for example, if a user searches a platform for “blue sneakers,” perhaps an advertisement from Nike showing its blue Air Force 1 shoes must become visible to the user within two minutes. Alternatively, “proximity” could refer to topical relevance, irrespective of the time between a user’s engagement with content and the appearance of an advertisement. Perhaps the ad for blue Nike Air Force 1 shoes is proximate enough, in a thematic sense, to “blue sneakers” that it could permissibly be disseminated to the user several weeks after the user made the search query.

The second, related question raised by the phrase “close proximity” is just how close the proximity must be. Is a 30-minute gap between the user’s initial engagement with content and the subsequent appearance of an advertisement “proximate” enough? Can Nike target the “blue sneaker” searcher for months after the original query, bombarding the user with advertisements filled with splashy blue apparel?

In short, lack of clarity about the meaning of “close proximity” in Section 2(a)(2)(ii) may provide technology companies with just enough latitude to game the system in creative ways. It could also leave the FTC, as a designated enforcement agency, in the unenviable position of deciding on unclear grounds which advertisements are permissibly “contextual” and which are impermissibly “targeted” on the basis of personal information like browsing history. To avoid these potential pitfalls, H.R. 6416 should provide more exact specifications about what constitutes “close proximity.”

“Recognized Place” & “Indian Lands”

H.R. 6416 provides an “exception for targeting based on recognized place” in Section 2(c), and Section 4(17)(A)(ii) specifies that Indian lands are included in the definition of “recognized place.” The bill therefore allows advertisers to target ads based on a user’s association with Indian lands, and, by logical extension, a user’s protected-class status as a Native American person. One can imagine a variety of discriminatory outcomes. Companies might wish to target their advertisements specifically toward Native Americans because of pernicious assumptions and stereotypes about their culture. They might also choose to deliberately exclude users on Indian lands from certain employment postings. Section 4(17)(A)(ii) should be deleted from the bill in the interest of protecting an already-vulnerable population from the harmful impact of targeted advertisements.

Conclusion

Given the crucial privacy and equity interests at stake, and the weight of those interests relative to the profit incentives of an industry worth trillions of dollars, Congress should pass H.R. 6416.


Charlotte Kahan, J.D. Class of 2025, N.Y.U. School of Law.

Suggested Citation: Charlotte Kahan, An Analysis of the Banning Surveillance Advertising Act: Pros, Cons, and Potential Loopholes, N.Y.U. J. Legis. & Pub. Pol’y Quorum (2023).

1. “[A]dvertisers can circumvent existing limitations on targeting users based on their interests in sensitive topics like religion and sexual orientation.” See Till Speicher et al., Potential for Discrimination in Online Targeted Advertising, 81 Proc. of Mach. Learning Rsch. 1, 4 (2018), http://proceedings.mlr.press/v81/speicher18a/speicher18a.pdf.

2. Veronica Marotta et al., Online Tracking and Publishers’ Revenues: An Empirical Analysis (May 2019) (on file with the Workshop on the Economics of Information Security), https://weis2017.econinfosec.org/wp-content/uploads/sites/6/2019/05/WEIS_2019_paper_38.pdf.