Wednesday, November 07, 2007

Notes from the FTC Townhall on Privacy

While coverage of last week's FTC Townhall on privacy issues is still pouring in, I thought I'd take some time today to link to a few articles published by MediaPost that I've found particularly interesting in terms of what they say about the industry response to the testimony and issues raised. Firstly, I was quite surprised to see our old friend "proof of harm" pop up in this discussion, since I really didn't think that infringing upon citizens' rights could ever be defended as "unharmful". The "proof of harm"/media effects debate -- so often used by the press to dismiss public concerns about media by creating polarity and confusion -- is surprisingly out of place here, but as you can see from the excerpt below, it has nonetheless somehow wiggled its way in. From MediaPost:
IT'S NOT HARD FOR CONSUMERS to say why they dislike intrusive ads, pop-ups served via spyware, in-box-cluttering spam, or a telephone ringing during dinner. But whatever damage is caused by targeting, or serving ads to people based on the Web sites they visit, is harder to pinpoint--which is leading some Internet industry executives to question whether the FTC even has the authority to call for the regulation of the practice.

"What we haven't seen is that real harm," Mike Zaneis, Interactive Advertising Bureau vice president of public policy, told the FTC Friday, on the second day of a meeting to address behavioral targeting. He dismissed as "speculative" advocates' concerns that companies would misuse information gleaned from monitoring Web-surfing behavior.

Consumer and privacy advocates like the World Privacy Forum say they worry that companies could make assumptions based on Web users' online activity and then use that information to consumers' disadvantage. For instance, a health insurance company might decline coverage to people whose online behavior indicates they suffer from AIDS or other costly medical conditions.

But Zaneis and other Internet executives appearing in Washington last week say that the prospect of that type of scenario shouldn't lead to broad curbs on targeted advertising. They also argue that governmental attempts to regulate behavioral targeting will hurt the online ad industry's ability to grow and innovate.

Yowza! Isn't privacy invasion in itself reason enough to call these practices into question? I'm quite shocked by this position, and very glad that we already have policies in place, however minor, that establish privacy rights as citizen/consumer/human rights, including Article 12 of the Universal Declaration of Human Rights:
No one shall be subjected to arbitrary interference with his privacy, family, home or correspondence, nor to attacks upon his honour and reputation. Everyone has the right to the protection of the law against such interference or attacks.

If market research doesn't qualify as arbitrary, I don't know what does. And as for attacks upon one's reputation, I think that the scenario outlined by the World Privacy Forum certainly represents a likely and quite apt example. But what about other, less obvious instances of social sorting and categorization that occur during web-tracking and data-mining?

A second typical industry response can be found in a thought piece ("Just an Online Minute") that the publication sent out on Friday (Nov. 2), discussing Esther Dyson's call for enhanced disclosure around online marketing and data-collection practices. Here, the emphasis is on how difficult it would be to put power back into consumers' hands, with the blame laid on the fact that many users don't read privacy policies:
Call it "Disclosure 2.0." That's the term Internet guru Esther Dyson used today to describe a new type of privacy notice that might be coming to the online marketing world. Speaking at the second day of a Federal Trade Commission conference about privacy and Web ads, Dyson proposed that social networking sites will drive new types of interaction between marketers and consumers. She said that consumers -- now trained in some aspects of the art of profile creation and maintenance via sites like Facebook -- will want to wield similar control over their marketing profiles.

Dyson predicts that users will soon ask, "If I curate my profile... and if I can decide which of my friends can see which part of my profile, why can't I do that for marketers?"

It's an intriguing idea, but executing it will be another matter. There appears to be widespread agreement that very few consumers currently read privacy notices. Of course, it's not surprising that people don't interrupt their Web surfing to click on privacy links and then read policies written in page after page of dense legalese. But, Dyson said, that doesn't mean that people don't want answers to the basic questions, "Why are you showing me this ad? What is it you know?"

Of course, the position that users are to blame for the lack of privacy protection online directly contradicts a concurrent (and also typical) industry position that companies "already fully protect users' privacy interests" and that "additional regulations could harm the online advertising industry" -- a position expressed in another MediaPost article published the same day. Stay tuned for some links and discussion of alternative perspectives... however, I thought it would be useful to first point out how stagnant the debate about users' rights becomes when industry representatives and the press take this approach: repeating the same arguments over and over, taking the same (albeit contradictory) positions time and time again, even as business practices and the technologies themselves change and intensify at light speed.
