I don’t know what it would take for history not to repeat itself.
The story goes something like this… Company X figures out a cool new way to collect and analyze lots of data for the purposes of better ad targeting or delivering better product recommendations. Company X comes up with a decent privacy architecture because they care about privacy. They launch. Things are going OK but business isn’t growing as fast as the investor presentation promised. Then, in a confluence of greed and idiocy, Company X does something crazy, sleazy and/or deceptive to make the business grow faster. Inevitably, someone finds out. They dig in and investigate. They write a blog post. It gets picked up. Then the lawsuits come. Then execs start resigning.
The latest story to follow this pattern is that of NebuAd, the company that snooped ISP traffic and surreptitiously modified, amongst others, Google’s home page in order to drop cookies on hapless consumers’ machines (that’s a simplification of what actually went on). I read some of the NebuAd whitepapers a while back and was struck by how they seemed to have taken the right steps from a technology standpoint to protect privacy. They were anonymizing IDs, using double hashing and other security techniques to break the link between personally identifiable information (PII) and the ad targeting profile they had created by observing traffic patterns. All that went to waste when they decided to act badly.
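The unlinking technique those white papers describe can be sketched roughly like this — a minimal illustration of salted double hashing, not NebuAd’s actual implementation; the function and salt handling here are my assumptions:

```python
import hashlib
import os

# A secret salt held only by the ad system. Without it, nobody who sees
# the stored profile IDs can recompute them from raw identifiers.
SECRET_SALT = os.urandom(32)

def profile_id(pii: str) -> str:
    """Derive an opaque profile ID from PII via two rounds of hashing.

    The first hash discards the raw identifier (say, an IP address or
    account name); the second hash, keyed with the secret salt, makes
    dictionary attacks on common values impractical for anyone holding
    only the stored IDs.
    """
    first = hashlib.sha256(pii.encode("utf-8")).digest()
    return hashlib.sha256(SECRET_SALT + first).hexdigest()

# The targeting profile is keyed by the opaque ID, never by the PII itself.
profiles = {profile_id("203.0.113.42"): {"interests": ["sports", "travel"]}}
```

The point of the design is that the behavioral profile and the person are joined only through a one-way, salted derivation — which, as the NebuAd story shows, protects nothing if the company then misbehaves elsewhere.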
The point that gets lost in much of the privacy analysis in situations like this is that it is not the collection of information about consumer behavior that’s the problem. It is what companies do with that information. Or, in some cases such as NebuAd, what they do independently of that. Modifying the traffic coming from major Web properties is a terrible idea that has absolutely nothing to do with data collection and building consumer profiles.
Personally, I don’t see a problem with companies tracking consumer behavior and building ad targeting profiles as long as how they use the information is legit. Advertising is just another form of content. Targeting adds relevance and increases the quality of the consumer experience. When content is relevant, consumers like to see it. For example, in the age of TiVo, I have friends who watch the Super Bowl just to see the ads (yes, they fast-forward the plays).
What does it mean to be legit in this context? First and foremost, it means protecting the privacy of individuals. There are lots of ways to target without disclosing PII to advertisers. Second, it means clearly documenting and explaining what you are doing. No shady, secret stuff. Absolutely no deceptive behavior. Third, in some cases, it means allowing for a clear opt-out mechanism (unless the entire program is opt-in in the first place).
In every industry there are shady companies that misbehave and, inevitably, there are cries for more regulation. The latest one I’ve seen (and the reason for this post) comes from the mobile space, as reported by Adotas.
ADOTAS — Two consumer groups demanded today that the Federal Trade Commission launch an investigation into the mobile market, focusing especially on practices that they say compromise user privacy.
The Center for Digital Democracy and the U.S. Public Interest Research Group jointly petitioned the FTC and asked that the agency look into alleged mobile marketing privacy threats and inappropriate practices targeting children, adolescents, and multicultural consumers.
The reasons for the request are that “as our petition makes clear, mobile marketers have refined a wide range of sophisticated practices that allow them to track, analyze, and target millions of Americans who increasingly rely on their phones for information.” I have an issue with that positioning. I don’t think the mere act of tracking and targeting is objectionable. And I don’t see regulation as a good solution. Mobile media is an emerging, fluid market. I can’t see how the FTC will be able to regulate it in a meaningful way without substantial inefficiencies.

I would suggest an alternative approach. An industry, in this case the mobile industry, should develop clear, testable guidelines for acceptable privacy architectures and tracking/targeting behavior. Testable in this context means that it should be possible to certify within reason whether a vendor is or is not following the guidelines. There is an opportunity to leverage technology to protect privacy and identify bad players. Once the bad players are identified, the market takes over. You know how it goes. The lawsuits come. Then execs start resigning.
Pingback: Even Opt-In is Not Safe « HighContrast
The outcry against behavioral targeting reminds me of the tree huggers who insist we all dress in sackcloth and sandals and stop driving cars (or fly planes) because they ‘destroy the planet’…
Targeting is inherently positive and pro-consumer, an attempt for a better match of offerings to people’s needs. Relevance is the keyword and consumers prefer (well-)targeted offerings to mass broadcasts (or, below-the-line, outright spam). Behaviour is proven to be a much better indicator of needs than previously (and in places still widely) used demographics.
I notice that the latter — demographics — is in the wording of the quoted ADOTAS protest, and they may be right (not all children, adolescents, or ‘multicultural consumers’, whatever that means, necessarily have the same needs).
Behaviour is another matter: if a kid sends a lot of SMS messages, or downloads music to their phone – that’s (far more certainly) what they need. And they love the relevant (!) adverts they receive. Young subscribers of the free / ad-supported mobile operator Blick respond in surveys that they want more (!) adverts.
Evolving technology just enables us to analyse more data faster and automate the decisioning – turn the insight into relevant offerings. Like the business process and model itself, targeting technology isn’t ‘from the devil’ (as some would like to position it).
Like so many good things (from bread knives to nuclear energy), these marketing tools have the potential to become evil when greed kicks in and brings questionable ethics. Not a case for (over)regulation, but an opportunity (I strongly support Sim’s thoughts) to use that very same technology to protect consumers from undesirable behaviours.
Just my usual $ 0.02 of mindless chatter –
V.
Pingback: What targeted advertising and nuclear power have in common « HighContrast