Much Ado About Nothing: Why the Cookie Craze?

In the Wired article “Stop the Do Not Track Madness,” Lauren Weinstein takes issue with the controversy around “Do Not Track” functionality, which would let browsers signal that a user’s activity should not be tracked via cookies. As he notes, “emotion and political gamesmanship have often replaced common sense and logic to no good end.” Having worked with internet technologies since the development of ARPANET in the ’70s, he brings a perspective to this issue that I respect.

We see groups making an emotional appeal to the press and to consumers that cookies are by default a bad thing and need to be stopped. Politicians seem happy to use this argument to score points, but they don’t truly understand how the technologies involved work or the impact their proposals would have. There have been earlier efforts to build privacy controls directly into the browser – the Platform for Privacy Preferences Project (P3P), for example – but those standards were never widely adopted and their impact was minimal (I believe Internet Explorer is the only browser that supports P3P). To Mr. Weinstein’s point, preventing data collection and reducing the value of web ads to advertisers is going to hurt websites that offer content and services at no cost to the visitor.

Industry self-regulation efforts to date have worked to provide consumers with transparency and multiple levels of control over the interest data collected about them online.

  • The first level of control the consumer has is the cookie itself – cookies are temporary by design: they expire, and the browser can block or delete them at any time.
  • The second level of control is restricting the kind of data that companies can collect.
  • A third level of control, offered by most companies in the space, is full transparency: showing consumers the information held in a cookie about them and giving them the ability to change those preferences or delete the cookie altogether.

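That first level of control rests on the mechanics of the cookie itself: its lifetime is capped by the server that sets it, and erasing it takes nothing more than another Set-Cookie header. As an illustrative sketch (not any particular vendor’s code – the `segment` name and 30-day lifetime are invented for the example), here is how those semantics look with Python’s standard `http.cookies` module:

```python
from http.cookies import SimpleCookie

# A targeting cookie with a capped lifetime: Max-Age tells the browser
# to discard it after 30 days, so the storage is inherently temporary.
cookie = SimpleCookie()
cookie["segment"] = "auto-intender"
cookie["segment"]["max-age"] = 60 * 60 * 24 * 30  # 30 days, in seconds

header = cookie.output(header="Set-Cookie:")
# e.g. 'Set-Cookie: segment=auto-intender; Max-Age=2592000'

# "Delete the cookie altogether" is just another Set-Cookie with Max-Age=0 --
# the browser discards the cookie immediately on receipt.
deletion = SimpleCookie()
deletion["segment"] = ""
deletion["segment"]["max-age"] = 0
```

The same mechanism backs a preference manager’s opt-out button: it simply re-sets the cookie with a zero lifetime.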
We were one of the early advocates for consumer transparency – our preference manager was one of the first projects our R&D team built. By collecting only aggregate data points, not collecting personally identifiable data, keeping that data for only a short period, and being transparent with consumers about which targeting attributes are associated with their cookies, we believe the trade-off – more relevant ads that are more valuable to advertisers and websites – makes for a positive web experience for all parties involved.

Power to the Publisher: Data Theft – Attribution as Well as Retargeting

A brilliant article on the blog ExchangeWire, “Cookie Directive: How To Kill Off European Publishers While Giving a Monster Monopoly a Competitive Advantage,” recently made me think about how a publisher’s data strategy has to plan not just for collection and targeting, but also for measurement and attribution.  A lot of conversations in the industry are now giving attention to the idea of data theft from publishers – at eXelate we’ve built a tool called DataShield to help publishers see what third-party tags are loading on their sites.  Rather than spot-checking for theft with tools like Fiddler or Firebug, DataShield keeps an ongoing log of these alerts and lets publishers audit which partners are firing through their ad units, social media widgets, etc.
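DataShield itself is proprietary, but the underlying idea – flag any tag on the page whose source is not a first-party domain – can be sketched in a few lines. This is a minimal illustration against a static HTML snapshot (real auditing tools hook the live page, where tags are injected dynamically), and the domain names are invented:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

class TagAuditor(HTMLParser):
    """Flag script/img/iframe tags whose source is not first-party."""

    def __init__(self, first_party):
        super().__init__()
        self.first_party = first_party
        self.third_party = []  # log of (tag, host) pairs found on the page

    def handle_starttag(self, tag, attrs):
        if tag not in ("script", "img", "iframe"):
            return
        src = dict(attrs).get("src")
        if not src:
            return
        host = urlparse(src).hostname or ""
        # Keep the tag unless it is the first-party domain or a subdomain of it.
        if host and host != self.first_party \
                and not host.endswith("." + self.first_party):
            self.third_party.append((tag, host))

# Hypothetical publisher page: one first-party script, one third-party pixel.
page = (
    '<script src="https://cdn.publisher.com/app.js"></script>'
    '<img src="https://pixel.example-dsp.com/sync?uid=123">'
)
auditor = TagAuditor("publisher.com")
auditor.feed(page)
# auditor.third_party -> [("img", "pixel.example-dsp.com")]
```

Run continuously and logged over time, this kind of audit is what separates an ongoing alert history from a one-off spot check in Fiddler or Firebug.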

But potential collection of data for retargeting is only one threat to the publisher’s business; another may be more alarming: in some cases, marketers attribute the results of their campaigns based on post-view conversion tracking.  In a previous role, we saw some CPA campaigns cannibalized when publishers had hard-coded text links or 120×60 buttons that cookied those same consumers for the same advertiser.  A consumer may have clicked on the 300×250 Vonage ad we served but not converted immediately; if they returned to Vonage later and signed up, the most recent pixel might have been the text link in their web email service, even though the user never looked at or clicked on that placement.
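That skew is easy to see in a toy model. The sketch below is illustrative only – it is not any ad server’s actual logic, and the touchpoints and dates are invented – but it shows how a naive “most recent pixel wins” rule credits a passive text link over the display ad that actually drove the click:

```python
from datetime import datetime

# Hypothetical touchpoint log for one converting user.
touches = [
    {"placement": "300x250 display", "type": "click", "ts": datetime(2011, 10, 1)},
    {"placement": "textlink in webmail", "type": "view", "ts": datetime(2011, 10, 5)},
]

def last_touch(touches):
    """Naive 'last pixel wins': the most recent touch gets all the credit,
    whether or not the user ever saw or clicked that placement."""
    return max(touches, key=lambda t: t["ts"])["placement"]

def last_click(touches):
    """Click-priority attribution: credit the most recent click if one exists,
    falling back to last touch only for view-only conversion paths."""
    clicks = [t for t in touches if t["type"] == "click"]
    return max(clicks or touches, key=lambda t: t["ts"])["placement"]

print(last_touch(touches))  # textlink in webmail -- the hard-coded pixel wins
print(last_click(touches))  # 300x250 display -- the ad that drove the action
```

Even this tiny refinement shifts credit back to the placement that earned it, which is exactly the conversation publishers should be having with their advertisers.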

Now that Google and Facebook tags are proliferating on publisher pages, publishers are no longer asking visitors to log in to their own sites (“I don’t want to know anything about my visitors, but I want Facebook to know everything about them”), and the way marketers attribute results to a campaign is going to be skewed toward the pixel rather than toward the environments whose placements, ad formats and creative drove meaningful interactions.

Publishers need to compete in the attribution battle and ask advertisers whether they’re going to be evaluated on metrics like post-view conversions.  Publishers spend a tremendous amount of time producing valuable content to draw visitors, offering high-engagement ad units and highly targeted audience segments – in short, creating premium ad placements that should be measured with effectiveness studies, like Nielsen NetEffect, that dig deeper into online media consumption and offline purchases.  Are publishers asking themselves how their advertisers are measuring campaign effectiveness?

Inefficient to Run Online Media Campaigns – Is it a Buying or Creative Issue?

I had the privilege to attend Pubmatic’s AdRevenue on October 13. Thanks to the Pubmatic team and Doug Weaver for putting such a great conference together.

During one of the panels, Martin Gilliard from The MIG referenced a common sentiment I’ve heard at many industry events: despite the technological efficiencies of interactive ad serving, it is still MORE expensive to run online marketing campaigns than offline ones (e.g., buying ads on TV).

With tools to automate the RFP process, buyer- and publisher-side ad serving, rich media technologies and all the other systems involved, the issue doesn’t seem to be the media buying process.  Is it instead the durability of the creative?  Is a TV schedule more efficient to deliver because all of those spots – and the ad impressions they represent – are executed against the same six pieces of 30-second creative?  Or the print schedule that runs throughout the year in 20-50 magazines and rotates the same eight full-page ads?

From my time on the interactive agency side, I can recall very few campaigns that ran over the course of months against the same creative and with fewer than three ad formats.  In TV, there’s the :15 spot, the :30 spot, the rare :60 spot and the :10 promo ID (which I was never able to keep on a media plan).  When clients asked about a spot burning out, the general response was that “a bad spot is burned after the first impression served, and effective spots can run for decades.”

On another panel at AdRevenue, Susan Grossman from MasterCard Advisors noted that the data they have access to lets them turn information into insights and measure effectiveness by tracking actual purchases.  If that kind of data is used to identify the right audience to target, and to surface the demographic and psychographic insights that help get the message right, online display ads should prove more durable and allow those conversations to work over longer periods of time.