Digital Analytics 2011 – Part 1

Since starting Ad World IT in January, I have been looking in more detail at the data that agencies, advertisers and publishers use to analyse digital activity.  This ranges from performance measurement through to behavioral and audience targeting.

Risks and Issues to the Future of Digital Data

This first article goes some way to summarising what I have found so far in relation to data issues, and I will continue with further articles on other areas of data analytics.  Hopefully you will find something of interest in the articles that follow.

Current and Future Data Issues

The first place to start is with the risks and issues relevant to current methods of capturing data.  With the new EU legislation that comes into force on 26th May 2011 (refer to my previous article of 16th May), there is every possibility that new ways of capturing data for analysis will be needed to address the following issues:

  • Privacy
  • Ownership
  • Quality
  • Volume

Methods used for capturing information about users and activity probably need to mature in the same way that the web has matured over the last few years, and stop relying on cookie-type technology that was never intended for the purposes for which it is now being used.  Perhaps with HTML5 and the ability to manage and use local data storage, a new way of tracking will be created that gives some benefit to the consumer as well as to the advertising industry.  In the meantime, HTTP cookies are the main vehicle for keeping track of usage on and across sites.
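As an illustration of the difference, the sketch below shows how a site's own script might keep a first-party visitor identifier either in a traditional HTTP cookie or in HTML5 localStorage.  It is only a sketch: the key name, the ID format and the function names are made up for the example and do not represent any particular vendor's implementation.

```typescript
// Illustrative sketch only: a first-party visitor ID stored two ways.
// VISITOR_KEY, newVisitorId and the other names are invented for this example.

const VISITOR_KEY = "visitor_id";

// A simple pseudo-random identifier (not a real UUID).
function newVisitorId(): string {
  return Date.now().toString(36) + "-" + Math.random().toString(36).slice(2);
}

// Classic approach: an HTTP cookie, sent to the server on every request.
function setVisitorCookie(id: string, days: number = 365): void {
  const expires = new Date(Date.now() + days * 24 * 60 * 60 * 1000).toUTCString();
  document.cookie = `${VISITOR_KEY}=${encodeURIComponent(id)}; expires=${expires}; path=/`;
}

// HTML5 approach: localStorage, which stays in the browser and is only
// shared with a server if the page's own scripts choose to send it.
function readOrCreateVisitorId(): string {
  let id = localStorage.getItem(VISITOR_KEY);
  if (!id) {
    id = newVisitorId();
    localStorage.setItem(VISITOR_KEY, id);
  }
  return id;
}
```

The practical difference is that a cookie travels to the server with every request to the site, whereas localStorage stays in the browser unless a script explicitly transmits it, which is part of why local storage is sometimes suggested as a more consumer-controlled alternative.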

Privacy

Even though laws are being introduced, and some feel more will come, there is confidence that if the industry can prove it is self-regulating and protecting internet users, then laws will not be needed that could hinder the advertising business from providing better quality, more relevant advertising to the consumer.  The IAB are representing the industry in this regard and there is a feeling that some headway is being made. However, some of the major players in the business are creating pre-emptive measures to protect themselves from any accusation of non-compliance.  A European site has been created by the IAB to help with self-regulation and to explain behavioral advertising to consumers: http://www.youronlinechoices.com/uk/.  For publishers taking part, it enables consumers to turn off behavioral advertising when visiting their sites.  It does not turn off advertising, but it means the advertising will not be as personal or potentially as relevant.

As with most regulations of this sort, it is still unclear what it really means, and unfortunately it will probably only become clearer through the courts.

Ghostery is a neat way for consumers to find out what is being stored or forwarded about their activity: http://www.ghostery.com/.  Tests using this tool have shown a surprising number of different services being called, but even more surprising is how quickly the page still loads!

Ownership

Publishers are becoming much more aware of the value of data and how it relates to the users of their sites in terms of demographics and activity.  The issue of publishers discovering third parties using and collating their data bubbles to the surface now and again, but one day it might well boil over.

This can be a difficult area. Depending on who you speak to, the advertiser, the agency and the publisher will all have claims over the same data and feel strongly about it.  The benefit for the publisher is that they have access to all of the data and can relate it to registration information.

How this would ever be controlled is impossible to say, so I assume it will continue to be handled through contractual agreements and self-regulation of sorts. But data is money, so this will be an ongoing battle.

I am sure that once again legislation will play a part in making this clearer, and there is no doubt that due to the perceived value of data these days, issues will arise more frequently in the future.

Quality

The online community prides itself on having real-time, accurate information about consumers and what they are doing, and pitches this against offline media that relies on surveys and panels.  This is true to an extent, especially for factual data and analysis. However, a great deal of analysis comes down to interpretation, so depending on the researcher or the application, the same data can produce different answers and result in different proposed plans.

Due to the current methods of tracking users, it is impossible for data to be 100% accurate, and as users move from site to site the data becomes fragmented and possibly incomplete, depending on what the publisher is using for their analysis.  I am sure that is why companies such as ComScore, who sometimes get criticised for using a panel and survey method, are so successful: at least the panel, some 2 million members, is constantly and consistently monitored across all of its activity.

One area that kept coming up in my conversations was the “freshness” of data and how much this can impact analysis and results.  In my mind this is all part of the quality of information, but it is seriously affected by the volumes discussed below.

Volume

The volume of data is one of the biggest challenges for data processing and analysis tools, as the data and elements that need to be captured are growing all the time.  The number of consumers using online media is growing, as are the different types of media format being used: the increase in video, the looming connected TV avalanche and the integration of mobile data with traditional digital. These added complications increase volumes tremendously.

Technology is keeping up and, with the right architecture and power available, it can provide excellent performance. But the hope has to be that the data set can be kept at a manageable level as the inevitable increase happens over the next few years.  The best way of making sure this is the case is to only hold on to data that is of good quality and relevant to the analysis that is required. To achieve this, tracking and identification have to improve, and irrelevant data should be identified and, if possible, stopped at source, and certainly before any serious processing takes place.
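As a rough illustration of stopping irrelevant data at source, the sketch below filters incoming tracking events before they reach any storage or aggregation.  The event shape, the accepted event types and the freshness cut-off are all assumptions invented for the example, not a real product's schema.

```typescript
// Illustrative sketch only: drop unidentifiable, unwanted or stale records
// before any heavy processing. TrackingEvent and isRelevant are made up here.

interface TrackingEvent {
  visitorId?: string;
  siteId?: string;
  eventType: string;
  timestamp: number; // milliseconds since epoch
}

const ACCEPTED_TYPES = new Set(["impression", "click", "conversion"]);
const MAX_AGE_MS = 7 * 24 * 60 * 60 * 1000; // discard events that are no longer "fresh"

function isRelevant(e: TrackingEvent, now: number = Date.now()): boolean {
  if (!e.visitorId || !e.siteId) return false;         // cannot be identified
  if (!ACCEPTED_TYPES.has(e.eventType)) return false;  // not needed for the analysis
  if (now - e.timestamp > MAX_AGE_MS) return false;    // too old to be useful
  return true;
}

// Filter at the point of collection, before storage or aggregation.
function acceptBatch(batch: TrackingEvent[]): TrackingEvent[] {
  return batch.filter(e => isRelevant(e));
}
```

The point is simply that the earlier an unidentifiable or stale record is dropped, the less of it has to be stored, moved and processed downstream.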

Summary

This is by no means an exhaustive coverage of the topic and I am sure many will disagree with some of my points. However, it touches on some of the key areas and will hopefully generate a discussion.  I am always happy to receive comments, positive or negative, to gain other people's perspectives and to educate me further in this area.  I am also always willing to discuss matters on the phone.

Telephone: +44 (0)20 7193 6879

Email: chris.humphries@adworldit.com


