The Role and Value of Automation in Adopting New Ad Tech

Published July 25, 2018
Given the continuous advancements in digital advertising technology and the steady pace of consolidation in the industry, it’s almost inevitable that publishers and media organizations will upgrade their existing platforms and toolsets or migrate to entirely new ones. That’s true of all of the essential components of the ad tech stack, including order management systems (OMS), ad servers and billing software.

It’s no surprise, then, that publishers and media firms are looking for the most efficient and effective ways to handle upgrades and migrations from old to new systems. That’s where automation tools come in. Automating key data extraction and migration processes can certainly be highly beneficial. In fact, how you structure the data and how you use automation in data migration can make the difference between outstanding and mediocre ROI, and determine whether you fulfill the business case for new or upgraded systems.

But, as with so much else in the ad tech world, the devil is both in the details and in the big-picture context. Automation can’t be viewed as a one-size-fits-all undertaking, because the nuances of ad tech environments vary considerably across media companies. Business processes are different across media brands. So too are “secret sauce” product offerings and audience profiles.

So what’s the role of automation? In our experience, automated tools should do the heavy lifting in migrating massive amounts of data off legacy technology and onto new systems and in testing that the data comes over accurately and appropriately. The benefits are faster timelines, fewer resources needed and lower error rates in migrating data.

Mastering the data details

Automation alone doesn’t make for a perfect migration. Without careful mapping and modeling of data sets and product taxonomies, and a detailed understanding of the differences between old and new systems, publishers are at risk of “garbage in, garbage out” scenarios and “apples-to-oranges” mismatches. Few systems allow for direct, one-to-one transfers of data. Line items and fields simply don’t match up exactly across, say, an Operative OMS, Google’s DoubleClick Sales Manager (DSM) and the associated ad servers. Thus, it’s necessary to understand how specific types of data are treated in each system.

Similarly, successful migrations require that templates and mapping models reflect specific product, fulfillment and billing requirements. In other words, the use of automated tools should reflect business needs and objectives, rather than being defined by what automation can or can’t do.
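
To make this concrete, here’s a minimal sketch of what a mapping layer might look like. The field names below are purely hypothetical rather than drawn from any vendor’s actual schema; the point is that a template renames fields, routes them to their new homes (including billing-relevant attributes) and flags anything it can’t place for review rather than silently dropping it.

```python
# Hypothetical field mapping between a legacy OMS export and a new
# system's import schema. Names are illustrative only; real systems
# expose their own schemas, and few fields line up one-to-one.

LEGACY_TO_NEW = {
    "advertiser_name": "account.name",         # simple rename
    "io_number":       "order.external_id",    # moves to a nested object
    "flight_start":    "line_item.start_date",
    "flight_end":      "line_item.end_date",
    "net_cost":        "billing.line_amount",  # billing requirement carried over
}

def map_record(legacy_row: dict) -> dict:
    """Translate one exported row into the new system's field names.

    Fields with no mapping are collected separately so they can be
    reviewed rather than silently dropped.
    """
    mapped, needs_review = {}, {}
    for field, value in legacy_row.items():
        target = LEGACY_TO_NEW.get(field)
        if target:
            mapped[target] = value
        else:
            needs_review[field] = value
    return {"mapped": mapped, "needs_review": needs_review}
```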

Automation in the big picture

Thinking contextually about the key steps in a data migration clarifies the role and value of automation. Extraction, for example, is a relatively straightforward process to automate. Transformation templates can help automate the preparation of data for loading into a new OMS, ad server or other system. And the load process itself can, of course, be automated as well.
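
As a rough illustration of those three steps, the skeleton below strings together extract, transform and load for a CSV-based test run. Everything here – the file formats, function names and the shape of the template – is assumed for the sake of the example; real migrations typically work through each vendor’s export files or APIs.

```python
# A minimal extract-transform-load skeleton for a test migration run.
# File paths, field names and the template format are hypothetical.

import csv

def extract(export_path: str) -> list[dict]:
    """Pull rows from a legacy system's CSV export."""
    with open(export_path, newline="") as f:
        return list(csv.DictReader(f))

def transform(rows: list[dict], template: dict) -> list[dict]:
    """Apply a transformation template (old field -> new field) to each row."""
    return [
        {template[field]: value for field, value in row.items() if field in template}
        for row in rows
    ]

def load(rows: list[dict], import_path: str) -> None:
    """Write transformed rows to an import file for the new system."""
    if not rows:
        return
    with open(import_path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=rows[0].keys())
        writer.writeheader()
        writer.writerows(rows)

if __name__ == "__main__":
    template = {"io_number": "order_id", "flight_start": "start_date"}
    load(transform(extract("legacy_export.csv"), template), "new_system_import.csv")
```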

But these migration steps can – and should – be backed by robust, iterative testing and validation processes within sandbox environments. Accelerated validations and systematic verification ensure that data migrates in line with business requirements and project objectives. Again, though, automated tools must be designed and configured so they “know” what needs to be validated – not just that data came over, but that it was transferred to the right fields. Did quantitative data come over properly? Did open text and qualitative information end up in the right fields?
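
Here is a sketch of what those field-level checks might look like in practice – with the caveat that the field names and rules are invented for illustration:

```python
# Field-level validation after a test load: not just "did a row come
# over," but "did each value land in the right field with a sensible
# type." Field names and rules are illustrative.

def validate_row(row: dict) -> list[str]:
    """Return a list of validation failures for one migrated row."""
    errors = []

    # Quantitative fields must parse as numbers.
    for field in ("impressions", "net_cost"):
        try:
            float(row.get(field, ""))
        except (TypeError, ValueError):
            errors.append(f"{field}: expected a number, got {row.get(field)!r}")

    # A purely numeric value in a free-text field suggests a column
    # shifted somewhere in the transformation.
    notes = row.get("trafficking_notes", "")
    if notes and notes.replace(".", "", 1).isdigit():
        errors.append("trafficking_notes: numeric value in a free-text field")

    return errors

def validate_batch(rows: list[dict]) -> dict:
    """Map row index -> failures for every row that did not pass cleanly."""
    report = {}
    for i, row in enumerate(rows):
        failures = validate_row(row)
        if failures:
            report[i] = failures
    return report
```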

In this way, automation helps reduce validation time frames. It also allows project teams to refine transformation logic based on the results of initial migrations to staging environments. The goals are simple:
  • To ensure data transformation and transfer issues are identified and addressed before migration to a new OMS
  • To reduce the time it takes to import data after each test run, thereby facilitating a seamless migration into production environments
A series of automated test runs, with multiple validation points and other specific criteria for both quantitative and non-quantitative data, should be conducted prior to the production run. Again, this is how media companies can avoid the dreaded “garbage in, garbage out” and “apples-to-oranges” situations.
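
One way to picture the cycle is as a harness that migrates a batch to staging, validates it, hands failures back for template refinement and repeats until the batch passes. The sketch below reuses the hypothetical transform() and validate_batch() functions from above; refine_template() stands in for the human review step of adjusting mapping rules based on the failure report.

```python
# Hypothetical harness for iterative test runs against a staging
# environment: transform, validate, refine, repeat.

def refine_template(template: dict, failures: dict) -> dict:
    """Placeholder for the review step. In practice, a person inspects
    the failure report and adjusts the mapping rules; the template is
    returned unchanged here to keep the sketch self-contained."""
    return template

def run_migration_cycle(rows: list[dict], template: dict, max_runs: int = 5):
    """Repeat transform -> validate until a run comes back clean."""
    for run in range(1, max_runs + 1):
        migrated = transform(rows, template)  # from the ETL sketch above
        failures = validate_batch(migrated)   # from the validation sketch
        print(f"Run {run}: {len(failures)} rows failed validation")
        if not failures:
            return migrated                   # ready for the production load
        template = refine_template(template, failures)
    raise RuntimeError("Validation still failing after the final test run")
```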

So how do you make these adjustments? A combination of expertise and experience is essential. The key is to recognize how product taxonomies, account and campaign data, invoice line items, metrics, impressions, product definitions and attributes, and other elements differ across the two systems. For example, FatTail allows for “flexible” ad sizes, which can be entered or updated during line item creation, while other systems require ad sizes to be defined during product creation and do not allow them to be changed during order entry.
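
As a hedged example of how such a difference might be absorbed in the transformation logic – assuming, hypothetically, a target system that fixes ad sizes at the product level – one workable rule is to split a single source product into one target product per distinct size:

```python
# Illustrative handling of the ad-size mismatch described above: if the
# source system lets sizes vary per line item but the target fixes them
# at product creation, split the product by distinct size. All names
# are hypothetical.

from collections import defaultdict

def split_product_by_size(product_name: str, line_items: list[dict]) -> dict:
    """Group line items under size-specific product names, e.g.
    'Homepage Banner' -> 'Homepage Banner 300x250' and
    'Homepage Banner 728x90'."""
    by_size = defaultdict(list)
    for item in line_items:
        size = item.get("ad_size", "unspecified")
        by_size[f"{product_name} {size}"].append(item)
    return dict(by_size)
```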

More broadly, if your data is of questionable quality in system A, you may want to refine it before moving it to system B. It’s no wonder that moving to a new OMS often leads publishers to consider rationalizing their product portfolios and otherwise enhancing their data.

It’s best to know the common pitfalls and risks in moving from software A to software B. That way, automated tools and pre-configured templates can be adjusted to align with – and solve for – publishers’ specific needs, product portfolios and IT environments. The challenges and complexities are directly proportional to a company’s size and structure. Aggregating campaigns and data across systems for a multi-brand media conglomerate is much more difficult than doing so for a standalone property.

The bottom line is that automation accelerates effective transitions for publishers adopting new ad tech. But the most effective automation is designed with both specific company and IT details and big-picture business objectives in mind.

Are you ready to get more value out of your data?