
For many companies, 80% of site engagement and revenue comes from the 20% of visitors they value most – this is just one of the many rationales for devising and implementing personalization strategies in your marketing roadmap. Anthony Mills, Manager, Analysis and Optimization at Cardinal Path, delivered this finding, along with many other details and approaches, during his highly informative webinar, 5 Personalization & Testing Strategies to Upgrade Your Customer Experience.

After speaking to a virtual conference room full of digital marketers at the American Marketing Association (AMA), Mills took questions from those who wanted to gain greater insight into what it takes to devise a successful approach to personalization. Here is a round-up of the Q&A:

Curious about your thoughts on what makes a solid testing program and the key steps in creating one. Can you give us a brief overview?

One of the success factors for a high-impact testing program is a backlog of quantified, vetted, and prioritized testing ideas. Testing ideas are going to come from many different places in your organization. I suggest you centralize testing and then attach numbers to each idea (KPI baseline, average daily visitor traffic through the experience, estimated run time, expected KPI lift, etc.) so that everyone knows what is required of each test idea.

To have a thorough testing backlog, you will need to have somebody who has the time and knows how to work analytics to find the data behind these test ideas. Not all ideas can be quantified, but most can. When organizations have a thorough backlog of ideas, with estimated revenue or engagement impact, we can start sprinting against that backlog and make some gains against it, so that our insights build, and we can refine from there.
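The backlog approach described above can be sketched in code. The fields and the scoring rule below are illustrative assumptions, not Cardinal Path's actual methodology: each idea carries the quantities Mills lists (KPI baseline, daily traffic, expected lift, run time), and the backlog is ranked by a simple expected-impact-per-day score.

```python
from dataclasses import dataclass

@dataclass
class TestIdea:
    """One entry in a centralized testing backlog (illustrative fields)."""
    name: str
    kpi_baseline: float    # current KPI, e.g. order rate as a fraction
    daily_visitors: int    # average daily traffic through the experience
    expected_lift: float   # hypothesized relative lift, e.g. 0.05 = +5%
    est_run_days: int      # estimated run time to reach significance

    def expected_daily_impact(self) -> float:
        """Rough extra conversions per day if the expected lift holds."""
        return self.daily_visitors * self.kpi_baseline * self.expected_lift

def prioritize(backlog: list[TestIdea]) -> list[TestIdea]:
    """Rank ideas by expected impact per day of runtime, highest first."""
    return sorted(backlog,
                  key=lambda t: t.expected_daily_impact() / t.est_run_days,
                  reverse=True)

# Hypothetical backlog entries for illustration
backlog = [
    TestIdea("Checkout CTA copy", 0.030, 8000, 0.05, 14),
    TestIdea("Homepage hero image", 0.030, 25000, 0.01, 21),
]
for idea in prioritize(backlog):
    print(idea.name, round(idea.expected_daily_impact(), 2))
```

Note that the lower-traffic checkout test ranks first here because its larger expected lift sits closer to the conversion event, which matches the "work backward from the event" advice later in the Q&A.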

Can you emphasize which tests deliver the best ROI?

What we at Cardinal Path traditionally like to do from an ROI standpoint, and it goes against prevailing wisdom, is to start testing hypotheses that are closest to the event you are trying to affect. So let’s say you are an Ecommerce company starting off the year and you want to improve order rate as one of the key KPIs for your business.

In that case, we would look at how much we could potentially lift that order rate over the year. I start with the overall annual KPI growth targets we hope to achieve. Tests in the backlog should align to that objective. Let’s take an example where for someone to actually complete an order, they have to be on the check-out screen – the last page where they are about to complete that order.

If we find that a fair amount of people are exiting from that particular page, we start with those sections first, because they generally have a more one-to-one revenue relationship without having to do a whole bunch of mapping. By doing that upfront, if we can get an early lift, it starts to pique interest in an organization. I like to start next to the event we are trying to lift, and then move up the funnel. So, I like to work backward.

Can personalization be applied to professional businesses, such as law firms?

Yes, absolutely. We have provided personalization testing for financial institutions. Banks, for example, have lead forms, often with complex needs – clients could be looking for asset servicing, investment banking, or wealth management advice. Some of the lead forms have greater value and higher lead scores than others. We may find that 10% of this audience engages in very valuable activities – meaning they go beyond overall investment services and look specifically into which funds and services they might be interested in.

And so when it came to the lead form, rather than saying, “Give us your name,” or “Tell us what you’re interested in, and we will reach out to you,” we identified the person the lead would actually be speaking with. We geared this toward those who showed a high level of interest and need for someone to reach out to them personally.

Rather than having a generic lead form, we would provide the name of the person from the bank who would be personally reaching out to them, along with branch details. We would tell the client to expect a call from this staff person, so they had the expectation, a name, and a face. This meant there was no longer a vagueness for the client after submitting their information.

We do a lot of print. Curious if you have any best practice for tracking print to digital, other than vanity URL in the print piece?

Print is tough. Vanity URLs are my go-to. In the absence of vanity URLs, the other option is a loose association based on which markets the print piece was sent to. If it's a national campaign, that is going to be tough. If it's a print campaign that goes out in a specific city, with a particular offer, you can use geolocation features and start to assume who may have been exposed to that print campaign. It's fuzzy, but you can get closer, and you can test it.

You can even create a small holdout group in that city that doesn't receive a personalized experience. For example, 80% receive a personalized experience and 20% are held out, and you can home in on which cities or markets the campaign reached.
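The 80/20 holdout split above needs to be stable per visitor, so the same person stays in the same group across sessions. A minimal sketch of one common way to do this – deterministic hashing of a visitor ID, an assumption on my part rather than anything named in the webinar:

```python
import hashlib

def in_holdout(visitor_id: str, holdout_pct: float = 0.20) -> bool:
    """Deterministically assign a visitor to the holdout group.

    Hashing the ID yields a stable, roughly uniform bucket in [0, 1],
    so the same visitor always lands in the same group.
    """
    digest = hashlib.sha256(visitor_id.encode("utf-8")).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF
    return bucket < holdout_pct

# The split is repeatable: the same ID always gets the same answer,
# and across many visitors roughly 20% land in the holdout.
print(in_holdout("visitor-123"))
```

Comparing conversion rates between the 80% exposed group and the 20% holdout in a given city is then what lets you estimate the print campaign's incremental effect.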

The other way with print is to connect coupons to a Customer Relationship Management (CRM) or Point of Sale (POS) system – something that records when a particular coupon was used in a store. As long as you have tracking in downstream systems, you can incorporate that into your targeting criteria as well.

My team is currently running tests, but not seeing great results. Do you have any insights on what we might be missing?

When you do not see the results you are after, a couple of things could be happening. Typically, we don't run any tests until we know how long they will take to run and until we have a target and an expected lift on an identified KPI.
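Knowing "how long a test will take" up front usually comes down to a sample-size calculation. The sketch below uses the standard two-proportion formula for a 50/50 A/B split; the significance level, power, and even split are my illustrative assumptions, not a method attributed to Cardinal Path.

```python
import math
from statistics import NormalDist

def days_to_run(baseline: float, rel_lift: float, daily_visitors: int,
                alpha: float = 0.05, power: float = 0.80) -> int:
    """Estimate days needed for a two-arm A/B test with a 50/50 split.

    baseline: current conversion rate (e.g. 0.03 = 3%)
    rel_lift: smallest relative lift worth detecting (e.g. 0.10 = +10%)
    daily_visitors: total daily traffic entering the test
    """
    p1 = baseline
    p2 = baseline * (1 + rel_lift)
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance
    z_beta = NormalDist().inv_cdf(power)
    p_bar = (p1 + p2) / 2
    # Standard two-proportion sample-size formula, per arm
    n_per_arm = ((z_alpha * math.sqrt(2 * p_bar * (1 - p_bar))
                  + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                 / (p2 - p1) ** 2)
    return math.ceil(2 * n_per_arm / daily_visitors)

# e.g. a 3% order rate, hoping to detect a +10% relative lift,
# with 5,000 daily visitors entering the test
print(days_to_run(0.03, 0.10, 5000), "days")
```

A useful property of the formula: halving the detectable lift roughly quadruples the required sample, which is why small-lift tests on low-traffic pages can take unworkably long to conclude.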

If a test is unsuccessful, it is likely that the analytics weren't thoroughly vetted before the test went live. For instance, you may run an A/B test and find no significant lift from a creative variation that you thought would beat the control.

What you may find after running that test and breaking down its results in your analytics platform is that the majority of your traffic is return visitors, or comes from a particular medium or device, and that is really what is causing your KPI fluctuations. By knowing those segment-level performance differences, you can start teasing out why that creative isn't working for that group.
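The segment breakdown described above can be done with a simple tally over exported test rows. The field names and data here are hypothetical, standing in for whatever your analytics platform exports:

```python
from collections import defaultdict

# Hypothetical export: (variant, visitor_type, device, converted 0/1)
rows = [
    ("B", "return", "mobile", 0), ("B", "new", "desktop", 1),
    ("A", "return", "mobile", 1), ("B", "return", "mobile", 0),
    ("A", "new", "desktop", 0), ("B", "new", "desktop", 1),
]

# segment key -> [conversions, visitors]
stats = defaultdict(lambda: [0, 0])
for variant, visitor_type, device, converted in rows:
    key = (variant, visitor_type, device)
    stats[key][0] += converted
    stats[key][1] += 1

for key, (conv, n) in sorted(stats.items()):
    print(key, f"{conv}/{n} = {conv / n:.0%}")
```

Even a crude table like this can reveal that a flat overall result is really a lift in one segment canceled out by a drop in another – the pattern the answer above warns about.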

Do you have questions about testing & personalization? Contact us today.