Testing & Optimization Program Design

During my years at Ansira, I oversaw multiple optimization and testing programs for clients. These conversion rate optimization (CRO) programs focused on improving conversions and driving continuous learning from design experiments to improve the customer experience. Clients included Aflac, KinderCare, The Barrel Room, Michaels, GoHealth, and Chuck E. Cheese.

The work consisted of research, ideation, testing, and learning, repeated continuously. Our team leveraged a combination of analytics data and qualitative insights as the foundation for our design hypotheses and test builds.

What did I specifically bring to this work? I introduced and advocated for qualitative research as a type of "testing" that provided valuable learnings, expanding the definition of "testing" beyond purely analytical outcomes. I was also the solo designer on these projects, so I helped establish the programs from the ground up with my cross-functional internal teammates. One thing I am proud of is the fact that I taught myself how to code for this work so I could minimize the strain on our development team when they had reduced bandwidth.

ROLE

UX Researcher
UX Designer
Front-end Developer

TEAM

1 Designer (me)
1 Researcher (me)
1 Project Manager
2 Data Science & Insights Leads
1 Developer (sometimes me)
1 QA Specialist

PROJECT DURATION

2019 - 2023

TOOLS

Optimizely
VWO
Adobe Target
Monetate

Example test design visuals for client - The Barrel Room

OVERVIEW OF THE PROCESS & MINDSET

There's no such thing as a bad result; there are only new learnings.

The foundation of these programs was learning. One of our primary goals was to establish a culture for our clients that integrated the testing mindset into their work and spread it across their internal teams. Any result - positive, negative, or neutral - was a learning opportunity that helped guide future design decisions. Creating a culture that viewed testing this way was what made these programs successful. To do so, we designed a process that helped clients incrementally and continuously test and learn. Over the years, the process evolved as we brought on new clients, helped them adopt that same mindset, and learned from our own project design. The refined process (as of 2022) is shown below in detail:

RESEARCH & ASSESSMENT

Identifying areas to focus testing efforts and understanding why

User Interviews

Usability Testing

Heatmaps

Heuristic Analysis

Session Recordings

Surveys

Google Analytics

When kicking things off with a client, we took about a month (give or take) to review the product experience and conduct initial research. To form hypotheses and potential solutions, we first needed to identify key opportunities.

Determining the problem areas first and foremost was the best way to prioritize where to focus our efforts. For each client, we tracked toward a specific set of business goals outlined during project onboarding. Since the clients were in different industries, these goals differed; often they were to improve commerce conversions, increase engagement metrics, or drive growth outcomes such as increasing leads.

As the solo designer for this work at the agency, I'd create, in collaboration with the Data Science & Insights team, a comprehensive report of our findings that we shared with clients to kick off ideation.

Example of some research and discovery insights shared in the initial meeting with clients

TESTING IDEATION - QUANTITY OVER QUALITY

Forming hypotheses, brainstorming ideas, and getting client involvement early

Once there was an understanding of where the problem areas were, I ideated and sketched ideas for potential testing opportunities in preparation for our initial report with clients. I then created a Miro template for the collaborative workshops I facilitated with client stakeholders. These working sessions were a critical step in getting a variety of individuals involved and invested in the optimization program and testing culture we were promoting. During each session, we aimed as a group to come up with 75-100 testing ideas centered on the problem areas identified in the discovery phase. With 7-10 people at each of these workshops, we never had a problem accomplishing that!

LEARNING NEW THINGS - CODING

To ease development workload in 2020, I took courses and taught myself HTML, CSS, and some JavaScript

In 2020, I taught myself how to code so that I could not only design the test variants but also build them for clients within the A/B testing platform. The development team at that point had zero bandwidth and was feeling strapped. I didn't want this to hinder our outcomes as a team, so I figured there was no better time to learn something new. I enrolled in Treehouse courses, practiced manipulating code in the visual editor, and learned how to create custom-built tests in Adobe Target and Monetate. I worked with the QA team to fix identified bugs and ensure everything was set up properly prior to launch. For a time, I was effectively the developer on these projects.
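
For a sense of what these builds looked like: both Adobe Target and Monetate allow custom JavaScript to run for visitors bucketed into a variation. Below is a minimal sketch of that kind of variant code; the selector, class name, and copy are hypothetical, not from an actual client test.

```javascript
// Minimal sketch of a custom-coded test variant. The selector, copy, and
// class name are hypothetical stand-ins, not an actual client build.

// Variant code can execute before the page finishes rendering, so wait
// for the target element to exist before modifying it.
function whenReady(selector, callback) {
  var el = document.querySelector(selector);
  if (el) return callback(el);
  var observer = new MutationObserver(function () {
    var found = document.querySelector(selector);
    if (found) {
      observer.disconnect();
      callback(found);
    }
  });
  observer.observe(document.documentElement, { childList: true, subtree: true });
}

// Variation: update the primary CTA's label and add a styling hook.
whenReady('.hero .cta-button', function (cta) {
  cta.textContent = 'Get My Free Quote';
  cta.classList.add('cta-button--variant');
});
```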

PRIORITIZATION & RANKING - THE SCORECARD

Creating a scorecard to prioritize opportunities based on important factors that removed bias

Everyone had their favorite test ideas - often the ones they came up with (ha!). In order to reduce bias and establish a system to rank the test ideas, we created a scoring framework that weighed the "value" and the "cost" of each test proposal based on a preset list of factors. Since A/B testing is scientific in nature, I partnered closely with our PM and Data Science & Insights team to map out the test duration, minimum conversion rate change, and page traffic required to reach statistically significant results.
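
As a concrete illustration of that math: a generic back-of-the-envelope estimate (Lehr's approximation for a two-sided test at 95% confidence and 80% power) looks like the sketch below. This is not the Insights team's exact formula, and the numbers are illustrative, not from the actual programs.

```javascript
// Rough sample size per variation using Lehr's approximation
// (95% confidence, 80% power, two-sided test):
//   n ≈ 16 * p(1 - p) / delta^2
// where p is the baseline conversion rate and delta is the absolute
// lift we want to detect.
function sampleSizePerVariation(baselineRate, relativeLift) {
  var delta = baselineRate * relativeLift; // absolute change to detect
  return Math.ceil((16 * baselineRate * (1 - baselineRate)) / (delta * delta));
}

// Detecting a 10% relative lift on a 3% baseline conversion rate:
var perVariation = sampleSizePerVariation(0.03, 0.10); // ≈ 51,734 visitors

// Rough duration: total sample across both arms divided by daily traffic.
var days = Math.ceil((perVariation * 2) / 12000); // ≈ 9 days at 12k visits/day
console.log(perVariation, days);
```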

The scores were totaled, and the resulting output gave us a system that objectively ranked ideas and allowed us to plot them out on a calendar.

The Value

The estimate of the potential impact

The value half of the scoring was made up of 11 inputs, each ranked on a 1-to-5 scale. The factors included client and team ranking, position in funnel, potential audience exposure, and the type of problem the test addressed (usability, qualitative insight, etc.).

The Cost

The estimated effort to create the test

The cost half of the scoring comprised more functional factors, i.e., what it would take to implement. There were three primary factors: client development needs, test duration, and ease of implementation.
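
To make the mechanics concrete, here's a minimal sketch of the scorecard math. The real framework's weighting isn't reproduced here, and combining the totals as a value-to-cost ratio is an assumption made for this example.

```javascript
// Illustrative scorecard math. The real framework had 11 value inputs and
// 3 cost inputs, each scored 1-5; combining the totals as a value-to-cost
// ratio is an assumption made for this sketch.
function scoreTestIdea(valueInputs, costInputs) {
  var sum = function (xs) { return xs.reduce(function (a, b) { return a + b; }, 0); };
  return sum(valueInputs) / sum(costInputs); // higher = more value per unit of effort
}

// A high-value but costly idea vs. a modest but cheap one:
console.log(scoreTestIdea([5, 4, 4, 3, 5, 4, 3, 4, 5, 3, 4], [5, 4, 4])); // 44 / 13 ≈ 3.4
console.log(scoreTestIdea([3, 3, 2, 3, 4, 3, 2, 3, 3, 2, 3], [1, 2, 1])); // 31 / 4  = 7.75
```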

ESTABLISHING THE PROGRAM - THE ROADMAP

With ideas ranked, we mapped tests onto a calendar to develop the roadmap for the upcoming six months

Once the tests were ranked, we had a clear order established for the hypotheses we wanted to test. I worked with the Project Manager and Insights team to plot the tests on a calendar, factoring in build time, duration based on page traffic, and QA. The output was a Gantt chart view of the upcoming months with tests plotted. Note that because this process was iterative, the roadmap did shift if test results revealed an insight that made us rethink the defined order. It was fluid in that regard.

THE RESULTS

Below you'll find a sampling of test results and recommendations based on learnings across the CRO programs.

TESTING EXAMPLE #1 - AFLAC

How might we improve the quality of incoming leads?

Increasing lead value by an estimated $1,244,890

THE PROBLEM

The Aflac website was littered with CTAs prompting users to "Request a Quote". These led users to a form where they'd input their information, submit, and wait for a response. Aflac, however, had identified that call leads were far more valuable. They needed to introduce a way, especially on mobile devices, to make it easy for a user to connect with the sales team.

THE HYPOTHESIS

Giving users the option to choose how they wanted to connect with Aflac to receive a personalized quote would generate more qualified call leads.

- Control = Singular "Request a Quote" CTA
- Variation = Split sticky CTA with "Request a Quote" and "Call Us" options
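
As a sketch of how a variant like this could be built in the testing platform, the snippet below injects a sticky split CTA bar. The markup, class names, URL, and phone number are hypothetical stand-ins, not Aflac's actual implementation.

```javascript
// Hypothetical variation code: inject a sticky split CTA bar.
// All selectors, URLs, and the phone number are illustrative.
var style = document.createElement('style');
style.textContent =
  '.sticky-cta { position: fixed; bottom: 0; left: 0; right: 0; display: flex; z-index: 1000; }' +
  '.sticky-cta a { flex: 1; text-align: center; padding: 14px; color: #fff; font-weight: bold; text-decoration: none; }' +
  '.sticky-cta .quote { background: #003264; }' +
  '.sticky-cta .call { background: #0070d2; }';
document.head.appendChild(style);

var bar = document.createElement('div');
bar.className = 'sticky-cta';
bar.innerHTML =
  '<a class="quote" href="/request-a-quote">Request a Quote</a>' +
  '<a class="call" href="tel:+18005550100">Call Us</a>';
document.body.appendChild(bar);
```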

THE RESULT

- 89.78% increase in phone number clicks
- 4,622 more phone call leads for the variation
- A projected incremental increase in lead value of $1,244,890, based on Aflac's defined lead valuations
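
For context, that projection works out to roughly $269 per incremental call lead ($1,244,890 ÷ 4,622 ≈ $269).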

TESTING EXAMPLE #2 - CHUCK E CHEESE

How might we make it easier for users to compare and select a kids birthday package?

Increasing birthday funnel entrances by 35.58% and conversions by 5.62%

THE PROBLEM

Heatmaps revealed that users weren't interacting with the package options shown on the Kids Birthday Party page. The tabbed structure didn't provide an overview of the packages and their details; rather, it only revealed details for the most expensive, most comprehensive package, so users were missing the fact that there were other options. Session recordings validated that users were having a hard time navigating the page.

THE HYPOTHESIS

Making it easier for users to see all package options in relation to one another for quick comparison will increase booking funnel entrances.

- Control = Tabbed package options on Kids Birthday Party Page
- Variation A = Side-by-side party package cards
- Variation B = Side-by-side party package cards with streamlined navigation

THE RESULT

Both variations improved on the control, but Variation B showed statistically significant improvement in both:
- Birthday booking visits
- Birthday booking conversions

TESTING EXAMPLE #3 - THE BARREL ROOM

How might we make it easier for mobile users to access the cart?

Increasing proceed to checkout rate by 27.05%

THE PROBLEM

Google Analytics showed that the largest drop-off in the conversion funnel was occurring from cart to checkout, particularly on mobile. At the time, when users added an item to the cart, there was no indication that an action had been taken; the interface provided no visual feedback.

The cart side flyout was well received on desktop. There was an opportunity to incorporate this same pattern into the mobile experience.

THE HYPOTHESIS

Introducing a modal showing the items that have been added to the cart will increase the number of users completing orders.

- Control = Business as usual mobile cart notification
- Variation A = Cart modal on mobile once an item has been added to the cart

THE RESULT

Giving users a clear visual indication of the products in their cart and the associated costs encouraged more users to proceed to checkout.
- Visual feedback is important to the user experience

TESTING EXAMPLE #4 - THE BARREL ROOM

How might we make it easier for users to find products?

Increasing search interaction by 2.41% and order conversions by ~1%

THE PROBLEM

The Barrel Room website had a large product inventory, but the search bar was hidden behind the search icon. We also observed that on product category pages like Red Wine, subcategories such as Red Blends weren't easily accessible: users had to apply filters to view them rather than having dedicated landing pages.

Qualitative insights revealed that most users shopping for wine have a particular varietal in mind, and that is where their journey begins.

THE HYPOTHESIS

Updating navigation behaviors will increase the number of users adding items to their cart.

Test #1

- Control = Business as usual navigation
- Variation A = Search bar expanded and exposed in the navigation

THE LEARNINGS

Users default to search. They are used to this method of discovery and use it when available.
- Increase in search interaction
- Minimal impact on order conversions, but improved discovery metrics

WHAT I LEARNED FROM THIS WORK

- Refining project processes is itself a continuous learning exercise. Quarterly retrospectives helped everyone on the project continually identify ways to improve our work.

- Stakeholders enjoy being involved in the process, and collaboration is key; creating a true partnership is crucial to project success.

- It's important to push yourself and try new things. Challenging myself to learn to code made me a better designer who can communicate more effectively with development teams. A growth mindset is beneficial.