
The group will also feature insightful programs such as AMAs with talented influencers in the optimization world.

To kick off our first AMA, Experiment Engine CEO Claire Vo spent over an hour answering thoughtful questions submitted by the community!

We rounded up the best questions and answers to help you gain perspective about her A/B testing philosophy.

@mikefiorillo: What’s one thing you know is true about CRO that almost no one in the industry agrees with you on?

I don’t know if this is going to make me any friends, but I would say that, by and large, the CRO industry as a whole overstates how truly data driven it is. I think more than is recognized, things are tested because they’re “liked”, and wins are easily constructed out of shoddy data analysis. I haven’t seen anyone do a really amazing, scientific analysis attributing revenue growth to specific CRO efforts over time.

We’re still stuck in tracking CRO performance test-by-test, but we’re not looking at the meta-analysis of how CRO fits within the ROI of our overall marketing or product efforts. EE wants to shine a light on that performance data. It’s yet to be seen how well the industry will hold up to that sort of scrutiny. I believe it has significant value and does drive revenue, but I want to see people really prove it.

@walt: How do you recommend quantifying longer-term goals (e.g. quarterly/annual growth KPIs – the kind of things investors need) given the volatility of testing and the uncertain duration of experiments?

1. You need to reduce the uncertain duration of your experiments as much as possible. @ej wrote a great post on the EE blog that outlines, based on our data, the ideal duration for an eCommerce test given your traffic, to maximize your chances of overall program success.
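For context, here is a back-of-the-envelope sketch of that traffic-to-duration math using the standard two-proportion sample-size formula; the numbers and the `required_days` helper are illustrative assumptions, not figures from EJ’s post:

```python
import math

def required_days(daily_visitors, baseline_cr, relative_lift,
                  z_alpha=1.96, z_beta=0.84, variants=2):
    """Rough duration needed to detect `relative_lift` at ~95% confidence / 80% power."""
    p1 = baseline_cr
    p2 = baseline_cr * (1 + relative_lift)
    pooled = (p1 + p2) / 2
    # classic two-proportion sample-size formula: visitors needed per variant
    n_per_variant = ((z_alpha * math.sqrt(2 * pooled * (1 - pooled))
                      + z_beta * math.sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
                     / (p2 - p1) ** 2)
    return math.ceil(n_per_variant / (daily_visitors / variants))

# e.g. 4,000 visitors/day, 3% baseline conversion, detecting a 10% relative lift
print(required_days(4000, 0.03, 0.10))  # about 27 days under these assumptions
```

The point is simply that traffic and the lift you care about detecting pin down how long a test has to run, so you can plan velocity around it instead of guessing.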

2. What I would do is set long-term targets for both your conversion rate performance over time (the output) and your conversion program performance (the input).

You can make assumptions about a conservative win rate (10%?), your maximum testing velocity based on traffic (reference EJ’s post), and an expected lift over time. Then, as you’re tracking your KPIs, you can see if you’re failing on the inputs (are you not running as many tests as you projected? – check your organization for blockers) or the outputs (is your win rate lower than expected? check your hypotheses, execution, etc.).

If your velocity is high, your execution seems decent, and you’re still not getting wins, there’s probably a product/market problem, not an optimization problem. But often, I see most teams fail on the inputs side.
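To make that input/output framing concrete, here is a minimal sketch of the kind of projection she is describing; all the numbers and the helper functions below are illustrative assumptions, not EE data:

```python
TESTS_PER_MONTH = 4      # input target: testing velocity (assumed)
WIN_RATE = 0.10          # conservative assumed win rate
LIFT_PER_WIN = 0.05      # assumed average relative lift from a winning test

def projected_conversion_rate(baseline_cr, months):
    """Expected conversion rate after `months` of hitting the targets above."""
    expected_wins = TESTS_PER_MONTH * months * WIN_RATE
    return baseline_cr * (1 + LIFT_PER_WIN) ** expected_wins

def diagnose(tests_run, wins, months):
    """Check whether a program is missing its input (velocity) or output (win rate) targets."""
    if tests_run < TESTS_PER_MONTH * months:
        return "Failing on inputs: fewer tests than projected -- check your organization for blockers."
    if tests_run and wins / tests_run < WIN_RATE:
        return "Failing on outputs: win rate below target -- check your hypotheses and execution."
    return "On track."

print(projected_conversion_rate(0.03, months=12))  # ~0.038, i.e. ~3.8% after a year
print(diagnose(tests_run=30, wins=2, months=12))   # fails on inputs: 30 tests vs. 48 projected
```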

@taliagw: As a fellow founder and CEO, I’d love to hear what you have in mind for the next few years for the company. What are your main goals, how do you plan to stand out, and what makes you guys unique?

Awesome question, and “hey!” to a fellow female founder and CEO (almost wrote CRO there, guess that’s fine, too!)

Our goal is pretty simple: help teams “always be testing.”

That means:

- Giving them the tools they need to run an amazing conversion program, without cobbling stuff together
- Access to the best minds in CRO, design, development, and copywriting, so resources are never a blocker
- Powering them with data that helps them objectively prove to their peers and bosses that CRO is driving ROI for their business

We’re a bit unique in that we’re delivering a service that requires human creativity and expertise (like all of you have), but building really powerful technology and data behind it so it can scale beyond what a traditional agency or consultancy could (or would want) to do.

And on the expert side (that’s all of y’all!) we’re building ways for you to put data behind your performance, access really cool and fun projects, and test new concepts in a low-risk way. I think that’s the fun part.

@stefan: What are the pros and cons of having CRO internally and externally within an organization?

For organizations with CRO internally, where do you feel it has the best place (department-wise – product, marketing, tech, hybrid..)? I know this is a big topic for many organizations, especially since we’re still such a young domain with many opinions.

1. Please, please, please have an internal champion for optimization in your organizations! Preferably at the executive level! I think it is SO critical to have someone who really believes in data-driven optimization and can help politically smooth over challenges when you get a string of losses. Plus, one of the major advantages of having CRO expertise internally is keeping those precious learnings internal to your organization.

That being said, tactical execution of CRO is not something that necessarily needs to be built or kept in house. But when it comes to strategic ownership of your program, 9/10 times I recommend having someone inside run the show.

And then leverage experts, tools, agencies, etc. to power their program, but not take it away from them.

2. It really just depends on your internal structure. It should live with the team that’s most incentivized to drive your conversion (or whatever metric) up—if you put it in the product team but the marketing team is the only one that cares about conversion rate, it’s not going to get the attention it needs.

Places I’ve seen testing programs fail: design orgs, development orgs—the incentives are least aligned.

(Follow-up to #1: the con of this approach is that sometimes the data-driven person doesn’t exist in the organization :flushed: —then bring in someone externally to train/create that person/culture!)

@jackreamer: My question is based on Peep Laja’s quote: “True optimization work is data-driven: nothing is ever done randomly.” So what’s the best way to use data to back up your CRO strategy and identify the best tests to run?

I’m not sure there is anything to say here that hasn’t already been said: use quantitative and qualitative data to drive the hypotheses for your testing programs. That being said, the data can’t just tell you what to test; it also has to tell you how your testing program should be run. Which means you can’t just use data to drive an individual test; it has to inform your overall testing strategy, velocity, and analysis.

And also check yourself when you think you’re being data driven and you’re not. AKA that nagging “but that couldn’t possibly work!” voice, whether you say it out loud or just in your head.

@shanelle: What’s the #1 thing you wish more people knew about A/B testing?

Every week you’re not running a test you’re delaying your next win by 2 months.

You know that poster: F-it, ship it? (maybe only @jeremy and the #developers out there will get it.)

We need one that’s like F-it, test it.

@ryanfarley, founder of Lawnstarter: It seems like so much of CRO is just making sure that you’re maximizing tests per time period. What makes someone a CRO expert, aside from someone who follows your motto of “always be testing”?

I’ve clearly indoctrinated him in the AlwaysBeTesting crew, because I also strongly believe in test velocity maximization as a big lever in test success.

But what makes a “CRO expert”? That’s a really important question to ask when positioning yourself as a CRO expert or evaluating someone who is claiming to be one (like our EE expert network, which is invite-only!)

I don’t see CRO as a siloed skill set where you can easily evaluate or qualify someone, but these are the traits I see in people who run amazing programs:

  1. Pragmatism – Willing to let go of the status quo, has their eye on a specific goal, and is willing to do what it takes (even if it’s uncomfortable) to achieve it.
  2. Data-driven – Can check their ego and preferences at the door. Super hard.
  3. Organized – Shout out to @andra.bond who will keep a weekly testing cadence come hell or high water. Moves process along.
  4. Insightful – Can draw reasonable conclusions from loose data or qualitative feedback. Thinks about the long-term impact of their decisions.
  5. Design-literate – Can communicate hypotheses in a way that can get executed on by design or development teams. Doesn’t just say “I kind of want this to look better.”
  6. Persuasive – Can “sell” CRO internally and can maintain engagement around a testing program even when things aren’t going well.
  7. Wants to buy EE – proves you’re super smart at CRO 😛

@luiz: Who is on your “Optimization Dream Team” and what are their roles?

I want the #experiment-engine Expert Network, with all of my CRO friends in the world in there. Which means if I need an amazing landing page variation, @taliagw and @mikefiorillo could go head to head on a hypothesis.

And I could leverage, at any time, the right expert for the job. So if I’m crying in Optimizely (you know you do it) I can pull in an awesome developer, or if I’m flat out of good headline ideas, @jimmy could bring a fresh take to it.

That’s the whole vision of the Expert Network and the scale it gets you. A team of 1000 experts that are on demand, excited to play, and ready to help me where I need it!

“Every week you’re not running a test you’re delaying your next win by 2 months.”