Raman Bhatia is responsible for the delivery and performance of the CheapOAir and OneTravel native apps and leads a global team of 30 app developers, interacting closely with product and design. Before Fareportal, he managed teams while remaining hands-on and close to the code for apps at Citi, MXM and SapientNitro.

Members of Optiverse, the optimization community, asked Raman questions about mobile strategy for native and web apps and about mobile A/B testing. You can read the full AMA with Raman on Optiverse and find a ton of great resources for mobile optimization in the Optimizely knowledge base.

Q: Do you have any thoughts on what to prioritize first to drive engagement? Or do you have suggestions on what data to look at to find the high-impact opportunities? - John H

A: We usually evaluate each new feature based on KPIs. We rank-order them by which ones could move the needle most on KPIs (conversion ratio, attachment rate, app downloads, etc.) and then look at the “available space” in the upcoming release (by “available space” I mean the number of experiments that can run without interfering with each other). We then schedule them accordingly, which gives us the entire roadmap of releases and Optimizely tests for the next three months or so.

Things get pushed out or in as features get delayed, and then a test that could not be done in one release will need to be moved to the next. The moved test could upset the balance of tests if it interferes, in which case the “available space” has to be reallocated, which could cause some other test to be moved out into the next release, and so on.
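To make that scheduling step concrete, here is a minimal sketch in Swift, not Fareportal’s actual tooling: it greedily packs ranked test ideas into releases, models “available space” as a fixed number of slots per release, and treats two tests that touch the same screen or flow as interfering. All names and numbers are illustrative.

```swift
// Hypothetical sketch: greedily pack ranked test ideas into upcoming releases.
// "slots" stands in for the "available space" described above; two tests that
// touch the same surface (screen or flow) are treated as interfering.

struct TestIdea {
    let name: String
    let estimatedKpiLift: Double   // e.g. expected lift in conversion ratio
    let surface: String            // the screen or flow the test touches
}

struct Release {
    let version: String
    let slots: Int
    var scheduled: [TestIdea] = []

    func canAccept(_ idea: TestIdea) -> Bool {
        return scheduled.count < slots &&
            scheduled.allSatisfy { $0.surface != idea.surface }
    }
}

func schedule(ideas: [TestIdea], into releases: [Release]) -> [Release] {
    var releases = releases
    // Rank by expected KPI impact, then place each test in the earliest
    // release that still has room and no interfering test.
    for idea in ideas.sorted(by: { $0.estimatedKpiLift > $1.estimatedKpiLift }) {
        if let index = releases.firstIndex(where: { $0.canAccept(idea) }) {
            releases[index].scheduled.append(idea)
        }
        // Ideas that fit nowhere simply wait for the next planning cycle.
    }
    return releases
}

// Illustrative usage: three upcoming releases with two non-interfering slots each.
let plan = schedule(
    ideas: [
        TestIdea(name: "One-page checkout", estimatedKpiLift: 0.8, surface: "checkout"),
        TestIdea(name: "Hotel cross-sell banner", estimatedKpiLift: 0.6, surface: "confirmation"),
        TestIdea(name: "New seat-map UI", estimatedKpiLift: 0.4, surface: "checkout")
    ],
    into: [Release(version: "R1", slots: 2), Release(version: "R2", slots: 2), Release(version: "R3", slots: 2)]
)
```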

Q: When setting objectives, what are your thoughts on attention often being a zero-sum game? I.e., by increasing interaction with one feature you will reduce interaction with another. -Matt Clark

A: That is one of the central problems to be solved. At Fareportal, we optimize interactions by making sure features are presented contextually. For example, we present the ancillary products after the main one is purchased, because presenting them all together would be counterproductive. That, of course, is the simple case in an e-commerce app.

For a more complex scenario, such as several equal features vying for attention, increasing interaction with one could decrease interaction with another, much like a machine with connected parts. So in an e-commerce situation, say, you may have products on a screen competing for attention, and your strategy would be to drive the user to follow a certain order based on context (e.g. the user has purchased an air ticket, so now highlight the hotel cross-sell).

Context decreases the cost of attention, i.e. if you design the interface with the right amount of context, the user’s attention is ‘directed’ towards the next target, requiring less attention and thinking on the part of the user. Basically, you have been ‘gifted’ just X amount of attention by this user, so make sure you use it economically before you run out. Use that attention to drive behaviour with clever design, tweaking, measuring, and optimizing incrementally to get it right.

In other words, I agree with your notion of attention being a ‘zero sum game’, but that is not necessarily a deal-breaker; it just means you need to find ways to use it smarter and more economically. That’s what we constantly attempt to do at Fareportal.
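As a tiny, hypothetical illustration of presenting features contextually (the names and flow below are illustrative, not Fareportal’s actual code), the next offer can be derived from what the user just did, so one clear next step gets the attention budget instead of every ancillary at once:

```swift
// Hypothetical sketch: choose one next offer from context instead of
// presenting every ancillary product at the same time.

enum PurchaseContext {
    case airTicketBooked
    case hotelBooked
    case browsingOnly
}

enum Offer {
    case hotelCrossSell
    case carRental
    case none
}

func nextOffer(after context: PurchaseContext) -> Offer {
    switch context {
    case .airTicketBooked:
        // The air ticket was just purchased, so the hotel cross-sell is the
        // one thing we highlight next.
        return .hotelCrossSell
    case .hotelBooked:
        return .carRental
    case .browsingOnly:
        // No purchase yet: do not spend the user's attention on ancillaries.
        return .none
    }
}
```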

Q: Can you speak a little about a testing strategy for native apps? -A Vaughn

A: We start from our roadmap of features out into the future, our release schedule, and the team’s capacity/bandwidth, and then lay out the tests in an optimal way given the constraints of dev resources and the best use of “available space” in each upcoming release. By “available space” I mean the number of experiments that can run without interfering with each other, as mentioned in a previous answer.

So let’s say the releases and Optimizely tests are planned out as (R1, O1), (R2, O2), (R3, O3), etc., where R1 is the first release and O1 is the set of Optimizely test features fitted into that first release, R2 is the second release, and so on.

We then package and run the O1 tests in R1, and as soon as a test is complete, if it is a winner we set the variation to 100%, and if it is a loser we set it to 0% (for mobile apps, be aware that doing this in Optimizely is a multi-step process). We keep it this way until R2 is deployed.

Then in R2 we package the O2 code blocks, as well as remove the code for any failed O1 tests. For the O1 tests that passed in R1, we move the code blocks to the main code base for R2.
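As an illustration of that lifecycle (a sketch only; the wrapper below is a stand-in and not the actual Optimizely SDK API), a test ships behind a code block in R1, and once it wins, the variation becomes the only code path in R2:

```swift
// Hypothetical sketch of the code-block lifecycle described above.
// ExperimentClient is a stand-in wrapper, not the actual Optimizely SDK API.

protocol ExperimentClient {
    /// True if this user is bucketed into the variation of the given test.
    func isInVariation(_ testKey: String) -> Bool
}

// R1: test O1 ("single_page_checkout") ships behind a code block.
func renderCheckout_R1(experiments: ExperimentClient) {
    if experiments.isInVariation("single_page_checkout") {
        showSinglePageCheckout()   // variation
    } else {
        showMultiStepCheckout()    // original experience
    }
}

// R2: the test won, so the variation becomes the main code path and the
// test branch plus the losing code are removed from the code base.
func renderCheckout_R2() {
    showSinglePageCheckout()
}

// Placeholder UI entry points for the sketch.
func showSinglePageCheckout() { /* ... */ }
func showMultiStepCheckout() { /* ... */ }
```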

Q: We have very different product teams for mobile and web. Do you have a separate testing idea funnel and testing goals for web and mobile? Or do you bring everyone together in an effort to have unified testing? -John H

A: We have different product teams and different development teams for mobile apps and web, and each channel designs and runs tests independently. Many of our app features are developed along the lines of what we have on the web, so we do take cues from them (plus the web team has been testing a lot longer and has more testing expertise than the app team).

At the same time, apps obviously have a distinct experience and many unique, mobile-only features, and for all these reasons apps have their own test idea funnel. App product and design, with input from app devs, come up with the tests. The app dev team then creates the code blocks, live variables, etc. The app design and product teams then set up the test and, with the app tech team, try it out in preview mode; finally, when the app launches with the new tests baked in, the tests are started.
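As an example of what the dev team hands over, here is a minimal, hypothetical sketch of the live-variable pattern (the wrapper and keys are illustrative, not the actual Optimizely SDK API): devs define the keys and safe defaults in code, while product and design set the tested values when configuring the experiment and verify them in preview mode.

```swift
// Hypothetical sketch of a "live variable": a value read at runtime so the
// product team can change it from the testing dashboard without a new release.
// LiveVariables is an illustrative wrapper, not the actual Optimizely SDK API.

protocol LiveVariables {
    func string(forKey key: String, default defaultValue: String) -> String
    func int(forKey key: String, default defaultValue: Int) -> Int
}

func configureSearchScreen(variables: LiveVariables) -> (ctaTitle: String, resultsPerPage: Int) {
    // Devs define the keys and safe defaults; product/design supply the tested
    // values when they set up the experiment and check them in preview mode.
    let ctaTitle = variables.string(forKey: "search_cta_title", default: "Book Now")
    let resultsPerPage = variables.int(forKey: "search_results_per_page", default: 20)
    return (ctaTitle, resultsPerPage)
}
```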

Even though there are separate teams and separate execution for Web and Mobile apps, the teams do get together weekly to discuss test results, share experiences and advise each other on further testing.

There are also some areas where the app and web teams run tests in sync, for example around app downloads that come from ad placements on web and mobile web, and the downloads and conversions that result from them.

Q: For mobile testing, how do you deal with a dozen device sizes? -Martijn S

A: For iOS it is easier, as there is a smaller variety of devices, but EVERY TYPE of device MUST be tested, as there is a chance that the simulator may not be enough. We have had cases of crashes or significantly poorer performance on the device than on the simulator. Of course you can’t buy every iOS device there is (especially since new ones keep coming out), so you have to test some, especially the older devices, only on the simulator. You will of course need to buy the latest physical devices, which right now means the 6s and 6s Plus.

For Android the considerations are similar but a bit harder. You will need more physical devices and more versions of Android to test with (almost all iOS users upgrade to the latest iOS version right away, while Android users are a LOT slower at doing so). So you need to be strategic and smart about the set of Android devices you buy for testing. Usually it would be a set of four to five, with a Google Nexus and a Samsung Galaxy Note leading the pack and then others.

I know that sounds like a Kafkaesque struggle with a lot of running around and playing catch-up, especially if you have a global presence, with multi-lingual, international apps deployed in several countries and user bases with different device mixes and habits.

Some savvy folks out there have already seized the opportunity and provide amazing cloud-based services for this (for example Keynote DeviceAnywhere), where you can pick any device at any location and deploy your .apk or .ipa to it for testing.