A/B Testing Book
1. User Needs Are Often Latent
User needs are often latent, undiscovered, and unrealized by users themselves. It is therefore difficult to predict actual user behaviour in a live environment.
Example-
Yelp's team discovered, by poring through user data, that an unexpectedly large number of users were taking advantage of a feature buried deep within the site: the ability to post reviews of local businesses. (Source: Hacking Growth)
2. Substituting A/B Testing with Pre-Post Analysis could be a Bad Idea
Example-
a- A tour-and-travel company has seasonality as a factor, which could distort conversions in a pre-post analysis.
b- A sales company running a pre-post analysis needs to ensure that the days in the pre and post windows were similar in nature.
(A month-end closing might see a jump in overall conversions vs. the start of the month.)
or
(The mix of new vs. experienced agents during the pre and post periods needs to be controlled for. An experienced agent may convert better than a new one; if the post period has a higher share of experienced agents, conversions might shoot up for reasons unrelated to the change.)
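To make the seasonality trap concrete, here is a minimal Python sketch with entirely made-up numbers: a hypothetical "launch" on day 20 of a month in which month-end days simply convert better. The pre-post comparison shows an apparent lift even though there is no real effect, while an A/B split over the same days correctly shows none.

```python
import random

random.seed(0)

# Hypothetical daily conversion rates: month-end days convert better
# purely due to seasonality (illustrative numbers, not real data).
def conversion_rate(day):
    base = 0.10
    return base + (0.05 if day >= 25 else 0.0)  # month-end bump

def simulate(days, users_per_day=1000):
    conversions = sum(
        sum(random.random() < conversion_rate(d) for _ in range(users_per_day))
        for d in days
    )
    return conversions / (len(days) * users_per_day)

# Pre-post: the feature "launched" on day 20, so the post window
# overlaps the month-end bump and looks like a lift.
pre = simulate(range(1, 20))
post = simulate(range(20, 31))
print(f"pre-post: {pre:.3f} -> {post:.3f}")  # post looks higher

# A/B on the same days: both groups see the same seasonality,
# so the (non-existent) effect correctly washes out.
control = simulate(range(20, 31))
test = simulate(range(20, 31))
print(f"A/B:      {control:.3f} vs {test:.3f}")  # roughly equal
```

The point is not the specific numbers but the structure: pre-post confounds the change with time, while randomization exposes both groups to the same calendar.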
3. User Inertia
Sometimes repeat users don't like a new (and perhaps better) interface because they have got used to the way the old website worked. The control and test audiences must therefore be broken down into repeat users vs. new users, both to understand why a particular experiment is not performing and to measure the share of the repeat audience that has not liked the feature.
Example-
a) Device Inertia -
Users looking at a product on an e-commerce mobile site/app may be happy to view the same product on desktop, where larger pictures improve visibility, but may not do so simply because of the perceived effort of switching on the desktop and searching for the same product there.
b) Momentum Behaviour -
Users have become accustomed to taking an auto instead of choosing an Uber/Ola because of perceived effort or habit, even though they are aware that comfort and cost are better with the latter.
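A small illustration of why this segmentation matters. With hypothetical counts (all invented), the overall conversion rate looks identical between control and test, yet the breakdown shows new users clearly preferring the new interface while repeat users resist it:

```python
# Hypothetical per-user records: (segment, variant, converted).
records = [
    # New users prefer the new UI...
    *[("new", "test", True)] * 60, *[("new", "test", False)] * 40,
    *[("new", "control", True)] * 45, *[("new", "control", False)] * 55,
    # ...while repeat users, used to the old flow, resist it.
    *[("repeat", "test", True)] * 35, *[("repeat", "test", False)] * 65,
    *[("repeat", "control", True)] * 50, *[("repeat", "control", False)] * 50,
]

def rate(segment=None, variant=None):
    """Conversion rate for the records matching the given filters."""
    hits = [c for s, v, c in records
            if (segment is None or s == segment)
            and (variant is None or v == variant)]
    return sum(hits) / len(hits)

print(f"overall: control={rate(variant='control'):.2f} "
      f"test={rate(variant='test'):.2f}")       # looks flat
for seg in ("new", "repeat"):
    print(f"{seg}: control={rate(seg, 'control'):.2f} "
          f"test={rate(seg, 'test'):.2f}")      # opposite effects
```

Averaged together, the two segments cancel out; only the split reveals that user inertia is dragging down an otherwise successful change.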
4. Looking at Averages and How It Affects Our Analysis
Example of how averages can project the wrong picture-
Case- "An Indian restaurant starts offering continental food, and one month after launch they find it has got a great response in terms of sales volume. However, after deep-diving they find that the majority of sales came only in the first 10 days, after which there was a steep decline in sales volume. The reason was extra excitement from customers in the initial days, which faded later."
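The restaurant case can be sketched in a few lines with illustrative numbers: the monthly average looks healthy, while the daily breakdown reveals the launch spike followed by the decline.

```python
# Hypothetical daily sales: a launch spike in the first 10 days,
# then a steep decline (illustrative numbers only).
daily_sales = [100] * 10 + [20] * 20  # a 30-day month

monthly_avg = sum(daily_sales) / len(daily_sales)
first_10_avg = sum(daily_sales[:10]) / 10
last_20_avg = sum(daily_sales[10:]) / 20

print(f"monthly average: {monthly_avg:.0f}")   # looks healthy
print(f"first 10 days:   {first_10_avg:.0f}")  # launch excitement
print(f"days 11-30:      {last_20_avg:.0f}")   # steep decline
```

The single average sits comfortably between the two regimes and hides the trend entirely; plotting or bucketing by time surfaces it immediately.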
5. Measuring the wrong Metric
Sometimes it is possible that we have not considered the right metrics, the ones actually impacted by the new feature launch.
Example
- Uber has launched a new feature: two-way, messenger-style communication between rider and driver after booking.
Here, an increase in conversions would be the wrong metric to track at present, and checking engagement alone would not be sufficient to call this a success or failure.
Some of the correct metrics would be-
a) Time from trip booking to trip start (reduction expected)
b) Trip cancellation volume per user (reduction expected)
c) User happiness / trip NPS (ratings should go up post feature launch)
d) Drivers' ratings of users should improve (similar reasoning)
e) Repeat users (improvement expected over time)
f) Geographically, user happiness should improve where the local language of cab drivers is not understood by the general audience
g) Reduction in calls between driver and customer post launch (less effort)
Also, the test set here should be users who are repeatedly using this feature over a period and have adopted it into their daily lives. It is possible that some old users have still not adopted the feature, and they should be kept out of the equation.
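As a sketch of how a few of these metrics might be computed, here is a toy example over hypothetical trip logs (all numbers invented) comparing wait time, cancellation rate, and trip rating before and after launch for feature adopters:

```python
from statistics import mean

# Hypothetical trip logs for feature adopters, before and after launch:
# each record is (wait_minutes, cancelled, rating_out_of_5).
before = [(9, False, 4), (12, True, 0), (8, False, 4), (15, True, 0), (10, False, 5)]
after  = [(6, False, 5), (7, False, 4), (5, False, 5), (9, True, 0), (6, False, 5)]

def summarize(trips):
    """Return (mean wait, cancellation rate, mean rating of completed trips)."""
    waits = [w for w, _, _ in trips]
    cancel_rate = sum(c for _, c, _ in trips) / len(trips)
    ratings = [r for _, c, r in trips if not c]  # rate completed trips only
    return mean(waits), cancel_rate, mean(ratings)

for label, trips in (("before", before), ("after", after)):
    wait, cancels, rating = summarize(trips)
    print(f"{label}: wait={wait:.1f} min, cancel rate={cancels:.0%}, rating={rating:.2f}")
```

In a real analysis each of these would be a separate experiment metric with its own significance test; the sketch only shows the direction-of-change framing the list above describes.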
6. Not finding out Why?
More than the success or failure of an experiment, it is important to understand why an experiment failed, as this improves our understanding of user behaviour and opens the door to more successful experiments in the future. Moreover, it helps us understand whether a specific segment didn't like the feature or whether seasonality alone led to the failure of the experiment.
I would highly recommend the book "A Definitive Guide to A/B Testing" by Divakar Gupta to everyone who wants to gain first-hand knowledge about A/B testing and experiments.
The book is available on Amazon.
Written by-
Rohit Lal
Linkedin :- https://www.linkedin.com/in/rohit-kumar-lal-b1425a55/
Source: https://medium.com/@rohitlal/a-definitive-guide-to-a-b-testing-book-review-insights-23948262534f