I wanted to perform two A/B tests on an app using Firebase A/B Testing with Remote Config.
The problem is that the two tests' audiences should be mutually exclusive: being part of both experiments might pollute the results.
I've thought of setting a Firebase Analytics user property when the user enters Experiment 1 and excluding that property value from the Experiment 2 audience, but I'm afraid the user could end up in both experiments simultaneously when fetching the Remote Config values.
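Roughly, the approach I had in mind looks like this (a sketch in Kotlin on Android; the property name experiment_1_member is made up and would also have to be registered in the Firebase console):

```kotlin
import android.content.Context
import com.google.firebase.analytics.FirebaseAnalytics

// Sketch of the idea above: tag users who enter Experiment 1 with a user
// property so they can be excluded from the Experiment 2 audience.
// "experiment_1_member" is a hypothetical property name.
fun markUserAsInExperiment1(context: Context) {
    FirebaseAnalytics.getInstance(context)
        .setUserProperty("experiment_1_member", "true")
}
```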
Is there a better solution for preventing the user from entering both experiments?
(For the purpose of this answer, I'm assuming you're talking about the new A/B testing framework we just launched last week)
So right now, you can't really ensure mutually exclusive experiment groups with the new A/B testing framework. If you specify that 10% of your users are in experiment A and 10% are in experiment B, then a small portion of your users in experiment B (specifically, about 10% of them) will also be in experiment A.
The good news is that those users from experiment A should be evenly distributed among your variants in experiment B. But still, if you find yourself in a case where you feel like these experimental users will favor one variant over another (and thereby skew your results), you have two options:
Run your A/B tests serially instead of in parallel. Just wait until you've stopped your first experiment before running your second.
If it makes sense, try combining them into a single multi-variant experiment. For example, let's say experiment A is adding a faster sign-in flow, and experiment B is deferring your sign-in flow until later in the process. You could try creating a multi-variant experiment like this:
+---------------------+---------------+----------------+
| Group               | Sign-in speed | Sign-in timing |
+---------------------+---------------+----------------+
| Control             | (default)     | (default)      |
| Speedy              | Speedy        | (default)      |
| Deferred            | (default)     | Deferred       |
| Speedy and Deferred | Speedy        | Deferred       |
+---------------------+---------------+----------------+
The benefit here is that you'll get some extra insight into whether being in both experiments really does affect your users in the ways you're suspecting.
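On the client side this doesn't change much: the app simply reads both Remote Config parameters, and the experiment assigns the four combinations. A minimal sketch in Kotlin, assuming hypothetical parameter names sign_in_speed and sign_in_timing:

```kotlin
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Sketch: fetch and activate Remote Config, then read both experiment
// parameters. "sign_in_speed" and "sign_in_timing" are hypothetical names
// that would be defined as parameters of the multi-variant experiment.
fun applySignInExperiment() {
    val remoteConfig = FirebaseRemoteConfig.getInstance()
    remoteConfig.fetchAndActivate().addOnCompleteListener {
        val speed = remoteConfig.getString("sign_in_speed")    // e.g. "default" or "speedy"
        val timing = remoteConfig.getString("sign_in_timing")  // e.g. "default" or "deferred"
        configureSignInFlow(speed, timing)                     // app-specific wiring
    }
}

// Placeholder for whatever the app does with the two values.
fun configureSignInFlow(speed: String, timing: String) { /* ... */ }
```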
I would set a user property with a random number between 1 and 10, assigned only once at installation.
Then you should be able to do "exclusive A/B testing" by filtering each experiment's audience on that property.
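A sketch of that idea in Kotlin, assuming a hypothetical property name experiment_bucket and a SharedPreferences file to keep the value stable across launches:

```kotlin
import android.content.Context
import com.google.firebase.analytics.FirebaseAnalytics
import kotlin.random.Random

// Sketch: pick a random bucket (1..10) once per installation, persist it,
// and expose it as a user property. "experiment_bucket" is a hypothetical
// property name that would also be registered in the Firebase console.
fun assignExperimentBucket(context: Context) {
    val prefs = context.getSharedPreferences("experiments", Context.MODE_PRIVATE)
    var bucket = prefs.getInt("experiment_bucket", -1)
    if (bucket == -1) {
        bucket = Random.nextInt(1, 11)  // uniform in 1..10, chosen once
        prefs.edit().putInt("experiment_bucket", bucket).apply()
    }
    FirebaseAnalytics.getInstance(context)
        .setUserProperty("experiment_bucket", bucket.toString())
}
```

In the Firebase console you could then, for example, target Experiment 1 at users whose experiment_bucket is 1-5 and Experiment 2 at 6-10, keeping the two groups disjoint.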