The Power of Firebase Remote Config: A/B Testing



Since I joined The Weather Network Android team, I have seen the power of Remote Config and how significantly it has benefited our users and our team. Remote Config can be used for A/B testing, feature flags, and dynamic configuration of resources used in the app, such as text or images. We make better product decisions through A/B testing, and we ship fewer app updates because parts of our app are dynamically configured. Hopefully, you will find something here that helps your team and company. In this blog, I will specifically talk about how to use Remote Config for A/B testing.

According to Wikipedia, A/B testing (also known as bucket tests or split-run testing) is a randomized experiment with two variants, A and B.  A/B testing is a way to compare two versions of a single variable, typically by testing a subject’s response to variant A against variant B, and determining which of the two variants is more effective. For example, we recently did an experiment with 20% of our users. Half of the users in the experiment were given an older video experience (variant A), and the other half were given a new one (variant B). We aimed to maximize ad revenue, so our goal was to see which video experience generated the most views (which variant was most effective).
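To make the bucketing idea concrete, here is a minimal Kotlin sketch of how users could be assigned to variants deterministically. This is purely illustrative — Firebase assigns users to variants for you — and the 10/10/80 split simply mirrors the experiment described above.

```kotlin
import kotlin.math.absoluteValue

// Purely illustrative: Firebase handles variant assignment for you.
// Deterministically map a user id into one of 100 buckets, then map
// buckets to variants: 10% variant A, 10% variant B, 80% outside the test.
fun variantFor(userId: String): String {
    val bucket = userId.hashCode().absoluteValue % 100
    return when {
        bucket < 10 -> "A"        // older video experience
        bucket < 20 -> "B"        // new video experience
        else -> "not in experiment"
    }
}
```

Because the assignment is a pure function of the user id, a given user always lands in the same variant across sessions, which is what makes the comparison between groups valid.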

Where to see your A/B test results?

Inside the Firebase console, you can find the A/B Testing page under the Grow section. Our A/B testing result for the new video experience showed the “possible improvement found” message, which means that, according to the data, the improvement is small. Firebase reports one of several results: Insufficient data, Baseline is probably best, Possible improvement found, and Clear improvement found. Although the new video experience has not given us a clear improvement, it has improved our metrics, and it is safe for us to roll it out to 100% of our users.

How did we set up this experiment?

The Firebase console lets you create experiments by setting up your targeting and distribution, goals, and variants.

1. Targeting and distribution

Since this is a brand new feature, our relevant audience is the 20% of production users that we rolled out to with version x.x.x.xxxx, which contains the new feature. There are other conditions you can add to narrow your target population, such as user audience, device language, country/region, and predictions.

2. Goals

We used built-in goals such as Retention (4-7 days) and Crash-free users. You can find all built-in goals and what they mean here. We also used a custom goal, ab_videoPlayed, which requires logging this event inside our app where the video is played. Together, those three goals help us understand whether the new video experience gives us more video click-throughs, better user retention, and a stable build.

3. Variants 

We have two variants: the control group and the new video experience. 

For details on how to create this experiment, see here. Please try out your different variants on a testing device before starting your experiment; this video covers how to do that.

How to interpret your experiment results?

It is recommended that you run an experiment for at least two weeks to get meaningful results. You should also have enough users for the statistics to provide valuable information, or else you will see the “Insufficient data” result.

Luckily, at The Weather Network, we have millions of users. 

After 5 weeks of the experiment, we can see the number for ab_videoPlayed varies between -0.41% and 2.29%. The narrower the range between the two numbers, the more confident we can be in the result. In our case, this range is good. If it were, say, -25% to 35%, the range would be too wide to say with confidence that one variant is better than the other.

See here for more explanation on how to interpret results.

How did we do it?

I will not go into too much technical detail, as there are many online tutorials you can follow. However, I will discuss the technical strategies you should consider when implementing A/B testing with Remote Config.

1. Loading Remote Config

We fetch and activate Remote Config in the Application's onCreate, to ensure that all values are updated at the application level and not in a specific activity or fragment. This way, every time a user launches the application, we fetch new values. If we only fetched in a specific activity, new values would be fetched only when the user visits that activity. That could be fine if you don't need the values anywhere else in your application. Firebase recommends a few different loading strategies. When our team needed Remote Config to configure something on the onboarding screen, which appears right after the splash screen, we made sure the fetch completed successfully before onboarding starts. Onboarding only happens once; if the implementation has faults, our users will not see it again unless they reinstall the app.
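An application-level setup like the one described above could look roughly like this. It is a minimal sketch, assuming the Firebase Remote Config SDK is already wired into your project; the class name and the defaults resource are illustrative, not our production code.

```kotlin
import android.app.Application
import com.google.firebase.remoteconfig.FirebaseRemoteConfig

// Illustrative Application subclass; register it in AndroidManifest.xml.
class MyApplication : Application() {

    override fun onCreate() {
        super.onCreate()
        val remoteConfig = FirebaseRemoteConfig.getInstance()
        // Ship defaults with the app so the very first launch has values
        // (R.xml.remote_config_defaults is an assumed resource file).
        remoteConfig.setDefaultsAsync(R.xml.remote_config_defaults)
        // Fetch the latest values and activate them in one step, so every
        // screen in the app sees the updated values for this session.
        remoteConfig.fetchAndActivate().addOnCompleteListener { task ->
            if (task.isSuccessful) {
                // Remote values are now live for the rest of this session.
            }
        }
    }
}
```

Because this runs once per process launch, activities and fragments can simply read `remoteConfig.getBoolean(...)` or `getString(...)` without fetching themselves.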

Also keep in mind that you can configure your app to fetch with zero delay for testing, but by default Remote Config only fetches fresh values every 12 hours. You can override this 12-hour rule for production as well, but make sure you don't make so many calls that your app gets throttled.
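Overriding the fetch interval for debug builds only might look like the sketch below; `BuildConfig.DEBUG` is the standard Android debug flag, and 43200 seconds is the 12-hour default.

```kotlin
import com.google.firebase.remoteconfig.FirebaseRemoteConfig
import com.google.firebase.remoteconfig.FirebaseRemoteConfigSettings

// Sketch: instant fetches while debugging, the default interval in production.
fun configureFetchInterval(remoteConfig: FirebaseRemoteConfig) {
    val settings = FirebaseRemoteConfigSettings.Builder()
        .setMinimumFetchIntervalInSeconds(if (BuildConfig.DEBUG) 0 else 43200)
        .build()
    remoteConfig.setConfigSettingsAsync(settings)
}
```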

2. Types of parameter values that can be used on Remote Config

By default, Remote Config supports Boolean, Long, Double, and String values. We use JSON strings, since JSON lets us do more advanced configuration with more complex data. Ensure you build JSON parsing properly in your application to handle that. Google's Gson is a great library for deserializing JSON. We return the default Remote Config value when deserialization fails.

val config = try {
    // VideoConfig is an illustrative model class for the JSON payload
    GsonBuilder().serializeNulls().create()
        .fromJson(configJsonString, VideoConfig::class.java)
} catch (e: JsonSyntaxException) {
    default
}

3. Logging custom events using Firebase Analytics

To log a custom event such as our example ab_videoPlayed, simply call the following when the video is played.

 FirebaseAnalytics.getInstance(context).logEvent("ab_videoPlayed", null)

Sample Code:

I have created a sample app to demonstrate a loading strategy that is different from the one we use:

I want to test two different variants, the two photos I have chosen, to see which Kobe photo gets more clicks on the link below.

The image defaults to variant B, and I don't want users to see the photo change to variant A while the app is in the foreground. Here is a code snippet that does not apply the image change until the user launches the app the next time.

class MainActivity : AppCompatActivity() {

    private lateinit var remoteConfig: FirebaseRemoteConfig
    private lateinit var imageView: ImageView
    private lateinit var textView: TextView

    override fun onCreate(savedInstanceState: Bundle?) {
        super.onCreate(savedInstanceState)
        setContentView(R.layout.activity_main)

        // resource ids assumed from the sample layout
        textView = findViewById(R.id.textView)
        imageView = findViewById(R.id.imageView)
        remoteConfig = FirebaseRemoteConfig.getInstance()

        setupRemoteConfig()
        setupHyperlink()
    }

    private fun setupRemoteConfig() {
        // override debug-mode fetch interval to 0 seconds
        remoteConfig.setConfigSettingsAsync(
            FirebaseRemoteConfigSettings.Builder()
                .setMinimumFetchIntervalInSeconds(0)
                .build()
        )

        // activate values fetched during the previous session before
        // setting up the UI
        remoteConfig.activate()

        // fetch new values but do nothing to the UI when the fetch succeeds;
        // they take effect at the next launch via the activate() call above
        val fetch = remoteConfig.fetch()
        fetch.addOnCompleteListener {
            if (it.isSuccessful) {
                // intentionally no UI update here
            }
        }

        // use cached remote data to set up the UI; no UI changes until the
        // next clean launch after the app is killed
        if (remoteConfig.getBoolean("image_config")) {
            imageView.setImageResource(R.drawable.variant_a)
        } else {
            imageView.setImageResource(R.drawable.variant_b)
        }
    }

    private fun setupHyperlink() {
        val text =
            "<a href=''> The Kobe and Vanessa Bryant Family Foundation </a>"
        textView.text = Html.fromHtml(text)

        textView.setOnClickListener {
            // This logs a custom event and can be used as a goal for an A/B test.
            // Before you can use this event for A/B testing, click it a few times
            // so Firebase registers the event in its console.
            FirebaseAnalytics.getInstance(this).logEvent("link_clicked", null)
            val browserIntent =
                Intent(Intent.ACTION_VIEW, Uri.parse(""))
            startActivity(browserIntent)
        }
    }
}
See the full code in my GitHub repository here.


A/B testing is a great tool when you want to compare the effectiveness of different variants before releasing one to production. However, it is only effective for applications with enough users to draw statistically sound conclusions. There is no set number of users required for A/B testing, so try a simple test to see if A/B testing works for you.

Firebase provides very readable results that do not require a math degree to figure out which variant to pick. Setting up an experiment is simple and straightforward, with built-in goals and easy-to-implement custom events/goals.

Lastly, make sure you pick a good remote loading strategy based on whether you have a splash screen and on when and where you need your remote values. Those strategies could be: 1) fetch and activate, and update the UI right away; 2) show a loading or splash screen, but dismiss it when it times out; 3) fetch but don't update the UI, and apply the new values at the next launch.

Thank you for reading! Please reach out to me if you have any questions. 

Special Thanks: 

  • Avais Sethi, my tech lead, for answering questions 
  • Fernando Martin Toro, my manager, for proofreading and approving 
  • Tinashe Mzondiwa and Ahmed Mushtaha, my teammates, for proofreading and approving


  1. A/B Test like a Pro
  2. Firebase A/B Testing