Overview
This course covers the design and analysis of A/B tests, also known as split tests, which are online experiments used to test potential improvements to a website or mobile application. Two versions of the website are shown to different users – usually the existing website and a potential change. The results are then analyzed to determine whether the change is an improvement worth launching. The course covers how to choose and characterize metrics to evaluate your experiments, how to design an experiment with enough statistical power, how to analyze the results and draw valid conclusions, and how to ensure that the participants of your experiments are adequately protected.
Syllabus
- Overview of A/B Testing
  - What A/B testing is and what it can be used for.
  - How to construct a binomial confidence interval for the results (see the confidence-interval sketch after this syllabus).
  - How to decide whether the change is worth the launch cost.
- Policy and Ethics for Experiments
  - How to make sure the participants of your experiments are adequately protected.
  - What questions you should ask about the ethics of an experiment.
  - The four main ethics principles to consider when designing experiments.
- Choosing and Characterizing Metrics
  - Techniques for brainstorming metrics.
  - What to do when you can’t measure a metric directly.
  - Characteristics to consider when validating metrics.
- Designing an Experiment
  - How to choose which users will be in your experiment and control groups.
  - When to limit your experiment to a subset of your entire user base.
  - How design decisions affect the size of your experiment (see the sample-size sketch after this syllabus).
- Analyzing Results
  - How to analyze the results of your experiments.
  - How to run sanity checks to catch problems with the experiment set-up.
  - How to check conclusions with multiple methods, including a binomial sign test (see the sign-test sketch after this syllabus).
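To make the confidence-interval topic concrete, here is a minimal sketch of a normal-approximation binomial confidence interval. The click and pageview counts are hypothetical, not from the course.

```python
import math

def binomial_ci(successes, trials, z=1.96):
    """Normal-approximation confidence interval for a binomial proportion."""
    p_hat = successes / trials
    se = math.sqrt(p_hat * (1 - p_hat) / trials)  # standard error of the proportion
    return p_hat - z * se, p_hat + z * se

# Hypothetical example: 100 clicks out of 1,000 pageviews.
lower, upper = binomial_ci(100, 1000)
print(f"95% CI for click-through probability: [{lower:.4f}, {upper:.4f}]")
```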
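For the experiment-design lesson, this rough sketch shows how design choices (baseline rate, minimum detectable effect, significance level, power) drive the required sample size per group. It uses the standard two-proportion approximation, and the input numbers are hypothetical.

```python
import math
from scipy.stats import norm

def sample_size_per_group(p_baseline, min_detectable_effect, alpha=0.05, power=0.8):
    """Approximate per-group sample size for comparing two proportions."""
    p_treatment = p_baseline + min_detectable_effect
    z_alpha = norm.ppf(1 - alpha / 2)  # two-sided significance threshold
    z_beta = norm.ppf(power)          # power requirement
    variance = p_baseline * (1 - p_baseline) + p_treatment * (1 - p_treatment)
    return math.ceil((z_alpha + z_beta) ** 2 * variance / min_detectable_effect ** 2)

# Hypothetical example: 10% baseline conversion rate, 2-point minimum detectable effect.
print(sample_size_per_group(0.10, 0.02))  # roughly a few thousand users per group
```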
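For the analysis lesson, this is a minimal sketch of a binomial sign test used as a cross-check: count how often the treatment beats the control across day-by-day slices and test that count against a fair coin. The day counts are hypothetical, and the sketch assumes SciPy is available.

```python
from scipy.stats import binomtest

# Hypothetical day-by-day results: treatment beat control on 15 of 20 days.
days_treatment_won = 15
total_days = 20

# Under the null hypothesis of no effect, each day is a fair coin flip (p = 0.5).
result = binomtest(days_treatment_won, total_days, p=0.5, alternative="two-sided")
print(f"Sign-test p-value: {result.pvalue:.4f}")
```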