Allocation¶

The allocation module provides utilities to be used before running an A/B test experiment. Group allocation is the process that assigns (allocates) a list of users either to group A (e.g. control) or group B (e.g. treatment). This module provides functionalities to randomly allocate users across two or more groups (A/B/C/…).

Let’s import first the tools needed.

[1]:

import numpy as np
import pandas as pd
from abexp.core.allocation import Allocator
from abexp.core.analysis_frequentist import FrequentistAnalyzer


Complete randomization¶

Here we want to randomly assign users to n groups (where n=2) in order to run an A/B test experiment with 2 variants, the so-called control and treatment groups. Complete randomization does not require any data on the users and, in practice, it yields a balanced design for large sample sizes.
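Under the hood, complete randomization amounts to shuffling the user ids and splitting the shuffled array at the cumulative proportions. The sketch below illustrates this idea with plain NumPy; `complete_randomization_sketch` is a hypothetical helper, not the library's implementation.

```python
import numpy as np

def complete_randomization_sketch(user_id, ngroups=2, prop=None, seed=None):
    """Shuffle users and split them into ngroups according to prop (a sketch)."""
    rng = np.random.default_rng(seed)
    user_id = np.asarray(user_id)
    if prop is None:
        prop = [1 / ngroups] * ngroups
    shuffled = rng.permutation(user_id)
    # Cumulative proportions give the split points into the shuffled array
    cuts = (np.cumsum(prop)[:-1] * len(user_id)).astype(int)
    groups = np.split(shuffled, cuts)
    return {g: ids for g, ids in enumerate(groups)}

alloc = complete_randomization_sketch(np.arange(100), ngroups=2, prop=[0.4, 0.6], seed=42)
print({g: len(ids) for g, ids in alloc.items()})  # group sizes follow prop: 40 and 60
```

Because the split points depend only on the proportions, the group sizes are deterministic; only the membership is random.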

[2]:

# Generate random data
user_id = np.arange(100)

[3]:

# Run allocation
df, stats = Allocator.complete_randomization(user_id=user_id,
                                             ngroups=2,
                                             prop=[0.4, 0.6],
                                             seed=42)

[4]:

# Users list with group assigned
df.head()

[4]:

   user_id  group
0        0      1
1        1      1
2        2      1
3        3      1
4        4      1
[5]:

# Statistics of the randomization: #users per group
stats

[5]:

group    0   1
#users  40  60

Note: Post-allocation checks can be performed to verify group homogeneity; in case of imbalance, a new randomization can be performed (see the Homogeneity check section below for details).

Blocks randomization¶

In some cases, one would like to account for one or more confounding factors, i.e. features which could unbalance the groups and bias the results if not taken into account during the randomization process. In this example we want to randomly assign users to n groups (where n=3: one control and two treatment groups) considering a confounding factor ('level'). Users with similar characteristics (level) define a block, and randomization is conducted within each block. This yields balanced, homogeneous groups of similar sizes with respect to the confounding feature.
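Conceptually, block (stratified) randomization partitions the users by the confounding column and randomizes within each partition. The sketch below illustrates the idea with pandas; `blocks_randomization_sketch` is a hypothetical helper shown for intuition, not the library's implementation.

```python
import numpy as np
import pandas as pd

def blocks_randomization_sketch(df, id_col, stratum_cols, ngroups, seed=None):
    """Assign users to ngroups, randomizing separately within each stratum (a sketch)."""
    rng = np.random.default_rng(seed)
    out = df.copy()
    out['group'] = -1
    for _, block in out.groupby(stratum_cols):
        # Shuffle the block, then deal users round-robin across groups,
        # so group sizes within a block differ by at most one user
        idx = rng.permutation(block.index.to_numpy())
        for g in range(ngroups):
            out.loc[idx[g::ngroups], 'group'] = g
    return out

np.random.seed(42)
demo = pd.DataFrame({'user_id': np.arange(1000),
                     'level': np.random.randint(1, 6, size=1000)})
demo = blocks_randomization_sketch(demo, 'user_id', 'level', ngroups=3, seed=42)
print(demo.groupby(['level', 'group']).size().unstack())
```

The round-robin deal is what makes the design balanced per stratum, unlike complete randomization where balance only holds in expectation.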

[6]:

# Generate random data
np.random.seed(42)
df = pd.DataFrame(data={'user_id': np.arange(1000),
                        'level': np.random.randint(1, 6, size=1000)})

[7]:

# Run allocation
df, stats = Allocator.blocks_randomization(df=df,
                                           id_col='user_id',
                                           stratum_cols='level',
                                           ngroups=3,
                                           seed=42)

[8]:

# Users data with group assigned
df.head()

[8]:

   user_id  level  group
0        0      4      1
1        1      5      2
2        2      3      2
3        3      5      1
4        4      5      0
[9]:

# Statistics of the randomization: #users per group in each level
stats

[9]:

group   0   1   2
level
1      70  70  70
2      64  63  63
3      62  64  64
4      69  69  68
5      68  68  68

Multi-level block randomization

You can stratify randomization on two or more features. In the example below we want to randomly allocate users to n groups (where n=5) in order to run an A/B test experiment with 5 variants: one control and four treatment groups. The stratification is based on the user level and paying status in order to create homogeneous groups.
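When stratifying on several columns, each combination of values defines one block, so the number of strata is the product of the columns' cardinalities. A quick way to inspect the resulting strata (a sketch, using data generated the same way as below) is:

```python
import numpy as np
import pandas as pd

np.random.seed(42)
df = pd.DataFrame({'user_id': np.arange(1000),
                   'is_paying': np.random.randint(0, 2, size=1000),
                   'level': np.random.randint(1, 7, size=1000)})

# Each (level, is_paying) pair is one stratum; randomization runs inside each
strata_sizes = df.groupby(['level', 'is_paying']).size()
print(strata_sizes)
print('number of strata:', len(strata_sizes))  # 6 levels x 2 paying statuses = 12
```

Keep in mind that many strata with few users each can defeat the purpose of blocking, since tiny blocks cannot be split evenly across many groups.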

[10]:

# Generate random data
np.random.seed(42)
df = pd.DataFrame(data={'user_id': np.arange(1000),
                        'is_paying': np.random.randint(0, 2, size=1000),
                        'level': np.random.randint(1, 7, size=1000)})


[11]:

# Run allocation
df, stats = Allocator.blocks_randomization(df=df,
                                           id_col='user_id',
                                           stratum_cols=['level', 'is_paying'],
                                           ngroups=5,
                                           seed=42)

[12]:

# Users data with group assigned
df.head()

[12]:

   user_id  is_paying  level  group
0        0          0      6      2
1        1          1      1      1
2        2          0      1      0
3        3          0      1      3
4        4          0      5      1
[13]:

# Statistics of the randomization: #users per group in each level and paying status
stats

[13]:

group             0   1   2   3   4
level is_paying
1     0          19  17  19  18  19
      1          15  17  18  18  18
2     0          17  17  14  17  17
      1          18  17  16  18  17
3     0          16  16  16  15  16
      1          19  19  19  19  19
4     0          12  12  12  12  11
      1          15  15  15  14  15
5     0          18  18  17  16  17
      1          17  18  19  18  19
6     0          18  19  19  18  18
      1          16  15  16  16  15

Homogeneity check¶

Complete randomization does not guarantee homogeneous groups, but it yields a balanced design for large sample sizes. Blocks randomization guarantees homogeneous groups based on categorical variables (but not on continuous variables).

Thus, we can perform post-allocation checks to verify group homogeneity for both continuous and categorical variables. In case of imbalance, a new randomization can be performed.

[14]:

# Generate random data
np.random.seed(42)
df = pd.DataFrame(data={'user_id': np.arange(1000),
                        'points': np.random.randint(100, 500, size=1000),
                        'collected_bonus': np.random.randint(2000, 7000, size=1000),
                        'is_paying': np.random.randint(0, 2, size=1000),
                        'level': np.random.randint(1, 7, size=1000)})
df.head()

[14]:

   user_id  points  collected_bonus  is_paying  level
0        0     202             6580          1      4
1        1     448             4075          0      5
2        2     370             2713          1      6
3        3     206             3062          0      3
4        4     171             3976          0      5

Single iteration

The cell below shows a single iteration of the homogeneity check analysis.

[15]:

# Run allocation
df, stats = Allocator.blocks_randomization(df=df,
                                           id_col='user_id',
                                           stratum_cols=['level', 'is_paying'],
                                           ngroups=2,
                                           seed=42)

[16]:

# Run homogeneity check analysis
X = df.drop(columns=['group'])
y = df['group']

analyzer = FrequentistAnalyzer()
analysis = analyzer.check_homogeneity(X, y, cat_cols=['is_paying','level'])

analysis

[16]:

                                            coef   std err             z  P>|z|    [0.025  0.975]
user_id                            -3.000000e-04  0.000000 -1.505000e+00  0.132 -0.001000  0.0001
points                              2.000000e-04  0.001000  3.660000e-01  0.714 -0.001000  0.0010
collected_bonus                     6.935000e-05  0.000044  1.559000e+00  0.119 -0.000018  0.0000
C(is_paying, Treatment('1'))[T.0]   8.000000e-03  0.127000  6.300000e-02  0.950 -0.240000  0.2560
C(level, Treatment('3'))[T.1]      -1.180000e-02  0.215000 -5.500000e-02  0.956 -0.433000  0.4090
C(level, Treatment('3'))[T.2]       1.440000e-02  0.226000  6.400000e-02  0.949 -0.429000  0.4580
C(level, Treatment('3'))[T.4]      -1.646000e-16  0.213000 -7.740000e-16  1.000 -0.417000  0.4170
C(level, Treatment('3'))[T.5]      -1.628000e-16  0.215000 -7.570000e-16  1.000 -0.422000  0.4220
C(level, Treatment('3'))[T.6]      -1.628000e-16  0.214000 -7.590000e-16  1.000 -0.420000  0.4200

The check_homogeneity function performs a univariate logistic regression for each feature of the input dataset. If the p-value (column P>|z| in the table above) of any variable is below a certain threshold (e.g. threshold = 0.2), the random allocation is considered non-homogeneous and must be repeated. For instance, in the table above the variable collected_bonus is not homogeneously split across groups (p-value = 0.119 < 0.2).
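To illustrate what a univariate logistic regression check computes, the sketch below fits group ~ 1 + feature with Newton-Raphson and derives a two-sided Wald p-value for the slope, using only NumPy and the standard library. This is a didactic sketch of the idea, not abexp's implementation (which also handles categorical encodings, as the C(...) rows above show).

```python
import math
import numpy as np

def logit_pvalue(x, y, iters=25):
    """Univariate logistic regression y ~ 1 + x; Wald p-value for the slope (a sketch)."""
    X = np.column_stack([np.ones_like(x, dtype=float), np.asarray(x, dtype=float)])
    y = np.asarray(y, dtype=float)
    beta = np.zeros(2)
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ beta))        # predicted P(group=1)
        W = p * (1 - p)
        H = X.T @ (X * W[:, None])                 # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))  # Newton-Raphson step
    cov = np.linalg.inv(H)                         # asymptotic covariance of beta
    z = beta[1] / math.sqrt(cov[1, 1])
    return math.erfc(abs(z) / math.sqrt(2))        # two-sided normal p-value

rng = np.random.default_rng(0)
points = rng.integers(100, 500, size=500)   # feature unrelated to the groups
group = rng.integers(0, 2, size=500)        # random 2-group assignment
print(round(logit_pvalue(points, group), 3))
```

A feature that is independent of the assignment should typically produce a large p-value, while a feature correlated with the groups drives the p-value toward zero, flagging the allocation as non-homogeneous.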

Multiple iterations

[17]:

# Generate random data
np.random.seed(42)
df = pd.DataFrame(data={'user_id': np.arange(1000),
                        'points': np.random.randint(100, 500, size=1000),
                        'collected_bonus': np.random.randint(2000, 7000, size=1000),
                        'is_paying': np.random.randint(0, 2, size=1000),
                        'level': np.random.randint(1, 7, size=1000)})
df.head()

[17]:

   user_id  points  collected_bonus  is_paying  level
0        0     202             6580          1      4
1        1     448             4075          0      5
2        2     370             2713          1      6
3        3     206             3062          0      3
4        4     171             3976          0      5

In the cell below we repeatedly perform random allocation until it creates homogeneous groups (up to a maximum number of iterations). The groups are considered homogeneous when the p-values (column P>|z|) of all variables are above a certain threshold (e.g. p-values > 0.2).

[18]:

# Define parameters
rep = 100
threshold = 0.2

analyzer = FrequentistAnalyzer()

for i in np.arange(rep):

    # Run allocation
    df, stats = Allocator.blocks_randomization(df=df,
                                               id_col='user_id',
                                               stratum_cols=['level', 'is_paying'],
                                               ngroups=2,
                                               seed=i + 45)

    # Run homogeneity check analysis
    X = df.drop(columns=['group'])
    y = df['group']

    analysis = analyzer.check_homogeneity(X, y, cat_cols=['is_paying', 'level'])

    # Check p-values: stop once all variables are homogeneous
    if all(analysis['P>|z|'] > threshold):
        break

    df = df.drop(columns=['group'])

analysis

[18]:

                                            coef   std err             z  P>|z|    [0.025  0.975]
user_id                            -1.000000e-04  0.000000 -5.640000e-01  0.573 -0.001000   0.000
points                              2.000000e-04  0.001000  3.200000e-01  0.749 -0.001000   0.001
collected_bonus                     2.449000e-05  0.000044  5.520000e-01  0.581 -0.000063   0.000
C(is_paying, Treatment('1'))[T.0]   1.570000e-02  0.127000  1.240000e-01  0.901 -0.232000   0.264
C(level, Treatment('3'))[T.1]      -1.180000e-02  0.215000 -5.500000e-02  0.956 -0.433000   0.409
C(level, Treatment('3'))[T.2]      -1.440000e-02  0.226000 -6.400000e-02  0.949 -0.458000   0.429
C(level, Treatment('3'))[T.4]      -9.064000e-17  0.213000 -4.260000e-16  1.000 -0.417000   0.417
C(level, Treatment('3'))[T.5]      -9.236000e-17  0.215000 -4.290000e-16  1.000 -0.422000   0.422
C(level, Treatment('3'))[T.6]      -9.237000e-17  0.214000 -4.310000e-16  1.000 -0.420000   0.420