Massive difference between A/B testing variations and the winner


I set up an A/B test and got a winning variation (open rate: 23.4%), which was then sent to the remaining audience, but we recorded a poor open rate of only 1.8%.

Since I seeded myself into the audience, I can confirm I received the winning variation in my inbox (not spam).

A 2% open rate seems wrong, given that both variations recorded an open rate of approximately 23%.

We have already waited 48 hours, so I assume we won’t see many more opens.

How can you explain this massive difference?


There are actually a LOT of conditions that can drastically affect this. Simply sending at a different time of day can do it. I also assume you didn’t send both attempts to the same list, or that you sent different emails on each attempt? Either of those can affect rates too. Even just a different subject line can produce massive differences.

I’m not saying I know any of these conditions apply here, but so many factors can notably affect these numbers that without more details (send time, subject line, email content, majority mail server/client, prior engagement differences, etc.) it’s hard to say for sure.

That is actually the point of A/B split testing: not just to compare two different emails, but, more often, to see the results of more “minor” changes like different wording in a subject line.

@John_Borelli > I realise that the time of day affects open rate, but such a drastic change is very unusual.
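To put a number on how unusual this is: a quick two-proportion z-test shows the gap between 23.4% and 1.8% is far too large to be sampling noise. The sample sizes below are purely hypothetical (the thread doesn't state list sizes), so this is only an order-of-magnitude sketch:

```python
from math import sqrt

def two_proportion_z(p1, n1, p2, n2):
    """Z-statistic for the difference between two proportions (pooled SE)."""
    p_pool = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = sqrt(p_pool * (1 - p_pool) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Hypothetical sizes: 1,000 contacts in the test variation,
# 10,000 in the remaining audience.
z = two_proportion_z(0.234, 1000, 0.018, 10000)
print(z)  # well above the ~1.96 threshold for a 5% significance level
```

With any plausible list size, z comes out in the tens, so the drop is driven by a real cause (deliverability, audience, timing) rather than random variation.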

You raise a good point about the audiences of variation A, variation B, and the winning variation: does Infusionsoft automatically split the three lists without duplicates? My assumption is that it does.

Because if it doesn’t, then yes, that would affect the open rate.
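For reference, a clean A/B split partitions the audience into three disjoint groups, so no contact receives more than one send. A minimal sketch of that kind of split; the 10%/10%/80% ratios and the contact list are illustrative assumptions, not Infusionsoft's actual behaviour:

```python
import random

def ab_split(contacts, test_fraction=0.10, seed=42):
    """Partition contacts into disjoint A, B, and remainder groups."""
    shuffled = list(contacts)
    random.Random(seed).shuffle(shuffled)  # fixed seed for repeatability
    n_test = int(len(shuffled) * test_fraction)
    group_a = shuffled[:n_test]
    group_b = shuffled[n_test:2 * n_test]
    remainder = shuffled[2 * n_test:]  # receives the winning variation
    return group_a, group_b, remainder

a, b, rest = ab_split(range(10_000))
# Disjoint by construction: every contact lands in exactly one group.
assert len(a) + len(b) + len(rest) == 10_000
assert not (set(a) & set(b)) and not (set(a) & set(rest))
```

If a platform did *not* deduplicate, a contact could appear in both a test group and the remainder, which would depress the measured open rate of the final send.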
