When the morning news starts talking about A/B testing, of course my ears perk up and I eat my cereal a little slower. But when it becomes a conversation about whether it is ethical to use customer usage data to iteratively improve a product, it spawns this blog post.
It’s hard to believe that A/B testing would be a controversial thing, much less a topic for the morning news. Most of us are probably fascinated by the test that Facebook recently ran on 600,000+ users. They tested the impact of positive versus negative sentiment in users’ feeds to see whether it correlated with the sentiment of those users’ own posts. Personally, having an NLP background and being a big fan of A/B testing, I had to give Facebook a golf clap for what they accomplished.
But for those not in the space, the process of A/B testing felt like an invasion of privacy. Will this issue repeat in the future?
The vast majority of users have no idea that the version of Google, Facebook, Etsy, etc. that they see at any given moment is likely not the same one their friends are seeing, and that the behavior they demonstrate on the site is being used to help refine it.
Today a select few applications are able to leverage the power of large numbers and continuous delivery to achieve the mecca of A/B testing. When they do, it’s awesome and powerful. This capability sits at the heart of a fully enabled DevOps pipeline and of growth hacking.
So the last thing any of us would expect is for the world to turn against the idea. After all, the reason people use these applications is that they continuously improve in a way that seems magically guided by user input. No need for focus groups: just show ten versions of your site to hundreds of thousands of people and see which one “does” the best.
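The mechanics behind “show versions and see which does best” are simple in outline. Below is a minimal, hypothetical sketch (the function names, the salt string, and the example numbers are all illustrative, not any particular company’s implementation): users are deterministically hashed into a variant so each person sees a consistent version, and conversion rates are then compared with a standard two-proportion z-test.

```python
import hashlib
import math

def assign_variant(user_id, variants, salt="exp1"):
    """Deterministically bucket a user into a variant by hashing,
    so the same user always sees the same version of the site."""
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

def two_proportion_z(conv_a, n_a, conv_b, n_b):
    """Z-statistic comparing two conversion rates (pooled standard error)."""
    p_a, p_b = conv_a / n_a, conv_b / n_b
    pooled = (conv_a + conv_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Bucketing is stable: a returning user keeps the same variant.
assert assign_variant("user42", ["A", "B"]) == assign_variant("user42", ["A", "B"])

# With large samples, even a small lift becomes detectable. Here a
# hypothetical 5.2% vs 5.0% conversion rate over 100k users each
# yields z ≈ 2.03, right around the usual 1.96 significance cutoff.
z = two_proportion_z(5200, 100_000, 5000, 100_000)
print(round(z, 2))
```

This is exactly why scale matters: the same 0.2-point lift over a few hundred users would be statistical noise, but over hundreds of thousands it becomes a clear signal.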
What might not be so palatable to users is the fact that the measure of quality is usually tied to how accurately the site serves ads that get clicks. So we feel exploited. But this economy of free, cool web applications in exchange for eyeballs is a system that the world of non-technical users asked for and essentially created; they just did not know what happened behind the scenes.
Thus you can tell where I stand: just deal with it; you asked for it. I say way to go, Facebook, and I would love to see more published reports of A/B tests run at this kind of volume. But what really interests me is how this will shape the way development teams A/B test in the future.
For now, and until market education or the ad economy changes, developers are going to have to think twice about the types of tests they run, or more likely about the results they publish. This episode might encourage a less transparent approach to A/B testing, or it might open the world’s eyes to the power of big data and crowd intelligence.
I would love to hear from those who find the testing offensive, and the reasons why.