Margaret

Editor at PragProg

Spotlight: Leemay Nassery (Author) AMA and Interview!

Building a Culture of Experimentation with
Leemay Nassery
@leemay

With the right mindset, architecture, and processes, any team can use A/B testing to make better product decisions.

We sat down with Leemay Nassery, engineering leader and author of Practical A/B Testing and Next-Level A/B Testing, to talk about how teams can start small, learn fast, and use experiments to both grow and streamline their products.

INTERVIEW

Watch the complete interview here:

WIN!

We’re giving away one of Leemay’s books to one lucky winner! Simply post a comment or a question in the ask me anything (AMA) below, and the Devtalk bot will randomly pick a winner at a time of the author’s choosing…then automatically update this thread with the results!


INTERVIEW (abridged)

Introducing Leemay

Leemay Nassery is an engineering leader specializing in experimentation and personalization. With a notable track record that includes evolving Spotify’s A/B testing strategy for the Homepage, launching Comcast’s For You page, and establishing data warehousing teams at Etsy, she firmly believes that the key to innovation at any company is the ability to experiment effectively.

On starting small…

New to A/B testing? Keep it simple. You need a way to tag users, a way to serve them the right version, and a data pipeline to measure the impact. “You don’t have to have something that’s rolled out entirely end-to-end,” Leemay advises. “For example, if you own the backend service that serves as the front door between your mobile or desktop client and the server-side systems, you could introduce a feature flag system into that backend service to run your first basic A/B test.”
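A first feature-flag check in that backend service can be as small as a deterministic hash of the user ID. Here's a minimal sketch of that idea (the function and experiment names are hypothetical, not from the book):

```python
import hashlib

def assign_variant(user_id: str, experiment: str,
                   variants=("control", "treatment")) -> str:
    """Deterministically bucket a user into a variant.

    Hashing the user ID together with the experiment name means the
    same user always sees the same variant, and different experiments
    get independent assignments.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest, 16) % len(variants)
    return variants[bucket]

# The same user always lands in the same bucket for a given experiment,
# so no assignment table is needed to serve consistent experiences.
variant = assign_variant("user-42", "new-homepage")
```

Because assignment is stateless, the only remaining pieces are logging which variant each user saw and joining that log with your metrics pipeline.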

On culture and capability…

How you use A/B testing depends on your platform capacity and the culture of your product organization. “Some organizations A/B test as much as they can, with the understanding that A/B testing is cheap (versus the cost of launching a feature that degrades metrics); their experimentation platform, their system architecture, and the way they run A/B tests enable any idea to be tested,” says Leemay. “There are other situations where it’s a bit of a hybrid approach: some changes are A/B tested because the team needs to understand the cost of introducing that change into production, but some aren’t, because the platform’s capacity or the rigor required to enable the test takes too much time. In those cases, teams might do analysis after the fact,” she continues. “The approach is very contextual. It depends on the A/B testing platform itself and the larger culture of the product organization.”

On efficiency and removing features…

A/B tests can help you clean up your code base. “If you’re running degradation tests, you’re removing code that…potentially isn’t serving your product or engineering needs,” Leemay notes. “A/B tests can kind of illustrate that where otherwise…the feature would just persist.” When you have a system that is carrying the weight of ten years of code, the ability to remove things can be powerful. Beyond just tidying the code base, this practice keeps products leaner, easier to maintain, and more adaptable to future changes. A/B testing gives you the evidence to confidently decide which features truly add value and which ones can be retired. Too often, teams leave behind code or features that aren’t impactful, but keeping them around only creates extra QA overhead, more integration testing, and unnecessary complexity in the long run.

On curiosity…

Success metrics tell part of the story, but the real value comes from digging into how a change affects the full user experience. Leemay says, “It’s very easy to run an A/B test and guard rail success metrics—they are the gains that we expect—and then continue forward. It takes an extra level of curiosity to say, ‘Let’s look deeper—like is there a specific user that has a specific device type…in a part of the world that has lower bandwidth? What is the impact of that?’ But then, even if you understood that, like let’s say an A/B test evaluated a new feature, and for users that had poor bandwidth, the metrics weren’t as great. What would you do with that?” Leemay continues. “That’s a key part of the curiosity. Once you know something, will you be able to do something as a result?”

On gains and goal posts…

Incremental improvements eventually hit a ceiling, making it necessary to reassess your goals and sometimes rethink the underlying approach. Leemay illustrates this idea: “If you are working on a product that, for example, has no machine learning and you introduce your first algorithm, that’s like going from zero to one; you’re likely going to see some pretty impressive gains. But then you keep iterating on that model, that same model, and you keep making adjustments—adding new features, tweaking the design slightly—you will get to a point where you’re squeezing a stone and you’re not able to get more gains from it.” She continues, “Sometimes you have to switch the goalpost; you have to pivot…revisit the underlying technology.” Leemay notes, “When practicing A/B tests on a long enough timeline on the same product space or the same product area…it’s a bit harder to move the metrics.”


Now that you know her story, add Leemay’s books to your library today! Don’t forget to use coupon code devtalk.com to save 35 percent on the ebooks:

book-next-level-a-b-testing

book-practical-a-b-testing


Follow Leemay:

Substack, @experimenting

X, @leemaynassery

LinkedIn, leemaynassery


YOUR TURN!

We’re now opening up the thread for your questions! Ask Leemay anything! Please keep it clean, and don’t forget that by participating, you automatically enter the competition to win one of her ebooks!


alvinkatojr

Hi Leemay,

I’m a bit new to all of this, so pardon the naivete. But since you are here, let’s get to it.

  1. Your book has no software requirements, but it offers sound practical advice. If I were to start right now, what software would you recommend?
  2. Also, on software: I noticed in your Substack that you used Convert (and even interviewed them), but their solution seems pricey for starters/beginners. What are some affordable options for A/B testing?
  3. Next, the A/B testing phobia. To me, the hard part of A/B testing is not really the application/implementation of it (after all, it’s just thoughts, math, and software). I think what companies and people seem to fear most is the negative feedback the testing process may reveal, i.e., “this feature sucks,” “I hate this update,” “get rid of this,” “bring back the old, and if you don’t change it back, we are leaving!” Have you experienced any cases like this in your career? Any advice for the phobics?
  4. Finally, how heavy or demanding is the number crunching? I know theoretically that we have metrics and data pipelines, so from the hardware side of things, we are good, but what about the human resources? Since we are testing hypotheses, the math geek in me is already thinking of null hypotheses, probabilities and the like. But what about the non-math geeks? Should they be scared? Do we need some data scientists/statisticians?
  5. Bonus question: What is the native pronunciation of your first name, Leemay? From the video, both yours and Dave’s pronunciations sounded very American and the “y” at the end was silent.

Also, fun fact: your surname, Nassery, sounds somewhat similar to the Swahili word jasiri, meaning brave, which coincidentally matches the historical meaning of your surname in Persian :slight_smile:

Thanks in advance for the answers, and thank you, team Pragmatic, for another AMA!

leemay

Author of Practical A/B Testing

Thank you @Margaret for the spotlight!

leemay

Author of Practical A/B Testing

Hey @alvinkatojr! Thanks for the thoughtful questions. I’ll do my best to answer them.
To start, you’re so right…Convert and Optimizely are powerful but can feel expensive for smaller teams. Affordable (or free) options include platforms like GrowthBook (open source), which lets you run experiments with your existing analytics stack, and PostHog, an open-source product analytics tool with experimentation built in.

That being said, if you’re just starting off, consider forking the feature flag platform that might already exist where you work, or even a manual “split” test (two different versions served to a divided user base, directing traffic manually) is a good starting point. The key isn’t the fanciest platform but designing a clean test and learning how to interpret the results.

As for the phobia question: this is so real. I’ve seen teams hesitate to test because they’re used to simply shipping straight to production and consider that a “win” in itself. The engineering and product culture should ideally be open to insights; if they’re going to ship the feature regardless and aren’t a metrics-focused organization, then…it’ll be hard to shift toward an experimentation-driven company. I think the best advice here is to do somewhat of a skunkworks effort where you run an A/B test to illustrate the value proposition, and if you can tie it to a feature that product has major interest in, you’re more likely to be successful.

The number crunching can sound intimidating (p-values, power analysis, null hypotheses), but much of the heavy lifting is built into most A/B testing platforms (if one exists), or you could rely on a data scientist for this area. Statistical literacy does help, but you don’t need a PhD to make sound decisions.
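To give a feel for the kind of number crunching a platform typically automates, here is a minimal two-proportion z-test sketch using only the Python standard library; the conversion counts are made up for illustration:

```python
from math import erf, sqrt

def two_proportion_z_test(conv_a, n_a, conv_b, n_b):
    """Two-sided z-test for a difference in conversion rates.

    conv_a/conv_b are conversion counts, n_a/n_b are sample sizes.
    Returns the z statistic and the two-sided p-value.
    """
    p_a, p_b = conv_a / n_a, conv_b / n_b
    p_pool = (conv_a + conv_b) / (n_a + n_b)  # pooled rate under H0
    se = sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_b - p_a) / se
    # Normal CDF via erf; two-sided p-value
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical experiment: 12% vs. 13.5% conversion, 10,000 users per arm.
z, p = two_proportion_z_test(1200, 10_000, 1350, 10_000)
print(f"z = {z:.2f}, p = {p:.4f}")
```

A platform wraps exactly this kind of calculation (plus power analysis and multiple-comparison corrections) behind a dashboard, which is why most practitioners never compute it by hand.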

Lastly…that’s so interesting regarding my name translation. I learned something new!
