
Margaret
Spotlight: Leemay Nassery (Author) AMA and Interview!
Building a Culture of Experimentation with
Leemay Nassery
@leemay
With the right mindset, architecture, and processes, any team can use A/B testing to make better product decisions.
We sat down with Leemay Nassery, engineering leader and author of Practical A/B Testing and Next-Level A/B Testing, to talk about how teams can start small, learn fast, and use experiments to both grow and streamline their products.
INTERVIEW
Watch the complete interview here:
WIN!
We’re giving away one of Leemay’s books to one lucky winner! Simply post a comment or a question in the ask me anything (AMA) below, and the Devtalk bot will randomly pick a winner at a time of the author’s choosing…then automatically update this thread with the results!
INTERVIEW (abridged)
Introducing Leemay
Leemay Nassery is an engineering leader specializing in experimentation and personalization. With a notable track record that includes evolving Spotify’s A/B testing strategy for the Homepage, launching Comcast’s For You page, and establishing data warehousing teams at Etsy, she firmly believes that the key to innovation at any company is the ability to experiment effectively.
On starting small…
New to A/B testing? Keep it simple. You need a way to tag users, a way to serve them the right version, and a data pipeline to measure the impact. “You don’t have to have something that’s rolled out entirely end-to-end,” Leemay advises. “For example, if you own the backend service that serves as the front door between your mobile or desktop client and the server-side systems, you could introduce a feature flag system into that backend service to run your first basic A/B test.”
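To make that concrete, here is a minimal sketch in Python of what such a first feature flag check could look like inside a backend “front door” service. The experiment key, rollout fraction, and function names are all hypothetical, and a real service would log exposures to its data pipeline rather than printing them.

```python
# Hypothetical sketch: a hash-based feature flag inside a backend "front door"
# service, enough to run a first basic A/B test. Names are illustrative.
import hashlib

ROLLOUT_FRACTION = 0.5  # share of users assigned to the treatment arm

def assign_variant(user_id: str, experiment: str) -> str:
    """Deterministically bucket a user so they always see the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return "treatment" if int(digest[:8], 16) / 0xFFFFFFFF < ROLLOUT_FRACTION else "control"

def serve_homepage(user_id: str) -> dict:
    variant = assign_variant(user_id, "new-homepage-layout")
    # In a real service this exposure event would flow into the measurement
    # pipeline; printing stands in for that here.
    print(f"exposure user={user_id} experiment=new-homepage-layout variant={variant}")
    return {"layout": "new" if variant == "treatment" else "current"}

if __name__ == "__main__":
    print(serve_homepage("user-123"))
```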
On culture and capability…
How you use A/B testing depends on your platform capacity and the culture of your product organization. “Some organizations A/B test as much as they can, with the understanding that A/B testing is cheap (versus the cost of launching a feature that degrades metrics); their experimentation platform, their system architecture, and the way they run A/B tests enable any idea to be tested,” says Leemay. “There are other situations where it’s a bit of a hybrid approach: some changes are A/B tested because the team needs to understand the cost of introducing that change into production, but some aren’t, because the platform’s capacity or the rigor required to enable the test takes too much time. In those cases, teams might do analysis after the fact,” she continues. “The approach is very contextual. It depends on the A/B testing platform itself and the larger culture of the product organization.”
On efficiency and removing features…
A/B tests can help you clean up your code base. “If you’re running degradation tests, you’re removing code that…potentially isn’t serving your product or engineering needs,” Leemay notes. “A/B tests can kind of illustrate that where otherwise…the feature would just persist.” When you have a system that is carrying the weight of ten years of code, the ability to remove things can be powerful. Beyond just tidying the code base, this practice keeps products leaner, easier to maintain, and more adaptable to future changes. A/B testing gives you the evidence to confidently decide which features truly add value and which ones can be retired. Too often, teams leave behind code or features that aren’t impactful, but keeping them around only creates extra QA overhead, more integration testing, and unnecessary complexity in the long run.
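As a rough illustration of what a degradation test can look like (the rail names below are invented, not from the book), the treatment arm simply omits the feature you suspect has stopped earning its keep, and guardrail metrics tell you whether anyone misses it:

```python
# Hypothetical degradation test: the treatment arm drops a legacy feature.
def build_home_rails(variant: str) -> list[str]:
    rails = ["continue_watching", "trending", "new_releases"]
    if variant == "control":
        # Legacy rail kept only in control; if treatment metrics hold steady,
        # the code behind it can finally be deleted.
        rails.append("legacy_recommendations")
    return rails

print(build_home_rails("control"))
print(build_home_rails("treatment"))
```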
On curiosity…
Success metrics tell part of the story, but the real value comes from digging into how a change affects the full user experience. Leemay says, “It’s very easy to run an A/B test and guard rail success metrics—they are the gains that we expect—and then continue forward. It takes an extra level of curiosity to say, ‘Let’s look deeper—like is there a specific user that has a specific device type…in a part of the world that has lower bandwidth? What is the impact of that?’ But then, even if you understood that, like let’s say an A/B test evaluated a new feature, and for users that had poor bandwidth, the metrics weren’t as great. What would you do with that?” Leemay continues. “That’s a key part of the curiosity. Once you know something, will you be able to do something as a result?”
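One way to act on that curiosity is to slice results by segment rather than stopping at the topline number. A hedged sketch with pandas, using made-up columns and values purely for illustration:

```python
# Illustrative only: compare conversion by variant and by a segment such as
# bandwidth tier, instead of looking at the topline alone.
import pandas as pd

events = pd.DataFrame({
    "variant":   ["control", "control", "treatment", "treatment", "treatment", "control"],
    "bandwidth": ["high", "low", "high", "low", "low", "high"],
    "converted": [1, 0, 1, 0, 0, 1],
})

print(events.groupby("variant")["converted"].mean())
print(events.groupby(["variant", "bandwidth"])["converted"].mean())
```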
On gains and goal posts…
Incremental improvements eventually hit a ceiling, making it necessary to reassess your goals and sometimes rethink the underlying approach. Leemay illustrates the idea: “If you are working on a product that, for example, has no machine learning and you introduce your first algorithm, that’s like going from zero to one; you’re likely going to see some pretty impressive gains. But then you keep iterating on that model, that same model, and you keep making adjustments—adding new features, tweaking the design slightly—and you will get to a point where you’re squeezing a stone and you’re not able to get more gains from it.” She continues, “Sometimes you have to switch the goalpost; you have to pivot…revisit the underlying technology.” Leemay notes, “When practicing A/B tests on a long enough timeline on the same product space or the same product area…it’s a bit harder to move the metrics.”
Now that you know her story, add Leemay’s books to your library today! Don’t forget to use coupon code devtalk.com to save 35 percent on the ebooks:
Follow Leemay:
Substack, @experimenting
LinkedIn, leemaynassery
YOUR TURN!
We’re now opening up the thread for your questions! Ask Leemay anything! Please keep it clean, and don’t forget that by participating, you automatically enter the competition to win one of her ebooks!
Most Liked

alvinkatojr
Hi Leemay,
I’m a bit new to all of this, so pardon the naivete. But since you are here, let’s get to it.
- Your book has no software requirements, but it offers sound practical advice. If I were to start right now, what software would you recommend?
- Also, on software: I noticed in your Substack you used Convert (and even interviewed them), but their solution seems pricey for starters/beginners. What are some affordable options for A/B testing?
- Next, the A/B testing phobia. To me, the hard part of A/B testing is not really the application/implementation of it (after all, it’s just thoughts, math, and software). I think what companies and people seem to fear most is the negative feedback the testing process may reveal, i.e. “this feature sucks, I hate this update, get rid of this, bring back the old, if you don’t change it back, we are leaving!” Have you experienced any cases like this in your career? Any advice for the phobics?
- Finally, how heavy or demanding is the number crunching? I know theoretically that we have metrics and data pipelines, so from the hardware side of things, we are good, but what about the human resources? Since we are testing hypotheses, the math geek in me is already thinking of null hypotheses, probabilities and the like. But what about the non-math geeks? Should they be scared? Do we need some data scientists/statisticians?
- Bonus question: What is the native pronunciation of your first name, Leemay? From the video, both yours and Dave’s pronunciations sounded very American and the “y” at the end was silent.
Also, fun fact: your surname Nassery sounds somewhat similar to the Swahili word jasiri, meaning brave, which coincidentally is also the historical meaning of your surname in Persian.
Thanks in advance for the answers, and thank you, team Pragmatic, for another AMA!

leemay
Hey @alvinkatojr, thanks for the thoughtful questions. I’ll do my best to answer them.
To start, you’re so right…Convert and Optimizely are powerful but can feel expensive for smaller teams. Affordable (or free) options include GrowthBook, an open-source platform that lets you run experiments with your existing analytics stack, and PostHog, an open-source product analytics tool with experimentation built in.
That being said, if you’re just starting off, consider forking the feature flag platform that might already exist where you work, or even a manual “split” test (dividing the user base into two groups that see different versions, directing traffic by hand) is a good starting point. The key isn’t the fanciest platform but designing a clean test and learning how to interpret the results.
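Just to make the manual version concrete, here’s a tiny, entirely hypothetical sketch (the user IDs are made up): a hand-picked cohort sees version B, everyone else sees version A, and recording the assignment is what lets you compare outcomes later.

```python
# Hypothetical manual split: a hand-maintained cohort gets version B.
VERSION_B_USERS = {"u-1001", "u-1002", "u-1003"}  # made-up user IDs

def version_for(user_id: str) -> str:
    return "B" if user_id in VERSION_B_USERS else "A"

for uid in ["u-1001", "u-2000"]:
    # Log or store this assignment so outcomes can be compared per version.
    print(uid, version_for(uid))
```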
As for the phobia question: this is so real. I’ve seen teams hesitate to test because they’re used to simply shipping straight to production and consider that a “win” in itself. The engineering and product culture should ideally be open to insights; if they’re going to ship the feature regardless and aren’t a metrics-focused organization, then it’ll be hard to shift towards an experimentation-driven company. I think the best advice here is to do something of a skunkworks effort where you run an A/B test to illustrate the value proposition, and if you can tie it to a feature that product has major interest in, it’s more likely to be successful.
The number crunching can sound intimidating (p-values, power analysis, null hypotheses), but much of the heavy lifting is built into most A/B testing platforms (if one exists), or you could rely on a data scientist for this area. Statistical literacy does help, but you don’t need a PhD to make sound decisions.
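If you do want to peek under the hood, a two-proportion z-test is a common workhorse for comparing conversion rates. A rough sketch with statsmodels and completely made-up numbers:

```python
# Rough sketch with invented numbers: compare conversion rates of two arms.
from statsmodels.stats.proportion import proportions_ztest

conversions = [320, 365]   # converted users in control, treatment (made up)
exposures = [5000, 5000]   # users exposed to each arm (made up)

z_stat, p_value = proportions_ztest(conversions, exposures)
print(f"z={z_stat:.2f}, p={p_value:.4f}")
```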
Lastly…that’s so interesting regarding my name translation. I learned something new!