
Spotlight: Leemay Nassery (Author) AMA and Interview!
Building a Culture of Experimentation with Leemay Nassery
@leemay
With the right mindset, architecture, and processes, any team can use A/B testing to make better product decisions.
We sat down with Leemay Nassery, engineering leader and author of Practical A/B Testing and Next Level A/B Testing, to talk about how teams can start small, learn fast, and use experiments to both grow and streamline their products.
INTERVIEW
Watch the complete interview here:
WIN!
We’re giving away one of Leemay’s books to one lucky winner! Simply post a comment or a question in the ask me anything (AMA) below, and the Devtalk bot will randomly pick a winner at a time of the author’s choosing…then automatically update this thread with the results!
INTERVIEW (abridged)
Introducing Leemay
Leemay Nassery is an engineering leader specializing in experimentation and personalization. With a notable track record that includes evolving Spotify’s A/B testing strategy for the Homepage, launching Comcast’s For You page, and establishing data warehousing teams at Etsy, she firmly believes that the key to innovation at any company is the ability to experiment effectively.
On starting small…
New to A/B testing? Keep it simple. You need a way to tag users, a way to serve them the right version, and a data pipeline to measure the impact. “You don’t have to have something that’s rolled out entirely end-to-end,” Leemay advises. “For example, if you own the backend service that serves as the front door between your mobile or desktop client and the server-side systems, you could introduce a feature flag system into that backend service to run your first basic A/B test.”
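
To make that concrete, here’s a minimal sketch of what a first feature-flag system along those lines could look like. This is our illustration, not code from the book: the function names, the `homepage-v2` experiment, and the `print`-based exposure logging are all assumptions standing in for real infrastructure. It covers Leemay’s three ingredients: tag users, serve them the right version, and emit an event the data pipeline can measure.

```python
import hashlib


def assign_variant(user_id: str, experiment: str, treatment_pct: float = 0.5) -> str:
    """Deterministically bucket a user into 'control' or 'treatment'.

    Salting the hash with the experiment name keeps assignments stable
    across requests and independent across experiments, with no
    per-user state to store.
    """
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to a uniform value in [0, 1]
    return "treatment" if bucket < treatment_pct else "control"


def log_exposure(user_id: str, experiment: str, variant: str) -> None:
    # Stand-in for the measurement pipeline: in practice this would emit
    # an event that downstream jobs join against success metrics.
    print(f"exposure user={user_id} experiment={experiment} variant={variant}")


def handle_request(user_id: str) -> str:
    variant = assign_variant(user_id, "homepage-v2")
    log_exposure(user_id, "homepage-v2", variant)
    return "new homepage" if variant == "treatment" else "current homepage"


if __name__ == "__main__":
    for uid in ("alice", "bob", "carol"):
        print(uid, "->", handle_request(uid))
```

Hashing instead of storing per-user assignments keeps the system stateless, which is part of why this kind of first test fits neatly inside an existing backend service.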
On culture and capability…
How you use A/B testing depends on your platform’s capacity and on the culture of your product organization. “Some organizations A/B test as much as they can, with the understanding that A/B testing is cheap (versus the cost of launching a feature that degrades metrics); their experimentation platform, their system architecture, and the way they run A/B tests enable any idea to be tested,” says Leemay. “There are other situations where it’s a bit of a hybrid approach: some changes are A/B tested because the team needs to understand the cost of introducing that change into production, but some aren’t, because the platform’s capacity or the rigor required to enable the test takes too much time. In those cases, teams might do analysis after the fact,” she continues. “The approach is very contextual. It depends on the A/B testing platform itself and the larger culture of the product organization.”
On efficiency and removing features…
A/B tests can help you clean up your code base. “If you’re running degradation tests, you’re removing code that…potentially isn’t serving your product or engineering needs,” Leemay notes. “A/B tests can kind of illustrate that where otherwise…the feature would just persist.” When you have a system that is carrying the weight of ten years of code, the ability to remove things can be powerful. Beyond just tidying the code base, this practice keeps products leaner, easier to maintain, and more adaptable to future changes. A/B testing gives you the evidence to confidently decide which features truly add value and which ones can be retired. Too often, teams leave behind code or features that aren’t impactful, but keeping them around only creates extra QA overhead, more integration testing, and unnecessary complexity in the long run.
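
A degradation (or holdback) test like the ones Leemay mentions can be as simple as turning a feature off for a small slice of users and watching the metrics. Here’s a hypothetical sketch, reusing the same hash-based bucketing as above; the `remove-legacy-rail` experiment and `render_homepage` are made up for illustration:

```python
import hashlib


def in_holdback(user_id: str, experiment: str, holdback_pct: float = 0.05) -> bool:
    """Place a small, stable slice of users into a 'feature off' group."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return int(digest[:8], 16) / 0xFFFFFFFF < holdback_pct


def render_homepage(user_id: str) -> list:
    sections = ["continue-watching", "trending"]
    # Degradation test: serve the page *without* the legacy rail for the
    # holdback group. If success metrics don't move, the rail (and its
    # code, QA surface, and integration tests) becomes a removal candidate.
    if not in_holdback(user_id, "remove-legacy-rail"):
        sections.append("legacy-rail")
    return sections


if __name__ == "__main__":
    print(render_homepage("alice"))
```

If the holdback group’s metrics hold steady, you have the evidence Leemay describes: the feature, along with its QA overhead and integration tests, can be retired with confidence.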
On curiosity…
Success metrics tell part of the story, but the real value comes from digging into how a change affects the full user experience. Leemay says, “It’s very easy to run an A/B test and guard rail success metrics—they are the gains that we expect—and then continue forward. It takes an extra level of curiosity to say, ‘Let’s look deeper—like is there a specific user that has a specific device type…in a part of the world that has lower bandwidth? What is the impact of that?’ But then, even if you understood that, like let’s say an A/B test evaluated a new feature, and for users that had poor bandwidth, the metrics weren’t as great. What would you do with that?” Leemay continues. “That’s a key part of the curiosity. Once you know something, will you be able to do something as a result?”
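
That “extra level of curiosity” usually translates into slicing results by segment rather than stopping at the top-line number. Here’s a small illustrative sketch, with made-up rows standing in for a real exposure/metrics pipeline:

```python
from collections import defaultdict

# Made-up per-user results annotated with a segment dimension (connection
# quality), the kind of slice Leemay suggests examining beyond the
# top-line metric. Real rows would come from the exposure/metrics pipeline.
results = [
    {"variant": "control",   "bandwidth": "high", "converted": 1},
    {"variant": "treatment", "bandwidth": "high", "converted": 1},
    {"variant": "control",   "bandwidth": "low",  "converted": 1},
    {"variant": "treatment", "bandwidth": "low",  "converted": 0},
]


def conversion_by_segment(rows):
    """Conversion rate per (segment, variant) pair."""
    totals = defaultdict(lambda: [0, 0])  # key -> [conversions, users]
    for row in rows:
        key = (row["bandwidth"], row["variant"])
        totals[key][0] += row["converted"]
        totals[key][1] += 1
    return {key: conv / users for key, (conv, users) in totals.items()}


for (bandwidth, variant), rate in sorted(conversion_by_segment(results).items()):
    print(f"bandwidth={bandwidth:<4} variant={variant:<9} conversion={rate:.2f}")
```

A gap that only shows up in the low-bandwidth slice is exactly the kind of finding Leemay is describing; the harder question, as she notes, is what you’ll do once you know.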
On gains and goal posts…
Incremental improvements eventually hit a ceiling, making it necessary to reassess your goals and sometimes rethink the underlying approach. Leemay illustrates the idea: “If you are working on a product that, for example, has no machine learning and you introduce your first algorithm, that’s like going from zero to one; you’re likely going to see some pretty impressive gains. But then you keep iterating on that model, that same model, and you keep making adjustments—adding new features, tweaking the design slightly—and you will get to a point where you’re squeezing a stone and you’re not able to get more gains from it.” She continues, “Sometimes you have to switch the goalpost; you have to pivot…revisit the underlying technology.” Leemay notes, “When practicing A/B tests on a long enough timeline on the same product space or the same product area…it’s a bit harder to move the metrics.”
Now that you know her story, add Leemay’s books to your library today! Don’t forget to use coupon code devtalk.com to save 35 percent on the ebooks:
Follow Leemay:
Substack, @experimenting
LinkedIn, leemaynassery
YOUR TURN!
We’re now opening up the thread for your questions! Ask Leemay anything! Please keep it clean, and don’t forget that by participating, you automatically enter the competition to win one of her ebooks!