ManningBooks

Devtalk Sponsor

AI Model Evaluation (Manning)

Before deploying an AI model into production, you need to know more than just its accuracy. Will it be fast enough for your users? Will it scale under real-world traffic? Can you trust its decisions in critical scenarios? AI Model Evaluation (Manning Publications) gives you the practical tools and strategies to answer these questions—and more—so you can ship AI systems that actually work in the real world.

Leemay Nassery

What you’ll learn in AI Model Evaluation:

  • Build diagnostic offline evaluations to uncover hidden model behaviors
  • Use shadow traffic to simulate production conditions safely
  • Design A/B tests to measure real business and product impact
  • Spot nuanced failures with human-in-the-loop feedback
  • Scale evaluations with LLMs as automated judges
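
The shadow-traffic idea in the list above can be sketched in a few lines: mirror each incoming request to the candidate model, log its output and latency, but only ever return the production model's response to the user. The function and model names below are illustrative stand-ins, not taken from the book.

```python
import time

def production_model(request):
    """The live model currently serving users (stand-in for illustration)."""
    return {"score": 0.92}

def candidate_model(request):
    """The new model under evaluation (stand-in for illustration)."""
    return {"score": 0.88}

def handle_request(request, shadow_log):
    """Serve the production response; mirror the request to the candidate
    and record its output and latency without affecting the user."""
    start = time.perf_counter()
    live = production_model(request)
    live_ms = (time.perf_counter() - start) * 1000

    start = time.perf_counter()
    shadow = candidate_model(request)  # logged for later analysis, never served
    shadow_ms = (time.perf_counter() - start) * 1000

    shadow_log.append({"request": request,
                       "live": live, "live_ms": live_ms,
                       "shadow": shadow, "shadow_ms": shadow_ms})
    return live  # users only ever see the production response

log = []
response = handle_request({"user_id": 42}, log)
```

Because the candidate runs on real traffic but its output is never shown, you can compare its predictions and latency against production safely before any launch decision.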

Author Leemay Nassery (Spotify, Comcast, Dropbox, Etsy) shares real-world insights on what it really takes to prepare models for production. You’ll go beyond standard accuracy metrics to evaluate latency, user experience, and long-term impact on product goals.

Inside the book:
Each chapter explores a different evaluation method, from offline testing and A/B experiments to shadow deployments and qualitative analysis. Hands-on examples, including a movie recommendation engine, make it easy to apply these techniques to your own AI projects.
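
For a recommendation engine like the book's movie example, a typical offline diagnostic is precision@k: of the top-k items recommended, how many did the user actually engage with? A minimal sketch (the movie titles and data here are made up for illustration):

```python
def precision_at_k(recommended, relevant, k):
    """Fraction of the top-k recommendations that appear in the relevant set."""
    top_k = recommended[:k]
    hits = sum(1 for item in top_k if item in relevant)
    return hits / k

# Hypothetical offline check for a movie recommender:
recommended = ["Heat", "Alien", "Se7en", "Up", "Jaws"]
watched = {"Alien", "Jaws", "Rocky"}
print(precision_at_k(recommended, watched, k=5))  # 2 of 5 were watched -> 0.4
```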


Don’t forget you can get 45% off with your Devtalk discount! Just use the coupon code “devtalk.com” at checkout :+1:

Most Liked

peterchancc

We started exploring AI apps with LLMs, so this book should be a good reference for evaluating the open-source LLMs that we plan to use.

ManningBooks

Devtalk Sponsor

Definitely. Here are some questions the book addresses clearly that may help your team:

  1. What happens if your model is “accurate” offline but tanks your engagement metrics in production — how would you know why?
    (Follow-up: Do you have evaluation strategies beyond just accuracy or F1?)

  2. When was the last time your team measured the system latency impact of a new AI model before launching it?
    (And what if the model slowed down page load time by 200ms — would you catch it before it hits users?)

  3. If a model makes worse predictions for a specific user segment, do you catch that in your current evaluation process? Or are those failures only visible after a launch?

  4. Before you ship a model, do you know how it affects:

  • Feature latency?
  • Cold start performance?
  • Infrastructure cost at scale?
    (Or are you finding out during the fire drill after launch?)

  5. Are you still using the same evaluation metrics your team used 3 years ago?
    (What if the nature of your product or user behavior has changed — and your evaluations are now stale?)
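
Questions 2 and 3 above can both be answered with a small pre-launch check: break accuracy out by user segment so a regression that only hits one cohort is visible, and compute p95 latency against your budget. A minimal sketch with made-up segment names and data:

```python
from collections import defaultdict
from statistics import quantiles

def segment_accuracy(records):
    """Accuracy per user segment, so a failure confined to one cohort
    shows up in evaluation rather than after launch."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for r in records:
        total[r["segment"]] += 1
        correct[r["segment"]] += int(r["prediction"] == r["label"])
    return {seg: correct[seg] / total[seg] for seg in total}

def p95_latency(latencies_ms):
    """95th-percentile latency, to compare against a page-load budget."""
    return quantiles(latencies_ms, n=20)[-1]

records = [
    {"segment": "new_users", "prediction": 1, "label": 1},
    {"segment": "new_users", "prediction": 0, "label": 1},
    {"segment": "power_users", "prediction": 1, "label": 1},
    {"segment": "power_users", "prediction": 1, "label": 1},
]
print(segment_accuracy(records))  # {'new_users': 0.5, 'power_users': 1.0}
```

A single overall accuracy number over these records would hide the fact that half of the new-user predictions are wrong.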

Hope this helps.

Cheers

peterchancc

Thanks!
