While developing an ML model is challenging in itself, managing that model becomes even more complex once it's in production. Shifting data distributions, upstream pipeline failures, and model predictions that influence the very dataset the model is trained on can create thorny feedback loops between development and production.
In the presentation below, “ML System Design for Continuous Experimentation,” Gideon Mendels of Comet, speaking in the Stanford MLSys Series, will:
- Examine some naive ML workflows that don’t take the development-production feedback loop into account and explore why they break down.
- Showcase system design principles that help manage these feedback loops more effectively.
- Explore several industry case studies where teams have applied these principles to their production ML systems.