Opening MLOps

SHIN · November 8, 2022

Why do we need MLOps?

To keep the model (and the service built on it) agile in a constantly changing environment, so it can be updated and redeployed quickly as data and requirements shift.

Components

  1. Server infra: Cloud / On-premise (private server)

  2. GPU infra: Cloud GPU / Local GPU

  3. Serving (a minimal online-serving sketch follows this list)
    • Batch serving: provide the service periodically
      - uses stored data
    • Online serving: provide the service continuously, per request
      - uses a stream of incoming data
      - needs care to avoid bottlenecks
      - requires scalability
    • ex. Cortex Labs, TensorFlow Serving
  4. Experiment, Model Management (see the MLflow sketch after this list)
    - Storing the by-products of experiments (model artifacts, metrics, metadata, images, hyperparameters, etc.)
    - ex. MLflow
  5. Feature Store (see the Feast sketch after this list)
    - Storing ML features so they can be reused consistently for training and serving
    - ex. Feast (feast-dev)
  6. Data Validation (see the TFDV sketch after this list)
    - Ensures that production data stays similar to the research-stage data
    - TFDV (TensorFlow Data Validation): feature/schema validation
    - AWS Deequ: data-quality measurement, data unit tests

  7. Continuous Training (see the retrain-trigger sketch after this list)
    - Retraining the model
    - triggered by new data, a simple periodic schedule, a change in the target metric, etc.
  8. Monitoring (see the drift-check sketch after this list)
    - Monitoring the effectiveness and efficiency of a model in production, to catch performance decay early
  9. All together

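As a concrete illustration of online serving, here is a minimal sketch using FastAPI (just one of many possible servers, not one of the tools listed above). The model file `model.pkl`, the feature layout, and the endpoint name are hypothetical assumptions.

```python
# A minimal online-serving sketch; the model path and feature layout are hypothetical.
import pickle

from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()

# Load the trained model once at startup so each request only runs inference.
with open("model.pkl", "rb") as f:
    model = pickle.load(f)

class PredictRequest(BaseModel):
    features: list[float]  # one flat feature vector per request

@app.post("/predict")
def predict(req: PredictRequest):
    # Online serving: answer each incoming request immediately.
    prediction = model.predict([req.features])
    return {"prediction": prediction.tolist()}
```

Run it with `uvicorn app:app` (assuming the file is saved as `app.py`). A batch-serving job would instead read stored data on a schedule and write predictions back to storage, with no request/response loop.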
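For experiment and model management, a hedged MLflow sketch: it logs hyperparameters, a metric, and an artifact for one run. The parameter names, values, and the artifact file are made up for illustration.

```python
# A sketch of experiment tracking with MLflow; all values here are illustrative.
import mlflow

with mlflow.start_run(run_name="baseline"):
    # Hyperparameters used for this run
    mlflow.log_param("learning_rate", 0.01)
    mlflow.log_param("n_estimators", 200)

    # ... train the model here ...

    # Metrics and by-products of the run
    mlflow.log_metric("val_accuracy", 0.91)
    mlflow.log_artifact("confusion_matrix.png")  # any by-product file (image, config, etc.)
```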
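For the feature store, a hedged Feast (feast-dev) sketch of a feature definition. The exact API differs between Feast versions, and the entity, field names, and parquet path are hypothetical.

```python
# A hedged Feast sketch; entity/field names and the file path are hypothetical,
# and the API varies between Feast versions.
from datetime import timedelta

from feast import Entity, FeatureView, Field, FileSource
from feast.types import Float32, Int64

# Entity: the join key that identifies a row of features.
driver = Entity(name="driver", join_keys=["driver_id"])

# Offline source that backs the features.
driver_stats_source = FileSource(
    path="data/driver_stats.parquet",
    timestamp_field="event_timestamp",
)

# Feature view: a named group of features stored and served together.
driver_stats = FeatureView(
    name="driver_hourly_stats",
    entities=[driver],
    ttl=timedelta(days=1),
    schema=[
        Field(name="conv_rate", dtype=Float32),
        Field(name="avg_daily_trips", dtype=Int64),
    ],
    source=driver_stats_source,
)
```

At serving time the same definitions can be read back with `FeatureStore.get_online_features(...)`, which is what keeps training and serving features consistent.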
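For data validation, a hedged TFDV sketch: infer a schema from the research-stage data and check production data against it. The CSV paths are hypothetical.

```python
# A data-validation sketch with TFDV; the CSV paths are hypothetical.
import tensorflow_data_validation as tfdv

# Statistics and schema from the research-stage (training) data
train_stats = tfdv.generate_statistics_from_csv(data_location="train.csv")
schema = tfdv.infer_schema(statistics=train_stats)

# Compare production/serving data against that schema
serving_stats = tfdv.generate_statistics_from_csv(data_location="serving.csv")
anomalies = tfdv.validate_statistics(statistics=serving_stats, schema=schema)
tfdv.display_anomalies(anomalies)  # missing features, type changes, out-of-range values, ...
```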
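For continuous training, a plain-Python sketch of a retrain trigger (no particular orchestrator assumed); the thresholds and the way the training pipeline is invoked are hypothetical placeholders.

```python
# A simple retrain trigger: retrain when enough new data has arrived
# or the live metric has decayed. Thresholds are hypothetical.
def should_retrain(new_rows: int, live_metric: float,
                   min_new_rows: int = 10_000, metric_floor: float = 0.85) -> bool:
    return new_rows >= min_new_rows or live_metric < metric_floor

if should_retrain(new_rows=25_000, live_metric=0.82):
    print("trigger the training pipeline on the latest data")
```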
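For monitoring, one common drift signal is the Population Stability Index (PSI) between a feature's training distribution and its live distribution; a small NumPy sketch follows. The 0.2 alert threshold is a common rule of thumb, not something from the source.

```python
# A monitoring sketch: PSI between training and live distributions of one feature.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    # Bin edges come from the expected (training) distribution.
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Avoid log(0) / division by zero in empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
train_feature = rng.normal(0.0, 1.0, 10_000)
live_feature = rng.normal(0.3, 1.0, 10_000)   # shifted distribution in production
if psi(train_feature, live_feature) > 0.2:    # 0.2 is a rule-of-thumb alert level
    print("feature drift detected -> investigate or trigger retraining")
```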
My top 3 picks among the components

  1. Serving
    • Isn't this the whole point of the system?
  2. Monitoring
    • Monitoring is essential for providing consistent performance. I think this is one of the most important conditions for retaining customers.
  3. Feature storing
    • To fulfill customers' needs, it is important to update the service (model) rapidly. This is only possible if the latest features are stored continuously and used to adjust the model.

All images are from "practitioners_guide_to_mlops_whitepaper", Google, 2021.
