Train a simple PyTorch model to test the MLflow library

README.md

Abstract

Use the MLflow platform to log PyTorch model training, and pull the production model from the model registry to run inference ⛩

Requirements

  • macOS 12.5
  • Docker 20.10

Dirs

  • service
    • Houses MLflow service data, including MLflow artifacts, the backend store, and the model registry
  • env
    • mlflow.yaml
      • the conda environment YAML used to run this repo

Files

  • docker-compose.yaml
    • a YAML file used by docker-compose to start the MLflow service with a basic configuration (run docker-compose -f docker-compose.yaml up); a client connection sketch follows below
  • test_pytorch_m1.py
    • a script to test PyTorch with GPU acceleration on the Apple M1 platform (sketched below)
  • train.py
    • sample code that uses PyTorch to train a small neural network to predict fortune, with MLflow logging (sketched below)
  • predict.py
    • sample code that calls a registered model to predict on test data and saves the model to the local file system (sketched below)
  • get_registered_model_via_rest_api.py
    • a script to test the MLflow REST API (sketched below)
  • log_unsupported_model.py
    • a sample script that uses mlflow.pyfunc to package an unsupported ML model so it can be logged and registered by MLflow (sketched below)
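
Sketches

The snippets below are minimal sketches of what each script can look like; model names, ports, shapes, and hyperparameters in them are assumptions rather than the exact repo contents.

Once docker-compose has started the MLflow service, the client side needs to point at it. A minimal connection sketch, assuming the tracking server is exposed on localhost:5000 (match the port to docker-compose.yaml):

```python
import mlflow

# Point the MLflow client at the tracking server started by docker-compose.
# The host and port are assumptions; use whatever docker-compose.yaml exposes.
mlflow.set_tracking_uri("http://localhost:5000")

# Sanity check: list the experiments known to the server.
print(mlflow.search_experiments())
```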
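
For test_pytorch_m1.py, a minimal sketch of checking M1 GPU acceleration via PyTorch's MPS backend (available from PyTorch 1.12 onward); the tensor sizes are arbitrary:

```python
import torch

# Use Apple's Metal Performance Shaders (MPS) backend when available,
# otherwise fall back to CPU.
device = torch.device("mps" if torch.backends.mps.is_available() else "cpu")
print(f"Using device: {device}")

# Run a small matrix multiplication on the selected device as a smoke test.
x = torch.randn(1024, 1024, device=device)
y = torch.randn(1024, 1024, device=device)
print((x @ y).mean().item())
```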
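
For train.py, a minimal sketch of training a small network with MLflow logging; the toy data, architecture, hyperparameters, and the registry name "fortune" are placeholders, not the actual fortune-prediction setup:

```python
import mlflow
import mlflow.pytorch
import torch
import torch.nn as nn

# Toy data: 100 samples with 4 features and binary labels (placeholder only).
X = torch.randn(100, 4)
y = torch.randint(0, 2, (100, 1)).float()

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 1), nn.Sigmoid())
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-2)

with mlflow.start_run():
    mlflow.log_param("lr", 1e-2)
    for epoch in range(20):
        optimizer.zero_grad()
        loss = loss_fn(model(X), y)
        loss.backward()
        optimizer.step()
        mlflow.log_metric("loss", loss.item(), step=epoch)
    # Log and register the trained model; the registry name is an assumption.
    mlflow.pytorch.log_model(model, "model", registered_model_name="fortune")
```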
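
For predict.py, a minimal sketch of pulling a registered model from the model registry, running inference, and saving a local copy; the model name "fortune" and the Production stage are assumptions:

```python
import mlflow.pytorch
import torch

# Load the Production version of the registered model from the model registry.
# "fortune" is a placeholder for whatever name train.py registers.
model = mlflow.pytorch.load_model("models:/fortune/Production")
model.eval()

# Run inference on a dummy batch (the shape must match the training features).
x = torch.randn(5, 4)
with torch.no_grad():
    print(model(x))

# Save a copy of the model to the local file system.
mlflow.pytorch.save_model(model, "local_model")
```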
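
For get_registered_model_via_rest_api.py, a minimal sketch of calling the MLflow REST API's registered-models endpoint with requests; the host, port, and model name are assumptions:

```python
import requests

# MLflow REST API: fetch a registered model by name.
resp = requests.get(
    "http://localhost:5000/api/2.0/mlflow/registered-models/get",
    params={"name": "fortune"},
)
resp.raise_for_status()
print(resp.json())
```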
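
For log_unsupported_model.py, a minimal sketch of wrapping a model that has no built-in MLflow flavor in mlflow.pyfunc.PythonModel so it can be logged and registered; the wrapped model here is a trivial placeholder:

```python
import mlflow
import mlflow.pyfunc

class UnsupportedModelWrapper(mlflow.pyfunc.PythonModel):
    """Expose a model with no built-in MLflow flavor through the pyfunc interface."""

    def __init__(self, model):
        self.model = model

    def predict(self, context, model_input):
        # Delegate to the wrapped model; a plain function stands in for it here.
        return [self.model(x) for x in model_input]

# Trivial stand-in for an "unsupported" ML model.
def double(x):
    return x * 2

with mlflow.start_run():
    mlflow.pyfunc.log_model(
        artifact_path="unsupported_model",
        python_model=UnsupportedModelWrapper(double),
        registered_model_name="unsupported_model",  # registry name is an assumption
    )
```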
tags: MLOps