## Abstract

Use the MLflow platform to log PyTorch model training, and pull the production model from the model registry to run inference ⛩
## Requirements

- macOS 12.5
- Docker 20.10
## Dirs

- service
  - houses MLflow service data, including MLflow artifacts, the backend store, and the model registry
- env
  - mlflow.yaml
    - conda environment YAML used to run this repo
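As a rough sketch, `env/mlflow.yaml` might look like the following; the channel and package versions are assumptions, not the repo's actual pins:

```yaml
# Hypothetical sketch of env/mlflow.yaml; versions are assumptions.
name: mlflow
channels:
  - conda-forge
dependencies:
  - python=3.9
  - pip
  - pip:
      - mlflow
      - torch
```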
## Files

- docker-compose.yaml
  - a YAML file for docker-compose to start the MLflow service with a basic configuration (run `docker-compose -f docker-compose.yaml up`)
- test_pytorch_m1.py
  - a script to test PyTorch on the Apple M1 platform with GPU acceleration
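A minimal sketch of such a check, assuming a PyTorch build (1.12 or later) that ships the MPS backend:

```python
# Minimal sketch: pick the MPS (Metal) device when available, else fall back
# to CPU, and run a small matmul to exercise the backend. Assumes torch >= 1.12.
import torch

device = "mps" if torch.backends.mps.is_available() else "cpu"
x = torch.rand(2, 3, device=device)
y = x @ x.T  # 2x2 result; runs on the GPU when MPS is available
print(device, tuple(y.shape))
```

On a non-Apple machine `is_available()` returns `False`, so the same script still runs on CPU.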
- train.py
  - sample code that uses PyTorch to train a small neural network to predict fortune, with MLflow logging
- predict.py
  - sample code that calls the registered model to predict on test data and saves the model to the local file system
- get_registered_model_via_rest_api.py
  - a script to test the MLflow REST API
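A stdlib-only sketch of hitting the registry endpoint; the endpoint path follows the MLflow REST API 2.0, while the server URL and model name are assumptions:

```python
# Hedged sketch: build and call the MLflow "get registered model" endpoint
# with only the standard library.
import urllib.parse
import urllib.request

def registered_model_url(base: str, name: str) -> str:
    """Build the GET registered-models endpoint URL (MLflow REST API 2.0)."""
    query = urllib.parse.urlencode({"name": name})
    return f"{base}/api/2.0/mlflow/registered-models/get?{query}"

url = registered_model_url("http://localhost:5000", "fortune-net")
# With the service running, fetch it with the stdlib:
#   body = urllib.request.urlopen(url).read()
```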
- log_unsupported_model.py
  - a sample script that uses mlflow.pyfunc to package an unsupported ML model so it can be logged and registered by MLflow
- optimize_model.py
  - a sample script demonstrating how to use MLflow and TensorRT to optimize a PyTorch model for edge devices and fetch the result on the client