Build Custom Model Server Image¶
This section will guide you through building a custom model server image.
What You’ll Need¶
- A local environment configured with Docker.
Procedure¶
1. In your local environment, create a new folder to place the image:

user@local:~$ mkdir custom_images && cd custom_images

2. Inside the custom_images folder, create another folder, and name it custom_model:

user@local:~$ mkdir custom_model && cd custom_model

3. Inside the custom_model folder, create the following files, named model.py and requirements.txt:

user@local:~$ touch model.py requirements.txt

4. Copy and paste the custom model's code inside model.py:

custom_model_image.py

# Copyright © 2022 Arrikto Inc. All Rights Reserved.

"""This script defines a custom model without a transformer function."""

import argparse

from typing import Dict

import torch
import kserve

from torchvision import models


DEFAULT_MODEL_NAME = "custom-model"


class AlexNetModel(kserve.Model):
    """Define custom model."""

    def __init__(self, model_name: str):
        super().__init__(model_name)
        self.name = model_name
        self.load()

    def load(self):
        model = models.alexnet(pretrained=True)
        model.eval()
        self.model = model
        self.ready = True

    def predict(self, request: Dict) -> Dict:
        inputs = request["instances"]

        # Input follows the Tensorflow V1 HTTP API for binary values
        # https://www.tensorflow.org/tfx/serving/api_rest#encoding_binary_values
        data = inputs[0]

        input_tensor = torch.tensor(data)
        input_batch = input_tensor.unsqueeze(0)

        output = self.model(input_batch)

        torch.nn.functional.softmax(output, dim=1)[0]

        values, _ = torch.topk(output, 5)

        return {"predictions": values.tolist()}


if __name__ == "__main__":
    parser = argparse.ArgumentParser(parents=[kserve.model_server.parser])
    parser.add_argument('--model_name', default=DEFAULT_MODEL_NAME,
                        help='The name that the model is served under.')
    args, _ = parser.parse_known_args()

    model = AlexNetModel(model_name=args.model_name)
    model.load()
    kserve.ModelServer(workers=1).start([model])

Note
For this example you will use the AlexNet pretrained model. This custom model is provided in the KServe documentation. The above example comes with the following changes:

- You will not hardcode the name of the model, but use argparse to pass it as an argument. If you do not pass a name, the model name will be custom-model.
- You will remove the code that is used to transform the input data into a tensor, so requests must already contain tensor-ready data; see the client-side sketch below.

Important

You need to create an InferenceService with the same name as the model, since the InferenceService expects the model to be registered in a directory named after it. If you hardcode the model's name, you can only create one InferenceService, named after the model. However, if you don't hardcode the name, you can pass it in the args parameter of the predictor and transformer containers, as an argument.
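Because the model no longer transforms raw images, the request payload must already contain data that predict() can pass directly to torch.tensor(). The following is a minimal client-side sketch of how such a payload could be prepared; the file names prepare_input.py, cat.jpg, and input.json, as well as the ImageNet preprocessing values, are illustrative assumptions and not part of this guide:

prepare_input.py

# Hypothetical helper: prepare a JSON payload for the custom AlexNet model.
# Assumes a local image file named cat.jpg; adjust names and preprocessing as needed.
import json

from PIL import Image
from torchvision import transforms

# Standard ImageNet-style preprocessing, applied on the client side because
# the model itself no longer transforms the input.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

image = Image.open("cat.jpg").convert("RGB")
tensor = preprocess(image)  # shape: (3, 224, 224)

# predict() wraps instances[0] in torch.tensor() and unsqueezes it,
# so the tensor is sent as a nested list.
payload = {"instances": [tensor.tolist()]}

with open("input.json", "w") as f:
    json.dump(payload, f)

Once the model server is running, you can POST this input.json to the predict endpoint, as shown at the end of this procedure.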
5. Copy and paste the custom model's dependencies inside requirements.txt:

custom_model_requirements.txt

kserve==0.8.0
torchvision
ray<1.7.0
protobuf==3.19.3
tornado==6.1
6. Inside the custom_images folder, create the following Dockerfile for building the image:

user@local:~$ cd .. && touch model.Dockerfile

7. Copy and paste the following code inside model.Dockerfile:

custom_model_image.Dockerfile

FROM python:3.7-slim

COPY custom_model custom_model

WORKDIR custom_model
RUN pip install --upgrade pip && pip install -r requirements.txt

ENTRYPOINT ["python", "model.py"]

8. Build the custom model image:

user@local:~$ docker build -t custom-model-image:latest -f model.Dockerfile .
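Optionally, you can smoke-test the image locally before serving it from an InferenceService. The following is a sketch, assuming the default KServe HTTP port (8080), an illustrative model name of my-model, and an input.json payload like the one prepared earlier; the first start downloads the pretrained AlexNet weights, so the container needs network access. Because the image's ENTRYPOINT is python model.py, anything after the image name is passed to argparse, which is how --model_name reaches the server:

user@local:~$ docker run --rm -p 8080:8080 custom-model-image:latest --model_name=my-model

Then, from a second terminal, send a prediction request:

user@local:~$ curl -H "Content-Type: application/json" -d @./input.json http://localhost:8080/v1/models/my-model:predict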
What’s Next¶
Check out how you can build a custom transformer server image.