A Simple Way to Deploy Any Machine Learning Model
A beginner’s guide to training and deploying machine learning models using Python

When I was first introduced to machine learning, I had no idea what I was reading. All the articles I read consisted of weird jargon and crazy equations. How could I figure all this out?

I opened a new tab in Chrome and looked for easier solutions. I found APIs from Amazon, Microsoft, and Google that did all the machine learning for me. Each hackathon project I made would call their servers and WOW — it looked so smart! I was hooked.

But, after a year, I realized that I wasn’t learning anything. Everything I was doing was described by a Nedroid comic that I modified.

Eventually, I sat down and learned how to do machine learning without the megacorporations. And it turns out, anyone can do it. The Python libraries we have today are amazing. In this article, I will explain how I use these libraries to create a proper machine learning back end.

Getting a dataset

Machine learning projects rely on finding good datasets. If the dataset is bad, or too small, we cannot make accurate predictions. You can find some good datasets at Kaggle or the UC Irvine Machine Learning Repository.

In this article, I am using a wine quality dataset with many features and one label. Features are independent variables which affect the dependent variable called the label. In this case, we have one label column — wine quality — that is affected by all the other columns (features like pH, density, acidity, and so on).

In the following Python code, I use a library called pandas to handle my dataset. pandas loads the dataset into a DataFrame and provides many functions to select and manipulate the data.

First, I load the dataset into a pandas DataFrame and split it into the label and its features. I grab the label column by its name (quality) and then drop that column to get all the features.

import pandas as pd

#loading our data as a pandas DataFrame
df = pd.read_csv('winequality-red.csv', delimiter=";")

#getting only the column called quality
label = df['quality']

#getting every column except for quality
features = df.drop('quality', axis=1)

datasetSplit.py
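To make sure the split worked, it helps to peek at both pieces. A quick check, continuing from the code above, might look like this:

#previewing the first few rows of the features
print(features.head())

#confirming the label and features have the same number of rows
print(label.shape, features.shape)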

Training a model

Machine learning works by finding a relationship between a label and its features. We do this by showing an object (our model) a bunch of examples from our dataset. Each example helps define how each feature affects the label. We refer to this process as training our model.

I use the estimator object from the scikit-learn library for simple machine learning. Estimators are untrained models that learn relationships through a predefined algorithm.

For this wine dataset, I create a model from a linear regression estimator. (Linear regression attempts to draw a line of best fit through our dataset.) The model learns this line of best fit through the fit function. I can then use the model by passing a fake set of features to the predict function. The example below shows the features for one fake wine; the model will output a quality rating based on its training.

The code for this model, and fake wine, is below:

import pandas as pd
from sklearn import linear_model

#loading and separating our wine dataset into labels and features
df = pd.read_csv('winequality-red.csv', delimiter=";")
label = df['quality']
features = df.drop('quality', axis=1)

#defining our linear regression estimator and training it with our wine data
regr = linear_model.LinearRegression()
regr.fit(features, label)

#using our trained model to predict a fake wine
#each number represents a feature like pH, acidity, etc.
print(regr.predict([[7.4,0.66,0,1.8,0.075,13,40,0.9978,3.51,0.56,9.4]]).tolist())

modelTrain.py
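To get a rough idea of how well training went, every scikit-learn estimator has a score function; for LinearRegression it returns the R² value on the data you pass in. A minimal check, scored here against the training data itself, so treat the number optimistically:

#R² of the model on the data it was trained on
#a proper evaluation would use a separate test set
print(regr.score(features, label))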

Importing and exporting our Python model

The pickle library makes it easy to serialize my trained model into a file. I can also load the model back into my code later. This allows me to keep the code that trains my model separate from the code that deploys it.

I can import or export my Python model for use in other Python scripts with the code below:

import pickle
import pandas as pd
from sklearn import linear_model

#loading and separating our wine dataset into labels and features
df = pd.read_csv('winequality-red.csv', delimiter=";")
label = df['quality']
features = df.drop('quality', axis=1)

#creating and training a model
regr = linear_model.LinearRegression()
regr.fit(features, label)

#serializing our model to a file called model.pkl
pickle.dump(regr, open("model.pkl","wb"))

#loading a model from a file called model.pkl
model = pickle.load(open("model.pkl","rb"))

pickleExample.py
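Once the model is loaded back in, it behaves exactly like the original estimator. A quick sanity check is to run the same fake wine from earlier through it:

#the unpickled model predicts just like the one we trained
print(model.predict([[7.4,0.66,0,1.8,0.075,13,40,0.9978,3.51,0.56,9.4]]).tolist())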

Creating a simple web server

To deploy my model, I first have to create a server. Servers listen for web traffic and run functions when they receive a request addressed to them. Which function runs can depend on the request’s route and the data it carries. Afterwards, the server sends a response back to the requester.

The Flask Python framework allows me to create web servers in record time.

In the code below, I use Flask to run a simple one-route web server. My one route listens for POST requests and sends a hello back. POST requests are a type of request that can carry data, such as a JSON object, in their body.

from flask import Flask
from flask import request

#code which helps initialize our server
app = Flask(__name__)

#defining a /hello route for only post requests
@app.route('/hello', methods=['POST'])
def index():
    #grabs the data tagged as 'name'
    name = request.get_json()['name']
    
    #sending a hello back to the requester
    return "Hello " + name

simpleFlaskServer.py
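With the server running, you can try the route from another script or terminal. Here is a minimal sketch using the requests library, assuming the server is on Flask’s default port (5000) and using a placeholder name:

import requests

#sending a JSON body with a 'name' field to the /hello route
response = requests.post("http://localhost:5000/hello", json={"name": "Alice"})

#prints "Hello Alice"
print(response.text)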

Adding the model to my server

With the pickle library, I am able to load my trained model into my web server.

The server now loads the trained model during its initialization. I can access it by sending a POST request to my “/predict” route. The route grabs an array of features from the request body and gives it to the model. The model’s prediction is then sent back to the requester.

import pickle
import flask
from flask import request

app = flask.Flask(__name__)

#getting our trained model from a file we created earlier
model = pickle.load(open("model.pkl","rb"))

@app.route('/predict', methods=['POST'])
def predict():
    #grabbing a set of wine features from the request's body
    feature_array = request.get_json()['feature_array']
    
    #our model rates the wine based on the input array
    prediction = model.predict([feature_array]).tolist()
    
    #preparing a response object and storing the model's predictions
    response = {}
    response['predictions'] = prediction
    
    #sending our response object back as json
    return flask.jsonify(response)

#running the server locally on Flask's default port
app.run()

wineQualityApi.py
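Calling the deployed model works the same way as the /hello example. A sketch using the requests library and the fake wine features from earlier, again assuming the server runs locally on port 5000:

import requests

#sending the fake wine's features to the /predict route
wine = {"feature_array": [7.4,0.66,0,1.8,0.075,13,40,0.9978,3.51,0.56,9.4]}
response = requests.post("http://localhost:5000/predict", json=wine)

#the response contains the model's predicted quality rating
print(response.json()['predictions'])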

Conclusion

After reading this article, you should be able to create your own machine learning back end. For more detail, you can find a full example that I made at this repository.

When you have time, I recommend taking a step back from coding and reading more about machine learning. This article only covers the bare necessities for creating a model. There are topics, like loss reduction and neural nets, that you still need to learn.

Good luck and happy coding!
