
My First Machine Learning App: Deploying Locally

Line by Line Explanation of the code and process of deploying the Machine Learning App locally



Salim Oyinlola

2 years ago | 2 min read

I'm putting this into writing about two weeks after completing the code for the Machine Learning model, not by design but due to unforeseen circumstances. Apart from how hectic my preparation for the new academic semester has been (yes, I am a college student), I initially had issues installing some Python libraries on my PC, especially regarding their build dependencies. The initial plan was to use Streamlit, which would have been the easier route, but trouble installing its build dependencies stopped me. The alternative was to go the Flask and jsonify route. I spent a couple of days on that, all to no avail; my lack of experience in full-stack software engineering proved to be an Achilles' heel. Eventually, I went back to Streamlit, figured out what the issue was, resolved it, and the app was up and running. Here's a step-by-step explanation of how I went about the local deployment. I hope you enjoy reading.
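For anyone hitting similar installation issues, a clean environment usually sidesteps build-dependency clashes. A typical setup sketch (the environment name and versions here are my illustrative choices, not what I necessarily used):

```shell
# A fresh conda environment keeps build dependencies from clashing
conda create -n ml-app python=3.10 -y
conda activate ml-app

# Libraries used by the app
pip install streamlit scikit-learn numpy pandas
```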

After dumping the ML model in pickle format, the .pkl file appears in the folder where the model and the .csv dataset are stored.
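For context, a minimal sketch of what that dump step looks like, assuming the model and encoders were bundled into one dictionary (the toy objects below stand in for the real trained regressor and label encoders):

```python
import pickle

# Stand-ins for the real artifacts; any picklable objects behave the same.
regressor = {"kind": "toy-regressor"}
le_country = {"kind": "toy-country-encoder"}
le_education = {"kind": "toy-education-encoder"}

# Bundle everything into one dictionary so a single file holds it all.
data = {"model": regressor, "le_country": le_country, "le_education": le_education}

with open('saved_model.pkl', 'wb') as file:
    pickle.dump(data, file)

# Loading it back returns the same dictionary, keyed the same way.
with open('saved_model.pkl', 'rb') as file:
    restored = pickle.load(file)
```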

Thereafter, I opened my text editor of choice and opened that folder in it. There, I created two .py files: app.py and predict_page.py.

The predict_page.py code

  • Importing the necessary libraries

import streamlit as st
import pickle
import numpy as np

The pickle library is key to loading the model. However, a function has to be created to load the model exactly as it was saved.

def load_model():
    with open('saved_model.pkl', 'rb') as file:
        data = pickle.load(file)
    return data

After creating the function, it is then executed as shown below;

data = load_model()

Next, I accessed the different keys of the saved dictionary (i.e. the model and the label encoders for country and education).

regressor = data["model"]
le_country = data["le_country"]
le_education = data["le_education"]

Next up, I created the prediction page using the function show_predict_page() as shown below;

def show_predict_page():
    st.title("Salim's Software Developer Salary Prediction")

    st.write("""### We need some information to predict the salary""")

    countries = (
        "United States",
        "India",
        "United Kingdom",
        "Germany",
        "Canada",
        "Brazil",
        "France",
        "Spain",
        "Australia",
        "Netherlands",
        "Poland",
        "Italy",
        "Russian Federation",
        "Sweden",
    )
    education_levels = (
        "Less than a Bachelors",
        "Bachelor’s degree",
        "Master’s degree",
        "Post grad",
    )

    country = st.selectbox("Country", countries)
    education = st.selectbox("Education Level", education_levels)

    experience = st.slider("Years of Experience", 0, 50, 3)

    ok = st.button("Calculate Salary")
    if ok:
        X = np.array([[country, education, experience]])
        X[:, 0] = le_country.transform(X[:, 0])
        X[:, 1] = le_education.transform(X[:, 1])
        X = X.astype(float)

        salary = regressor.predict(X)
        st.subheader(f"The estimated salary is ${salary[0]:.2f}")
  • If you are conversant with Python's syntax and the way Streamlit works, the code written above should be easy to follow.
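The one non-obvious step in the `if ok:` block is the encoding: the regressor was trained on numbers, so the string inputs must pass through the same label encoders used at training time. A rough sketch of what `transform` does there, with a toy class standing in for scikit-learn's LabelEncoder:

```python
import numpy as np

# A toy stand-in for sklearn's LabelEncoder: maps each known string
# label to the integer code assigned when the encoder was fitted.
class ToyLabelEncoder:
    def __init__(self, classes):
        self.classes_ = sorted(classes)
        self._index = {c: i for i, c in enumerate(self.classes_)}

    def transform(self, values):
        return np.array([self._index[v] for v in values])

le_country = ToyLabelEncoder(["Germany", "India", "United States"])
le_education = ToyLabelEncoder(["Bachelor’s degree", "Master’s degree"])

# Same pipeline shape as in show_predict_page: strings in, floats out.
X = np.array([["United States", "Master’s degree", 3]])
X[:, 0] = le_country.transform(X[:, 0])
X[:, 1] = le_education.transform(X[:, 1])
X = X.astype(float)   # now entirely numeric, ready for regressor.predict(X)
```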

The next thing I did was to create an app.py file to run the app.

The app.py code

The first step would be to import the needed external libraries as shown below;

import streamlit as st
from predict_page import show_predict_page

The first line imports the Streamlit library, and the second imports the show_predict_page function from the predict_page.py file.

show_predict_page()

The function is then called. And that's it! Done writing the code.

Thereafter, I went to the terminal and ran the command below;

conda activate [name_of_env]

This was to ensure that I was operating in the environment I created earlier. Thereafter, I ran the app;

streamlit run app.py

The terminal then shows a local URL, which opens the beautiful app page.

Created by

Salim Oyinlola

Machine Learning and Artificial Intelligence enthusiast

