Building your first machine learning model

We’ll show you how to build a basic machine learning model. For a quick background on AI/ML, check out this blog post.


Mage

2 years ago | 7 min read

In this tutorial, we’ll use the Titanic dataset to predict which passengers survived the sinking. The dataset includes information on each passenger, such as the cabin they stayed in, their gender, and more.

High level steps for building an AI/ML model

  1. Data preparation
  2. Choose algorithm
  3. Hyperparameter tuning
  4. Train model
  5. Evaluate performance

Setup

Go to Google Colaboratory, click “File” in the top left, and then click “New notebook”.

Download this file called titanic_survival.csv.

In your “New notebook” on Google Colaboratory, click the folder icon in the top left, then drag the “titanic_survival.csv” file you just downloaded and drop it into that area.

Data preparation

Here are the steps when preparing data:

  1. Download and split data
  2. Add columns
  3. Remove columns
  4. Impute values
  5. Scale values
  6. Encode values
  7. Select features

1. Download and split data

We need to load the data into memory by downloading it from a website, a database, data warehouse, SaaS tool, etc. Once we download it, we can load it in memory to operate on quickly. Before we split the data, we’ll need to determine which column we want to predict. In this tutorial, we’ll predict which passengers survived.

from sklearn.model_selection import train_test_split
import pandas as pd

df = pd.read_csv('/content/titanic_survival.csv')
label_feature_name = 'Survived'

X = df.drop(columns=[label_feature_name])
y = df[label_feature_name]

After that, we need to split the data into 2 parts: 1 for training the AI/ML model (aka train set) and 1 for evaluating the performance of the model (aka test set). The train set will have 80% of the rows from the original data. There are different strategies for splitting the data; a common method is to stratify it so that the proportion of each class (survived vs. not survived) is the same in both the train set and the test set.

X_train_raw, X_test_raw, y_train, y_test = train_test_split(
    X,
    y,
    stratify=y,
    test_size=0.2,
)

2. Add columns

The data you downloaded may not have all the columns you need. You may want to add a few more columns by combining existing columns or performing some sort of calculation. For example, you may want to create a column called “year” which extracts the year of a date value from the birthday column.

df = X_train_raw.copy()

# Add a column to determine if the person can vote
df['can_vote'] = df['Age'].apply(lambda age: 1 if age >= 18 else 0)

# Count how many passengers can vote (i.e. are 18 or older)
df['can_vote'].value_counts()

# Cabin letter: a cabin can be denoted as B123. The cabin letter will be B.
df.loc[:, 'cabin_letter'] = df['Cabin'].apply(
    lambda cabin: cabin[0] if cabin and isinstance(cabin, str) else None,
)

3. Remove columns

There may be columns that you don’t think the model should learn from. For example, the model may not care about specific user IDs or email addresses (the email domain might matter). In these cases, we want to remove these columns from the data. By removing these columns, we help the model focus on what matters instead of trying to make sense of data that has no impact on the prediction. For example, a passenger’s ID probably has very little impact on whether they survived the sinking of the Titanic.

df = df.drop(columns=['Name', 'PassengerId'])

# Name and PassengerId are no longer columns
df.columns.tolist()

4. Impute values

Your data may have missing values in a particular column. The AI/ML model has a hard time knowing what to do with missing values. We can help it by filling in those missing values using some heuristic. For example, there are a lot of missing values in the “Cabin” column. For those with no known cabin, we’ll fill in the value “somewhere out of sight”. For those with missing age, we’ll use the median age to fill in those missing values.

from sklearn.impute import SimpleImputer

print(f'Missing values in "Cabin": {len(df[df["Cabin"].isna()].index)}')
df.loc[df['Cabin'].isna(), 'Cabin'] = 'somewhere out of sight'
df.loc[df['cabin_letter'].isna(), 'cabin_letter'] = 'ZZZ'

print(f'Missing values in "Age": {len(df[df["Age"].isna()].index)}')
age_imputer = SimpleImputer(strategy='median')
df.loc[:, ['Age']] = age_imputer.fit_transform(df[['Age']])

print(f'Missing values in "Embarked": {len(df[df["Embarked"].isna()].index)}')
df.loc[df['Embarked'].isna(), 'Embarked'] = 'no idea'

5. Scale values

Adjust the values of number columns to fall within similar ranges so that columns with large values (such as seconds since epoch) don’t affect the prediction disproportionately more than columns with smaller values (such as age).

For example, if you have a column that is in seconds and a column that is in days, the difference in seconds between today and last week is 604,800 seconds. The difference in days between today and last week is 7. If we don’t scale these values, then the model will think the column with seconds has a greater distance between 2 numbers than the column with days.

There are multiple scaling strategies such as standard scaler and normalizer. For more information, check out this thread.

from sklearn.preprocessing import StandardScaler

scaler = StandardScaler()
df.loc[:, ['Age']] = scaler.fit_transform(df[['Age']])
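
To make the seconds-versus-days example above concrete, here is a small sketch (the toy values below are illustrative and not from the Titanic data):

from sklearn.preprocessing import StandardScaler
import pandas as pd

# The same one-week difference expressed in seconds and in days
toy_df = pd.DataFrame({
    'duration_seconds': [0, 302400, 604800],
    'duration_days': [0, 3.5, 7],
})

# After standard scaling, both columns have mean 0 and unit variance,
# so the seconds column no longer dwarfs the days column.
print(StandardScaler().fit_transform(toy_df))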

6. Encode values

AI/ML algorithms perform mathematical operations using numbers. We must convert columns that contain strings into a number representation. A common technique is to encode categorical values. For example, we can convert the value “male” to 0 and “female” to 1. Note: we’re going to use one-hot encoding to convert these strings into numbers. For further explanation why, check out this thread.

from sklearn.preprocessing import OneHotEncoder

categorical_columns = ['Pclass', 'Sex', 'Embarked', 'cabin_letter']
categorical_encoder = OneHotEncoder(handle_unknown='ignore')
categorical_encoder.fit(df[categorical_columns])

# Add the new columns to the data
new_column_names = []
for idx, cat_column_name in enumerate(categorical_columns):
    values = categorical_encoder.categories_[idx]
    new_column_names += [f'{cat_column_name}_{value}' for value in values]

df.loc[:, new_column_names] = \
    categorical_encoder.transform(df[categorical_columns]).toarray()

7. Select features

Now that we’ve prepared our data, we need to select the features we want our model to learn from. There are many techniques for doing this (Mage’s tool handles this automatically for you). For this tutorial, we’ll simply select the features we manually added, scaled, or encoded.

features_to_use = [
    'Age',
    'SibSp',
    'Parch',
    'Fare',
    'can_vote',
] + new_column_names
X_train = df[features_to_use].copy()

Choose algorithm

Once our data is in a state that is ready to be trained on, we must choose an algorithm to use. Different algorithms are best suited for different types of problems and different types of data. For this tutorial, we’ll use a basic algorithm called logistic regression to classify which passengers survived the sinking of the Titanic.

from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression(max_iter=10000)

Hyperparameter tuning

An AI/ML model has parameters that aren’t related to the features (aka columns in the data). These “hyper” parameters control how the model behaves throughout its training. When improving AI/ML models, it’s common to try many different combinations of hyperparameters to find the one that yields the best results. We’ll skip this optimization for this tutorial (keep an eye out for a future article on this topic).
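
If you do want to experiment with it, here is a minimal sketch using scikit-learn’s GridSearchCV on the X_train and y_train created above (the parameter values are illustrative, not tuned recommendations):

from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

# Try every combination of these hyperparameters with 5-fold
# cross-validation and keep the best-performing one.
param_grid = {
    'C': [0.01, 0.1, 1, 10],
    'max_iter': [10000],
}
search = GridSearchCV(LogisticRegression(), param_grid, cv=5)
search.fit(X_train, y_train)

print(search.best_params_)
print(search.best_score_)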

Train model

We take the data that was prepared (X_train) and the actual results (y_train) for each row (e.g. whether the passenger survived the Titanic) and feed it into the model. The model will learn from looking at the values in each column and seeing what result it produces (1 for survived, 0 for not survived). Once the model learns from all the data, it will finish training and can be used to make predictions on unseen data.

classifier.fit(X_train, y_train)

Evaluate performance

  1. Prepare test data
  2. Use model to predict on test data
  3. Calculate model accuracy
  4. Determine baseline performance and compare

1. Prepare test data

First, we’ll prepare our test data (e.g. add columns, remove columns, impute values, scale values, encode values, and select features) in the same way we did for our train set. One caveat is that we won’t “fit” our standard scaler or our encoders because we only want to “fit” those on the train set.

Note: the code below mirrors the data preparation code written above for the train set, except it operates on the variable containing the test data and calls transform (instead of fit or fit_transform) on the already-fitted imputer, scaler, and encoder. A better engineering practice would be to refactor this into a reusable function that accepts a Pandas dataframe as an argument, applies all the data preparation steps to that dataframe, and returns it; a sketch of such a function appears after the code block below.

Here is the code, written out and not refactored, for clarity’s sake:

X_test = X_test_raw.copy()

# Add columns
X_test['can_vote'] = X_test['Age'].apply(lambda age: 1 if age >= 18 else 0)
X_test.loc[:, 'cabin_letter'] = X_test['Cabin'].apply(
    lambda cabin: cabin[0] if cabin and isinstance(cabin, str) else None,
)

# Remove columns
X_test = X_test.drop(columns=['Name', 'PassengerId'])

# Impute values
X_test.loc[X_test['Cabin'].isna(), 'Cabin'] = 'somewhere out of sight'
X_test.loc[X_test['cabin_letter'].isna(), 'cabin_letter'] = 'ZZZ'
X_test.loc[:, ['Age']] = age_imputer.transform(X_test[['Age']])
X_test.loc[X_test['Embarked'].isna(), 'Embarked'] = 'no idea'

# Scale values
X_test.loc[:, ['Age']] = scaler.transform(X_test[['Age']])

# Encode values
X_test.loc[:, new_column_names] = categorical_encoder.transform(
    X_test[categorical_columns],
).toarray()

# Select features
X_test = X_test[features_to_use].copy()
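
As mentioned above, a refactored version of these steps could look something like the sketch below. The function name prepare_features is hypothetical, and it assumes the age_imputer, scaler, categorical_encoder, categorical_columns, new_column_names, and features_to_use created during the train-set preparation are already in scope:

def prepare_features(data):
    """Apply the train-set data preparation steps to any dataframe,
    reusing the imputer, scaler, and encoder fit on the train set."""
    data = data.copy()

    # Add columns
    data['can_vote'] = data['Age'].apply(lambda age: 1 if age >= 18 else 0)
    data.loc[:, 'cabin_letter'] = data['Cabin'].apply(
        lambda cabin: cabin[0] if cabin and isinstance(cabin, str) else None,
    )

    # Remove columns
    data = data.drop(columns=['Name', 'PassengerId'])

    # Impute values
    data.loc[data['Cabin'].isna(), 'Cabin'] = 'somewhere out of sight'
    data.loc[data['cabin_letter'].isna(), 'cabin_letter'] = 'ZZZ'
    data.loc[:, ['Age']] = age_imputer.transform(data[['Age']])
    data.loc[data['Embarked'].isna(), 'Embarked'] = 'no idea'

    # Scale values
    data.loc[:, ['Age']] = scaler.transform(data[['Age']])

    # Encode values
    data.loc[:, new_column_names] = categorical_encoder.transform(
        data[categorical_columns],
    ).toarray()

    # Select features
    return data[features_to_use].copy()

# The test set could then be prepared with a single call:
# X_test = prepare_features(X_test_raw)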

2. Use model to predict on test data

Next, we use the model to predict who survives from the test data (remember we split the data earlier during data preparation).

y_pred = classifier.predict(X_test)

3. Calculate model accuracy

Regression and classification models have different metrics for evaluating performance. Since we’re using a classification model (even though it’s called logistic regression, it can be used for classification), we’ll use accuracy as our metric. If there were multiple categories to predict, or if the classes were heavily imbalanced, we’d also want to use metrics such as precision and recall.

from sklearn.metrics import accuracy_score

accuracy = accuracy_score(y_test, y_pred)
print(f'Accuracy score: {accuracy}')
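
Precision and recall can also be worth checking, even for a binary problem like this one; here is a quick sketch using the same y_test and y_pred:

from sklearn.metrics import precision_score, recall_score

# Of the passengers the model predicted as survivors, how many actually survived?
precision = precision_score(y_test, y_pred)
# Of the passengers who actually survived, how many did the model catch?
recall = recall_score(y_test, y_pred)

print(f'Precision: {precision}')
print(f'Recall: {recall}')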

4. Determine baseline performance and compare

In order to understand how good this accuracy is, we need to establish a baseline. In this specific example, the baseline accuracy is what we’d get by always predicting the majority class (“did not survive”): the number of people who didn’t survive within the test set divided by the number of rows in the test set.

baseline_accuracy_score = y_test.value_counts()[0] / len(y_test)

print(f'Model performance   : {accuracy}')
print(f'Baseline performance: {baseline_accuracy_score}')

Deploy/Integrate model

Once you’ve trained the model and fine-tuned it to your business needs, it’s time to integrate it into your product or business operations. There are several ways of doing this: you can deploy the model to an online server where its predictions can be accessed via an API request, or you can set up your model to perform batch predictions and export those predictions to your data warehouse, data lake, etc.
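
As a rough illustration of the first option, here is a minimal sketch that saves the trained classifier with joblib and serves predictions behind a Flask endpoint. The file name and route are hypothetical, and the request is assumed to already contain the prepared feature values (the columns in features_to_use):

import joblib
import pandas as pd
from flask import Flask, jsonify, request

# Save the trained model to disk so a server process can load it later
joblib.dump(classifier, 'titanic_model.joblib')

app = Flask(__name__)
model = joblib.load('titanic_model.joblib')

@app.route('/predict', methods=['POST'])
def predict():
    # Expect a JSON object whose keys match the prepared feature columns
    payload = request.get_json()
    features = pd.DataFrame([payload])[features_to_use]  # keep training column order
    prediction = model.predict(features)[0]
    return jsonify({'survived': int(prediction)})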

Deploying a model, maintaining it, keeping it up to date so that it makes relevant predictions, and making sure your feature data is fresh and readily available for online predictions are all time-consuming, costly, and complex tasks, and rarely a differentiating skill set. Instead of focusing your energy on this particular aspect, it’s common to rely on other tools for this service. A tool like Mage not only helps you prepare your data and train your model, it also helps you access your model from an API endpoint and keeps the model relevant by retraining it regularly.

Conclusion

Here is the link to the entire code.

Created by

Mage

Give your data team magical powers: https://github.com/mage-ai/mage-ai

  * Integrate and synchronize data from 3rd party sources
  * Build real-time and batch pipelines to transform data using Python, SQL, and R
  * Run, monitor, and orchestrate thousands of pipelines without losing sleep

