
Machine Learning Model Development Life Cycle

by Mansoor Ahmed
Posted: Jul 25, 2021

Introduction

Machine learning is a method of data analysis that automates analytical model building. It is a branch of AI based on the idea that systems can learn from data, identify patterns, and make decisions with minimal human intervention. Machine learning involves computers discovering how to perform tasks without being explicitly programmed to do so; they learn from the data provided to them in order to perform those tasks.

For simple tasks assigned to computers, it is possible to program algorithms that tell the machine how to execute every step required to solve the problem at hand; on the computer's part, no learning is needed. For more advanced tasks, it can be challenging for a human to manually create the needed algorithms. In practice, it often turns out to be more effective to help the machine develop its own algorithm rather than having human programmers specify every needed step. When asked to solve a problem using machine learning, data scientists do so by following a sequence of steps.

Description

The first thing to do when developing a model is to understand the problem we are trying to solve thoroughly. This involves understanding not only what problem we are solving, but also why we are solving it, what impact we expect it to have, and what currently available solution we are comparing our new solution to. My understanding of what Box meant when he stated that all models are wrong is that a model is only an approximation of reality, capturing one or more aspects of it. By understanding the problem we are trying to solve, we can decide which aspects of reality we need to model and which ones we can tolerate ignoring. We also need to understand the problem well enough to decide how to split the data for training and evaluation.

We then decide what kind of model to use. Is the problem suitable for supervised or unsupervised learning? Are we better off using classification or regression algorithms for this problem? What kind of classification algorithm will serve us best? Is a linear model good enough to approximate our reality? Do we need the most accurate model, or one whose behavior we can easily explain to its users and to the business stakeholders?

Minimal exploratory data analysis can be done here: we can check whether we have labels and, if present, check the cardinality of the labels to decide whether we are dealing with a classification or a regression problem. I would still save any further data analysis until after the dataset is split into training and test sets. It is important to limit advanced data analysis to the training set only, to ensure the model's generalizability.
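
As a rough illustration, such a label check might look like the sketch below, assuming pandas; the file name data.csv and the label column name target are hypothetical:

```python
import pandas as pd

# Hypothetical dataset; in practice, load your own data.
df = pd.read_csv("data.csv")
label = df["target"]  # assumed name of the label column

if label.dtype.kind in "if" and label.nunique() > 20:
    # Many distinct numeric values suggest a regression problem.
    print("Looks like a regression problem")
else:
    # A small number of distinct values suggests classification.
    print(f"Looks like classification with {label.nunique()} classes")
```

The threshold of 20 distinct values is an arbitrary rule of thumb, not a hard rule.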

Finally, we need to understand what we are comparing our model to. What is the current baseline that we need to improve on? If there are already business rules in place, then our model has to be better at solving the problem at hand than those rules. To be able to decide how much better it is at solving the problem, we need to use evaluation metrics: metrics that are suitable for our model and as close as possible to our business requirements. If our aim is to increase revenue, then our metric should be good at estimating the increase in revenue when our model is used, compared to the current status quo. If our aim is to increase repeat purchases regardless of revenue, then other metrics may be more suitable.
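
A minimal sketch of this comparison, assuming scikit-learn; the single-feature dataset, the business rule, and the revenue-lift metric are all hypothetical stand-ins:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical data: one "activity" feature and a binary purchase label.
rng = np.random.default_rng(0)
X = rng.random((1000, 1))
y = (X[:, 0] + rng.normal(0, 0.2, 1000) > 0.6).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def business_rule(X):
    # Hypothetical existing rule: flag rows with activity above 0.5.
    return (X[:, 0] > 0.5).astype(int)

def estimated_revenue_lift(y_true, y_pred, value_per_hit=10.0, cost_per_contact=1.0):
    # Hypothetical business-aligned metric: value of correctly flagged
    # positives minus a flat cost for every row flagged.
    hits = np.sum((y_pred == 1) & (y_true == 1))
    contacted = np.sum(y_pred == 1)
    return hits * value_per_hit - contacted * cost_per_contact

model = LogisticRegression().fit(X_train, y_train)
print("Rule lift: ", estimated_revenue_lift(y_test, business_rule(X_test)))
print("Model lift:", estimated_revenue_lift(y_test, model.predict(X_test)))
```

The point is that both the baseline and the model are scored with the same business-aligned metric, so the comparison answers the question the stakeholders actually care about.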

Data Splitting

In supervised learning, we train our model on a set of data where the correct answers (labels) are given. Learning, however, is only half the problem. We also want to be able to tell whether the model we built is going to do a good job when used on future data. We cannot foresee the future, but we can use the data we already have to evaluate our model. We do this by splitting our data into parts, using one part of the data to train the model (the training set) and a separate part to evaluate the model (the test set). Since we want our test set to be as close as possible to future data, there are two key points to keep in mind when splitting the data (a minimal example follows the list):

  • Finding the best way to split the data
  • Ensuring the training and test datasets stay separate
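
A minimal sketch of such a split, assuming scikit-learn; the feature matrix X and label vector y below are randomly generated placeholders:

```python
import numpy as np
from sklearn.model_selection import train_test_split

# Hypothetical dataset: 1,000 rows, 5 features, binary labels.
rng = np.random.default_rng(42)
X = rng.random((1000, 5))
y = rng.integers(0, 2, size=1000)

# Hold out 20% as a test set. Stratifying on the labels keeps the class
# balance of the test set close to that of the full dataset, and fixing
# random_state makes the split reproducible.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
# From here on, all exploration and model fitting should touch the training set only.
```
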
Development set

When developing a model, we need to try multiple configurations of the model to decide which one gives the best results. To be able to do so, we usually split the training dataset further into training and development sets. Having two new subsets allows us to try different configurations while training on one of the two subsets and evaluating the effect of those configuration changes on the other. Once we find the best configuration, we evaluate our model, with its final configuration, on the test set.
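
Continuing the hypothetical split above, the sketch below tunes a single configuration knob, the max_depth of a scikit-learn decision tree, on a development set; the candidate depths are arbitrary:

```python
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import accuracy_score

# Carve a development set out of the training data.
X_tr, X_dev, y_tr, y_dev = train_test_split(
    X_train, y_train, test_size=0.25, random_state=42
)

# Try several configurations, scoring each on the development set only.
best_depth, best_score = None, -1.0
for depth in [2, 4, 8, 16]:
    clf = DecisionTreeClassifier(max_depth=depth, random_state=42)
    clf.fit(X_tr, y_tr)
    score = accuracy_score(y_dev, clf.predict(X_dev))
    if score > best_score:
        best_depth, best_score = depth, score

# Only the winning configuration is refit and evaluated on the test set.
final_model = DecisionTreeClassifier(max_depth=best_depth, random_state=42)
final_model.fit(X_train, y_train)
print("Test accuracy:", accuracy_score(y_test, final_model.predict(X_test)))
```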

Evaluating the model

Evaluating the model's performance is essential for picking the best algorithm for the job and for being able to estimate how the model will perform in the real world. As Box said, a model that is wrong can still be useful.
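
Reusing the hypothetical split from earlier, evaluation can be as simple as scoring each candidate algorithm on the same held-out data with a suitable metric; the two candidates below are arbitrary examples:

```python
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score

for name, candidate in [
    ("logistic regression", LogisticRegression(max_iter=1000)),
    ("random forest", RandomForestClassifier(random_state=42)),
]:
    candidate.fit(X_train, y_train)
    # F1 balances precision and recall; in practice, pick whatever
    # metric sits closest to the business requirement.
    print(name, f1_score(y_test, candidate.predict(X_test)))
```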

Deploying in production and monitoring

The main reason that a lot of data scientists use Python for machine learning rather than R, for instance, is that it makes it easier to productionize your code. Python has many web frameworks with which to build APIs and put machine learning models behind them. It is also supported by all cloud providers. I find it important that the team developing a model is also responsible for deploying it in production. Building the model in one language and then asking another team to port it into another language is error-prone. Of course, having one person or team build and deploy models may not be feasible in larger companies, or due to other implementation constraints. However, keeping the two teams in close contact and ensuring that those developing the model can still understand the production code is important, and it helps to minimize errors caused by inconsistencies between development and production code.
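
As a bare-bones sketch of what this can look like, the snippet below puts a model behind an HTTP endpoint; Flask is only one of many suitable frameworks, and the file name model.joblib is hypothetical:

```python
import joblib
from flask import Flask, jsonify, request

app = Flask(__name__)
model = joblib.load("model.joblib")  # hypothetical serialized model

@app.route("/predict", methods=["POST"])
def predict():
    # Expect a JSON body such as {"features": [[0.1, 0.2, 0.3], ...]}.
    features = request.get_json()["features"]
    predictions = model.predict(features).tolist()
    return jsonify({"predictions": predictions})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=8000)
```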

We try our best not to have any look-ahead bias when training our models. We hope the data does not change after our models are trained, and we want our code to be bug-free. However, we cannot guarantee any of this. We may overlook the fact that a user's credit score is only added to the database after they make their first purchase. We may not know that our developers decided to switch to the metric system to record our inventory's weights, while the weights were saved in pounds when the model was trained. Because of this, it is important to log all the predictions your model makes so that you can monitor its performance in the real world and compare it to the test set's performance. We can also log the test set's performance whenever we retrain the model, or keep track of the target's distribution over time.
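
One lightweight way to do this is to wrap the model's predict call so that every prediction is logged with a timestamp. The sketch below uses Python's standard logging module and is a minimal illustration, not a production-grade logging setup:

```python
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="predictions.log", level=logging.INFO)

def predict_and_log(model, features):
    predictions = model.predict(features)
    for row, pred in zip(features, predictions):
        # Record inputs and outputs so that live performance can later
        # be measured and compared to the test set's performance.
        logging.info(json.dumps({
            "ts": datetime.now(timezone.utc).isoformat(),
            "features": [float(value) for value in row],
            "prediction": float(pred),
        }))
    return predictions
```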

Iterating

Often, once we deploy a model, we find ourselves with more data. Furthermore, the performance of our model is not guaranteed to be the same when deployed in production. This can be due to implementation issues or to mistakes that happened during the evaluation process. These two points mean that the first version of your solution is usually up for improvement. Starting with simple solutions that are then improved through iteration is an important concept in agile programming, and it is a paramount concept for machine learning. This whole process, from understanding the problem to monitoring the ongoing improvements to the solution, requires tools that allow us to iterate quickly and efficiently.

About the Author

Mansoor Ahmed is a chemical engineer and web developer.
