
Machine Learning Models Cheat Sheet

Talking to people, I have noticed a common situation: an interview is a few days away and your schedule is jam-packed, or you are in revision mode and just want to review the basic concepts and models of machine learning. This blog collects all the major machine learning models in one place for quick revision and interview preparation, and covers some of the questions most frequently asked in interviews.


Machine learning models are broadly categorized into three categories:

Supervised Learning

1. It is defined by its use of labeled datasets to train algorithms that classify data or predict outcomes accurately.

2. The dataset has a target attribute.

3. Prior knowledge of the dataset is available.


Unsupervised Learning

1. It uses machine learning algorithms to analyze and cluster unlabeled datasets.

2. These algorithms discover hidden patterns or data groupings without the need for human intervention.

3. Prior knowledge of the dataset may or may not be available.


Reinforcement Learning

1. Reinforcement learning (RL) is a type of machine learning technique that enables an agent to learn in an interactive environment by trial and error, using feedback from its own actions and experiences.

2. Based on its actions, the agent receives a positive or a negative reward.


Now let's take a deeper look at each type of learning.


Supervised Learning


  • Linear Regression

In statistics, linear regression is a linear approach to modelling the relationship between a scalar response and one or more explanatory variables. The case of one explanatory variable is called simple linear regression; for more than one, the process is called multiple linear regression.

1. What are the basic assumptions / when should it be used? (a favourite interview question)


a) Linearity: The relationship between X and the mean of Y is linear.

b) Homoscedasticity: The variance of residual is the same for any value of X.

c) Independence: Observations are independent of each other.

d) Normality: For any fixed value of X, Y is normally distributed.


2. Advantages

a) Linear regression performs exceptionally well when the data has a linear relationship.

b) It is easy to implement and train.

c) Overfitting can be handled using dimensionality reduction, cross-validation, and regularization.


3. Disadvantages

a) Sometimes a lot of feature engineering is required.

b) If the independent features are correlated, performance may suffer.

c) It is quite prone to noise and overfitting.


4. Impact of missing values and outliers?

It is sensitive to both missing values and outliers.
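
To make this concrete, here is a minimal sketch of fitting a linear regression with scikit-learn; the data is synthetic and purely illustrative:

import numpy as np
from sklearn.linear_model import LinearRegression

# Illustrative data: y is roughly 3*x + 2 plus Gaussian noise
rng = np.random.default_rng(0)
X = rng.uniform(0, 10, size=(100, 1))
y = 3 * X.ravel() + 2 + rng.normal(0, 1, size=100)

model = LinearRegression()
model.fit(X, y)
print(model.coef_, model.intercept_)  # should recover values close to 3 and 2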


  • Logistic Regression

The logistic model is used to model the probability of a certain class or event existing, such as pass/fail or win/lose.

1. What are the basic assumptions?

A linear relationship between the independent features and the log odds.


2. Advantages

a) Logistic regression is very easy to understand.

b) It requires less training time.

c) It gives good accuracy on many simple datasets, and it performs well when the dataset is linearly separable.

d) It makes no assumptions about the distributions of classes in feature space.

e) Logistic regression is less inclined to overfitting, but it can overfit on high-dimensional datasets.

f) It is easy to implement and interpret, and very efficient to train.


3. Disadvantages

a) Sometimes a lot of feature engineering is required.

b) If the independent features are correlated, performance may suffer.

c) It is quite prone to noise and overfitting.


4. Impact of missing values and outliers?

It is sensitive to both missing values and outliers.
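
A minimal sketch of training a logistic regression classifier with scikit-learn, on synthetic data chosen purely for illustration:

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Illustrative binary classification data
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)  # raise max_iter if the solver warns about convergence
clf.fit(X_train, y_train)
print(clf.score(X_test, y_test))      # mean accuracy on held-out data
print(clf.predict_proba(X_test[:3]))  # class probabilities, not just labels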


  • Decision Tree (Classifier and Regressor)

A decision tree is a flowchart-like model that repeatedly splits the data on feature values, enabling you to make a decision or prediction about a process.

1. What are the basic assumptions?

There are no such assumptions.


2. Advantages

a) Clear visualization.

b) Simple and easy to understand.

c) Decision trees can be used for both classification and regression problems.

d) They can handle both continuous and categorical variables.

e) No feature scaling is required.


3. Disadvantages

a) Overfitting

b) High variance

c) Not suitable for large datasets


4. Impact of outliers?

It is not sensitive to outliers.
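
The "clear visualization" advantage above is easy to demonstrate. Here is a minimal scikit-learn sketch; the max_depth of 3 is an arbitrary choice to keep the printed tree readable:

from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
# max_depth limits tree growth, the usual guard against overfitting
clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X, y)
print(export_text(clf))  # prints the learned if/else rules as plain text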


  • SVM (Support Vector Machine)

SVM uses a technique called the kernel trick to transform your data and then, based on these transformations, finds an optimal boundary between the possible outputs.

1. What are the basic assumptions?

There are no such assumptions.


2. Advantages

a) SVM is more effective in high-dimensional spaces.

b) SVM is relatively memory efficient, and the risk of overfitting is low.

c) SVMs work well when we have little prior knowledge of the data.


3. Disadvantages

a) More training time is required for larger datasets.

b) It is difficult to choose a good kernel function.

c) The main SVM hyperparameters are the cost (C) and gamma. They are not easy to fine-tune, and it is hard to visualize their impact.


4. Impact of outliers?

It is usually sensitive to outliers.
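
A minimal sketch of an RBF-kernel SVM in scikit-learn. Scaling is bundled in because SVMs are sensitive to feature scale, and the C and gamma values below are defaults, not tuned:

from sklearn.datasets import make_classification
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
# C and gamma are the hyperparameters mentioned above
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X, y)
print(clf.score(X, y))  # mean accuracy on the training data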


  • Naive Bayes

Naive Bayes classifiers are a collection of classification algorithms based on Bayes' Theorem.

1. What are the basic assumptions?

The features are independent of each other.


2. Advantages

a) Works very well with a large number of features.

b) Works well with large training datasets.

c) It converges faster during training.

d) It also performs well with categorical features.


3. Disadvantages

Correlated features affect performance.


4. Impact of outliers and missing values?

It is usually robust to outliers and can handle missing values.
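
A minimal Gaussian Naive Bayes sketch with scikit-learn, using the iris dataset purely as an example:

from sklearn.datasets import load_iris
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)
clf = GaussianNB()  # assumes features are conditionally independent given the class
print(cross_val_score(clf, X, y, cv=5).mean())  # 5-fold cross-validated accuracy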


  • KNN (k-Nearest Neighbours)

KNN is used for both classification and regression. In both cases, the input consists of the k closest training examples in the dataset.

1. Advantages

a) Quick training time, since the model simply stores the training data.

b) Simple algorithm that is easy to interpret.

c) Versatile: useful for both regression and classification.

d) High accuracy.


2. Disadvantages

a) Accuracy depends on the quality of the data.

b) With large data, the prediction stage might be slow.

c) Sensitive to the scale of the data and irrelevant features.


3. Impact of outliers?

It is usually sensitive to outliers.
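
A minimal KNN sketch with scikit-learn. The scaler is included because, as noted above, KNN is sensitive to the scale of the data; k=5 is an arbitrary illustrative choice:

from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
# Scaling matters because KNN relies on raw distances between points
clf = make_pipeline(StandardScaler(), KNeighborsClassifier(n_neighbors=5))
clf.fit(X, y)
print(clf.predict(X[:3]))  # predicted classes of the first three samples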


  • Random Forest

Random forest builds multiple decision trees and merges them to form a single, more robust model. Its predictions are more accurate and stable, and training is done using the 'bagging' method.

1. What are the basic assumptions?

There are no such assumptions.


2. Advantages

a) Much less prone to overfitting than a single decision tree.

b) Less parameter tuning is required.

c) It can handle both continuous and categorical variables.


3. Disadvantages

It can be biased towards the more frequent classes in multiclass classification problems.


4. Impact of outliers?

Robust to outliers.
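
A minimal random forest sketch with scikit-learn; n_estimators=200 is an arbitrary illustrative choice for the number of bagged trees:

from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# n_estimators = number of bagged trees merged into the final model
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X, y)
print(clf.feature_importances_)  # a useful by-product of the ensemble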


  • XGBoost

XGBoost is an implementation of gradient-boosted decision trees designed for speed and performance. It is more accurate and efficient than standard gradient boosting implementations.

1. Advantages

a) It delivers great performance.

b) It can model complex non-linear functions.

c) It performs well on a wide range of ML use cases.


2. Disadvantages

It requires some amount of parameter tuning.


3. Impact of outliers and missing values?

Robust to outliers, and it can handle missing values natively (it learns a default split direction for them).
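
A minimal sketch using the xgboost package's scikit-learn-style API (assuming xgboost is installed; the hyperparameter values are illustrative, not tuned):

from sklearn.datasets import make_classification
from xgboost import XGBClassifier  # pip install xgboost

X, y = make_classification(n_samples=500, n_features=10, random_state=0)
# A few of the hyperparameters that typically need tuning
clf = XGBClassifier(n_estimators=200, learning_rate=0.1, max_depth=4)
clf.fit(X, y)
print(clf.score(X, y))  # mean accuracy on the training data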


Unsupervised Learning


  • K-Means

The K-means clustering algorithm is used to find groups which have not been explicitly labeled in the data.

1. Advantages

a) With a large number of variables, K-Means is usually computationally faster than hierarchical clustering, provided k is kept small.

b) K-Means produces tighter clusters than hierarchical clustering, especially when the clusters are globular.


2. Disadvantages

a) It is difficult to choose the value of k in advance.

b) It does not work well with non-globular clusters or clusters of varying size and density.


3. Impact of outliers?

It is usually sensitive to outliers.
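
A minimal K-Means sketch with scikit-learn on synthetic blob data. Note that n_clusters (the k value) must be chosen up front, which is the main difficulty listed above:

from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

# Synthetic globular clusters, the case where K-Means shines
X, _ = make_blobs(n_samples=300, centers=4, random_state=0)
km = KMeans(n_clusters=4, n_init=10, random_state=0)
labels = km.fit_predict(X)
print(km.cluster_centers_)  # the four learned cluster centres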


  • DBSCAN

DBSCAN (Density-Based Spatial Clustering of Applications with Noise) is a popular clustering method used in machine learning to separate clusters of high density from clusters of low density.


1. Advantages

a) Does not require a priori specification of the number of clusters.

b) Able to identify noise points while clustering.

c) DBSCAN can find arbitrarily sized and arbitrarily shaped clusters.


2. Disadvantages

a) DBSCAN fails when clusters have varying densities.

b) It fails on 'neck'-type datasets, where two clusters are connected by a thin bridge of points.

c) It does not work well with high-dimensional data.


3. Impact of outliers?

Robust to outliers.
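
A minimal DBSCAN sketch with scikit-learn. The 'moons' data is non-globular, the kind of shape K-Means struggles with; the eps and min_samples values are illustrative choices that depend on the data:

from sklearn.cluster import DBSCAN
from sklearn.datasets import make_moons

# Two interleaved half-moons: arbitrarily shaped clusters
X, _ = make_moons(n_samples=300, noise=0.05, random_state=0)
db = DBSCAN(eps=0.2, min_samples=5)
labels = db.fit_predict(X)
print(set(labels))  # label -1 marks points treated as noise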


Conclusion

I hope this attempt to summarize the main machine learning models comes in handy for your interview preparation, as well as serving as a starting point for newcomers to the field.


NOTE: I am attaching a link below with more information on all the topics covered above. It will be especially helpful for interview preparation, since these are all important interview questions.

(Just click the button below to follow the link.)

Your feedback is appreciated!

Did you find this blog helpful? Any suggestions for improvement? Please let me know by filling in the contact form or pinging me on LinkedIn.

Thanks!
