scikit-learn
26 Nov 2024
Introduction
scikit-learn is one of the most popular Python libraries for machine learning, providing tools for supervised, unsupervised, and semi-supervised learning, as well as utilities for preprocessing, model selection, and more.
This guide explores key features of the library with practical examples.
Let’s get started by installing scikit-learn into our environment:
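A typical installation looks something like this (the exact command depends on your environment and package manager; the package name is scikit-learn, not sklearn):

```
pip install scikit-learn
```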
Supervised Learning
Supervised learning is a type of machine learning where the model learns to map input data to labeled outputs (targets) based on a given dataset. During training, the algorithm uses these labeled examples to understand the relationship between features and outcomes, enabling it to make accurate predictions or classifications on new, unseen data. This approach is commonly used for tasks like regression (predicting continuous values) and classification (categorizing data into discrete labels).
Regression
Regression models in supervised learning are used to predict continuous outcomes. These models establish relationships between input features and a target variable. Here’s a summary of the primary types of regression models available in scikit-learn:
- Linear Regression: A simple and interpretable model that predicts outcomes based on a linear relationship between input features and the target variable. It’s ideal for tasks like predicting house prices based on square footage.
- Ridge and Lasso Regression: These are regularized versions of linear regression that handle multicollinearity and high-dimensional data by adding penalties to large coefficients. Common applications include gene expression analysis and other domains with many correlated features.
- Support Vector Regression (SVR): A kernel-based approach that captures non-linear relationships between inputs and outputs, making it effective for problems like stock price prediction.
- Random Forest Regressor: An ensemble method that uses multiple decision trees to make robust predictions. It excels in tasks such as forecasting temperature or sales trends.
- Gradient Boosting: This method iteratively improves predictions by focusing on poorly predicted samples. It’s commonly used for complex tasks like predicting customer lifetime value.
- K-Neighbors Regressor: This algorithm predicts based on the average target value of the nearest neighbors in feature space, often used in property value estimation.
Regression models are essential for problems where understanding or predicting a continuous trend is the goal. scikit-learn’s implementations provide a range of options from simple to complex to handle varying levels of data complexity and feature interactions.
Linear Regression
Linear Regression predicts a target variable as a linear combination of input features.
Example: predicting house prices.
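A minimal sketch, assuming a tiny synthetic dataset where `y` is simply twice `x`:

```python
from sklearn.linear_model import LinearRegression

# Illustrative training data: y is exactly 2 * x.
X = [[1], [2], [3]]
y = [2, 4, 6]

model = LinearRegression()
model.fit(X, y)

print(model.predict([[4]]))  # expect a value very close to 8.0
```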
In this example, the `LinearRegression` model predicts what the next value of `y` should be when given an unseen `X` value. Predicting for `[4]` gives us:
Ridge and Lasso Regression
These methods add regularization to Linear Regression to handle high-dimensional data.
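A minimal sketch, using made-up data with two correlated features so the effect of the penalty is visible:

```python
from sklearn.linear_model import Lasso, Ridge

# Illustrative data: the two features are strongly correlated.
X = [[1, 1.0], [2, 2.1], [3, 2.9], [4, 4.2]]
y = [2, 4, 6, 8]

ridge = Ridge(alpha=1.0).fit(X, y)
lasso = Lasso(alpha=0.1).fit(X, y)

print(ridge.coef_)  # shrunken coefficients
print(lasso.coef_)  # some coefficients may be driven to exactly zero
```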
This produces the following output:
The `coef_` value here represents the coefficients (weights) of the features in the fitted model. These coefficients provide insight into the importance and contribution of each feature to the target variable.
These are important for:
- Understanding feature importance: By examining the magnitude of the coefficients, you can determine which features have the greatest impact on the target variable. Larger absolute values typically indicate more influential features.
- Interpreting relationships: The sign of each coefficient indicates the direction of the relationship. A positive coefficient implies that increasing the feature value increases the target value; a negative coefficient implies the opposite.
- Feature selection: As a feature’s coefficient approaches zero, its importance diminishes, which can inform your decision on whether to keep it as a feature.
- Predicting target changes: Each coefficient acts as a multiplier, telling you how much the target changes for a unit change in the corresponding feature (holding the others fixed).
Support Vector Regression (SVR)
SVR captures non-linear relationships by using kernels.
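A minimal sketch, assuming a small synthetic dataset where the target is the square of the input:

```python
import numpy as np
from sklearn.svm import SVR

# Illustrative non-linear data: y = x squared.
X = np.array([[1], [2], [3], [4], [5]])
y = np.array([1, 4, 9, 16, 25])

model = SVR(kernel="rbf", C=100, epsilon=0.1)
model.fit(X, y)

print(model.predict([[6]]))  # an approximate, not exact, prediction
```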
The output here is:
The relationship between the features in `X` and the targets in `y` is no longer linear. As a result, SVR predicts a value by finding a function that fits within a tolerance margin (called the epsilon-tube) around the data. The `SVR` constructor allows you to control `epsilon`: passing a smaller value will aim for a more exact fit, while a larger value will aim for better generalisation.
Random Forest Regressor
An ensemble method that averages predictions from multiple decision trees.
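A minimal sketch with made-up data; note there is no fixed `random_state`, which is why repeated runs can differ, as discussed below:

```python
from sklearn.ensemble import RandomForestRegressor

# Illustrative, slightly noisy data.
X = [[1], [2], [3], [4], [5]]
y = [1.2, 1.9, 3.2, 3.8, 5.1]

model = RandomForestRegressor(n_estimators=100)
model.fit(X, y)

print(model.predict([[6]]))
```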
The output (when I run it) of this:
Run this a few more times, though, and you’ll see that you get different output values. As the name suggests, the Random Forest Regressor uses randomness in building its model, which may lead to slightly different values each time the model is trained. This randomness helps improve the model’s generalisation ability.
Gradient Boosting
Boosting combines weak learners to achieve higher accuracy.
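A minimal sketch with illustrative data and common default-ish hyperparameters:

```python
from sklearn.ensemble import GradientBoostingRegressor

# Illustrative data.
X = [[1], [2], [3], [4], [5]]
y = [1.0, 2.1, 2.9, 4.2, 5.1]

model = GradientBoostingRegressor(n_estimators=100, learning_rate=0.1)
model.fit(X, y)

print(model.predict([[6]]))
```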
The output of this is:
Gradient Boosting builds models sequentially to improve predictions by focusing on errors that it observes from previous models. It works by doing the following:
- Sequential model building: Each new model attempts to correct the residual errors left by the models built before it
- Gradient descent optimisation: Rather than fitting the target variable directly, gradient boosting aims at minimising a loss function
- Weighted contribution: Predictions from all models are combined, often using weighted sums, to produce a final prediction
K-Neighbors Regressor
Predicts the target value based on the mean of the nearest neighbors.
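A minimal sketch; the data and the choice of `n_neighbors=2` are illustrative:

```python
from sklearn.neighbors import KNeighborsRegressor

X = [[1], [2], [3], [10], [11], [12]]
y = [1.0, 2.0, 3.0, 10.0, 11.0, 12.0]

# Predict using the mean of the 2 nearest neighbours.
model = KNeighborsRegressor(n_neighbors=2)
model.fit(X, y)

print(model.predict([[2.5]]))  # the average of the targets for [2] and [3]
```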
The output of this:
K-Neighbors Regressor is a non-parametric algorithm. It relies on the similarity between data points to predict the target value for a new input. It works in the following way:
- Find neighbors: The algorithm identifies the \(K\) nearest data points (neighbors) in the feature space using a distance function (Euclidean, Manhattan, or Minkowski)
- Predict the value: The target value for the input is computed as the (optionally weighted) average of the target values of the \(K\) neighbors
Summary
Algorithm | Where It Excels | Where to Avoid | Additional Notes |
---|---|---|---|
Linear Regression | Simple, interpretable tasks. Problems with few features and a linear relationship. | Non-linear relationships. Datasets with outliers. Multicollinearity. | Coefficients provide insights into feature importance. |
Ridge Regression | Handling multicollinearity. High-dimensional datasets. | Sparse datasets where some features are irrelevant. | Adds an L2 penalty (squared magnitude of coefficients). |
Lasso Regression | Feature selection (shrinks irrelevant feature weights to zero). High-dimensional datasets. | Scenarios needing all features for predictions. Datasets with high noise levels. | Adds an L1 penalty (absolute value of coefficients). |
ElasticNet Regression | Combines Ridge and Lasso strengths for datasets with multiple feature types. | Small datasets where simpler methods like Linear Regression suffice. | Balances L1 and L2 penalties via an `l1_ratio` parameter. |
Support Vector Regression (SVR) | Capturing non-linear relationships. Small to medium-sized datasets. | Large datasets (slow training). Poorly scaled features (sensitive to scaling). | Uses kernels (e.g., RBF) to model non-linear relationships. |
Random Forest Regressor | Robust to outliers. Non-linear relationships. Feature importance estimation. | High-dimensional sparse data. Very large datasets (may require more memory). | Ensemble method combining multiple decision trees. |
Gradient Boosting Regressor | Complex datasets. Predictive tasks with high accuracy requirements. Tabular data. | Large datasets without sufficient computational resources. Overfitting if not regularized. | Iteratively improves predictions by focusing on poorly predicted samples. |
K-Neighbors Regressor | Small datasets with local patterns. Non-linear relationships without feature engineering. | Large datasets (computationally expensive). High-dimensional feature spaces. | Predictions are based on the mean of the \(k\) nearest neighbors in the feature space. |
Classification
Classification is a supervised learning technique used to predict discrete labels (classes) for given input data. In scikit-learn, various classification models are available, each suited for different types of problems. Here’s a summary of some key classification models:
- Logistic Regression: A simple yet powerful model that predicts probabilities of class membership using a logistic function. It works well for both binary (e.g., spam detection) and multi-class classification tasks. Logistic Regression is interpretable and often serves as a baseline model.
- Decision Tree Classifier: A tree-based model that splits data based on feature values, creating interpretable decision rules. Decision trees excel at explaining predictions and handling non-linear relationships. They are prone to overfitting but can be controlled with pruning or parameter constraints.
- Random Forest Classifier: An ensemble method that combines multiple decision trees to improve accuracy and reduce overfitting. Random forests are robust and handle high-dimensional data well. They’re commonly used in applications like disease diagnosis and image classification.
- Support Vector Machine (SVM): SVMs create a hyperplane that separates classes while maximizing the margin between them. They are effective for both linear and non-linear classification tasks and work well for problems like handwriting recognition. SVMs are sensitive to feature scaling and require tuning parameters like the kernel type.
- Naive Bayes: A probabilistic model based on Bayes’ theorem, assuming independence between features. Naive Bayes is fast and efficient for high-dimensional data, making it ideal for text classification problems like spam filtering.
- k-Nearest Neighbors (k-NN): A simple and intuitive algorithm that classifies based on the majority label of the \(k\) nearest neighbors in the feature space. It works well for recommendation systems and other tasks where the decision boundary is complex but local patterns are important.
- Gradient Boosting Classifier: A powerful ensemble technique that iteratively improves predictions by correcting the errors of previous models. Gradient Boosting achieves high accuracy on structured/tabular data and is often used in competitions and real-world applications like fraud detection.
Logistic Regression
A simple classifier for binary or multi-class problems.
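A minimal sketch, assuming a small synthetic dataset where low values belong to class `0` and high values to class `1`:

```python
from sklearn.linear_model import LogisticRegression

# Illustrative binary data.
X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]

model = LogisticRegression()
model.fit(X, y)

print(model.predict([[2.5]]))        # the predicted class
print(model.predict_proba([[2.5]]))  # probabilities for classes 0 and 1
```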
This will give you a list of values that are the probabilities of the `[2.5]` value being in class `0` or `1`. With this small amount of data, the output probabilities make some boundary decisions that don’t appear correct at first. As the sample set grows and becomes more diverse, the classifications normalise.
Decision Tree Classifier
A rule-based model for interpretable predictions.
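A minimal sketch; the data (values near `2` labelled `1`, the rest `0`) is made up to mirror the behaviour described below:

```python
from sklearn.tree import DecisionTreeClassifier

# Illustrative data: values close to 2 are labelled 1, the rest 0.
X = [[0], [1], [2], [3], [4], [5]]
y = [0, 1, 1, 1, 0, 0]

model = DecisionTreeClassifier(max_depth=2)
model.fit(X, y)

print(model.predict([[2], [5]]))
```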
The output of this is:
As the input value moves further away from `2`, the output starts snapping to `0`.
This algorithm starts at the root node and selects the feature and threshold that best divide the dataset into subsets with the most homogeneous class labels.
The process is repeated for each subset until a stopping criterion is met, such as:
- Maximum tree depth.
- Minimum samples in a leaf node.
- All samples in a subset belong to the same class.
Random Forest Classifier
An ensemble method that reduces overfitting.
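A minimal sketch with illustrative data, again calling `predict_proba` as discussed below:

```python
from sklearn.ensemble import RandomForestClassifier

X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]

model = RandomForestClassifier(n_estimators=100)
model.fit(X, y)

print(model.predict([[2.5]]))
print(model.predict_proba([[2.5]]))  # per-class probabilities
```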
The output of this is:
Having a look at `predict_proba` on the `model` object, we can see the probabilities of the value being classified:
`2.5` gives us a `68%` chance, according to the model, that we should classify it as a `1`.
This algorithm works by:
- Building multiple decision trees during training.
- Each tree is trained on a random subset of data (bagging) and considers a random subset of features at each split.
- Predictions are made by majority voting across all trees.
Support Vector Machine (SVM)
Maximizes the margin between classes for classification.
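A minimal sketch with illustrative data and a linear kernel:

```python
from sklearn.svm import SVC

X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]

model = SVC(kernel="linear")
model.fit(X, y)

print(model.predict([[2.5], [4.5]]))
```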
The output of this is:
SVM is a supervised learning algorithm used for classification and regression tasks. SVM finds the hyperplane that best separates classes while maximizing the margin between them.
Naive Bayes
A probabilistic model based on Bayes’ theorem.
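A minimal sketch using Gaussian Naive Bayes on illustrative continuous data:

```python
from sklearn.naive_bayes import GaussianNB

X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]

model = GaussianNB()
model.fit(X, y)

print(model.predict([[2.5]]))
```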
The output of this is:
Naive Bayes (based on Bayes’ theorem) calculates the posterior probability of each class for a given input and assigns the class with the highest probability.
Types of Naive Bayes Models:
- Gaussian Naive Bayes: Assumes features follow a normal distribution (continuous data).
- Multinomial Naive Bayes: Suitable for discrete data, commonly used for text classification (e.g., word counts).
- Bernoulli Naive Bayes: Handles binary/boolean features (e.g., word presence/absence).
k-Nearest Neighbors (k-NN)
Classifies based on the majority label of neighbors.
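A minimal sketch; the data and `n_neighbors=3` are illustrative:

```python
from sklearn.neighbors import KNeighborsClassifier

X = [[1], [2], [3], [10], [11], [12]]
y = [0, 0, 0, 1, 1, 1]

# Classify by majority vote among the 3 nearest neighbours.
model = KNeighborsClassifier(n_neighbors=3)
model.fit(X, y)

print(model.predict([[2.5], [10.5]]))  # expect [0, 1]
```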
Gradient Boosting Classifier
Gradient Boosting Classifier is a supervised learning algorithm that builds an ensemble of weak learners (typically decision trees) sequentially, with each new tree correcting the errors of the previous ones.
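A minimal usage sketch with illustrative data:

```python
from sklearn.ensemble import GradientBoostingClassifier

X = [[1], [2], [3], [4], [5], [6]]
y = [0, 0, 0, 1, 1, 1]

model = GradientBoostingClassifier(n_estimators=100, learning_rate=0.1)
model.fit(X, y)

print(model.predict([[2.5], [4.5]]))
```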
How it works:
- Start with a Simple Model: The process begins with a weak learner (e.g., a small decision tree) that makes initial predictions.
- Compute Residuals: The errors (residuals) from the previous predictions are calculated.
- Fit the Next Model: A new weak learner is trained to predict the residuals.
- Combine Models: The predictions from all learners are combined (weighted sum) to form the final output.
- Gradient Descent: The algorithm minimizes the loss function (e.g., log loss for classification) by iteratively updating the predictions.
Summary
Algorithm | Where It Excels | Where to Avoid | Additional Notes |
---|---|---|---|
Logistic Regression | Binary or multi-class classification. Interpretable and simple problems. Linearly separable data. | Non-linear decision boundaries. Complex datasets with many features. | Outputs probabilities for class membership. Often used as a baseline model. |
Decision Tree Classifier | Interpretable models. Handling non-linear relationships. Small to medium-sized datasets. | Prone to overfitting on noisy data. Large datasets without pruning or constraints. | Creates human-readable decision rules. Can be controlled using parameters like `max_depth`. |
Random Forest Classifier | Robust to overfitting. High-dimensional data. Tasks requiring feature importance ranking. | Sparse datasets. Very large datasets (can require significant memory). | Ensemble method combining multiple decision trees. Uses bagging for improved performance. |
Support Vector Machine (SVM) | Binary or multi-class problems with complex boundaries. High-dimensional feature spaces. | Very large datasets (slow training). Datasets requiring soft predictions (probabilities). | Effective for small to medium-sized datasets. Requires scaling of features for optimal performance. |
Naive Bayes | High-dimensional data. Text classification (e.g., spam detection). Multiclass problems. | Strong feature dependencies. Continuous numerical features without preprocessing. | Assumes feature independence. Fast and efficient for large-scale problems. |
k-Nearest Neighbors (k-NN) | Small datasets with complex decision boundaries. Non-parametric problems. | Large datasets (computationally expensive). High-dimensional feature spaces. | Relies on distance metrics (e.g., Euclidean). Sensitive to feature scaling. |
Gradient Boosting Classifier | Tabular data. High accuracy requirements for structured datasets. Imbalanced data with class weighting. | Large datasets with limited resources. Risk of overfitting if not regularized. | Ensemble of weak learners that iteratively improves predictions. Requires careful hyperparameter tuning. |
Multilayer Perceptron (MLP) | Non-linear decision boundaries. Complex datasets with many features. | Large datasets without sufficient computational resources. Requires careful tuning. | Neural network-based classifier. Requires scaling of features. Can model complex patterns. |
Unsupervised Learning
Unsupervised learning is a type of machine learning where the model identifies patterns, structures, or relationships in data without labeled outputs. Instead of predicting specific outcomes, the algorithm organizes or simplifies the data based on inherent similarities or differences. Common applications include clustering (grouping similar data points) and dimensionality reduction (compressing high-dimensional data for visualization or analysis).
Clustering
Clustering is a technique in unsupervised learning that involves grouping data points into clusters based on their similarities or proximity in feature space. The goal is to organize data into meaningful structures, where points within the same cluster are more similar to each other than to those in other clusters. It is commonly used for tasks like customer segmentation, anomaly detection, and exploratory data analysis.
K-Means
K-Means is an unsupervised learning algorithm that partitions a dataset into \(K\) clusters based on feature similarity.
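A minimal sketch using the values discussed below; which cluster gets which label number is arbitrary:

```python
from sklearn.cluster import KMeans

X = [[1], [2], [10], [11]]

model = KMeans(n_clusters=2, n_init=10, random_state=0)
model.fit(X)

print(model.labels_)  # e.g. [0 0 1 1]; the label numbers themselves are arbitrary
```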
The output of this is:
This tells us that `[1]` and `[2]` are assigned the label of `0`, with `[10]` and `[11]` being assigned `1`.
How it works:
- Choose Initial Centroids: Randomly select \(K\) points as the initial cluster centroids.
- Assign Data Points to Clusters: Assign each data point to the nearest centroid based on a distance metric (e.g., Euclidean distance).
- Update Centroids: Recalculate the centroids as the mean of all points assigned to each cluster.
- Iterate: Repeat steps 2 and 3 until the centroids stabilize or a maximum number of iterations is reached.
DBSCAN
DBSCAN is an unsupervised clustering algorithm that groups data points based on density, identifying clusters of arbitrary shapes and detecting outliers.
Clearly the `50000` and `500001` values should be clustered together. The output here:
How it works:
- Core Points: A point is a core point if it has at least `min_samples` neighbors within a specified radius (`eps`).
- Reachable Points: A point is reachable if it lies within the `eps` radius of a core point.
- Noise Points: Points that are neither core points nor reachable are classified as noise (outliers).
- Cluster Formation: Clusters are formed by connecting core points and their reachable points (see the sketch below).
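A minimal, generic usage sketch (the data here is made up and is not the example discussed above):

```python
from sklearn.cluster import DBSCAN

# Illustrative data: two dense groups and one isolated outlier.
X = [[1], [2], [3], [50], [51], [52], [1000]]

model = DBSCAN(eps=3, min_samples=2)
labels = model.fit_predict(X)

print(labels)  # expect two clusters (0 and 1) and the outlier labelled -1
```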
Agglomerative Clustering
Agglomerative Clustering is a hierarchical, bottom-up clustering algorithm that begins with each data point as its own cluster and merges clusters iteratively based on a linkage criterion until a stopping condition is met.
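A minimal sketch with two well-separated, made-up groups:

```python
from sklearn.cluster import AgglomerativeClustering

X = [[1], [2], [3], [10], [11], [12]]

model = AgglomerativeClustering(n_clusters=2)
print(model.fit_predict(X))  # one cluster label per point
```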
This outputs:
How it works:
- Start with Individual Clusters: Each data point is treated as its own cluster.
- Merge Clusters: Clusters are merged step-by-step based on a similarity metric and a linkage criterion.
- Stop Merging: The process continues until the desired number of clusters is reached, or all points are merged into a single cluster.
- Dendrogram: A tree-like diagram (dendrogram) shows the hierarchical relationship between clusters.
Summary
Algorithm | Where It Excels | Where to Avoid | Additional Notes |
---|---|---|---|
K-Means | Partitioning data into well-defined, compact clusters. Large datasets with distinct clusters. | Non-spherical clusters. Highly imbalanced cluster sizes. Datasets with noise or outliers. | Relies on centroids; sensitive to initialization. Requires the number of clusters \(k\) to be specified beforehand. |
DBSCAN | Finding clusters of arbitrary shapes. Detecting outliers. Spatial data analysis. | High-dimensional data. Datasets with varying densities. Requires careful tuning of `eps` and `min_samples`. | Density-based approach. Does not require the number of clusters to be predefined. Can identify noise points as outliers. |
Agglomerative Clustering | Hierarchical relationships between clusters. Small to medium-sized datasets. | Large datasets (computationally expensive). Very high-dimensional data. | Hierarchical clustering. Outputs a dendrogram for visualizing cluster merges. |
Dimensionality Reduction
PCA
PCA is an unsupervised dimensionality reduction technique that transforms data into a lower-dimensional space while preserving as much variance as possible.
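A minimal sketch, reducing made-up 3-dimensional data down to 2 components:

```python
import numpy as np
from sklearn.decomposition import PCA

# Illustrative 3-D data.
X = np.array([[1, 2, 3], [2, 4, 6], [3, 6, 9], [4, 8, 12.5]])

pca = PCA(n_components=2)
X_reduced = pca.fit_transform(X)

print(X_reduced.shape)                # (4, 2)
print(pca.explained_variance_ratio_)  # how much variance each component keeps
```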
This outputs the following:
t-SNE
Visualizes high-dimensional data.
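A minimal sketch on random, made-up data; note that `perplexity` must be smaller than the number of samples:

```python
import numpy as np
from sklearn.manifold import TSNE

# Illustrative high-dimensional data: 20 samples, 10 features.
X = np.random.RandomState(0).rand(20, 10)

tsne = TSNE(n_components=2, perplexity=5, random_state=0)
X_embedded = tsne.fit_transform(X)

print(X_embedded.shape)  # (20, 2), ready for plotting
```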
The output from this is:
NMF
Non-negative matrix factorization for feature extraction.
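A minimal sketch; NMF requires non-negative input, so the data below is made up and non-negative:

```python
import numpy as np
from sklearn.decomposition import NMF

X = np.array([[1, 1, 2], [2, 1, 3], [3, 1.2, 4], [4, 1, 5]])

nmf = NMF(n_components=2, init="random", random_state=0, max_iter=1000)
W = nmf.fit_transform(X)  # per-sample weights
H = nmf.components_       # extracted components

print(W.shape, H.shape)  # (4, 2) (2, 3)
```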
The output of this is:
3. Semi-Supervised Learning
Semi-supervised learning bridges the gap between supervised and unsupervised learning by utilizing a small amount of labeled data alongside a large amount of unlabeled data.
Label Propagation
Label Propagation spreads label information from labeled to unlabeled data.
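A minimal sketch; by convention, `-1` marks the unlabelled samples:

```python
import numpy as np
from sklearn.semi_supervised import LabelPropagation

X = np.array([[1], [2], [3], [10], [11], [12]])
y = np.array([0, 0, -1, 1, 1, -1])  # -1 = unlabelled

model = LabelPropagation()
model.fit(X, y)

print(model.transduction_)  # labels inferred for every sample, including the -1s
```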
The output of this shows the remainder of the data getting labelled:
Self-Training
Self-Training generates pseudo-labels for unlabeled data using a supervised model.
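A minimal sketch, wrapping a LogisticRegression base estimator; `[3]` is left unlabelled (marked `-1`), echoing the point below:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.semi_supervised import SelfTrainingClassifier

X = np.array([[1], [2], [3], [4], [5], [6]])
y = np.array([0, 0, -1, 1, 1, 1])  # [3] is unlabelled

model = SelfTrainingClassifier(LogisticRegression())
model.fit(X, y)

print(model.predict([[3]]))  # the previously unlabelled value now gets a class
```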
The unlabelled `[3]` value is now returned with a label:
4. Model Selection
Model selection is the process of identifying the best machine learning model and its optimal configuration for a given dataset and problem. It involves comparing different models, evaluating their performance using metrics (e.g., accuracy, F1-score, or RMSE), and tuning their hyperparameters to maximize predictive accuracy or minimize error.
Cross-Validation
Cross-validation is a model evaluation technique that assesses a model’s performance by dividing the dataset into multiple subsets (folds) for training and testing.
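A minimal sketch using the bundled iris dataset and 5 folds:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = LogisticRegression(max_iter=1000)

scores = cross_val_score(model, X, y, cv=5)  # 5-fold cross-validation
print(scores)
print(scores.mean())
```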
The output here is:
How it works:
- Split the Data: The dataset is split into \(k\) folds (subsets).
- Train and Test: The model is trained on \(k - 1\) folds and tested on the remaining fold. This process repeats \(k\) times, with each fold used as the test set once.
- Aggregate Results: Performance metrics (e.g., accuracy, F1-score) from all folds are averaged to provide an overall evaluation.
GridSearchCV
GridSearchCV is a tool in scikit-learn for hyperparameter tuning that systematically searches for the best combination of hyperparameters by evaluating all possible parameter combinations in a given grid.
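A minimal sketch using the parameter grid mentioned below and the iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import GridSearchCV
from sklearn.svm import SVC

X, y = load_iris(return_X_y=True)

param_grid = {'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}
search = GridSearchCV(SVC(), param_grid, cv=5)
search.fit(X, y)

print(search.best_params_)
print(search.best_score_)
```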
The output of this is:
How it works:
- Define a Parameter Grid: Specify the hyperparameters and their possible values (e.g., `{'C': [0.1, 1, 10], 'kernel': ['linear', 'rbf']}` for an SVM).
- Cross-Validation: For each combination of parameters, the model is evaluated using cross-validation to estimate its performance.
- Select the Best Model: The combination of hyperparameters that produces the best cross-validation score is chosen.
RandomizedSearchCV
RandomizedSearchCV is a hyperparameter tuning tool in scikit-learn that randomly samples a fixed number of parameter combinations from a specified grid and evaluates them using cross-validation.
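A minimal sketch; the parameter lists and the `n_iter` value are illustrative:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import RandomizedSearchCV

X, y = load_iris(return_X_y=True)

param_distributions = {'n_estimators': [50, 100, 200], 'max_depth': [None, 3, 5, 10]}
search = RandomizedSearchCV(RandomForestClassifier(random_state=0),
                            param_distributions, n_iter=5, cv=3, random_state=0)
search.fit(X, y)

print(search.best_params_)
```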
The output of this is:
How it works:
- Define a Parameter Distribution: Specify the hyperparameters and their possible ranges (distributions or lists) to sample from.
- Random Sampling: A fixed number of parameter combinations is randomly selected and evaluated.
- Cross-Validation: For each sampled combination, the model is evaluated using cross-validation.
- Select the Best Model: The parameter combination that yields the best performance is chosen.
5. Feature Selection
Feature selection is the process of identifying the most relevant features in a dataset for improving a machine learning model’s performance. By reducing the number of features, it helps eliminate redundant, irrelevant, or noisy data, leading to simpler, faster, and more interpretable models.
SelectKBest
SelectKBest is a feature selection method in scikit-learn that selects the top \(k\) features from the dataset based on univariate statistical tests.
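A minimal sketch, keeping the 2 best features of the iris dataset using an ANOVA F-test:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, f_classif

X, y = load_iris(return_X_y=True)

selector = SelectKBest(score_func=f_classif, k=2)
X_selected = selector.fit_transform(X, y)

print(X.shape, X_selected.shape)  # (150, 4) (150, 2)
```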
The output of this is:
How it works:
- Choose a Scoring Function: Select a statistical test (e.g., ANOVA, chi-square, mutual information) to evaluate feature relevance.
- Compute Scores: Each feature is scored based on its relationship with the target variable.
- Select Top \(k\) Features: The \(k\) highest-scoring features are retained for the model.
Recursive Feature Elimination (RFE)
RFE is a feature selection method that recursively removes the least important features based on a model’s performance until the desired number of features is reached.
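A minimal sketch, using Logistic Regression as the underlying model and keeping 2 features:

```python
from sklearn.datasets import load_iris
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression

X, y = load_iris(return_X_y=True)

rfe = RFE(LogisticRegression(max_iter=1000), n_features_to_select=2)
rfe.fit(X, y)

print(rfe.support_)  # which features were kept
print(rfe.ranking_)  # 1 = selected; higher numbers were eliminated earlier
```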
The output of this is:
How it works:
- Fit a Model: Train a machine learning model (e.g., Logistic Regression, SVM) on the dataset.
- Rank Features: The model assigns importance scores to the features (e.g., weights or coefficients).
- Remove Features: Eliminate the least important features (based on the scores) and refit the model.
- Repeat: Continue the process until the specified number of features is retained.
VarianceThreshold
VarianceThreshold is a simple feature selection method in scikit-learn that removes features with low variance, assuming that low-variance features do not carry much information.
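A minimal sketch with made-up data in which one column is constant (all zeros):

```python
from sklearn.feature_selection import VarianceThreshold

# The first column never changes, so it has zero variance.
X = [[0, 1, 2], [0, 2, 3], [0, 3, 4], [0, 4, 5]]

selector = VarianceThreshold()  # the default threshold of 0 removes constant features
print(selector.fit_transform(X))
```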
The output of this is:
The zeros don’t change, so they’re stripped from the result.
How it works:
- Compute Variance: For each feature, calculate the variance across all samples.
- Apply Threshold: Remove features whose variance falls below a specified threshold.
6. Preprocessing
Preprocessing transforms raw data to make it suitable for machine learning algorithms.
StandardScaler
StandardScaler is a preprocessing technique in scikit-learn that standardizes features by removing the mean and scaling them to unit variance (z-score normalization).
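A minimal sketch with two made-up features on very different scales:

```python
from sklearn.preprocessing import StandardScaler

X = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]

scaler = StandardScaler()
print(scaler.fit_transform(X))  # each column now has mean 0 and unit variance
```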
This outputs the following:
How it works:
- Compute Mean and Standard Deviation: For each feature, calculate its mean \(\mu\) and standard deviation \(\sigma\).
- Transform Features: Scale each feature \(x\) using the formula:
  \(z = \frac{x - \mu}{\sigma}\)
  This results in features with a mean of 0 and a standard deviation of 1.
MinMaxScaler
MinMaxScaler is a preprocessing technique in scikit-learn that scales features to a fixed range, typically `[0, 1]`. It preserves the relationships between data points while ensuring all features are within the specified range.
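A minimal sketch, again with made-up data on different scales:

```python
from sklearn.preprocessing import MinMaxScaler

X = [[1.0, 10.0], [2.0, 20.0], [3.0, 30.0]]

scaler = MinMaxScaler()  # the default feature_range is (0, 1)
print(scaler.fit_transform(X))  # each column rescaled to [0, 1]
```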
This outputs the following:
How it works:
- Compute Minimum and Maximum Values: For each feature, calculate its minimum \(\text{min}\) and maximum \(\text{max}\) values.
- Transform Features: Scale each feature \(x\) using the formula:
  \(x' = \frac{x - \text{min}}{\text{max} - \text{min}} \times (\text{max}_{\text{scale}} - \text{min}_{\text{scale}}) + \text{min}_{\text{scale}}\)
  By default, \(\text{min}_{\text{scale}} = 0\) and \(\text{max}_{\text{scale}} = 1\).
PolynomialFeatures
PolynomialFeatures is a preprocessing technique in scikit-learn that generates new features by adding polynomial combinations of existing features up to a specified degree.
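A minimal sketch, expanding a single two-feature sample to degree 2:

```python
from sklearn.preprocessing import PolynomialFeatures

X = [[2, 3]]

poly = PolynomialFeatures(degree=2)
# Columns: 1, x1, x2, x1^2, x1*x2, x2^2  ->  [1, 2, 3, 4, 6, 9]
print(poly.fit_transform(X))
```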
This outputs the following:
How it works:
- Generate Polynomial Features: Creates polynomial terms (e.g., \(x_1^2, x_1 \cdot x_2, x_2^3\)) for the input features up to the specified degree.
- Include Interaction Terms: Optionally includes interaction terms (e.g., \(x_1 \cdot x_2\)) to capture feature interactions.
- Expand the Feature Space: Transforms the input dataset into a higher-dimensional space to model non-linear relationships.
LabelEncoder
LabelEncoder is a preprocessing technique in scikit-learn that encodes categorical labels as integers, making them suitable for machine learning algorithms that require numerical input.
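A minimal sketch using the `['cat', 'dog', 'mouse']` example from the list below:

```python
from sklearn.preprocessing import LabelEncoder

labels = ['cat', 'dog', 'mouse', 'dog']

encoder = LabelEncoder()
encoded = encoder.fit_transform(labels)

print(encoded)                             # [0 1 2 1]
print(encoder.inverse_transform(encoded))  # back to the original strings
```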
The output of this is:
- Fit to Labels: Maps each unique label in the dataset to an integer. Example: `['cat', 'dog', 'mouse']` → `[0, 1, 2]`.
- Transform Labels: Converts the original labels into their corresponding integer representation.
- Inverse Transform: Converts encoded integers back into their original labels.
OneHotEncoder
OneHotEncoder is a preprocessing technique in scikit-learn that converts categorical data into a binary matrix (one-hot encoding), where each category is represented by a unique binary vector.
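A minimal sketch with a single made-up categorical feature:

```python
from sklearn.preprocessing import OneHotEncoder

X = [['cat'], ['dog'], ['mouse'], ['dog']]

encoder = OneHotEncoder()
encoded = encoder.fit_transform(X)  # a sparse matrix by default

print(encoded.toarray())
print(encoder.categories_)
```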
The output of this:
How it works:
- Fit to Categories: Identifies the unique categories in each feature.
- Transform Features: Converts each category into a binary vector, with a `1` indicating the presence of the category and `0` elsewhere.
- Sparse Representation: By default, the output is a sparse matrix to save memory for large datasets with many categories.
Imputer
Imputer fills in missing values in datasets.
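In current scikit-learn versions this lives in `sklearn.impute` as `SimpleImputer`; a minimal sketch with a made-up missing value:

```python
import numpy as np
from sklearn.impute import SimpleImputer

X = [[1.0, 2.0], [np.nan, 3.0], [7.0, 6.0]]

imputer = SimpleImputer(strategy='mean')  # replace missing values with the column mean
print(imputer.fit_transform(X))
```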
In the output of this, you can see the `None` has been filled in:
How it works:
- Identify Missing Values: Detects missing values in the dataset (default: `np.nan`).
- Compute Replacement Values: Based on the chosen strategy (e.g., mean or median), calculates replacement values for each feature.
- Fill Missing Values: Replaces missing values in the dataset with the computed replacements.
7. Pipelines and Utilities
Pipelines streamline workflows by chaining preprocessing steps with modeling.
Pipeline
A Pipeline in scikit-learn is a sequential workflow that chains multiple preprocessing steps and a final estimator into a single object, simplifying and automating machine learning workflows.
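A minimal sketch, chaining a scaler with a classifier on the iris dataset:

```python
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)

pipe = Pipeline([
    ('scaler', StandardScaler()),                # preprocessing step
    ('clf', LogisticRegression(max_iter=1000)),  # final estimator
])
pipe.fit(X, y)

print(pipe.score(X, y))
```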
ColumnTransformer
ColumnTransformer in scikit-learn is a tool that applies different preprocessing steps to specific columns of a dataset, enabling flexible and efficient handling of mixed data types.
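A minimal sketch; the column names and values are made up:

```python
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.preprocessing import OneHotEncoder, StandardScaler

df = pd.DataFrame({'age': [25, 32, 47], 'city': ['Leeds', 'York', 'Leeds']})

ct = ColumnTransformer([
    ('num', StandardScaler(), ['age']),   # scale the numeric column
    ('cat', OneHotEncoder(), ['city']),   # one-hot encode the categorical column
])

print(ct.fit_transform(df))
```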
FunctionTransformer
FunctionTransformer in scikit-learn allows you to apply custom or predefined functions to transform data as part of a machine learning pipeline.
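A minimal sketch, applying a log transform as a pipeline-compatible step:

```python
import numpy as np
from sklearn.preprocessing import FunctionTransformer

X = np.array([[1.0, 10.0], [2.0, 100.0], [3.0, 1000.0]])

log_transformer = FunctionTransformer(np.log1p)  # apply log(1 + x) element-wise
print(log_transformer.fit_transform(X))
```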
8. Neural Network Integration
Though scikit-learn is not primarily designed for deep learning, it includes simple neural network models.
MLPClassifier/MLPRegressor
MLPClassifier (for classification) and MLPRegressor (for regression) are multi-layer perceptron models in scikit-learn that implement neural networks with backpropagation. They are part of the feedforward neural network family.
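A minimal sketch of MLPClassifier on the iris dataset, using parameters described under “Key Parameters” below:

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

X, y = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)  # MLPs benefit from scaled features

model = MLPClassifier(hidden_layer_sizes=(100, 50), activation='relu',
                      solver='adam', max_iter=1000, random_state=0)
model.fit(X, y)

print(model.score(X, y))  # training accuracy
```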
How it works:
- Layers: Composed of an input layer, one or more hidden layers, and an output layer. Hidden layers process weighted inputs using activation functions.
- Activation Functions:
  - Hidden layers: Use non-linear activation functions like ReLU (`'relu'`) or tanh (`'tanh'`).
  - Output layer: MLPClassifier uses softmax for multi-class classification or the logistic sigmoid for binary classification; MLPRegressor uses a linear activation.
- Optimization: Parameters are learned through backpropagation using stochastic gradient descent (SGD) or adaptive optimizers like Adam.
Key Parameters:
- `hidden_layer_sizes`: Tuple specifying the number of neurons in each hidden layer. Example: `hidden_layer_sizes=(100, 50)` creates two hidden layers with 100 and 50 neurons respectively.
- `activation`: Activation function for the hidden layers: `'relu'` (default, Rectified Linear Unit), `'tanh'` (hyperbolic tangent), or `'logistic'` (sigmoid function).
- `solver`: Optimization algorithm: `'adam'` (default, Adaptive Moment Estimation; fast and robust), `'sgd'` (Stochastic Gradient Descent), or `'lbfgs'` (quasi-Newton optimization, good for small datasets).
- `alpha`: Regularization term to prevent overfitting (default: `0.0001`).
- `learning_rate`: Determines how weights are updated: `'constant'` (fixed learning rate) or `'adaptive'` (adjusts the learning rate based on performance).
- `max_iter`: Maximum number of iterations for training (default: `200`).
Conclusion
scikit-learn is a versatile library that offers robust tools for a wide range of machine learning tasks, from regression and classification to clustering, dimensionality reduction, and preprocessing. Its simplicity and efficiency make it a great choice for both beginners and advanced practitioners. Explore its extensive documentation to dive deeper into its capabilities!