Decision Trees and Ensemble Methods in Machine Learning

Decision trees are a fundamental supervised learning method that forms the backbone of many powerful ensemble models. In this interactive online workshop, participants will learn the theory behind decision trees, including how they split data, handle overfitting, and work with both classification and regression problems. We will also explore advanced tree-based methods, including Random Forests, Gradient Boosting, and XGBoost. Through coding exercises in Python (Scikit-Learn, XGBoost), attendees will build and fine-tune decision tree models on real-world datasets. By the end of the session, participants will have a solid understanding of decision trees, their strengths and weaknesses, and when to apply ensemble learning techniques. No prior machine learning experience is required, but basic Python and statistics knowledge is recommended.
This workshop is ideal for researchers, data scientists, and professionals looking to apply decision trees and ensemble methods to their datasets.
By the end of this workshop, participants will be able to:
- Understand the mechanics of decision trees, including entropy, information gain, and Gini impurity.
- Build and visualise decision tree models using Scikit-Learn.
- Evaluate decision tree performance and understand limitations, including overfitting and interpretability.
- Improve decision tree models through pruning, hyperparameter tuning, and feature selection.
- Explore ensemble methods, including Bagging (Random Forests) and Boosting (Gradient Boosting, XGBoost).
- Interpret decision trees and ensemble models using SHAP and feature importance analysis.
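To give a flavour of the first objective, the splitting criteria named above (entropy, information gain, Gini impurity) fit in a few lines of plain Python. This is a minimal sketch, not workshop material; the function names are illustrative.

```python
import math
from collections import Counter

def gini(labels):
    """Gini impurity: 1 minus the sum of squared class proportions."""
    n = len(labels)
    return 1.0 - sum((c / n) ** 2 for c in Counter(labels).values())

def entropy(labels):
    """Shannon entropy of the class distribution, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(parent, left, right):
    """Entropy reduction achieved by splitting parent into left/right."""
    n = len(parent)
    weighted = (len(left) / n) * entropy(left) + (len(right) / n) * entropy(right)
    return entropy(parent) - weighted

labels = ["a", "a", "b", "b"]
print(gini(labels))                                       # 0.5 for a 50/50 mix
print(information_gain(labels, ["a", "a"], ["b", "b"]))   # 1.0: a perfect split
```

A tree learner evaluates candidate splits with exactly these quantities and keeps the split with the highest gain (or lowest impurity).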
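The build-and-tune objectives look roughly like the following in Scikit-Learn. This sketch uses the bundled Iris dataset as a stand-in for the real-world datasets used in the workshop; `max_depth` and `ccp_alpha` are two of the pruning levers mentioned above.

```python
from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=42
)

# A shallow, cost-complexity-pruned tree to limit overfitting
clf = DecisionTreeClassifier(max_depth=3, ccp_alpha=0.01, random_state=42)
clf.fit(X_train, y_train)

print(f"test accuracy: {clf.score(X_test, y_test):.3f}")
# Text rendering of the fitted tree; plot_tree() gives the graphical version
print(export_text(clf, feature_names=load_iris().feature_names))
```

Printing the tree makes the interpretability point concrete: every prediction is a readable chain of threshold tests.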
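Bagging and Boosting, the two ensemble families listed above, can be contrasted in a short sketch. It uses Scikit-Learn's estimators and its bundled breast-cancer dataset purely for illustration; XGBoost's `XGBClassifier` exposes a compatible fit/predict interface.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Bagging: many full-grown trees on bootstrap samples, predictions averaged
rf = RandomForestClassifier(n_estimators=200, random_state=0)
# Boosting: shallow trees fitted sequentially to the previous trees' errors
gb = GradientBoostingClassifier(n_estimators=200, max_depth=2, random_state=0)

for name, model in [("random forest", rf), ("gradient boosting", gb)]:
    score = cross_val_score(model, X, y, cv=5).mean()
    print(f"{name}: {score:.3f}")

# Impurity-based feature importances, one of the interpretation tools covered
rf.fit(X, y)
top = sorted(zip(load_breast_cancer().feature_names, rf.feature_importances_),
             key=lambda t: -t[1])[:3]
print(top)
```

SHAP analysis, also listed in the objectives, refines this further by attributing each individual prediction to the input features rather than reporting one global ranking.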