This script was prepared for the first forester “hands-on” session, conducted for MI2.AI researchers and R enthusiasts. The main goals of this event are to assess how complete the forester package is, find bugs, check the clarity of the user interface, and gather ideas for new modules which could increase the package’s value.

To achieve these goals, during this “hands-on” we will give a short introduction to the forester package and show the participants how to use its functionalities. After the workshop part, the participants will have time to test the package on their own ML tasks prepared beforehand.

The outline of the workshop is presented below:

Setup

Task 1: Dataset.

Task 2: Basics of AutoML with the forester.

Task 3: The train mastery.

Task 4: Advanced preprocessing.

Task 5: How to understand the output?

Task 6: Explain the model.

Task 7: Report generating.

Task 8: Your own metric.

Task 9: Free testing.

Task 10: Discussion and suggestions.

For each task, we will define WHY? we are asking you to do this and WHAT? you are supposed to do.

Setup

In order to run the code, we have to install the forester package from our GitHub repository, load the package, and get to know the example dataset.

install.packages('DALEX')     # provides the lisbon dataset used below
install.packages('devtools')  # needed to install packages from GitHub
devtools::install_github("ModelOriented/forester")
library(DALEX)
library(forester)
data(lisbon)                  # real estate offers from Lisbon
View(lisbon)

To understand our dataset better, we will use the forester check_data() function, which was designed to give the user basic information about the dataset along with additional information describing its quality. As our target column is named ‘Price’, we pass it to check_data(). Typically, this function is executed inside train(), so here we have to assign its output to a variable ourselves.

check <- check_data(lisbon, 'Price', verbose = TRUE)

The check_data() report correctly detected the issues present in the data, such as the high correlation of a subset of the columns, the presence of duplicate and static columns, unequal quantile bins, and suspicious ID columns.
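If you prefer to explore the returned object programmatically rather than through the printed report, base R’s str() gives a quick overview (the exact fields depend on the forester version):

str(check, max.level = 1)  # top-level components of the check_data() output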

Task 1: Dataset

WHY?

The forester is not picky and accepts datasets in various formats and qualities (e.g. they may contain NA values). You will check how the forester sees the lisbon dataset.
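To see this tolerance in action, you can inject a missing value into a copy of the data and re-run the check. A small sketch; AreaNet is one of the numeric columns of the lisbon dataset shipped with DALEX:

lisbon_na <- lisbon
lisbon_na$AreaNet[1] <- NA                                 # introduce a single missing value
check_na <- check_data(lisbon_na, 'Price', verbose = TRUE)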

WHAT?

Based on the output of the check_data() function, write down the following information about the dataset.

  1. The number of columns.

  2. The names of the static columns.

  3. Their dominating values.

  4. The names of the duplicate columns.

  5. The number of NA values.

  6. The number of correlated pairs of numerical columns.

  7. The number of correlated pairs of categorical columns.

  8. The number of possible outliers.

  9. Is there an ID column?

Answers:
  1. 17
  2. Country, District, Municipality
  3. Portugal, Lisboa, Lisboa
  4. District - Municipality
  5. 0
  6. 5
  7. 1
  8. 19
  9. yes
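Some of these answers can be cross-checked directly with base R, independently of the forester:

ncol(lisbon)                                 # 17 columns
sum(is.na(lisbon))                           # 0 missing values
all(lisbon$District == lisbon$Municipality)  # TRUE, so the columns are duplicates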

Task 2: Basics of AutoML with the forester

WHY?

The first task is designed to introduce you to the main function which is called train(). This function conducts all AutoML processes, including data preprocessing, model training and evaluation.

WHAT?

In this task, you are asked to assign the output of train() to the variable output1. Instead of using the default parameters, we want to shorten the computation time and ask you to limit the number of Bayesian optimization iterations to 0, hide the text output of the function, and set the number of random search iterations to 5. In the end, print the ranked list from the output object.

Answers:
output1 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 0,
                 verbose = FALSE,
                 random_iter = 5)

output1$ranked_list

The output is a fairly complex object which is later used by other functions. The most important part, however, is the ranked_list, which evaluates all of the trained models. The list is sorted by the main metric for each task type (classification or regression). Can you guess which metric is the main one for the regression task?
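To get a feel for what else the output carries, list its components. A quick sketch; the exact field names may differ between forester versions:

names(output1)             # top-level fields of the train() output
head(output1$ranked_list)  # best models first, according to the main metric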

Task 3: The train mastery

WHY?

The second task aims to familiarise you with the train() function’s parameters even better. As plenty of the processes hidden behind train() are complex and can be tuned in some way, these options have to be accessible from the train() interface.

WHAT?

This time you are asked to experiment with the other parameters and compare what they do. Try to create an output with the ranger, xgboost, and decision_tree models only. You can then set the number of Bayesian optimization iterations to 5 and random search iterations to 3. What are the differences in the behaviour of these two hyperparameter tuning methods? Finally, you can turn on the text output of the function.

Answers:
output2 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 5,
                 engine = c('ranger', 'xgboost', 'decision_tree'),
                 verbose = TRUE,
                 random_iter = 3)

output2$ranked_list
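To compare the two tuning methods programmatically, one option is to split the ranked list by model name. The naming convention below (substrings marking random search and Bayesian models) is an assumption; print the list first to see how your forester version names the models:

rl <- output2$ranked_list
rl                              # check the actual model naming convention
rl[grepl('RS', rl$name), ]      # hypothetical: rows for random search models
rl[grepl('bayes', rl$name), ]   # hypothetical: rows for Bayesian-optimized models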

Task 4: Advanced preprocessing

WHY?

The check_data() function only suggests hypothetical problems with the dataset, whereas the train() function also offers an advanced preprocessing option that acts on them.

WHAT?
  1. Perform automatic preprocessing with the appropriate train() parameter. In order to reduce the computation time, set the number of Bayesian optimization iterations to 0 and random search iterations to 5. Compare the input with the data after preprocessing: check which columns have been removed and whether this matches the information from check_data().
Answers:
output3 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 0,
                 verbose = FALSE,
                 random_iter = 5,
                 advanced_preprocessing = TRUE)
  2. Which model achieves the best results? Is it a model with default parameters, one after Bayesian optimization, or one from random search?
output3$ranked_list[1, ]
  3. Compare the performance of the best model trained on the data with preprocessing (output3) and without it (output1). Have the results improved?
output3$ranked_list[1, ]
output1$ranked_list[1, ]
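To see exactly which columns the advanced preprocessing removed, you can compare the column sets of the input and the processed data. Where the processed data lives on the output object is an assumption here, so locate it first:

names(output3)  # locate the field holding the preprocessed data
# then, with the field found above (the name below is hypothetical):
# setdiff(colnames(lisbon), colnames(output3$data))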

Task 5: How to understand the output?

WHY?

As the forester team, our goal is not only to make the training process quick and simple but also to make it understandable even for newcomers. Our additional tools are one of the things that differentiate us from other AutoML frameworks. In this chapter, we will use these extra functions to show the full capabilities of the forester.

WHAT?

This time we will ask you to use the output created in the previous task and explore four functions provided by the forester:

  1. The first task is to save and load the output via the save() function.
  2. The second one is to create a subset of the original lisbon data frame and run the predict_new() function on it. This function enables the user to make predictions on data unseen by the models before. Can you guess why this function is needed?
  3. The third one is associated with a package well-known in MI2.AI - DALEX. You are asked to create an explainer via the explain() method and use some DALEX visualisation on the outcome. (Task 6)
  4. The last task is to create and read a report() outcome. (Task 7)
Answers:
save(train = output2,
     name = 'hands_on_save',
     return_name = TRUE)


new_lisbon <- lisbon[50:100, ]
predict_new(train_out = output2,
            data = new_lisbon)
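To inspect what predict_new() actually returns, assign it and look at its structure. A sketch, assuming the function returns one set of predictions per trained model:

preds <- predict_new(train_out = output2,
                     data = new_lisbon)
str(preds, max.level = 1)  # assumed shape: one element of predictions per model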

Task 6: Explain the model

WHY?

Explaining a model helps you understand it better. The forester does not include its own functions for explaining models but uses the ready-made DALEX package. Thanks to the simple explain() function, it creates an object adapted to the DALEX functions.

WHAT?

Create a DALEX explainer from the best model in output2. Then compute the feature importance with the DALEX::model_parts() function and present it in a chart (the plot() function).

Answers:
# create DALEX explainers for the selected model(s); explain() returns a list
exp_list <- forester::explain(models = output2$best_models[1],
                              test_data = output2$test_data,
                              y = output2$y)
exp <- exp_list[[1]]           # the explainer for the best model
p1 <- DALEX::model_parts(exp)  # permutation-based feature importance
plot(p1)
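Feature importance is not the only DALEX visualisation worth trying on this explainer. For example, the standard DALEX::model_performance() function summarises the residuals of the model:

mp <- DALEX::model_performance(exp)  # residual-based performance summary
plot(mp)                             # reverse cumulative distribution of |residuals|
plot(mp, geom = 'boxplot')           # the same residuals as a boxplot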

Task 7: Report generating

WHY?

For convenient storage and comparison of models’ training results, the forester offers a report-generating function containing the most important information about the data and the training. Thanks to this, it is possible to store the results of your work in standalone files and return to them at any time.

WHAT?

Generate a report based on output2. Then, based on the report, find out which model is the best and assess how similar the train and test set distributions are. Compare the feature importance chart with the one created in Task 6.

Answers:
report(train_output = output2,
       output_file  = 'hands_on_report')

Task 8: Your own metric

WHY?

The forester offers some of the most commonly used metrics for model evaluation; however, it does not preclude the use of a different, user-defined metric. This allows fuller use of the forester’s potential for the individual needs of the user.

WHAT?

Use the train() function with the default parameters but with your own metric added.
It’s a good idea to turn off Bayesian optimization and decrease the number of random search iterations to build the models faster.

Answers:

Example:

# custom metric: the largest absolute prediction error (smaller is better)
max_error <- function(predictions, observed) {
  return(max(abs(predictions - observed)))
}
output4 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 0,
                 random_iter = 3,
                 metric_function = max_error,
                 metric_function_name = 'Max Error',
                 metric_function_decreasing = FALSE)  # FALSE: smaller values rank higher

output4$ranked_list
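For a metric where larger values are better, the sorting direction flips. A sketch with R², under the assumption (consistent with the max_error example above) that metric_function_decreasing = TRUE ranks higher values first:

# R-squared: higher values are better
r_squared <- function(predictions, observed) {
  1 - sum((observed - predictions)^2) / sum((observed - mean(observed))^2)
}
output5 <- train(data = lisbon,
                 y = 'Price',
                 bayes_iter = 0,
                 random_iter = 3,
                 metric_function = r_squared,
                 metric_function_name = 'R2',
                 metric_function_decreasing = TRUE)

output5$ranked_list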

Task 9: Free testing

WHY?

As the main goal of this “hands-on” is to get valuable feedback from the workshop, we want you to test the forester on your own datasets prepared beforehand. This way we will be able to test the package’s features in new environments, which might reveal some bugs and issues.

WHAT?

During the last task, we ask you to ‘play’ with the package, delve into its documentation, and not only do everything we did before on new data but also try the things we didn’t talk about during the workshop.

Task 10: Discussion and suggestions

WHY?

We want to develop and improve the forester package, so your feedback is very valuable to us.

WHAT?

As the forester team, we ask you to join the discussion about which features should be improved and what you would change in or add to the forester. If you happen to find any bugs, please post them on our GitHub Issues page: https://github.com/ModelOriented/forester/issues