The primary aim of this hands-on exercise is to produce wall-to-wall maps of aboveground biomass (AGB; m³ ha⁻¹) for a selected area of interest (AOI) using AI-based modelling approaches. To achieve this, the entire processing workflow is implemented in R, providing a fully reproducible and open-source environment. Participants will work with a suite of R packages designed to process, analyze, and model airborne LiDAR data for forest and ecological applications.
The exercise follows a four-step workflow. First, ALS grid-based metrics are extracted at plot locations. Second, these metrics are filtered based on spatial autocorrelation and collinearity. Third, predictive models are built and tuned in R, here using linear regression, and their performance is compared to identify the best-performing model. Finally, the best model is applied to generate spatial prediction maps.
packages <- c(
"lidR",
"RCSF",
"raster",
"terra",
"sf",
"dplyr",
"ggplot2",
"plot3D",
"ranger",
"xgboost",
"usdm"
)
# install only missing packages
missing_pkgs <- packages[!packages %in% installed.packages()[, "Package"]]
if (length(missing_pkgs) > 0) {
install.packages(missing_pkgs, dependencies = TRUE)
}
# load packages quietly
suppressMessages(
lapply(packages, library, character.only = TRUE)
)

LiDAR point clouds are commonly distributed in several formats, including LAS, LAZ, PLY, and others. Here, users can load LiDAR datasets by selecting the desired file path. Once loaded, key LiDAR features such as the coordinate system, point density, number of points, memory usage, and acquisition type are printed to support informed processing decisions.
# ============================================================
# 1. LiDAR File Selection
# ============================================================
las_path <- file.choose() # Select LAS/LAZ file interactively
# ============================================================
# 2. LiDAR Data Import with Fallback Reader
# ============================================================
las <- tryCatch(
readALSLAS(las_path),
error = function(e) {
message("readALSLAS() failed. Using lidR::readLAS() instead.")
lidR::readLAS(las_path)
}
) # Read LiDAR file with robust error handling
# ============================================================
# 3. Input Data Validation
# ============================================================
if (is.empty(las)) stop("LAS file is empty") # Stop if no points are present
# ============================================================
# 4. Coordinate Reference System Assignment
# ============================================================
lidR::projection(las) <- sf::st_crs(32633) # Assign projected CRS (UTM zone 33N)
# ============================================================
# 5. LiDAR Object Inspection
# ============================================================
# ============================================================
# 6. Point Density and Point Spacing (Global Metrics)
# ============================================================
# Point density (pts/m²) represents the average number of LiDAR points
# per unit area over the entire dataset. By default, it considers ALL
# returns stored in the LAS file (first, intermediate, and last returns).
# Point spacing (m or cm) is the theoretical average distance between
# neighboring LiDAR points, derived from total area and total point count.
# It is a global summary metric and does NOT describe local point spacing
# or pulse spacing.
las
# cat("------------\n")
# A <- area(las)
# cat("Point spacing (cm):", sqrt(A/npoints(las))*100, "\n")
# cat("Point density (pts/m^2):", npoints(las)/A, "\n")

This step extracts a small circular subset of the LiDAR point cloud and displays it in 3D to visually illustrate canopy structure, height variation, and point distribution before further processing.
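The subsetting and 3D display described above can be sketched as follows; the circle centre and radius are illustrative placeholders (here the point-cloud centroid and 15 m), not values prescribed by the exercise.

```r
# Sketch: clip a circular subset of the cloud and view it in 3D.
# Centre coordinates below are hypothetical (point-cloud centroid).
x_c <- mean(las@data$X)
y_c <- mean(las@data$Y)
las_sub <- clip_circle(las, x_c, y_c, radius = 15) # 15 m radius subset
plot(las_sub, color = "Z") # interactive 3D view coloured by height
```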
Ground classification separates terrain points from vegetation and objects, essential for terrain models and normalising LiDAR heights. Ground points are identified using the Cloth Simulation Filter (CSF: Zhang et al., 2016), which simulates a virtual cloth over the inverted point cloud. Parameters control cloth resolution, rigidity, and convergence, adapting to terrain roughness. The classified point cloud offers a reliable ground surface for terrain and canopy analysis.
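The CSF classification step can be sketched as below, producing the classified cloud `las_g` used in the following chunk; the parameter values are illustrative defaults, not tuned settings from the exercise.

```r
# Sketch: classify ground points with the Cloth Simulation Filter (CSF).
# Parameter values are illustrative and should be adapted to the terrain.
las_g <- classify_ground(
  las,
  algorithm = csf(
    cloth_resolution = 0.5, # cloth grid size (m); finer tracks rough terrain
    rigidness = 2,          # cloth stiffness (1 = steep slopes, 3 = flat)
    class_threshold = 0.5   # max distance (m) from cloth to label ground
  )
)
```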
Raster surfaces from classified LiDAR points depict terrain, surface height, and canopy at 0.5 m resolution: • DTM interpolates ground points for terrain elevation without vegetation. • DSM captures maximum elevation, including vegetation and structures. • CHM subtracts DTM from DSM to show vegetation height and canopy structure.
# ============================================================
# 1. Ground Point Extraction
# ============================================================
ground <- filter_ground(las_g) # Select ground-classified LiDAR points
# ============================================================
# 2. Digital Terrain Model (DTM) Generation
# ============================================================
#DTM <- grid_terrain(ground, res=0.5, algorithm=tin()) # Interpolate ground surface
DTM <- grid_terrain(ground, res = 0.5, algorithm = knnidw(k = 10, p = 2))
# ============================================================
# 3. Digital Surface Model (DSM) Generation
# ============================================================
DSM <- grid_metrics(las_g, res=0.5, ~max(Z)) # Compute canopy top surface
# ============================================================
# 4. Canopy Height Model (CHM) Calculation
# ============================================================
CHM <- DSM - DTM # Derive canopy height above ground

This visualization displays the Digital Terrain Model (DTM), Digital Surface Model (DSM), and Canopy Height Model (CHM) side by side, allowing visual comparison of terrain elevation, surface structure, and vegetation height patterns across the study area.
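A minimal sketch of the side-by-side comparison, using base raster plotting on the three layers computed above:

```r
# Sketch: display DTM, DSM, and CHM in a single row for visual comparison
par(mfrow = c(1, 3))
plot(DTM, main = "DTM (m a.s.l.)")
plot(DSM, main = "DSM (m a.s.l.)")
plot(CHM, main = "CHM (m)")
par(mfrow = c(1, 1)) # reset plotting layout
```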
Height normalisation is the process of transforming raw LiDAR elevation values into heights relative to the ground surface. By removing the influence of terrain variability, this step enables vegetation and object heights to be compared consistently across areas with different elevations and slopes.
Several methods can be used for height normalisation, depending on data characteristics and terrain complexity. Common approaches include subtracting a rasterised DTM from point elevations, interpolating the ground surface with a triangulated irregular network (TIN), and point-based interpolators such as k-nearest-neighbour inverse-distance weighting (KNN-IDW) or kriging.
In this workflow, a KNN-IDW ground interpolation is adopted (a TIN-based alternative is shown commented out in the code below) because it provides a balanced and flexible representation of the terrain surface, particularly in areas with variable topography. This approach preserves local ground variations while ensuring robust normalisation for canopy structure analysis.
# ============================================================
# 1. Height Normalization of LiDAR Point Cloud
# ============================================================
#nlas <- normalize_height(las, tin()) # Normalize heights using TIN-based ground model
nlas <- normalize_height(las, knnidw(k = 10, p = 2))
# ============================================================
# 2. Filtering of Non-Physical Points
# ============================================================
nlas <- filter_poi(nlas, Z >= 0) # Remove negative height values

Field plots containing aboveground biomass (AGB) data are loaded from vector datasets (e.g., GeoJSON or GPKG) and reprojected to the same coordinate reference system as the LiDAR data (EPSG:32633). This ensures accurate spatial alignment between field measurements and LiDAR-derived products.
# ============================================================
# 1. Plot File Selection
# ============================================================
plots_path <- file.choose() # Select plot vector file interactively
# ============================================================
# 2. Plot Data Import
# ============================================================
plots <- st_read(plots_path, quiet=TRUE) # Read plot spatial data
# ============================================================
# 3. Plot Identifier Assignment
# ============================================================
plots$plot_id <- seq_len(nrow(plots)) # Create unique plot IDs
# ============================================================
# 4. Target Variable Type Conversion
# ============================================================
plots$AGB_m3_ha <- as.numeric(plots$AGB_m3_ha) # Ensure numeric AGB values

Before proceeding with further analyses, the spatial overlap between the Canopy Height Model (CHM) and the field plots is visually inspected. This step verifies correct plot positioning, scale, and coverage, helping to identify potential misalignments or projection issues early in the workflow.
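The visual check can be sketched in two lines, overlaying the plot geometries on the CHM computed earlier:

```r
# Sketch: overlay field-plot polygons on the CHM to verify spatial alignment
plot(CHM, main = "CHM with field plots")
plot(sf::st_geometry(plots), add = TRUE, border = "red", lwd = 2)
```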
LiDAR point clouds provide detailed 3D data via X, Y, Z coordinates and return intensity, capturing the structural complexity of forests, shrubs, and other vegetation (Alvites et al., 2021, 2022, 2025). From these point clouds, vegetation structure is summarised using statistical metrics computed over spatial units such as field plots. Each plot is used to clip the normalised LiDAR point cloud, and about 69 plot/crown structural metrics are computed using rLiDAR (Silva et al., 2016), including height percentiles, intensity statistics, vertical-distribution measures, and range metrics. These variables describe canopy structure and vertical complexity. All metrics are linked to their plots for comparison with biomass measurements. The working hypothesis is that denser forests show higher upper-height metrics than open forests, making these features crucial for modelling.
# ============================================================
# 1. Crown Metrics Extraction Function (Per Plot)
# ============================================================
get_plot_crownmetrics <- function(nlas, plot_sf_row, min_pts = 30){
pid <- plot_sf_row$plot_id # Store plot identifier
las_k <- clip_roi(nlas, plot_sf_row)
if (lidR::is.empty(las_k)) return(NULL) # Skip empty LiDAR subsets
d <- as.data.frame(las_k@data[, c("X","Y","Z","Intensity")])
d <- d[is.finite(d$Z) & d$Z > 0, , drop = FALSE]
if (nrow(d) < min_pts) return(NULL) # Enforce minimum point density
xyziId <- cbind(d$X, d$Y, d$Z, d$Intensity, rep(pid, nrow(d)))
m <- CrownMetrics(xyziId) # Compute crown-level structural metrics
names(m)[1] <- "plot_id"
m
}
# ============================================================
# 2. Crown Metrics Computation for All Plots
# ============================================================
metrics_list <- suppressWarnings(lapply(seq_len(nrow(plots)), function(k){
get_plot_crownmetrics(nlas, plots[k, ], min_pts = 30)
})) # Apply crown metric extraction to each plot
# ============================================================
# 3. Combine Per-Plot Metrics into One Table
# ============================================================
drid_metric_plots <- dplyr::bind_rows(metrics_list) # Stack plot metrics; NULL entries are dropped
LiDAR-derived plot metrics are harmonised with field-based biomass measurements to create a single dataset. Plot geometries are removed, identifiers standardised, and variables converted to numeric format for consistency. The metrics are joined with biomass values using plot identifiers, excluding records with missing or invalid targets. This prepares a clean, aligned dataset for machine learning analyses.
# ============================================================
# 1. Target Variable Extraction from Plot Data
# ============================================================
targets <- st_drop_geometry(plots) %>%
dplyr::select(plot_id, AGB_m3_ha) # Extract plot-level AGB targets
# ============================================================
# 2. LiDAR Metric Table Cleaning
# ============================================================
drid_metric_plots <- drid_metric_plots %>%
mutate(plot_id = as.integer(unlist(plot_id))) %>%
mutate(across(-plot_id, ~as.numeric(.))) # Ensure numeric LiDAR metrics
# ============================================================
# 3. Target Data Type Harmonization
# ============================================================
targets <- targets %>%
mutate(plot_id = as.integer(plot_id),
AGB_m3_ha = as.numeric(AGB_m3_ha)) # Align target data types
# ============================================================
# 4. Merge LiDAR Metrics with Field Targets
# ============================================================
dat_ml <- drid_metric_plots %>%
left_join(targets, by = "plot_id") %>%
filter(is.finite(AGB_m3_ha)) # Join metrics and retain valid targets
# ============================================================
# 5. Data Sufficiency Check for Modelling
# ============================================================
if (nrow(dat_ml) < 5) stop("Too few plots with metrics/target to train a model.") # Minimum sample check

In this step, LiDAR-derived plot metrics are used as predictors to model plot-level aboveground biomass (AGB). The dataset is first restricted to numeric predictors, then cleaned of missing or non-finite values, and subsequently split into 70% training and 30% test sets to evaluate model performance on independent data.
A multiple linear regression (LM) model is fitted using the training data. Prior to model fitting, predictor multicollinearity is addressed through variance inflation factor (VIF)-based variable selection, ensuring a stable and interpretable set of explanatory variables. This approach reduces redundancy among LiDAR metrics while retaining the most informative predictors.
Predictions are generated for both the training and test datasets, and model performance is evaluated using complementary accuracy metrics to assess goodness of fit, prediction error magnitude, and generalisation capability. The final linear regression model, trained on all available plot data using the selected predictors, is then used for wall-to-wall biomass mapping in subsequent steps of the workflow.
# -----------------------------
# Metrics helper
# -----------------------------
eval_metrics <- function(obs, pred){
ok <- is.finite(obs) & is.finite(pred)
obs <- obs[ok]; pred <- pred[ok]
rmse <- sqrt(mean((pred - obs)^2))
mae <- mean(abs(pred - obs))
md <- mean(pred - obs)
r2 <- 1 - sum((obs - pred)^2) / sum((obs - mean(obs))^2)
rmseR <- rmse / mean(obs) * 100
maeR <- mae / mean(obs) * 100
data.frame(R2=r2, RMSE=rmse, rmseR=rmseR, MD=md, MAE=mae, maeR=maeR)
}
# ============================================================
# 1. Reproducibility
# ============================================================
set.seed(42)
# ============================================================
# 2. Target and numeric predictors
# ============================================================
target <- "AGB_m3_ha"
features0 <- setdiff(names(dat_ml), c("plot_id", target))
features0 <- features0[sapply(dat_ml[, features0, drop = FALSE], is.numeric)]
# ============================================================
# 3. Clean dataset
# ============================================================
dat_use <- dat_ml[, c("plot_id", target, features0)]
dat_use <- dat_use %>% filter(is.finite(.data[[target]]))
dat_use <- na.omit(dat_use)
n <- nrow(dat_use)
if (n < 6) stop("Too few plots for linear regression.")
# ============================================================
# 4. 70/30 Train–Test split
# ============================================================
idx <- sample(seq_len(n))
n_train <- max(3, floor(0.70 * n))
train_id <- idx[1:n_train]
test_id <- setdiff(idx, train_id)
train <- dat_use[train_id, ]
test <- dat_use[test_id, ]
# ============================================================
# 5. VIFSTEP filtering (TRAIN only) — robust to 0-variance / NaN
# ============================================================
vif_threshold <- 7
METRIC <- train %>% dplyr::select(TotalReturns:I99TH) # Use TRAIN rows only, metric columns
# remove bad columns (all NA / zero variance)
METRIC <- METRIC[, sapply(METRIC, function(x){
x <- x[is.finite(x)]
length(x) > 1 && sd(x) > 0
}), drop = FALSE]
# scale + keep complete cases
METRIC_A <- as.data.frame(scale(METRIC))
METRIC_A <- METRIC_A[complete.cases(METRIC_A), , drop = FALSE]
# vifstep
COLLINEARITY <- vifstep(METRIC_A, th = vif_threshold)
features <- COLLINEARITY@results$Variables
#
# ============================================================
# 6. Fit LM with VIFSTEP-filtered predictors
# ============================================================
final_formula <- as.formula(paste(target, "~", paste(features, collapse = " + ")))
lm_fit <- lm(final_formula, data = train)
# ============================================================
# 7. Predictions
# ============================================================
pred_train <- predict(lm_fit, newdata = train)
pred_test <- predict(lm_fit, newdata = test)
df_lm <- bind_rows(
data.frame(model = "LM_VIFSTEP", part = "train", obs = train[[target]], pred = pred_train),
data.frame(model = "LM_VIFSTEP", part = "test", obs = test[[target]], pred = pred_test)
)
# ============================================================
# 8. Metrics
# ============================================================
met_train <- eval_metrics(train[[target]], pred_train)
met_test <- eval_metrics(test[[target]], pred_test)
out <- list(
preds = df_lm,
metrics = bind_rows(
cbind(model = "LM_VIFSTEP", part = "train", met_train),
cbind(model = "LM_VIFSTEP", part = "test", met_test)
),
features = features,
model = lm_fit
)

Model performance is evaluated by comparing predicted and observed plot-level aboveground biomass (AGB) for both the training and independent test datasets. Accuracy is quantified using complementary metrics that describe goodness of fit, prediction error magnitude, and systematic bias. Specifically, we report the coefficient of determination (R²; Eq. 1), the Root Mean Square Error in absolute and relative form (RMSE; Eqs. 2–3), the mean difference or bias (MD; Eq. 4), and the Mean Absolute Error in absolute and relative form (MAE; Eqs. 5–6), with the relative metrics normalised by the mean observed AGB. Using both absolute and relative metrics provides a robust assessment of model accuracy and generalisation performance, supporting objective comparison across candidate models.
\[ R^2 = 1 - \frac{\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2}{\sum_{i=1}^{n}\left(y_i - \bar{y}\right)^2} \qquad \text{(Eq1)} \]
\[ RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right)^2} \qquad \text{(Eq2)} \]
\[ RMSE\% = \frac{RMSE}{\bar{y}} \times 100 \qquad \text{(Eq3)} \]
\[ MD = \frac{1}{n}\sum_{i=1}^{n}\left(\hat{y}_i - y_i\right) \qquad \text{(Eq4)} \]
\[ MAE = \frac{1}{n}\sum_{i=1}^{n}\left|\hat{y}_i - y_i\right| \qquad \text{(Eq5)} \]
\[ MAE\% = \frac{MAE}{\bar{y}} \times 100 \qquad \text{(Eq6)} \]
## model part R2 RMSE rmseR MD MAE maeR
## 1 LM_VIFSTEP train 0.6482662 36.63022 23.95285 1.456613e-13 28.40123 18.57184
## 2 LM_VIFSTEP test 0.6706846 57.47380 33.89225 -4.304927e+01 48.75735 28.75217
In this step, LiDAR-derived metrics are computed across the entire study area using a regular grid. The same 69 structural and intensity metrics used in the plot-level modelling stage are calculated at the pixel level to ensure consistency between training and prediction. Metric extraction uses the pixel_metrics() function from the lidR package, which summarises the normalised point cloud within each grid cell. The resulting raster layers represent spatially continuous maps of canopy structure and serve as predictor variables for wall-to-wall biomass estimation.
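A minimal sketch of the grid-level extraction with `pixel_metrics()`; for brevity it uses lidR's built-in `.stdmetrics` set as a stand-in for the full 69-metric rLiDAR set described above, which would require a custom metric function.

```r
# Sketch: compute wall-to-wall metrics from the normalised cloud `nlas`
# at 23 m resolution (matching the plot size). .stdmetrics is lidR's
# built-in metric set, used here as a placeholder for the rLiDAR metrics.
grid_metrics_map <- pixel_metrics(nlas, .stdmetrics, res = 23)
names(grid_metrics_map) # one raster layer per metric
```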
The aboveground biomass (AGB) map is generated by applying the selected model to grid-level LiDAR metrics. Biomass estimates are produced at the same spatial resolution as the field plots (23 m × 23 m) to maintain consistency between observations and predictions. Final AGB values are expressed in m³/ha or Mg/ha, resulting in a spatially explicit map of forest biomass across the study area.
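The mapping step might be sketched as below, applying the fitted LM to the metric raster stack; it assumes the raster layer names have been reconciled with the predictor names selected during model fitting.

```r
# Sketch: predict AGB for every 23 m pixel from the metric raster stack.
# Assumes names(grid_metrics_map) match the predictors used in lm_fit.
pred_agb_lm <- terra::predict(grid_metrics_map, lm_fit, na.rm = TRUE)
names(pred_agb_lm) <- "AGB_m3_ha"
plot(pred_agb_lm, main = "Predicted AGB (m3/ha)")
```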
## Export outputs
This step exports the main workshop deliverables to a single output folder: the final trained model (.rds), the accuracy table (.csv), the observed–predicted scatter figure (.png), and the raster products (DTM, DSM, CHM, and the predicted AGB map as GeoTIFFs) for further analysis and sharing.
out_dir <- file.path(dirname(las_path), "ForestSAT2026_03_Exercise")
dir.create(out_dir, showWarnings=FALSE)
saveRDS(lm_fit, file.path(out_dir, "final_model_lm.rds"))
write.csv(out$metrics, file.path(out_dir, "model_accuracy_metrics.csv"), row.names = FALSE)
writeRaster(DTM, file.path(out_dir,"DTM_0p5m.tif"), overwrite=TRUE)
writeRaster(DSM, file.path(out_dir,"DSM_0p5m.tif"), overwrite=TRUE)
writeRaster(CHM, file.path(out_dir,"CHM_0p5m.tif"), overwrite=TRUE)
writeRaster(pred_agb_lm, file.path(out_dir,"AGB_23m_LM.tif"), overwrite=TRUE)

Linear regression provides a direct and rapid model for AGB mapping.
VIF (Variance Inflation Factor)-based filtering is essential for handling collinearity among LiDAR metrics.
Good validation matters more than model complexity.