The Context Guided Network (CGNet) is a lightweight and efficient network for semantic segmentation. It is built around the Context Guided (CG) block, which learns a joint feature from both the local feature and its surrounding context, and further refines this joint feature with the global context. Based on the CG block, CGNet captures contextual information in all stages of the network and is specifically tailored for increasing segmentation accuracy.

This page describes how to apply CGNet to the ClimateNet dataset to enable ultrafast spatio-temporal segmentation and tracking on large climate data sets. The method details can be found here.

Software Environment

NCI provides the software environment below under project dk92 to run CGNet on the ClimateNet dataset on Gadi. You need to join project dk92 through Mancini to load the ClimateNet software environment.

module use /g/data/dk92/apps/Modules/modulefiles
module load climatenet/24.02

You also need to join project wb00 to access the ClimateNet dataset, and project ob53 to access the BARRA2 dataset, in order to run the following notebooks.

Launch JupyterLab session

Users can run the training, evaluation and prediction notebooks in an ARE JupyterLab session.

  1. Go to https://are.nci.org.au/ and log in. 
  2. In the JupyterLab launch Dashboard, use the following information:
    Walltime (hours): 4
    Queue: gpuvolta
    Compute Size: 1gpu

    Project: your own project (such as ab123).
    Storage: gdata/dk92+gdata/wb00+gdata/ob53+gdata/ab123
  3. In the Advanced options section: 
    Module directories: /g/data/dk92/apps/Modules/modulefiles 
    Modules: climatenet/24.02
  4. Click the 'Launch' button.

Training in the command line

You need to copy the whole directory "/g/data/dk92/apps/climatenet/24.02/notebooks/ClimateNet" to your working directory. It contains all the necessary patches for you to run the following commands directly.

Alternatively, you can clone the ClimateNet scripts from the GitHub repository to your working directory on Gadi.

Enter the working directory, and you will see the following files.

Open the example.py file, set train_path and inference_path as below, and save it.

train_path = '/g/data/wb00/ClimateNet/v1/2021/data'
inference_path = '/g/data/wb00/ClimateNet/v1/2021/data/test'
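Before launching the training script, it can help to confirm that the dataset paths are readable from your session. A minimal sketch (the helper name `check_paths` is illustrative, not part of ClimateNet):

```python
import os

def check_paths(*paths):
    """Return the subset of paths that do not exist or are not directories."""
    return [p for p in paths if not os.path.isdir(p)]

train_path = '/g/data/wb00/ClimateNet/v1/2021/data'
inference_path = '/g/data/wb00/ClimateNet/v1/2021/data/test'

missing = check_paths(train_path, inference_path)
if missing:
    # Usually caused by missing gdata storage flags or project membership
    print('Inaccessible paths:', missing)
```

If a path is reported as inaccessible, revisit the Storage field of your ARE session and your project memberships (wb00, ob53).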

Open a Terminal, and then you can run it directly:

climatenet:24.02 >python example.py 
Epoch 1:
Loss: 0.813947856426239:   6%|████████▍                                                                                                                                    | 6/100 [00:06<01:10,  1.33it/s]


Training, evaluating and predicting via Notebook

You can interactively visualise the inference results in a Jupyter notebook. If you copied "/g/data/dk92/apps/climatenet/24.02/notebooks/ClimateNet" to your working directory, you can run the notebook named "train_eva_pred.ipynb".

You can also open an empty notebook in the same directory as the file "example.py", and fill in the following contents across multiple cells.

The notebook proceeds through four stages: Initialisation, Train, Evaluate and Predict. Each stage below shows the script to run, its output, and a description.
Initialisation 
import os
import nci_ipynb
os.chdir(nci_ipynb.dir())
print(os.getcwd())
YOUR_CLONED_CLIMATENET_DIRECTORY

In an ARE JupyterLab session, a notebook always starts from the home directory.

The nci_ipynb package automatically finds the current working directory where the notebook resides.

Then you can use the os.chdir() method to enter that directory.
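If you are adapting the notebook to run outside an ARE session where nci_ipynb may not be installed, the same pattern can be approximated with a guarded import. This fallback is a sketch, not part of the climatenet module:

```python
import os

try:
    import nci_ipynb
    notebook_dir = nci_ipynb.dir()   # directory where the notebook resides
except ImportError:
    notebook_dir = os.getcwd()       # fallback: assume we are already there

os.chdir(notebook_dir)
print(os.getcwd())
```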

from climatenet.utils.data import ClimateDatasetLabeled, ClimateDataset
from climatenet.models import CGNet
from climatenet.utils.utils import Config
from climatenet.track_events import track_events
from climatenet.analyze_events import analyze_events
from climatenet.visualize_events import visualize_events
import xarray as xr
from os import path
import matplotlib.animation
import matplotlib.pyplot as plt
import numpy as np
import cartopy.crs as ccrs
import cartopy.feature as cfeature

config = Config('config.json')
cgnet = CGNet(config)

Load the necessary packages, then load the model with the configuration from the file "config.json".

You can modify config.json to revise any settings of the model.
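Since config.json is plain JSON, its settings can be inspected and edited programmatically as well as by hand. The keys below are illustrative only; the authoritative schema is the config.json shipped with ClimateNet:

```python
import json

# Illustrative config only -- inspect the config.json shipped with
# ClimateNet for the real keys and their meanings.
sample = '''
{
    "lr": 0.001,
    "epochs": 2,
    "train_batch_size": 4
}
'''
settings = json.loads(sample)
settings["epochs"] = 10  # e.g. train for longer
print(json.dumps(settings, indent=4))
```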

Train
train_path = '/g/data/wb00/ClimateNet/v1/2021/data'
train = ClimateDatasetLabeled(path.join(train_path, 'train'), config)
cgnet.train(train)
Epoch 1:
Loss: 0.7853733897209167: 100%|██████████| 397/397 [01:06<00:00,  5.99it/s]
Epoch stats:
[[3.20229173e+08 5.52423000e+05 8.92180900e+06]
 [6.38198000e+05 9.00008000e+05 8.43910000e+04]
 [5.34953200e+06 5.27290000e+04 1.45119290e+07]]
IOUs:  [0.95393992 0.40399883 0.50178884] , mean:  0.6199091973475489
Epoch 2:
Loss: 0.7715422511100769: 100%|██████████| 397/397 [01:01<00:00,  6.44it/s]
Epoch stats:
[[3.20229173e+08 5.52423000e+05 8.92180900e+06]
 [6.38198000e+05 9.00008000e+05 8.43910000e+04]
 [5.34953200e+06 5.27290000e+04 1.45119290e+07]]
IOUs:  [0.95393992 0.40399883 0.50178884] , mean:  0.6199091973475489
...

Set train_path and load the labelled training dataset.

Then train the model on the dataset.

The IOU (Intersection-over-Union) metric, which represents the agreement between two masks, is used to measure model performance.

For each epoch, the IOUs are printed out.
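As a concrete illustration of the metric, the IOU of two segmentation masks can be computed with NumPy (the `iou` helper below is for illustration, not part of the climatenet module):

```python
import numpy as np

def iou(pred, target):
    """Intersection-over-Union of two boolean masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    union = np.logical_or(pred, target).sum()
    if union == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return np.logical_and(pred, target).sum() / union

a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(iou(a, b))  # intersection 2, union 4 -> 0.5
```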

Evaluate

inference_path = '/g/data/wb00/ClimateNet/v1/2021/data/test'
evaluate = ClimateDatasetLabeled(inference_path, config)
cgnet.load_model('/g/data/wb00/ClimateNet/v1/2021/model/new')
cgnet.evaluate(evaluate)
100%|██████████| 8/8 [00:05<00:00,  1.44it/s]
Evaluation stats:
[[4.8972318e+07 8.8410000e+04 1.5935740e+06]
 [1.6244900e+05 1.2836200e+05 5.7830000e+03]
 [1.1173280e+06 8.5810000e+03 1.8920910e+06]]
IOUs:  [0.94297076 0.3261354  0.40977793] , mean:  0.559628029629729

Set inference_path and load the dataset for evaluation. Note that the loaded dataset type is "ClimateDatasetLabeled", which contains labels.

Load the checkpoint files provided in /g/data/wb00 into the model and evaluate the test dataset.

The output contains IOUs for the background, tropical cyclone (TC) and atmospheric river (AR) classes. A mean IOU is calculated across all three.
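The per-class IOUs can be reproduced directly from the printed confusion matrix (rows as true classes and columns as predicted classes is assumed here; the IOU formula is symmetric in that convention):

```python
import numpy as np

# Confusion matrix from the evaluation output above
cm = np.array([
    [4.8972318e+07, 8.8410000e+04, 1.5935740e+06],
    [1.6244900e+05, 1.2836200e+05, 5.7830000e+03],
    [1.1173280e+06, 8.5810000e+03, 1.8920910e+06],
])

tp = np.diag(cm)  # correctly classified pixels per class
# IOU_c = TP / (row total + column total - TP)
ious = tp / (cm.sum(axis=0) + cm.sum(axis=1) - tp)
print(ious, ious.mean())  # ~[0.943, 0.326, 0.410], mean ~0.5596
```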

Predict
train_path = '/g/data/wb00/ClimateNet/v1/2021/data'
pred_test = ClimateDataset(path.join(train_path, 'test'), config)
cgnet.load_model('/g/data/wb00/ClimateNet/v1/2021/models/new')

Load the test dataset for prediction. The dataset type is "ClimateDataset", which contains only input features, as labels are not needed for prediction.

Load the checkpoint files provided in /g/data/wb00 into the model.

class_masks = cgnet.predict(pred_test) # masks with 1==TC, 2==AR
100%|██████████| 8/8 [00:04<00:00,  1.76it/s]
Conduct the prediction to classify TC and AR.
event_masks = track_events(class_masks) # masks with event IDs
identifying connected components..
tracking components across time..
100%|██████████| 61/61 [00:52<00:00,  1.15it/s]
num TCs: 4
num ARs: 15

Identify connected events, track them across time, and classify them into TCs and ARs.

There are 4 TCs and 15 ARs in the inference dataset.
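The event counts can also be recovered from the event masks themselves, since track_events assigns each event a distinct ID. A sketch (the `count_events` helper is illustrative, assuming event IDs are positive integers and 0 is background):

```python
import numpy as np

def count_events(event_mask):
    """Count distinct events (nonzero IDs) in a labelled mask."""
    ids = np.unique(event_mask)
    return int(ids[ids != 0].size)

mask = np.array([
    [0, 1, 1, 0],
    [0, 0, 0, 3],
    [2, 2, 0, 3],
])
print(count_events(mask))  # 3 distinct events
```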

analyze_events(event_masks, class_masks, 'results/')


calculating centroids..
extracting event types..
calculating genesis and termination frequencies..
generating histograms..
generating frequency maps..
Plot the analysis results into figures.


