The BIRRP program computes magnetotelluric and geomagnetic depth sounding response functions using a bounded influence, remote reference method, together with a jackknife implementation to obtain error estimates on the results. It incorporates a method for controlling leverage points (i.e., magnetic field values that are statistically anomalous), implements two-stage processing to enable removal of outliers in both the local electric and magnetic field variables, and allows multiple remote reference sites to be used.
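As a rough sketch of the quantity being estimated (this is the standard remote reference formulation from the MT literature, not BIRRP's specific bounded influence weighting): at each frequency the horizontal electric and magnetic field spectra are related by the impedance tensor, and a remote site R supplies cross-spectra that suppress bias from noise in the local channels:

E(\omega) = Z(\omega)\,B(\omega), \qquad \hat{Z}(\omega) = \langle E\,R^{*} \rangle \, \langle B\,R^{*} \rangle^{-1}

where R denotes the remote magnetic channels, * the conjugate transpose, and the angle brackets averages over data segments; local noise that is uncorrelated with R averages out of the estimate.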

For more information about BIRRP and links to the relevant literature, please visit this page.

Running BIRRP on the ARE VDI App

Let's first open a terminal and load the birrp module:

$ module use /g/data/up99/modulefiles

$ module load birrp/5.3.2_hlmt_vdi
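Before continuing, you can optionally confirm that the module has put the BIRRP executable (called as birrp-5.3.2 in the scripts below) on your PATH:

$ which birrp-5.3.2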

A test dataset has been created with 16 synthetic MT test sites (B1-B16) and associated E/B time-series ready for BIRRP processing:

$ ls /g/data/up99/sandbox/birrp_test/loop_test/ 

Note that this test mainly focuses on computational performance rather than on the results of the BIRRP processing.

BIRRP script files have been created for each site. For example:

$ less /g/data/up99/sandbox/birrp_test/loop_test/B1/CP1.script 
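Before processing, it can be worth confirming that every site directory contains its script. The loop below is a minimal check, not part of the test dataset; it simply assumes the layout described above (one CP1.script per B* directory):

for f in /g/data/up99/sandbox/birrp_test/loop_test/B*; do
    # Flag any site directory that is missing its BIRRP script
    [ -f "$f/CP1.script" ] || echo "Missing CP1.script in $f"
done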

Let's write a simple loop to run BIRRP serially on each of our 16 MT sites:

#!/bin/bash

for f in /g/data/up99/sandbox/birrp_test/loop_test/B*; do
    [ -d "$f" ] && cd "$f" && echo "Entering $f and running BIRRP" && birrp-5.3.2 < CP1.script && cd ..
done

An example script is available here:

$ less /g/data/up99/sandbox/birrp_test/loop_test/serial_VDI.sh

To run this example script:

$ source /g/data/up99/sandbox/birrp_test/loop_test/serial_VDI.sh
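If you want to measure the run time yourself, one option is to prefix the call with bash's time keyword (timings will vary with the instance size and load):

$ time source /g/data/up99/sandbox/birrp_test/loop_test/serial_VDI.sh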

This script took approximately 27 minutes to complete. Now let's slightly modify the script to take advantage of an XLarge (16 core) OOD VDI instance by running each site's BIRRP process in the background:

#!/bin/bash

for f in /g/data/up99/sandbox/birrp_test/loop_test/B*; do
    [ -d "$f" ] && cd "$f" && echo "Entering $f and running BIRRP" && birrp-5.3.2 < CP1.script && cd .. &
done
wait

To run this updated script:

$ source /g/data/up99/sandbox/birrp_test/loop_test/parallel_VDI.sh

This script took 3 minutes and 32 seconds to complete.

Running BIRRP on Gadi

Now let's try processing the 16 MT sites on Gadi using 16 CPUs and 32 GB of memory. 

First, log in to Gadi and load the BIRRP module:

$ module use /g/data/up99/modulefiles

$ module load birrp/5.3.2_hlmt

Next, run:

$ less /g/data/up99/sandbox/birrp_test/loop_test/parallel_gadi.sh

to view the following script:

#!/bin/bash

#PBS -P <project_code>
#PBS -q normal
#PBS -l walltime=00:05:00
#PBS -l mem=32GB
#PBS -l jobfs=1GB
#PBS -l ncpus=16
#PBS -l software=birrp-5.3.2
#PBS -l wd
#PBS -l storage=gdata/up99

module purge
module use /g/data/up99/modulefiles
module load birrp/5.3.2_hlmt 

for f in /g/data/up99/sandbox/birrp_test/loop_test/B*; do
    [ -d "$f" ] && cd "$f" && echo "Entering $f and running BIRRP" && birrp-5.3.2 < CP1.script && cd .. &
done
wait
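To submit this job, copy the script to a directory you can write to, replace <project_code> with your own NCI project code, and pass it to qsub:

$ qsub parallel_gadi.sh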

This script took 1 minute and 31 seconds to complete.

Here we can see that by running our BIRRP loop on Gadi across multiple CPUs, we can significantly reduce our processing time. In theory, we could run thousands of BIRRP processes at once on Gadi.
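If you scale this up to many more sites than CPUs, one refinement is to cap how many BIRRP processes run at once. The loop below is a minimal sketch using plain bash job control; it assumes bash 4.3 or later (for wait -n), reuses the directory layout above, and is not one of the provided test scripts:

#!/bin/bash

# Cap concurrency at the number of CPUs requested for the job
# (PBS sets PBS_NCPUS inside a Gadi job; fall back to 16 otherwise)
max_jobs=${PBS_NCPUS:-16}

for f in /g/data/up99/sandbox/birrp_test/loop_test/B*; do
    # If the cap is reached, wait for any one background job to finish
    while [ "$(jobs -rp | wc -l)" -ge "$max_jobs" ]; do
        wait -n
    done
    # Run each site in a subshell so the cd does not affect the parent shell
    [ -d "$f" ] && ( cd "$f" && birrp-5.3.2 < CP1.script ) &
done
wait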
