Video tutorials added

To help new users get started with the gateway, we have begun a series of video tutorials showing how to use it. They are incorporated throughout the site, and you can also find them in our YouTube playlist!

Singularity on COSMIC2

We are really excited to start using Singularity containers on COSMIC². For those who don’t know, Singularity (like Docker before it) is a way to ‘containerize’ your software: you design and build a custom operating system image with all of the correct software dependencies baked in.

We have been wanting to do this for a little while, but were finally pushed into it when we started incorporating crYOLO into our software platform. In short, because crYOLO runs on the deep-learning library TensorFlow, we needed to be running a newer Linux release, CentOS 7. However, SDSC Comet is still running CentOS 6, which left us at an impasse for running this software.

Enter Singularity: these containers allowed us to install Ubuntu and crYOLO into a single ‘image’, a standalone environment capable of running crYOLO anywhere. With this new image, all we have to do to run crYOLO on any CPU machine is type:

$ singularity exec sdsc-comet-ubuntu-cryolo-cpu.simg cryolo_predict.py -c config.json -w gmodel_phosnet_20181221_loss0037.h5 -i micrographs/ -o micrographs/cryolo -t 0.2

That single command is a big step forward for anyone who has tried to get this software stack running before!

Since the built images are ~7 GB, we can’t share them directly on GitHub, so instead we are sharing the definition files. Please take a look and try it out if you are so inclined!
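For readers who have not written one before, here is a minimal sketch of what a Singularity definition file for a CPU-only crYOLO image can look like. This is an illustration only, not our actual recipe: the base image, package names, and pip spec below are assumptions, so please check the definition files in our GitHub repository for the real configuration.

# cryolo-cpu.def – illustrative sketch, not the actual COSMIC² definition file
Bootstrap: docker
From: ubuntu:16.04

%post
    # install Python 3 and pip inside the container
    apt-get update && apt-get install -y python3 python3-pip
    # install crYOLO with its CPU TensorFlow dependency (package spec is an assumption)
    pip3 install 'cryolo[cpu]'

%environment
    export LC_ALL=C

%runscript
    exec "$@"

Building an image from a definition file requires root on the build machine, for example:

$ sudo singularity build sdsc-comet-ubuntu-cryolo-cpu.simg cryolo-cpu.def

The resulting .simg file can then be copied to Comet and used with singularity exec as shown above.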

Benchmarking RELION2 GPU-accelerated jobs on Comet-GPU nodes

Below you will find the results of running the standard RELION2 benchmark on a number of different node configurations, weighing speed against ‘cost’ (in service units, SUs).

Optimal configuration for COSMIC² users: 8 x K80 GPUs (spread across 2 nodes). Compared to 4 x K80s, it is nearly twice as fast (2 hr 2 min vs. 3 hr 42 min) for only one extra SU; see the quick comparison below the table.

Fastest analysis: 12 x P100 GPUs (which also made it the most ‘expensive’ at 23 SUs)

RELION benchmarking test set (link)

All runs used the same job and data:

  • Job type: RELION 3D Classification – v2.1.b1; 25 iterations
  • Data info: 105,247 particles; 360 x 360 pixels

  Compute type (GPU)    Elapsed time    SUs
  4 x P100              3 hr 14 min     19.5
  8 x P100              1 hr 43 min     21
  12 x P100             1 hr 25 min     23
  4 x K80               3 hr 42 min     15
  8 x K80               2 hr 2 min      16
  12 x K80              1 hr 42 min     20.4
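To put the speed-versus-cost trade-off in concrete numbers, here is a quick back-of-the-envelope check using the K80 figures from the table (times converted to minutes); the bc invocations are just one way to do the arithmetic:

# 4 x K80: 3 hr 42 min = 222 min at 15 SUs
# 8 x K80: 2 hr  2 min = 122 min at 16 SUs
$ echo "scale=3; 222/122" | bc    # ~1.8x faster
$ echo "scale=3; 16/15" | bc      # ~1.07x the SUs (one extra SU)

So doubling the K80 count almost halves the wall-clock time while costing barely any additional SUs, which is why we recommend it as the default configuration for COSMIC² users.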