Tesla GPU server

Machine learning and neural network course resources.

Postgraduate students can log in to our Tesla GPU server to run TensorFlow or PyTorch code in a Jupyter notebook.

Log in using your campus credentials (username example: AEOU001)

JupyterHub Tesla GPU

(Help requests can be submitted to our helpdesk: CIM helpdesk)

After login, the system shows a dropdown menu with two options:

  • TensorFlow version: 2.6.2
  • PyTorch version: 1.11

Choose your preferred environment and click SPAWN to start the container.

Containers can be stopped with the Control Panel button (top right).

Notebook content persists across a stop/restart.

Each container has 2 CPU cores, 4 GB of RAM, and 25 GB of storage.

GPU RAM has no hard-coded limit and requires explicit configuration before running code against a large dataset; as a rule of thumb, 3 GB of VRAM is always available.
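Since roughly 3 GB of VRAM is the safe baseline, it can help to check free GPU memory before launching a large job. A minimal sketch, assuming the container exposes the standard nvidia-smi tool (the helper names here are our own, not part of the server setup):

```python
import subprocess


def parse_free_vram_mib(csv_output: str) -> list:
    """Parse the output of `nvidia-smi --query-gpu=memory.free
    --format=csv,noheader,nounits` into a list of MiB values, one per GPU."""
    return [int(line.strip()) for line in csv_output.strip().splitlines() if line.strip()]


def query_free_vram_mib() -> list:
    """Ask nvidia-smi for the free memory on each visible GPU."""
    out = subprocess.run(
        ["nvidia-smi", "--query-gpu=memory.free", "--format=csv,noheader,nounits"],
        capture_output=True, text=True, check=True,
    ).stdout
    return parse_free_vram_mib(out)
```

If the reported value is well below 3072 MiB, other containers are likely under load and a memory-capped session is the safer choice.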

During the test and prototyping phase, we suggest allowing GPU memory to grow on demand. Note that in TensorFlow 2.x (the server runs 2.6.2) the session API lives under tf.compat.v1:

import tensorflow as tf

config = tf.compat.v1.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.compat.v1.Session(config=config)

When the code needs to crunch a large amount of data, we suggest pinning the per-process memory fraction to 0.25:

gpu_options = tf.compat.v1.GPUOptions(per_process_gpu_memory_fraction=0.25)
sess = tf.compat.v1.Session(config=tf.compat.v1.ConfigProto(gpu_options=gpu_options))
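The snippets above are TensorFlow-specific. If you spawned the PyTorch container instead, a rough equivalent of the 0.25 fraction is torch.cuda.set_per_process_memory_fraction, which is available in the PyTorch 1.11 image. A sketch, assuming you want the same 25% cap:

```python
import torch

# Cap this process at 25% of the device's total VRAM, mirroring the
# TensorFlow per_process_gpu_memory_fraction=0.25 setting above.
# Allocations beyond the cap raise an out-of-memory error instead of
# starving other containers sharing the GPU.
if torch.cuda.is_available():
    torch.cuda.set_per_process_memory_fraction(0.25, device=0)
```

Unlike TensorFlow's fraction, this cap is enforced by PyTorch's caching allocator, so memory is still allocated lazily up to the limit.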

Resources:

Notebooks ready to use in your environment.

Jupyter notebook examples:

Introduction to TensorFlow and ML
[Download]

Advanced example:
Multi GPU CNN
[Download]

Introduction to PyTorch
Torch.nn
[Download]

CIM Tech Team
