Euramoo User Guide






Technical overview

Who can use the Euramoo cluster?

Getting a login

Accessing Euramoo

Major file systems

The batch system

Usage accounting

Modules, applications, compilers, debuggers

User training

Getting help

Free cluster connection tools




The name Euramoo comes from Lake Euramoo, a shallow dumbbell-shaped volcanic crater lake located on the Atherton Tablelands in far north Queensland.

Euramoo is the QRIScloud “Cluster-as-a-Service” offering. It provides a batch scheduler environment and is pre-populated with a range of computational application software.

The hardware and configuration of Euramoo is optimised for running multiple independent jobs, each using a small number of processor cores. It is ideally suited to large parameter sweep or ensemble applications. By contrast, modern HPC systems like Barrine, Raijin and Flashlite are designed primarily for large-scale parallel computations that entail hundreds or thousands of cores cooperating via message passing.

Under the hood, Euramoo is implemented using OpenStack cloud computing technology. It uses a combination of QRIScloud's NeCTAR research cloud computational resources and other resources contributed by QCIF member organisations. One advantage of this approach is that the computational capacity of the cluster can be expanded simply by adding new compute nodes.



Technical overview

Euramoo is a multi-node virtual machine cluster, built on the following hardware:

  • 512 AMD cores in virtualisation servers, each with 64 cores, 256GB memory and 5TB of disk
  • 1080 Intel cores in virtualisation servers, each with 24 cores, 256GB memory and 300GB of disk
  • 80 Intel cores in virtualisation servers, each with 20 cores, 64GB memory, an NVIDIA Tesla K20M, and 5.1TB of disk

The cluster provides the following resources:

  • 3 AMD login nodes, behind a load balancer.
  • A mix of compute node architectures and operating systems (see table below).
  • The open source PBS TORQUE batch system with the Maui scheduler.
  • 228TB of shared storage connected via NFS (the /home and /sw file systems).
  • 58TB of shared storage connected via NFS (/30days).
  • Home directories with tight quotas and *no* backups. Backup is the responsibility of the user.
  • The /30days file system, providing temporary storage for staging data.
  • 500GB of local scratch storage on each compute node, exposed as $TMPDIR.
  • A range of applications including R and the Matlab MCR.
  • The means for users to compile additional software packages, as required.
  • An Ubuntu worker node environment running Ubuntu 14.04 LTS with the BioLinux 8 application set.

The current distribution of available nodes is summarised in the following table.
Note: The memory available for jobs to use on compute nodes is typically less than the physical memory by 2GB.

                                AMD   Intel   Intel (GPU)
CentOS Login Nodes 8C/32GB        3
Ubuntu BioLinux 8C/32GB           8
Ubuntu BioLinux 10C/120GB                 8
CentOS Compute 8C/32GB           48
CentOS Compute 10C/120GB                 60
CentOS Compute 16C/48GB/GPU                         4

Who can use the Euramoo cluster?

Euramoo is available for use by any researcher who belongs to a QCIF member institution or partner organisation.


Getting a login

To access Euramoo, you need to:

1. Create a QRIScloud account.

  • Go to the QRIScloud portal
  • Click on the “Account” link
  • Log in using your Australian Access Federation (AAF) credentials
  • Accept the Terms and Conditions
  • Update your user profile information (click on the “My Profile” link)

2. Once logged into the QRIScloud portal, you can then request a new service.

  • Click on “My Services”
  • Click on “Order new services”
  • Click on “Register to use Euramoo”
  • Complete the request form and submit the request.

3. Once your request has been processed you will need to generate your QRIScloud service access credentials (QSAC).

  • Click on “My Credential”
  • Click on the “Create credential” button
  • You will be presented with the username and password that can be used to login to Euramoo. Please make a careful note of these.

You will be contacted by email when your account has been registered.


Accessing Euramoo

Registered users should connect to the Euramoo cluster using Secure Shell (SSH).
If you are connecting from a Linux or Mac system, you can use the ssh command from a command shell. For example:

$ ssh <qsac-account>
# enter your qsac password when prompted

On Windows, you can use the third-party PuTTY tool to SSH to Euramoo.

When you log in to Euramoo, you will find yourself logged in on one of three identical login nodes (euramoo1, euramoo2, or euramoo3). We recommend that you connect to the hostname euramoo to ensure you get the least loaded available login node. The euramoo hostname provides a load balancer for the three login nodes.


Major file systems



/home

Your home directory on Euramoo is /home/$USER. It is created automatically when you log in for the first time. Your home directory can be accessed on the Euramoo login nodes and the Euramoo compute nodes. Its purpose is to hold software that you have brought to the system, batch scripts and configuration files, and a relatively small amount of data.

The /home file system has quotas on both storage space and number of files (refer to the default file system quota settings below).

Important Note: The /home file system is NOT backed up. If files are accidentally deleted we are unable to restore them. It is your responsibility to backup files located in your home directory. We advise you to regularly transfer any valuable files from your home directory to some other system that is backed up.


/30days

Each Euramoo user is allocated a directory on the /30days file system. It is present on the Euramoo login nodes and the Euramoo compute nodes. Its main purpose is to hold large data sets on a temporary basis while you are computing against them; it is designed as a data staging area.

Users have a quota of 3TB (3072 GB) on the /30days file system. This file system is NOT backed up. Furthermore, files left on this file system are automatically deleted 30 days after they were created.
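If you want to spot files approaching the 30-day purge, a small helper along these lines can be used (a sketch only; the per-user path /30days/$USER is an assumption, so adjust it to the actual name of your directory):

```shell
#!/bin/bash
# List files in a directory that are more than 25 days old, i.e. within
# a few days of the automatic 30-day purge. The default path
# /30days/$USER is an assumption; pass another path to override it.
aging_files() {
    local dir="${1:-/30days/$USER}"
    find "$dir" -type f -mtime +25
}
```

Run this on a login node and copy anything you still need off /30days before it disappears.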

$TMPDIR (/scratch)

Each Euramoo compute node has a local file system called /scratch that is used to hold data for the duration of a batch or interactive job. The batch system accesses this storage via the environment variable $TMPDIR, which is unique for each job. The contents of $TMPDIR are deleted automatically when the job ends. The /scratch file system is typically 500GB and is shared amongst the jobs that are running on the node at any particular time.
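A typical job therefore stages data in and out of $TMPDIR. The following sketch shows the pattern; the #PBS header values, input file and results directory are illustrative only, and the script generates demonstration data when the hypothetical input file is absent:

```shell
#!/bin/bash
#PBS -A UQ-RCC
#PBS -l nodes=1:ppn=1:amd,mem=3GB,vmem=3GB,walltime=02:00:00
#PBS -m n

# Fall back to a private directory when run outside a PBS job,
# so the pattern can be tried out interactively.
TMPDIR="${TMPDIR:-$(mktemp -d)}"

# Hypothetical input file and results directory - substitute your own.
INPUT="${INPUT:-$HOME/input.dat}"
RESULTS="${RESULTS:-$HOME/results}"
# Demonstration data, created only if the hypothetical input is missing.
[ -f "$INPUT" ] || { INPUT=$(mktemp "$HOME/demo-input.XXXXXX"); printf 'a\nb\nc\n' > "$INPUT"; }

cp "$INPUT" "$TMPDIR/"        # stage input onto fast local disk
cd "$TMPDIR"

# ... run the real computation against the local copy here ...
wc -l < "$(basename "$INPUT")" > output.txt    # stand-in for real work

# Copy results back to permanent storage BEFORE the job ends;
# the contents of $TMPDIR are deleted automatically.
mkdir -p "$RESULTS"
cp output.txt "$RESULTS/"
```

The essential points are the copy onto local disk at the start and the copy back to /home (or /30days) at the end.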


/sw

The /sw file system is an NFS file system containing all of the currently available software modules that can be used on Euramoo. It is present on the Euramoo login nodes and the Euramoo compute nodes, and is read-only for normal users. The presence, and the contents, of /sw depend on the node type. There is no /sw on BioLinux nodes.


/sw-local

The /sw-local file system provides a cache on local disk on each compute node for software that does not perform well when run over NFS (for example, large Java-based applications).


RDS collections (/RDS/Qxxxx)

RDS collections are available on Euramoo via network file system mounts. These mounts are connected automatically when someone tries to access them. Only members of the collection project team are able to access their collection data. Quotas and other limits imposed on a collection also apply when it is accessed from Euramoo.

On the nodes

The following illustrates the file systems you see on a login node and on a typical compute node running a job.

Login Node

davidg@euramoo1:~> df -h
Filesystem                                     Size  Used Avail Use% Mounted on
/dev/vda1                                       50G  9.8G   37G  21% /
tmpfs                                           16G   12K   16G   1% /dev/shm
/dev/vdb                                       197G  504M  187G   1% /mnt
<nfs server>                                   229T  357G  229T   1% /home
<nfs server>                                    58T   14T   44T  24% /30days
<nfs server>                                   229T  357G  229T   1% /sw
<nfs server>                                   5.0T  4.3G  5.0T   1% /RDS/Q0196
<nfs server>                                   5.0G  148M  4.9G   3% /RDS/Q0198


AMD compute node running job 1247

uqdgree5@eura18:~> df -h
Filesystem                                         Size  Used Avail Use% Mounted on
/dev/vda1                                           50G  2.1G   45G   5% /
tmpfs                                               16G     0   16G   0% /dev/shm
/dev/vdc                                           500G   33M  500G   1% /scratch
/dev/vdc                                           500G   33M  500G   1% /scratch
/dev/vdb                                           197G   60M  187G   1% /sw-local
<nfs server>                                       229T  400G  229T   1% /home
<nfs server>                                        58T  2.8T   55T   5% /30days
<nfs server>                                       229T  365G  229T   1% /sw
uqdgree5@eura18:~> echo $TMPDIR
/scratch/1247


Intel node

uqdgree5@eura49-intel:~> df -h
Filesystem            Size  Used Avail Use% Mounted on
/dev/vda1              30G  2.0G   27G   7% /
tmpfs                  60G     0   60G   0% /dev/shm
/dev/vdc              500G   33M  500G   1% /scratch
/dev/vdb               50G   52M   47G   1% /sw-local
<nfs server>          229T  369G  229T   1% /home
<nfs server>          229T  369G  229T   1% /sw
<nfs server>           58T   11T   47T  19% /30days

Quota settings on file systems




File system   Quota (GB)   File limit   Retention
/home                 18       50,000   Indefinite, but no backup
/30days             3072       50,000   Files deleted 30 days from creation
$TMPDIR              500     no limit   $TMPDIR is purged at the end of a job
/RDS/Qxxxx    depends on the collection


Filesystem dos and don'ts

  1. Use $TMPDIR when you need local disk for your batch jobs. The $TMPDIR directory is created in /scratch automatically as part of your batch job and is removed for you automatically at the end of the job.

  2. Saving user data randomly into /scratch (outside of $TMPDIR) can adversely impact other users. Please DON'T do that.

  3. Compute Nodes are periodically rebuilt and the /scratch space is purged, so do not rely on using local disk on compute nodes unless via $TMPDIR.

  4. Don't forget that $TMPDIR is unique for each job and job-array sub-job.
    Although the path may be the same, the $TMPDIR directory will probably contain different files on different nodes and for different jobs.

  5. If you need to work with many small files, please keep them zipped together in a single archive file, copy that single archive to local disk (i.e. $TMPDIR), and unpack it there before working on the files in local disk space.

  6. Further information about storage is provided in the QRIScloud storage user guide.
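To illustrate point 5 above, the following sketch packs many small files into one archive, copies that single file to local disk, and unpacks it there. The file names are hypothetical and the demonstration data is generated on the fly:

```shell
#!/bin/bash
# One large transfer is far kinder to the NFS file systems than
# thousands of small ones.

TMPDIR="${TMPDIR:-$(mktemp -d)}"   # PBS sets $TMPDIR inside a job

# For demonstration, make a directory of small files (hypothetical data).
SRC=$(mktemp -d "$HOME/smallfiles.XXXXXX")
for i in 1 2 3 4 5; do echo "data $i" > "$SRC/part$i.txt"; done

# Pack everything into ONE archive (file names contain no spaces here).
ARCHIVE="$SRC.tar.gz"
tar -czf "$ARCHIVE" -C "$SRC" $(ls "$SRC")

cp "$ARCHIVE" "$TMPDIR/"                                 # single large copy
tar -xzf "$TMPDIR/$(basename "$ARCHIVE")" -C "$TMPDIR"   # unpack locally
```

Work on the unpacked copies in $TMPDIR, then archive the results the same way before copying them back.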

The batch system

The Euramoo cluster uses the open source TORQUE Resource Manager combined with the Maui scheduler as its batch system. The way to use Euramoo is to create a job script and submit it using the qsub command. A sample submission script is provided in the section below.

For example, the command:

$ qsub -A UQ-RCC -l nodes=1:ppn=1:amd myjob.pbs

submits a job script called myjob.pbs to be run on one amd node under the account group UQ-RCC. On Euramoo it is mandatory to specify an account group and node type when submitting a job.

You can find out what groups are available to you by running the groups command. Only some of them are valid account groups (see “Usage accounting” below for further information).

You can find out what node types are available by running the command qsub -I. This will output the latest “quick hints” on job submission, and this should always include an up-to-date list of the node types. The initial set is as follows:

  • intel - a node running CentOS6 on hardware with Intel CPUs.
  • amd - a node running CentOS6 on hardware with AMD CPUs.
  • biolinux - a node running Ubuntu 14.04 with Biolinux8 pre-loaded.
  • gpu - a node with GPU hardware on board for computation.

Jobs with nodes > 1 are not permitted on Euramoo.


To get a bird's-eye view of the batch system, I recommend qstat -Q.
(We are currently restructuring the queues.)

uqdgree5@euramoo1:~> qstat -Q
Queue              Max    Tot   Ena   Str   Que   Run   Hld   Wat   Trn   Ext T   Cpt
----------------   ---   ----    --    --   ---   ---   ---   ---   ---   --- -   ---
Interactive         50      0   yes   yes     0     0     0     0     0     0 E     0
LongWallTime       100     24   yes   yes     9    15     0     0     0     0 E     0
AMD                600   1663   yes   yes   569   138   956     0     0     0 E     0
Intel              600    322   yes   yes    37   285     0     0     0     0 E     0
BioLinux           600     86   yes   yes    37    44     5     0     0     0 E     0
GPU                  2      0   yes   yes     0     0     0     0     0     0 E     0
Render               3      0   yes   yes     0     0     0     0     0     0 E     0
DeadEnd              0      0   yes    no     0     0     0     0     0     0 E     0
workq                0      0   yes   yes     0     0     0     0     0     0 R     0


The qstat -q command also provides information about the queue limits; however, the formatting is not quite right in some situations.

uqdgree5@euramoo1:~> qstat -q


Queue            Memory CPU Time Walltime Node  Run Que Lm  State
---------------- ------ -------- -------- ----  --- --- --  -----
Interactive       118gb    --    24:00:00     1   0   0 50   E R
LongWallTime       --      --    336:00:0   --   15   9 10   E R
AMD                30gb    --    168:00:0     1 138 525 60   E R
Intel             118gb    --    168:00:0     1 285  37 60   E R
BioLinux          118gb    --    168:00:0     1  44  42 60   E R
GPU                46gb    --    48:00:00     1   0   0  2   E R
Render             46gb    --    48:00:00     1   0   0  3   E R
DeadEnd            --      --       --      --    0   0 --   E S
workq              --      --    336:00:0   --    0   0 --   E R
                                               ----- -----
                                                 483  613

If you really want to know the detailed properties of a queue such as the BioLinux queue, try qmgr -c "p q BioLinux" on a login node.
RCC are working on creating a dynamic page that will provide more information with better structure.



All jobs begin their journey in the default "workq" queue, which should be used for all submissions to the cluster. From there, jobs are routed to the correct execution queue according to the resource requests you have made.


Node types

The node types are currently:

  • amd - Opteron
  • intel
  • biolinux
  • gpu
  • hwrender (under development)
  • swrender (under development)

If you request a biolinux node type, it will be scheduled to a node that can accommodate the CPU/RAM request (be that AMD or Intel).


Job parameters

By design, all jobs on Euramoo are single node jobs (i.e. nodes=1: ... always).

The key parameters to adjust for a job are

  • interactive (-I) or not
  • your -A account string
  • walltime resource request
  • other resource requests
    • processors per node (nodes=1:ppn=)
    • node type (:amd, :intel, :biolinux, :gpu)
    • job memory (mem=)
    • job virtual memory (vmem=)
    • job memory per processor
  • emailing options -M and -m (email for job arrays has been disabled to avoid spam events)

For memory related parameters, you can use memory units like mb, gb with upper/lower/mixed case - all should be understood by the job submission filter.

The memory available for jobs to use on compute nodes is typically less than the physical memory by 2GB. The memory limits on queues (qstat -q) reflect this reality.
So, if you submit a job that requests 8 cores and 32GB on an AMD node, it may be rejected, or worse, stay in the queue indefinitely.

RCC has published several user guides. These were written for the Barrine HPC system, but much of the content applies generally. We will be updating those guides for the new clusters.


Usage Limits

Usage limits are imposed to share the resources fairly amongst users. They may vary over time with the total workload and with priority workloads.

The batch system limits the number of jobs each user can have queued and running, so that the resource is shared fairly amongst the user community.
Queues have limits imposed on memory and cores, and these form the basis of the routing of jobs to execution queues.

Usage is controlled through a parameter called PS, the product of the walltime (in seconds) and the number of processors.
Think of your PS allocation as a piece of cloth with time on one side and cores on the other. You can cut that cloth into lots of thin pieces (in time or in numbers of cores), or into a smaller number of bigger pieces.
The PS parameter is set in the job scheduler and is usually invisible to the user.
Once a user's running jobs reach their PS limit, further jobs can still be submitted but will queue, waiting for earlier jobs to finish.
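The cloth analogy can be made concrete with a little arithmetic. The numbers below are purely illustrative, since the real PS limit is set in the scheduler and is not published:

```shell
#!/bin/bash
# Illustrative only: suppose the PS limit were 1,000,000 processor-seconds.
PS_LIMIT=1000000

# Cut the cloth into thin strips: 100 single-core jobs ...
walltime_thin=$(( PS_LIMIT / (100 * 1) ))    # seconds per job

# ... or into bigger pieces: 10 eight-core jobs.
walltime_thick=$(( PS_LIMIT / (10 * 8) ))

echo "100 x 1-core jobs: ${walltime_thin}s (~2.8 h) each"
echo " 10 x 8-core jobs: ${walltime_thick}s (~3.5 h) each"
```

Either mix consumes the same PS budget; the scheduler does this accounting for you.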

Wall time and other job queue limits can be examined using the qmgr -c "p s" command, or via the qstat -q and qstat -Q commands.


Sample job submissions

Interactive Job Submission

qsub -I -A UQ-RCC -l walltime=04:00:00 -l nodes=1:ppn=2:intel,mem=10GB,vmem=10GB

You will need to change the accounting string to yours.


Using a job submission file

Copy and paste this into your own file (called filename.pbs) and modify the account string. Then run qsub filename.pbs





#!/bin/bash
#PBS -A UQ-RCC
#PBS -l nodes=1:ppn=1:intel,mem=3GB,vmem=3GB,walltime=01:00:00

#PBS -m n

#Now do some things
echo -n "What time is it ? "; date

echo -n "Who am I ? " ; whoami

echo -n "Where am I ? "; pwd

echo -n "What's my PBS_O_WORKDIR ?"; echo $PBS_O_WORKDIR

echo -n "What's my TMPDIR ?"; echo $TMPDIR

echo "Sleep for a while"; sleep 1m

echo -n "What time is it now ? "; date


I created 3 job submission scripts that targeted different node types.

[root@master2 davidg]# qstat -a1n -u uqdgree5
                                                                                  Req'd    Req'd       Elap
Job ID                  Username    Queue    Jobname          SessID  NDS   TSK   Memory   Time    S   Time
----------------------- ----------- -------- ---------------- ------ ----- ------ ------ --------- - ---------
804                     uqdgree5    ShortWal amd.pbs            2014     1      1  3072m  01:00:00 R  00:02:13   eura17/0
805                     uqdgree5    ShortWal intel.pbs         26488     1      1  3072m  01:00:00 R  00:01:06   eura11-intel/9
806                     uqdgree5    ShortWal biolinux.pbs      19469     1      1  3072m  01:00:00 R  00:00:03   eura17-bio/0

The node naming convention is acknowledged as unwieldy ... fortunately users rarely need to bother with it.


Job submission scenarios

You may need to request a specific set of node attributes to achieve your computation. The following is not an exhaustive list but gives you a flavour of the scenarios that might unfold:

  1. Your application is single threaded and doesn't use much RAM per core - you need it to run on amd.
    #PBS -l nodes=1:ppn=1:amd
  2. You have an application that does not play nicely with others and has been known to spread out and consume all available RAM and CPU on a compute node and kill jobs belonging to other people.
    In this situation, you should ensure that your job ends up being the only job on a node it lands on. This can be done by asking for an entire AMD node:
    #PBS -l nodes=1:ppn=8:amd 
  3. You have a job which you know will require an atypical amount of RAM, and 2 CPUs, but can otherwise share a node.
    #PBS -l nodes=1:ppn=2:intel,mem=50GB
  4. You have a code that was compiled with Intel MPI. MPI codes can operate on a single compute node, as long as your code does not attempt to communicate amongst the MPI "ranks" using InfiniBand interfaces (which Euramoo does not have). Before you launch your code you may need to remind the MPI implementation that you don't have InfiniBand interfaces.
    #PBS -l nodes=1:ppn=10:intel

    export I_MPI_FABRICS=shm
  5. Your application is known to be able to utilise GPUs. Please select a whole node to minimise harm!
    #PBS -l place=excl -l nodes=1:ppn=16:gpu -A qWXYZ    <<<< use with care ... not tested

For more information about the options for PBS job submission scripts please consult the Torque user guide

When a PBS job runs, it creates a temporary directory for that job. The location is stored in $TMPDIR. Importantly, it is cleanly removed automatically at the end of your job. You should always use $TMPDIR instead of creating a directory in /scratch. On Euramoo, $TMPDIR is on the local scratch disk. You should copy any outputs back to /home or /30days before the job ends, otherwise you won't have any results once the job finishes!


How to check progress of your job

It is best practice to utilise $TMPDIR disk space while your job is running.

So how do you see what is going on when your job is working in $TMPDIR? Here is a step-by-step guide.

STEP 1: Create a SSH Key Pair for Euramoo

You will need to set up ssh access to compute nodes for yourself, first.

You only do this once every month if you care about network security or never again if you are completely irresponsible ;-).

On a login node, run the ssh-keygen command

ssh-keygen -t rsa -b 2048

Enter a moderately good, readily typed passphrase (you will be asked twice).

You can enter an empty passphrase, but that is risky.


STEP 2: Trust that New Key for Connections to Euramoo

Then append the public key to your authorized_keys file

cat $HOME/.ssh/id_rsa.pub >> $HOME/.ssh/authorized_keys


STEP 3: Unlock the private key for your session

To avoid re-entering the private key's passphrase each time you connect to a compute node, you can start an ssh agent and load the key (once per Euramoo session):

eval $(ssh-agent)
ssh-add $HOME/.ssh/id_rsa


WARNING: Do not use that key pair outside Euramoo

Best practice involves creating different keys on your different platforms (e.g. office-workstation / euramoo-login / personal-laptop) and trusting them for inbound connections to Euramoo. The “recycling” of key pairs by HPC users across multiple platforms has caused security problems in the past. We strongly discourage this practice.


STEP 4: Find and connect to the node running your job

While your job is running, use the qstat command to see where it is running:

qstat -a1n

The left column of the output is the job ID.

The right-hand column will look like euraXY-intel/blahblahblah.

ssh euraXY-intel

(or whatever the machine running your job is called, e.g. eura01-intel)


STEP 5: Have a look at your job’s TMPDIR

When you are logged into the compute node:

cd /scratch/<jobID>

Then you can use the tail command to watch an output file or log file growing:

tail -f myOutputFile.txt



Batch best practices

  • Do not specify the queue ... let the batch system figure it out for you!
  • Job submissions are filtered and fixed if possible or rejected if not.
  • Avoid the ncpus=4 nomenclature; use the nodes=1:ppn=4 ... nomenclature instead.
  • Setting a wall time for your job is mandatory, because the scheduler cannot plan jobs without one. If you forget, the submission filter gives you one hour.
  • If you specify a realistic wall time (somewhat longer than your expected run time), it will usually result in your jobs being scheduled more quickly than a job with an excessively long wall-time. Note that the job will be terminated at the end of the wall time (whether it has finished or not) so you should always add a bit extra to your walltime.
  • Think carefully about whether you want to receive emails at the start and end of every job you submit (-m options).
    Email has been disabled for job arrays to avoid spamming problems experienced in mid-2016.
    If you do not want the emails then explicitly disable by using the option
    #PBS -m n
  • If you do not want email then you should ensure you keep the stdout and stderr files that are generated when your job runs. These can help you with troubleshooting and with refining your resource requests (the stderr "e" file contains a summary of resources used and requested).
  • If you have a large number of similar independent computations to perform, please consider using a PBS "job array". These allow you to submit and manage many jobs via a single entry in the PBS queue. See the PBS qsub man page or the PBS User Guide for more details. Given the potential to overwhelm the batch and I/O systems, please consider using a pause in your job arrays to "smear" the start times out. This can be done by including a line such as:
    (sleep $(( ($PBS_ARRAY_INDEX % 10) * 15 )))
  • Aim to get each job or sub-job to run for at least an hour (if necessary combine sub-jobs together to create a more substantial chunk of work per sub-job). This will avoid problems that arise for all users when the PBS server is turning around many many short duration jobs/sub-jobs.
  • Use $TMPDIR space ... it is faster and kinder to your fellow users. But you must copy your results back to permanent storage before the job ends.
  • If you need to run something interactively (perhaps with X11 display) for more than a few minutes (eg. compilation or data analysis) please launch an interactive session on a compute node by issuing a command like:
    qsub -I -l nodes=1:ppn=10:intel,mem=100GB,vmem=100gb -A accountString -v DISPLAY
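As an aside on the job-array "smear" suggested above, the delay expression spreads sub-job start times across a 150-second window, so at most a tenth of a large array hits the file systems at any one moment. A small demonstration (the indices are arbitrary):

```shell
#!/bin/bash
# Each sub-job sleeps (index mod 10) * 15 seconds before starting work.
stagger() {
    echo $(( ($1 % 10) * 15 ))
}

for idx in 0 1 9 10 23; do
    echo "sub-job $idx would sleep $(stagger "$idx") seconds"
done
```

Inside a real array job you would pass the array index variable to the sleep, as the sample line above shows.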


Usage accounting

We account for all usage on the Euramoo cluster to satisfy our stakeholders that their entitlements are being met, and to assist with planning and resource management.

The Euramoo account groups correspond either to organisational groups or to projects that span multiple organisations.
  • For UQ, the account groups are broken down to the level of a School or Centre. The group qriscloud-uq will not work as an account group for job submission.
  • For other organisations, the account group will be the appropriate qriscloud-xxx group for the organisation.

If you are a member of multiple accounting groups, it is important that you choose the most appropriate group when submitting jobs.


Other Accounting Tips

  1. Changing your Linux default group (using the newgrp command) at the command line does not affect accounting. Make sure you use the -A option with the appropriate account group within your submitted batch job.
  2. If your jobs are being rejected because of an invalid account group, please contact QRIScloud support for assistance.


Modules, applications, compilers, debuggers

Euramoo has a wide range of software available. What is available, and how it is accessed, varies amongst the node types.


BioLinux Nodes have no /sw and no modules

Users of biolinux nodes should not require anything other than the software that is installed into each biolinux compute node.
All biolinux applications should answer when you call them!
You should never need to load modules when you are using a biolinux node.

The software available on BioLinux nodes is summarised on the BioLinux project website.


The other node types use /sw and modules

Installed software and software development tools can be found in the /sw file system available on all nodes.

The current list of available software modules is displayed by running the command:

$ module avail

Our intention is for all centrally installed applications to have a module (if required) to set the appropriate environment for using the application.

Some users find that the “module” command does not work within PBS scripts in the batch system. This issue arises if your PBS scripts use a shell that differs from your login environment. The workaround is to initialise the modules environment BEFORE calling the module command, by “source-ing” the appropriate version of the initialisation file prior to loading any module(s).

For example, in a PBS script using /bin/bash you would add the following at the start of the script:

source /usr/share/modules/init/bash

There are init scripts for bash, csh, ksh, perl, python, sh, tcsh and zsh environments.

If your login shell is bash, AND your PBS script begins with #!/bin/bash and/or contains

#PBS -S /bin/bash

then you should not have any problems with module loading.
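Putting the pieces together, a minimal PBS script that initialises the modules environment before loading a module might begin as follows. This is a sketch: the account string and resource values are examples, and the guard simply lets the fragment run harmlessly outside Euramoo:

```shell
#!/bin/bash
#PBS -S /bin/bash
#PBS -A UQ-RCC
#PBS -l nodes=1:ppn=1:intel,mem=3GB,vmem=3GB,walltime=01:00:00
#PBS -m n

# Initialise the modules environment before the first "module" call.
if [ -r /usr/share/modules/init/bash ]; then
    source /usr/share/modules/init/bash
    module load intel-tools-11   # loads the Intel C, Fortran and MKL modules
fi

# ... your commands go here ...
```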

Most applications are installed with a top-level directory under /sw. When there is more than one version of the software, the versions appear in directories beneath that. For example, NetCDF has versions 4.0 and 4.1.1 available under /sw/NetCDF/.

The cluster is licensed for the Intel compiler suite. There are modules that correspond to the major components:

  • C compiler and Fortran compiler
  • Math Kernel Library (includes optimised linear algebra, FFT and other math tools)
  • MPI Library

You need to load the corresponding modules to suit your requirements:

  • The module intel-tools-11 loads C, Fortran and MKL modules
  • To compile a C /Fortran program that uses MPI (Message Passing Interface) you would need to load both the intel-cc-11/intel-fc-11 AND the intel-mpi module.
  • To run a compiled MPI program you need to load the intel-mpi module.


User training

Training materials are available under /sw/doc/Support.


Getting help

In addition to user training and the associated training materials, there are a number of other ways to get help:

  • Documentation on standard Linux commands, the PBS job submission (qsub) and management commands, and many other things can be viewed using the Linux man command.
  • User documentation is available for most of the applications and tools provided via the modules mechanism.
  • Do NOT post any commercial software documentation that you find in /sw on a website, or forward copies to anyone else.
  • The system message of the day will occasionally carry information about forthcoming work and other outages.
  • Requests for software installation should be sent to QRIScloud support.
  • Requests for assistance with Euramoo should be sent to QRIScloud support.


Free cluster connection tools

In order to use the Euramoo cluster, you will typically need a way to connect to it from a laptop or desktop system. You may also need tools for transferring files to and from the cluster, and possibly other things. There are numerous free tools available for these tasks.

To login to Euramoo, you will need a tool that is capable of running an interactive SSH session. The tools you can use include:

  • For Microsoft Windows platforms: the third-party PuTTY or WinSCP tools are available.
  • For Mac OSX and Linux: the ssh command is preinstalled or available from your platform's package installer.

To transfer files to and from Euramoo, you will need to use an SSH-based file transfer method such as SCP or SFTP. The tools you can use for this include:

  • Cross-platform: CyberDuck (GUI) or FileZilla.
  • For Microsoft Windows platforms: WinSCP.
  • For Mac OSX: RBrowser, Fugu, or the scp and sftp commands.
  • For Linux: distribution-specific browsers, and the scp and sftp commands.

If you need to run an interactive application on Euramoo with a GUI, then you will need to run an X11 server on your laptop or desktop that the application can connect to:

  • For Microsoft Windows: Xming is a good (i.e. free) option.
  • For Mac OSX: X11 is available in the Utilities folder.
  • For Linux: if you have a “desktop” install (e.g. Gnome, KDE, etcetera) your system will already be running an X11 server.


 This page has been updated on May 17 2016 by David Green, UQ Research Computing Centre.

