QRIScompute

This service allows users to access NeCTAR cloud compute infrastructure (also known as the Australian Research Cloud) to facilitate computational research, data processing, modelling, analysis and collaboration. The research cloud operates as 'infrastructure as a service' (IaaS) and is suitable for users with skills in Linux system administration. 

QRIScompute facilities

The QRIScompute service provides users with access to:

  • Virtual machines

Users of the NeCTAR research cloud can create virtual machines (instances) with up to 16 virtual CPUs. Once you have been granted an allocation, you can create instances in any of the data centres in the NeCTAR federation.

  • NeCTAR storage

NeCTAR instances can access NeCTAR Volume Storage and Object Storage on the order of gigabytes to a terabyte. You can request this storage when requesting your NeCTAR research cloud allocation. Volume storage must be requested on the node on which your instances will run.

  • NeCTAR Operating Systems

Available operating systems include mainstream Linux distributions, such as CentOS, Ubuntu, Debian, Fedora, and Scientific Linux. Unfortunately, operating systems that require licences, such as Windows Enterprise, are not available through QRIScloud.

Get a Virtual Machine

To create a virtual machine, you need to apply through the NeCTAR dashboard, which is also where you manage and gain access to your cloud compute resources once they are allocated. You can apply for up to 16 virtual CPUs (vCPUs) and 64 GB of memory for short- or longer-term research use. The NeCTAR dashboard allows you to perform many other management functions, and is secured using your home university's credentials via the Australian Access Federation (AAF). Click here to get started. If you need assistance with your application, please consult your university's eResearch Analyst.

If you do not have AAF credentials through an institution, you can ask us to create an account for you to use for QRIScloud. The first time you visit the NeCTAR dashboard, a project trial (PT) will be created so you can try out the facilities, whether or not you want to make use of this option. (Use of your PT is not a prerequisite for requesting a longer-term allocation.) The PT lasts 3 months and provides resources for creating instances with a total of 2 vCPUs. When you are ready, apply for a NeCTAR allocation, which lets you use more NeCTAR resources for a longer time. As project manager, you will be able to grant access to colleagues and research students who are also AAF users.
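
Most users will only ever need the dashboard, but NeCTAR is built on OpenStack, so instances can also be managed programmatically. The sketch below shows one hedged way to launch an instance with the openstacksdk Python library; the endpoint, project, image, flavor, network, and key pair names are illustrative placeholders, not NeCTAR-specific values, so take the real ones from your dashboard and allocation.

```python
# Hedged sketch: launching a cloud instance programmatically with the
# openstacksdk library. Every name below (auth URL, project, image,
# flavor, network, key pair) is an illustrative placeholder.
import openstack

conn = openstack.connect(
    auth_url="https://keystone.example.org:5000/v3",  # assumed endpoint
    project_name="my-allocation",
    username="user@example.edu.au",
    password="...",
    user_domain_name="Default",
    project_domain_name="Default",
)

# Pick a base image, a flavor within your quota (up to 16 vCPUs),
# and a network from your allocation.
image = conn.compute.find_image("Ubuntu 20.04 LTS")   # hypothetical name
flavor = conn.compute.find_flavor("m1.medium")
network = conn.network.find_network("my-network")     # hypothetical name

server = conn.compute.create_server(
    name="my-research-vm",
    image_id=image.id,
    flavor_id=flavor.id,
    networks=[{"uuid": network.id}],
    key_name="my-keypair",            # SSH key registered in the dashboard
)
conn.compute.wait_for_server(server)  # block until the instance is ACTIVE
print(server.status)
```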

Use specialised compute

Large memory node

QRIScloud operates a small number of compute nodes that can accommodate instances with up to 1 TB of memory. These can be made available in two-week blocks as instances with up to 60 vCPUs and 900 GB of memory. Instances can be provisioned with large storage volumes to hold working data.
Request access to these resources.

Elastic compute

Normal NeCTAR resources are often in short supply, leading to problems with launching instances. QRIScloud has set aside some limited capacity to allow users to create instances with large numbers of vCPUs and a relatively short lifetime (up to 7 days), known as elastic compute.
Request access to these resources.

GPU node

QRIScloud operates a small number of compute nodes that contain a Tesla K20m GPU. These can be made available as instances in two-week blocks. Instances can be provisioned with storage volumes to hold working data.
Request access to these resources.

Institutional HPC

ZODIAC – HPC Cluster at James Cook University (JCU)

The Zodiac HPC cluster contains 1000 AMD Opteron processors, with three compute node configurations available:

  • Standard compute nodes have 24 x 2.3 GHz CPU cores and 128 GB of memory
  • Big memory compute nodes have 48 x 2.3 GHz CPU cores and 256 GB of memory
  • Fast compute nodes have 32 x 3.0 GHz CPU cores and 256 GB of memory.

Zodiac utilises 128 TB of disk storage split over three filesystems, in conjunction with 400 TB of tape storage.

Job prioritisation favours new and small users. Jobs requiring more than 48 CPU cores will not run on Zodiac, as the job management system has been designed to prevent multi-node jobs from being executed. Zodiac uses Torque+Maui for job management and scheduling, and the GNU compilers are available for software compilation.
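
Because Zodiac confines each job to a single node, a Torque submission must request at most one node's cores. The hedged sketch below submits such a job from Python by piping a script to qsub (Torque's qsub reads the script from standard input when no file is given); the resource values and the application binary are illustrative placeholders, not Zodiac defaults.

```python
# Hedged sketch: submitting a single-node Torque job from Python.
# Resource values and the binary are illustrative placeholders.
import subprocess

job_script = """#!/bin/bash
#PBS -N my_analysis
#PBS -l nodes=1:ppn=24
#PBS -l walltime=12:00:00
#PBS -l mem=100gb

# Run from the directory the job was submitted from.
cd $PBS_O_WORKDIR
./my_model input.dat
"""

# qsub prints the new job's identifier on success.
result = subprocess.run(
    ["qsub"], input=job_script, text=True,
    capture_output=True, check=True,
)
print("Submitted job:", result.stdout.strip())
```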

JCU HPC services also provide Infrastructure-as-a-Service (IaaS) for researchers with special requirements, including Windows compute, web services, and databases.

Further information about Zodiac can be found at https://secure.jcu.edu.au/confluence/display/Public/Home.

If you would like to find out more about Zodiac, you can contact:

Dr. Wayne Mallett
Phone: +61 7 4781 5084

Isaac Newton – HPC Cluster at CQUniversity (CQUni)

The Isaac Newton HPC cluster contains 544 Intel and AMD CPU cores, with three compute node configurations available:

  • Standard compute nodes: 28 nodes, each with dual Intel E5-2670 2.6 GHz 8-core CPUs (16 CPU cores in total) and 128 GB of memory
  • GPU compute nodes: 2 nodes, each with dual Intel E5-2670 2.6 GHz 8-core CPUs (16 CPU cores in total), 128 GB of memory, and 1 x NVIDIA M2075 GPU
  • Large compute node: 1 node with quad AMD Opteron 6276 2.3 GHz 16-core CPUs (64 CPU cores in total) and 512 GB of memory.

The Isaac Newton HPC cluster has 240 TB of raw shared storage. For more details on CQUniversity's HPC system, you are encouraged to visit https://my.cqu.edu.au/web/eresearch/hpc-systems.

A list of HPC software available can be found at https://my.cqu.edu.au/web/eresearch/hpc-software.

Live HPC utilisation graphs can be found at https://my.cqu.edu.au/web/eresearch/usage-graphs.

Further information about CQUniversity's HPC infrastructure can be found at https://my.cqu.edu.au/web/eresearch/hpc.

If you would like to find out more about CQU's Isaac Newton HPC system, you can contact:

Jason Bell
Phone: +61 (7) 4930 9229 (x59229)

HPC Cluster at the University of Southern Queensland (USQ)

The current USQ HPC cluster comprises:

  • 30 compute nodes, each with 2 x quad-core 2.7 GHz AMD Opteron CPUs and 16 GB of memory
  • 1 visualisation node, a Sun X4440 system with 4 x six-core 2.4 GHz Opteron CPUs, 64 GB of memory, and an NVIDIA graphics card.

There is 180 TB of shared storage.

A new HPC cluster will be deployed soon, comprising 29 compute nodes, 1 administration node, and 1 login and file server node. The compute nodes have three configurations:

  • Standard: 2 x Intel Xeon E5-2650 v3 processors and 128 GB of memory
  • Large memory: 2 x Intel Xeon E5-2650 v3 processors and 256 GB of memory
  • GPU node: 2 x Intel Xeon E5-2650 v4 processors, 128 GB of memory, and 2 x NVIDIA K80 GPUs.

Further information on USQ's HPC can be found at http://www.usq.edu.au/research/support-development/development/eresearch/hpc.

Run Nimrod and Kepler workflows

QRIScloud offers the Kepler tool for running computational workflows, and the Nimrod tool for running parametric (parameter scan) experiments involving resources spread over a number of instances.

The Nimrod tool family facilitates high-throughput science by allowing researchers to use computational models to explore complex design spaces. Models can be executed across changing input parameters. Different members of the tool family support complete and partial parameter sweeps, numerical search by non-linear optimisation, and even workflows. Further, Nimrod allows computational researchers to use a mixture of university-level infrastructure, the Australian research cloud (including QRIScloud), and commercial clouds such as Amazon Web Services.

The Kepler workflow system allows scientists from multiple domains to design and execute scientific workflows. Kepler workflows model the flow of data across a series of computation steps. Scientific workflows can be used to combine data integration, analysis, and visualisation steps into larger, automated "scientific process pipelines" and "grid workflows".
Find out more about Nimrod/Kepler in the cloud.
Request access to these resources.
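
To make the parameter sweep idea concrete, here is a minimal local sketch of the kind of complete sweep Nimrod automates and distributes across cloud instances, written with only the Python standard library; run_model is a hypothetical stand-in for a real computational model.

```python
# Illustrative sketch only: a complete parameter sweep of the kind
# Nimrod automates, run here with local worker processes instead of
# remote instances. run_model is a hypothetical stand-in.
from concurrent.futures import ProcessPoolExecutor
from itertools import product

def run_model(point):
    """One model execution at a single design point."""
    temperature, pressure = point
    return temperature, pressure, temperature * pressure  # dummy output

if __name__ == "__main__":
    # The design space: every combination of the input parameters.
    temperatures = range(280, 321, 10)
    pressures = [0.5, 1.0, 1.5]
    points = list(product(temperatures, pressures))

    # Nimrod would farm these out to many instances; locally we just
    # spread them over worker processes.
    with ProcessPoolExecutor() as pool:
        for t, p, value in pool.map(run_model, points):
            print(f"T={t} P={p} -> {value}")
```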

Documentation and support

QRIScloud support can be accessed in several ways. Additionally, QRIScloud staff have created the Virtual Wranglers portal to provide information about many aspects of NeCTAR instance operations. You are welcome to use and contribute to this store of information. Step-by-step instructions on how to set up a NeCTAR instance can be found here. Background information explaining what NeCTAR images are, and how to use them, can be found here. You are also welcome to consult your university's eResearch Analyst.

National Computational Infrastructure (NCI)

Australia's national research computing service, the National Computational Infrastructure (NCI), provides world-class, high-end services to Australia's researchers. Its primary objectives are to raise the ambition, impact, and outcomes of Australian research through access to advanced computational and data-intensive methods, support, and high-performance infrastructure.

NCI's peak system, Raijin, is a Fujitsu Primergy high-performance, distributed-memory cluster which entered production use in June 2013. It comprises more than 50,000 cores (Intel Xeon Sandy Bridge technology, 2.6 GHz), 160 TB of main memory, an InfiniBand FDR interconnect, and 10 PB of usable fast filesystem (for short-term scratch space). The unit of shared-memory parallelism is the node, which comprises dual 8-core Intel Xeon (Sandy Bridge, 2.6 GHz) processors, i.e. 16 cores. The memory specification across the nodes is heterogeneous, in order to accommodate the requirements of most applications while also providing for large-memory jobs. Raijin is particularly suited to large-scale MPI jobs which use less than 2 GB per core and require low-latency interconnects.
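
For readers unfamiliar with the distributed-memory model Raijin targets, the sketch below shows a minimal MPI program using the mpi4py bindings: each rank computes a partial sum and rank 0 combines them. It illustrates the programming style only; it is not an NCI-specific configuration.

```python
# Minimal mpi4py example of the distributed-memory MPI style: each rank
# sums its own slice of a range, and rank 0 combines the partial results.
# Run with e.g.: mpiexec -n 16 python partial_sum.py
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each rank handles the numbers congruent to its rank modulo `size`.
n = 10_000_000
local_sum = sum(range(rank, n, size))

# Reduce the partial sums onto rank 0.
total = comm.reduce(local_sum, op=MPI.SUM, root=0)
if rank == 0:
    print("total =", total)
```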

The National Computational Merit Allocation Scheme (NCMAS) provides researchers with access to Australia's major national computational facilities, including Raijin. The main call for applications is made annually in October, for allocations starting the following January for up to 12 months. QCIF has a share of time on Raijin and accepts applications all year round.

Apply for QCIF's NCI share

FlashLite

FlashLite is a research computer that has been designed and optimised for data-intensive computing. FlashLite supports applications that need large amounts of primary memory along with very high performance secondary memory. Each of the 68 nodes of FlashLite is equipped with 512 GB of main memory and 4.8 TB of solid-state drive (SSD). The operating software supports various programming paradigms, including message passing (MPI), shared memory (OpenMP), and virtual symmetric multiprocessor (vSMP). The vSMP mode aggregates nodes to produce virtual machines with a very large "main" memory address space. A data sheet is available here.

Register to use FlashLite
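
As a taste of the data-intensive pattern that fast secondary memory like FlashLite's SSDs suits, the hedged sketch below memory-maps a large working array onto scratch disk with NumPy so it can exceed comfortable RAM limits; the path and array size are illustrative assumptions, not FlashLite settings.

```python
# Hedged sketch: memory-mapping a large working array onto fast scratch
# disk. The path and array shape are illustrative assumptions.
import numpy as np

scratch_path = "/scratch/working.dat"   # hypothetical SSD-backed path
shape = (50_000, 10_000)                # ~4 GB of float64

# Create the on-disk array and fill it block by block.
data = np.memmap(scratch_path, dtype=np.float64, mode="w+", shape=shape)
for start in range(0, shape[0], 5_000):
    data[start:start + 5_000] = np.random.rand(5_000, shape[1])
data.flush()                            # push dirty pages out to disk

# Reopen read-only (possibly in another process) and reduce.
view = np.memmap(scratch_path, dtype=np.float64, mode="r", shape=shape)
print("first column means:", view.mean(axis=0)[:5])
```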

Euramoo

QCIF, in collaboration with the UQ RCC, has launched a cloud-based computer cluster called Euramoo. Built on the NeCTAR cloud, Euramoo is optimised for multiple serial jobs as opposed to large parallel ones. It supports shared-memory (OpenMP) style programs as well as small-footprint, low network bandwidth message passing (MPI/MPICH) workloads. It is ideally suited to large parameter sweep or ensemble applications. Euramoo offers a very flexible platform because more cores from QRIScloud can be added to it as demand increases and as cores become available. Euramoo supports the Nimrod parameter sweep and workflow tools, providing additional capacity to this powerful high-level computational environment. Further information on Euramoo can be found on the QRIScloud documentation webpage.

Register to use Euramoo
