GPU Cloud Server
NVIDIA A100

NVIDIA A100 Tensor Core GPUs deliver exceptional throughput and low-latency networking for industry-leading performance, powering machine learning and high-performance computing (HPC) workloads.


GPU Compute

NVIDIA A100 Tensor Core GPU

Cost Effective Resources & Infrastructure

Reduce your cloud costs without sacrificing performance or reliability. mCloud delivers enterprise-grade cloud infrastructure at a fraction of the cost of AWS, Google Cloud, or Azure.

Fault-Tolerant Tier IV Data Centre

Micron21 operates Australia’s first Tier IV-certified data centre, offering 100% uptime, redundant power, and high availability architecture.

24/7 Australian-based Expert Support

Our cloud specialists provide 24/7 Australian-based support, ensuring seamless deployments and efficient troubleshooting.

NVIDIA A100

High-Performance Computing

$1,668 AUD / month

Minimum Specifications

Provided as a High-Availability mCloud Virtual Cloud Server

  • GPU: NVIDIA A100 (40 GB)
  • GPU Compute: Dedicated
  • vCPU: 12 Cores - XEON Gold
  • RAM: 64 GB - DDR4
  • Storage: 500 GB - NVMe SSD
  • Bandwidth: 2 TB per month
  • IP Address: Included
  • DDoS Protection: Shield

Introduction

Unprecedented Acceleration
at Every Scale

We utilise NVIDIA A100 Tensor Core GPUs to empower our users to perform deep learning, HPC, and data analytics tasks with high-performance compute.

As the most powerful end-to-end AI and HPC platform for data centres, the A100 allows technologists and researchers to deliver real-world results and deploy solutions into production at scale.

With Multi-Instance GPU (MIG), the A100 can scale up efficiently or be divided into seven isolated GPU instances, offering a versatile platform that adapts dynamically to changing workload demands.


Why Micron21

Why Choose Micron21 for GPU Compute?

Because we operate Australia's first Tier IV data centre, you can rest assured our GPU Compute offerings provide reliable, secure, high-calibre performance.

High-Speed Compute &
NVMe SSD Storage

Our GPU Cloud Servers utilise Intel XEON Gold CPUs and ultra-fast NVMe SSDs to deliver high-performance compute and storage, ensuring rapid processing speeds and fast access to resources for even the most demanding applications.

DDoS Protection

All GPU Cloud Servers with Micron21 are protected by our comprehensive DDoS platform, which employs multiple layers of protection to inspect, scan, and filter traffic at our global scrubbing centres.

Tier IV Data Centre

Tier IV is the highest uptime accreditation a data centre can achieve. Ensure the availability of your systems by hosting your GPU Cloud Servers with Micron21, Australia's first Tier IV-accredited data centre.

ISO Certified

We're up-to-date with the latest in security standards. This includes being ISO 27001, 27002, 27018 and 14520 certified; PCI compliant; and IRAP assessed.

Data Sovereignty

We're proudly 100% Australian owned and operated. In an age of heightened cyber threats and concerns over foreign influence, the physical sovereignty of your data is the ultimate peace of mind.

Australian Support

Our dedicated Australian-based support technicians are located in the Micron21 Data Centre. We aim to remove the complexity of IT management entirely for our customers, with 24/7 access available.


Features

Key Features of NVIDIA A100 GPUs

NVIDIA Ampere Architecture

Whether using MIG to partition an A100 GPU into smaller instances or NVLink to connect multiple GPUs to speed large-scale workloads, A100 can readily handle different-sized acceleration needs, from the smallest job to the biggest multi-node workload. A100’s versatility means IT managers can maximize the utility of every GPU in their data center, around the clock.

Third-Generation Tensor Cores

NVIDIA A100 delivers 312 teraFLOPS (TFLOPS) of deep learning performance. That’s 20X the Tensor floating-point operations per second (FLOPS) for deep learning training and 20X the Tensor tera operations per second (TOPS) for deep learning inference compared to NVIDIA Volta GPUs.

Next-Generation NVLink

NVIDIA NVLink in A100 delivers 2X higher throughput compared to the previous generation. When combined with NVIDIA NVSwitch™, up to 16 A100 GPUs can be interconnected at up to 600 gigabytes per second (GB/sec), unleashing the highest application performance possible on a single server. NVLink is available in A100 SXM GPUs via HGX A100 server boards and in PCIe GPUs via an NVLink Bridge for up to 2 GPUs.

Multi-Instance GPU (MIG)

An A100 GPU can be partitioned into as many as seven GPU instances, fully isolated at the hardware level with their own high-bandwidth memory, cache, and compute cores. MIG gives developers access to breakthrough acceleration for all their applications, and IT administrators can offer right-sized GPU acceleration for every job, optimizing utilization and expanding access to every user and application.
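As a short sketch of what this looks like from software, the snippet below uses the nvidia-ml-py bindings (imported as pynvml) to check whether MIG mode is enabled on a GPU and to list its instances. The package, device index, and printed output are illustrative assumptions, not Micron21 tooling.

```python
# Minimal sketch, assuming nvidia-ml-py (import name: pynvml) is installed
# and the A100 is the first physical GPU on the host.
import pynvml

pynvml.nvmlInit()
gpu = pynvml.nvmlDeviceGetHandleByIndex(0)

current, pending = pynvml.nvmlDeviceGetMigMode(gpu)
print("MIG enabled:", current == pynvml.NVML_DEVICE_MIG_ENABLE)

if current == pynvml.NVML_DEVICE_MIG_ENABLE:
    # Walk the possible MIG slots; unpopulated slots raise an NVML error.
    for i in range(pynvml.nvmlDeviceGetMaxMigDeviceCount(gpu)):
        try:
            mig = pynvml.nvmlDeviceGetMigDeviceHandleByIndex(gpu, i)
        except pynvml.NVMLError:
            continue
        mem = pynvml.nvmlDeviceGetMemoryInfo(mig)
        print(f"MIG instance {i}: {mem.total // 2**20} MiB dedicated memory")

pynvml.nvmlShutdown()
```

Each MIG instance presents to frameworks as an ordinary CUDA device, which is how MIG delivers hardware-level isolation between jobs or tenants.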

High-Bandwidth Memory (HBM2e)

With up to 80 gigabytes of HBM2e, A100 delivers the world’s fastest GPU memory bandwidth of over 2 TB/s, as well as a dynamic random-access memory (DRAM) utilization efficiency of 95%. A100 delivers 1.7X higher memory bandwidth over the previous generation.

Structural Sparsity

AI networks have millions to billions of parameters. Not all of these parameters are needed for accurate predictions, and some can be converted to zeros, making the models “sparse” without compromising accuracy. Tensor Cores in A100 can provide up to 2X higher performance for sparse models. While the sparsity feature more readily benefits AI inference, it can also improve the performance of model training.
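The A100’s sparsity support targets a fine-grained 2:4 pattern: two nonzero values in every group of four. The toy PyTorch sketch below hand-builds such a mask by keeping the two largest-magnitude weights per group; it is purely illustrative, and real workflows would use NVIDIA’s sparsity tooling rather than this hypothetical function.

```python
import torch

def prune_2_to_4(weight: torch.Tensor) -> torch.Tensor:
    """Zero the two smallest-magnitude entries in every group of four."""
    w = weight.reshape(-1, 4)                        # groups of four values
    _, drop = w.abs().topk(2, dim=1, largest=False)  # indices to zero out
    mask = torch.ones_like(w)
    mask.scatter_(1, drop, 0.0)
    return (w * mask).reshape(weight.shape)

w = torch.randn(8, 16)
sparse_w = prune_2_to_4(w)
# Every group of four now holds exactly two nonzero weights (2:4 sparsity).
assert (sparse_w.reshape(-1, 4) != 0).sum(dim=1).eq(2).all()
```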

Artificial Intelligence

Deep Learning Training

AI models are exploding in complexity as they take on next-level challenges such as conversational AI. Training them requires massive compute power and scalability.

NVIDIA A100 Tensor Cores with Tensor Float 32 (TF32) provide up to 20X higher performance over NVIDIA Volta with zero code changes and an additional 2X boost with automatic mixed precision and FP16. When combined with NVIDIA® NVLink®, NVIDIA NVSwitch™, PCIe Gen4, NVIDIA® InfiniBand®, and the NVIDIA Magnum IO™ SDK, it’s possible to scale to thousands of A100 GPUs.
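As a rough sketch of what this looks like in PyTorch (the model, data, and hyperparameters below are placeholders): TF32 accelerates FP32 matrix maths without changes to the training loop, while autocast and gradient scaling add the FP16 mixed-precision boost.

```python
import torch
from torch import nn

# Opt in to TF32 for FP32 matmuls; the default varies across PyTorch releases.
torch.backends.cuda.matmul.allow_tf32 = True

# Placeholder model and optimiser purely for illustration.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 10)).cuda()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # rescales gradients to avoid FP16 underflow

for step in range(100):
    x = torch.randn(64, 1024, device="cuda")        # dummy batch
    y = torch.randint(0, 10, (64,), device="cuda")  # dummy labels
    optimizer.zero_grad()
    with torch.cuda.amp.autocast():  # run eligible ops in FP16
        loss = loss_fn(model(x), y)
    scaler.scale(loss).backward()
    scaler.step(optimizer)
    scaler.update()
```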

For the largest models with massive data tables like deep learning recommendation models (DLRM), A100 80GB reaches up to 1.3 TB of unified memory per node and delivers up to a 3X throughput increase over A100 40GB.


Conclusion

Experience GPU-Accelerated
Cloud Computing

Our mCloud platform, built on robust OpenStack architecture, now offers the ability to integrate powerful NVIDIA GPUs directly into your virtual machines through GPU passthrough technology. This allows virtual machines to access the full capabilities of a physical GPU as if it were directly attached to the system, bypassing the hypervisor’s emulation layer and providing near-native performance.
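Once a GPU has been passed through, a quick way to confirm near-native access from inside the VM is to query it as an ordinary local CUDA device. A small PyTorch sketch (the printed device name is illustrative):

```python
import torch

# Inside the VM, a passed-through GPU appears as a local CUDA device.
assert torch.cuda.is_available(), "no GPU visible to this VM"
print(torch.cuda.get_device_name(0))     # e.g. an A100 40GB (illustrative)
free, total = torch.cuda.mem_get_info()  # bytes of free/total GPU memory
print(f"{free / 2**30:.1f} GiB free of {total / 2**30:.1f} GiB")
```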

Enhance your cloud capabilities with GPU acceleration by integrating dedicated GPUs into your mCloud virtual machines and take your computing to the next level.


See How Much You Can Save with mCloud

Customize your cloud and compare costs instantly against AWS, Google Cloud, and Microsoft Azure. Get more for less with enterprise-grade performance.

  • Transparent Pricing: No hidden fees or surprises.
  • Enterprise-Grade for Less: High performance at lower costs.
  • Instant Comparison: See real-time savings.

Sign up for the Micron21 Newsletter