Learn how to configure storage volumes and network ports for your H100 and H200 GPU instances to support machine learning workloads, model training, and inference deployments.
Provisioning: Automatically available when instance starts
Persistence: ❌ Erased when instance is terminated
Performance: High-speed NVMe SSD (up to 7,000 MB/s)
Use Cases: Active training, temporary data, cache, scratch space
Onboard Storage by Configuration:

| GPU Type | Region | Onboard Storage Size |
| --- | --- | --- |
| H100 | us-central-1 | 18TB |
| H100 | eu-north-4 | 10TB |
| H100 | uk-southeast-3 | 24TB |
| H200 | uk-central-3 | 24TB |
Onboard storage is erased when the instance is terminated. Always save important data to persistent storage or external services before terminating an instance.
Optional Add-on Storage Volumes
Cost: Additional hourly charges apply
Provisioning: Created separately, then attached to instances
Persistence: ✅ Survives instance termination
Performance: Network-attached (up to 1,000 MB/s)
Use Cases: Long-term data, model checkpoints, datasets, shared resources
Size: 100GB - 10TB per volume (customizable)
Availability: Currently only in the us-central-1 region (if you need storage in other regions, please contact us)
Persistent storage volumes are created and managed separately from instances. They can be:
Attached to running instances after creation
Detached from one instance and reattached to another
Retained even after all instances are terminated
Shared between multiple instances (read-only mode)
Persistent storage is currently only available in the us-central-1 region. For other regions, please contact us at [email protected].
Onboard storage is automatically mounted and ready to use when your instance starts:
```bash
# View all available storage
df -h

# Onboard storage is typically mounted at:
#   /home/ubuntu   (root volume for OS and user files)
#   /mnt or /data  (additional onboard storage space)

# Check onboard storage usage
du -sh /home/ubuntu/*
du -sh /mnt/*
```
The exact mount points may vary by instance configuration. Use df -h or lsblk to see all available storage.
Persistent storage incurs additional hourly charges. Check current pricing in the console.
2. Attach to Instance
After creating the volume:
Go to your running instance details
Click “Attach Storage”
Select your persistent volume from the list
The volume will be attached as a block device (e.g., /dev/vdb)
3. Mount and Use
SSH into your instance and mount the volume:
```bash
# Check if the volume is attached
lsblk

# Format if new volume (only do this once!)
sudo mkfs.ext4 /dev/vdb

# Create mount point
sudo mkdir -p /mnt/persistent

# Mount the volume
sudo mount /dev/vdb /mnt/persistent

# Set permissions
sudo chown -R $USER:$USER /mnt/persistent

# Make mount persistent across reboots
echo "/dev/vdb /mnt/persistent ext4 defaults 0 2" | sudo tee -a /etc/fstab
```
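For reference, here is what each field in that `/etc/fstab` line means. Note that device names like `/dev/vdb` can change between reboots when multiple volumes are attached, so referencing the filesystem UUID (as printed by `sudo blkid /dev/vdb`) is more robust; the annotations below are a sketch, not platform-specific guarantees.

```bash
# /etc/fstab fields, left to right:
#   /dev/vdb         device to mount (more robust: UUID=... from `sudo blkid /dev/vdb`)
#   /mnt/persistent  mount point
#   ext4             filesystem type
#   defaults         mount options
#   0                dump flag (0 = not backed up by the legacy dump utility)
#   2                fsck pass number (2 = checked after the root filesystem)
```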
```bash
# View all storage available on your instance
df -h

# Check disk usage by directory
du -sh /*

# Monitor I/O performance
iostat -x 1
```
Your onboard storage is automatically available and includes:
System root volume (OS and applications)
Additional data volume (varies by configuration: 2TB - 24TB)
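Since the data volume size varies by configuration, it can help to verify free space programmatically before kicking off a long run. A minimal sketch (the path and threshold are examples; point `DATA_DIR` at your actual data volume):

```bash
# Warn if free space on a given mount drops below a threshold
DATA_DIR=/        # in practice: /mnt, /data, or your data volume
REQUIRED_GB=1     # example threshold; set to your job's needs

# GNU df: report available space in GB, strip everything but digits
AVAIL_GB=$(df -BG --output=avail "$DATA_DIR" | tail -n 1 | tr -dc '0-9')

if [ "$AVAIL_GB" -ge "$REQUIRED_GB" ]; then
  echo "OK: ${AVAIL_GB}GB free on $DATA_DIR"
else
  echo "WARNING: only ${AVAIL_GB}GB free on $DATA_DIR" >&2
fi
```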
2. Manage Persistent Storage Volumes
If you’ve created persistent storage (us-central-1 only):
```bash
# List block devices to find your persistent volume
lsblk
# Persistent volumes appear as /dev/vd* devices

# Mount your persistent volume
sudo mkdir -p /mnt/persistent
sudo mount /dev/vdb /mnt/persistent

# Make mount persistent across reboots
echo "/dev/vdb /mnt/persistent ext4 defaults 0 2" | sudo tee -a /etc/fstab
```
3. Transfer Data Before Termination
Remember: Onboard storage is erased when the instance is terminated!
Before terminating an instance:
```bash
# Option 1: Copy to persistent storage (if available)
rsync -avP /home/ubuntu/important-data/ /mnt/persistent/backup/

# Option 2: Upload to S3
aws s3 sync /home/ubuntu/models/ s3://my-bucket/models/

# Option 3: Upload to Google Cloud Storage
gsutil -m cp -r /home/ubuntu/checkpoints/ gs://my-bucket/checkpoints/

# Option 4: Create a tar archive and upload it
tar -czf models.tar.gz /home/ubuntu/models/
curl -T models.tar.gz https://transfer.sh/models.tar.gz
```
Clean temporary files regularly to maintain performance
Persistent Storage Optimization (if available):
Network-attached with up to 1,000 MB/s throughput
Best for long-term storage, not active training
Use for model archives and dataset libraries
Consider compression for infrequently accessed data
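As a rough illustration of the compression step, the sketch below gzips a stand-in file and compares sizes. The paths are examples, and the synthetic file compresses far better than a real checkpoint would; treat it only as a demonstration of the workflow.

```bash
# Create a 1 MiB stand-in "checkpoint" and compress it
mkdir -p /tmp/compression-demo
head -c 1048576 /dev/zero > /tmp/compression-demo/model.ckpt

# -k keeps the original file, -f overwrites any previous .gz
gzip -kf /tmp/compression-demo/model.ckpt

# Compare sizes before and after
ls -l /tmp/compression-demo/
```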
Managing Limited Storage (2TB configurations):
```bash
# Monitor disk usage closely
watch -n 60 'df -h | grep -v tmpfs'

# Clean package caches
pip cache purge
conda clean --all -y
apt-get clean

# Remove old Docker images if using containers
docker system prune -a -f

# Stream large datasets instead of downloading them.
# Example with TensorFlow (Python):
#   dataset = tf.data.TFRecordDataset(["s3://bucket/data.tfrecord"])
```
Data Lifecycle Management:
```bash
# Set up automated cleanup for temporary files
find /home/ubuntu/cache -type f -mtime +1 -delete
find /tmp -type f -mtime +1 -delete

# Compress old checkpoints
find /home/ubuntu/checkpoints -name "*.ckpt" -mtime +7 -exec gzip {} \;

# Archive completed experiments
tar -czf experiment-$(date +%Y%m%d).tar.gz /home/ubuntu/experiments/completed/
```
```bash
# Local machine: Create SSH tunnel
ssh -L 6006:localhost:6006 ubuntu@[instance-ip] -i ~/.ssh/hyperbolic_key.pem

# On instance: Launch TensorBoard
tensorboard --logdir=/mnt/ml-data/logs --port=6006

# Access at: http://localhost:6006
```
```bash
# Local machine: Forward a custom port (e.g., 5000)
ssh -L 5000:localhost:5000 ubuntu@[instance-ip] -i ~/.ssh/hyperbolic_key.pem

# On instance: Run your service
python app.py --port=5000

# Access at: http://localhost:5000
```
Symptoms: Persistent volume is not visible, or the mount fails.
Solutions:
```bash
# 1. Check if persistent volume is attached
lsblk
# Look for /dev/vdb or similar

# 2. Check if it has a filesystem
sudo file -s /dev/vdb

# 3. If "data" (no filesystem), format it (ONLY for new volumes!)
sudo mkfs.ext4 /dev/vdb

# 4. Create mount point and mount
sudo mkdir -p /mnt/persistent
sudo mount /dev/vdb /mnt/persistent

# 5. Fix permissions
sudo chown -R $USER:$USER /mnt/persistent

# 6. Make persistent across reboots
echo "/dev/vdb /mnt/persistent ext4 defaults 0 2" | sudo tee -a /etc/fstab
```
Note: Persistent storage must be created in the web console first, then attached to your instance.
Disk Space Running Low
Symptoms: Training fails, services crash, or checkpoints cannot be saved.
Solutions:
```bash
# 1. Check what's using space
du -sh /* 2>/dev/null | sort -rh | head -20
df -h

# 2. Clean temporary files and caches
find /tmp -type f -mtime +1 -delete
find ~/cache -type f -mtime +7 -delete
pip cache purge
conda clean --all -y
apt-get clean

# 3. Compress old checkpoints
find ~/checkpoints -name "*.ckpt" -mtime +3 -exec gzip {} \;

# 4. If you have persistent storage, move data there
if mountpoint -q /mnt/persistent; then
  rsync -avP ~/models/ /mnt/persistent/models/
  rm -rf ~/models/old_versions/
fi

# 5. For limited storage (2TB), use external storage:
# upload to S3, then delete the local copies
aws s3 sync ~/outputs/ s3://my-bucket/outputs/
rm -rf ~/outputs/

# 6. Remove Docker images if using containers
docker image prune -a -f
docker system prune -a -f --volumes
```
Prevention Tips:
Set up automated cleanup in cron
Use persistent storage for long-term data (if available)
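A sketch of wiring the cleanup commands above into cron. The schedule and paths are examples, and note that `crontab <file>` replaces the current user's crontab, so merge with any existing entries first (`crontab -l`).

```bash
# Write a crontab file with daily 03:00 cleanup jobs
cat > /tmp/cleanup.cron <<'EOF'
# Delete cache and /tmp files older than one day, every day at 03:00
0 3 * * * find /home/ubuntu/cache -type f -mtime +1 -delete
0 3 * * * find /tmp -type f -mtime +1 -delete
EOF

# Review the file, then install it with: crontab /tmp/cleanup.cron
cat /tmp/cleanup.cron
```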