Supercharging OCI Object Storage with Rclone Mount on a High-Performance Windows Shape

When you think about Object Storage, your first instinct is probably that it’s great for durability and cost, but not exactly known for blazing-fast file sharing over SMB. But what if you could expose an OCI Object Storage bucket as a Windows file share with 700–900 MB/s throughput on large video files?

Yes, it’s possible — and surprisingly smooth — when you pair Rclone mount with a Windows shape (16 OCPUs) and a 120 VPU / 1.6 TB block volume acting as a high-speed cache layer.

In this guide, I’ll show you how to do exactly that, explain the multipart upload/download mechanics, and break down the specific Rclone flags that make this setup fly.

The Core Idea

We use Rclone mount to map an OCI Object Storage bucket to a drive letter (like X:) on Windows. Then we configure Rclone to:

  • Use aggressive caching backed by a high-throughput block volume.
  • Enable multipart parallel streams to squeeze every drop of performance from OCI.
  • Serve everything via SMB / Windows File Sharing, so users and applications just see a normal network drive.

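The oci: remote referenced throughout this post lives in rclone.conf. Below is a minimal sketch using rclone's oracleobjectstorage backend; the namespace, compartment, and region values are placeholders for your own tenancy, and it assumes instance-principal authentication is set up for the VM (user-principal auth with an API key works just as well):

# Remote named "oci"; this matches the oci: prefix used in the mount command.
# provider assumes an instance-principal policy exists for this VM;
# user_principal_auth with an API key is the common alternative.
[oci]
type = oracleobjectstorage
provider = instance_principal_auth
# The values below are placeholders for your tenancy:
namespace = your-object-storage-namespace
compartment = ocid1.compartment.oc1..example
region = eu-frankfurt-1
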
Why Multipart Transfers Matter

Object Storage isn’t like a traditional filesystem — it thrives on parallel I/O. Instead of moving files in a single stream, multipart transfers split large files into chunks and upload/download them in multiple simultaneous threads.

This is why you never want to rely on default single-stream behavior.

  • Multi-thread download → aggressively pulls chunks in parallel.
  • Read-ahead and chunk streaming → keeps buffers full to avoid idle time.
  • High VPU block volume → handles the temporary caching load without bottlenecking.

With the right threading and chunk size strategy, OCI Object Storage behaves closer to high-end NAS performance for large sequential files.
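
You can see the same multipart mechanics in isolation, outside the mount, by pulling one large object with rclone copy and the threading flags used later in this post. This is only an illustrative sketch; the object name and the local target folder are placeholders:

rclone.exe copy oci:trascoding/some-large-video.mov D:\scratch --multi-thread-streams 18 --multi-thread-cutoff 64M --progress

Watching the --progress output, you should see throughput climb well past what a single-stream download of the same file achieves.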

The Tested Command (with ~700–900 MB/s throughput)

Here’s the exact Rclone command that works beautifully in this scenario (note that rclone mount on Windows requires WinFsp to be installed first):

rclone.exe mount oci:trascoding X: --vfs-cache-mode full --vfs-cache-max-size 500G --vfs-cache-max-age 24h --vfs-read-chunk-size 256M --vfs-read-chunk-size-limit off --buffer-size 1G --dir-cache-time 2m --poll-interval 15s --cache-dir "D:\" --links --transfers 12 --checkers 8 --multi-thread-streams 18 --multi-thread-cutoff 64M --no-check-certificate --no-modtime --vfs-fast-fingerprint --disable-http2 --vfs-read-chunk-streams 64 --vfs-read-ahead 1G

Breakdown of Key Options (Why They Matter)

  • --vfs-cache-mode full: Enables full file caching — essential for SMB access and random reads.
  • --cache-dir "D:\": Points the cache at the high-throughput 120 VPU block volume.
  • --vfs-cache-max-size 500G: Allows a large cache footprint to handle multi-user access.
  • --vfs-read-chunk-size 256M + --vfs-read-chunk-size-limit off: Starts big and stays big — ensures sustained throughput.
  • --multi-thread-streams 18: Multipart parallel streams for downloading chunks simultaneously.
  • --multi-thread-cutoff 64M: Enables threaded downloads for files bigger than 64 MB.
  • --vfs-read-chunk-streams 64: Spawns up to 64 concurrent chunk fetchers — ideal for high-OCPU shapes.
  • --buffer-size 1G: Keeps a large in-memory buffer to avoid stalls between chunks.
  • --vfs-read-ahead 1G: Prefetches future chunks before they’re requested — great for video playback and sequential reads.
  • --disable-http2: Avoids HTTP/2 overhead and issues with some OCI endpoints — improves stability.
  • --transfers 12 & --checkers 8: Balances parallel transfer slots without overwhelming OCI limits.
  • --vfs-fast-fingerprint: Avoids unnecessary checksum scans — faster metadata operations.

Windows File Share Integration

Once mounted to X:, just:

  1. Enable Windows File Sharing (SMB) on that drive or folder.
  2. Assign permissions (both steps can also be scripted in PowerShell; see the sketch after this list).
  3. Clients connect normally — they never know they’re pulling from Object Storage.
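
If you prefer to script steps 1 and 2 rather than click through Explorer, the built-in SmbShare cmdlets can do it from an elevated PowerShell session. A minimal sketch; the share name and the group being granted access are placeholders:

# Publish the mounted drive as an SMB share (run elevated)
New-SmbShare -Name "OCIMedia" -Path "X:\" -ReadAccess "Everyone"

# Grant change (read/write) access to a specific group
Grant-SmbShareAccess -Name "OCIMedia" -AccountName "CONTOSO\MediaEditors" -AccessRight Change -Force

Clients then map \\<server>\OCIMedia like any other share.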

This setup is ideal for:

  • Media workflows (e.g., transcoding farms, video review, editing over network).
  • Backup/archive browsing.
  • Hybrid cloud file distribution.

Real-World Performance

With the 16 OCPU Windows shape and the 1.6 TB / 120 VPU cache disk, this configuration achieved:

700–900 MB/s sustained read throughput on 20 GB+ sequential video files from OCI Object Storage.

That’s local NVMe-like performance, coming straight from Object Storage via Rclone + caching. Insane.
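
If you want to sanity-check throughput on your own setup, timing a large sequential copy from the mount and dividing the file size by the elapsed time gets you close enough. A rough PowerShell sketch; the file path is a placeholder for any multi-GB object in the bucket:

# Time a sequential read from the mount and report MB/s
$file = "X:\media\sample-20gb.mxf"   # placeholder path
$elapsed = Measure-Command { Copy-Item -LiteralPath $file -Destination "D:\scratch\" }
"{0:N0} MB/s" -f ((Get-Item -LiteralPath $file).Length / 1MB / $elapsed.TotalSeconds)

Keep in mind that a file already sitting in the VFS cache measures the block volume rather than Object Storage, so test with a cold file for a realistic number.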

Final Tips for Maximum Speed

  • Always use a high VPU block volume — OCI performance scales with VPU, not just size.
  • Monitor cache usage and increase --vfs-cache-max-size if needed (a quick check is sketched after this list).
  • For media workflows, consider disabling antivirus scanning on the mount path.
  • If you need write performance too, adjust --transfers upward carefully and test.
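
For the cache-usage tip above, a quick way to see how much of the 500G budget is in use is to measure the VFS cache directory on the block volume. A rough sketch, assuming rclone's default layout where cached file data lives under <cache-dir>\vfs (so D:\vfs here):

# Sum the size of rclone's VFS cache on the block volume
$cache = Get-ChildItem -LiteralPath "D:\vfs" -Recurse -File -ErrorAction SilentlyContinue |
         Measure-Object -Property Length -Sum
"{0:N1} GB cached" -f ($cache.Sum / 1GB)

If the mount is started with --rc, newer rclone builds also expose the same information through rclone rc vfs/stats.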

Conclusion

With the right tuning, Rclone mount turns Object Storage into a high-speed file server backend, especially when paired with a powerful compute shape and fast block volume cache.

It’s a cost-efficient, scalable, and surprisingly elegant way to blend object storage economics with traditional SMB usability — without sacrificing performance.
