TuringData All-Flash Storage Appliances: TuringFlash F9100/F9200 Series

Unlock the Full Potential of Your AI Infrastructure

Maximize GPU Utilization, Minimize Costs, and Drive AI Efficiency
All-Flash Storage Optimized for AI and Beyond
The TuringData All-Flash Distributed Storage Appliances—TuringFlash F9100/F9200 Series—are groundbreaking AI-native storage solutions designed to unlock the full computing power of your GPUs. Powered by the TuringData file system and equipped with PCIe 5.0 NVMe SSDs and 400Gbps InfiniBand/Ethernet networks (with RoCE support), the TuringFlash series appliances deliver ultra-low latency, best-in-class performance, and exceptional efficiency for the world’s most data-intensive AI and HPC environments.
Fastest Performance to Power the Most Demanding Workloads with Just 3 Nodes
7.5M IOPS: Speed up all your AI apps
480GB/s Throughput: Drive mission-critical workloads
90%+ GPU Utilization: Maximize your GPU investment
The Proven Foundation for Your AI Factory

Blazing Speed for Unmatched GPU Efficiency

Deliver up to 480GB/s bandwidth and 7.5M IOPS in a three-node storage cluster by removing data movement delays, ensuring GPUs have instant access to critical data.

Built for Flexibility

Deploy anywhere with ease. TuringFlash offers out-of-the-box simplicity, intuitive setup, and consistent, reliable performance with agile scaling and seamless integration into modern orchestration environments.

Optimization for Many-to-One Traffic

Eliminate network congestion and traffic hotspots caused by many-to-one (incast) traffic patterns in large-scale GPU clusters, ensuring smooth and efficient data flow.

Simple Data Lifecycle Management

Simplify data management using a smart tiering solution with policy-based and automatic cold-data migration to object storage, reducing costs while maintaining seamless access.
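
As a conceptual illustration only (not TuringData's implementation or API), the sketch below shows what an age-based cold-data policy looks like: walk a directory tree, treat files untouched for a configurable number of days as cold, and hand them to a migration step. The mount path, 90-day threshold, and migrate_to_object_storage helper are hypothetical placeholders.

```cpp
// Conceptual sketch of an age-based cold-data policy (not TuringData's
// actual mechanism): files untouched for longer than the threshold are
// candidates for migration to object storage.
#include <chrono>
#include <filesystem>
#include <iostream>

namespace fs = std::filesystem;

// Hypothetical placeholder for the real migration step
// (e.g., copy to an object store and leave a stub behind).
void migrate_to_object_storage(const fs::path &file) {
    std::cout << "would migrate: " << file << "\n";
}

int main() {
    const fs::path root = "/mnt/turingflash/projects";   // illustrative mount point
    const auto cold_after = std::chrono::hours(24 * 90); // example 90-day policy
    const auto now = fs::file_time_type::clock::now();

    for (const auto &entry : fs::recursive_directory_iterator(root)) {
        if (!entry.is_regular_file()) continue;
        // Use last modification time as a simple proxy for "coldness".
        if (now - entry.last_write_time() > cold_after) {
            migrate_to_object_storage(entry.path());
        }
    }
    return 0;
}
```

In the appliance, this kind of policy evaluation and the movement of cold data to object storage are automatic and policy-driven rather than scripted by hand, and migrated data remains seamlessly accessible.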

Key Features

GPUDirect

Supports NVIDIA GPUDirect® Storage, providing a direct data path between storage and GPU memory to meet the performance requirements of AI workloads (see the usage sketch after this feature list).

Elastic Data Network

Integrates multiple network planes within storage clusters to accelerate both AI model training and inference.

Multi-Tenancy

Supports multi-tenant deployments with per-directory QoS and quota management, ensuring each user or team receives predictable performance and fair resource allocation.

Distributed Metadata

A non-volatile, distributed metadata architecture ensures superior resilience, scalability, and performance.

Unified Global Namespace

Enables real-time read/write access to data across edge, core, and cloud with a unified global namespace.

Smart DataLoad

Seamlessly integrates object storage and file systems, enabling business-driven data movement and efficient data management.
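
To make the GPUDirect® Storage feature above concrete, here is a minimal client-side sketch that reads a file straight into GPU memory with NVIDIA's cuFile API. It assumes a CUDA-capable host with the cuFile library installed and a GPUDirect-enabled mount; the file path and transfer size are illustrative, and error handling is reduced to the essentials.

```cpp
// Minimal GPUDirect Storage read using NVIDIA's cuFile API.
// Assumes the file lives on a GPUDirect-capable mount (path is illustrative)
// and that the CUDA toolkit and libcufile are installed.
#include <cufile.h>
#include <cuda_runtime.h>
#include <fcntl.h>
#include <unistd.h>
#include <cstdio>

int main() {
    const char *path = "/mnt/turingflash/dataset.bin";  // illustrative path
    const size_t size = 1 << 20;                        // 1 MiB read

    // Open the cuFile driver once per process.
    cuFileDriverOpen();

    // Open the file with O_DIRECT so reads bypass the host page cache.
    int fd = open(path, O_RDONLY | O_DIRECT);
    if (fd < 0) { perror("open"); return 1; }

    // Register the file descriptor with cuFile.
    CUfileDescr_t descr = {};
    descr.handle.fd = fd;
    descr.type = CU_FILE_HANDLE_TYPE_OPAQUE_FD;
    CUfileHandle_t handle;
    cuFileHandleRegister(&handle, &descr);

    // Allocate GPU memory and register it for DMA.
    void *dev_buf = nullptr;
    cudaMalloc(&dev_buf, size);
    cuFileBufRegister(dev_buf, size, 0);

    // Data moves from storage directly into GPU memory,
    // without staging through a host bounce buffer.
    ssize_t n = cuFileRead(handle, dev_buf, size, /*file_offset=*/0,
                           /*devPtr_offset=*/0);
    printf("read %zd bytes into GPU memory\n", n);

    // Teardown.
    cuFileBufDeregister(dev_buf);
    cudaFree(dev_buf);
    cuFileHandleDeregister(handle);
    close(fd);
    cuFileDriverClose();
    return 0;
}
```

Because the transfer bypasses host bounce buffers and the page cache, GPUs read training data at storage speed rather than host-copy speed; build the sketch against libcufile and the CUDA runtime (for example, linking with -lcufile -lcudart).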

Setting a New Benchmark for Performance and Cost Efficiency

TuringFlash can be deployed and made fully operational with just three nodes, running stably while delivering unparalleled performance and cost efficiency.

AI Moves Fast—So Should You
Start with TuringData Today