Scalable storage tailored to build and run generative AI on Kobayashi AI Cloud.
Feed datasets to your GPU cluster at maximum speed — cutting training cycles and enabling low-latency model inference.
Maximize AI infrastructure goodput with high-speed shared storage: accelerate checkpoint read/write during multi-host training.
Store unlimited unstructured data (text, video, etc.) to streamline multi-modal AI training workflows.
Fully S3-compatible object storage, accessible with standard S3 tools and SDKs.
Seamlessly migrate data across storage classes (within the same service) to align with your data strategy.
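As a sketch of what S3 compatibility and storage-class migration look like with a standard S3 client (the endpoint URL, bucket name, and storage-class name below are illustrative placeholders, not Kobayashi-specific values):

```shell
# Upload an object using the standard AWS CLI against an S3-compatible
# endpoint (the endpoint URL is a placeholder).
aws s3 cp ./dataset.tar s3://my-bucket/dataset.tar \
    --endpoint-url https://storage.example.com

# Migrate the object to another storage class by copying it in place.
# STANDARD_IA is the generic S3 class name, used here only as an example;
# the classes actually offered by the service may differ.
aws s3 cp s3://my-bucket/dataset.tar s3://my-bucket/dataset.tar \
    --storage-class STANDARD_IA \
    --endpoint-url https://storage.example.com
```

Because the API is S3-compatible, the same pattern works from any S3 SDK (boto3, the AWS Go/Java SDKs, etc.) by pointing the client at the service endpoint.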
A high-speed shared filesystem built exclusively for AI workloads — delivering scalable performance for parallel AI compute and virtually unlimited capacity. It’s the go-to choice for training and inference workflows, combining cost efficiency, ease of use, and a robust feature set.
Network block volumes built for booting and running virtual machines. Choose from three tiers, tailored to your performance, reliability, and cost requirements:
* Performance varies based on bucket data structure, write concurrency, and upload process configuration.
** Burst performance measured in a 254-GPU/254-CPU host cluster (during real customer multi-modal training workloads).