Azure Container Storage v2: What's New?
Storage
Oct 1, 2025 04:11


by HubSite 365 on John Savill's [MVP]

Principal Cloud Solutions Architect

Azure Container Storage v2 at a glance: AKS CSI integration, local NVMe support, VM SKU considerations, durability and performance insights

Key insights

  • Azure Container Storage v2 is a major update that targets simpler deployment and faster block storage for Kubernetes.
    The video reviews what's new, why it matters for AKS (Azure Kubernetes Service), and shows a short demo.
  • NVMe performance drives the biggest gains — v2 reports up to ~7× higher IOPS and much lower latency versus v1.
    That makes it a better fit for latency-sensitive workloads like databases, AI inference, and messaging queues.
  • Simplified architecture removes the old StoragePool CRD and bundled telemetry components, leaving a single operator and a CSI driver.
    This reduces resource needs, lowers operational overhead, and makes upgrades and troubleshooting easier.
  • Smaller cluster support now allows single-node and two-node setups, so testers and small production clusters can use ACStor v2.
    The release is open source and runs on any Kubernetes cluster, not just AKS.
  • Kubernetes-native workflows mean you manage volumes with standard StorageClass, PVCs, snapshots, and kubectl commands.
    There is no automatic migration path from v1, so transitions need planning; lifecycle actions follow familiar Kubernetes patterns.
  • Best-fit workloads and roadmap focus on stateful apps that need fast local disks and quick attach/detach.
    The video notes durability and performance trade-offs and mentions planned integration with Azure Elastic SAN for broader enterprise scale.

John Savill's [MVP] recently published a YouTube video titled Azure Container Storage v2 Overview, which walks viewers through the major changes and practical implications of the new release. The video lays out the technical shifts, demonstrates a short demo, and explains how v2 differs from the original implementation. Consequently, it serves as a useful briefing for teams evaluating storage options for Kubernetes on Azure and elsewhere. Below, we summarize the key points and examine the tradeoffs and operational challenges the video highlights.

What the video covers

First, the presenter outlines the scope of the content, including a short chapter list that tracks topics from architecture to performance and demos. He explains how v2 changes storage orchestration for Kubernetes and why those changes matter for both development and production clusters. Moreover, the video clarifies that the new release aims to reduce complexity while boosting raw I/O performance. Therefore, viewers get both conceptual context and practical notes about deployment.

Next, the video compares the original release to the new version, showing specific items that were removed or streamlined. For example, the older StoragePool custom resource and bundled telemetry components like Prometheus no longer ship by default. As a result, the platform now runs with a single operator and one CSI driver, which simplifies lifecycle management and reduces reserved node resources. However, the presenter also flags the responsibility that shifts to operators for monitoring and metric collection.

Performance and architectural changes

The most pronounced improvement the video emphasizes is performance. ACStor v2 introduces native support for local NVMe devices and NVMe-over-Fabrics options, which the presenter says can deliver up to seven times higher IOPS and four times lower latency compared to v1 in certain scenarios. In turn, these gains make the storage system a better fit for latency-sensitive workloads like databases, messaging systems, and AI inference. Nevertheless, the speaker stresses that raw performance depends on VM SKU selection and local disk configurations, so careful planning remains essential.

Architecturally, the simplification also aligns volume operations with standard Kubernetes patterns such as StorageClass and PVC. Consequently, administrators can manage persistent volumes using the same kubectl workflows they already use for other resources. Furthermore, the removal of extra CRDs and bundled observability components reduces attack surface and operational overhead. On the other hand, this approach places more integration work on teams that want in-depth telemetry or cross-node aggregation of metrics.
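As a concrete sketch of that Kubernetes-native workflow, the manifest below shows what a StorageClass and PVC pair might look like. The provisioner name, class name, and parameters are illustrative assumptions, not documented ACStor v2 values; only the surrounding Kubernetes API fields are standard.

```yaml
# Hypothetical example: the provisioner and class names below are
# placeholders for illustration; consult the Azure Container Storage v2
# documentation for the actual values.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: acstor-local-nvme            # illustrative class name
provisioner: example.csi.acstor.io   # placeholder provisioner name
volumeBindingMode: WaitForFirstConsumer  # bind only once a pod is scheduled,
                                         # which matters for node-local NVMe
reclaimPolicy: Delete
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fast-data
spec:
  accessModes: ["ReadWriteOnce"]     # node-local block storage
  storageClassName: acstor-local-nvme
  resources:
    requests:
      storage: 100Gi
```

Once applied with `kubectl apply -f`, the claim follows the same lifecycle as any other PVC: `kubectl describe pvc fast-data`, snapshotting via the standard VolumeSnapshot API, and deletion with `kubectl delete`.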

Operational tradeoffs and challenges

While the video praises v2 for lowering complexity, it also highlights important tradeoffs. For instance, relying on local NVMe brings clear latency and throughput advantages, yet it also raises the durability risk profile: data tied to a single node's disks can be lost or become unavailable when that node fails. Therefore, operators must weigh replication strategies, striping choices, and backup routines against the performance benefits and their desired recovery time objectives. The presenter also notes that some configurations may require manual striping across multiple disks to reach target throughput, adding administrative steps.

Another operational change involves the removal of a required three-node minimum: v2 supports single-node and two-node clusters. This flexibility benefits dev/test and smaller production environments, but it also changes how teams handle fault tolerance and node maintenance. Moreover, because v2 drops built-in Prometheus, teams must plan for external monitoring solutions and ensure their chosen approach captures relevant storage metrics. Thus, simplicity in deployment can mean more choices and responsibilities for observability and data protection.
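Because Prometheus no longer ships in the box, teams bringing their own monitoring might start from a scrape configuration along these lines. The job name, namespace, and pod label are placeholder assumptions; the video does not specify v2's actual metrics endpoints, so these values must be verified against the real deployment.

```yaml
# Hypothetical Prometheus scrape job for the CSI driver's metrics.
# The namespace, label selector, and job name are placeholders;
# check the Azure Container Storage v2 docs for real endpoint details.
scrape_configs:
  - job_name: acstor-csi
    kubernetes_sd_configs:
      - role: pod
        namespaces:
          names: [kube-system]        # assumed install namespace
    relabel_configs:
      - source_labels: [__meta_kubernetes_pod_label_app]
        regex: acstor-csi-driver      # placeholder pod label value
        action: keep
```

The broader point stands regardless of the exact values: observability that previously came bundled now has to be an explicit line item in the deployment plan.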

Recommended workloads and limitations

The video suggests clear use cases where v2 shines, especially stateful workloads that need low latency and high IOPS. Relational databases, certain NoSQL engines, message brokers, and AI inference services can gain materially from local NVMe performance. Conversely, workloads that require multi-node synchronous durability or guaranteed cross-node resilience may need additional design work to account for the local-disk model. Therefore, architects should test expected failure modes and validate recovery procedures before adopting v2 in critical environments.

Importantly, the presenter clarifies that there is no automatic migration path from v1 to v2, which creates a planning hurdle for existing deployments. Teams will need to evaluate migration options, potential downtime, and data movement strategies if they want to transition. Additionally, while v2 is open source and free to run beyond AKS, integrating it into varied Kubernetes distributions will require validation and possibly custom tooling for specific infrastructures. As a result, upgrade and cross-platform deployment planning must factor into any adoption timeline.

Looking ahead and practical takeaways

Finally, the video touches on future directions, including the planned integration with Azure Elastic SAN and other Azure storage offerings. These extensions could give teams more flexible, enterprise-grade options while preserving the new performance model. Meanwhile, the current release already provides significant wins in throughput and simplicity that make it attractive for many clusters, provided teams accept the tradeoffs around durability, monitoring, and migration work.

In summary, John Savill's overview gives a balanced view of Azure Container Storage v2, highlighting impressive performance gains and a cleaner architecture while calling out operational responsibilities and migration challenges. Consequently, organizations should pilot v2 with representative workloads, validate failure and recovery scenarios, and plan monitoring and data protection before full-scale rollout. Overall, the video serves as a practical primer for teams deciding whether v2 aligns with their performance needs and operational constraints.


Keywords

Azure Container Storage v2, Azure Container Storage, ACS v2, AKS persistent storage, Azure CSI driver, Azure storage for containers, Cloud-native storage Azure, Managed container storage Azure