The State of Storage
How organizations are navigating cyber-resilience, AI infrastructure, and the fight for data control
Storage infrastructure is at an inflection point. Organizations are racing to adopt AI, bracing for cyber threats, and quietly wrestling with a question that doesn’t make headlines but shapes every decision: who actually controls our data?
A note before we begin: this research was conducted in late 2025 before the Memflation crisis really picked up speed. We discuss the crisis and how it relates to what we’re seeing in the AI and Data Control sections.
Executive Summary
Three Findings That Challenge Assumptions
The confidence gap is real
86% of organizations believe they can recover from a cyber incident, but 37% have never actually tested that assumption.
AI adoption is slower than the hype suggests
Only 16% are running AI workloads in production. The infrastructure pressure most organizations are feeling isn’t from their own AI adoption, it’s from everyone else’s.
On-premises is a choice, not a legacy
Despite the cloud-first narrative, 63% of respondents (teams that actively chose self-managed infrastructure) keep critical data on-premises. In a market where SSD prices have surged 257%, that choice is increasingly economic.
This report unpacks what’s really happening in storage infrastructure, and what it means for the year ahead.
Who We Heard From
Over 600 storage and infrastructure professionals across industries and regions
By Role
Architects and engineers represent 40% of the sample, with IT leaders comprising another 25%. This composition offers insight into how storage decisions are made and implemented at the operational level.
By Industry
Technology leads at 37%, followed by Media & Entertainment (9%) and Education (8%). This reflects the storage-intensive nature of these industries and their early adoption of emerging infrastructure trends.
By Organization Size
Organizations range from lean teams of fewer than ten employees to global enterprises with more than 10,000.
By Region
Respondents span North America, Europe, Asia Pacific, and other regions.
The Confidence Gap
Ask storage professionals whether they could recover critical services after a cyber incident, and the response is reassuring: 86% say they're confident. Only 14% express doubt.
But confidence and capability aren't the same thing.
When we asked how often organizations run recovery tests that simulate an actual cyber event, the picture shifted dramatically. More than a third (37%) don't run these tests at all. Another 20% only test after major changes, a reactive approach that leaves gaps between updates. Only 8% test monthly.
Recovery Testing Frequency
This is the confidence gap: organizations believe they're ready, but most haven't validated that belief under realistic conditions.
86% are confident they can recover from a cyber incident
37% have never tested that assumption
The People Who Know Aren’t the People Who Decide
There’s another dimension to the confidence gap: a disconnect between technical expertise and decision-making authority.
Who has final authority over where sensitive data may be stored?
Only 39% said IT or infrastructure leadership. Business owners hold authority in 29% of cases. Security and compliance (the function closest to understanding the risk) controls the decision in just 16% of cases.
Executives tend to report higher confidence in recovery capabilities than the technical staff who would actually execute that recovery. The people closest to the infrastructure are often the most skeptical, and the least empowered to change it.
This creates a familiar tension. The architects and engineers who understand storage infrastructure most deeply often aren’t the ones making strategic decisions about it. Business priorities, budget constraints, and executive preferences shape where data lives, sometimes over the objections of technical teams who see the risks more clearly.
Ambitious Targets, Passive Practices
Most organizations have set aggressive recovery time objectives. Over 40% are targeting recovery within four hours, and 71% expect to be back online within 24 hours. These aren’t casual aspirations; they’re promises to stakeholders, customers, and regulators.
But ambition without practice is just aspiration.
40%+ target RTO under 4 hours
26% of them have never tested
Many organizations with no RTO also don't test
When we cross-referenced RTO targets with testing frequency, the confidence gap reappeared in stark terms. Among organizations targeting a recovery timeline of under an hour, 22% have never run a recovery test that simulates a cyber event. For those targeting a window under four hours, it’s 26%. Even among the most relaxed recovery targets (those promising recovery in 4-24 hours), 32% have never validated that they can actually do it.
This is the confidence gap made concrete. Organizations are making promises about recovery capabilities they’ve never actually demonstrated. When an incident occurs, the gap between the RTO in the plan and the actual time to recovery could be the difference between a manageable disruption and an existential crisis.
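For teams that want to run the same check on their own data, the cross-tabulation is easy to reproduce. Here's a minimal sketch in Python; the column names and sample rows are hypothetical stand-ins, not our survey's actual schema:

```python
import pandas as pd

# Hypothetical survey export: one row per respondent. The column names
# ("rto_target", "test_frequency") are illustrative, not the actual
# fields from our instrument.
df = pd.DataFrame({
    "rto_target": ["<1h", "<4h", "4-24h", "<4h", "4-24h", "<1h"],
    "test_frequency": ["never", "monthly", "never", "after_changes",
                       "quarterly", "never"],
})

# For each RTO target, what share of respondents falls into each
# testing-frequency bucket?
shares = pd.crosstab(df["rto_target"], df["test_frequency"], normalize="index")

# The "never" column is the confidence gap in one number: the share of
# organizations promising a recovery window they have never rehearsed.
print(shares["never"].round(2))
```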
The Weak Link Isn’t Technology. It’s People.
We asked respondents to identify which part of their environment would most likely fail during a real incident. The top answer wasn’t storage hardware, network infrastructure, or backup systems.
Respondents point to human processes (miscommunication, unclear runbooks, untrained staff) as the most probable failure point.
This is especially relevant in the current environment, where hardware costs have increased significantly and budgets are under pressure. Human capital investments offer meaningful ROI without requiring major capital expenditure. Organizations that know where their weak links are can address them. Organizations that haven’t looked will discover them during an incident.
Closing the Gap
When we asked what single improvement would most strengthen cyber-resilience, the top answer was a return to data storage fundamentals.
The 3-2-1 strategy remains a solid foundation. But modern ransomware attacks don't just encrypt production data, they hunt for backups too. This is where 3-2-1-1-0 extends the foundation. The additional "1" means at least one copy is immutable or air-gapped, completely unreachable, even by administrators. The "0" requires verified recovery testing. For organizations concerned about ransomware, credential compromise, or insider threats, that extra layer is the difference between a recoverable incident and a catastrophic one.
Notably, staff training and drills ranked lower on the priority list, despite being identified as the most likely failure point. Organizations recognize that people are the weak link but aren't prioritizing the investments that would strengthen it. This disconnect represents an opportunity for organizations willing to address it.
The 3-2-1-1-0 Rule
3 copies of data
2 different media
1 copy off-site
1 copy immutable or offline
0 errors on verified recovery tests
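To make the rule actionable, here's a minimal sketch of a policy audit against 3-2-1-1-0. The policy fields below are hypothetical; map them to whatever your backup tooling actually reports:

```python
from dataclasses import dataclass

@dataclass
class BackupPolicy:
    # Hypothetical policy summary; populate from your backup tooling.
    copies: int              # total copies, including production
    media_types: int         # distinct media (disk, tape, object storage)
    offsite_copies: int      # copies stored off-site
    immutable_copies: int    # immutable or air-gapped copies
    last_verify_errors: int  # errors in the most recent verified restore test

def audit_3_2_1_1_0(p: BackupPolicy) -> list[str]:
    """Return the list of 3-2-1-1-0 requirements the policy fails."""
    failures = []
    if p.copies < 3:
        failures.append("fewer than 3 copies of the data")
    if p.media_types < 2:
        failures.append("fewer than 2 different media types")
    if p.offsite_copies < 1:
        failures.append("no off-site copy")
    if p.immutable_copies < 1:
        failures.append("no immutable or offline copy")
    if p.last_verify_errors != 0:
        failures.append("last verified restore test was not error-free")
    return failures

policy = BackupPolicy(copies=3, media_types=2, offsite_copies=1,
                      immutable_copies=0, last_verify_errors=0)
for failure in audit_3_2_1_1_0(policy):
    print("FAIL:", failure)
```

The point of a check like this isn't the code; it's that every requirement becomes a yes-or-no question with an owner, rather than a diagram on a slide.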
These investments require time and attention, and in the short term, they can feel expensive. But the cost of building this discipline proactively is nowhere near the cost of building it reactively. Organizations that wait until an incident forces the issue will learn the same lessons, just at a much higher price: during an actual crisis, under pressure, with real consequences.
The confidence gap isn't a technology problem. It's a discipline problem. Organizations don't need new tools to address it. They need regular recovery drills, documented procedures, and clear ownership.
The AI Reality Check
AI is driving a storage crisis before most organizations even use it.
Hyperscale AI buildouts have triggered what analysts call the “Memory Super Cycle,” a structural supply crisis that’s sent storage costs soaring.
| Metric | Before | After |
|---|---|---|
| 30TB Drive Price | $3,000 | $11,000 |
| Flash-to-HDD Multiple | 6.2x | 16.4x |
| NVMe Lead Times | 4–6 weeks | 40–52 weeks |
| HDD Availability | Available | 2-year backorder |
And this is just the beginning.
Our survey found that only 16% of respondents have AI workloads running in production or consider AI critical to their business. Over a third describe themselves as being in "early experiments," testing capabilities without committing to production deployment. Another 21% are running pilots in test environments. And a full quarter of respondents report no AI use at all.
AI Adoption Stage
16% have AI in production or consider it business-critical
Over a third are in early experiments
21% are testing AI in pilot programs
25% report no AI use at all
"AI is extensively adopted with 88% of organizations using AI in at least one function, but many remain in early scaling or pilot phases."
And if most organizations are still in early experimentation, the demand surge is just getting started.
Our data mirrors what McKinsey and other industry analysts are finding: most enterprise AI use remains in pilot or early phases. As that 84% moves from experimentation to deployment, they’ll add their own demand to a supply chain already strained by hyperscaler consumption.
Data Gravity Makes It Worse
In a stable market, organizations could respond to rising costs by moving workloads. But data doesn’t move easily.
How often do you move large datasets to another environment? The most common answer: rarely or never.
When you move large datasets, which concern dominates? Transfer time tops the list.
A large share of respondents also run their AI workloads on-premises.
This is data gravity in action. Over time, this creates accidental architecture and accidental lock-in. The more data accumulates in one place, the harder it becomes to move, and the more dependent the organization becomes on that environment.
For organizations in early AI experimentation, this is worth noting. The choices made now about where to store training data will constrain options later, not just technically, but economically. When flash prices are 2-3x higher and your data is anchored to a single environment, "we'll optimize costs later" stops being a strategy.
Data gravity always compounds. In 2026, it compounds at 257% inflation.
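To see how that compounding plays out, here's a back-of-the-envelope sketch. Every input (dataset size, growth rate, egress fee, link speed) is an illustrative assumption, not survey data:

```python
# Back-of-the-envelope data-gravity model: as a training dataset grows,
# the one-time cost and wall-clock time of moving it grow with it.
# All inputs are hypothetical; substitute your own contract terms.

dataset_tb = 50            # training data today, in TB
annual_growth = 0.8        # 80% yearly growth while experimenting
egress_per_gb = 0.09       # $/GB egress fee (illustrative list price)
link_gbps = 10             # dedicated transfer link, gigabits/second

for year in range(4):
    size_gb = dataset_tb * 1024 * (1 + annual_growth) ** year
    egress_cost = size_gb * egress_per_gb
    transfer_days = (size_gb * 8) / link_gbps / 3600 / 24  # GB -> gigabits
    print(f"Year {year}: {size_gb / 1024:,.0f} TB, "
          f"~${egress_cost:,.0f} egress, ~{transfer_days:.1f} days to move")
```

Under these assumptions, the one-time cost of leaving roughly triples every two years. That is the practical meaning of data gravity: the exit price is set by how long you waited.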
For AI, Ownership Is Operational
When we asked what factor most often decides where AI workloads are placed, data sovereignty edged out cost at 31% to 27%. This isn’t just about compliance. Organizations are asking harder questions: Who controls our training data? What happens if we need to switch providers?
Regional Differences
35% of European respondents cite sovereignty as their top placement factor, compared to 28% in North America.
Industry Differences
38% of financial services and 37% of government prioritize sovereignty, compared to just 16% in media.
Training data represents years of institutional knowledge, proprietary processes, and competitive differentiation. Models built on that data inherit its value and its sensitivity.
When training data lives in infrastructure you don’t control, you’re also creating dependencies that compound. The more you train, fine-tune, and iterate, the harder it becomes to move. If pricing changes or terms shift, you’re not just migrating files, you’re untangling the foundation of models you’ve spent months building.
Most organizations are still experimenting, not deploying, which means the real infrastructure demands are still ahead.
Three takeaways for infrastructure teams:
- The window is narrowing. Only 16% are running AI in production, but storage costs have already jumped 257%. The time to plan is now. Waiting means paying more for the same capacity.
- Data placement is an economic decision. Where you store training data determines what you'll pay to scale or migrate later. Data gravity is easier to avoid than escape.
- Ownership enables optionality. Organizations that control their training data and infrastructure can adapt as costs and requirements shift. Those who don't will find their options narrowing as their data grows.
Who Controls the Data
On-premises is more prevalent than the narrative suggests.
The dominant industry narrative says cloud has won. Our data complicates that story, with a caveat.
When we asked who has custody of respondents’ most critical data, 63% said it’s fully on-premises, operated by their own team. Only 8% have handed custody to a cloud vendor.
Who has custody of your most critical data?
This likely overstates the broader market. Our respondents skew toward organizations already invested in self-managed infrastructure. Industry-wide, cloud adoption is certainly higher. But these are teams that actively chose to manage their own infrastructure even as cloud options mature. The question is why organizations with the expertise to evaluate both paths are still betting on direct control.
In 2026, the answer increasingly comes down to economics.
Cost Has Always Been the Constraint
When we asked respondents to name their biggest constraint when designing new data solutions, 41% said cost, more than twice the next closest factor. Data sovereignty (17%) and cyber-resilience requirements (18%) matter, but they're optimized within whatever budget allows.
This isn't new. Cost pressure has shaped infrastructure decisions for decades. What's new is the intensity.
Organizations that were already stretching budgets are now facing a market where the same capacity costs dramatically more and may not be available at all.
Biggest Constraint When Designing New Data Solutions
38% selected cost control and avoiding vendor lock-in as a governance concern.
When Broadcom acquired VMware in 2023, customers reported price increases of 8x to 15x. One described it as "being held for ransom." Migrations can take years. When switching is hard, the vendor sets the terms.
When we asked respondents to select their data governance priorities, 38% chose “cost control and avoiding vendor lock-in.” It wasn’t the top choice, but it was remarkably consistent across organizations of all sizes. The concern is widespread, even if it’s not always the loudest voice in the room.
That was true before the cost surge. Now it’s worse.
Flexibility Is the Competitive Advantage
Sovereignty and cost used to feel like separate priorities. The 2026 market has collapsed that distinction. When storage prices surge, control over your infrastructure is cost control.
Organizations with flexible, self-owned infrastructure can respond to volatility in ways their competitors can’t:
- Shifting to cost-effective media as the flash-to-HDD multiple widens
- Sourcing hardware from multiple suppliers when lead times stretch past a year
- Scaling without per-TB licensing or unpredictable egress fees
- Modeling next year’s budget without waiting to see what their provider decides to charge
Three takeaways:
- On-premises is a strategic choice. Organizations with the expertise to evaluate both paths are choosing direct control, not out of inertia but as insulation from cost volatility and supply chain disruption.
- Cost pressure accelerates lock-in. Every consolidation decision made under budget constraints deepens dependency. The Broadcom lesson applies broadly: when switching is hard, you pay whatever the vendor charges.
- Flexibility is leverage. Vendor-agnostic hardware, open platforms, and hybrid architectures aren't just cost optimizations. They're strategic insurance against a market that's getting more volatile, not less.
What Enterprise Users Do Differently
How storage challenges evolve with scale
While this survey draws heavily from smaller organizations, 15% of respondents work in organizations with more than 1,000 employees. Comparing these larger organizations reveals how challenges evolve with scale.
More Confident, and Better Validated
43-48% of large orgs (1K+) describe themselves as "very confident" in recovery, compared to 32% of smaller organizations.
This confidence appears grounded in practice: only 28-29% of large organizations never test, compared to 38% of those under 1,000 employees.
AI Adoption Accelerates
29-33% of large organizations have AI in production or consider it critical, compared to just 13% of smaller organizations.
Only 7-11% of large organizations report no AI use, versus 29% of those under 1,000 employees.
Governance Formalizes
At 10K+ organizations, security or compliance leadership holds authority 36% of the time, a 2.5x increase from smaller orgs. Data governance moves from informal business decisions toward formalized security functions.
Constraints Shift
At the largest organizations, cost drops to 27% while latency and performance rise to 32%, the only segment where performance outranks cost. They often have procurement leverage that mid-sized companies lack.
The Mid-Market Squeeze
The 1K–10K segment is in the toughest position. Cost pressure remains high (46%), but concerns about cybersecurity spike to 25%, higher than in either smaller or larger organizations.
These organizations face enterprise-grade threats and enterprise-grade storage demands, but without enterprise-grade procurement leverage. They're large enough to be targets, not large enough to secure preferential supply terms. In 2026, this segment feels the squeeze most acutely.
Looking Ahead
What practitioners predict for the next three years
We asked respondents to share their predictions for how storage will change. The open-ended responses clustered around recurring themes.
AI as Fundamental Driver
Storage will shift "from passive warehouse to active intelligence layer for AI." Infrastructure will need to become smarter, more integrated with compute, and capable of supporting real-time analytics.
Push Toward Local Control
"More local, less third-party dependent" was a common sentiment. Several predicted that improvements in local storage density would accelerate movement of data from cloud back to on-premises.
Security as Default
Encryption in transit and at rest will become "expected by default." Versioning and immutability will move from best practice to baseline requirement. What's optional today will be table stakes tomorrow.
No Relief on Costs
Notably absent from most predictions: relief on costs. Practitioners understand the current pricing environment isn't a temporary spike. Organizations planning for 2027+ are assuming storage will remain expensive.
These predictions take on new weight in the current cost environment. When cloud costs compound unpredictably and hardware supply is constrained, local control isn't just a preference, it's a potential hedge. The organizations that will thrive aren't waiting for clarity. They're building flexibility before the market forces their hand.
Three Themes to Watch
Close the Confidence Gap
The disconnect between cyber-resilience confidence and actual testing practices is the most actionable finding in this report. But the stakes have changed. In a supply-constrained market, you can't buy your way out of a failed recovery. Organizations that haven't validated their recovery capabilities are carrying more risk than they were a year ago. The fix isn’t new technology. It’s discipline.
Waiting Has a Price
AI adoption is slower than headlines suggest, and most organizations still have time to plan. But that time isn't free, and it's getting more expensive. Enterprise SSD prices are up 257% and relief isn't expected until late 2027. Every quarter of delay is a quarter of higher costs.
The action isn't to rush adoption. It's to lock in infrastructure flexibility now while options still exist.
Flexibility Is the Strategy
The organizations best positioned won't be the ones with the most storage or the fastest AI adoption. They'll be the ones with options. When cloud costs compound unpredictably and proprietary vendors re-price at will, the ability to move is power. Sovereignty, ownership, and avoiding lock-in aren't abstract principles; they're the difference between absorbing cost increases and having them dictated to you.
The organizations that thrive will be the ones that built those options before they needed them.
