As the saying goes, all roads once led to Rome.
Over the past decade, many CIOs would have said the same about the cloud. Every transformation initiative, every modernization project, and every roadmap seemed to point in that direction.
The cloud became the modern Rome: the destination where every digital ambition converged.
But in 2026, that trend is fading.
Companies are no longer racing upward without looking back. CIOs and CFOs are pausing and asking tougher questions this time:
- What really needs to run in the cloud?
- Which workloads perform better or cost less on-premises?
- Do we still want to depend on one hyperscaler, or is it time to think hybrid and multicloud?
- And what is the smartest way to scale AI without blowing up compute and storage budgets?
This does not mean the cloud has lost its central role in digital transformation. Far from it. But the “all to the cloud” era is over. In a time of economic uncertainty and skyrocketing demand for computing power driven by AI, leaders are carefully weighing every move.
CIOs understand that public cloud is not always the most cost-effective or compliant option, especially for data-heavy or regulated workloads. Many are now looking again at private cloud setups, both on-premises and hosted, to gain more control over costs, performance, and data privacy. In fact, Forrester predicts that four out of five enterprise cloud leaders will increase their investments in private cloud by 20% this year.
Other trends are also emerging, such as repatriation: not of entire systems, but of selective workloads. The focus now is on protecting sensitive data, maintaining digital sovereignty, meeting regional regulations, and avoiding the costly assumption that cloud is always more efficient than owning your own infrastructure.
The elephant in the room is clear. Times are changing, and AI is not the only reason behind it… though it certainly plays a major part.
This article explores the conversations happening inside boardrooms today as executives search for the smartest way forward in their cloud migration strategy for 2026.
I. The End of the “All to the Cloud” Era
The cloud pressure is back… but it’s different
The pressure to modernize through the cloud is rising again, but it no longer looks like the wave we saw a few years ago.
In its first phase, cloud migration was fueled by the promise of agility, faster innovation, and a belief that moving workloads out of data centers automatically meant progress. Every modernization plan pointed upward. The goal was simple: leave the legacy behind and embrace the elasticity of the public cloud.
But now, that narrative is gaining more nuance, and that’s a good thing. Organizations are still under pressure to transform, yet the drivers have changed. AI has created a whole new demand for computing power; that’s a fact. Gartner even predicts that by 2029, around half of all cloud compute resources will be used for AI workloads. That’s huge. It’s also forcing many teams to take a hard look at whether their current infrastructure can really handle the volume of data processing and model training that AI requires.
At the same time, the economic reality has shifted. Many CIOs are realizing that scalability doesn’t always mean savings, and that flexibility has to go hand in hand with governance and cost control.
What has emerged is a new phase of rationalization. Companies are reviewing where their systems and data are stored, but that does not mean they are suddenly abandoning the cloud. The point is to think carefully, with a calculator and a pencil on hand, to decide what makes the most sense from both a financial and a technical point of view.
On the financial side, the goal is to find the right balance between cost, performance, and predictability. On the technical side, it is about protecting sensitive data and making sure the infrastructure can support the growing computing demands of AI.
Repatriation, for example, is a growing trend. According to IDC, around 80 percent of organizations expect to repatriate at least some compute or storage resources this year. This shows how things are changing, because repatriation is no longer seen as a retreat or a failure of cloud strategy. It is a way to gain efficiency, improve performance, and adapt better to what the business really needs. Moving some workloads back on-premises can reduce latency, strengthen compliance, or simply align resources with business value.
This re-evaluation also stems from growing concerns around digital sovereignty, data gravity, and latency sensitivity. Some workloads benefit from the proximity and predictability of private environments, while others still rely on the global scale of hyperscalers. The result is not an exodus from the cloud, but a search for balance, with companies now adopting different forms of hybrid ecosystems.
The difference now is philosophical as much as technical. Cloud adoption used to be an ideology, a declaration of modernization. Today, it is a strategy that demands precision, transparency, and clear alignment with financial and regulatory priorities.
For CIOs and CFOs, this means asking new questions before making the next migration move.
II. The New Drivers of Cloud Decisions
The real economics of cloud: visibility, cost, and control
The economic equation has changed dramatically. If the first decade of cloud adoption was defined by the promise of flexibility, the next will be defined by accountability. In the rush to modernize, many organizations underestimated the real cost of operating in the public cloud. Forrester’s Tracy Woo summed it up: “In the rush to the public cloud, a lot of people didn’t think about pricing.”
Hidden expenses such as egress fees, inter-region transfers, and escalating compute costs have made some workloads more expensive in the cloud than on-premises. In data-heavy use cases, such as AI training, data movement alone can represent up to 45 percent of total project costs. And as AI workloads become mainstream, compute inflation is reshaping cloud bills. GPU demand, storage replication, and data retrieval for model tuning now account for a growing share of enterprise budgets, forcing CIOs to redefine what “scalability” really costs.
And the stakes are only growing. According to McKinsey, the global demand for compute power will require nearly $7 trillion in data-center investments by 2030, with $5.2 trillion dedicated to AI workloads alone. That projection underscores how AI is turning compute into one of the world’s most valuable (and expensive) resources.
Electric bills are becoming a boardroom discussion topic
For years, electricity was treated as background noise in the cloud debate — an invisible cost folded neatly into the invoice. But as AI workloads multiply, that quiet hum of power has turned into a roar loud enough to reach the boardroom.
Energy consumption has become both a financial and strategic concern. Hyperscalers’ massive data centers still benefit from economies of scale and access to renewable energy, which makes them far more efficient than local facilities. Yet, even they are starting to feel the strain. According to Bloomberg, electricity prices around major data-center hubs have surged by up to 267% in the last five years, and demand for grid power in the U.S. is projected to grow 22% in 2025 alone, adding roughly 11.3 GW to the system. (Pew Research Center)
The warning lights are starting to blink. When you buy cloud (public or private), the energy cost is already baked into your bill. But as Forrester analysts Lee Sustar and Charlie Dai point out in The Commodity Cloud Era Is Over — The AI-Native Cloud Is Here:
“Every public cloud customer is already funding this AI transformation. You are already paying either directly for managed AI services or indirectly through standard cloud bills. Those commodity services cost less to deliver now, thanks to billions saved by extending server lifespans — savings that hyperscalers are plowing straight into AI infrastructure.”
As AI’s appetite for computation and cooling grows, those same bills could rise in ways few CFOs have yet modeled. At some point, hyperscalers may begin passing on their rising electricity costs to customers.
On-premises setups, meanwhile, offer the illusion of control. In practice, measuring and managing real power usage can be a blind spot. Electricity costs are often difficult to track, especially now that power-hungry AI workloads are new to many organizations. Training large models, for instance, can drive consumption spikes that inflate utility bills and surprise financial planners.
The result is a new kind of tension in the C-suite. Every teraflop of compute has a wattage behind it, and every watt has a price. The cloud no longer floats above the financial discussion; it’s plugged directly into it. Energy has quietly become the hidden currency of the AI era.
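To make that wattage-to-dollars link concrete, here is a back-of-the-envelope sketch in Python. Every figure in it (GPU count, power draw, PUE, electricity rate) is an illustrative assumption, not vendor data.

```python
# Back-of-the-envelope estimate of the electricity cost of a training run.
# All figures below are illustrative assumptions, not vendor data.

def training_energy_cost(
    num_gpus: int,
    gpu_power_kw: float,   # average draw per GPU under load, in kW
    hours: float,          # wall-clock duration of the run
    pue: float,            # power usage effectiveness of the facility
    price_per_kwh: float,  # contracted electricity rate
) -> float:
    """Return the estimated electricity cost in dollars."""
    energy_kwh = num_gpus * gpu_power_kw * hours * pue
    return energy_kwh * price_per_kwh

# Hypothetical: 512 GPUs at ~0.7 kW each, a two-week run,
# a facility PUE of 1.3, and $0.12 per kWh.
cost = training_energy_cost(512, 0.7, 24 * 14, 1.3, 0.12)
print(f"Estimated electricity cost: ${cost:,.0f}")
```

Even this crude model makes the point: a single sustained run lands in the tens of thousands of dollars of electricity alone, which is exactly the kind of spike that surprises financial planners.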
CAPEX vs. OPEX: rethinking the financial architecture of the cloud
At the heart of this economic rethink is a fundamental accounting shift: from CAPEX to OPEX.
Traditionally, on-premises environments required heavy capital expenditures (CAPEX): purchasing servers, storage, networking equipment, and energy infrastructure. These assets depreciated over years but offered predictability: companies owned their hardware and could optimize usage over time.
Cloud computing, by contrast, introduced a pay-as-you-go operating expense (OPEX) model. It removed upfront investment and allowed organizations to scale capacity as needed, transforming IT from a fixed asset into a variable cost.
That flexibility was revolutionary, but it came at a price. Without tight governance, OPEX can spiral faster than CAPEX ever did. Cloud bills fluctuate monthly, often unpredictably, and the more data-intensive AI workloads become, the higher the exposure.
The new goal is to decouple flexibility from volatility, keeping the agility of OPEX while restoring the predictability of CAPEX. Hybrid and private-cloud models, coupled with FinOps discipline, are becoming the sweet spot.
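As a rough illustration of that trade-off, the sketch below compares a hypothetical five-year TCO for owned hardware against a growing monthly cloud bill. All figures are invented planning assumptions, not benchmarks.

```python
# Minimal sketch of the CAPEX-vs-OPEX trade-off described above.
# Every number here is a hypothetical planning assumption.

def capex_tco(hardware_cost: float, annual_opex: float, years: int) -> float:
    """Own the hardware: one upfront purchase plus steady running costs."""
    return hardware_cost + annual_opex * years

def cloud_tco(monthly_bill: float, annual_growth: float, years: int) -> float:
    """Rent capacity: a monthly bill that compounds as usage grows."""
    total, bill = 0.0, monthly_bill
    for _ in range(years):
        total += bill * 12
        bill *= 1 + annual_growth
    return total

# Hypothetical: $2.4M of hardware plus $300k/yr to run it, versus a
# $90k/month cloud bill growing 15% a year, over five years.
print(f"CAPEX 5-yr TCO: ${capex_tco(2_400_000, 300_000, 5):,.0f}")
print(f"Cloud 5-yr TCO: ${cloud_tco(90_000, 0.15, 5):,.0f}")
```

The interesting variable is the growth rate: with flat usage the cloud bill stays competitive, but compounding AI-driven growth is what flips the equation, which is why hybrid models try to keep the volatile part small.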
FinOps becomes the CFO’s new best ally
With the volatility introduced by AI workloads, waiting until the end of the quarter to understand what went wrong can be devastating.
One unexpected spike in compute or storage use can distort an entire quarter’s balance. That’s why cloud management must be built in from the start.
McKinsey found that the biggest mistakes in cloud migration often come from weak or delayed FinOps capabilities. Many companies postpone cost-governance practices until their annual cloud spend surpasses $100 million, by which point inefficiencies are already deeply embedded. Others treat FinOps as a purely technical function, leaving CFOs and business units out of the conversation until it’s too late. This late involvement turns cloud transformation into a reactive, rather than strategic, effort.
FinOps is designed to prevent exactly that. It helps organizations move from “spend first, analyze later” to “measure, optimize, and scale responsibly.”
McKinsey found that the potential impact of FinOps goes beyond cost tracking:
- Most companies still have 10–20% in untapped cloud savings.
- Automating FinOps could unlock up to $120 billion in global value.
- Mature programs cut cloud waste by about 30% each year, according to the FinOps Foundation.
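A minimal sketch of what “measure, optimize, and scale responsibly” can look like in practice: a trailing-average check that flags daily spend spikes before they distort a quarter. The window, threshold, and cost figures are illustrative.

```python
# Sketch of a FinOps-style spend check: flag daily cost spikes before
# they distort a quarter. Thresholds and figures are illustrative.

def flag_anomalies(daily_costs: list[float], window: int = 7,
                   threshold: float = 1.5) -> list[int]:
    """Return indices of days whose cost exceeds `threshold` times
    the trailing `window`-day average."""
    anomalies = []
    for i in range(window, len(daily_costs)):
        baseline = sum(daily_costs[i - window:i]) / window
        if daily_costs[i] > threshold * baseline:
            anomalies.append(i)
    return anomalies

# A steady ~$10k/day bill with one sudden spike on day 9.
costs = [10_000, 10_200, 9_900, 10_100, 10_050, 9_950, 10_000,
         10_100, 10_000, 26_000, 10_050]
print(flag_anomalies(costs))  # → [9] (the spike day)
```

Real FinOps tooling is far richer than this, but the principle is the same: the alert fires on day nine, not at quarter close.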
III. Technical Factors
The risk of vendor lock-in
As cloud ecosystems grow more complex and deeply integrated with AI services, vendor lock-in has reemerged as one of the top strategic risks for CIOs. Gartner’s risk report lists dependency on a small group of hyperscalers (AWS, Microsoft Azure, and Google Cloud) as a “significant emerging risk.” The concern isn’t just cost; it’s control.
When business-critical workloads depend entirely on one provider’s infrastructure, a single outage, price shift, or compliance change can ripple across the entire organization.
That’s why many executives are adopting what analysts call “pragmatic multicloud” strategies. Instead of chasing redundancy for its own sake, companies are designing architectures that preserve freedom of movement (containerizing applications, adopting open APIs, and using Infrastructure-as-Code to ensure workloads can migrate without major rework). As IHG Hotels’ CIO Eric Norman explains, “Lock-in has always been there, but the difference now is we design for continuity.”
At the same time, some workloads still benefit from the simplicity of single-vendor environments, where tight integration can drive performance and speed to market. The key, experts say, is not avoiding lock-in entirely—it’s choosing where it’s worth it. Modern FinOps practices and hybrid architectures make that choice more deliberate, allowing CIOs to trade a bit of dependency for the right balance of efficiency, cost predictability, and resilience.
Digital sovereignty and compliance take the spotlight
Cloud strategy in 2026 is as much about geography as technology. With new regional regulations on the rise, digital sovereignty has become a key factor in determining where and how companies store and process information.
In simple terms, digital sovereignty means keeping control over your data — ensuring it’s stored, processed, and governed under the laws of your own country or region, rather than being subject to foreign jurisdictions or opaque third-party handling. For global enterprises, that means rethinking where workloads reside and who ultimately has access to them.
At the same time, cloud providers are introducing innovations such as Confidential Computing, which allows workloads to remain encrypted even during processing, and AI-powered threat detection to improve resilience across distributed systems. These capabilities are making it possible for regulated industries to stay in the cloud while meeting strict governance standards.
The message is clear: sovereignty and compliance are not blockers anymore, but they must be designed into the architecture from the start.
AI’s technical bottleneck: throughput becomes the new battleground
Another discussion emerging inside boardrooms and IT teams today revolves around the technical performance needed to make AI truly effective. The focus is no longer only on GPUs or raw computing power. The real challenge now is data throughput.
As many cloud and infrastructure leaders explain, when storage systems cannot feed GPUs fast enough, compute capacity sits idle and budgets quickly evaporate. That simple reality captures one of the most expensive inefficiencies in the AI era.
This shift is forcing organizations to rethink how they design and scale their infrastructure. In the first wave of cloud migration, the goal was scalability and elasticity; essentially getting more compute power on demand. In the AI-driven phase, performance is defined less by how much capacity a company can rent and more by how efficiently data moves across hybrid environments. A slow pipeline can turn even the most advanced GPU cluster into an expensive waiting room.
For that reason, many enterprises are modernizing their data layer as part of their broader cloud strategy. They are investing in NVMe-based storage, edge and micro data centers, and high-bandwidth interconnects that keep data closer to where it is processed. This approach reduces latency, lowers egress fees, and ensures GPUs stay active rather than idle.
It also connects directly to the broader conversation about where workloads should be. The need for proximity and performance is leading CIOs to bring certain processes back on-premises or into private clouds when that closeness delivers better efficiency and control.
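A quick model makes the financial side of the throughput argument tangible: when storage bandwidth falls short of what the GPUs need, the gap turns into paid-for idle time. The cluster size, hourly rate, and bandwidth numbers below are hypothetical.

```python
# Rough model of the throughput bottleneck: if storage can't feed the
# GPUs, the shortfall shows up as paid-for idle time.
# All figures are illustrative assumptions.

def gpu_utilization(required_gbps: float, storage_gbps: float) -> float:
    """Fraction of time GPUs are actually busy, capped at 1.0."""
    return min(1.0, storage_gbps / required_gbps)

def idle_cost_per_hour(num_gpus: int, gpu_hourly_rate: float,
                       utilization: float) -> float:
    """Money spent each hour on GPUs that are waiting on data."""
    return num_gpus * gpu_hourly_rate * (1 - utilization)

# Hypothetical cluster: 64 GPUs at $3/hr that need 40 GB/s of input,
# fed by a storage tier that sustains only 25 GB/s.
util = gpu_utilization(required_gbps=40, storage_gbps=25)
print(f"Utilization: {util:.0%}")
print(f"Idle spend:  ${idle_cost_per_hour(64, 3.0, util):,.2f}/hr")
```

In this toy scenario more than a third of the GPU bill buys nothing but waiting, which is why upgrading the data layer often pays for itself faster than adding more compute.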
IV. Building the Right Cloud Mix
Public, private, or on-premises: what makes sense in 2026
After a decade in which the public cloud seemed like the only destination, new alternatives are opening up for executives. And AI has made this shift inevitable.
Not only have costs come under scrutiny, as we saw earlier, but the technical and strategic aspects of cloud use are being rethought as well. The goal is no longer to modernize for the sake of modernization, but to evaluate every factor carefully: ensuring data security, reducing latency, optimizing performance, controlling energy consumption, and maintaining regulatory compliance without sacrificing business agility.
That’s one of the reasons behind the growth of what analysts call The Great Repatriation. It sounds dramatic, but it’s not a return to the data centers of 20 years ago. It’s a sign that companies are getting smarter about their architectures. According to IDC’s report Assessing the Scale of Workload Repatriation, 80% of organizations expect to move at least some compute or storage workloads out of the public cloud in 2025.
The motivations are clear. As Stanley Mwangi Chege, CEO of Digital Transformation Experts, explains, boards are demanding tighter control over costs, privacy, and compliance. AI has amplified those concerns — not because the technology itself is flawed, but because it introduces new risk vectors. Many IT leaders worry about sensitive data being used to train public models or crossing regulatory boundaries. In highly regulated sectors, that’s a line you can’t cross.
Still, moving some workloads back on-premises doesn’t mean giving up the flexibility that made the cloud attractive in the first place. The new generation of as-a-service offerings is bridging that gap, bringing cloud-like scalability and consumption models into private environments. As IDC’s Natalya Yezhkova points out, this approach “has become an essential part of hybrid and multicloud strategies.”
The hybrid model is no longer a temporary stop. Recent market analyses show that hybrid cloud adoption is accelerating, with global spending expected to grow at double-digit rates through 2025 as enterprises blend public and private infrastructures to balance agility and control.
V. The Human and Organizational Levers
Beyond the economic and technical factors, cloud migration is still a human challenge. These programs are complex. They require specialists who understand both the legacy environments being replaced and the modern architectures taking their place. But they also depend on something less visible and far harder to master: coordination.
Most organizations don’t have all the expertise they need in-house. That’s why they rely on partners. And to get the most out of these collaborations, defining clear responsibilities and scopes of work becomes essential.
In the following sections, we’ll look at both sides of this equation: first, the growing cloud skill gap; and second, how companies are learning to coordinate more effectively with the partners who help close it.
A. The cloud skill gap widens in 2026
The demand for cloud expertise keeps growing, but the talent pool isn’t keeping up. According to TechTarget, IDC estimates that more than 90 percent of organizations will face IT skill shortages by 2026, costing them around 5.5 trillion dollars worldwide. A significant portion of that gap lies in cloud skills.
And it’s not slowing down. Gartner projects that global spending on public cloud services will hit 723 billion dollars in 2025, a 21 percent jump from last year. Yet many organizations can’t find enough qualified professionals to manage that growth.
The consequences are everywhere. Projects stall halfway. Deployments stay unstable. Teams spend more time fixing problems than improving systems. And as TechTarget notes, many organizations are forced to delay innovation just to keep things running.
Cloud roles today go far beyond managing servers. Engineers now need to juggle multi-cloud environments, implement FinOps practices, and navigate complex compliance and digital sovereignty requirements. Add AI into the mix, and the pressure multiplies.
B. The partner equation
But what happens when a partner joins the project to help close the skills gap?
Suddenly, more hands are in the mix — and that makes coordination even more complex. Without a clearly defined strategic plan, responsibilities can overlap, and critical tasks can fall through the cracks.
That’s where well-established frameworks come in. Tools like RACI, DACI, or RAPID can help map out who is Responsible, Accountable, Consulted, and Informed at each stage of the migration. These frameworks bring structure to collaboration and ensure every stakeholder knows their exact role and scope.
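For illustration, a RACI matrix can be treated as plain data and checked automatically against the rule that makes the framework work: exactly one Accountable party per task. The tasks and parties below are hypothetical.

```python
# Toy sketch of a RACI matrix as data, validated against the one rule
# that keeps the framework honest: exactly one Accountable per task.
# Tasks and parties here are hypothetical.

raci = {
    "Discovery & assessment":  {"CIO": "A", "Partner": "R", "CFO": "C"},
    "Workload classification": {"CIO": "A", "Partner": "R", "Security": "C"},
    "Cutover execution":       {"Partner": "R", "Ops": "A", "CIO": "I"},
}

def validate(matrix: dict) -> list[str]:
    """Return the tasks that violate the single-Accountable rule."""
    return [task for task, roles in matrix.items()
            if list(roles.values()).count("A") != 1]

print(validate(raci))  # → [] (every task has exactly one 'A')
```

A check this simple catches the two most common RACI failures in migrations: tasks with no owner, and tasks with two.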
At Inclusion Cloud, for instance, our team has developed a guide on how to apply a RACI matrix specifically for SAP migrations — a timely resource, as many SAP products are reaching end of support and migration pressure is rising. This approach helps companies align strategic goals with operational execution and avoid confusion during high-stakes transitions.
In the end, the success of a cloud strategy doesn’t just depend on technology. It depends on people: how they work together, how they communicate, and how clearly they understand the mission.
The 7Rs + 1: A Decision Compass for a Complex Cloud Landscape
In a time when the decision tree for cloud strategy keeps growing new branches, executives need a clear framework.
The 7Rs framework has long been a trusted playbook for cloud migration strategy. For years, it has helped organizations classify workloads and decide how to move them. But 2026 demands a broader perspective. The decision tree for cloud strategy has never been more complex. Companies can now go multicloud, hybrid, sovereign, or even pursue selective repatriation. And we shouldn’t forget the new dimensions AI has added to computing needs (data throughput, latency, cost volatility, and governance).
That’s why we see value in thinking of the model as the “7Rs + 1.”
- Rehost: Move applications to the cloud with minimal changes—a fast way to exit legacy environments.
- Replatform: Make light optimizations (like changing databases or OS) for better scalability and cost-efficiency.
- Refactor: Redesign and modernize applications to unlock full cloud-native benefits.
- Repurchase: Replace legacy software with SaaS solutions that offer built-in upgrades and agility.
- Retain: Keep certain workloads on-premises due to compliance, performance, or cost reasons.
- Relocate: Move complete environments—often virtual machines—to a new infrastructure with minimal rework.
- Retire: Decommission outdated systems and free up resources for innovation.
And then comes the +1: Rethink, the new imperative for the AI era.
Rethink means more than selecting a migration strategy. It’s about treating the migration as a strategic inflection point. It’s about questioning how the business can reduce costs, generate new revenue streams, and build an architecture ready for AI. In a world where data drives the value, Rethink asks: How will your infrastructure enable insight, not just output? How will your migration prepare you for new services, faster decisions, and smarter operations? For example: training large models, scaling inference, managing data flows, optimizing cost per teraflop — these emerge as core design considerations, not afterthoughts.
In short, while the 7Rs remain a historic framework for modernization, the “+1” adds the future-proofing lens. It shifts the conversation from “how do we move?” to “how do we win in the new digital economy?”
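As a discussion aid, the 7Rs can even be sketched as a simple rules-based helper that maps a few workload traits to a candidate strategy. The rules below are a deliberate simplification for illustration, not a substitute for a real workload assessment.

```python
# Illustrative decision helper for the 7Rs: map a few workload traits
# to a candidate strategy. The rules are a simplification for
# discussion, not a real assessment methodology.

def suggest_r(workload: dict) -> str:
    if workload.get("end_of_life"):
        return "Retire"
    if workload.get("regulated_data") or workload.get("latency_sensitive"):
        return "Retain"
    if workload.get("saas_alternative"):
        return "Repurchase"
    if workload.get("cloud_native_candidate"):
        return "Refactor"
    if workload.get("needs_light_tuning"):
        return "Replatform"
    if workload.get("vm_based"):
        return "Relocate"
    return "Rehost"

# A latency-sensitive AI inference service stays close to its data:
print(suggest_r({"latency_sensitive": True}))   # → Retain
# A commodity HR system moves to SaaS:
print(suggest_r({"saas_alternative": True}))    # → Repurchase
```

The “+1” is precisely what a rule table cannot capture: Rethink asks what the business should become, a question no classifier answers for you.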
VI. Conclusion: Finding Balance Beyond the Ideology
In 2026, the mission for CIOs and CFOs is no longer just to migrate to the cloud. It is to extract real value from it.
Many organizations have completed their modernization journeys, only to realize that being cloud-enabled doesn’t necessarily mean being future-ready. Systems may now live in the cloud but still operate with old thinking, outdated governance, and legacy data flows. The next frontier is clear: embedding intelligence into every workflow, every decision path, and every business model. According to analysts, by 2026 more than 80 percent of enterprises will have integrated AI and machine learning into their mission-critical processes.
But progress brings its own challenges. A recent industry report found that nearly one in three IT leaders believes that half of their cloud spending is wasted. The reasons are familiar: limited visibility, misaligned culture, and fragmented governance.
At the same time, the role of the CIO is evolving. Boards no longer ask if a company is cloud-ready—they already assume it is. The real question now is whether the organization is intelligence-ready. Do its architectures support fast decisions, resilient operations, and growth under new business realities?
Time and again, the greatest migrations turn out not to be IT projects, but enterprise rethinking moments. Moments that allow companies to ask the important questions instead of moving everything to the cloud just because it was the ideology of the moment.
How Inclusion Cloud helps
At Inclusion Cloud, we help enterprises navigate this complexity and execute cloud migrations that are both efficient and future-ready.
With inMOVE™ by Inclusion Cloud, our AI-powered delivery engine, we build specialized teams with certified experts in cloud, data, and AI who design, implement, and optimize hybrid and multicloud environments. Whether your goal is to modernize legacy systems, prepare your infrastructure for AI workloads, or design secure and sovereign architectures, we make sure your strategy moves from vision to execution seamlessly.
Ready to take the next step? Book a discovery call with our experts.