For the second time this March, I was back in London. Spring was in the air, and so was a new kind of pressure. This time, it wasn't about sustainability. The visit was for The Economist's Business Innovation Summit. It was a gathering not of futurists, but of business leaders trying to get real about AI. The brief was clear: From AI to ROI. No cheerleaders or Silicon Valley spin. Just what's working, what's stuck, and what's about to break.
No one here needed convincing that AI is a big deal. The question wasn't if, it was how. What does real change look like when you've got quarterly targets to hit, legacy systems that creak, and a workforce still learning what a token limit is?
Most companies say AI is transforming work. What they're actually doing is plugging it in like a smart add-on and hoping it sticks. But the ones seeing real change are rethinking everything, not just tweaking at the edges.

The room wasn't full of AI dreamers. It was full of people who've hit the wall and come back with better questions. Shadow IT. Talent gaps. Energy bills. Pilots that never scaled. Cultural resistance. Burnout. The kind of practical pressure that doesn't show up in keynote slides, but does show up in budgets and broken dashboards.
The loudest voices weren't futurists. They were operators. Rebuilding the messy middle: training people before the tech outruns them; swapping automation checklists for real agent workflows; fixing the data mess before dreaming up use cases; and stress-testing risk in a cloud setup that still breaks under load.
And while no one said it outright, the subtext was clear: AI isn't just about productivity. It's about power. Who controls it. Who understands it. Who adapts first.
This special dispatch is a snapshot of that moment: the turning point where the AI conversation shifted from promise to pressure. What follows in the next 7,000 words is what I saw, heard, and learned. No hype. Just the reality of what happens after you say "we're doing AI."
You can use the table of contents on the left to jump between topics (if you're reading online), and if you download the Substack app, you can listen to a narrated version of this feature. Let's do this!
Millennial Masters is brought to you by Jolt, the UK's top web hosting service
1. AI hype is cheap. Strategy isn't.
Lots of companies are still chasing shiny tools. The serious ones are chasing results. That means starting with strategy and a team that knows what problem they're solving, not just what tool they're using.
"Business strategy comes first and foremost," said Loretta Franks, chief data and analytics officer at Kellanova (formerly Kellogg's). "It's foundational to finding the right use cases and the application. [...] We don't want to be drowning in MVPs that look great on a PowerPoint but can't scale."
The pressure is everywhere: to move fast, look busy, and not fall behind. But jumping in without a clear purpose? That's how you burn budget and stall progress. So the most effective leaders are slowing the rush. Making space to learn. To test. To get people aligned before the tech takes over the agenda.
At Kellanova, that meant launching "curiosity clinics": optional, company-wide sessions on AI. "Our whole executive group has been attending them, sponsoring them, and educating themselves around AI," said Franks. "They've been quite vulnerable in front of broad groups… it's grounded everyone to the same level."
That shift in culture, from top-down certainty to something messier and more experimental, came up again and again. "There's a lot of pressure to be a leader, to have all the answers," said Jon Lexa, president of Sana. "But the moment you can break down that shield and say, 'I'm actually just figuring this out as well,' it can be extremely motivating."
But no, this doesn't mean everyone needs to become a coder. "There is a thorough delusion that the C-suite will learn absolutely everything," said Marcin Detyniecki, group chief data scientist at AXA. "It's not in a one-week training that you're going to make a data scientist."
What matters is knowing where AI fits, what it can unlock, where it breaks things, and what's likely to slow you down. At AXA, that means applying AI "where it makes sense for your job" and designing training without technical exercises, in partnership with business schools.
Scaling AI means making it fit your world, not pasting in someone else's use case. "We're actually not using [our AI] the way our customers use it," said Lexa. "We're taking the technology and applying it to our context… that's what delivers impact."
The same thinking applies to people. You can't lift and shift someone else's plan for that either. "You need to give them the tools to come with you," said Franks. At Kellanova, that meant investing in soft skills like storytelling, critical thinking, and business relationship management as part of a company-wide "year of development."
Nadine Thomson, former president at Choreograph, summed up the stakes: "I've seen a lot of companies spending substantial amounts on AI and not seeing returns. Unless you've got a really defined use case… don't spend £3 million on it." Her message? Start small. Stay strategic. And don't let the hype do your thinking for you.
Recognising that AI is important is easy. Deciding what to do with it (fast!) is where the real tension kicks in.
2. The AI anxiety: Act fast, think slow
Everyone feels the heat to move fast. But speed without structure is risky. The smart play now? Slow down just enough to get it right.
"I think some of us are feeling that fear right now about how things will play out based on the choices we make about AI today," said Lorraine Barnes, UK Gen AI lead at Deloitte. "Change moves at the pace of business, not the pace of technology."
That tension cuts through every boardroom: the promise of AI is clear, but putting it to work is another story. "Organisations have systems and controls and processes and infrastructure that, for good reason, will slow the pace at which we can adopt this technology," Barnes said. "We need to give it space, give it oxygen, allow for the creative process to happen."
Forget the headlines. Most companies are still figuring it out: testing, tweaking, hoping something sticks. "It's important to distinguish between Gen AI and AI," Barnes said. "With Gen AI, we don't know yet." Financial firms have had a head start with structured data, legacy systems, and years of modelling. But now, every sector is pushing forward. "We're seeing fantastically applied use cases in life sciences, around research data, in public sector, in the technology sector… and in lots of cross-cutting applications like legal, customer services."
Regulation, she said, cuts both ways. "Uncertainty around regulation is likely to hamper the pace of change, but with good regulation, it should help us apply AI safely and appropriately."
What's different this time is the very nature of the tech. "It's very different from other technologies… where we've come to expect decision and accuracy. We get through probabilistic models things that appear like creativity, and I think we just don't know quite how to handle that yet."
Most execs are already on board. The problem is knowing what that means in practice. "Even the leadership that's bought in… there is still a big gap in understanding the new technology and the power of the technology and how best to apply it." She compared today's frenzy to the early smartphone era. "Here's the app: what was the question?"
Teams should look beyond small gains and redesign how decisions are made across the business. "It's quite seductive to get overly focused on the positive benefits of Gen AI," she said. "There's a huge upside, no question, but there are also some downsides. We need to take this into consideration." Her example? A wave of beautifully written AI-generated emails that add zero productivity value.
Yes, the pressure to prove ROI is real. But the bigger risk is doing nothing and falling behind. "I don't think there's a risk of jumping in too soon. I think we shouldn't wait… but making sure that we have the right foundations in place is a no-regrets move."
Employees aren't waiting for permission. They're already using public AI tools to get their work done. That's Shadow IT. "When companies don't keep pace with my needs as an end user, I'm going out and using publicly available tools to do my job." That's not a hypothetical; it's happening right now. "Our research… tells us explicitly that a majority of employees feel that it will help them in their career progression, it will help them do their jobs."
Her advice? "Don't wait. But… give yourselves the time and the patience to deliver the business outcomes and ROI, and really embrace the nuance and the creative process." Yet the real fear isn't missing out; it's measuring the wrong thing entirely, because traditional ROI thinking is already outdated.
3. ROI is broken. AI's forcing a rethink

The usual ROI playbook doesn't work here. AI is surfacing ideas you wouldn't even know to ask for.
"You're not going to lose your job to AI. You're going to lose your job to someone who's working with AI," said Costi Perricos, global gen AI lead at Deloitte. "We now have more accountants than ever before, and I think the same thing will happen with software engineering." This isn't about jobs disappearing. It's about rewriting what those jobs actually are, and what skills now matter.
Jessica Hall, chief product officer at Just Eat (takeaway deliveries), put it bluntly: "If you do one of those [requests for proposals] today for an AI solution, chances are that during the time you've done that RFP, the product has moved on significantly." Procurement hasn't caught up with the pace. By the time a process is done, the tech has moved on. Hall's take? Don't waste time betting on the best; just start running. "The mindset here has to be around being in the race, rather than choosing the horse." So she's hedging her bets, pairing older giants with newer AI players to stay agile.
That pace is forcing companies to invent roles they didn't plan for. "We've been hiring in the UX space, AI specialists to help us level up our customer-facing chatbots and our customer-facing generative AI solutions," she said. "I know that kind of prompt engineer is a little bit laughed at; is it a real job? But actually, I think being able to write really high-quality prompts… is a genuine skill that everybody needs."
But the real change isn't in titles. It's how we think about work. Aneesh Raman, chief economic opportunity officer at LinkedIn, said the framing itself is flawed. "We've done a disservice to the world… by using this term AI literacy, AI skills, AI fluency. That is the equivalent of saying internet skills, internet tools. Like, AI is the big paradigm shift." Knowing how AI works is one thing. Figuring out how to use it every day is the hard part. "Everyone in this room should be using AI tools; multiple tools for multiple reasons."
Ikhlaq Sidhu, dean and professor at IE University, cut through the complexity: "The new core competency is speed." It's no longer about squeezing margin or cutting heads. Speed and iteration are taking over. "The story that you're describing here is the singularity story… it is just getting faster and faster." The real skill now? "Try a new thing… see an opportunity… do this other thing." It's about trying fast, failing fast, and moving again. Precision can come later.
Raman's perspective reframes ROI completely. "We are going to go in this innovation economy, where it's ease of innovation, not efficiency of production, that's going to come to the centre." Soon, we might hear CEOs brag less about cost savings, and more about "a volume of ideas, new products, or features that companies otherwise wouldn't have been able to do."
The longer you wait to adapt, the harder it'll be to catch up. "The change will never be as slow as it is right now," Raman warned. This is the slowest it'll ever be again. That's the new reality.
None of this works without good data. That's the engine and the limit. "Your generative AI can only be as good as your data strategy," said Hall. "If I can leave you with one final thought on the ROI, it would be really invest in understanding and developing a top-quality data strategy, because that will enable you to get the max out of your AI investments."
None of this lands unless your people are ready. That means training for action, not just awareness.
4. Skills won't save you. Mindsets will.
Teaching skills alone won't cut it. If teams don't understand how and why to use AI, they'll miss the point. This is less about learning code and more about shifting culture.
Reskilling sounds good in theory. In practice, it's disjointed, rushed, and often an afterthought. Until training becomes core to the business, AI won't make much of a dent.
"If you have managers and leaders that are not AI competent or AI confident, then it's unlikely that they're going to have their teams AI-enabled," said Claire Davenport, chief operating officer at Multiverse.
Resilience doesn't mean handing everyone a ChatGPT login. It's helping teams figure out what's worth doing and how to do it better. "It's not about being the best in the latest on AI," said Bastien Parizot, senior vice-president of IT & digital at Reckitt (Durex, Finish, Air Wick, Dettol and more). "It's about understanding, 'let me apply that to our business,' and who is best to apply that to our business? The people who already know our brands, our products, our research and development."
Isabell Welpe, professor of strategy at the Technical University of Munich, argued the real innovation is in how we learn. "If you give everyone a personal tutor, an AI tutor that is tailored to what you know, how you learn, what you're interested in… the time needed to master certain subjects shortens drastically," she said.
You can't train your way out of this. Culture eats training for breakfast if the systems around it don't change too. "It's not just about the more usage and the more time saved," said Davenport. "It's what that actually then translates into in terms of what people are using AI for."
Multiverse put this to the test. One group got basic Copilot training. The other got hands-on coaching. The coached group used AI three times more. And the real impact? "They're actually using it to solve for harder things," she said.
Reskilling at scale also demands inclusivity, not just urgency. "We shouldn't forget about our parents, our grandparents, the ageing society," said Christina Yan Zhang, chief executive of The Metaverse Institute. "If they didn't give birth to every single one of us, we wouldn't exist in the first place, right?" She also flagged that one-third of the global population still doesn't have internet access, and the digital divide risks deepening further without serious intervention. And we need more women leading this transition, she argued, citing gender bias baked into datasets and algorithms.
Zhang raised a bigger challenge: how humans and machines will co-exist, and who gets left behind. "We're talking about potentially up to 30 billion humanoid robots living with 10 billion people," she said, referencing Elon Musk's long-term predictions. "Maybe we should start to teach our kids and ourselves how to live in harmony with our robotic friends."
But businesses can't wait for the education system or governments to fix the pipeline. Resilience starts from within. "The people on our programme are probably in their 30s, 40s," said Parizot. "That's because they know the business."
What actually matters now isn't fluency in code. It's flexibility, curiosity, and knowing when to ask better questions. "The top two skills now are flexibility and adaptability," according to Parizot.
Welpe had a simple message for employers: stop making learning feel like a checkbox. "Many companies still have three training days per year. They might want to consider whether they should abandon that and send a signal to the employee: if you want to learn a business skill that is going to be useful, there is no limit."
Her advice to HR? "Treat AI agents just the way you would treat other applicants. You'll interview a few AI agents, maybe you'll put them on probation. Some, you'll offer a permanent position."
The companies that treat AI agents like real team members, with structure, purpose and oversight, will be the ones shaping the next decade. Once teams are fluent, the bigger question is balance: where should the machines take over, and where do humans still make the call? That's where work itself starts to evolve.
But knowing how to use AI is just the start. The next step is putting it to work and making sure the work is worth doing.
5. Don't replace people. Reinvent work.
AI can remove friction, but only when it's used with intent. Companies seeing real impact are building systems around people, not the other way round.
The debate isn't whether AI will reshape work. It already is. The real challenge is using it to boost productivity without gutting what makes teams valuable.
For Danielle D'Lima, vice-president of operations at Depop (social e-commerce), it starts with mindset. "Look for places where AI can create value, rather than just cost savings," she said. "Outcomes over output." At Depop, the focus has been on enhancing the user experience, not replacing people, and that principle holds inside the business too. "Let the machine do the heavy lifting and use your humans for things that only humans can do, like creativity, empathy, critical thinking."
That balance is already being put into practice on the shop floor. Ahmed Khedr, VP of retail digital at e&, described how their telecom stores in the UAE use AI screens and facial recognition to tailor service. "Instead of the agent spending lots of time searching manually… the AI-powered applications already give personalised recommendations," he said. It's freed up teams to focus on complex issues and human connection. The result? "Productivity increased by 18%, conversion rate increased by 32%, and the customer experience… above 90."
The gains are real, but automation still needs a human filter. "Some organisations had to remove [AI-powered customer service] for a specific type of customer," Khedr warned. "Instead of being promoters, they are not happy with the algorithm." Human oversight, he stressed, is still essential, especially in customer-facing roles.
Bilal Gokpinar, professor at UCL School of Management, echoed this. "One fascinating finding we have is that… with automation, there is the risk of reduced innovation." Over-relying on algorithms can dull critical thinking. "If your human workers are not engaged with the technology, then after some point… you're not going to be able to leverage the entire human capital that you have."
Still, the fears around mass job losses may be overblown. "61% [of jobs]… would see a redesign," said Martin Thelle, senior partner at Implement Economics. "Only around 7%… could, over a period of 10 to 15 years, risk being replaced." And even in automation-heavy roles like customer service, the shift isn't always negative. "We see not only higher productivity, but also… greater job satisfaction" when AI augments, not replaces, workers, Gokpinar added.
What separates the smart operators from the rest? Start with a strategy, said Peter Jackson, global head of data office (interim) at Schroders. "We wrap this into a data literacy programme… because data is becoming more and more prevalent in people's jobs." Training isn't just for the tech teams. Every role now touches data.
And companies can't do it alone. "Society has an ethical responsibility to upskill and retrain people," Jackson said. Universities are stepping up too. UCL is launching hybrid courses that blend marketing, data science, and resource centres to help SMEs build AI capabilities.
Every business will take a different route, but some basics are non-negotiable. "Start with the end in mind," said Khedr. "Not by the technology, by the problem." The tech should follow the problem, not dictate it.
The winners won't be the ones with the shiniest dashboards. They'll be the ones who design work around people, not platforms.
6. Your new co-worker is an (AI) agent

AI agents are no longer just chatbots. They're planning, deciding, and getting embedded into core workflows, like digital colleagues with real jobs to do.
These agents don't just respond; they act. They pull data, trigger actions, and even hand off tasks across teams. The impact depends on how it's done: smoother operations or total upheaval.
"It's kind of moving over into this world where, as you sort of quite generally [put it], the AI is able to act," said Yemi Olagbaiye, client solutions director at Softwire. "So we've got the understanding bit as the front end… and we're now moving over to this world where the AI can act and make decisions on your behalf or on its own."
"If you look at the retail world, you've got companies like Sephora that are having these AI agents… able to almost do like a sort of customer service, upselling, cross-selling type of thing," said Olagbaiye. "In the sports world as well… [the NFL is] giving the ability to analyse the behaviours of sports players… and in real time, give them feedback."
But the real risk isn't rogue AI. It's the system doing exactly what it's told without questioning the logic. "I don't think that the risks that we have with these things is Terminator-style AI gone rogue," he said. "I actually think that the biggest risk… is more just that the agentic AI does exactly what we tell it to do."
His analogy? Parenting. "You have your child, and your child grows with these new capabilities… all of a sudden you can walk, you can crawl, talk… With those new capabilities, that's exciting… but with that comes new risks."
The solution, he argued, is proper governance from the outset. "It's about giving someone the specific ownership of that role," he said. "That might be the Chief AI Officer. It might be the business unit leader… rather than governance being this kind of checkbox thing."
Organisations also need to get their infrastructure in order. "There are so many things coming here… there's the sort of data… it's also about that continuous governing process," he said. "What happens if there is some hidden bias… and this is leaving these autonomous agents to go off and do a bunch of stuff on their own?"
But this isn't just a tech upgrade; it's a whole new way of thinking. "You hire an AI agent for your organisation. You give it not only the data, but also the policies and the context," he said. "That feels really exciting." But the real challenge is what happens next. "You don't want to wait until some sort of scandal happens," he warned. "Do everything… by design from the beginning."
What happens when you build your business around agents, not people? Jon Lexa, president of Sana, sees it as a rewrite of how businesses actually run. Forget clunky dashboards and endless admin. With agentic AI, the goal is simple: automate the boring stuff so people can focus on what really matters.
"When we say we're building agentic AI," said Lexa, "it's taking a large language model, combining that with knowledge (that knowledge could be a policy document, or a contract in a database) and then combining that with instructions and tool use." These agents don't wait for prompts; they carry out tasks, trigger actions, and deliver results. "An example could be: I want to analyse the top 10 opportunities in Salesforce from last week. If I write that to an agent, it will identify that it has to select Salesforce, see which opportunities I have access to, and then return those results in a synthesised manner."
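To make that pattern concrete, here is a minimal sketch in Python of the "LLM + knowledge + instructions + tools" loop Lexa describes. The `call_llm` helper, the toy Salesforce lookup, and the tool-selection convention are illustrative assumptions, not Sana's implementation.

```python
# Minimal sketch of an "LLM + knowledge + instructions + tools" agent loop.
# call_llm() and the Salesforce lookup are hypothetical stand-ins, not a real API.

def call_llm(prompt: str) -> str:
    """Placeholder for a large language model call."""
    if prompt.startswith("Summarise"):
        return "Synthesised summary of last week's top opportunities."
    return "TOOL:salesforce_top_opportunities"

def salesforce_top_opportunities(limit: int = 10) -> list[dict]:
    """Toy stand-in for a CRM query, restricted to records the user can access."""
    return [{"name": f"Opportunity {i}", "value": 10_000 * i} for i in range(1, limit + 1)]

TOOLS = {"salesforce_top_opportunities": salesforce_top_opportunities}

def run_agent(task: str, policies: str) -> str:
    # 1) Give the model the task plus company context (policies, documents).
    decision = call_llm(f"Policies:\n{policies}\n\nTask: {task}\nPick a tool.")
    # 2) If the model asks for a tool, execute it on the user's behalf.
    if decision.startswith("TOOL:"):
        results = TOOLS[decision.removeprefix("TOOL:")]()
        # 3) Hand the raw results back to the model for a synthesised answer.
        return call_llm(f"Summarise these opportunities for the user: {results}")
    return decision

print(run_agent("Analyse the top 10 opportunities in Salesforce from last week",
                "Only surface records the requesting user is allowed to see."))
```

The structure is the point: the model decides which system to touch, the tool does the touching, and the model synthesises the result.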
This shift sounds small, but it adds up. "You no longer have to funnel around with these horrendous enterprise user interfaces," said Lexa. "That might seem simplistic, but if you start to compound these small improvements over time, agents can actually unlock quite a lot of productivity for you."
The real magic comes when those agents start collaborating across systems. Sales teams managing RFPs are a perfect use case. "These requests can have hundreds and hundreds of questions… but your company might have a corpus of old requests," Lexa explained. "Now, within the agent, you can instruct it to go look at the historical ones, find the ones that are most relevant… and use the historical answers to populate the new RFP, saving you weeks, if not at least hours, of time."
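A rough illustration of that RFP workflow, under obvious simplifications: the corpus is a small dictionary and relevance is crude word overlap, where a production agent would use embeddings or an LLM to judge similarity and a human would review the draft.

```python
# Sketch of the RFP reuse idea: pre-populate a new questionnaire from past answers.
# The corpus, the 0.3 threshold and the similarity measure are illustrative assumptions.

def similarity(a: str, b: str) -> float:
    """Crude word-overlap score; a real agent would use embeddings or an LLM."""
    wa, wb = set(a.lower().split()), set(b.lower().split())
    return len(wa & wb) / max(len(wa | wb), 1)

def draft_rfp(new_questions: list[str], past_answers: dict[str, str]) -> dict[str, str]:
    draft = {}
    for question in new_questions:
        # Find the most similar historical question and reuse its answer as a first draft.
        best = max(past_answers, key=lambda old: similarity(question, old))
        good_enough = similarity(question, best) > 0.3
        draft[question] = past_answers[best] if good_enough else "NEEDS A HUMAN ANSWER"
    return draft

corpus = {"Do you encrypt data at rest?": "Yes, AES-256 across all storage tiers."}
print(draft_rfp(["Is customer data encrypted at rest?"], corpus))
```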
Lexa didn't sugar-coat it. "It is not a silver bullet. It's not going to cure everything. Hopefully not, because I think there's a lot of purpose to what we do as humans." There are still limitations. "That's not something we can solve today," he said, referring to AI's struggles with CAD files and complex design workflows.
And how do businesses do it right? First: strategy. "You're not going to get away with just having a slide in your quarterly board deck saying that you're buying Copilot licences for everyone," Lexa said. It needs to be tailored to your organisation. "Why is it that you think you can optimise your supply chain using agents, but not perhaps legal workflows?"
On the future of work, Lexa didn't hold back. "Jobs will fundamentally change… don't try to figure out with your existing workforce where you can kind of plug in AI over time."
This goes beyond efficiency. It could determine who thrives and who falls behind. For leaders still hesitating, Lexa had a clear message. "We're at a critical juncture… we're going to see quite a significant change in how we work, in how we organise ourselves at work."
AI might not replace your team, but teams that use it well could replace you.
7. On the edge: AI left the cloud
Edge AI isn't a future trend. It's already shifting how major industries operate, from aerospace to energy. Whether it's simulating aircraft or managing home energy use, the action is moving away from the cloud and closer to the front lines. But the transition isn't seamless. Technical limits and organisational inertia are slowing it down.
"If we are combining the generative AI in the simulation, we sometimes can reduce even 100 times the computational time," said Grzegorz (Greg) Ombach, head of disruptive research and technology and senior vice-president at Airbus. "It gives us ability to speed up development, increasing the loops of the simulations… we can get better product at the end of it, and it is already applied today."
This isn't just about speed. It's forcing companies to rethink their entire model. Giorgia Molajoni, chief technology officer at Plenitude (gas and electricity sales), explained how edge AI is reshaping the energy sector. "A smart meter is not only a data collector anymore. It becomes an intelligent smart device that can promote action, sometimes even take action… the same person can choose to be a consumer, a producer, so prosumer, or even somebody that just stores energy."
That change flips the traditional energy model, handing more control to individuals. "What used to be strongly calculated by the company providing the energy now has to match demand and response and storage offer from a multitude of people. That changed your business model from a centralised one to distributed one, which changed completely the parity of the energy business."
Still, tech doesn't fix everything. "Applying AI everywhere doesn't make sense in many cases," said Ombach. "There are use cases which we have seen that applying AI costs you more than not applying it, also from an energy cost perspective." In safety-critical areas like aviation autopilot systems, for instance, Airbus avoids AI altogether due to certification challenges. Instead, they focus it where the benefits are clear, like improving satellite imaging resolution or boosting internal productivity through knowledge management tools.
In other sectors like retail and finance, scaling hits both tech and team barriers. "Make sure you have a workforce that is educated on things like using AI," said Tahmid Quddus Islam, vice-president of innovation and technology at Citi. "Great technologies come out and everyone's super excited… but they might not necessarily know how to use it to the best of their ability."
Ruth Miller, principal consultant at Lenovo, agreed. "Get a business case underlined and agreed with the business. Not just from a POC, but from a pilot and a full production rollout. Because I see a lot of things falling in between." Inside global firms, tangled structures often kill momentum before it starts. "There's about 25 different companies within that retail group. Another five which are only partially owned… how do you tell all those different countries, different managements, that they're going to roll out something they see no value in?"
The real wins go to teams who can tie AI to hard numbers that make sense in the boardroom. "This saves you money, this creates you value and make it a big number, so everybody on the board understands," said Miller.
Ombach sees the edge as a launchpad where AI and robotics are already starting to converge, and he expects that convergence to happen fast, particularly with smaller, more efficient models running directly on edge devices.
Molajoni's eye is on what happens next, when machines don't just help humans, but start working with each other too. "I'm interested in how actually human, 100% human, and machine will interact and how we're going to start to see machines interacting with machines as well. It's going to be a cultural revolution."
The ripple effects go beyond business. This shift is already reshaping how we live. "Can you take a phone off a small child now?" said Miller. "They've grown up with that. From babies with tablets… technology is part of their lives. It's a different way of thinking, working and behaving that we're developing."
And with all this speed and intelligence comes a price. AI might be smart, but its carbon footprint is getting harder to ignore.
8. AI's dirty secret: The infrastructure crisis
AI is scaling fast, but so are the power bills. Efficiency gains aren't keeping up with the demand. I've covered this in more depth in my dispatch from the Economist Impact 10th anniversary Sustainability Week here.
As generative models roll out across more teams, a quieter issue is starting to surface: the infrastructure is buckling. Electricity demand is surging. In some regions, even water supply is becoming a constraint.
"This power story… we've known this problem for a long, long time," said Tikiri Wanduragala, senior consultant for infrastructure solutions at Lenovo. "But what AI has done is put it on steroids."
AI scaling is now "three times, five times, in some cases, ten times" more demanding. It's forcing companies to rethink everything from power supply to cooling systems. "Infrastructure: very boring. No one paid any attention," Wanduragala said. "Suddenly, in the last 36 months, it's a huge interest."
The impact is real. Microsoft's emissions have risen by a third. Google's are up by half. Even the most efficient cloud giants are starting to strain. "We've eaten up a lot of that [efficiency] going forward," Wanduragala said. "Customers are talking more about hybrid AI… bringing stuff in-house."
It's also widening the energy performance gap between organisations that track impact and those that don't. "Do you have accurate data about how your infrastructure is operating, how your workloads are running?" he asked. "This gives you these little secret pieces of information."
Cooling remains a huge hidden cost. "Forty percent of new energy is used for cooling," he said. "And water… for some data centres [is] being difficult to deploy because of restrictions."
But the same tech causing the spike could also help cut it. Lenovo is already testing AI to cut waste in packaging and supply chains. "There's a lot of data, a lot of efficiencies that are possible," he said.
His advice to companies: don't wait for regulation to force your hand. "Even if you don't care about profitability, even if you don't care about carbon," he said, "you should still be thinking about this story." Because AI is only going to get smarter and hungrier.
9. AI is a new secret ingredient
AI isn't a side project at Deliveroo (takeaway and grocery deliveries). It's now baked into how the business grows and makes decisions. For CTO Dan Winn, it only counts if it changes something real. "We're laser focused on what customers want and solving the needs of our consumers, our partners and our riders," he said.
Take customer feedback. Like most companies, Deliveroo uses NPS surveys to gauge satisfaction. But there's a flaw: only the very happy or the very angry tend to respond. "It doesn't represent the majority of interactions you're having with customers," said Winn. So the team built a generative AI system to analyse the full scope of customer support chats and generate a synthetic NPS score, one that's far more representative of real sentiment. "We tested it… and tuned it to a place where it is very representative of what a customer would say if they answered the survey." The result? A faster, truer read on how customers actually feel, and sharper product calls as a result.
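The mechanics, as I understand them, look something like the sketch below: score every support conversation with a model, then compute NPS the usual way, promoters minus detractors. The `classify_chat` stub is a hypothetical stand-in; Deliveroo's actual classifier and tuning are not public.

```python
# Illustrative "synthetic NPS": score every chat, not just survey respondents.
# classify_chat() is a hypothetical stand-in for a tuned generative AI classifier.

def classify_chat(transcript: str) -> int:
    """Predict the 0-10 'likelihood to recommend' a customer would have given."""
    # A real system would prompt an LLM calibrated against actual survey answers.
    return 9 if "thanks" in transcript.lower() else 4

def synthetic_nps(transcripts: list[str]) -> float:
    scores = [classify_chat(t) for t in transcripts]
    promoters = sum(s >= 9 for s in scores) / len(scores)
    detractors = sum(s <= 6 for s in scores) / len(scores)
    return round((promoters - detractors) * 100, 1)

chats = ["Refund sorted in two minutes, thanks!", "Order arrived cold and late."]
print(synthetic_nps(chats))  # covers every interaction, not only the loudest customers
```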
AI's also reshaping the back end, especially in logistics. For a company delivering everything from a burrito to a week's worth of groceries, assigning the right number of riders per order is critical. "Historically, we had to guess based on the cost or the number of items," said Winn. But that left too much margin for error. Now, they use AI to scan item images and descriptions, estimate weight and volume, and assign the right number of riders. "It drives cost efficiency… If two riders go out, that cost is entirely borne by us."
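Reduced to pseudocode-ish Python, the order-sizing logic might look like this; the keyword-based weight estimator and the 10 kg load limit are made-up placeholders for the image-and-description model Winn describes, not Deliveroo's production logic.

```python
# Sketch of rider assignment from estimated order weight.
import math

def estimate_weight_kg(item_description: str) -> float:
    """Stand-in for a model that infers weight from item images and descriptions."""
    heavy_hints = ("multipack", "crate", "litre", "kg")
    return 4.0 if any(hint in item_description.lower() for hint in heavy_hints) else 0.5

def riders_needed(items: list[str], max_load_kg: float = 10.0) -> int:
    total_kg = sum(estimate_weight_kg(item) for item in items)
    # One rider per max_load_kg of estimated weight, never fewer than one.
    return max(1, math.ceil(total_kg / max_load_kg))

order = ["12-can multipack of sparkling water", "2 litre semi-skimmed milk", "bananas"]
print(riders_needed(order))  # 1 rider for roughly 8.5 kg; heavier baskets get a second
```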
Beyond operations, Winn sees AI playing a defining role in how software gets built, and who builds it. "We're seeing an evolution in our engineering teams," he said. "Increasingly, engineers will operate as tech leads, directing the work of a small fleet of software engineering agents." He sees that shift speeding things up and giving engineers more room to think. "For engineers that embrace it, it's a superpower."
Winn also knows AI's not just about speed. It ties back to the business model. "We're now a profitable company, producing positive free cash flow," said Winn. "That enables us to focus more entirely on serving the needs of customers and partners."
AI is making conversations cheaper and smarter. Deliveroo's betting that soon, most chats with customers will start with an agent. "There's a sea change coming in the next two years," Winn predicted.
But none of this works if the basics break. And the cloud is starting to creak.
10. Cloud chaos is coming
In today's cloud-heavy world, outages aren't rare. They're inevitable. So how prepared are businesses to deal with the next big failure?
That was the warning from Mark Gradwell, board member at Zero Outage Industry Standard: "Go back and make sure that you have done a risk assessment recently. Things change all the time. Technology changes all the time."
Multi-cloud setups are trending, but they're no magic bullet for staying online. "There are definitely benefits… but I would argue the only way that you will have resilience is by adopting a multi-cloud strategy, especially if you're working in a multinational organisation." The catch? "There is definitely complexity that you need to consider in the management."
More companies now want options: independence from vendors, control over where their data lives, and flexibility across borders. "If you're a multinational doing business across the globe, being able to select cloud providers in Asia or in Latin America or in the US or in Europe becomes super important."
It's not just about picking the right tech. Companies need to practise failure like it's part of the job. "I cannot stress this enough, how important running tests are and making sure that you do them on a regular basis."
That starts with knowing your numbers: how fast you can recover, how much data you can lose, and which systems your teams are quietly leaning on. "Some outages are caused through carelessness… but when we're under pressure, sometimes we don't make the best decisions."
In one case, a CEO dismissed an outage warning during a client visit, assuming it was a joke. "He phoned in… and was told the service could be restored in a few minutes. It didn't take an hour; it took about two hours," Gradwell said. "We were overly reliant on the hyperscaler."
The failure? "Humans do funny things under pressure," he added. "Testing and making sure your business continuity plan isn't just gathering dust: that's key." What matters most? Communication. Who hears what, when, and how: all of it needs planning. "Knowing how the communication is going to go out, the frequency of the communications, who the stakeholders are, is absolutely key."
Tech fragility is just one pressure point. The other is policy, and right now, the rules are still being written.
11. Regulators are winging it
AI is having its policy moment. Governments are racing to write the rules, weighing risk frameworks against economic ambition. But if businesses want rules that work, they need to speak up now.
"The AI Act was very much drafted with that kind of mindset," said Jon Steinberg, director of global policy campaigns at Google, referring to the EU's risk-based approach to AI regulation. "And then Gemini, ChatGPT happened… there was a shift."
That shift sparked a wave of public debate and political urgency, with interventions from tech leaders, academics and governments. "We had the Bletchley Park summit… the follow-up summit in Seoul… very much focused on safety and risks," said Steinberg. "Where I think we are now… is a pendulum swing back towards the opportunity."
For Google, that swing is long overdue. Steinberg quoted CEO Sundar Pichai: "AI is too important not to regulate, and too important not to regulate well." Countries like the UK, Japan and Singapore are pushing for pro-growth policies over blanket restrictions. "That shift… we think is really positive."
Still, Steinberg said that Google remains in favour of regulation, as long as it sticks to risk-based logic. "We've been supportive of [the AI Act's] fundamental framework… we were very active participants in that debate in a constructive way," he said. But as the legislation evolved, concerns emerged.
"In the final stages… they introduced a separate category of rules around what they call GPAI, general-purpose AI," he explained. "They basically said… these models are so big that they need their own class of rules because they are inherently risky. We disagree."
Google's concern? New categories like GPAI could slow down innovation before it even starts. "That's when you run the risk of closing your market off from the innovation, the growth and the opportunity that it costs," said Steinberg. And other countries are watching. "Governments are thinking twice before copying and pasting."
That concern is particularly acute when it comes to transparency. "There is some merit to having transparency about how models are built," said Steinberg. "What we are mindful [of] is when those obligations potentially threaten trade secrets [and] competitive advantage."
What about the role of public opinion? "Governments are representing the interests of their citizens," Steinberg said. "But we also want to be mindful… that we don't regulate a hype cycle."
He shared a cautionary tale from Google's early work on self-driving cars. "I can very clearly remember going to a meeting with an unnamed transport ministry where they said, 'We're going to write a law about how self-driving car technology should be governed'… before the technology was proven." The lesson? Don't legislate the unknown.
As the rules shift, some fear a global race to relax standards. Steinberg doesn't think so, but admits there's competitive tension. "Perhaps some friendly… competition about where is the best place to develop and deploy."
Europe might be leading on AI rules, but that lead is starting to sting. "There's a reason why companies like Google, like OpenAI, like Meta… are choosing to launch products in Europe later," Steinberg noted. "That's the risk."
Because whether the policy's perfect or not, the market's already moving. The front-runners aren't patching AI onto old systems. They're rebuilding from the core. Policy matters, but no amount of lobbying can make up for a weak foundation. The real question is how your business is built.
12. AI isn't a feature: Build for what's next
AI isn't on the sidelines anymore. It's centre stage, and the stakes just changed. The hesitation's gone. Now it's about pace, clarity, and consequence.
What I heard at the Business Innovation Summit was clear: the age of AI pilots and "experimentation" is over. If you're still exploring use cases, you're already behind. The best companies aren't just deploying AI. They're rebuilding around it.
This isn't about doing more with less. It's about doing things you couldn't do before. The real shift isn't technical. It's cultural. AI changes how an organisation senses, decides and moves. That demands rewiring, not tweaking.
Success won't hinge on data or model size. The edge now is speed of adaptation. As one speaker put it: "There are no technical problems. Only people problems." The boldest companies are solving for that first, redesigning how work gets done, not just which tools sit in the stack.
But many are still using AI to speed up old processes. Faster forecasting. Cheaper customer service. Smarter pricing. That's not transformation. That's efficiency theatre.
The real gains come from automating judgement, not just tasks. From surfacing insights that no team would have known to look for. From rethinking how decisions happen.
Most companies are layering AI onto broken systems. A few are rebuilding the machine. Those are the ones reshaping the market. The rest are stuck trying to optimise failure.
The next stage isn't about scale. It's about alignment. Leaders who can connect culture, infrastructure, and product will move faster than regulation and faster than competitors. That window is already closing.
This shift won't be reversed. The technology has changed, and so has the pace. What happens next depends on how well your business listens, learns, and moves. There's no safety net for the slow. But there's no ceiling for the fast either.
Don't miss my Booksmart primer: AI, power & the future: What the smartest people are actually saying about AI (without hallucinations).
What GPT won't say about real AI power
If you're a builder, leader, or investor, the next decade will be defined by whether you understand the system you're operating in, and whether you choose to know the system or be ruled by it without realising.