The email arrived at 2:47 AM Pacific time. Sarah Chen, a senior marketing analyst at a Fortune 500 technology company, had been expecting good news. Her Q3 campaign had exceeded every metric. Customer acquisition up 34%. Cost per lead down by half. Her manager had called it "the best quarter in department history."
The email was not good news.
"Due to organizational restructuring and investments in AI-enabled marketing automation," it read, "your position has been eliminated effective December 31, 2025. This decision was not a reflection of your individual performance."
Chen stared at the screen, her apartment dark except for the blue glow of her laptop. She had an MBA from a top-20 program. Seven years of experience. A track record of delivering results. And a machine had just taken her job—not because she failed, but because she succeeded so well that the company realized it didn't need her anymore. The AI could replicate her methodology. What it couldn't replicate was her salary.
"The cruelest part," she told me three weeks later, her voice carrying the particular flatness of someone who had repeated this story too many times, "is that I trained the system. They asked me to document my processes. To create templates. To explain how I thought about campaigns. I thought I was building institutional knowledge. I was actually building my replacement."
I want to be honest about something before we go further: I don't know how Chen's story will end. When we last spoke, she was three months into unemployment, sending out applications that disappeared into algorithmic screening systems—the same systems, she noted bitterly, that she'd once helped companies implement. Her savings were running low. She was considering moving back in with her parents. At 34. With an MBA. After a career that had seemed to be going exactly right.
The optimists will tell you stories of successful pivots. The pessimists will tell you the economy is collapsing. I'm going to try to tell you something harder: that both camps are partly right, that the truth is messier than either wants to acknowledge, and that the comfortable narratives we use to make sense of this moment are failing us.
A disclosure before we continue: I run an AI-powered recruitment platform. I have skin in this game. I benefit when companies adopt AI for hiring. You should read what follows knowing that—and judge accordingly whether I'm being honest about the downsides of a technology I profit from.
In the first six months of 2025, 77,999 tech job cuts were directly attributed to AI. By year's end, the total exceeded 150,000 across all industries.
But these numbers, for all their stark clarity, obscure as much as they reveal. They don't capture the marketing analyst in San Jose who's still employed but whose job has quietly hollowed out—where she used to spend weeks on campaigns, she now spends days reviewing AI output. They don't count the new graduate who never got hired in the first place because the entry-level position was automated before she finished her degree. They don't measure the psychological weight of knowing that your expertise—the thing you spent a decade building—is now something a machine can approximate in seconds.
What follows is not a prediction. Predictions are what consultants sell. This is an attempt to see clearly what is actually happening, to acknowledge what we don't know, and to take seriously both the fears and the possibilities that this technology presents.
The Numbers Everyone Cites (And What They're Not Telling You)
Every major consulting firm has published an AI workforce report in 2025. McKinsey surveyed 1,993 business leaders across 105 countries. The World Economic Forum polled over 1,000 global employers representing 14 million workers. BCG, Deloitte, PwC—all have their numbers. The research industrial complex has been working overtime.
They agree on the basics: 91% of organizations now use at least one AI technology. The adoption happened faster than almost anyone predicted. Three years after ChatGPT's release, AI has moved from curiosity to infrastructure.
What they don't want to talk about is more interesting.
MIT released a study finding that AI can already perform the work of 11.7% of the U.S. labor market—representing $1.2 trillion in wages. The World Economic Forum projects 92 million jobs will be displaced globally by 2030, with 170 million new jobs created for a net gain of 78 million positions.
These numbers get cited in every boardroom presentation, every investor deck, every policy paper. They rarely get examined.
Think about what that "net gain" actually means. The 92 million people who lose their jobs are not the same 170 million who will fill the new roles. A customer service representative in Omaha cannot become an AI prompt engineer in San Francisco by wishing. The skills are different. The locations are different. The educational requirements are different. The timing is different—displacement happens in months; retraining takes years.
The net numbers look positive on a spreadsheet. Ask Sarah Chen how positive they feel.
A workforce economist who works for a major research institution but asked not to be named—because her organization's official position is more optimistic than her private assessment—put it to me this way:
"The net job creation projections assume a labor market fluidity that has never existed in human history. They assume that a 50-year-old paralegal in Cleveland can retrain as an AI systems auditor in Austin within the timeframe before her unemployment benefits run out."
She took a sip of her coffee, which had gone cold. We'd been talking for two hours.
"The models don't account for mortgages. They don't account for aging parents. They don't account for the fact that people have lives rooted in specific places with specific relationships. The models treat workers as interchangeable units. Workers are not units."
The Dirty Secret of AI ROI
According to McKinsey, while 88% of companies use AI in at least one function, only 39% report any impact on their bottom line. BCG's analysis is even more striking: 60% of companies report generating no material value from their AI investments. Only 1% have reached what could be called AI maturity.
This suggests two possibilities, and I'm genuinely not sure which is more troubling.
The first: current AI is overhyped, and much of the displacement anxiety is premature. Companies are buying tools they don't know how to use. The revolution is taking longer than the headlines suggest.
The second: companies are making massive AI investments that aren't paying off, creating pressure to cut labor costs to justify the spending. When a company spends millions on AI tools and sees no EBIT improvement, the easiest way to manufacture ROI is to fire people—regardless of whether the AI actually works.
I have talked to enough HR leaders off the record to suspect the second is more common than anyone wants to admit. AI becomes the justification for workforce reductions that would have happened anyway.
"We were told to cut 15% of headcount," one VP of People at a mid-sized tech company told me. We were sitting in a coffee shop in Palo Alto; she'd asked to meet off-campus. "The CEO wanted it framed as an AI efficiency initiative. Sounds better to investors than 'we overhired during the pandemic and need to right-size.' The AI tools we bought had nothing to do with which roles we cut."
The Entry-Level Extinction
Let me introduce you to Marcus Thompson. He's 28, has a computer science degree from a state university, and spent two years as a junior software developer at a fintech startup in Austin. Last March, his team of six was reduced to two, with the remaining developers expected to use AI coding assistants to produce the same output.
I talked to Marcus over video call. Behind him, I could see the bedroom of his parents' house in Ohio, where he'd moved after burning through his savings. There was a poster on the wall from his college days—something about "the future is code."
"They kept the senior people," he said. "The ones who could supervise the AI. But juniors like me—we were the ones doing the work that the AI is now good enough to do. The grunt work. The learning work."
He looked away from the camera for a moment.
"The thing nobody talks about is that the grunt work is how you become senior. How are we supposed to learn if there are no entry-level jobs?"
Marcus mentioned that three of his former teammates—all junior developers—were also still looking for work. One had gone back to school for a master's degree, betting that more credentials would help. Another was doing gig work. The third had left tech entirely and was training to become an electrician. "He says at least the wires don't care about AI," Marcus told me.
I've started calling this the Entry-Level Extinction—the systematic elimination of the positions that have traditionally served as the first rung on professional ladders. Marcus's question haunts me because it points to something the aggregate statistics miss entirely.
The World Economic Forum explicitly raised this concern: "Is AI closing the door on entry-level job opportunities?" Their data suggests it is. When AI handles the routine tasks that junior employees once performed, there's less need for junior employees. But without junior employees, there's no pipeline for senior employees.
Companies are optimizing for short-term efficiency while hollowing out their future talent development. It's the classic tragedy of the commons, playing out in slow motion across every knowledge-work industry.
Workers aged 18-24 are 129% more likely than those over 65 to worry about AI making their jobs obsolete. And 49% of Gen Z job seekers believe AI has reduced the value of their college education.
They may be right.
The Great Reversal
For thirty years, the conventional wisdom held a comforting shape: technology threatens blue-collar work but protects knowledge workers. Factory jobs would be automated; office jobs were safe because they required language, judgment, creativity—the things machines couldn't do.
This assumption has collapsed so completely that we haven't fully processed what it means.
"Surprisingly enough, knowledge workers are facing the highest level of exposure here," observed Svenja Gudell, chief economist at Indeed Hiring Lab. "With automation, often it was manual labor that was replaced."
The reversal is not subtle. Software engineering—the very profession that built these AI systems—faces 95% task exposure according to Brookings Institution analysis. Information technology. Mathematics. Legal services. Accounting. The fields that required expensive degrees and years of training are precisely the fields where AI is making the fastest inroads.
Meanwhile, truck drivers? 30% task exposure. Hairstylists? Effectively immune. The barista who makes your morning coffee has more job security than the data analyst who crunches the company's numbers.
The class implications are profound.
For decades, the American dream pitched higher education as the path to security. Get a degree. Work with your mind instead of your hands. You'll be protected from the rough winds of technological change. Millions of families took on massive debt based on this promise.
That promise is breaking. And the people who made it—university administrators, career counselors, politicians celebrating the "knowledge economy"—have not yet reckoned with what they sold.
What the CEOs Actually Say (When They're Being Honest)
Corporate leaders are no longer pretending otherwise.
Ford CEO Jim Farley: AI will "replace literally half of all white-collar workers."
Salesforce CEO Marc Benioff: AI is already handling up to 50% of the company's workload.
At JPMorgan, managers have reportedly been told to avoid hiring as AI rolls out across the business.
These aren't fringe predictions. These are some of the most powerful business leaders in America telling their employees, in public, that half of them may be unnecessary.
The candor is unusual. I wonder sometimes if it's strategic—managing expectations downward so that any outcome short of mass layoffs can be framed as a success.
What interests me more than the predictions is the lived experience that doesn't match them.
I spoke with a senior product manager at a major tech company—someone who, according to the models, should be heavily exposed to AI displacement.
"My actual job hasn't changed much," she said. "I use AI to draft documents faster. I use it to summarize meetings. But the core of what I do—understanding customer needs, navigating organizational politics, making judgment calls about tradeoffs—that's still entirely human. The AI can write me a product requirements document. It can't tell me whether that document will actually get this product shipped given the seventeen different stakeholders who all have veto power."
The gap between task exposure and actual job displacement may be larger than the headline numbers suggest. A job where 60% of tasks can be automated is not a job that's 60% eliminated. It might be a job that's 60% freed up—with that time filled by the human-only tasks that remain.
Or it might be a job that gets cut to a fractional role. Or eliminated entirely because the remaining 40% doesn't justify a full-time salary.
Which outcome prevails depends on decisions made by people in corner offices. And those decisions are driven by factors that have little to do with technology.
The Training Myth
Every AI workforce discussion ends with the same prescription: reskilling. Train the displaced workers for the new jobs. Close the skills gap. The solution seems obvious.
Almost nobody is actually doing this at scale. And the reasons why are more damning than the gap itself.
According to SHRM research, 37% of employers say they provide reskilling programs. But when you ask employees, only 28% confirm this. Similarly, 44% of employers report offering upskilling programs; only 33% of employees agree.
This perception gap reveals something important. Either companies are overestimating their training efforts—counting a one-hour webinar as "upskilling"—or the training that exists is so disconnected from workers' actual needs that it might as well not exist.
Only 38% of companies offer AI-related training to their staff, despite 82% of business leaders acknowledging its importance.
This gap between stated priority and actual investment is not an oversight. It is a choice.
Training is expensive. Its returns are uncertain and long-term. Laying off workers and hiring new ones with different skills is faster and shifts the cost of education to individuals and the public education system. From a quarterly-earnings perspective, the calculus is clear.
I met a 26-year-old copywriter—let's call her Jamie—at a coworking space in Brooklyn. She was there because her company had closed its office; everyone worked from home now, but she couldn't stand the isolation. She was paying for desk space out of pocket.
Jamie had asked her employer about AI training. "They sent me a link to a free Coursera course," she said. "That was it. Like, I'm supposed to reinvent my career in my spare time after working 50-hour weeks."
She laughed, but there was no humor in it.
"I think they're hoping I'll just quit so they don't have to fire me. It's cheaper that way."
I asked if she knew others in similar positions. She counted on her fingers: her roommate, a graphic designer whose job had been cut to part-time; a friend from college who'd been a paralegal until her firm automated most document review; a former coworker who'd been laid off three months earlier and was still looking.
"We text each other job postings," she said. "It's like a support group. But for people who are all applying to the same shrinking pool of jobs."
The Credentials Trap
The data point that keeps me up at night: 77% of new AI-related jobs require master's degrees.
Think about what this means for Sarah Chen. For Marcus Thompson. For Jamie. For the millions of workers who face displacement.
The new jobs being created require credentials that most displaced workers don't have. Obtaining those credentials takes years and costs tens of thousands of dollars. The timeline for job displacement is measured in months; the timeline for retraining is measured in years.
This isn't a skills gap. It's a credentials gap. And the difference matters. A skills gap can be closed with training. A credentials gap requires institutional transformation that takes decades.
The World Economic Forum projects that 1.1 billion jobs will be transformed by technology in the next decade. Microsoft and its partners trained 23 million people in digital skills in 2024.
At that rate, it would take nearly fifty years to reach the affected population.
Training efforts aren't worthless. They're operating at a scale that's wildly insufficient for the challenge we face. The comfortable narrative of "reskilling will solve this" is a way of avoiding harder questions about who bears the costs of technological transitions.
The Agentic Acceleration
Everything I've described so far relates to the current generation of AI—systems that respond to prompts, generate content, and assist with discrete tasks.
The next wave is already arriving. And it's different in ways that matter.
NVIDIA CEO Jensen Huang opened CES 2025 by declaring it the "Year of AI Agents." His language was revealing: IT departments, he said, will soon function as "HR departments for AI agents, managing an expanding roster of digital employees."
Not tools. Employees.
Agentic AI systems don't wait for prompts. They perceive, reason, plan, and execute with decreasing levels of human intervention. The market for these systems is projected to grow from $5.1 billion in 2025 to $47 billion by 2030—nearly tenfold in five years.
According to McKinsey, 23% of organizations are already scaling agentic AI systems. Another 39% are experimenting.
What does this mean in practice?
I was at a conference in San Francisco last month. Late evening, the kind of networking hour where people's guards come down. I ended up at a hotel bar with a founder building agentic AI for customer service. Three drinks in, he got honest.
"Right now, our product handles tier-one support tickets. But in 18 months, it'll handle tier-two. And in three years—honestly—I'm not sure what human customer service looks like anymore. Maybe supervision. Maybe edge cases. But the volume work? That's going to agents."
He glanced around to make sure no one was listening.
"For the record, we emphasize human-AI collaboration in our marketing."
The gap between what AI companies say publicly and what they believe privately is one of the unspoken dynamics shaping this transition. I've heard similar candor from founders at half a dozen other AI companies. The public message is augmentation. The private belief is replacement.
New roles are emerging: prompt engineers, agent orchestrators, human-in-the-loop validators. LinkedIn job postings for these positions have tripled in 2025.
But look closer. These jobs require skills that most displaced workers don't have. They exist in concentrations in a handful of tech hubs. They represent thousands of positions, not millions.
The narrative that new technology creates new jobs is historically accurate. It says nothing about whether those new jobs will be accessible to the people displaced by the old ones.
The bookkeeper replaced by accounting software did not become a software engineer. The factory worker displaced by robots did not become a robotics technician. History suggests that new jobs go to new workers. Displaced workers struggle to find footing.
The View from Outside America
Most AI workforce analysis is written from an American perspective. This matters because the American experience is not universal—and in some ways, it may be misleading.
The EU AI Act, which takes full effect in August 2026, represents the world's most comprehensive attempt to regulate AI in employment contexts. Companies using AI in hiring, performance evaluation, or workforce management will face stringent documentation, transparency, and oversight requirements.
European policymakers explicitly prioritize worker protection over innovation speed. The theory is that regulated AI will be more trustworthy and ultimately more sustainable. The risk is that Europe regulates technology it doesn't build—in 2025, the U.S. produced about 40 large foundation models, China produced 15, and the EU produced 3.
China represents something different: AI deployed not just for productivity but for control.
Chinese warehouse workers wear biometric devices that track their movements. Delivery drivers are managed by algorithms that optimize for speed with little regard for human factors. Factory workers are scored on "efficiency metrics" that determine their continued employment.
This is a possible future for AI and work—one where the technology is used not to replace workers but to extract more labor from them. It's worth keeping in mind as we debate displacement versus augmentation: there's a third option. Intensification.
The Invisible Billions
Almost no AI workforce analysis seriously considers India, Southeast Asia, or Africa.
This is strange, given that India's technology services sector employs over 5 million people in roles directly exposed to AI automation. Call centers in the Philippines support hundreds of thousands of families. Back-office operations in Malaysia and Vietnam represent a growing share of global knowledge work.
A technology services executive in Bangalore, speaking off the record, put it to me bluntly. We talked over video; behind him, I could see the lights of a tech park through his office window. Thousands of people in that building doing exactly the work AI was learning to do.
"The American discussion is about whether AI will take middle-class jobs. For us, it's about whether an entire industry—one that created the first real middle class in Indian tech history—will exist in ten years."
He was talking about his own industry. His own job.
"We spent thirty years building a comparative advantage in doing what Americans didn't want to do—the back-office work, the support work, the code maintenance. Now AI can do it. What do we do? Become a country of prompt engineers? We don't have that many English speakers."
The global implications of AI-driven labor displacement have barely been examined. If millions of jobs disappear from India's tech sector, what happens to the economy that depended on them? If call centers in the Philippines close, what happens to the families they supported?
These questions aren't on the agenda of Davos panels. But they'll determine the lives of hundreds of millions of people.
Why This Time Might Actually Be Different
Every time AI displacement is discussed, someone invokes history. The textile workers of the industrial revolution. The agricultural laborers replaced by tractors. The typing pools eliminated by computers. Technology has always destroyed jobs while creating new ones. Why should this time be different?
It's a reasonable question. The track record of technological unemployment predictions is genuinely poor. Goldman Sachs researchers note that "predictions that technology will reduce the need for human labor have a long history but a poor track record."
Approximately 60% of U.S. workers today are in occupations that didn't exist in 1940. History is on the side of adaptation.
But I think the historical parallels obscure as much as they illuminate.
AI affects cognitive work that was previously considered automation-proof. The programmers who automated factory work are now being automated themselves. Every previous technological revolution affected a category of work while leaving others as refuges. AI appears to be coming for multiple categories simultaneously.
The pace is different. Previous technological transitions played out over decades. The textile industry didn't transform overnight. Tractors spread across American agriculture over a generation. This transition is happening in years, possibly months. The skills that made someone employable in 2023 may not make them employable in 2027.
The distribution is different. A handful of companies—OpenAI, Anthropic, Google, Microsoft—control the most advanced AI systems. The productivity gains flow disproportionately to their creators and to the companies wealthy enough to deploy them. The historical pattern where technology creates broadly shared prosperity may not apply when the technology is controlled by a handful of actors.
And there's something harder to quantify. Previous technologies augmented human capability. The combine harvester made farmers more productive. The spreadsheet made accountants more powerful. But AI doesn't just augment. It mimics. It produces outputs that look like human thought, human creativity, human judgment.
This affects identity in ways that tractors never did.
When your expertise can be approximated by software, what does your expertise mean?
The Missing Voices
There's a conversation that should be happening but isn't.
Organized labor has been largely absent from the AI workforce debate. The unions that once negotiated technological transitions—ensuring displaced workers received retraining, severance, and transition support—have been weakened to the point of irrelevance in most knowledge-work sectors.
A labor organizer I spoke with—someone trying to organize tech workers in Seattle—was blunt about the challenge:
"Tech workers have spent twenty years believing they were above labor organizing. They thought their skills made them irreplaceable. They thought they were partners with management, not employees who could be discarded. AI is teaching them otherwise. But by the time they realize they need collective power, it's too late. You can't organize a workforce that's being laid off."
The policy discussion is similarly impoverished. Outside academic circles, serious discussion of alternatives—universal basic income, AI taxation, mandatory retraining funds, reduced work hours—is effectively taboo in mainstream political discourse. The assumption that markets will sort this out remains unchallenged despite mounting evidence that markets are producing concentrated gains and distributed losses.
I don't know what the right policy responses are. But I'm fairly confident that the current approach—hoping reskilling and market forces will handle everything—is insufficient.
The Augmentation Possibility
I've spent a lot of this analysis on the dark side. The other side is real too.
There's a version of AI and work that doesn't end in displacement. That ends in enhancement. The venture capital community has noticed that AI solutions designed for human collaboration often demonstrate more immediate value than purely autonomous systems. The "augmentation-first" approach is increasingly fashionable.
The productivity evidence is real. Studies estimate that around 80% of U.S. workers may see AI affect at least 10% of their tasks—but affecting is not replacing. The per-hour gains are striking: roughly a 33% productivity gain for each hour spent using generative AI. Averaged across the entire workforce, that works out to a 1.1% increase in overall productivity.
In practical terms: a lawyer who uses AI to draft documents spends less time on rote work and more time on strategy. A marketer who uses AI to generate campaign variations can test more ideas faster. A software developer with AI coding assistants ships features more quickly.
I experience this myself. I use AI to research, to draft, to edit. This article would have taken me three times as long to write without it. I'm more productive than I was two years ago. The technology has genuinely made my work better.
Research in Management Science found that when humans and AI work together on judgment tasks, collaborative performance is often better than what either achieves alone.
I believe augmentation is real.
What worries me is who gets to decide whether augmentation happens.
Whether AI augments or replaces isn't a technical question. It's an organizational and political question. The same technology that could make ten workers more productive could instead be used to fire five workers and make the remaining five work harder. The choice between these outcomes is made by people with particular interests and incentives.
And those interests, right now, favor displacement.
Investors reward efficiency. Executives are incentivized to cut costs. Boards respond to quarterly earnings. The augmentation story is compelling, but it requires decision-makers to choose long-term capability building over short-term cost reduction.
That's a choice many organizations don't make.
One CHRO I spoke with put it starkly: "I know that AI-augmented teams outperform AI-without-humans in complex work. I've seen the research. I believe it. But when my CFO asks why our headcount hasn't dropped given our AI investments, 'augmentation' is a hard argument to make. The pressure is always to do more with less. Augmentation sounds like an excuse for not cutting."
What Should Actually Happen
I'm going to tell you what I actually think rather than what's politically safe.
The comfortable consensus is that reskilling will solve the displacement problem. It won't—not at the scale required. Yes, invest in training. But be honest about its limits. Not everyone can be retrained for every new role. Some workers will not successfully transition, not because they lack ability but because their circumstances constrain their options. Age. Location. Family obligations. Health. If your organization is going to displace workers, acknowledge that some will struggle. Consider what you owe them beyond a severance check and a link to LinkedIn Learning.
The Entry-Level Extinction deserves more attention than it's getting. The automation of junior roles creates a pipeline problem. If no one learns the foundational work, there's no one to do the advanced work in five years. Before eliminating entry-level positions, ask: where will our senior talent come from in 2030?
Regulation is coming whether companies prepare or not. The EU AI Act's high-risk provisions take effect August 2026. Illinois's AI amendment takes effect January 2026. Colorado follows in February. Organizations that treat these as distant concerns will discover they're immediate problems.
And honesty matters more than most leaders acknowledge. The organizations that handle this transition best will be those that communicate honestly with their employees. Not corporate messaging about "empowerment" and "future-proofing." Actual honest conversations about which roles are changing, what skills will matter, and what support is available. People can handle uncertainty. What they can't handle is uncertainty combined with dishonesty.
For individuals, I'm reluctant to give advice, because the honest answer is often "it depends" and sometimes "there are no good options."
What I tell my own family members: Learn to use AI tools—this is table stakes now. But don't assume AI expertise alone provides security; the tools change faster than expertise develops. Invest in what AI can't do yet—complex interpersonal communication, creative problem-solving in genuinely novel situations, ethical judgment, physical presence. Build adaptability as a core competency; by 2030, 70% of job skills may change, and the workers who thrive will be those who view their careers as continuous adaptations rather than fixed trajectories.
And if you're young: think hard about field choice. The fields where physical presence matters, where human judgment is essential, where regulation creates barriers to automation—these seem more durable than purely cognitive work that can be approximated by algorithms.
Living in the Uncertainty
I reached out to Sarah Chen again while finishing this analysis.
She still hasn't found full-time work. She's doing contract consulting—helping companies set up the AI marketing systems that replaced her—at a fraction of her former salary.
The irony isn't lost on her.
"I'm not going to give you a happy ending," she told me. "I don't have one. I'm surviving. I'm figuring it out. I'm angrier than I let on, but anger doesn't pay rent."
I asked if it had changed her view of her career, of what she'd spent a decade building.
"What I want people to understand is that this isn't an abstraction. When you read these reports about '92 million jobs displaced,' each one of those is someone's life. Someone who had plans and commitments and bills to pay. Someone who did everything right according to the old rules and is now discovering the rules changed without warning."
I asked what she'd want decision-makers to know.
"That we're not numbers. That 'net job creation' means nothing when you're the one whose job got subtracted. That the people making these decisions about AI deployment—the CEOs and investors and policymakers—they're protected. Their kids will be fine. They're making choices that affect millions of people who have no say in those choices."
She laughed, but there was nothing funny in it.
"That's not very objective, is it? Maybe I'm just bitter."
Maybe she is. But she's also right.
The transformation underway is not weather. It's the result of choices—by companies, by investors, by policymakers, by consumers. The shape it takes is not predetermined.
We are making choices, right now, that will shape how the next decade unfolds.
The question is not whether you'll be affected. The question is whether you'll have a voice in shaping what comes next.