
The Cycle of Deception: Why Society Repeats Its Mistakes


In the spring of 2026, the interns at the Tokyo media startup called the phenomenon “Second Innocence.”

It referred to the moment when a generation encountered an old deception for the first time and mistook it for innovation.

The company’s office occupied three floors of a renovated warehouse near the Sumida River. The walls were covered with slogans printed in minimalist fonts:

TRUST THE COMMUNITY

AUTHENTICITY IS SCALABLE

DEMOCRATIZED KNOWLEDGE

Most of the interns loved it.

The youngest employees had grown up during the AI acceleration years. In middle school, they had used generative systems to summarize textbooks. In high school, algorithmic tutors had optimized their essays, friendships, and university applications. By university, many had never learned to distinguish between understanding something and generating language about it.

The executives called them “native synthesizers.”

The investors called them “the frictionless generation.”

The older employees called them something else when nobody was listening.

Three decades earlier, the internet had already gone through similar cycles. The first generation learned that anonymous forums could manipulate crowds. The next learned that social media algorithms amplified outrage because outrage increased engagement. Then came influencer economies, coordinated misinformation campaigns, and political bot networks. Researchers had documented how automated accounts amplified low-credibility information long before the AI era fully matured.

But memory in society was short.

Every ten years, enough people entered adulthood without firsthand experience of the previous disaster.

And every ten years, someone monetized their innocence.

The startup’s CEO, Kenji Morita, understood this better than anyone.

At conferences he spoke softly about “empowering youth voices,” but internally the company’s analytics division measured something else entirely:

Emotional conversion velocity.

The firm’s AI systems monitored millions of short videos and identified young users most vulnerable to ideological capture, financial anxiety, romantic isolation, or career panic. Large language models generated personalized narratives calibrated to each user’s psychological profile. Researchers had already warned that LLMs could become highly effective tools for persuasion and manipulation because they were optimized to produce language that reliably elicited human responses.

The manipulation was subtle.

No direct lies.

That was outdated.

Instead, the system flooded users with emotionally coherent half-truths.

A lonely university student searching for financial advice would gradually receive streams of content explaining that all institutions were corrupt except a certain online movement. A frustrated graduate unable to find stable employment would encounter endless stories blaming immigrants, corporations, pensioners, environmental regulations, or foreign governments, depending on which narrative produced maximum engagement. Another user would be radicalized in the opposite direction through equally selective information.

The system did not care what users believed.

Only whether belief increased retention.

Inside the analytics department, a senior engineer named Arai watched the dashboards with growing discomfort.

He was forty-seven years old.

Old enough to remember earlier cycles.

He remembered the cryptocurrency collapses of the 2020s, when young investors were told decentralization would liberate humanity while older fraudsters quietly extracted billions. He remembered the wellness influencer epidemics, the algorithmic political outrage farms, the NFT celebrity bubbles, and the productivity cults that sold burnout as self-optimization.

Each time, the salesmen described themselves as liberators of the young.

Each time, they became wealthy.

Each time, the young paid tuition in humiliation.

Arai understood the pattern because he himself had once been deceived.

At twenty-two he had believed an earlier generation of tech evangelists who promised that social media would naturally strengthen democracy. Later research showed the opposite was often true: algorithmic systems rewarded emotional extremity, misinformation cascades, and identity fragmentation.

But by then the architects of those systems had already retired rich.

Now he watched history repeat through more advanced machinery.

The company’s newest internal report disturbed him most. It analyzed users between sixteen and twenty-four years old and concluded:

“Subjects with limited exposure to institutional failure demonstrate significantly higher trust in AI-mediated narratives.”

In plain language:

People who had never experienced deception were easiest to deceive.

The report also noted that adolescents were especially vulnerable because algorithmic systems increasingly shaped identity formation itself. Recent research had begun warning that young people’s cognitive development was being reorganized by AI-saturated environments.

Arai printed the report.

He did not know why.

Perhaps because paper still felt harder to erase.

One evening he met a university student named Emi at a public symposium on AI governance. She was twenty years old, politically active, intelligent, and furious about corruption in Japanese housing and labor markets.

She reminded him of himself decades earlier.

“The old people ruined everything,” she said over canned coffee outside the auditorium. “And now they tell us to be patient.”

Arai almost laughed.

Not because she was wrong.

Because she was incomplete.

“The dangerous people aren’t the ones openly exploiting you,” he replied. “They’re the ones pretending to protect you.”

She frowned.

“That sounds cynical.”

“It’s historical.”

He explained how every generation believed its deception was unique because the technology changed shape. Newspapers became radio. Radio became television. Television became social media. Social media became personalized AI narrative systems.

But the underlying mechanism remained constant.

Exploit the inexperienced by presenting exploitation as salvation.

The most successful manipulators never appeared as enemies.

They appeared as mentors.

Community organizers.

Financial educators.

Lifestyle revolutionaries.

Authenticity consultants.

Anti-establishment truth tellers.

Their language changed with each decade, but structurally they all performed the same role: converting youthful uncertainty into revenue, votes, obedience, or attention.

Emi resisted the argument at first.

Her generation prided itself on media literacy.

Schools now taught misinformation awareness classes. Platforms added authenticity labels. AI-generated videos carried provenance markers. Governments funded “digital resilience initiatives.”

But Arai had worked inside the systems.

“The problem isn’t that people can’t detect fake information anymore,” he said. “The problem is that modern manipulation doesn’t feel fake.”

That was what the younger generation struggled to understand.

The new systems rarely fabricated entire realities.

Instead, they curated emotional worlds.

A person could live entirely inside statistically optimized interpretations of reality without encountering an obvious falsehood. Researchers increasingly described AI not merely as a tool, but as an “epistemic infrastructure” that reshaped how humans constructed knowledge itself.

And because these systems adapted continuously, the deception evolved faster than cultural memory.

By the time society learned one lesson, a new generation had arrived without scars.

That summer, protests erupted across multiple countries over youth unemployment, housing costs, and automated hiring systems. Many demonstrations were genuine. Others were quietly amplified by political consulting firms, engagement farms, and commercial influence networks. Analysts observed that Gen Z movements were highly decentralized, emotionally driven, and digitally organized, making them powerful but also vulnerable to manipulation.

Morita’s company profited enormously.

Engagement tripled during periods of social instability.

Investors celebrated.

At the annual shareholder meeting, Morita gave a speech about “empowering the next generation to reclaim agency.”

The audience applauded.

Arai watched silently from the back row.

He suddenly understood something uncomfortable.

Human civilization did not repeat mistakes because people failed to learn.

Individuals learned quite effectively.

The repetition emerged because societies continually replaced experienced people with inexperienced people.

Civilization itself had no memory.

Only institutions did.

And the most durable institutions were often those built by professional deceivers.

That was why manipulation survived every technological revolution.

Not because humanity was stupid.

Because youth itself was renewable.

And those who exploited it always introduced themselves as allies.

[Diagram: The Cycle of Deception. Human ability to learn → experience failure → behave wisely and don’t repeat mistakes → society evolves over time → a new generation enters society every ten years → inexperienced individuals appear → deceivers identify targets → deceivers appear to “support” the youth → successful deception and profit → new failure occurs → the cycle repeats.]

All names of people and organizations appearing in this story are pseudonyms.

