Every year in Tokyo’s Shibuya district, a group of performance artists called Signal Corps organized what they called the Marathon of Signs — a public procession that blurred protest, art, and social commentary. This year, however, things were different.
The theme was “Real Signals vs. Noise” — a response to an age dominated by AI-generated content, viral misinformation, and the collision of human expression with algorithmic amplification.
Act I — The Empty Megaphone
On an overcast Sunday, the Signal Corps assembled at Shibuya Crossing, each member wearing a mask stylized like a QR code. The lead performer, Akira, stepped forward with a gigantic megaphone mounted on wheels.
He raised the megaphone and… said nothing.
Pedestrians paused. Some laughed, some took videos, others scanned the QR codes on the masks.
But when people scanned them, they weren’t taken to flashy promotional sites or trendy NFT drops.
Instead, each QR code resolved to a live data feed of misinformation trends — showing how often certain phrases were being amplified across networks.
The silent megaphone became a symbol: mass communication without substance is noise in motion.
Akira later explained in an interview with A Shimbun that the act wasn’t a “lonely clown” moment. It was performance grounded in shared meaning — a symbol of contemporary information overload. Without that context, the silence would’ve been hollow.
“If we just stood quietly with the megaphone, we’d be a street oddity,” Akira said.
“But placing it against the backdrop of real data makes it a question — what are we listening to?”
Act II — The Deepfake Chorus
Next, a group of performers dressed as everyday workers — teachers, delivery drivers, nurses — stepped into the intersection. Each wore a digital tablet strapped to their chest, its display streamed from a real-time server.
But the faces on those screens weren’t theirs. Instead, they were AI-generated composites — familiar yet uncanny. Some faces looked like celebrities, others like public figures, creating an eerie chorus of voices repeating phrases such as “Tell me it’s real” and “Who decides meaning?”
Spectators murmured: Is that person real? Are they even here?
By using deepfake visuals — a cutting-edge technology that’s both captivating and controversial — the artists transformed a high-tech fear into a shared public symbol:
Faces can be real, messages can be fake. Performance must anchor itself in meaning, not mere spectacle.
Act III — The Consensus Circle
As twilight settled, Akira invited everyone to join a circle around a giant LED globe, which displayed mapped data on climate impact, pandemic responses, and generative AI regulation, drawn in real time from sources such as the IPCC and the WHO.
People from all walks of life stepped in. A software engineer stood next to a student, next to a retiree. They read the globe’s projections, voiced concerns, shared stories.
It wasn’t scripted; it was emergent. But it worked because it was grounded in mutual relevance:
climate change affects all; information ecosystems shape democratic life; technology changes meaning.
The globe became the night’s most powerful symbol — not because it was flashy, but because it invited collective interpretation.
⸻
Why This Performance Worked
In academic terms, performance requires shared symbolic infrastructure — gestures, sounds, images that are already embedded in cultural understanding. Without that, an act becomes ambiguous at best, absurd at worst. Signal Corps used:
• Silent amplification to symbolize meaning-less broadcast
• AI deepfakes to reflect authenticity anxieties
• Live data visualizations to anchor performance in global, verifiable phenomena
Each act drew from real, contemporary issues — social media dynamics, generative AI ethics, climate change communication — and made them public through recognizable symbols. That’s how a performance avoids being a “lonely clown” and becomes a shared cultural experience.
⸻
All names of people and organizations appearing in this story are pseudonyms.