What Was the TED Talk?

Bill Gates wheels a hefty metal barrel out onto a stage. He carefully places it down and then faces the audience, which sits silent in a darkened theater. “When I was a kid, the disaster we worried about most was a nuclear war,” he begins. Gates is speaking at TED’s flagship conference, held in Vancouver in 2015. He wears a salmon pink sweater, and his hair is combed down over his forehead, Caesar-style. “That’s why we had a barrel like this down in our basement, filled with cans of food and water,” he says. “When the nuclear attack came, we were supposed to go downstairs, hunker down, and eat out of that barrel.”

Now that he is an adult, Gates continues, it is no longer nuclear apocalypse that scares him, but pestilence. Over the past year, Ebola has killed more than ten thousand people in West Africa. If the virus had been airborne or spread to a large city center, things would have been far worse. It might’ve snowballed into a pandemic and killed tens of millions of people. Gates tells the TED attendees that humanity is not ready for this scenario — that a pandemic would trigger a global catastrophe on an unimaginable scale. We have no basement to retreat to and no metal barrel filled with supplies to rely on.

But, Gates adds, the future might turn out okay. He has an idea. Back when he was a kid, the U.S. military had sufficient funding to mobilize for war at any minute. Gates says that we must prepare for a pandemic with the same fearful intensity. We need to build a medical reserve corps. We need to play germ games like generals play war games. We need to make alliances with other virus-fighting nations. We need to build an arsenal of biomedical weapons to attack any non-human entity that might attack our bodies. “If we start now, we can be ready for the next epidemic,” Gates concludes, to a round of applause. 

Of course, Gates’s popular and well-shared TED talk — viewed millions of times — didn’t alter the course of history. Neither did any of the other “ideas worth spreading” (the organization’s tagline) presented at the TED conference that year — including Monica Lewinsky’s massively viral speech about how to stop online bullying through compassion and empathy, or a Google engineer’s talk about how driverless cars would make roads smarter and safer in the near future. In fact, seven years after TED 2015, it feels like we are living in a reality that is the exact opposite of the future envisioned that year. A president took office in part because of his talent for online bullying. Driverless cars are nowhere near as widespread as predicted, and those that do share our roads keep crashing. Covid has killed five million people and counting. 

At the start of the pandemic, I noticed people sharing Gates’s 2015 talk. The general sentiment was one of remorse and lamentation: the tech-prophet had predicted the future for us! If only we had heeded his warning! I wasn’t so sure. It seems to me that Gates’s prediction and proposed solution are at least part of what landed us here. I don’t mean to suggest that Gates’s TED talk is somehow directly responsible for the lack of global preparedness for Covid. But it embodies a certain story about “the future” that TED talks have been telling for the past two decades — one that has contributed to our unending present crisis.

The story goes like this: there are problems in the world that make the future a scary prospect. Fortunately, though, there are solutions to each of these problems, and the solutions have been formulated by extremely smart, tech-adjacent people. For their ideas to become realities, they merely need to be articulated and spread as widely as possible. And the best way to spread ideas is through stories — hence Gates’s opening anecdote about the barrel. In other words, in the TED episteme, the function of a story isn’t to transform via metaphor or indirection, but to actually manifest a new world. Stories about the future create the future. Or as Chris Anderson, TED’s longtime curator, puts it, “We live in an era where the best way to make a dent on the world… may be simply to stand up and say something.” And yet, TED’s archive is a graveyard of ideas. It is a seemingly endless index of stories about the future — the future of science, the future of the environment, the future of work, the future of love and sex, the future of what it means to be human — that never materialized. By this measure alone, TED, and its attendant ways of thinking, should have been abandoned.

Though no longer at its cultural apogee, the public speaking platform is more active than ever. New TED talks are constantly being delivered and then uploaded and viewed on TED’s website and YouTube channel, which has 20.5 million subscribers. There are TED podcasts, a TED newsletter, a TED Ideas blog, and a TED publishing arm, which prints “short books to feed your craving for ideas.” Before the end of this year, TEDx events will be held in Barcelona, Tehran, Kabul, Hanoi, Wuhan, Anchorage, Jakarta, Algiers, Lagos, and Tomsk, the oldest university town in Siberia. People are still paying between $5,000 and $50,000 to attend the annual flagship TED conference. In 2021, the event was held in Monterey, California. Amid wildfires and the Delta surge, its theme was “the case for optimism.” There were talks on urban possibility (“finding new, smarter ways to live together”), a tech comeback (how genetic technology and NFTs “make the case for rousing the techno-optimist in us”), and climate confidence (“How we might beat this thing!”). 

I recently watched some of the talks from this conference on my laptop. They hit like parodies of a bygone era, so ridiculous that they made me almost nostalgic for a time when TED talks captivated me. Back then, around a decade ago, I watched those articulate, audacious, composed people talk about how they were building robots that would eat trash and turn it into oxygen, or whatever, and I felt hopeful about the future. But the trash-eating robots never arrived. With some distance, now, from a world in which TED seemed to offer a bright path forward, it’s time to ask: what exactly is TED? And what happened to the future it envisioned?

 

In ancient Athens, public speaking was understood primarily as a means of persuasion; learning to convince others was the duty of a democratic citizen. For Confucius, refined speech was the embodiment of refined ethics. In nineteenth-century America, popular lectures delivered in lyceums up and down the East Coast were seen as a form of moral uplift, raising the nation’s cultural standards and satisfying the middle class’s rapacious appetite for useful knowledge. The primary function of TED, by contrast, is to predict the future. 

The inaugural TED conference, held in Monterey, California in 1984, was organized by Richard Saul Wurman, an architect, and Harry Marks, a TV broadcast designer, who shared a conviction that the separate fields of technology (T), entertainment (E), and design (D) were converging, and that their convergence was going to change the world. Lofty futurism was nothing new for the Silicon Valley cohort that attended the first TED conference. Since the dawn of digital computing, the engineers and mathematicians building the new machines had spoken of how their inventions would instigate revolutions, upend institutions, disrupt industries, and transform what it means to be human. John von Neumann, sometimes considered the father of modern computing, is said to have confessed to his wife, “What we are creating now is a monster whose influence is going to change history, provided there is any history left.” The world was about to enter a new historical epoch. The change would be exponential and irrevocable across every sphere of human activity. Wurman and Marks packaged this futurist imaginary and sold it as a live event. 

There are two recordings that survive online from the 1984 TED conference. One is a demo by a Sony executive, dressed in a suit with a silk pocket square, who plays Richard Strauss’s Also sprach Zarathustra on the newly invented compact disc. The other is a presentation by Nicholas Negroponte, an MIT professor who would later launch the MIT Media Lab, in which he offers five predictions: electronic books, touchscreens, teleconferencing, a computationally mediated service industry, and laptops for every student in school (an idea that would eventually lead to the spectacularly abortive One Laptop per Child program, which Negroponte himself oversaw).

TED wasn’t held again until 1990, when Wurman selected a more wide-ranging lineup of speakers. This time, he added philosophers, musicians, religious leaders, and philanthropists to the roster. The talks had to be tech- and future-oriented, but also, crucially, entertaining and visionary. Fewer practical demonstrations, more high-level sermonizing. At TED2, Jaron Lanier spoke of the perception-bending potential of virtual reality. In just an hour and a half, Lanier said, he could build a digital world that attendees could later explore in the V.R. room at the conference. John Naisbitt, then a popular futurist and author, and his wife, Patricia Aburdene, talked about the “megatrends” that would transform our lives in the coming decades — more people would move back to country towns and work remotely via their computers, for instance. For some, this high-minded forecasting seemed odious. Ted Nelson, inventor of Project Xanadu, an ill-fated, hypertext-based alternative to the World Wide Web, also gave a talk at TED2. He began by calling the event “the most self-referentially smug conference that I have been at outside Apple and IBM.” But the new TED format was a commercial success, and over the 1990s, the conference gained a cult following, particularly among the increasingly wealthy Silicon Valley crowd.

Wurman’s curatorial decisions and fastidious quality control gave early TED a certain nerdy-chic exclusivity. He sat on stage for every talk. If speakers rambled or bungled a line, he would leave his chair and stand directly behind them until they noticed and left the stage. But after a decade, he grew tired of TED and, in 2001, sold it to Chris Anderson, a British media entrepreneur who made a fortune building websites (including the popular video game site IGN) and publishing magazines about “the web” (like Business 2.0) before things went south in the dot-com crash. 

Anderson’s worldview had been heavily influenced by the tech prophets of the day, like Kevin Kelly, WIRED’s born-again Christian founding editor. In his 1998 book, New Rules for the New Economy, Kelly had argued that in the digital age, value would no longer be generated through the movement of materials and energy, but through the spreading of ideas and stories — discourse converted with perfect efficiency into capital. Anderson was thinking along the same lines. In 1996, he had established the Sapling Foundation, a not-for-profit whose goal was to “provide a platform for the world’s smartest thinkers, greatest visionaries and most-inspiring teachers, so that millions of people can gain a better understanding of the biggest issues faced by the world, and a desire to help create a better future.”

When Anderson purchased TED, it was in the Sapling Foundation’s name, and at the 2002 TED conference, he sketched out a plan through which TED would further its ambitions. On stage, leaning forward in his office chair, he affected a kind of conceited chumminess distinctly reminiscent of David Brent from the British version of The Office. He began with an anecdote about his “eighteen months of business hell” in the late 1990s, but claimed that the experience had given him new insight into the future. The coming decades would not be about gatekeeping or rigid disciplinary boundaries or exclusivity, Anderson said. The future was about openness, connection, democratization of knowledge, collectivity. Ideas were there to be spread. Through this spreading, a better world would materialize. “To understand anything,” he said, “you just need to understand the little bits.”

 

Anderson’s speech was well-received — Jeff Bezos, an early TED enthusiast who was in the audience that day, gave him a standing ovation — and over the next few years, the conference’s reputation swelled. Anderson continued courting Silicon Valley’s future-builders for presentations. In 2003, Bezos himself, then still at the beginning of his meteoric rise to the top of the global rich list, gave a talk about how the internet was not a commodity, like gold; it was a utility, like electricity. Despite the recent downturn, there were plenty more fortunes to be made in the coming decades. “With innovation there isn’t a last nugget,” he said, grinning, as if he could see his own bright future unfolding before him. Anderson also invited several “thought leaders” who talked convincingly about how technological changes would transform society and human nature itself. This new TED tone was embodied by author Steven Johnson’s 2003 talk, “The Web as a City.” Johnson started with a story about the resilience of New Yorkers in the West Village after 9/11 and then twisted it into a metaphor for Google’s PageRank algorithm: how people and data points, if organized in dense amalgams, can generate a “collective intelligence” greater than the sum of its parts.

True to Anderson’s vision, TED’s reach continued to grow. The first TEDGlobal event was held in the U.K. in 2005. The biggest expansion came a year later, when TED began to upload talks online, where anyone could view them for free. Among the first talks posted was Sir Ken Robinson’s “Do Schools Kill Creativity?,” which has become the most viewed TED talk of all time; it has been watched 72.2 million times in the sixteen years since. In an immaculately delivered speech, the British author and art education consultant proposed that the public education model developed during the Industrial Revolution was outdated: it valued literacy and numeracy above all, and cared mainly about disciplining children so as to make them malleable for the workforce. In the future, according to Robinson, these skills would become increasingly irrelevant because of a vague set of changes that were afoot. “It’s the combination of all the things we’ve talked about,” he said, referring to other talks at the conference. “Technology and its transformational effect on work, and demography and the huge explosion in population. Suddenly, degrees aren’t worth anything. Isn’t that true?” In this coming new age, it would be the teacher’s role to help foster children’s creative sparks so they could generate original, fresh, and unlikely ideas to shape the world to come. “Our task is to educate their whole being, so they can face the future,” he said. “We may not see this future, but they will. And our job is to help them make something of it.”

It was around the same time that TED talks began to take on a distinct rhetorical style, later laid out in Anderson’s book TED Talks: The Official TED Guide to Public Speaking. In it, Anderson insists anyone is capable of giving a TED-esque talk. You just need an interesting topic and then you need to attach that topic to an inspirational story. Robots are interesting. Using them to eat trash in Nairobi is inspiring. Put the two together, and you have a TED talk. 

I like to call this fusion “the inspiresting.” Stylistically, the inspiresting is earnest and contrived. It is smart but not quite intellectual, personal but not sincere, jokey but not funny. It is an aesthetic of populist elitism. Politically, the inspiresting performs a certain kind of progressivism, as it is concerned with making the world a better place, however vaguely. “The speaker’s work and words move you and fill you with an expanded sense of possibility and excitement,” Anderson writes of the successful TED talk. “You want to go out and be a better person.” All this can be achieved, Anderson proposes, without any serious transfers of power. In fact, in Anderson’s view, not only is culture upstream from politics, but politics itself — dependent on what he calls “tribal thinking” — destroys the free movement of ideas, with all its world-changing potential. “The toxicity of our political… nonconversations is a true tragedy of the modern world,” he writes. “When people aren’t prepared or ready to listen, communication can’t happen.” 

 

TED was never the sole purveyor of inspiresting content. Malcolm Gladwell was inspiresting. The blog Brain Pickings was inspiresting. Burning Man was (once) inspiresting. Alain de Botton, Oliver Sacks, and Bill Bryson were masters of the inspiresting. “This American Life” and “Radiolab,” and maybe narrative podcasting as a form, are inspiresting. But it was TED talks that perhaps best distilled the inspiresting down into its most concentrated form — eighteen minutes of pure inspiresting. 

Jill Bolte Taylor’s 2008 talk “My Stroke of Insight” perhaps represents peak inspiresting. A neuroanatomist who spent years “mapping the micro-circuitry of the brain” (interesting), Bolte Taylor tells the story of how one morning she woke up to find that she was having a stroke. Instead of panicking, she saw it as an opportunity to study her own brain. The left side of her brain was bleeding, and she began to lose the cognitive functions required for analytical thought and self-identification. Her right brain took over and she felt herself suddenly to be living in an eternal present, at one with everything. She said “goodbye… to life,” closed her eyes, and felt her spirit become “like a great whale gliding through a sea of silent euphoria.” As Bolte Taylor recounts the experience in front of the TED audience, her eyes fill with tears. Her voice becomes incantatory. “We have the power to choose, moment by moment, who and how we want to be in the world,” she says. “I thought that was an idea worth spreading.” (Inspiring.)

What is distinctly TED about Bolte Taylor’s talk is how she makes brain science somehow about self-improvement. Her talk aims to teach us not only about the hemispheres of the brain, but also about how to be better humans. This particular rhetorical strategy, already deployed by self-help gurus and life coaches, is apparent in many of the most popular TED talks at the height of the TED era, a period that loosely correlates with Barack Obama’s first term. In her talk, Elizabeth Gilbert, the author of Eat, Pray, Love, speaks about how each of us can, like her, become a creative genius. Brené Brown, a social work researcher, tells us how interpersonal vulnerability changed her life — and how it could change yours, too. We were no longer just watching TED talks for information about the future of the world out there. TED talks could help us make our future selves superior.

Suddenly, circa 2010, everyone was sharing TED talks — via email, on Facebook, on personal blogs — and TED, a not-for-profit, was making a lot of money. Tickets to the annual conference were going for $7,500 ($15,000 for VIPs). TED set up partnerships with elite brands. TED speakers who achieved millions of views could parlay the attention into new careers as so-called thought leaders. Advertising executive Simon Sinek’s 2009 talk, “How Great Leaders Inspire Action,” massively boosted sales of his self-help-cum-entrepreneurship book Start with Why. Sarah Kay, a spoken-word performer whose TED-talk poem, “If I Should Have a Daughter,” amassed millions of views, told Business Insider, “I’m very much aware that my career would not be what it is had that video not gone online.” In other words, if you could master the TED-style talk, you could sell anything — even slam poetry. 

The TED format proved alluring to some academics, too, especially those hoping for a crossover career appealing to both scholarly and popular audiences. Brown’s vulnerability talk vaulted her research to international fame: each one of her books since has appeared on The New York Times Best Seller list. Though there was no peer review process for TED, the conference had a review system to ensure that speakers were presenting valid, trustworthy material. Quality control was compromised, though, through the creation of the TEDx franchise, which from 2009 let licensees use the brand platform to stage independent events around the world. At a 2010 TEDx event, Randy Powell, a man who claimed to be at the “forefront of the most advanced mathematics ever known to mankind,” spoke about what he called vortex-based mathematics. This previously undiscovered branch of math would, he said, “create inexhaustible free energy, end all diseases, produce all food, travel anywhere in the universe, build the ultimate supercomputer and artificial intelligence, and make obsolete all existing technology.” He got a standing ovation.

The video went largely unnoticed until 2012, when a handful of science bloggers found it and pilloried Powell’s claims. The talk, they said, was constructed entirely out of meaningless jargon. In an online forum, a theoretical physicist said that Powell was “either (1) insane, (2) a huckster going for fame or money, or (3) doing a Sokal’s hoax on TED.” Spectacles like this one damaged TED’s brand, and soon critics were accusing the institution of vapid oversimplification. Evgeny Morozov wrote in The New Republic, “TED is no longer a responsible curator of ideas worth spreading. Instead it has become something ludicrous.” A long profile of Anderson in The New York Times Magazine called TED “the Starbucks of intellectual conglomerates.” 

Perhaps the most incisive critique came, ironically, at a 2013 TEDx conference. In “What’s Wrong with TED Talks?” media theorist Benjamin Bratton told a story about a friend of his, an astrophysicist, who gave a complex presentation on his research before a donor, hoping to secure funding. When he was finished, the donor decided to pass on the project. “I’m just not inspired,” he told the astrophysicist. “You should be more like Malcolm Gladwell.” Bratton was outraged. He felt that the rhetorical style TED helped popularize was “middlebrow megachurch infotainment,” and had begun to directly influence the type of intellectual work that could be undertaken. If the research wasn’t entertaining or moving, it was seen as somehow less valuable. TED’s influence on intellectual culture was “taking something with value and substance and coring it out so that it can be swallowed without chewing,” Bratton said. “This is not the solution to our most frightening problems — rather, this is one of our most frightening problems.” (Online, his talk proved to be one of many ideas worth spreading. “This is by far the most interesting and challenging thing I’ve heard on TED,” one commenter posted. “Very glad to come across it!”)

 

The criticism leveled at TED foreshadowed a backlash against the technocratic elite that would continue to pick up steam in the following years. Twitter had failed to bring democracy to the Middle East. It became clear that social media was only free because our personal data was being mined and sold to advertisers. Obama was not the political savior many had hoped him to be. The banks had been bailed out, but most regular people were still suffering from the fallout of the financial crisis. For a generation coming of age, upward mobility, social equality, and the utopian promises of technology were revealed as mere fantasies. Inspiresting content in general was becoming less relevant, more cringe — a signifier of liberal ineptitude. On Tumblr and Twitter, there was a meme where you’d write some incredibly banal or absurd observation and then add: “Thanks for coming to my TED talk.”

Meanwhile, the TED talks continued, endlessly re-articulating tech’s promises without any serious critical reflection. Indeed, those associated with TED — the curators and the speakers — still appeared authentically attached to the futures they were articulating, despite evidence that these visions were illusory. In 2014, for instance, Elizabeth Holmes gave a talk at a medical-themed TED conference about the technology that her company, Theranos, was using to make blood tests more efficient. By the time she appeared at TED, many inside the company already understood that the technology was not working as it was supposed to. And yet Holmes willingly got on stage and sold the story, and TED promoted it, further propelling Theranos to its peak $10 billion valuation. 

Of course, Holmes’s fraud wasn’t TED’s fault, directly. But the public speaking platform’s philosophy, which conflated telling a story about an idea with its realization, fostered a certain myopic self-belief in people like Holmes that they could create the world ex nihilo with willpower and well-crafted oratory alone. The TED philosophy encouraged boldness of vision, but also denial of reality. As such, it was a magnet for narcissistic, recognition-seeking characters and their Theranos-like projects. Other questionably ambitious and ultimately doomed projects promoted through the TED platform include: Shai Agassi’s electric car idea, Better Place; Pranav Mistry’s SixthSense technology, which was supposed to make our devices obsolete by integrating computers with our bodies; and Peter Molyneux’s video game Milo, the virtual boy, in which players interacted with an artificially intelligent eleven-year-old. I recently heard a business journalist say that when he’s short on story ideas, he’ll just look for people who have given TED talks and “figure out what they’re lying about.”

I became directly acquainted with the TED philosophy in 2015 — the year the Theranos story began to unravel, the year Gates gave his pandemic talk. A friend was scheduled to appear at a TEDx in Sydney but had at the last minute decided against it and asked if I would do it in her place. I agreed and put together a talk about some of the research I was doing at the time on computer-generated poetry. The talk was later featured on the TED website. I got lots of new followers on Twitter and some emails, including one inviting me on an all-expenses-paid trip to a university in Switzerland where I would serve as a “creative A.I.” expert. I told the university I was no such thing, but the administrators there didn’t care.

I went to the conference, the point of which seemed to be for young people to explain their inspiresting ideas to wealthier, older people. In the evenings, everyone gathered at the “$200,000 whiskey bar,” as it had been described numerous times in emails leading up to the conference, and the ideas would begin to flow. I was treated to many off-the-cuff TED-style pitches. An Israeli guy who was using drones (interesting) to monitor forests (inspiring). A Singaporean woman who was saving global bee populations (interesting) by putting beehives on the roofs of public housing and teaching the kids who lived there how to be apiarists and sell their own public-housing-branded honey (inspiring). The most popular young person was a French woman who was using machine learning (the most interesting thing of all) to help refugees (the most inspiring cause of all?). She called her idea “tech-fugees.”  Many of us were feeling anxious or uncertain or nihilistic about the coming decades, if not the coming months. The week before, Donald Trump had become the presumptive Republican nominee in the United States. Yet the TED progeny continued to offer bold, tech-centric predictions with unfaltering confidence.

And they have continued to do so. Indeed, if TED’s cultural high-water mark was during the Obama years, grand utopian fantasies are still spun by the technologists and tech-believers, despite all of their past failures and deceptions. Jeff Bezos recently predicted, for instance, that humans will soon live in space, and when we do, Earth itself will become a vacation destination, like a national park. It is a garish, puerile prediction, to be sure, but it is undeniably visionary — as are the crypto, AI, metaverse, geo-engineering, space-bound futures promoted by many tech leaders and their adoring acolytes. Amid seemingly intractable problems here on Earth, a vision of the future can resemble a life raft, and in the absence of viable alternatives, substanceless promises of space travel, crypto-utopias, and eternal life in the cloud may become the only things to look forward to.

TED of course can’t be held solely responsible for the increasingly eschatological pronouncements of this cohort. That would be, in a way, to buy into its hype too much. But as the most visible and influential public speaking platform of the first two decades of the twenty-first century, it has been deeply implicated in broadcasting and championing the Silicon Valley version of the future. TED is probably best understood as the propaganda arm of an ascendant technocracy. It helped refine prediction into a rhetorical art well-suited to these aspiring world conquerors — even the ones who fail.

My Ancestral Home, the Mall

Last June, faced with the prospect of turning nineteen for the sixth time, Miquela Sousa suffered something like an existential crisis. From the bathroom of the Silver Lake restaurant where loved ones had gathered to celebrate her birthday, she reached out to three million followers in an Instagram story, iridescent tears streaking mascara down her face. “What I wanted for my birthday was PROGRESS,” she lamented. “I want to know that my life is GOING somewhere.” 

Miquela’s “life” is completely invented, since the Instagram influencer, better known as @lilmiquela, is not a human. She is a CGI robot, whose “family” of digital designers and brand managers programmed her into existence. Her account first appeared on Instagram in 2016 with a series of nonchalant selfies, photos from meals out with friends, and a “Summer intern in NYC starter pack” meme (images including a Starbucks cup, Plan B, and a screen capture of the Google search “is bushwick safe”) simply captioned “LMAO.” In other words, the vague if overfamiliar trappings of Zillennial identity as curated online. Miquela’s hyperrealistic CGI is overlaid on our physical world, seamlessly layered between real people, real places, and real objects. At first glance, she resembles a living, breathing teenager clad in crop tops, printed hoodies, and cut-out dresses. But if you look closely, there’s something uncanny about Miquela’s poreless skin and immaculate flyaway hairs. The ambiguity sparked confusion among her fans: “I can’t tell if she is a robot or just really good at her makeup,” one commented. 

@lilmiquela on Instagram: Drop a 😩 if you woke up late, are 7 episodes into #Bridgerton and surviving purely off of Takis and vibes. This bath is my first major accomplishment of 2021 and I know I’m not the only one.

It would be two years before her true nature was revealed in an elaborate stunt. In 2018, another cyborg — Miquela’s equally mysterious frenemy, Bermuda (@bermudaisbae) — “hacked” Miquela’s account, forcing her to “tell the truth.” Miquela admitted, “I’m not a human being” in a screenshotted iPhone note, which tagged @brud.fyi as her “manager” for the first time. This was the L.A.-based media startup Brud’s way of pulling back the veil on itself as the creator of Miquela and Bermuda both. Staged posts from Miquela and Brud’s accounts followed, all folding into Miquela’s character development: first she swore to her followers that Brud had lied to her about her own identity, then Brud released a since-deleted statement that justified the “lie” as its way of withholding from her how she had been “rescued” from a “competitor engineer.” Miquela, who “couldn’t stop crying,” called it the hardest week of her life.

By obscuring its hand for so long, Brud enabled Miquela to achieve a singular kind of transcendence. Unencumbered by corporeality, she could be anyone, from anywhere: an endlessly mutable and moveable entity with “infinite potential,” as Brud’s Chief of Content Nicole de Ayora would later put it. Unconstrained by time or space, Miquela could make an appearance at Prada’s FW18 show in Milan one week (she posted a photo of herself at the venue wearing the new collection, and gushed that she, like Miuccia Prada, hoped to “push for change”), fly to New York soon after, then return to her native Los Angeles to record her latest single (detailed in another post, with a “photo” from the studio). In the influencer-dominated maelstrom of social media, where marketing, politics, and identity have collapsed in on one another, Miquela emerged as a model-activist-singer and full-time It girl, her various endeavors yoked together by a charismatic presence built over Instagram, Twitter, and TikTok. Brand collaborations followed extensive press coverage, and the comments sections once dominated by confusion were flooded with adoration. In 2018 she was named — alongside Kylie Jenner, Donald Trump, and The Students of Parkland, Florida — one of Time magazine’s 25 Most Influential People on the Internet.

Though Brud is now known as Miquela’s manager (and by extension, her operator and presumed profiteer), both its motives and its earnings remain opaque. Shortly after the big reveal in 2018, cofounders Trevor McFedries and Sara DeCou issued a statement comparing Miquela’s potential social influence to Will & Grace’s “real-life impact on marriage equality.” As Emilia Petrarca wrote in The Cut, “It’s hard to tell if they’re trolling us with this reference or if they’ve been sipping too much of the Silicon Valley Kool-Aid.” McFedries has since described the company as “a modern-day Disney or Marvel,” specializing in “community-owned media” and “collectively-built worlds.” He and his team handle Miquela’s interviews with a similarly savvy vagueness. In a profile by Hero, a company that labels itself “ecommerce made human,” Miquela said, “brands have always been part of my world from the beginning.” Among even human influencers, that is par for the course, but Brud has never disclosed the specifics of its partnerships, and none of Miquela’s posts bear the legally obligatory #ad disclaimer. The U.K.-based online marketplace OnBuy estimated that Brud generated about $12 million through Miquela in 2020 alone, and that she charges almost $9,000 per brand-sponsored post. That does not include Brud’s venture capital support — its last round of funding in 2019 raised at least $125 million. 

Despite (or perhaps owing to) the project’s nebulous objectives, Miquela continues to captivate a growing, and largely uncritical, fanbase. Her appeal might be attributed, at least in part, to her ambiguous racial presentation. While her “background” has been described as half-Brazilian, half-Spanish, stylistically and phenotypically (or whatever approximates a phenotype for someone who doesn’t have a genotype), it remains hard to place Miquela, who — with brownish hair, brownish-greenish eyes, and a smattering of freckles from cheek to cheek — seems irreducible to any simple racial categorization. “Not even going to try to spout some fake body positivity influencer BS bc I’m skinny and ethnically ambiguous so it’s not like this thirst trap is groundbreaking,” she explains in the caption of a picture in which she models a sheer leotard revealing her beige-toned limbs; hers is a very Instagrammable kind of self-awareness as self-promotion.

@lilmiquela on Instagram: My favorite part about this look is the part where we defund the police.

Miquela’s image is like a Rorschach test: when I first encountered it, I mistakenly assumed she was half-Asian, which might have been a projection on the basis of her penchant for cheongsam-inspired attire. (See, for instance, a post in which she poses in a silky, Mandarin-collared two-piece with the caption, “my favorite part about this look is the part where we defund the police.”) The ambiguity seems intentional: Miquela’s appearance and style shapeshift. In a photoshoot that accompanied a 2017 profile for Paper magazine, for instance, Miquela — in an homage to a 1999 Lil’ Kim cover of Interview magazine — is rendered darker and curvier than usual. The adjustments are easy to miss until you scroll through her next few posts, in which she looks lighter-skinned and thinner.

@lilmiquela on Instagram: Such an honor to pay homage to the queen @lilkimthequeenbee and @david_lachapelle for @papermagazine #breaktheinternet !!! And now that we’ve broken the internet break out your wallets and donate to @myfriendsplace by clicking the link in my bio! Gonna match all the $$ you guys donate to this amazing organization. ❤️❤️❤️❤️

When pressed in a 2017 Vogue interview to speak about her racial identity and presentation, Miquela correlated her fluidity with freedom. For her, “Identity, especially right now, is always in flux,” particularly as her generation tends toward an open-mindedness about new identities. Her outlook represents the “more empathetic world” and “more tolerant future” that Brud aims to cultivate through its social media storytelling, a mission displayed on the company’s website and habitually reiterated by McFedries, who likes to be called “head of compassion” rather than CEO. In Brud’s universe, the blonde-haired, blue-eyed cyborg Bermuda, who bears a striking resemblance to Tomi Lahren, is Miquela’s aesthetic and political foil. Bermuda’s outwardly pro-Trump, “cyborg supremacist” leanings frame Miquela as progressive by contrast — with a healthy dose of self-awareness. “I’m not sure I can comfortably identify as a woman of color,” she wrote in one post, because “‘Brown’ was a choice made by a corporation” and “‘woman’ was an option on a computer screen.” McFedries, meanwhile, has maintained that “Miquela’s journey is really one of otherness,” which has only encouraged her fans to “accept and embrace who they are.” But Miquela’s aesthetic is not designed to be other; it’s designed to be everyone. Miquela may embody something new in the history of technology, but she also animates an older idea that I’ll call the post-racial person — an idea we should resist taking at face value, no matter how perfect and poreless the face in question may be.

 

Multiculturalism, the forebear of post-raciality, used to embody a certain kind of political optimism. I grew up awash in this feeling — my parents emigrated from Malaysia to Australia in the 1970s, where they encountered a historically progressive government that celebrated diversity, both racial and cultural. In the ’90s, when I was a child, media and commerce were vehicles through which multiculturalism could spread across an increasingly borderless world. It was possible to don a kebaya while devouring sausage sizzles to mark Multicultural Day at school; the visuals for Michael Jackson’s “Black or White,” with faces of various skin tones morphing into one another, were as ubiquitous in Malaysia and Australia as they were in the United States; imported United Colors of Benetton advertisements appeared loosely aspirational. In this landscape, multiracial people became markers of social progress, their increasing presence a confirmation that the melting pot had come, or was coming, to a boil. Multiracial families were seen as hastening a more progressive era, organically bringing a diverse, racially tolerant society into existence — a prospect appealing in its immediacy and suited to the dominant individualism of the era, in which political aims were siphoned into the personal sphere. 

The post-racial person steps forth from this tradition, signifying a future in which multiraciality has not only ensured racial harmony but rendered futile any attempt to categorize people by race. In a melting-pot, difference is tolerated, but a post-racial world renders difference the same; there is no longer a color line to be negotiated. By the early 1990s, this fantasy was already well-established. A 1993 Time magazine cover entitled “The New Face of America” featured a digital composite of multiple women, resulting in a portrait with brown hair, olive skin, and light brown eyes — an effort to represent the country’s demographic destiny. The idea was satirized in a 2004 South Park episode in which time-travelers, described by a fictional CNN anchor as a “yellowy, light-brownish whitish color,” arrive from the year 3045. “Race is no longer an issue in the future,” the newscaster says, “because all ethnicities have mixed into one.” By 2013, after the election of Barack Obama, the trope reappeared in earnest in a National Geographic feature entitled “The Changing Face of America,” intended to show with photographic portraits that “we’ve become a country where race is no longer so black and white.” Each iteration of the trope confirmed a sense of deterministic optimism: just as the arc of history bent toward justice, so did the human race bend toward a unified beige. 

For a brief moment in 2008, some commentators hailed the arrival of that post-racial future, placing Obama at its helm. “What was remarkable was the extent to which race was not a factor in this contest,” wrote Adam Nagourney in The New York Times, of Obama’s crucial victory in Iowa. John McCain adopted the unifying first-person plural in his concession speech. His loss, he said, was a signal that “we have come a long way from the injustices that once stained our nation’s reputation.” Many seemed to agree. The election was widely portrayed as a litmus test, and the result was supposedly irrefutable evidence of a new racial equality. “Racism In America Is Over,” a Forbes headline declared. In this post-racial and, to an extent, post-partisan narrative, the conflict was not between liberal and conservative or even black and white. Instead, the divide ran along generational lines: the post-Selma cohort against their civil rights predecessors. Peter J. Boyer noted in The New Yorker that Obama appealed to the “post-racial generation.” His “political style of conciliation, rather than confrontation,” Boyer wrote, explained why civil rights activists were less taken with the new president. On NPR, Daniel Schorr declared that we were now in an “era where civil rights veterans of the past century are consigned to history and Americans begin to make race-free judgments on who should lead them.” CNN convened a panel of experts who went so far as to declare Jesse Jackson, Al Sharpton, and the entire civil rights leadership the “biggest losers” of Obama’s election. 

That the story of one person “transcending” race could be extrapolated into a story about a nation transcending racism revealed less about the status of racism itself, and more about the public’s perception of it: the murders of Trayvon Martin and Michael Brown during Obama’s second term, and the country’s conflicting responses, clearly revealed conditions that were far from post-racial. “When post-racialism rode to the center of American political discourse on Barack Obama’s coattails,” wrote the lawyer and scholar Kimberlé Williams Crenshaw in 2017, “it carried along with it both a longstanding liberal project of associating colorblindness with racial enlightenment and a conservative denunciation of racial justice advocacy, reverse discrimination, and grievance politics.” If liberals were eager to congratulate themselves on the new post-raciality, then conservatives were just as eager to pretend that racism no longer existed — a move that would later re-interpret anti-racism as an objectionable affront to whiteness. 

During the Trump years, multiracialism was positioned as progressive resistance to the administration. Within a few months of his inauguration, The New York Times published op-eds titled “How Interracial Love Is Saving America” and “What Biracial People Know.” In the latter, contributing opinion writer Moises Velasquez-Manoff cited a “small but growing body of research which suggests that multiracial people are more open-minded and creative.” Multiracial people, he suggested, could “work like a vaccine against the tribalist tendencies roused by Mr. Trump.” Pseudo-eugenicist arguments like this are about as politically useful as the numerous accounts dedicated to gushing over so-called “swirl babies.” The Instagram account @mixedracebabiesig, for instance, presents its 250,000 followers with a mosaic of tan infants and toddlers, each post’s caption specifying the pictured child’s ethnic and national composition. Other celebrations of diversity quickly devolve into fetishization, like the body of research on the attractiveness of “hapa,” or half-Asian, half-white faces. The recent “wasian check” TikTok trend, likewise, encourages users to show themselves followed by their Asian and white family members, and reliably attracts comments like “Wasian genes are ELITE.” And if mixed-race babies are fetishized, a digitally or surgically rendered mixed-race appearance is now the ideal. 

 

The “Instagram Face,” Jia Tolentino wrote in The New Yorker in 2019, is “distinctly white but ambiguously ethnic.” Evolved from surgically sculpted, photoshopped, bronzed, and heavily made-up images of models and influencers like Kim Kardashian and Emily Ratajkowski, the Instagram face projects what Tolentino called a “rootless exoticism.” If the face is genetically rootless, it is not aesthetically so: Instagram face, the makeup artist Colby Smith told Tolentino, is composed of “an overly tan skin tone, a South Asian influence with the brows and eye shape, an African-American influence with the lips, a Caucasian influence with the nose, a cheek structure that is predominantly Native American and Middle Eastern.” 

Some have accounted for the increasing presence of this image in mainstream beauty and marketing by maintaining that brands have finally “caught up,” or been forced to adjust, to an increasingly diverse demographic. As part of Ruth La Ferla’s 2003 report on “Generation E.A.: Ethnically Ambiguous” for The New York Times, Teen People’s then-managing editor Amy Barnett said the trend toward “exotic, left-of-center beauty” simply “represents the new reality of America, which includes considerable mixing.” According to Chris Detert, chief communications officer at Influential, a company that matches brands with influencers, post-raciality is just good business. “Racial ambiguity,” he has said, is helpful for brands because “it doesn’t play into one particular interest group, it ends up hitting a wider swath.” He added, “it doesn’t become a white or black or Asian or Hispanic thing; it all melds into one.” Claiming this new era as emancipatory, though, is naive. The ethnically ambiguous ideal, after all, is only a celebration of racialized difference if it dissolves into beige-toned sameness. The post-racial face remains white-proximate — skin tones may be tan, but they’re never brown or black.

Market logic trickles down: in 2020 hundreds of thousands of Instagram and TikTok users broadcast photos and videos of themselves — to an audience of hundreds of millions — completing the “fox eye challenge.” This look is achieved with upwardly angled eyeliner, sometimes in addition to surgery or less invasive thread lifts, which involve hooking medical-grade threads into the browline and hoisting it up. The TikTok-famous plastic surgeon Charles S. Lee, or @drlee90210, has claimed that the procedure, which vaguely mimics an East Asian phenotype, was inspired by the “exotic” locals his former medical partner encountered in Hawaii. The look’s popularity was no doubt propelled by celebrities like Bella Hadid, Kylie and Kendall Jenner, and Ariana Grande, who all adopted fox eyes in 2020. Grande, who was once accused of “black-fishing” (or trying to appear black) and has now been accused of Asian-fishing, was recently photographed with her double eyelids made to look like monolids. In a recent post, Miquela posed beside Grande’s Madame Tussauds wax figure, their skin tones indistinguishable, their upturned eyeliner wings in parallel. The caption read: “Everybody in LA is fake.” 

@lilmiquela on Instagram: Everybody in LA is fake

If makeup and plastic surgery were once the only ways to look ethnically ambiguous, digital innovations have made it easier still. Virtual technologies for racial morphing have existed since at least Michael Jackson’s “Black or White” video, but now they’re available to the public at large. Using filters and face-altering apps, people can appear ethnically ambiguous online at the tap of a digitized button — and that’s just in our current, apparently fairly primitive internet era. Miquela is a harbinger of a world to come, in which we’ll all be able, and incentivized, to choose our own avatars online.

The original internet (or Web 1.0) was imagined as an egalitarian, borderless utopia, unencumbered by bodies that are distinguished by gender, race, and other physical differences. Web 2.0, which is characterized by platforms into which users feed their own content, has been criticized in recent years as a betrayal of these early principles: instead of a truly free, decentralized space, we have megacorporations that police speech and seem to encourage all manner of racist and sexist abuse, while facilitating so-called “fake news.” Web 3.0 (or Web3), its proponents say, will return to the ideals of Web 1.0. It has been described as more open, autonomous, and intelligent than Web 2.0, and it promises decentralization: redistribution of power away from Big Tech to consumers and creators. In Web3, the internet will be community-owned, driven by NFTs, blockchains, cryptocurrency, and the like, meaning that users will own the value of their data and content, which will be transferable between platforms. 

Naturally, Miquela is already blazing a trail through this uncharted territory. Brud has released several Miquela NFTs, the first of which sold in November 2020 to a bidder linked to the crypto investment fund Divergence Ventures for the equivalent of about $82,000 — a value that has since quadrupled. (Brud was valued at $144 million last year before it was acquired by the NFT start-up Dapper Labs in October 2021.) On Mirror.xyz, a blockchain-powered digital publishing platform, Miquela reflected on her decision to embrace NFTs. “I’ve tried to welcome all things new and different with open arms,” she wrote. Anticipating cynicism, she added that, “if we’re talking about NFTs as a way to get rich quickly in the crypto market, we might be missing the bigger picture. Like, blockchain… she’s new here! I don’t think we should put her in a box just yet. She’s got so much potential.” 

Despite the “potential” of a decentralized internet, the crypto space has already demonstrated evidence of racial bias. For instance, the company DeGenData recently found that within CryptoPunks, the popular NFT collection, lighter-skinned male avatars sold for higher prices than darker-skinned male and female counterparts. CryptoPunk investors attribute the disparity to the overwhelmingly white, male demographic of the crypto space — most investors select avatars that resemble them. “Some people may be concerned about using a black Punk as their avatar if they are white,” NFT investor Nick Kneuper told Bloomberg News. “I think they are worried they would get accused of digital blackface.” Many non-white NFT investors believe these disparities will naturally resolve as the crypto space diversifies. But given that racialized aesthetics now dominate Instagram and TikTok, it’s fair to expect that avataristic spaces — which similarly encode and generate social capital in digitized, visual forms — might follow social media’s trajectory, as NFTs expand into a more mainstream, and presumably more beauty- and fashion-focused, user base.

The metaverse represents another future path for the web, and there, too, we’ll soon be fashioning our digital selves. In the video introduction to Meta, the name Facebook has adopted in anticipation of its planned expansion, Mark Zuckerberg observes that avatar-related innovations are a priority because the metaverse, designed around “expression,” “connection,” and “community,” will be “an embodied experience where you’re in the internet, not just looking at it.” To demonstrate, Zuckerberg summons his avatar — same skin tone, same hairline, same monochromatic tech bro uniform, same stilted affect — through which he gleefully transports to an outer space conference room featuring a supporting cast of racially diverse avatars (and a robot). 

The cosmic setting is suggestive: the metaverse, like Web3, represents an effort to conquer an entirely new world, leaving the problems of this one behind — Zuckerberg tellingly describes the metaverse as “the next frontier.” Fred Turner, a Stanford professor of communication, has contextualized this impulse within the Puritan tradition. Among technologists today, he has said, “We see people sort of leaving behind the known world of everyday life, bodies, and all the messiness that we have with bodies of race and politics, all the troubles that we have in society, to enter a kind of ethereal realm of engineering achievement, in which they will be rewarded as the Puritans were once rewarded, if they were elect, by wealth.” Key to the post-Web 2.0 mindset is the idea that creators should be rewarded for their as-yet mostly unpaid labor online. This is how Brud justifies and incentivizes its pivot to NFTs: “It’s time for Brud to create a new way to share the tools we’ve built, and to reward everyone who believed in us,” reads the company homepage. Brud even made some of Miquela’s NFTs free (they were claimed through a lottery process on Twitter in which entrants agreed to be contacted for future marketing purposes) and donated the profits from others to the non-profit Black Girls Code. Zuckerberg’s vision of the metaverse is likewise vaguely altruistic: it’s telling that one of the activities he portrays in his intro video is a “charity auction.”

If the post-racial person has always been the face of some idealized future, then here is that future, once again offered up for sale: a supposedly decentralized utopia, which when it arrives will likely exist as a more immersive version of today’s imperfect and vastly inequitable online worlds. We’ve been sold this future before and are still reeling from the consequences: not only racist harassment and targeting online, but the widespread fetishization of racialized groups by users, brands, and platforms alike. We might read the ethnically ambiguous post-racial archetype, then, not as a novel aspiration but as a past repeating itself, and ultimately as a flawed reaction to present anxieties. 

Miquela may stand for vaguely progressive causes and ideas, but what she really stands for is capital — she’s said it herself, in a way. After Miquela’s existential crisis, Brud gave her a USB drive filled with “programmed memories.” She was thrilled. “I finally get to explore who and what I really am,” she wrote. In the months that followed, Miquela let her fans follow along as she explored the set of fabricated milestones. What they revealed was culturally rootless, if not entirely generic: images of younger Miquela attending after-school soccer, experimenting with an angsty, emo look, and smiling in the snow (location not geo-tagged but cryptically labeled “MyPast”). As part of the exercise in self-discovery, she returned to the mall food court where she worked her first job at a Hot Dog on a Stick. She captioned it, “So many feelings for my ancestral home…THE MALL.”