
For people of a certain age, the title of this essay may call to mind Gabriel García Márquez’s novel Love in the Time of Cholera (1985), a narrative of age-defying and undying love that endures even in moments of catastrophe. Without claiming Márquez’s storytelling magic, I am borrowing his evocative title to reflect on a different kind of endurance: the persistence—and precarity—of writing art history in times of war.
For those of us engaged with the Islamic world, war is not a passing crisis but a persistent condition, predating Márquez’s novel and continuing, uninterrupted, to the present. The modern history of the Islamic world, like much of the formerly colonized Global South, has been punctuated by armed conflict. Colonial conquests of the 19th and early 20th centuries reconfigured societies, economies, and cultural life. And the end of colonial rule brought no peace: postcolonial states unraveled into civil wars, territorial disputes, and neocolonial interventions.
My own life has unfolded in the shadow of war—from the 1956 Suez Crisis and the 1967 Arab defeat, to the wars in Lebanon, Iran, Iraq, Afghanistan, and, today, Syria, Libya, Sudan, alongside the festering open wound of Palestine and the ongoing Israeli genocide against Gaza. These wars have left indelible psychological and identitarian scars while ensuring the prolonged entanglement of former colonizers in the governance and resource control of postcolonial states.
But my aim is not to invoke a revisionist history as catharsis, nor to offer the consoling illusion that history writing can resist genocide or erasure—though in moments of despair, such a belief is seductive. Rather, I want to explore how war has shaped the formation, orientation, and theoretical entrapments of the field of Islamic art and architectural history since its inception. This is a causal context rarely examined but fundamental to understanding how “Islamic art” evolved as a Western scholarly endeavor, beginning with Napoleon’s invasion of Egypt in 1798 and extending into our present moment under the banner of the protracted, so-called “War on Terror.”
OF COURSE, THE LINK between war and history writing predates modern colonialism. In fact, the whole enterprise of history writing (or perhaps more accurately, history reciting) in the ancient world came about around heroic, nation-defining wars. Homer’s Iliad and Odyssey, Thucydides’s History of the Peloponnesian War, and the Islamic genre of Maghazi literature all establish conquest and armed struggle as narrative origins. War is not just a historical event but the engine of historical consciousness, framing the very act of historiography. Likewise, art has long been implicated in war: as booty, as spoil, and as symbol.
Take the “Griffin of Pisa,” a monumental Islamic bronze likely made in al-Andalus or Islamic Sicily. Most probably captured during Pisan raids in the 12th century, it was proudly mounted on the cathedral roof in Pisa before being transferred to a museum in 1828, once its artistic value was recognized. Or consider the Baptistère de Saint Louis, a Mamluk basin of extraordinary craftsmanship. Though its origin is undisputed—Mamluk Egypt or Syria—its date of acquisition is unclear. It could have been any time between 1249, when the Seventh Crusade landed in Egypt, and 1291, when al-Ashraf Khalil finally expelled the last Crusaders from Acre in Palestine. The basin was used in French royal baptisms beginning around 1606, and was later attributed, at the end of the 18th century, to Louis IX, or Saint Louis: a king defeated and captured by the Mamluks. Both maneuvers suggest an attempt to rewrite a history of humiliation into one of triumphant appropriation. All these cases illustrate how Islamic art was not simply acquired, but conscripted into Western narratives of power and prestige.
The European desire to possess Islamic objects grew tremendously during the Renaissance and the Age of Discovery, with the continent’s rising maritime power and its expansion into Asia and Africa. Early collectors acquired these objects by ostensibly ethical means: through trade, gift giving, bequests, or purchase.
But as soon as colonial power penetrated the Islamic world in the late 18th century, extraction became the norm. European consuls, officers, scholars, and explorers—often operating under multiple guises—dug, bribed, purchased, or plundered their way through Islamic cities and antiquities markets. By the 19th century, museum collections in Europe—especially in London and Paris—swelled with artifacts acquired through unjust partition of archeological findings, plunder, and outright theft. Restrictive antiquities laws, introduced in the second half of the 19th century, slowed this flow but never reversed it.
World War I laid the entanglements of scholarship and imperialism bare. Figures like T.E. Lawrence (“of Arabia”) and Gertrude Bell began as archaeologists and emerged as colonial agents during the dismantling of the Ottoman provinces at the end of the war. Lawrence played a dubious role in the Arab Revolt against the Ottomans. Bell influenced the shaping of modern Iraq, drawing its borders and choosing its king, before becoming the Director of Iraqi Antiquities and establishing the Iraq Museum in 1926. As director she drafted new legislation that allowed the continued hemorrhaging of archeological finds out of Iraq, which she pushed through over the opposition of the Arab nationalist minister of education, Sati’ al-Husri. Another early Islamicist, Louis Massignon, was arrested on suspicion of espionage in Iraq, only to reappear as a key advisor to the Sykes-Picot negotiators and to enter Jerusalem in 1917, alongside his friend Lawrence, with the British General Allenby, whose occupation set in motion the fulfillment of the Balfour Declaration’s promise of a Jewish homeland in Palestine. K.A.C. Creswell, historian of early Islamic architecture, compiled a canon for the field while serving as Inspector of Monuments under Allenby’s Occupied Enemy Territory Administration in Palestine and Syria in 1917–18.
THESE OVERLAPPING ROLES—archaeologist, soldier, administrator—tell us this: the very foundations of Islamic art history as a field are colonial. These foundations have had enduring consequences. Perhaps the most insidious is the field’s continued one-directionality, with one civilization observing, classifying, and interpreting the cultural production of another, as the latter remains largely excluded from the process.
This asymmetry persists today. With the exception of Iran and Turkey, and to a lesser extent Egypt, top academic positions, publications, and museum collections are centered in the West. Even as more scholars of Islamic background enter the field, the theoretical frameworks remain largely Eurocentric: the degradation of the classical heritage in the early Islamic period and the supposed prohibition of figural representation that led to abstraction are still syllabus staples. Local traditions of interpretation are rarely included and scholarship in Islamic languages is often dismissed as derivative or uncritical.
This imbalance is epistemological as much as institutional. The very notion of discovery, foundational to Western art history, is a prime example of this process. Islamic sites and artifacts were—and still are—declared “discovered” by Western scholars, as if they had previously been invisible or unknown. The implicit claim is not merely to first contact, but to first understanding. Even when local communities have revered, maintained, or interpreted these objects for centuries, their knowledge is framed as anecdotal, unsystematic, or lacking historical consciousness. Discovery, in this colonial schema, is not an encounter—it is a claim to epistemic authority.
The consequence of these claims to discovery is the erasure and belittling of indigenous interpretive frameworks—which are often labeled as intuitive, mystical, or uncritical. By contrast, Western modes of seeing are cast as analytical, dispassionate, rational, and implicitly, superior. This obfuscation robs local traditions of their historical agency and assigns meaning to their cultural output based on exogenous criteria. The act of “discovery,” then, functions not only as appropriation, but also as epistemic dispossession.
While this dynamic is not unique to Islamic art—it also affects African, Indian, Chinese, and pre-Columbian studies—it is particularly acute due to the political, historical, and civilizational entanglements between Europe and the Islamic world. Moreover, because Islamic art shares a lineage with the classical Mediterranean, its output is often framed as a detour from the presumed teleology of Renaissance and Modernity. Even within debates on Late Antiquity, Islamic art is often seen as derivative rather than generative, marginal rather than central. When historian Garth Fowden reminds us that “there are roads out of antiquity that do not lead to the Renaissance,” he reclaims a narrative space for Islamic continuity from Antiquity—a space often denied.
Denying indigenous interpretations means that religion—so central to understanding Islamic cultures—is often marginalized, with Islam frequently invoked only in introductory chapters, then quickly bracketed out. The deeper structures of Islamic belief—its influence on aesthetics, ethics, space, and meaning—are rarely engaged. One exception is the Perennialist school of philosophy, exemplified by the work of Seyyed Hossein Nasr, but its metaphysical readings are typically dismissed as ahistorical.
This reluctance to engage with Islam as a living worldview mirrors the secularism of post-Enlightenment Western epistemology, which still afflicts art history. The Islamic world, however, never underwent a comparable Enlightenment rupture with religion. Instead, certain secular ideas were absorbed and then filtered through religious sensibilities, resulting in a modernity deeply entangled with religion—which Western observers often find incomprehensible, particularly when Islamic symbols spark political protest. These moments of incomprehension reveal the limits of treating secularism as a universal model, and, by extension, secular art history’s inability to fully grasp Islamic visual culture.
This conceptual impasse is most visible in the historical amnesia surrounding 19th- and 20th-century Islamic art. Until recently, standard surveys simply ended before the onset of modernity. Scholars felt “uncomfortable in the 19th century,” to borrow a phrase from Islamic art historian Margaret Graves, because its eclectic artistic output challenged the dominant framework of rupture between traditional and modern Islamic art. Accepting the creative continuity of Islamic art into the modern period would undermine the colonial narrative that depicted Islamic culture as static, in decline, and in need of European rescue. It would expose the “civilizing mission” for what it often was: a veneer for violence, looting, and epistemic conquest.
THE ETHICAL AND METHODOLOGICAL tensions haunting the modern study of Islamic art did not go wholly unnoticed, but acknowledgements by the likes of Oleg Grabar—perhaps the most influential figure in the field—stopped short of addressing their deep colonial roots. In an undated lecture draft he shared with me in the mid-1990s, Grabar reflected on the shifting landscape of Islamic art history and noted, with uncharacteristic unease, that “the most difficult to grasp change that was brought into the life of Islamic art in the last half-century is the importance taken by the contemporary world, its politics, the alleged sins identified with Orientalism, or the demands it made on all professionals.” Grabar saw that the field was no longer insulated from the political, ideological, and emotional ruptures of the present. Yet his phrasing—particularly “alleged sins”—betrays a certain ambivalence, if not reluctance, to fully reckon with the colonial entanglements that structured the very foundation of the discipline he helped shape.
Grabar’s observations were, nonetheless, astute. He recognized that “no one who has traveled or lived in Muslim lands can remain immune to the often very real emotional or cultural struggles which affect them. Algeria, Bosnia, Chechnya, Palestine, Kurdistan, Tajikistan, Kashmir, Afghanistan, Sinkiang, or the Sudan are all places where sad or tragic events have affected, or run the risk of affecting, the artistic heritage of these areas and, even more importantly, the education of men and women capable of learning about that heritage and of appreciating its products.” This is a powerful admission. Grabar acknowledges that political catastrophe does not merely damage monuments—it undermines the very possibility of local knowledge, of cultivating a generation of scholars from within the societies whose heritage is under study.
Yet what is striking—and telling—is that Grabar does not extend this diagnosis to the field itself. He expresses sympathy for those “affected” by war but not for how war—and the broader histories of colonialism and epistemic violence—may have shaped the structures, methods, and assumptions of the field he led. The impact of war, in his account, is circumstantial and external. What remains unacknowledged is that the condition of war—colonial, postcolonial, and neocolonial—has not simply damaged the raw material of study, but has structured the very ways Islamic art has been discovered, defined, interpreted, and institutionalized in the West.
Later in the same reflection, Grabar turns his attention to the emergence of a new audience, a demographic shift he sees as transformative but somewhat unsettling. “We have now for the study of Islamic art and for all studies of the Muslim world, as for many other ethnic groups in North America, a new public seeking an awareness of the past different from the awareness expected in the countries from which their parents came and different from the allegedly universal scientific and academic scholarship of old. This has contributed tasks for which we are not, as a profession, well prepared and which we have not always handled very well.”
This is a candid admission, but it also reinforces the epistemological asymmetry under critique. Grabar recognizes the growing presence of diasporic and Muslim-identifying scholars, but he frames their expectations as burdensome “tasks” for a profession built around a different—implicitly Western—conception of the past. His phrase, “allegedly universal scientific and academic scholarship,” is telling. The term “allegedly” introduces doubt, but this doubt is not pursued. Grabar does not ask why the field had presumed such universality in the first place, nor does he suggest that the methodologies and categories inherited from Enlightenment Europe might require fundamental rethinking in light of this new public.
In effect, Grabar diagnoses the symptoms but avoids naming the underlying condition. His reflections acknowledge dissonance but hesitate to name its source: the field’s colonial origins, its exclusionary canon, and its secular epistemology. As someone who supervised more Muslim doctoral students than any of his contemporaries, he surely recognized the tensions faced by scholars straddling two traditions—one rooted in lived cultural experience, the other in Western academic detachment. But his proposed solutions were incremental and procedural, not structural or reparative.
What Grabar could not—or would not—see is that war has not merely interrupted the study of Islamic art; it has been a constitutive force in its development. The colonial and postcolonial wars that ravaged the Islamic world were not just the background conditions for scholarly inquiry—they were the very crucibles in which the field of Near-Eastern and Islamic art history was forged. From the Crusades to the Napoleonic expeditions, from the World Wars to the ongoing “War on Terror,” the discipline emerged alongside and often through the very violence that it now seeks to study.
The generational shift Grabar observed—of Muslim scholars seeking to reclaim and reinterpret their heritage—is not a burden on the field but an opportunity to reimagine its epistemic foundations. If, as Grabar wrote, “we are not… well prepared,” then the task is not simply to accommodate these voices but to rethink the assumptions, categories, and canons that have excluded them in the first place. Only then can we begin to disentangle Islamic art history from its colonial inheritance and make space for a pluralist, dialogical, and more impartial understanding of the past.
As a Gen Xer, I grew up with the mantra “think globally, act locally.” Today, however, the tsunami of antidemocratic and oligarchic actions inundating the United States and many other geographies is decidedly global, overwhelmingly so. The trickle-up potential that acting locally once promised has clearly failed.
To fight back, we will need a polyphony of divergent yet parallel efforts. This will require us to reinvent culture work, transforming it so that we might look closely at deeply held “truths,” even when they provide comfort, and at long-maintained methods and behaviors that no longer serve us.
How is a culture worker to contribute in times like these? What follows is a lesson I learned during Trump’s first term. When I resigned my role as director of the Queens Museum in 2018 over a variety of problematic events that unfolded in the aftermath of Trump’s first election, I had to face a difficult reality: In the midst of a stressful period toward the end of my tenure, my husband said to me, “You may never work at a museum again.” I felt as if I had been kicked in the stomach, the air knocked from my lungs. Having spent 20 years inside cultural organizations, this seemed an impossible outcome. It challenged my vision of myself, my identity, and how I thought I might contribute to the world. Who would I be if not a culture worker laboring inside museums and other arts nonprofits? I knew the liberal model of singular leadership at the helm of institutions was deeply flawed; any culture worker knows that the work of institutions is a profoundly collective act masked by the hierarchies of organizational charts and inequitable pay. And yet this was the only work ecology I knew; my imagination was limited by my own life experience.
Yet the more I thought about it, the more I knew that this was exactly the choice I had to make. I had to break from a situation in which I could not hold the values I prioritized. Perhaps I could realize some of the ways I’d wanted the institution to function outside its walls.
Thus began a trajectory that has brought me far more than I could have imagined in that gut-punch moment that broke open my imagination to the possibilities of working otherwise. I hope it has also broadened the ways I contribute to the urgencies around me. Not all change results from as dramatic a set of circumstances as those I experienced at the Queens Museum, but the seismic shift in my day-to-day work made me vastly more attuned to the various complicities I had been negotiating every day at the museum. It also made me see my world anew, levering open a whole set of imaginaries about what is possible in cultural work.
SHORTLY THEREAFTER, in 2019, the Warren Kanders controversy unfolded at the Whitney Museum of American Art. Kanders had served as vice-chair of the Whitney’s board while he owned a military gear company called Safariland that sold body armor, tear gas, and such with the tag line “less lethal solutions.” Once artists and culture workers learned that the tear gas Safariland produced was being used against asylum seekers at the US border with Mexico; against Black Lives Matter protesters in Ferguson, Missouri; and in Palestine against everyone, they led the movement to have him removed from the board.
What unfolded in the aftermath of these revelations holds a valuable model for how change transpires. In my book, Culture Strike: Art and Museums in an Age of Protest (2023), I wrote about how the microcosm of change that manifested at the Whitney did not align with hierarchical theories of change. Rather, in this situation, journalists were writing critical articles; staff were questioning their roles at the museum and making their concerns known to its leadership; activists staged protests in the public spaces of the museum and elsewhere; many unknowable conversations and conflicts ensued behind the scenes; and several artists demanded their works be withdrawn from the Whitney Biennial. Some of the people involved in these actions overlapped, others did not: In fact, many were skeptical if not downright hostile to the tactics employed by others. And yet, Kanders eventually resigned his position, in response to these collectively generated pressures. A variety of tactics, working in parallel yet not in tandem, produced pressure and power.
All this made me think back to a conversation I had years ago with Rhoda Rosen, a white Jewish South African woman who had been part of the African National Congress (ANC) in its fight to end apartheid. She recounted being surprised by the timing of the regime’s fall: It happened amid disagreements over tactics, at a time when she felt as though the internal unity of the resistance was becoming atomized and dispersed. It was then that the wall of apartheid fell. And so it was for the US Civil Rights movement. The sheer variety of groups working sometimes in concert but more often in parallel with one another was remarkable: There was the NAACP, the Black Panther Party, SNCC, SCLC, the Nation of Islam, the Weather Underground, and CORE, among others—often taking fundamentally divergent approaches and tactics. And yet, change came.
As a culture, and particularly within movement-building, there is strength in heterogeneity. The friction between differing lived realities and tactical approaches makes the overall message stronger. It makes space for more people to enter the fray. Productive conflicts can emerge to strengthen positions. Working in parallel rather than explicitly collaborating has the effect of resisting the flattening of messages into sound bites. It allows perspectives to exist in all their complexity, and encourages solidarities to form in spite of difference. If we honor the advantages of being uncoordinated, might we also alleviate the proverbial “circular firing squad”?
Amid a profusion of attacks on free speech, human rights, and civil liberties; the dismantling of basic public goods and services; and threats to a democratic and Constitutional order, our individual and collective responses are increasingly urgent. Many of us are asking ourselves deep questions about how to act, both personally and within institutional work. To avoid consequences, should we pre-conform to restrictions we believe are coming? Do we opt for sleight of hand over overt resistance in order to protect what we have, to survive to fight another day? Or do we disobey? Take the bigger risk, make the bolder statement, resist openly—possibly inviting greater retribution?
IF DEMOCRACY AND FREEDOM are at stake to the degree I and many others believe they are, we have no other choice but to resist, to refuse compliance with what we know is unjust. The contributions of cultural and knowledge institutions to democracy mean that they must hold powerful, even “dangerous” ideas. This is also why they become targets. What role do they perform under autocracy and oligarchy? There is no museum or library that can fulfill its stated mission in the absence of self-determination and an active civil society. Without these, their reasons for existing collapse, obviating their social and educational functions. Resistance may take many forms, but what is essential is that we enact a refusal to obey, and particularly a refusal to pre-comply with what we imagine might be coming. How we each might do that, and in what circumstances, is where the finesse lies.
I want to suggest that a multiplicity of resistances is most likely to produce the change we need. These can come together only via networks of solidarity drawn from shared interests, permanent or temporary. Which does not mean we’ll agree with or even understand our allies fully. In her recent and important book, Imperfect Solidarities, Aruna D’Souza makes the case for honoring the reality of incomprehension, highlighting this condition as a strategy for survival as well as for more complex and effective solidarities. She writes, “To be able to act together without full comprehension, to be able to float on the seas of change: What would a politics based on that capacity look like?”
While I don’t know the answer to this question, it is clear that a perfectly harmonized chorus is not possible, and is potentially undesirable. After all, homogeneity is exactly what demagoguery desires. A polyphonic chorus can say so much more.
One of the more demoralizing aspects of the current moment is the way that what’s coming seems to be a fait accompli. How might we make justice seem inevitable instead? It won’t happen by way of a universally agreed-upon, least-common-denominator approach: That is a recipe for failure. Rather, through a cacophonous pileup of disobedience, we too can become inevitable.
As a critic myself, I get it. Most art exhibitions aren’t amazing. I personally think about gallery-going the way I do thrifting, even though I don’t buy art because it’s much more expensive than used clothing. At both galleries and thrift stores, I like to poke around in hopes of being pleasantly surprised. Most of what’s on display is, by definition, average: a pair of innocuous chinos; an abstract painting that would look nice above your couch. Some of it’s a bit cringe, the art equivalent of a tuxedo T-shirt. And some is interesting but just not your size or style. Only at rare moments, often when the search feels futile, do you stumble upon something incredible: a jacket or a sculpture that feels as though it exists just for you, whose improbability makes the discovery that much more meaningful.
All of which is to say that I’m suspicious whenever other critics complain that most art—or most movies, or most music—is bad these days. Most days, most work isn’t incredible. Combing through it all, fatigue is inevitable. But that fatigue causes some critics to mistake the rarity of aesthetic elation for a uniquely humdrum contemporary culture. One sign of this error is rampant nostalgia for the way things used to be—when the critic was younger, or else during an illustrious historical era.
This nostalgia pervades the work of several critics who’ve been grumbling that contemporary art is stagnant. Sean Tatol’s self-published Manhattan Art Review tosses off zesty negative reviews that stirred up productive interest in art criticism’s stakes throughout 2023. That same year, New York Times critic-at-large Jason Farago wrote the civilization-level version of a “kids these days” think piece, “Why Culture Has Come to a Standstill,” which argues that aesthetic style no longer advances and that perhaps “ours is the least innovative century for the arts in 500 years.” Dean Kissick’s polarizing 2024 Harper’s screed, “The Painted Protest,” vents frustration with the curatorial paradigm shifts resulting from 2010s identity politics, and romanticizes the pre-Trump art world of its author’s early adulthood.
Art isn’t what it used to be, in good and bad ways, but every generation experiences a version of this phenomenon as it ages. What stands out about these critical complaints is their frustration toward how the world itself has changed, often in ways hostile to artists. Today’s technological and economic conditions exert novel demands on US arts professionals, creating an industry where overwork and precarity are the norm. It’s no surprise that artists have adapted to these conditions and it’s no surprise, if a bit cliché, that some critics wonder if that means art’s best days are behind it.
FARAGO’S “WHY CULTURE HAS COME TO A STANDSTILL” argues that Western culture’s best days have passed but that, once you accept the fact, you can have a more fulfilling relationship to what remains. The article asks “why cultural production no longer progresses in time as it once did,” and answers that phones and other digital tools create so much “chronological confusion” that the concept of aesthetic progress no longer makes sense. Instead, we have “a culture of an eternal present,” exemplified by Amy Winehouse’s hit 2006 album Back to Black, which sounds “neither new nor retro,” “as if it came from no particular era.”
The argument’s premises aren’t particularly objectionable; however, the conclusion Farago draws from them is silly. He contends that “the lexical possibilities of many traditional media are exhausted,” and thus no major stylistic innovations are possible within them. As a result, he believes audiences ought to let go of the lingering high modernist belief that “good art is good because it is innovative.” But you get the feeling Farago is less at peace with his cultural disappointments than he pretends, given that his subsequent reviews continue to dredge up examples of aesthetic stagnation, always linking back to this article.
Part of Farago’s complaint is that digital dissemination reduces art to mere content: “In the 20th century we were taught that cleaving ‘style’ from ‘content’ was a fallacy, but in the 21st century, content (that word!) has had its ultimate vengeance, as the sole component of culture that our machines can fully understand, transmit and monetize.” The digital revolution has had seismic implications for the production and distribution of culture, similar to that of the printing press centuries ago. So it’s bizarre—and laughably premature—to speculate, as Farago does, that “we are now almost a quarter of the way through what looks likely to go down in history as the least innovative, least transformative, least pioneering century for culture since the invention of the printing press.”
What’s actually happening is that culture as Farago knows and prefers it is changing as a result of techno-economic pressures. In recent decades, cultural platforms have undergone transformations even more dramatic than the content they showcase, with profound effects on how and why artists operate: from becoming content creators, to collaborating with AI. As a staff writer for the paper of record, at a time when such jobs are near-extinct and the term “paper of record” feels like an anachronism, Farago is aware of the changing status quo. He just chooses to cling to yesterday’s norms even as he pretends to let them go.
Kissick, on the other hand, laments that his youthful optimism about art’s potential led to disappointment. Like Farago, Kissick believes contemporary art feels exhausted because it fixates on historically marginalized identities and folk knowledge, especially in major biennials. Also like Farago, he’s tired of how art relies on the same type of “spin-offs, remakes, quotations, interpolations, and revivals” omnipresent in the movie, music, and fashion industries. Unlike Farago, Kissick feels less willing to accept a diminished role for art.
When “The Painted Protest” was published in mid-November 2024, shortly after the United States presidential election, everyone had an opinion about it. The piece received recognition for articulating that a cultural moment has passed—the identity politics that predominated as “faith in the liberal order began to fall apart around 2016”—and it received criticism for its tendentiousness and misplaced romanticism. Kissick’s characterizations of 2010s cultural liberalism traffic in straw men and overstatements (art “amplify[ing]” historically marginalized voices “shouldn’t, it seemed, be inventive or interesting”). But his core argument captures how efforts at greater inclusivity in the fine arts shifted, over the past decade or so, from an institutional critique to the institutional norm. He asks, “When the world’s most influential, best-funded exhibitions are dedicated to amplifying marginalized voices, are those voices still marginalized?” And answers that the project of inclusion “has been completed,” even “hollowed out into a trope.”
This passage’s false dichotomy flattens nuances: a voice can be centered by cultural institutions yet remain politically or economically marginalized. But it puts a finger on why the 2020s anti-woke sentiment, though it often lapses into petty grievance, has had counter-hegemonic appeal not just to some arts audiences but also to a segment of the US electorate. These recent tugs-of-war over cultural power, which go back further than Kissick’s article acknowledges, feel fraught not only because social media inflames conflict but also because there’s so little actual power available to most participants, owing to unequally distributed resources.
OVER THE PAST HALF-CENTURY, US neoliberal austerity has exacerbated pressure on artists, curators, and arts writers, making institutional success feel increasingly zero-sum. At the same time that middle-class creative and intellectual career paths have grown more precarious, the costs of housing, health care, and college have outpaced wage growth. The art market, where idealistic press release rhetoric often runs cover for the machinations of extreme wealth, renders these material disparities conspicuous. For artists and culture workers without a financial safety net, these conditions discourage taking aesthetic or personal risks and encourage play-it-safe professionalism.
That’s why, for all its controversy-baiting, the most telling section of “The Painted Protest” is a head-scratching paean to mega-curator Hans Ulrich Obrist, also known as “Hurricane” HUO. Kissick interned for Obrist in 2008 and fondly recalls the latter’s frenetic lifestyle: “He circumnavigated the world relentlessly, meeting everyone he could and introducing them to one another, in person or over email on his two BlackBerries, insisting on the urgency of their conversation.” Obrist “almost destroyed himself,” concludes Kissick, “as a committed early-twenty-first-century citizen should, in an orgy of connectivity.” This rose-tinted portrayal feels jarring, given the extent to which Kissick romanticizes the transgressive bohemian freedom of a life in the arts. Yet Obrist’s pathological overwork was the prototype for digital hustle culture, for the always-on professionalism that many in the arts today adopt out of financial necessity, a sense of self-importance, or both.
I stopped visiting thrift stores in my 30s, around the same time I started visiting art galleries routinely. In some ways, I substituted one hobby for another; both scratch a similar itch. The lifestyle change was also pragmatic: the more I wrote about art professionally, the less free time I had for other things, and thrifting is an inefficient way to build a wardrobe. In fact, to free up energy in my overscheduled life, I adopted a personal uniform for each season and social or professional occasion. This HUO-style life hack made my days more efficient but also made thrifting for unique items moot.
The physical exhaustion that Obrist normalized laid the groundwork for the aesthetic exhaustion these 2020s critics decry. Culture workers are conditioned to believe they can’t get ahead, so they live frenetically, fueled by the fear that they’re falling behind. There’s more than a little truth to that belief. But it’s worth considering the role that overstimulation and burnout play in declaring so much work uninspiring. Most arts professionals are overworked and underpaid, and confronted, as on dating apps, with a buffet of cultural options whose sheer quantity dulls the luster of every individual possibility.
In this light, the recent curatorial vogue for artistic folk wisdom looks not just like an effort to center the historically marginalized, but also a longing for “simpler,” less networked, times and places. Nostalgia for one’s youth à la Kissick, or for the great eras in art history à la Farago, might differ in content but not in form. As Kissick puts it: “Everyone, it seems, wants to escape the present. We just long for different pasts.”
I still long to be pleasantly surprised, but that gets harder as I get older. What would surprise me right now are critics who articulate positive visions of the art world they want to see, rather than grouse about what’s dull or different. But those kinds of articles are harder to write, and receive less attention, than sensationalized negativity. Farago and Kissick, in those aforementioned articles, actually do include lists of their contemporary aesthetic pleasures; Tatol, too, consistently reviews exhibitions he loves (though there are fewer of them than ones he hates). The bright spots in these critics’ fields of vision contravene their gloomy theses about art’s exhaustion. Incredible work still happens, about as often as it always has; our jobs and our phones are creating new obstacles, as well as new opportunities, to make and find it.
I REMEMBER THE FIRST TIME I saw an ad on a banana; it was for Frozen 2, and it felt like the beginning of something, of everything becoming an ad. I think I ate that banana in 2019, and look how far we’ve come: This year’s blockbuster, the Lego movie, itself basically an ad for Lego, is also a biopic of Pharrell Williams, the rapper who is also the creative director for Louis Vuitton. Where does the culture end and the ad begin? They don’t want you to know.
When a Frozen or Pharrell-type phenom emerges, brands will work to find a way to capitalize on the attention they attract. In the art world, the crossovers are mostly happening between artists and luxury fashion, since both realms attract wealthy clientele. When these fashion collaborations began taking over the art world, around the time of the banana ad, I was optimistic: fashion money sure beats the nefarious sources of wealth routed through art’s opaque market. And together, artists and brands were making cool stuff, like that Anna Uddenberg sculpture for Balenciaga, and the photos Tyler Mitchell took for Ferragamo in the Uffizi.
But then, it started to feel as if art and marketing were beginning to collapse into one thing. I felt this acutely over the summer, when Carrie Mae Weems launched a Bottega Veneta campaign. In one of the black-and-white photographs, A$AP Rocky sits at a kitchen table facing a mirror; the artist stands behind him, her hands on his shoulders. The picture, overlaid with the Bottega logo, debuted on Father’s Day, and is a rejoinder to, or remake of, Weems’s iconic 1990 “Kitchen Table” series.
That series had a kitchen table, but also a message—one that feels hard to square with ideas of luxury and Father’s Day. Over the course of 20 images, accompanied by words, we see a Black woman (Weems herself) become a mother, and then watch as that mother learns to be alone. The father of her child is out of the picture, and there she sits, at her far-from-luxurious kitchen table, in a sparse room painted white. She is poised, resilient. In that ordinary room, Weems built a rich world. As the series unfolds, others join her at the table, enacting everyday conversations and domestic dramas. It’s always the same shot—dangling overhead light, door to the right—but the people in the pictures change, as do the pictures on the wall behind them (one shows Malcolm X). Her evolving world-within-a-kitchen is a nod to the unsung worlds countless women have nurtured in countless kitchens.
Seeing an artwork as powerful as this one become an ad felt wrong. There’s plenty of art that can easily be repackaged into mere style, with little trade-off. But this collaboration was harder to square with the original—which is not, I don’t think, a dig at Weems for participating in something that so many other artists have participated in too. It’s the opposite: a compliment to the power of the original series, which is decidedly art, not just an aesthetic, or a personal brand.
“SELLING OUT” IS WHAT WE ONCE CALLED THIS, back when it was mostly white men with generational wealth who got to label things, back when it seemed possible to be a creative who didn’t have to compromise to eat. But as Jay Caspian Kang put it in the New Yorker last year, “The people who came of age during or after the 2008 financial crisis … do not have much patience for Gen Xers who wax nostalgic about bands that ignored major-record-label attention or Adbusters or whatever else.”
For weeks, “what do you think of the Carrie Mae Weems Bottega ad?” was the question I posed at any dinner table where I happened to sit. Predictable defenses concerned not the image itself, but the idea that it was good for the artist to have gotten paid, and good for brands to support the arts. Others added that an artist might as well say yes: brands will steal your ideas anyway, so you might as well get something out of it.
Fair enough: for its 2004–08 iPod campaign, Apple cribbed Robert Longo’s iconic 1980s series of silhouetted figures flinging their bodies around in movements that can only be described as “dancing.” Longo was irritated, he later said in an interview with W magazine, but in 2010 he was approached by Bottega Veneta too; he told W that they effectively said, “instead of ripping you off, we want to hire you.” And he jumped at the chance, making new pictures of businesspeople apparently in ecstasy—this time wearing Bottega.
A more 2020s-flavor rip-off occurs in Charli XCX’s video for “360,” which is itself also an ad for Google products. One scene is a dead ringer for a Deana Lawson photograph. In a drab living room with mismatched furniture and shoddy lighting fixtures (yet subjects well-lit), a group of people face the camera, their bodies set in attitudes somewhere between candid lounging and dramatic posing. Bellies poke out between glamorous garments. Stilettos are lodged in thick carpet. The incongruity makes the scene at once more real and more staged, Lawson’s signature move.
THROUGHOUT THE 20TH CENTURY, photographers strove mightily to gain respect as artists—to conceive of the camera as not just a commercial or mechanical tool, but as an artistic one. Alfred Stieglitz, through his own photographs, his gallery 291, and his journal Camera Work, galvanized a generation in the 1910s to think of photography as every bit as pictorial and expressive as painting and sculpture. In the 1970s, William Eggleston did the same for color photography, insisting the medium’s allure wasn’t only for ads, but for artworks as well.
It worked: Art photography is distinct enough from advertising now that photographers are playing around with muddling the two. Tyler Mitchell, Juergen Teller, and Roe Ethridge have had success ping-ponging between galleries and glossies. For Mitchell, this is about celebrating Black joy and excellence in all its forms, from the glamorous to the everyday. Meanwhile Ethridge, exhibiting everywhere in the 2010s, saw his early work framed as ironic commentary on advertisements and editorial clichés. But quickly, his artistic and advertising work became hard to distinguish: Chanel Bracelets with Mackerel (2013), an edition of 5, shows fish trapped in luxury bangles. It’s in the collection of the Whitney Museum of American Art … and an outtake from an ad campaign that he shot.
Complain, if you like, that melding luxury ads and contemporary art is selling out. Jay Caspian Kang is right, in his New Yorker essay, to bemoan the fact that “we’ve largely abandoned the part of the ‘sellout’ critique that assumes nothing truly interesting or revolutionary can ever be found on mass-market platforms.” And yet, it’s hard to feel as if today’s art world is very “revolutionary” compared to mass markets. These days, protecting art for art’s sake might just make you an elitist gatekeeper.
Meaning one can argue, paradoxically, that there is in fact a class politics involved in addressing an audience beyond galleries and museums, for even if your audience can’t afford luxury, they probably enjoy fantasizing about it, and are more likely to see an ad on a yellow cab or in a magazine than in a museum. They won’t have had to pay admission, and, as a bonus, they won’t feel like they don’t get it if they don’t have an art history degree (never mind that there might not be much to get). What’s more, ads may actually have the power to influence the cultural imaginary, to change the things we desire: representation, it’s all the rage.
THE MUSIC INDUSTRY, LESS PRONE TO ELITISM than art, dealt with this question of ads in the aughts. Collaboration proponents argued that musicians would do well to adapt to a changing mediascape if they wanted to survive financially, as Tina Turner and David Bowie did in a 1987 Pepsi ad, and that they might even be able to infiltrate the mainstream with radical ideas. Twenty years on, few musicians make money from actual music anymore. It’s all tours, ads, and sneaker collabs. Could visual artists be next?
This past January, Cindy Sherman, who resisted commercial work for so long, released a Marc Jacobs campaign. Like Weems, she adapted her signature move—dressing up as other people for the camera—in pictures shot by Teller. Sherman’s breakthrough work of the late 1970s was formative to the Pictures Generation, a group of artists responding to how utopian countercultures had by then become commercialized, making pictures that betrayed a media landscape where everyone was both critic and consumer. Making an ad, then, in some ways proves Sherman’s own point about pictures. Unlike with Weems, there’s no sincere message being supplanted by a product. Instead, Marc Jacobs is just another costume Sherman dons.
Nan Goldin’s Gucci campaign this past fall saw Blondie singer Debbie Harry in the back of a vintage car with a small dog and a fancy bag. Like Goldin’s iconic work of the 1980s, this is a portrait of an artsy New Yorker all made up for a night out downtown. But remove the intimacy, the spontaneity, the lo-fi camera, the resourcefulness, and the grit from a Nan Goldin photograph, and what do you have left? Just a regular picture. Harry is impeccably lit and the shot feels impersonal, like a still from a Hollywood remake of that stunning 2022 documentary about Goldin’s life, All the Beauty and the Bloodshed. The campaign proves that Goldin’s signature is, in fact, inimitable, even by her.
We should be asking, is fashion supporting the arts or is it subsuming them? A museum director recently mentioned to me that younger patrons are proving harder to attract because they are investing in their closets instead. Besides, these days they can get their art fix—painting, sculpture, installation—from runway shows.
Then again, there are brands, like Dior and Chanel, that seem to get why total collapse won’t do, and so are sponsoring museum exhibitions and buying ads in art magazines as well as tapping artists for collaborations. They’re supporting those ecosystems that make art art, rather than gobbling it all up for themselves, and turning it into something else. Because without those ecosystems and their dialogues, not only will the art simply be less good … there will be fewer opportunities to show off your fancy outfits at galas!
What’s truly worth defending against the collapsing worlds of art and luxury isn’t exclusivity or art world insider baseball. It’s that sphere set apart for experimentation and risk, for weirdness and uselessness, for art that can challenge, delight, and surprise.
IF YOU ARE A MILLENNIAL AND ART WORLD ADJACENT, chances are you’ve come across the Instagram posts of artists Brad Troemel or Joshua Citarella. The two originally became famous for artistic gags and trolls. These days, Troemel posts curated selections from TikTok, appending ironic captions that reference aspects of contemporary internet culture: hustle porn, new age manifestation, or therapy talk, to name a few. Citarella, meanwhile, posts about his research into niche online political identities, from graphs analyzing where Gen Z falls on the political compass test to his own artworks’ meme-like iconography. Both figures use their social media accounts as portals to Patreon-funded content that takes such forms as videos, podcasts, newsletters, livestreams, and private chat servers, all scrutinizing trends in contemporary art and technology.
Their online presence marks an intriguing shift from post-internet artist to content creator. The two artists were closely associated with the post-internet scene, that notoriously amorphous movement from the 2010s whose predominantly millennial practitioners were “extremely online” at a time before that condition became an epidemic affecting almost all the 40-and-under population. Before finding their focus on Instagram and Patreon, Troemel and Citarella regularly exhibited in brick-and-mortar galleries. For a 2016 New York show at the now defunct gallery Feuer/Mesler, Troemel created sculptures following Pinterest tutorials. For Citarella’s 2015 show at Higher Pictures in New York, he created photographic and sculptural works playing with the dual meaning of rez—gaming slang for “resurrection,” though for most people online, just short for “resolution.” The two also collaborated on projects seeking alternatives to the aesthetic and economic paradigms of the “trad art” gallery realm. Together, they ran the influential Tumblr group called the Jogging, which posted manipulated photographs and memes from 2009 to 2014, and in 2015, started the direct-to-consumer online art store, Ultra Violet Production House.
Since the beginning, the pair has had a somewhat polarizing reputation as artist-provocateurs. When the New Yorker profiled Troemel in 2017, in a piece titled “The Troll of Internet Art,” the artist claimed that the duo’s best-selling item was the NADA Spiders for Change Fund (2016). For every $1 donation, they promised to release six poisonous spiders at the 2016 New Art Dealers Alliance Fair. In exchange for receiving photographic proof of a spider found at the fair, they said they’d donate $100 to charity. The work was a frat house prank, presumably unrealized, wrapped in fine art packaging.
“WHERE ARE YOU NOW?” was the title of Orit Gat’s look back at the post-internet movement last year in Frieze. In April, video artist Andrew Norman Wilson gave one answer in the form of an extended personal essay for the Baffler. There, Wilson recounted in unsparing detail his seemingly unending financial precarity, despite his apparent success by traditional art world markers, among them acquisitions and commissions from MoMA, the Getty, and the Centre Pompidou. The story’s particulars are specific to Wilson (a bizarre house-sitting arrangement involving a horny tortoise; chronic, undiagnosed illness) but its general patterns (underpaid gig work; high student loan debt; inadequate health insurance) are all too familiar to Millennials who don’t come from wealth and chose to pursue a creative career at great personal cost.
All post-internet figures have had to adapt, and only some—such as Cory Arcangel, Hito Steyerl, and Simon Denny—remain in the art world. In this magazine last year, Emily Watlington argued that the rise of NFTs and AI caused many in the post-internet movement to abandon digital art and go back to the land. Other post-internet artists pivoted to different cultural pursuits in response to the economic conditions Wilson’s essay details. Amalia Ulman, for example, known for her 2014 Instagram performance art hoax “Excellences & Perfections,” continues to show work in galleries but has also branched out into the film industry. Artie Vierkant, known for his “Image Objects” series in which he printed digital images then fitted them onto 3D sculptures, now cohosts a leftist podcast, Death Panel, and in 2022 coauthored the book Health Communism, published by Verso.
Troemel and Citarella, meanwhile, shifted to content creation (though Citarella continues to exhibit in galleries and museums). Call it the post-post-internet hustle, if the original movement’s name isn’t confusing enough for you. For artists who amassed sizable social media followings in the 2010s, monetizing their practices this way makes sense both as a bulwark against art market vicissitudes and as proof of concept that their practices can operate outside the art world institutions they were bent on critiquing. It also highlights the difficulties of maintaining an anti-capitalist practice in an industry where stable employment and livable wages are scarce.
HOW HAS THIS SHIFT to content creation impacted the work? Troemel’s practice has mellowed with age. His principal output now comprises the aforementioned “reports,” which, when not abridged as social media posts, take the form of 30-plus-minute-long video essays about contemporary arts culture, available to his Patreon subscribers. Wearing a T-shirt and gray Yankees cap, Troemel narrates heavily researched videos on topics such as the culture wars, celebrity art, and AI, while slideshows illustrating his points play onscreen. The tone is equal parts anthropological and bemused, as though Troemel were cataloging online arts discourse so as to marvel at its excesses.
For example, in the Cloutbombing Report (2023), he argues that early 2020s media schadenfreude toward the Dimes Square art scene’s mythos was motivated by “culture industry Millennials [who] were forced, for the first time ever, to confront a scene distinctly younger than themselves.” He calls this confrontation with aging “a wound to the ego everyone is forced to experience,” and adds, as a tweak, “no matter how much they’re babied.” Yet his critique omits the simplest explanation for why Millennials and others remained wary of the Dimes Square scene, which is that they disagree with its post-left politics. Troemel’s digs at what he calls “Millennial cultural liberalism”—2010s efforts toward greater inclusivity on the basis of race, gender, sexuality, and disability—are common in his reports. He sides with fellow Dimes Square edgelords in believing that such inclusivity values art solely for its moral instrumentalism, “rather than [to] nudge viewers toward asking their own questions.”
Viewers would do well to ask their own questions about Troemel’s reports. These first drafts of art history, written by a participant-observer, contain useful syntheses of recent zeitgeists. But they can be surprisingly moralistic—calling out call-out culture, in essence—and he often cherry-picks evidence for inflammatory effect. In the Cloutbombing Report, for instance, Troemel decries the “unrealistic behavioral and communicative standards” of online discourse, as a decontextualized July 31, 2023, post from @thefatsextherapist’s 150K-follower Instagram account appears on screen. “Don’t call it feminist art,” reads the post about that summer’s Barbie movie, “if there are no meaningful representations of fat people in the body of the work.” Troemel uses the post to argue that IRL human interactions require conflict negotiation skills that URL ones don’t, but the original post’s comments section shows users exercising precisely those skills, sometimes with considerable nuance. What’s more, a public post that readers can engage with invites more opportunities for negotiation than a private video monologue.
Troemel’s daily compilations of TikToks and memes, which decontextualize user-generated content from niche communities, pair earnest videos about mental health or trauma with overheated cringe, such as a clip of a shirtless male nutrition guru purporting to drink “aged urine” from a mason jar. The absurdist captions are stuffed with buzzwords from the Discourse: “The best healing remedies come from inside your own body; your waste contains everything you need to become your best self.” While it’s unclear what critique, if any, Troemel is making in such moments, his caricatures of the internet’s innumerable micro-trends perpetuate the same engagement bait dynamics as the original content.
Compare Troemel’s treatment of niche online content to Citarella’s 2020 book 20 Interviews, which contains Q+As with members of online political subcultures, a practice Citarella continues to this day on his podcast Do Not Research. The subjects are young adults trying out niche political identities gaining new traction on Instagram, such as anarcho-primitivism, techno-libertarianism, and fully automated luxury communism. While Citarella’s interviews bear similarities to Troemel’s reports in their anthropological curiosity about online behavior, they are more neutral and respectful in tone, even when the subjects’ beliefs conflict with Citarella’s social democratic ideology.
Citarella approached his spoofy-sounding 2021 auto-ethnographic project, “Auto Experiment: Hyper Masculinity,” with similar open-mindedness. The artist undertook a year’s worth of manosphere diet and exercise regimens, from eating raw eggs to weightlifting programs, to see if they would change his left-wing politics. He didn’t become a rugged individualist, but he does continue to lift weights. With that newfound common ground, he found that young men online who were predisposed toward right-wing politics became more willing to listen to his differing views—and in some cases even changed their minds. Like Troemel, Citarella believes that in the past decade, too much emphasis has been placed on cultural inclusivity; in his case, on the grounds that it distracts from society’s underlying class inequities. But rather than sneer at caricatures of liberalism, he endeavors to create space for intergenerational leftist solidarity.
Citarella chronicles others’ behaviors so as to open lines of communication between siloed constituencies, whereas Troemel maintains an us-versus-them gadfly mentality whose core ideological commitments remain vague beyond the schadenfreude of mocking his over-earnest foils. Regardless, both men have found that “shitposting doesn’t scale,” as Citarella once put it. The trolling that he and Troemel utilized when younger, among friends and peers, doesn’t translate as their audience grows and their context collapses. The shifts in both artists’ practice—from provoking online arts discourse to chronicling it—are responses to these conditions.
AS THE YOUTHFUL AMATEURISM of online culture has calcified into atomized professionalism, some tech-minded artists have responded by pursuing alternative paradigms to platform capitalism. The Dark Forest Anthology of the Internet, published in 2024 by Metalabel, a digital space for the cooperative release of creative work, provides a handy introduction to these ideas. The title concept comes from Kickstarter cofounder Yancey Strickler’s May 2019 newsletter and is adapted from Chinese sci-fi writer Liu Cixin’s 2008 novel, The Dark Forest. Strickler’s basic point is that, as social media and other public online platforms (called “the clearnet”) grew in prominence during the 2010s, many people retreated to private, curated digital enclaves (called “dark forests”) organized around shared interests. If you weren’t already attuned to such communities, the dark forest concept likely flew under your radar.
This Anthology should help change that. It presents a genealogy of ideas responding to Strickler’s initial essay, from writer and consultant Venkatesh Rao’s May 2019 concept of the “cozyweb,” to designers Arthur Roing Baer and GVN908’s February 2021 explanation of modular “moving castles.” Contributions from visual arts–oriented content creators include two essays from New Models (Caroline Busta and Lil Internet) and one from Do Not Research (Joshua Citarella). Cumulatively, the book makes the case that niche digital communities not only provide bastions of “safety, meaning, and context” within today’s adversarial clearnet but may also form the basis for tomorrow’s social and professional institutions.
These counterinstitutions are emerging both from financial necessity and from fatigue with the polarization of online discourse during the Trump presidency and Covid years. Subscribers pay for access to both content and community, as exchanges on dark forest platforms experience less context collapse—less bad faith antagonism—than exchanges on clearnet platforms. But curiously, the concerns with safety and visibility motivating dark forest withdrawals from the public fray echo liberal language concerning the safety and visibility of people with historically marginalized identities. This parallel sits oddly with the reservations about 2010s-style identity politics expressed by many dark forest community leaders, including Troemel and Citarella, both of whom operate communities through Patreon.
This tension says something about where Millennials are now, in the arts and beyond. After a decade and a half of unprecedented access to everybody else’s takes—or at least the performative versions of those takes—it’s become easy to find your digital people, but hard to feel like you can be left in peace with them. The clearnet attention economy’s context collapse makes even historically centered individuals feel overexposed. You can retreat into a like-minded enclave, and participate in the group’s flourishing or ressentiment, but a big part of doing art and politics, and many things in between, involves sharing its fruits with strangers. For that, you need to open lines of communication and build a culture, maybe even an economy, that others like you, as well as others different from you, also want to see.
IF RECENT HEADLINES are any indication, one of the most pressing issues right now is the threat posed by fake or manipulated images. The wide availability of generative AI, along with the increasingly user-friendly interface of image editing software like Photoshop, has enabled most people with a computer and internet access to produce images that are liable to deceive. The potential dangers range from art forgery to identity fraud to political disinformation. The message is clear: images can mislead, and the stakes are high. You should learn to tell the real from the fake.
Or should you?
The most recent headline grabber is an instructive case in point. A suspect photo of Princess Kate offered grist to the churning mill of royal conspiracy theorists. To mark British Mother’s Day, Kensington Palace released a photo of Middleton with her three children, the first photograph of her to be published since she had surgery in January. Major news agencies like the Associated Press promptly killed the photograph, citing anomalies that cast doubt on its authenticity. Rumors exploded, and Middleton subsequently issued an apology, claiming responsibility for the bad Photoshop job before announcing the reason behind her desire to conceal: the princess has cancer.
Before all this was clarified, journalists identified the characteristic tells of a manipulated, or outright fabricated, image in the Middleton photo. Their close attention to these attributes is not unlike how I, as an art historian, examine a painting. Such signs, amounting to what one might think of as connoisseurship in the age of digital images, include:
In gathering a credible team to search for these traits, the Associated Press performed a task that ought to become a standard service offered by news agencies now: arbitrating the authenticity of news imagery disseminated to the public. The reliability of this task, of course, requires that news agencies remain free from state, corporate, and political influence, further incentive to protect democracy. Because, useful as this list may be for the moment, when it comes to combating AI, it’s more of a stopgap measure that misses three bigger issues.
One issue is that every image is worth scrutinizing as a cultural object that conveys values—but only if we can be certain about its origins. How can we interpret a photograph of an event from 1924 if the photograph was digitally fabricated in 2024?
The second issue is that the responsibility for assessing the authenticity of images has fallen to untrained citizen volunteers.
And the third is that, shortly after this piece is published, the list above will be obsolete: both image editing programs and generative AI are perpetual works in progress. Individuals can try to keep pace with these developments, but the effort can never amount to more than a rearguard maneuver, leaving whatever damage deceptive images have already done a fait accompli. And none of these concerns even begins to address the biases inherent in generative AI, which is trained on datasets overwhelmingly populated by white faces.
The Middleton episode is telling not because it involved a manipulated photo: celebrities have been the subject of doctored images forever, from the earliest idealized sculptures of emperors to every photoshop fail a Kardashian has committed. And it is easy to empathize with Middleton’s wanting privacy at such a time. But still, the affair is suggestive of a new regime of mistrust prompted by the broad availability of AI-generated imagery. Far more alarming than the misleading images themselves is the crisis of confidence we are experiencing, accompanied as it is by the erosion of public consensus about what constitutes a credible source. This consensus is the basis for productive communication and good-faith debate. Yet the barrage of bullshit on the internet cultivates an environment of acute cynicism that is detrimental to civic participation.
To be clear, skepticism is healthy, and gullibility is dangerous. Images can lie not simply because they have been generated or manipulated algorithmically. Images can
lie because of the words that caption them, or for what they leave out.
But the problem is not skepticism. Nor is it only that anyone can create and widely distribute a faked image. It’s that this ability has given everyone a permission structure to doubt. Everyone, in other words, has been granted license to choose which images they will and will not believe, and they can elect to unsee an image simply because it doesn’t confirm their priors: the mere possibility of its algorithmic generation opens it to suspicion.
This then encourages people to become their own image detectives, exacerbating the boom in conspiracy theories that gave us anti-vaccination campaigns and allegations of voter fraud. It not only normalizes suspicion as everyone’s default setting, it also suggests that the algorithmic tools at everyone’s disposal (i.e., Google) can themselves reverse-engineer algorithms,
and that they are all that is needed to discover the truth.
WHAT, IF ANYTHING, can art history offer us in this regard? Close looking can’t solve the problem: soon enough, the target will move. The problem concerns the culture of images, and that’s something that art history can help us assess, and perhaps even resolve. More than 30 years ago, art historian Jonathan Crary opened his book Techniques of the Observer by commenting that “the rapid development in little more than a decade of a vast array of computer graphics techniques is part of a sweeping reconfiguration of relations between an observing subject and modes of representation.” Unchecked, the ultimate outcome of this reconfiguration will be a profound doubt that threatens to plunge us all into nihilism and paralysis. One could argue that this, and not the faked images themselves, is the endgame
of those who wish to weaken people’s belief in the value of basic civic institutions and the fourth estate.
If the tips I offered above about sussing out photoshopped or AI-generated images are useful, then by all means, deploy this form of close looking to every image online. But the better solution, I think, lies not in connoisseurship but in provenance: not in close looking but in sourcing.
Art historians look carefully at images to search for incongruities. In authenticating or attributing a painting, we don’t just look at brushstrokes and pigments. We consider the painting’s ownership, the hands through which it has passed, and other information about the history that the painting has accumulated along the way. Our present situation demands a similar process for digital images—known as digital forensics—but the public at large cannot be responsible for this process. At some point, every person needs to accept that they cannot claim impartiality or universal expertise: I cannot tell if a bridge is safe to drive over or determine whether my lettuce contains E. coli. So I value agencies and organizations that employ experts who can. The same goes for the sources of information I consume, including those that provide images illustrating current events, who should be responsible for doing the provenance research outlined here. That’s as far as my own provenance research can go.
One model for alleviating the paranoia may be as simple as supporting news agencies and image archives that employ professionals to authenticate the images they reproduce. The Associated Press has now shown this can be done.
If this seems impractical, I have to ask: what’s more impractical, strengthening journalistic integrity, or requiring that all consumers of news become their own digital forensics experts?
Years later, it still seems unbelievable. A designer is tapped to build a grand public structure, with a budget of $75 million, as the centerpiece of a Manhattan real estate project. As he works, the cost rises above $150 million—more than the annual expenses of the Whitney Museum, more than the price of an F-35 fighter jet, more than any artist before could ever possibly hope to have at their command. Eventually, it is said to climb further, to $200 million, with some landscaping added.
The design is closely guarded until 2016. Then, renderings are released. The grand reveal: this designer is planning to make … a tower of stairs—154 flights, to be exact, all arrayed in a kind of upside-down cone, like shawarma on a spit, stretching 16 stories (some 150 feet) into the sky. In 2018 the designer offers a wan explanation: “What I like about stairs—as soon as you start using your body, it breaks down potential artistic bullshit, because there’s just an immediacy to straining your leg,” he tells the New Yorker’s Ian Parker.
Then, early 2019, Thomas Heatherwick’s Vessel opens to the public in Hudson Yards, the crowning jewel of a complex of towering corporate offices, luxury apartments, luxury stores, and a luxury hotel developed by a luxury gym chain. Its pristine copper-colored cladding gleams in the sun. It looks alien and a little menacing, like a digital creation clicked and dragged from a computer screen into real life. It is vacuous in its celebration of vertigo-inducing capital and private ambition, and even though it closes to visitors not long thereafter, in May 2021, it has to rank as one of the defining architectural projects—one of the defining artworks—of the era.
Miraculously, this managed not to derail the 53-year-old Englishman’s career. Gargantuan, eye-catching Heatherwick schemes continue to crop up around the world. Boris Johnson has compared him to Michelangelo. Diane von Furstenberg has termed him a “genius.” For engineer Tony Fadell, the “father of the iPod,” he is “a creative genius.” Billionaire Stephen Ross, the man behind Hudson Yards, is said to view him as “the ultimate genius.”
It is no crime for artists and designers to be adored by the wealthy and powerful, of course. It’s essential. (Michelangelo certainly knew this.) But Heatherwick has become the go-to artist of the ultra-rich. Why?
ONE ANSWER IS THAT Heatherwick really can make punchy spectacles—edifices that become landmarks that patrons tout with easy pride. An early success was the Rolling Bridge, conceived for a London office and retail development where it was installed in 2004. More a kinetic sculpture than a bridge, it unfolds grandly from an octagon into a now-nonfunctional 36-foot-long footbridge over a canal in Paddington Basin. (Comprising thousands of complex moving parts that stopped working in 2021, it may never be repaired.) A few years later, his UK Pavilion for Expo 2010 in Shanghai, covered with 60,000 thin acrylic rods, was a shimmering Op art tour de force. And his similar starburst of a sculpture for Manchester, England, the nearly 200-foot-tall B of the Bang (2005), emanated the thrill of a vision brought improbably to life. Sadly, it was removed because parts of its 180 spikes kept falling off. Even the lobbying of Antony Gormley, another lover of bombast, could not save it.
But these are essentially razzle-dazzle, one-note pleasures, perfect examples of Ed Ruscha’s old line about the reaction that bad art elicits: “Wow! Huh?” Whereas good art draws those same words in reverse. Heatherwick’s 2007 Spun Chair, rendered in polished copper and stainless steel, could be a mascot for his methods: a sleek chair (picture a thread spool pinched at the center) that sitters can tilt at an angle and spin in a complete circle. It’s fun for a few spins.
Heatherwick’s competitor (and collaborator on a 2022 Google building in California), Bjarke Ingels, nailed it when he told the New Yorker: “There’s a Harry Potter-esque, Victorian quirkiness in the work. An element of steampunk, almost.” He comes bearing showy designs that aim to be icons for a development, a neighborhood, a city. A prime example is the Garden Bridge, a $260 million tree-filled pedestrian walkway across the River Thames in London that Heatherwick plotted with Johnson during the latter’s mayoralty; the scheme was scrapped in 2017 after having sucked up $48 million in public funds.
The Heatherwick phenomenon is not a tale of gentrification. That work has usually been done by the time he gets the call. Long ago, white-cube galleries in West Chelsea and the rent-spiking High Line paved the way for Hudson Yards, which was helped along by almost $6 billion in tax breaks enacted by dubious rezoning that made Harlem, Central Park, and Hudson Yards all one low-employment district (never mind that only one of these had people living in it: the latter is a former train yard). He is, instead, an exemplary architect for a time when cities have become unbearably expensive and the wealthiest do not believe they should have to pay taxes.
HEATHERWICK, HOWEVER, positions himself as a man of the people. In his new manifesto of a book, running nearly 500 pages, he goes on the attack against the past century of design. “Some architects see themselves as artists,” he writes in Humanize: A Maker’s Guide to Designing Our Cities. “The problem is, the rest of us are forced to live with this ‘art.’” He inveighs against buildings that are “boring”—too flat, plain, straight, shiny, monotonous, anonymous, serious. Some 50 pages are devoted to a diatribe against Le Corbusier, “the god of boring,” whose theories “gave permission for repetitive order to utterly overpower complexity,” which Heatherwick prizes.
“Modernist architects think boring buildings are beautiful,” Heatherwick grouses. Their minimal, theoretically loaded work has lent cover for the cheap, knockoff stuff that sits alongside it. Against these elites and their “emotional austerity,” their buildings that “make us stressed, sick, lonely, and scared,” he adopts the language of the populist politician. “I am going to make a promise to you,” he writes in a lightly condescending letter to the “passerby” that closes Humanize. “I will dedicate the rest of my life to this war. But I need you … to join us. Our aim is modest: we just want buildings that are not boring!” And if boringness sounds difficult to measure, do not worry: Heatherwick Studio has made a “Boringometer” to determine how interesting a structure’s shapes and textures are, on a scale of 1 to 10.
The obvious irony is that many Heatherwick structures read like desperate, failing attempts not to be boring, via some whiz-bang trick. They illustrate Sianne Ngai’s theory of the gimmick—a device induced by late capitalism that falls flat for appearing to work both too little and too hard. Bulbous, grenade-shaped windows monotonously line his 2021 Lantern House apartment building in Manhattan’s Chelsea neighborhood, while his newly opened 1,000 Trees mall in Shanghai features, yes, 1,000 trees, each sitting on its own mushroomlike column high in the air around the stepped building. It suggests a videogame environment, as do renderings for his overwrought multifarious proposal for an island in Seoul’s Han River.
While purporting to speak on behalf of everyday people, Heatherwick is careful to do nothing that could actually offend the ultra-rich. In a revealing passage in Humanize, he praises Antoni Gaudí’s curvaceous Casa Milà in Barcelona for “wanting to fill us up with awe and break us out in smiles.” Says Heatherwick: “Even though this building was made to provide high-end apartments for wealthy people, I believe it is a gift.” We should be grateful.
Heatherwick’s pitch sounds precisely attuned to the ears of politicians who are disinclined to pursue projects that might actually benefit the public at a time of government austerity (forget about the emotional strain). Self-styled technocrat Michael Bloomberg blurbed Humanize, praising it as “a powerful prescription for buildings that put the public first.”
“Our most vulnerable people live in the most boring buildings,” Heatherwick writes on a page that is illustrated, bizarrely, by the burned-out Grenfell Tower, where 72 people died in 2017. “Why should absence of boredom be a luxury good?” Heatherwick, it should be noted, has not pursued any large-scale, or affordable, housing projects that I am aware of.
Making buildings and cities that are more hospitable, livable, and generous is a noble pursuit, but the designer of a cold and imposing nine-figure stairway to nowhere does not feel like the right man for the job—not least because he and his developer-patron declined to install safety features after a series of suicides there. (Following the fourth, they finally closed it; nets are reportedly being tested.) Standing below it, I do not feel that I am receiving a gift.
STILL, IT IS EASY to share a common enemy with Heatherwick: boring buildings that exhibit little regard for those who use them. We all spend time in places made with little imagination and even less care. We deserve more. As he writes, “we’re richer than we’ve ever been at any point in history.” Heatherwick, making that pitch to deep-pocketed developers, has not often been able to deliver satisfying structures, but his brio should inspire everyone, whether commissioned architects or apartment renters or voters, to ask for more.
In any case, some ideas that Heatherwick floats in his tome for creating better buildings are sensible mainstream ones that practitioners and activists do advocate, like reducing regulations and simplifying planning processes. (Such moves could also assist wealthy developers, to be sure.) But my favorite Heatherwick prescription is an eccentric one, and absolutely peak Heatherwick: “Sign buildings.” Instead of “staying in the shadows,” he says, a building’s creators should “be proudly named at eye level on the outsides of their projects.”
“Why would anyone involved in the process of building buildings be against this?” he asks. “Why wouldn’t you be proud? Why wouldn’t you want to sign your canvas?”
ON FEBRUARY 28, 1974, Tony Shafrazi walked into the Museum of Modern Art in New York and spray-painted the words “KILL LIES ALL” in red across the achromatic surface of Picasso’s Guernica (1937), in protest of United States atrocities in Vietnam. The next day, his action appeared on the front page of the New York Times, as he had intended: Shafrazi had notified news agencies in advance.
On October 14, 2022, nearly 50 years later, Phoebe Plummer and Anna Holland walked into the National Gallery in London, opened a can of tomato soup, and splattered it across the glass protecting Van Gogh’s Sunflowers (1888). The duo then smeared superglue on their palms before affixing them to the wall below the work. Plummer, voice quivering with emotion, demanded: “What is worth more? Art or life?”
The gesture, planned by the activist group Just Stop Oil, was a call to arms against the fossil fuel industry. The action immediately went viral. News reports invariably called it—as well as similar subsequent interventions—an “attack.” Museums, one after another, have continually condemned the “endangerment” of artworks, while being careful not to denounce the activists’ politics. As climate protests in museums have proliferated, debates have focused on the “cost” of these actions, while ignoring the urgency of the activists’ appeals. Similarly, the Times called Shafrazi a “vandal,” but made no direct mention of Vietnam.
If these truly were attacks, the injuries sustained by the artworks were ephemeral. MoMA conservators scrubbed the spray paint from Guernica’s varnished surface by the end of the day. The National Gallery cleaned and rehung Sunflowers within six hours. The climate activists deliberately targeted the work’s protective glass and frame, not the painting itself. Materially, this doesn’t constitute an attack on the artwork at all; rather, both gestures are political performances that operate primarily within the symbolic sphere.
BUT WHEREAS SHAFRAZI had intended to reactivate Guernica’s antiwar message, to make the painting feel as urgent as it had during the Spanish Civil War, climate activists like Plummer and Holland understand artworks as inseparable from a larger social world. Their action was less about Sunflowers as a painting and more about the value and function of art within economies of attention and exchange.
And yet, the art media quickly questioned: why Sunflowers? They asked the same in the many subsequent cases: Why Monet’s Haystacks? Why Degas’s Little Dancer? Why Laocoön and His Sons? Protesters offered various explanations, but the common denominator is clear: all these works benefit from a near-universal consensus that each is a masterpiece. Their hyper-visibility lends social drama and social meaning to the activists’ interventions. The works—often described as “priceless” by journalists—are focal points of cultural and monetary value. They are, therefore, the exact points where these values might be called into question.
The sense of endangerment that these actions elicit forces us to reckon with the matrix of values in which the works are suspended and upheld. Plummer asked onlookers at the National Gallery, “Are you more concerned about the protection of a painting or the protection of our planet and people?” As might be expected from any challenge to the status quo, many museumgoers reacted negatively. In the recordings from the National Gallery, you can hear hushed cries of “Oh, my gosh!” and an urgent call for security. Climate protesters acting outside the rarefied context of the museum tend to elicit even stronger reactions. In a video of a Just Stop Oil action at the Chelsea Flower Show, an indignant onlooker doused protesters with a sprinkler until restrained by a uniformed guard.
Some climate activists (or climate-aware non-activists) fear alienating the non-activist public with acts of civil disobedience. They prefer to maintain an institutionally sanctioned and law-abiding public face. But as an artist who engages climate change in my own work, I see value in these acts. By hijacking the attention we pay these artworks, the activists’ gestures have triggered public conversations around fossil fuels and climate that would not have happened otherwise, redirecting attention where it is badly needed.
DESPITE CRITICISM TO THE CONTRARY, Plummer and Holland have expressed reverence for Sunflowers. Holland showed up to their court appearance in a Sunflowers T-shirt, and the duo has described an imagined solidarity with Van Gogh. In a Frieze interview with Andrew Durbin, Plummer opined: “Van Gogh said, ‘What would life be if we had no courage to attempt anything?’ I’d like to think Van Gogh would be one of those people who knows we need to step up into civil disobedience and nonviolent direct action.” In the same conversation, Holland championed the series’ beauty and iconic standing. The activist duo’s unlikely pairing of symbolic violence and aesthetic beauty is what granted their gesture its potency.
Preservation is one of the museum’s chief functions, but it has also long served as a space for discourse, a public arena for democratic debate. In 2019 the International Council of Museums controversially proposed a new definition for museums that began: “Museums are democratising, inclusive and polyphonic spaces for critical dialogue about the pasts and the futures.” While this line was ultimately excised, the body agreed in 2022 that museums must “foster diversity and sustainability” and invite community participation. Actions like throwing soup on Sunflowers have succeeded in reasserting the museum as a political space. The dissensus that makes these events so uncomfortable to observe (the heckling, the disconnect between activist and onlooker) is part of what gives these encounters a strongly political dimension: antagonism must be part of democratic processes in a deeply divided world.
A Forbes opinion piece headlined “Will Hurling Tomato Soup on Van Gogh’s Sunflowers Advance Climate Policy?” by Nives Dolsak and Aseem Prakash takes a typical position: it agrees with the protesters’ message but questions their methods. The authors then go on to delineate a list of policy changes that they see as truly actionable.
The very existence of the piece proves that museum actions opened the door to conversations around climate, creating a global audience that far exceeds those present when two activists glued themselves to a museum wall.
These are the questions the protesters want us to ask: should, and will, governments grant new licenses to extract fossil fuels? Will governments meet decarbonization targets? Will they set timelines that avoid, or mitigate, life-threatening environmental effects? All governmental actions taken to date have been far too modest. Immediate and radical change is necessary.
To unsympathetic observers, the anger embedded in activists’ gestures can seem like an excess of feeling. But that anger is rooted in real suffering and loss. As I write, thousands have been killed by flooding in Libya, a disaster caused by a lethal combination of infrastructural failure and unprecedented storms. After a raging wildfire, Maui remains a scorched wasteland with close to 100 dead. Smoke still trails from Canada, where millions of acres of boreal forest have burned over the course of a single summer.
Ironically, the enormous scale of these climate-fueled disasters makes them hard to see and easy to dismiss. At a recent Extinction Rebellion march, I overheard a woman, turning away a flyer, hit back with: “Save the bloody world? No thanks, not today. Maybe tomorrow.” I make art that confronts climate change because I believe art can make the invisible visible, the unheard heard, and the unsensed sensible. Similarly, museum climate protests harness art’s power to unveil.
WHILE I BELIEVE these actions have been successful, I don’t think they are replicable. The actions that have the most staying power are the ones that have appropriated particular artworks in peculiar ways. The orange tomato soup created an image, temporarily, that looked as if Van Gogh’s blossoms, or his oils, had melted in the heat of the Arles sun. This bit of visual play surely helped rocket the event through the algorithms. The activists’ youth (Plummer was 21, Holland, 20) was certainly another important factor. Other similar gestures (pea soup on Van Gogh’s The Sower, black oily drips on Klimt’s Life and Death) have not generated quite the same scale of response. The protest must go on, but it will take on new sites and new forms.
Among the dozens of museum interventions carried out over the past year, another stood out to me for the artwork it engaged. In August 2022, activists from the group Ultima Generazione carefully planned an action involving Umberto Boccioni’s 1913 Futurist icon Unique Forms of Continuity in Space. Four group members glued their hands to the plinth supporting the bronze sculpture to avoid touching the work itself. They called not only for an end to new fossil fuel permits, but also for government-led expansion of renewable energies.
To me, the Futurists represent the very birth of fossil fuel modernism: they celebrated speeding automobiles and soot-cloaked cities, studded with smokestacks. But the Futurists were not ignorant of the dangers of the dawning machine age: their fiery fete was, also, a dance of death. The Futurists’ calls to tear down the old museums seem at first glance to presage recent museum eco-actions.
Climate activists meticulously stage events in ways that limit harm, suggesting a very different attitude toward artistic heritage. Environmental activists are, after all, making a plea to preserve the world. The Futurists’ radical program of historical extermination—and giddy embrace of the breakneck thrills of a machinic future—are precisely what environmental activists are countering. For them, what has become radical is to oppose a profit-driven ethos of endless appetites and infinite garbage heaps. The Futurist vision, with all its destructive drives attached, has become our world.
I recently visited Sunflowers in London and sensed a fresh energy among the crowds descending on the work. Following in the footsteps of Guernica in 1974—a year after Picasso’s death and a year before Franco’s—Sunflowers had just recently made headlines. Beneath it, I could make out two patches of fresh paint on the slate-blue walls, where Plummer and Holland had affixed their hands, and what may have been a tomato stain on the varnished floorboards. Amid an urgent global crisis, it’s easy to dismiss the role of art. But this unforeseen confluence of art and activism confirms that art has shaped and will continue to shape social and political responses to the climate emergency.
This article appears under the title “Soup & Sunflowers” in the Winter 2023 issue, pp. 40–42.
Riva Lehrer is a Disabled artist and writer based in Chicago. She teaches Medical Humanities at Northwestern University.
A few days ago, I received a barrage of messages about the mysterious kerfuffle at the Mütter Museum. The Philadelphia institution has long collected, preserved, and displayed human specimens in order to “help the public appreciate the mysteries and beauty of the human body while understanding the history of diagnosis and treatment,” per their website.
Recently, the museum hired a new executive director, Kate Quinn, then promptly took down its online exhibitions and popular YouTube channels. Soon enough, it announced on Instagram that it was temporarily putting its collection under review, “in recognition of evolving legal issues and professional standards pertaining to the exhibition of human remains,” per a statement released on June 6. The museum anticipates that the review will be complete by Labor Day.
People knew from my memoir Golem Girl (2020) that I had a formative experience at the museum, and that I had been lecturing on its collection for years as a Medical Humanities Instructor at Northwestern University.
The Mütter joins medical and natural history museums around the world that are debating the ethical treatment of human remains. There is the question of provenance: at the Mütter, some specimens may have been accepted into the collection under dubious or outright unethical circumstances. Mütter curator Anna Dhody has written about one unclear holding. Other provenance issues have recently been resolved after decades of negotiation. And in some instances, there is virtually no paper trail at all.
All this gets at a deeper, more troubling question: can it ever be ethical to own, or exhibit, someone else’s body? And if so, how should those bodies be displayed?
Because most of the collection represents bodies with impairments, the Mütter has long elicited a complex range of reactions from the Disability community. For years, Disabled colleagues and friends said that they were appalled by the way that the museum displayed nonnormative bodies. Many felt that the Mütter engaged uncritically in the tradition of the freak show even when it had the opportunity to create a space for Disabled people to construct families of choice.
As the Mütter debates the fate of their holdings, in other quadrants of my community (I am Disabled), complex ideas about the display of nonnormative bodies are emerging. For so long, we’ve been absent from movies, art, and books, save for the occasional ableist trope. We are starting to recognize that this museum is one of the very few places where we actually see ourselves, where we confront our reality and our place in history.
My first visit to the Mütter in the fall of 2006 changed my life. The establishment started out as the private holdings of a surgeon named Dr. Thomas Dent Mütter, but in 1858, he donated his specimen collection to the Philadelphia College of Physicians as a teaching tool for medical students. A few years later, the Mütter Museum opened to the public.
At the time of my first visit, I was teaching anatomy at the School of the Art Institute of Chicago. I wanted to learn from the displays, to understand more about the physical and medical forces behind variant bodies.
All the while, I was very aware of the fact that I myself could easily have been a specimen. All around me were bodies that resembled those of my friends: bodies impacted by genetics, birth events, diseases, and accidents. But the biographical information accompanying each “specimen” rarely went past a skimpy medical narrative. Worse, almost nothing described them as living, complicated people. My frustration and anger built with each successive case.
Then, downstairs, I rounded a corner before staggering to a halt: there, I confronted a tall glass case containing shelves of fetuses with spina bifida, which is my disability. These fetuses far preceded my own birth; before the mid-1950s, when a surgery was developed to fix the lesion, children like me were rarely treated, and tended to die very young.
Even though I taught anatomy, I’d always avoided looking up pictures of what a spina bifida fetus looked like. But now, here I was—my own fetal body, with its swollen balloon-like eruption sticking out of my back. I stopped breathing. My friend caught me just as I passed out.
It didn’t come right away. But after months had gone by, the feeling I was left with was a sense of communion. That trip to the Mütter became one of the most profound and transformative experiences of my life.
However.
It still frustrated me that there was no information about spina bifida—its causes, its medical history. Worse, there was nothing representing people with neural tube defects who are alive today. While it is crucial that the museum supply sufficient medical information about its holdings, Disabled people are not looking to be depicted as mere medical problems. My memoir, Golem Girl, is about my life as a monster. This is how I’ve been treated most of my life, including, too often, by the medical establishment.
Many Mütter displays do seem to portray us as freaks. The best—or worst—example is a floor-to-ceiling case containing three complete skeletons and one skull. One skeleton is identified as a “normal” man, at around 5’10”. The 7’6” skeleton, of a man known only as the “Kentucky Giant,” towers over him. A Little Person named Mary Ashberry stands at about one-third of the Giant’s height. Mary died sometime in the 1850s, in childbirth. The skull of her stillborn child is plopped unceremoniously at her feet. (Earlier, the skull was shown in Mary’s hands, but the armature for keeping it there was unstable.)
The three are placed side-by-side so as to underscore their extreme variance. “Mr. Normal” is the yardstick against which the other two are measured.
Troubling displays like this impact how the medical community, and the general public, perceive and interact with disabled people. I have spoken over the years with many disabled women (including Little People) who became pregnant or wanted to. Most of them faced difficulty finding or retaining medical providers. Often, their doctors either refused to help, or discouraged them from becoming pregnant. Mary Ashberry died, it would seem, because no one could help her deliver a baby too large for her small pelvis.
Let’s imagine that Mary Ashberry has a family that cares about the disposition of her remains; the Mütter contacts her descendants and says it would like to continue to exhibit Mary. Imagine how they might feel if they were presented with that terrible three-body case.
Now, let’s imagine Mary and her infant’s remains are placed in their own dedicated display (and for God’s sake, get the infant’s skull off the floor!). This display would feature text written by Disabled women, especially Little People, discussing their experiences as OB/GYN patients, and detailing what it’s like to be pregnant in public with a nonnormative body. I suspect the family might feel differently about this kind of display.
I believe absolutely in the rightness of human display, but it matters how you do it.
The Mütter is the inspiration for my Medical Humanities course called “Drawing in a Jar.” It’s open to first- and second-year med students, who learn to draw using nonnormative fetuses in Northwestern’s collection—a collection very similar to the Mütter’s.
The technical demands of drawing allow them to get used to looking at the fetuses, and give them time to sort out their reactions. They’re often surprised by the beauty of these entities. My students’ final assignment is to present a fifteen-minute biography of a person who has the same impairment as the fetus they’ve drawn, and who has lived within the last twenty-five years. Their subject must have had a public presence, whether in the form of a career, a documentary, a memoir, or a biography. Medical data is limited to five minutes of the presentation; the rest must be a story of a person, not a condition. Too often, med students are taught that the Disabled are tragedies to be eradicated.
My med students are increasingly trained using digital tools. I’ve asked if they’d have had the same experience if we’d used 2-D imagery or even 3-D prints of fetuses as reference for their drawings. Every single one has said it would have been far less transformative. Herein lies the Mütter’s potential power.
I’ve taken them to the incredibly problematic touring “Body Worlds” exhibit on multiple occasions. We discussed its many ethical conundrums, and through it all, it was obvious that my students have a ravenous curiosity about the human body, as does the general public. We all long to know what we are.
I am an artist. I make collaborative portraits with people who undergo stigma due to the shape or performance of their bodies. Art is my life—but renderings are no substitute for confronting a body.
In an article on WHYY, Quinn, the Mütter’s new director, points out that though some visitors love the museum, “there are also people who find it gross and choose not to come back,” citing TripAdvisor reviews.
The Mütter has the opportunity to change ableist narratives and perceptions of nonnormative bodies, instead of portraying us as monsters and freaks. Recently, and to its credit, the museum created some videos that do exactly this, but now those have been taken down, too.
It seems as if the executive director and certain members of the Mütter board find nonnormative bodies embarrassing or distasteful. Were they to remove us, they would not be giving us back our dignity. They’d imply that our bodies are repulsive. Disgusting. Like pregnant women in the time of Queen Victoria, we are best hidden from public view.
Medical museums like the Mütter are, in effect, family albums for the disabled. Many of us—myself included—are the only impaired people in our families. I often go several months without seeing anyone who looks like me. Without my brothers and sisters of the spine, I would never have written my memoir, I would not be teaching at Northwestern, and I would not understand the immense potential of such a collection. Should the board and executive director of the Mütter take the collection off display, it would be an incalculable loss. Let’s not tear it down. Let’s do it better.
I beg the Mütter Board; I beg Kate Quinn: every body can be an unlocked door. You have the power to let the bodies speak.
*The community uses Dwarf, Little Person, Person of Short Stature, and Person with Dwarfism, according to personal preference.
For close to 30 years—up until last week—courts have wrestled with the question of when artists can borrow from previous works by focusing in large part on whether the new work was “transformative”: whether it altered the first with “new expression, meaning or message” (in the words of a 1994 Supreme Court decision). In blockbuster case after blockbuster case involving major artists such as Jeff Koons and Richard Prince, lower courts repeatedly asked that question, even if they often reached disparate results.
But in a major decision last week involving Andy Warhol, the Supreme Court pushed this pillar of copyright law to the background. Instead, the Court shifted the consideration away from the artistic contribution of the new work, and focused instead on commercial concerns. By doing so, the Court’s Warhol decision will significantly limit the amount of borrowing from and building on previous works that artists can engage in.
The case involved 16 works Andy Warhol had created based on a copyrighted photograph taken in 1981 by celebrated rock and roll photographer Lynn Goldsmith of the musician Prince. While Goldsmith had disputed Warhol’s right to create these works, and by implication the rights of museums and collectors to display or sell them, the Supreme Court decided the case on a much narrower issue.
When Prince died in 2016, the Warhol Foundation (now standing in the artist’s shoes) had licensed one of Warhol’s silkscreens for the cover of a special Condé Nast magazine commemorating the musician. Explicitly expressing no opinion on the question of whether Warhol had been entitled to create the works in the first place, the Court ruled 7-2 that this specific licensing of the image was unlikely to be “fair use” under copyright law.
This is not necessarily a problematic result, given that Goldsmith also had a licensing market. Yet despite the Court’s attempt to limit itself to the narrow licensing issue instead of deciding whether Warhol’s creation of the original canvases was permissible, the reasoning of the decision has far broader and more troubling implications.
To know what’s at stake, it’s important to understand the fraught doctrine of “fair use,” which balances the rights of creators to control their works against the rights of the public and other creators to access and build on them.
What’s sometimes lost in this discussion is that copyright law’s purpose (perhaps surprisingly) is to benefit the public—benefit to an individual artist is only incidental. The theory behind the law is that if we want a rich and vibrant culture, we must give artists copyright in their work to ensure they have economic incentives to create. But by the same logic, fair use recognizes that a vital culture also requires giving room to other artists to copy and transform copyrighted works, even if the original creator of those works objects. Otherwise, in the Supreme Court’s words, copyright law “would stifle the very creativity” it is meant to foster. Thus, to win a fair use claim, a new creator must show that her use of someone else’s copyrighted work advances the goals of copyright itself: to promote creativity.
Unfortunately, the Warhol decision took this already complex area of law and made it even more complicated. Lower courts and legal scholars will be fighting for years about its applications. But one thing is clear: it is now far riskier for an artist to borrow from previous work.
Not only did the Court downgrade the importance of whether a new work is transformative, whether it “adds something new and important” (to use the Supreme Court’s words from a previous case). The Court also painted a bizarre picture of Warhol as an inconsequential artist. Surely the Justices of the Supreme Court know that Warhol changed the course of art history. But the Warhol who emerges in the majority opinion is a tame portraitist whose work is just not that different from the photographs on which it is based.
In the Justices’ formulation, Warhol is a “style,” an artist whose “modest alterations” of the underlying photograph brought out a meaning that was already inherent in it, whose work portrayed Prince “somewhat differently” from Goldsmith’s image. Justice Elena Kagan, in a scathing dissent, charged that the majority had reduced Warhol to an Instagram filter.
Nowhere in the majority opinion would you recognize Warhol as a once-radical artist, the one de Kooning drunkenly approached at a cocktail party to utter, “You’re a killer of art, you’re a killer of beauty.” Nowhere does one see the Warhol whom philosopher Arthur Danto called “the nearest thing to a philosophical genius the history of art has produced.” That Warhol is the paradigm of an artist who brings new “meaning and message” to the work he copies, the very kind of artist that the now-diminished emphasis on transformative use was meant to protect.
Of course, this decision is not just about Warhol. For that matter, it’s not just about other Pop artists, or about appropriation artists.
Any artist who works with existing imagery should now reconsider her practice. Hire a lawyer, maybe try to negotiate a license and be ready to move on if you get turned away or can’t afford the fee. The safest and cheapest route—a consideration particularly relevant to younger artists and those who are not rich and famous—is to just steer clear of referencing existing work. Maybe that’s the right direction for art; maybe copying and relying on past work should be discouraged. But given the centrality of allusion, emulation, and copying to the history of art, it’s hard to imagine that’s a good thing. This is particularly so in contemporary digital culture, where, as I have argued, copying has taken on even greater urgency in creativity. But like it or not, these are not questions that artists, critics, and art audiences get to decide. The Supreme Court just changed the future of art.