Mural by the meme artist XVALA covering graffiti artist Banksy’s Steve Jobs Mural in Calais, France | March 2019 | Photo: Wikimedia Commons, used under CC BY-SA 4.0

Memes, Antisemitism, and The Press

Tow Center


By Susan McGregor

Decades before the dawn of the consumer internet, evolutionary biologist Richard Dawkins coined the term “meme” in his popular 1976 book The Selfish Gene. As Dawkins conceived it, a “meme” was essentially anything — from clothing styles to melodies — that was replicated across human groups without relying on actual genetic information. Though such replication had been present throughout human history, this newly minted term for “units of cultural information” captured the imagination of academics in a variety of disciplines. “Memetics,” as the short-lived discipline was called, offered a novel way to think about how human culture propagated itself; for some, it even suggested a way to describe the organization of the human mind.

Though today memes are an inescapable feature of our always-connected online lives, pre-internet “memes” rose to prominence during the 20th century, when they were almost exclusively artifacts of propaganda and the popular press. While not all memes are visual, many of their most enduring forms tend to be: as early as the 1920s, for example, the then-new technology of flashlight photography inspired an illustrated version of the classic side-by-side image macro now known as the “expectation vs. reality” meme:

A 1921 cartoon from humor publication The Judge reminiscent of a current image macro format

As new reproduction technologies — from photocopiers to fax machines to the internet — appeared in the late 20th century, the creation and proliferation of memes ceased to be the sole purview of government propaganda machines and mass media outlets. Using photocopies, paste and stickers, memes began to move off the page and into the physical world. For example, the well-known Shepard Fairey sticker campaigns featuring a likeness of Andre the Giant and the word “OBEY” became a classic meme of the late 1980s and early 1990s, when the imagery emerged in New York and proliferated across cities around the world:

Shepard Fairey’s now-classic OBEY sticker

More recently, the meme has even moved beyond paper, as with Invader’s urban mosaics, which first appeared in the early 2000s:

A mosaic in Manchester, U.K. by graffiti artist Invader

The Internet Gets Memed

The rise of the social web, however, has long since made the term “meme” more or less synonymous with the Internet. Alongside Dawkins’s classic definition of meme, the Oxford English Dictionary now lists an Internet-specific version: “An image, video, piece of text, etc., typically humorous in nature, that is copied and spread rapidly by Internet users, often with slight variations.”

In the 2013 book Memes in Digital Culture, author Limor Shifman offers a more precise definition of Internet memes, and argues for the study of memes in communication scholarship. When viewed through the lens of social media in particular, the communicative quality of memes is clear: today, it is common to see an image-based meme constitute the entirety of a post or reply on social media platforms like Twitter and Facebook. Clearly, such memes are fit for purpose, offering a way to convey context, nuance — even emotion — far more accurately than any combination of 280 characters. Interestingly, however, one of the dominant meme forms on social media is still the “image macro” — a combination of image and text — which itself has evolved little from the early 20th-century “expectation vs. reality” cartoon.

The success of the image-macro meme on social media in particular is likely driven by a number of factors. As noted, the annotated image has the power to efficiently encapsulate whole histories of meaning in a character-limit-friendly way, but it goes beyond this by invoking complex moods or experiences that defy direct description, no matter the words available. For example, the first two panels of a famous strip from K.C. Green’s webcomic Gun Show have become an oft-repeated meme: “This Is Fine.”

The first two panels of “On Fire” from K. C. Green’s webcomic Gun Show

The well-documented fact that the human eye is drawn to images within fields of text has probably also played a role in the image-macro’s success — as, very likely, has the existence of social platforms like Instagram and Pinterest that are specifically designed for image-sharing. Taken together, it’s unsurprising that image-based memes are often the preferred format for attention-getting social media content. And as everyone from celebrities to politicians now uses social media as a means to communicate with the public, the need to take memes seriously — as Shifman argues we should — is more pressing than ever. True media literacy, for both the press and the public at large, must include an understanding of not just what memes mean, but how they “mean.”

Just as we should not mistake a handful of people online for multitudes, we should not allow one person to claim many voices for the sake of political expediency.

Memes for disinformation

According to Shifman’s definition, an Internet meme is

A. a group of digital items sharing common characteristics of content, form, and/or stance;

B. that were created with awareness of each other; and

C. were circulated, imitated, and/or transformed via the Internet by many users.

In the current social media environment, it is perhaps this final characteristic of meme culture — the person-to-person dissemination of the meme — that has made the form such an attractive, powerful, and potentially dangerous format for the spread of disinformation, misinformation, and hate.

In the social network context, information that spreads from one individual to the next — rather than in an obviously centralized, “top down” way — achieves much greater reach. This is true not only because the platform companies themselves have (should they choose to use them) the data and tools necessary to work against actors who seek to “game the system,” but also, research indicates, because the level of trust readers place in the content they are seeing depends not on the original creator (or even the substance) of that content, but on the trust the recipient has in the person who shared it.

Thus, a successful meme-driven disinformation campaign will reach a substantial audience by virtue of its very format, just as the propaganda campaigns of the past leveraged mass-media formats — especially film, radio and television — to achieve the same effect. In stark contrast to these prior efforts, however, meme-driven disinformation campaigns are incredibly cheap to create. Image-driven memes are often intentionally low “production value,” relying, almost as a matter of form, on low-quality images and amateurish design, allowing a large number of them to be created quickly and inexpensively.

This combination of low production cost and high potential payoff — even if success is rare — makes meme creation an efficient mechanism for anyone wishing to spread both disinformation and generally hateful content. An ongoing example of this approach is “Mimetic Monday,” an invention of the neo-Nazi website The Daily Stormer, which collects and publishes dozens of racist and anti-Semitic image memes each week in a single post. As authors Alice Marwick and Rebecca Lewis describe it in their 2017 Data & Society report Media Manipulation and Disinformation Online, the unique characteristics of image-based memes allow online actors like The Daily Stormer to try to influence online discourse by creating an effectively endless stream of hate-based image macros, including those that are explicitly anti-Semitic, racist, or directly use Nazi symbols and imagery.

Because these images are so easy to make, even the work of a relatively small outlet like The Daily Stormer can have an outsize effect, as the meme-based approach allows them to attempt, as Marwick and Lewis put it: “many messages and strategies, pursuing those that stick and abandoning unsuccessful tries.”

One key objective of such campaigns is for messages that begin as disinformation — that is, content intentionally designed to deceive or mislead — to transform, in part through a filtering effect of the meme’s iterative evolution as it spreads, into misinformation — that is, the unintentional person-to-person sharing of false, misleading or otherwise problematic content.

As Marwick and Lewis point out, this is in large part what makes memes useful as propaganda, allowing “alt-right users to spread elements of their ideology to ‘normies,’” a derisive term for the broader public that does not share their extremist views. In other words, carefully crafted memes, if successful, help suffuse a much broader cross-section of social media with hate-tinged messages than would likely be reached otherwise.

But how exactly does this happen? For Dawkins, meme diffusion is almost exclusively a process of imitation, wherein small changes to the original meme might be introduced, but key features that make it recognizable remain the same. In Shifman’s view, too, there is a key element of imitation, though this is more closely coupled with the notions of both iterative transformation and mutual awareness — the idea that variations on a meme exist in a state of something like constant conversation with one another, what Shifman describes as the necessary intertextuality of meme culture.

Yet there is a temporal — or perhaps more accurately, an atemporal — reality to both the spread and interpretation of Internet memes that neither Dawkins’s nor Shifman’s definitions can fully encompass: while it is true that many memes reach their peak within a recognizable timeframe, there are also persistent memes whose visibility ebbs and flows throughout various communities over timeframes of years or even decades. Alternatively, memes may originate within a particular Internet subcommunity before “emerging” onto the broader social web. In these contexts, it is hard to argue that the newer memes are created as either meaningful imitations of the original, or even with a real “awareness of” the original meme.

Instead, such memes may be adopted and travel through new communities online via a process of inference, as suggested by Scott Atran in his 2001 article in Human Nature, The Trouble with Memes. In this context, memes’ transmission depends not on the viewer’s understanding of the original meme’s context or associations, but instead on their ability to form a cogent interpretation of the image independent of its original context.

The level of trust readers place in the content they are seeing is dependent not on the original creator (or even the substance) of that content, but on the trust the recipient has in the person who shared it.

In this context, the conditions for the spread of misinformation are much more favorable, as the person sharing the memetic “image macro” with friends and family may be unwittingly sharing content that has much more pernicious roots — and hateful connotations — than their personal frame of inference would suggest. Even more crucial, however, is that while a given individual or community may be naive to the meme’s original meaning, its unwitting spread can, in retrospect, offer the appearance of broader “support” for its more hateful and extreme interpretations. At this point, the meme may then catapult to mainstream awareness through coverage by the traditional press and mainstream media, with the consequence of both elevating and cementing the hateful message on the national stage.

The Problem of Pepe

Perhaps the best-known recent example of the creation, co-option, and elevation of an image-based meme comes from the 2016 presidential election cycle, when a variation on a character from Matt Furie’s comic strip Boy’s Club, Pepe the Frog, gained national and persistent attention as a symbol of “alt-right” and white nationalist attitudes. Although the clear association of Pepe with explicitly anti-Semitic and pro-Nazi imagery is today undeniable, the evolution of Pepe from relatively innocuous cartoon frog to widely-acknowledged hate symbol demonstrates the complex task of disambiguating and attributing specific “meanings” to memes, even as they become increasingly powerful mechanisms for both personal and political communication and representation.

The character of Pepe the Frog was first introduced online via a MySpace web comic that its creator, Furie, estimates was first uploaded in 2005.

The cover to Fantagraphics’s collected edition of Matt Furie’s comic Boy’s Club

A few years later, a panel from one of the comics showing Pepe saying “feels good man” generated perhaps the earliest Pepe memes, in which the character’s face was integrated into other images, or the “feels good man” slogan was used on message boards as a stand-in explanation for odd or niche behaviors or attitudes. For several years, the Pepe meme existed, for the most part, simply as a reaction image — with variations like “feels bad man” — mostly on Reddit and “chan” image boards (the now-notorious 4chan and 8chan).

Memes created from Matt Furie’s Boy’s Club

In 2014, however, Pepe memes saw an explosion of both recognition and activity, as Tumblogs, Facebook pages, and Instagram accounts for the character were created, with variations posted by celebrities like Katy Perry and Nicki Minaj. 2014 also saw the vastly increased usage of a variation known as Smug Pepe, whose condescending, sometimes even abusive stance contrasted with other versions of the character, and who was considered by members of the original message boards to constitute a different character entirely.

And in 2015, during the early months of the US presidential campaign, Smug Pepe would be the first to gain widespread attention: images of Pepe as Donald Trump, peering through a border wall at Mexican immigrants, were circulated among his supporters, and then-candidate Trump personally re-tweeted an image of himself as a Smug Pepe behind a presidential podium.

By early 2016, Pepe had allegedly become the subject of a concerted effort to “reclaim Pepe from the normies” through the creation of aggressively anti-Semitic versions of the character. This new connotation of the meme quickly gained traction, both through media coverage and through the use of Pepe memes to harass Jewish journalists. Shortly before the 2016 election, Pepe’s use in an image shared by Donald Trump, Jr. caused considerable controversy amid widespread mainstream coverage of the character as a specifically white-nationalist symbol. Despite a variety of efforts on the part of both the Anti-Defamation League and Furie to divorce Pepe’s identity from the alt-right, the association persisted through the French election in 2017. Pepe’s association with far-right French politician Marine Le Pen (as “Pepe Le Pen”) appeared to be the final straw for Furie, who had the character killed off in a book given out for free shortly after Le Pen’s unsuccessful presidential bid.

In many ways, Pepe’s murky trajectory from Internet subculture icon to mainstream meme to alt-right avatar exemplifies the complexity of assessing speech in the social media sphere: when Donald Trump Jr. tweeted an image of “The Deplorables” featuring a Pepe character several months after Pepe’s purported conversion to an anti-Semitic trope, was he simply a vector for misinformation, unaware of the character’s associations? Or was the image meant as a dog-whistle to anti-Semitic supporters?

An image tweeted by Donald Trump Jr. of Pepe as one of his dad’s team of “Deplorables”

Similarly, when Ron Paul tweets an openly anti-Semitic update of a fundamentally anti-Semitic trope and places the blame on a “staffer,” how are we to assess the “real” message being sent?

This racist and anti-Semitic cartoon is itself a composition of other memes that are easily recognizable to certain segments of the internet. Even the false attribution to conservative cartoonist Ben Garrison is a nod to the somewhat more subtle — but equally problematic — tropes that his work often invokes.

What can we do?

The advent of social media as a means for politicians to speak directly to a multitude of “publics” with the click of a button poses difficult questions about how that speech should be assessed, especially given the pernicious potential for a post-and-retract cycle to “play” to all sides. On the one hand, knowledge and use of the anti-Semitic meme is seen by its proponents as an inherent validation of both their views and their efforts to make those views more visible in the mainstream. On the other, the convenient (even if true) scapegoat of an unnamed “intern” or “staffer” being responsible for the offensive content does little more than taint the politician as being “unsavvy” or “out of it.” The result is a dangerous potential for cynical politicians and other public figures to speak — or tweet — “out of both sides of their mouths.”

In this sense, these memes fulfill the ostensible purpose of the far-right groups that compose and promote them: namely, acting as “‘gateway drugs’ to the more extreme elements of alt-right ideology,” as the Data & Society report frames it.

For this reason, there is an urgent need to develop cogent, proactive responses to the kind of anti-Semitic and racist content that emerges online, especially in the more difficult-to-parse format of image-based memes. While there are obviously no definitive methods for effectively identifying and responding to this material, these examples do suggest steps that may be useful for minimizing both the reach and the influence of these memes on social media.

First, as a journalist, I believe there is an essential role for the press to play in carefully weighing the potential amplifying effect that certain types of media coverage can have on otherwise fringe ideas. As data scientist and social media researcher Gilad Lotan pointed out after the 2016 presidential election, the mainstream media “picking up” on what were otherwise simply online phenomena — memes like “Hillary’s health” or “Pepe the Frog” — had both an amplifying and a compounding effect on misinformation and disinformation alike. Was “Pepe the Frog” truly a white nationalist symbol by early 2016? Certainly, he was the protagonist of many anti-Semitic and pro-Nazi images, but the vast majority of these were almost certainly created and curated by a handful of individuals. Nonetheless, media coverage of Pepe’s “transformation” brought those ideas to the fore of a national discussion, giving credence and legitimacy to both the ideas themselves and to the individuals who put them forward.

Whether or not Pepe started 2016 as a white nationalist symbol, he ended it as one, giving those wishing to associate other events, ideas, and discussions with those ideas the power and platform to do so with as little as a single image.

Because of this, journalists and journalistic outlets in particular have a responsibility to not just thoroughly research the context of memes, but to be especially skeptical of those claiming authoritative knowledge of a given meme or of meme culture. From a practical perspective, the message boards from which many memes emerge are both ephemeral and intentionally designed to obfuscate the identities of posters, meaning there is no real mechanism for determining whether any particular thread is populated by more than a few unique individuals.

Journalists would therefore do well to apply the same verification practices to memes as are now considered best practice with other types of user-generated content: seek out the original poster or creator of the image, use reverse-image search tools to determine the provenance of the image and, most importantly, always be wary of sensationalized claims that are likely designed with the explicit intent of co-opting your attention, your coverage, and your credibility. For example, the “Ben Garrison” cartoon that a staffer allegedly posted to Ron Paul’s Twitter account is itself a compilation of racist and anti-Semitic tropes that was originally posted to a 4chan message board where these caricatures frequently appear. While the primary image is drawn from an anti-imperialist cartoon created by Carlos Latuff (whose intense critiques of Israel have themselves been criticized as anti-Semitic), the faces superimposed on the original represent racist stereotypes that are commonplace on 4chan’s “politically incorrect” (/pol/) board (while cartoonist Garrison has asserted that such collages are the result of “trolling,” his own views don’t appear to stray far from those of his imitators; unlike Furie, Garrison does not appear to have filed suit over these “defacements”). Thus, no matter who actually authored the Ron Paul tweet, the message is clear: someone on Paul’s staff frequents very particular corners of the internet, where similarly anti-Semitic and racist messages are commonplace.

Further, both journalists and the general public would do well to demand more of politicians who choose to use social media as a means to communicate with the public, and should be wary of dismissing as simple “gaffes” any message with anti-Semitic or hateful overtones. As work by the Columbia’s own Knight First Amendment Institute has made clear, politicians who use social media to communicate with the public can and should be held fully responsible for their use of those platforms, and we should not accept convenient disavowals for social media speech with the excuse that our elected officials simply “do not know better” — if they wish to wield the power of social media and memetic speech, then they must be responsible for the way they use it as well.

To dismiss a particular post online as simply the work of a “staffer” or “intern” — and then quickly delete the evidence — not only calls into question the very credibility of a politician’s other statements, but is evidence of a dangerous proclivity for rewriting history. I suggest that rather than allowing ourselves to accept as representative only the statements that politicians themselves prefer, we demand that they account thoroughly and publicly for the mechanism by which blatantly anti-Semitic content “finds its way” into their feeds. Just as we should not mistake a handful of people online for multitudes, we should not allow one person to claim many voices for the sake of political expediency.

While in many ways the image I have drawn of the social media landscape may itself seem bleak, I do wish to highlight that, just as memes can spread messages that are divisive and hateful, they can also spread messages of support and solidarity. As some of you may know, in early 2016 a set of journalists was targeted with anti-Semitic posts and emails that placed their names inside a set of triple parentheses — a practice soon identified as a textual representation of the “echo” applied to Jewish names within an alt-right podcast. Although intended to intimidate, the meme was instead quickly co-opted by Twitter users who added the parentheses to their own names in a show of solidarity with those who had been targeted. With that, the parentheses quickly lost their power to intimidate, and instead became a symbol of support for all those who had been targeted.

And while not every anti-Semitic meme can be similarly dismantled, the episode is a cogent reminder that, like all messages of hate, such memes retain their power only to the extent that the rest of us let them go unanswered.

Susan McGregor is Assistant Director of the Tow Center for Digital Journalism & Assistant Professor at Columbia Journalism School, where she helps supervise the dual-degree program in Journalism & Computer Science.
