Anonymous communities can easily be mistaken for a thick mess of senseless social interactions. At least, that is how I saw this world when I first decided to study anonymous communities for my Master’s thesis. I thought I would study how surveillance operates in anonymous social media applications, specifically a very popular (at the time) application called Yik Yak.
I first downloaded the app a month before I decided to dedicate two years to it; my roommate had convinced me to check it out. What greeted me was a seemingly infinite central feed of anonymous comments, sorted by a slurry of up-votes and down-votes. The Yak feed is tied to a geolocational system that connects the app to particular locations. My Yak was the Queen’s University Yak. It was a busy feed, and it was constantly changing. To me, it seemed a chaotic and nebulous tangle of associations. A fun challenge for a scholar following an Actor-Network-inspired philosophy.
Popular posts stood out from unpopular ones through the upvote/downvote feature. It was something like a mashup of Twitter and Reddit with a touch of anonymity.
After a stint of digital ethnographic work and a ton of interviews with enthusiastic and committed users, I began to see something else. Something that, as an outsider, was invisible to me at first. There was an elaborately balanced Yik Yak community. As Gary T. Marx asserts, anonymity is entirely a social process. The only way for anonymity to occur is through a faceless interaction with another faceless person. This includes social regulations, exploitations, and oppressions. But also playfulness and a culture of care.
I would like to play with a concept I’m thinking of called (a)social. ‘a’ can be used as a negation. ‘a’ can also be used to represent anonymity. But mostly, ‘a’ will be used to approach a society which remains almost entirely faceless. A community of people interacting around nothing more than posts from people who occupy similar space and share similar cultural values.
Though I have major problems with the corporate side of Yik Yak, with its capitalist motives and try-hard branding schemes, the application has facilitated the construction of an elaborate community. It has created an (a)social experiment. It is a community that contains both a culture of trolling and a culture of care.
All things are a collective endeavor. The (a)social communities are no exception. In her most recent philosophical publication, Staying with the Trouble, Donna Haraway discusses her concept of sympoiesis: a collective unfolding of reality. This collective includes everything: all the human, inhuman, and nonhuman components threaded into the collective mess.
When we load up Yik Yak on our mobile phones and post snippets of thought to the main feed (or engage in grueling arguments over all controversies in the comments), we work with silicon, wires, code, telecommunication companies, algorithms, molecules, humans, bots, and entire scaffoldings of bureaucracies, legal frameworks, and governments. Interacting with the Yak spans the world over.
Furthermore, the Yak’s platform allows particular functions and blocks others, shaping its users to interact in particular ways. Yik Yak imposes standards through its Code of Conduct, which it enforces with algorithms scanning for offensive keywords. And it sometimes changes everything in an update (as when it removed its main feature, anonymity). These are the institutional forces that shape and provide stability to the community.
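To make the institutional side concrete, here is a toy sketch of what automated Code of Conduct enforcement might look like. The word list and function are my own invention for illustration; Yik Yak’s actual filtering system is not public.

```python
# A toy content filter: scan each post for banned terms.
# BANNED_TERMS is a placeholder list, not Yik Yak's real one.
BANNED_TERMS = {"offensiveword", "anotherslur"}

def violates_code_of_conduct(post: str) -> bool:
    """Flag a post if any of its words appear on the banned list."""
    return any(word in BANNED_TERMS for word in post.lower().split())

print(violates_code_of_conduct("A harmless yak about campus food"))  # False
```

Real keyword filters are of course far more elaborate (and far easier to evade), but the principle is the same: a string of code, not a human, makes the first call on what counts as offensive.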
However, I have noticed something more powerful at work in maintaining the community. The mess of interactions among users balances out particular norms and ways of acting. This happens through both the comments section and the up-vote/down-vote feature. These are the vernacular forces that generate norms and cultures. Certain topics, often offensive ones, are down-voted (a score of -5 deletes the comment from the feed). This vernacular power, though institutionally enabled, allows for the regulation of trolls and bullies without Yik Yak’s employees ever having to get involved.
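The vernacular side can be sketched just as simply. The -5 deletion threshold is the one described above; the class and method names are my own illustration, not Yik Yak’s actual code.

```python
# Community moderation by vote: a score of -5 removes the post.
DELETE_THRESHOLD = -5

class Yak:
    """A single anonymous post on the feed (illustrative only)."""
    def __init__(self, text):
        self.text = text
        self.score = 0
        self.visible = True

    def vote(self, delta):
        """Apply an upvote (+1) or downvote (-1) from a community member."""
        self.score += delta
        if self.score <= DELETE_THRESHOLD:
            # The community, not a moderator, removes the post.
            self.visible = False

post = Yak("a troll comment")
for _ in range(5):
    post.vote(-1)
print(post.score, post.visible)  # -5 False
```

Five strangers, each pressing a down arrow, accomplish what a content-moderation team never could at that speed. That is vernacular power in a dozen lines.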
(a)social sympoiesis initially looks like a senseless and dense knot of relations. It’s noisy and confusing. But once, as an ethnographer, you begin the arduous work of untangling these associations, it starts to look like every other community. Despite all of the contradictions, the arguments, the controversies, and the confusing faceless interactions, the Yak community is able to balance out, stabilize, and “hang together” as a coherent whole.
Such an (a)social collective is not shielded from the larger world, however. When, for whatever reasons or motivations, Yik Yak decided that its users didn’t want to be anonymous and forced every user to adopt a handle (and suggested they link their Facebook page), the entire community collapsed. All that is left are groups of Yak “refugees” with nowhere to go but to be visible to the world.
Let’s see some examples. First, from Google’s privacy policy:
“We collect information to provide better services to all of our users – from figuring out basic stuff like which language you speak, to more complex things like which ads you’ll find most useful, the people who matter most to you online, or which YouTube videos you might like”.
Let’s look at Facebook.
“We give you the power to share as part of our mission to make the world more open and connected. This policy describes what information we collect and how it is used and shared. You can find additional tools and information at Privacy Basics”.
Another example of the good intentions of Facebook.
“We work with third party companies who help us provide and improve our Services or who use advertising or related products, which makes it possible to operate our companies and provide free services to people around the world”.
What is missing is the way these companies use user data to swing huge profit margins. This is arguably the most important factor that transparency is supposed to address. Both Facebook and Google, the two Silicon Valley tech giants, claim that they do everything to better serve their customer base; however, their intentions are geared more towards data monetization.
However, this does not mean that such data can’t be re-identified (for a more in-depth explanation, check out this revealing paper). It is often said in the surveillance studies community that metadata is more revealing than data. Metadata isn’t merely unidentifiable and arbitrary. If it were, why on earth would it be treated like the gold of the digital age?
Though Google and Facebook are certainly important and high quality tools, these issues are still exceedingly problematic and must be addressed. One reason we should be concerned is that we rely on these social media tools several times a day to maneuver through our social and cultural lives.
It isn’t just a corporate product anymore; it is an indispensable piece of social/cultural capital. Don’t believe me? Try to quit Facebook and/or Google—I bet you can’t. It is very difficult to engage productively in our communities without using what these corporations have to offer. These corporations are silently bleeding us dry of our privacy in order to establish high profit margins. And I am willing to bet that most people don’t know the extent of all of these injustices.
The more privacy we allow these corporate entities to take from us, the more they will push to widen the gaping hole in our already transparent lives. This can be a terrifying prospect for the politics of freedom of speech and expression. You may not be hiding something now, but we all hide something eventually. And we have a right to do so. For example, the right to obscure your sexuality. This is a touchy subject, as the homophobic and heteronormative ideologies that lead to hate crimes tend to fluctuate. One day you are accepted for who you are; the next, you might be beaten to a pulp or socially stigmatized. You may lose your job. Your house. And in some terrible cases, your life. We need the right to hide. And we need the right to choose anonymity.
So what now? Don’t be so complacent. The tech giants are simulating transparency without telling the whole story. They know very well that if their surveillance capacities are pulled into the spotlight, they will be forced to change. But they also know very well that no one reads these documents anyway. And because no one reads them, they are left quite untouchable.
Technology and its surveillance capacities are constantly changing and improving. Our laws and policies, however, are not keeping up. That is why it is so important to speak out and inform those around you: as academics, as citizens, and as consumers.
Pokémon Go is lulling the world into a humongous augmented distraction. A distraction that is covering up some pretty intense politics. It is almost as if we have stepped into Ernest Cline’s Ready Player One, where distraction through virtual reality meets the war between anonymity and surveillance.
It has been well publicized that this new app, which is fueling a Pokemania (a nostalgic resurgence of interest in Pokémon every time a new rendition of the game is released), has some rather arbitrary and invasive access to your mobile phone’s data, particularly unhinged access to your Google account and other features of your device.
What is Pokémon Go? The question almost seems pointless now, given the popularity of the game, but for those of you who have not tuned in to the Pokemania: Pokémon was a TV show released in the late 90s that became dream fuel for a generation of children and young adults. It featured a young boy, Ash Ketchum, who embarked on a journey to capture Pokémon in a technology known as the “pokeball” under the direction of the Professor (a man who studies Pokémon). After a Pokémon is caught, the young boy (and the thousands of other Pokémon trainers) would aspire to train it to battle other Pokémon.
Shortly after the show caught on, Nintendo released Pokémon Red and Blue for the Game Boy. These games became an absolute hit. I remember walking to school with my eyes glued to my little pixelated screen, traversing roads and dodging cars while battling Pokémon and trading them with my schoolyard peers. The game’s slogan repeated through my cranium: “Gotta Catch ’Em All”.
Nintendo has continued to release Pokémon games for its various platforms up to the present. Each successive release sparked an obsessive and nostalgic excitement that took over the gaming community, or anyone who had grown up playing Pokémon Red and Blue and collecting the Pokémon cards.
Pokémon Go is a game played on a smartphone that uses geolocational data and mapping technologies to turn the phone into a lens peering into the Pokémon world. Through the interface of your mobile device, you can catch Pokémon wandering the “real” world, battle through gyms, and find items that will aid your journey. It augments the world around the user so that everything and everywhere becomes part of the game.
Just like its predecessor, a game known as Ingress, many of the geo features in the game were set up around important places: art exhibits, cultural or historical sites, and parks. Following the maps would lead you through a productive tour of a city’s geographical culture.
I want to explore the obsessive and nostalgic excitement through a techno-socio-cultural lens. I will unpack this critique into three parts: (1) the sociology of privacy, (2) Big data and algorithmic surveillance, and (3) the culture of nostalgia and the digital sublime.
Before I continue with this post, I want to assert that this is not an altogether terrible, megalomaniac, Big Brother-type game. Pokémon Go is enabling new ways for people to engage in the social world. Check out this sociological blog post exploring just that. However, it would be silly not to apply a critical perspective.
There are some caveats I’d like to apply to my analysis: (1) Pokémon Go is not an immature or irrelevant activity; millions of people of all ages and cultural backgrounds are playing it, meaning it has a ton of significance. And (2) the people playing Pokémon Go are not zombies or passive consumers; they are intentional and unpredictable social actors who have the ability to understand their situation.
Sociology of Privacy
One thing that boggles the minds of surveillance studies scholars is how the vast population of people using social media and mobile applications does not care about the invasive surveillance embedded in everything they use.
In my own interviews with Facebook users in 2014, many participants claimed, “I have nothing to hide”. This is a pervasive mentality that enables big corporate and governmental entities to gain access to and control over large swaths of data. This nonchalant attitude cedes massive ground in the dismantling of our rights to privacy. Such an attitude is not surprising, though, as the entire ecosystem of social media is set up to surveil.
David Lyon, in his book Surveillance After Snowden, asserts that privacy is generally seen as a natural and democratic right that should be afforded to all citizens, but admits that the problem is that informational privacy is not as valued as bodily or territorial privacy. This is so even though information, data, and metadata are much more revealing than both bodily and territorial surveillance.
Lyon notes three important points about privacy that are all very relevant to the current epidemic of Pokemania: (1) the collection of information is now directly connected to risk mitigation and national security, implying that we are not safe unless we are surveilled; (2) everyone is now a target of mass surveillance, not just the criminal; (3) data collected through mass surveillance is used to create profiles of people, profiles that may be completely inaccurate depending on the data collected, though you will never know the difference.
I would like to add a fourth: how can the data be used to swing massive profits? Niantic, the corporation behind Ingress and Pokémon Go, uses its privacy policy to legitimate the “sharing” (read: selling) of data with governments and third-party groups. Government surveillance is often the focus of criticism, but capitalist corporations are rarely held accountable to ethical practices. Who is selling this data? Who is buying it? And what is this monetized data being used for?
As Lyon asserts, privacy is not just an individual concern; it is important socially and politically for a well-balanced democracy. Edward Snowden has been known to say, “It’s not really about surveillance, it’s about democracy”. While we continue to allow powerful groups to chip away at our privacy for entertainment, we give up our ability to criticize and challenge injustice.
Snowden reminds us that when we give up our democracy to the control room—there is zero accountability, zero transparency, and decisions are made without any democratic process.
So while we are distracted trying to catch a Snorlax at the park, we are giving away more and more of our lives to mysterious and complicated groups that want nothing but large profits and control. For a much more scathing review of this, see this blog post on surveillance and Pokémon.
Big Data and Algorithms
So what about the data? What is big data? First off, it’s all the rage right now, as data scientists, social scientists, policy makers, and business gurus scramble to understand how to use, abuse, and criticise such a thing. Big data sits at the intersection of two large disciplines, statistics and computer science. It is the collection and analysis of unthinkably large amounts of aggregated data, carried out largely by computer software and algorithms.
Boyd and Crawford (2012) offer a much more precise definition. They assert that Big Data is a “cultural, technological, and scholarly phenomenon” that can be broken into three interconnected features:
Technology – Computer science, large servers, and complicated algorithms.
Analysis – Using large data-sets compiled from technological techniques to create social, political, cultural and legal claims.
Mythology – The widespread belief in the power of Big Data to offer a superior knowledge that carries immense predictive value.
The big problem that remains is how to find, generate, and collect all of this data. In terms of social media and video games, much of this is done by offering a “free” service to consumers, who take on the role of the “prosumer”. The prosumer is a social actor that both produces and consumes the commodity they are “paying” for.
In social media (like Facebook), as users interact with each other, they produce affective or emotional data through liking, sharing, and discussing things, which is then collected by algorithms and fed back into the system through targeted advertisements. The user is implicated in both the production and consumption of that data.
The user is given free access to the social media platform; however, they pay for it by giving the platform a transparent window into their lives, which is then monetized and sold for large profits. People’s reactions to this form of surveillance vary: some offer scathing criticisms, others don’t give two shits, and some act a little more cautiously.
Why is this important for Pokémon Go? Because you trade your data and privacy for access to what Pokémon Go has to offer. It is incredibly clever of the think tanks at Niantic: using the nostalgic Pokemania to usher users into consenting to ridiculous surveillance techniques.
It gets worse. As Ashley Feinberg at Gawker identified, the people behind Niantic have some shady connections to the international intelligence community, causing some in the surveillance studies field to fear that Pokémon Go might just be an international intelligence conspiracy (it sounds crazy, but it makes complete sense).
David Murakami Wood coined the concept of “vanishing surveillance”: a phenomenon, intentional or unintentional, in which the surveillance capacities of devices fade into the background, leaving users unaware, or at least not completely aware, that they are being watched. Pokémon Go, an innocent video game that enables new ways of being social in public, becomes an invisible surveillance device that may have international and interpersonal consequences. And it is the Pokémon themselves that allow the surveillance to vanish from sight and mind.
A Culture of Nostalgia
So what drives people to consent to all of this? What kinds of cultural patterns shape us into an almost fanatical state when a Pokémon game is released?
The first factor within the culture of Pokémon is its appeal to nostalgia. Jared Miracle, in a blog post on The Geek Anthropologist, talks about the power of nostalgia. It taps into the childhoods of an entire generation, and it even moves outside the obscure boundaries of gamer culture into the larger pop-cultural context. It wasn’t only geeks who played Pokémon; it was just about everyone. This might explain why so many people are wandering around with their cell phones held before them (I saw them wandering around Queen’s campus today, while I was also wandering around).
However, it is not all about nostalgia. I believe that nostalgia plays a role in a bigger process: the digital sublime and the mythologizing of the power of media.
What is a mythology? Vincent Mosco, in his book The Digital Sublime, defines myths as “stories that animate individuals and societies by providing paths to transcendence that lift people out of the banality of everyday life”. This is a form of reality that represents how people see the world from the perspective of everyday life.
Myths are also implicit in power. “’Myth’ is not merely an anthropological term that one might equate with human values. It is also a political term that inflects human values with ideology… Myths sustain themselves when they are embraced by power, as when legitimate figures… tell them and, in doing so, keep them alive”.
These myths, along with nostalgia for Pokémon paraphernalia, generate the digital sublime: a phenomenon that has us going head over heels for new technology. The mythologies that support it can be positive or negative.
Positive mythologies might sound a little like this: “Pokémon Go is allowing us to leave our homes and experience the world! We meet new people and we are empowered by new ways of interacting with each other. Hurrah!”.
Negative mythologies are also important: “Pokémon Go is creating a generation of zombies. People are wasting their time catching those stupid Pokémon. They are blindly and dangerously wandering around, falling off cliffs, and invading private property. Damn those immature assholes”.
Both of these mythologies cross over each other to colour the experiences of those who play and those who watch.
We need to be careful of generating mythologies about the capacity of games to facilitate freedom, creativity, and sociality. We also need to be careful not to apply too much criticism. Such mythologies not only create a basic, overly simplistic way of understanding gaming, surveillance, and human culture; they also blind us to nuance and detail that may be important to a broad understanding.
Drawing things together—A Political Economy of Pokémon
Taking a techno-socio-cultural perspective allows us to engage with Pokémon Go with a nuanced understanding of its positive and negative characteristics. It is possible to look at how this media creates a complex ecosystem of social concerns, political controversies, and cultural engagements with nostalgia, mythologizing, and capitalist enterprise.
Pokémon Go is indeed enabling a ton of new ways of interacting and helping people with mental illness get out of their homes to experience the world. However, we can’t forget that it is also an advanced technology developed by those who have interests in money and power.
Regardless of the benefits that are emerging from use of this application, there are still important questions about privacy and the collection and use of Big Data.
So Pokémon Go isn’t just enabling new ways of being social with the larger world. It is enabling new ways of engaging with issues of surveillance, neo-liberal capitalism, and social control through the least expected avenues.
After all of these problematics become more and more public—will we still trade off our freedom for entertainment?
We have all likely heard of the panopticon: an architectural design for a prison, thought up by Jeremy Bentham, that was supposed to maximize surveillance capacities so that prisoners always felt as if they were being watched, even when they weren’t. It consisted of cells arranged around a central guard tower that could watch every move of every prisoner, all the time. The guard tower, however, was made to be opaque, so the prisoners couldn’t watch the guards.
In 1975, Foucault borrowed this idea to illustrate his concept of disciplinary power in one of his most famous books, Discipline and Punish. The basic idea behind Foucault’s use of the panopticon is that when people feel as if they are constantly being watched, they begin to self-discipline. The panopticon can refer to a prison, but it is meant to refer to society in general, or to many of the institutions in a society. The more people feel that they are being watched, the better they act. This watching could come from authorities, or even from your neighbors.
Though Foucault’s concept of disciplinary power is hugely important to many who study sociological theory, his example of the panopticon is overused and often misleading. It does not accurately represent the nature of surveillance in contemporary society.
The idea of the panopticon better characterizes a society of “total surveillance”: a complete, balls-to-the-walls, 1984, Big Brother-type dystopia. Thankfully, there is currently no technology on earth that allows for total surveillance. We may be a society of ubiquitous surveillance, but not a society of total surveillance.
So how do we “move beyond the panopticon”, as so many social and cultural theorists have been calling for? One useful theoretical framework builds on Foucault’s work: the concept of the oligopticon, proposed by Bruno Latour amid the incisive arguments of Reassembling the Social.
Latour criticizes Foucault for drawing up a total surveillance “utopia” that is made of “total paranoia and total megalomania”.
“We, however, are not looking for utopia but for places on earth that are fully assignable. Oligoptica are just those sites since they do exactly the opposite of panoptica: they see much too little to feed the megalomania of the inspector or the paranoia of the inspected, but what they see, they see well…”
Latour is staunchly reminding us that something that is everything is nothing at all. The panopticon is made to be too perfect. It is made to see all. It is something that, as academics, we can’t possibly empirically record or understand. The oligopticon, by contrast, is the existence of countless scopes meant for watching. Countless surveillance devices. Only together do they see everything, and because they rarely communicate, this could hardly be called “total surveillance”.
“From oligoptica, sturdy but extremely narrow views of the (connected) whole are made possible—as long as connections hold. Nothing it seems can threaten the absolutist gaze of the panoptica, and this is why they are loved so much by the sociologist who dream to occupy the center of Bentham’s prison: the tiniest bug can blind oligoptica”.
However, this does not entirely rule out the panopticon. As Kitchin and Dodge assert in their book Code/Space, the power of code and algorithms may someday unite many streams of the oligoptica to create a menacing panoptic machine. But due to the unstable nature of scripting code, running code, and working hardware, such a machine is liable to bugs, errors, and absolute mutiny. So don’t hold your breath.
The panopticon, for now, has its place—but it’s a more appropriate theme for a science fiction novel than a good work of social science or philosophy. It serves as a powerful reminder of where a ubiquitous surveillance society could lead us, but not as a very good characterization of surveillance today.
Tinder has become a near-ubiquitous dating app that has generated quite a lot of controversy even as it sits in most of our mobile phones.
While users swipe left and right searching for people they might be interested in, questions of legitimacy and authenticity in love, intimacy, and dating arise. I was motivated to write this blog post after a debate with a great friend of mine about the authenticity of love on Tinder.
Questions arose for me: If love is a social/cultural construct, is it authentic? What does Tinder do to change how love manifests itself? What has changed that we can’t right away see?
These tensions hang about within an atmosphere of heavy technical mediation that silently organizes, classifies, and structures how users interact with each other.
In an article for Fast Company, Austin Carr had the wonderful opportunity to interview Tinder CEO Sean Rad about the algorithm that measures user desirability and matches users to their equivalents: the ‘Elo score’ program.
Tinder users do not have access to their desirability rating; however, this rating does predetermine who gets to see whom. It brings together a large and heterogeneous set of data on users in order to measure their desirability. Users only see profiles that share a similar score.
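For the curious, here is a hypothetical sketch of how an Elo-style desirability score could work, modeled on the chess rating system the ‘Elo score’ is named after. Tinder’s real algorithm is a trade secret, so the function names, the K-factor, and the matching window below are entirely my assumptions for illustration.

```python
# A hypothetical Elo-style desirability score, modeled on chess Elo.
# Tinder's actual algorithm is secret; everything here is illustrative.

K = 32  # how quickly a rating moves after each swipe outcome (assumed)

def expected(rating_a, rating_b):
    """Expected probability that A receives a right-swipe from B."""
    return 1 / (1 + 10 ** ((rating_b - rating_a) / 400))

def update(rating_a, rating_b, a_was_liked):
    """Nudge A's rating after B swipes on A (1 = right-swipe, 0 = left)."""
    return rating_a + K * (a_was_liked - expected(rating_a, rating_b))

def can_see(rating_a, rating_b, window=100):
    """Only show profiles whose scores sit within a similar range."""
    return abs(rating_a - rating_b) <= window

alice, bob = 1200, 1400
# A right-swipe from a higher-rated user moves Alice's score up sharply.
alice = update(alice, bob, 1)
print(round(alice), can_see(alice, bob))
```

The sociological point survives the guesswork: whatever the real weights are, some function like `can_see` silently decides which faces ever appear on your screen.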
Algorithms are active mediators in the construction of networks of intimacy. I would like to call this ‘software mediated intimacy’. Before we explore this concept we need to understand the basics and histories of two concepts: actor-networks and intimacy.
On actors, networks and the silent power of algorithms
Actor-networks are a set of conceptual tools originally devised and made popular by science and technology studies scholars Latour, Callon, and Law.
The most fundamental tenet of this school of thought, sometimes referred to as actor-network theory (ANT), is the principle of generalized symmetry: the idea that humans and nonhumans (atoms, systems, computers, and code) are equal in their ability to shape the intentions of actors in networks of associations.
An actor, which can be human and/or nonhuman, is made to act.
This act is always done in relation to other acts.
So actors, together acting, create actor-networks. Humongous, unpredictable webs of associations.
This model of understanding society—through complex, overlapping networks of humans and nonhumans, is very useful for understanding how actors shape each other over social media platforms.
There are countless nonhumans working in sprawling actor-networks that shape, structure, and mediate how we interact with others online. Most of this remains hidden within our electronic devices. Strings of code and wires affect us in ways that are sometimes more meaningful than our interactions with humans.
Some ANT scholars call this the black box: a vast actor-network becomes so stable that it is seen to act as one, and it becomes a node in other, more expansive actor-networks. Think about how many black boxes exist in the many components of your mobile devices.
I would go as far as to say that most Tinder users are unaware how much their romances are mediated by algorithms. These algorithms are typically trade secrets and the musings of science and technology scholars, not quite specters in the imaginations of users.
On the social construction of love and intimacy
Love is not universal; it is a complex set of changing associations and psycho-socio-technical feelings tied to space, time, and historical circumstance.
Anthony Giddens, in his book The Transformation of Intimacy, demonstrates the social and cultural influences that actively construct how we understand, perceive, and pursue love and intimacy.
In the past, in the pre-modern era, relationships were forged through arranged marriages mainly tied to economic priorities. This is something that Foucault, in The History of Sexuality, called the deployment of alliance: the creation of social bonds and kinship alliances that shaped the distribution of wealth and ensured group and individual survival.
In a complicated move that I am unable to detail here, the deployment of sexuality emerged and changed the very nature of the family. It also led to the love and intimacy we know today: a social bond that is not necessarily tied to the family unit.
Giddens follows: as Western societies began to experience the characteristics of modernity, intimacy and love began to emerge alongside it. Romance emerged when the family institution of premodernity collapsed.
Why am I talking about this? I invoke the work of Giddens and Foucault to illustrate that love and intimacy are constantly shifting social and cultural constructs. Every generation experiences love and intimacy differently.
This is probably the reason why your parents scowl about your one night stands and the fact that you haven’t given them grandchildren yet.
Love and intimacy are going through a massive shift right now.
‘Software mediated intimacy’—algorithmic love
In his essay ‘A Collective of Humans and Nonhumans’, Latour describes our progression into modernity as a deepening intimacy between humans and nonhumans: an integration between humans and their technology, a folding of all sorts of intentions and agencies.
As humans become further enveloped in their associations with mobile devices and ubiquitous social media, most of our interactions become mediated through strings of code, complex algorithms, and digital hardware.
In the case of Tinder, the algorithms mediate and structure how we meet and communicate with people who may become love interests. Of course, these algorithms don’t exactly control such love. Human actors enter into associations with the Tinder actor-network with all kinds of intentions and expectations.
Some actors want a fling, a one night stand. Others might want a long-term relationship. Others may even just want attention or friendship.
So even though Tinder is sorting who gets to talk to whom, heterogeneous and constantly shifting clusters of intentions continually shape how Tinder-based relationships will turn out.
And it’s not perfect. Love and intimacy falter and wilt just as much as they blossom.
But that’s not my point. ‘Software mediated intimacy’, like all the forms before it, is another kind of socially and culturally curated love and intimacy. So it’s not authenticity that is the primary question here. All forms of love emerge from some degree of construction or performance. There is no universal standard.
However, what is different, and what often goes unseen, is the degree to which love can be engineered. Computer scientists and engineers, at the behest of large (or small) corporations, embed particular logics and intentions into the algorithms they construct.
As Dodge and Kitchin remind us, in their book Code/Space, this is not a perfect product of social control—but a constant problem to be shaped. Even so, it is disconcerting that so many users are being shaped by silent human and nonhuman mediators.
Tinder’s algorithm and its ‘Elo Score’ are trade secrets, opened up neither to the public nor to the academe. So I am left scrambling for an answer that only raises more questions.
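Because the real algorithm is secret, any reconstruction is speculative. Reporting has suggested the score was inspired by the Elo rating system from chess, so as an illustration only, here is the classic Elo update. The numbers (the 400-point scale, the K-factor of 32) are standard chess conventions, not anything Tinder has confirmed:

```python
# Hypothetical sketch: Tinder's actual 'Elo Score' is a trade secret.
# This is the textbook Elo update, treating a right-swipe as a "win".

def expected_score(rating_a: float, rating_b: float) -> float:
    """Probability that A 'wins' (is swiped right on) against B."""
    return 1.0 / (1.0 + 10 ** ((rating_b - rating_a) / 400))

def elo_update(rating_a: float, rating_b: float, a_won: bool, k: float = 32.0):
    """Return both ratings after one interaction between A and B."""
    expected = expected_score(rating_a, rating_b)
    actual = 1.0 if a_won else 0.0
    delta = k * (actual - expected)
    # A's gain is B's loss, so total rating is conserved.
    return rating_a + delta, rating_b - delta

# A lower-rated user approved by a higher-rated one gains more points.
new_a, new_b = elo_update(1200, 1400, a_won=True)
```

The point of the sketch is structural: whatever the real formula, some hidden score is silently deciding whose profiles are shown to whom.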
‘Software mediated intimacy’ can offer us a novel way to construct relationships, whether temporary or sustained. It is a form of social interaction just as authentic as prior forms of love and intimacy. However, the codes and algorithms, and the complexities of computer science and statistics, make it overwhelmingly easy for corporations to shape users based on private interests.
We must learn how these processes work and how they might be democratized, so that the user may benefit as much as the capitalist who produces and hoards such software. We must embrace ‘software mediated intimacy’ in order to learn how it works, so that we might sidestep the potential exploitation seeping through our engineered social bonds.
Note: This blog post will be followed by a more in-depth analysis of this understanding of Intimacy. I am seeking to expand this into a larger, theoretical project. Any comments, criticism, or thoughts would be incredibly useful.
The Affinities by Robert Charles Wilson is a work of science/speculative fiction about a man named John Fisk, who gets caught up in the whirlwind of a new, emerging social order.
The affinities, a set of 22 technologically and scientifically engineered social groups, were created by Dr. Meir Klein, a social psychologist who discovered the ‘socionome’ through a fictional scientific field called ‘teleodynamics’.
Klein names twenty-two affinity groups, all reserved for those who are preselected to belong through rigorous psycho-socio-technical testing.
Neurological, psychological, and sociological data are collected from social actors and entered into a computer program whose advanced algorithm sorts individuals into different affinity groups based on essential qualities over which the actor has no control.
John Fisk is chosen for Tau—one of the largest affinities.
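Wilson never specifies how Klein’s program works, but the sorting it performs resembles real clustering algorithms. A minimal sketch, assuming (my assumption, not the novel’s) that the psycho-social testing reduces each person to a numeric feature vector and each affinity to a centroid profile:

```python
import math

# Hypothetical sketch: nearest-centroid assignment, the core step of
# k-means clustering. A person joins the closest affinity profile, or
# none at all if no profile is close enough (the novel's "outsiders").

def distance(a, b):
    """Euclidean distance between two equal-length feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def assign_affinity(person, centroids, threshold):
    """Return the name of the nearest affinity, or None (outsider)."""
    best_name, best_dist = None, float("inf")
    for name, centroid in centroids.items():
        d = distance(person, centroid)
        if d < best_dist:
            best_name, best_dist = name, d
    return best_name if best_dist <= threshold else None

# Toy profiles on two invented axes (say, empathy and risk tolerance).
centroids = {"Tau": (0.8, 0.2), "Het": (0.1, 0.9)}
print(assign_affinity((0.7, 0.3), centroids, threshold=0.5))  # Tau
print(assign_affinity((0.5, 0.5), centroids, threshold=0.1))  # None
```

Note how the threshold mechanically produces the book’s central injustice: anyone who falls between the clusters is simply rejected by the algorithm.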
As a preface to this review: Wilson has embedded many layers of academic and creative work into this novel. I am only able to focus on one particular dimension: algorithmically engineered social groups.
Once a person joins an affinity, they are drawn in by a psycho-social force that creates bonds stronger than family and a fierce loyalty akin to that of wolves.
The book begins by following John, who is suffering a sort of ambient existential crisis. He does not belong to his family, his school, or his friendships. He lacks an essential quality that most humans long for: belongingness.
Once he joins Tau and attends his first meeting, he is immediately hurled into a cult-like loving embrace with the other Tau members. A sense of belonging emerges so strong that I felt myself becoming overwhelmed by nostalgia for something I’ve never experienced.
Here is the central problem: how do you engineer social relationships to address one of the most prevalent symptoms of late modernity, loneliness and anomie?
Wilson’s story explores the power of computer algorithms and social technology to augment and engineer human interaction. Such social engineering has the potential to create a perfect group dynamic powerful enough to accomplish anything. Including bringing about a sense of purpose that dissolves the problem of social anomie.
Though ‘teleodynamics’ and the ‘socionome’ are works of fiction, similar algorithms already play a large role in our lives.
These are the algorithms that organize and structure how we use most social media applications. Facebook, Instagram, Twitter and Tinder utilize complex algorithms that mediate, and sometimes exert control, over our interactions with each other.
This phenomenon is ubiquitous. It’s everywhere, and it’s usually disguised, made invisible inside complicated electronic devices.
It’s easy to get bedazzled by the spectacle of affinity groups. The beginning of the story follows lonely John Fisk as he gets carried away by the social bonds created in Tau. He is no longer lonely. He has found a sense of belongingness that his broken family was unable to give him.
However, as the story progresses we get to see snippets of a dystopia burning away at the overall spectacle. Not everyone can get into an Affinity. After testing, many are turned away as their psycho-social profiles do not match any of the 22 affinity groups.
They are rendered outsiders. In a move that is reminiscent of eugenics—these outsiders are put into a seriously disadvantaged position as affinity groups ignore their concerns.
Twenty-two utopias are formed, and they close the door on anyone who doesn’t belong. Furthermore, the hyper-loyalty begins to generate inter-affinity tensions as the groups push against each other for supremacy.
We are left with a sense of anomie and a lack of belongingness: the very issues that the affinity groups, constructed by Klein, were supposed to alleviate.
There are many layers of interpersonal and intergroup conflict that emerge out of these tensions. The affinities fight secret battles against each other, not so different from gangs; the politicians of the state oppose the fiery emergence of affinity-based governance; and those who don’t belong assert their right to belong.
The affinities change everything from the most micro social interactions to the most macro global politics.
However, once this new form of algorithmically engineered social group takes root in social and economic infrastructures, it is here to stay. Everything is different. Eventually the technology for testing people for affinity groups becomes affordable and public.
Coders and social scientists begin to experiment with alternative affinities, or at the very least with innovative ways of using algorithms to structure relationships. And of course, like everything on the Internet, an open-source version emerges to deal with all those who don’t belong. Everyone left behind.
This project is led by an organization called New Socionome: a decentralized, open-source activist group trying to create access to affinities for everyone.
In the end, the social landscape is changed for good and it is uncertain what the future will look like. This book is a useful tool and a powerful story that engages with the developing tensions surrounding emerging algorithmic relationships that are silently shaping the lives of millions.
Wilson’s work also explores the existential issues of belongingness in a society where family bonds are fracturing and falling into a state of anomie. It depicts an existential angst that can be alleviated with technological assistance, though he is careful to show that such assistance is not a perfect, utopic solution.
Science and speculative fiction have an immense power to explore issues between humans and technology. The Affinities does a great job of this. Oftentimes, algorithms are silent mediators of our communication, so silent that no one seems to notice their prevalence. It’s stories like this that draw attention to a problem that has existed for over a decade.
I had a recent run-in with the public eye because of an op-ed I wrote for the Queen’s Journal: a controversial exploration of the limits of free speech concerning the ability of the (in)famous Queen’s Alive (anti-abortion) group to table and canvass for supporters on campus.
This led to a public debate and lots of discussion. It also led to a ton of mudslinging and attempts at public smearing.
I had also experienced this in the past, when another advocate for queer rights and I filed a human rights complaint against a magazine for publishing an unsavory article expressing a scathing hatred for queer folk (they referred to us as evil and pagan). I was waist-deep in understandably complex, multidimensional, and very contested discourse. These discourses led to unpleasant hate messages and full exposure in the provincial (and to some degree international) media.
That is not the topic of this blog post. What I would like to discuss isn’t the status of free speech or the unpalatable existence of hate speech. Rather, I am interested in the intense visibility that activists (and others) are exposed to through unpredictable new media interactions, interactions typically escalated and amplified by the Internet. This is a dimension of contemporary surveillance not conventionally covered by many academics. It is the subjects of surveillance that I would like to explore.
I find myself constantly confronted by a mentality that artifacts of popular culture are inferior to the strong pillars of Western intellectual culture (Shakespeare and Gatsby). I’ve encountered this from peers and mentors, in classwork and coffee shop discussions. In this blog post, I’m going to challenge the common assumptions that position popular culture as something less than intellectually stimulating, or worse yet, mere entertainment. I am not trying to say that other forms of art and creation (“high culture”) are bad; I quite enjoy Shakespeare and Gatsby. But I also love Star Wars, Rick and Morty, and The Hunger Games.

The Internet, and other developments in digital technology, have allowed for the proliferation of popular culture. The Internet and computer software have provided affordable media and methods for all kinds of people to create “things”. All kinds of “things”! Memes, amateur YouTube videos, blogs, creepypasta (amateur scary stories), and enormous catalogues of emotional responses in the form of animated GIFs.
“Folklorists, unlike literature scholars, or art historians, or music scholars, we don’t look to the productions of the rare geniuses of humankind as the only cultural products worth paying attention to. We look to other kinds of cultural productions, productions that I think make the state of our digital lives seem a little less dire… The problem is with the assumption that the collective works of Shakespeare is the only valid cultural output…”
Through studying, interpreting and understanding folklore, or the stuff and knowledge of everyday life, we get a pretty good illustration of how people interpret and understand the world around them. This is important for all kinds of reasons.
Brenda Brasher (1996), in her work titled ‘Thoughts on the Status of the Cyborg: On Technological Socialization and its Link to the Religious Function of Popular Culture’, observed that people are shifting from religion to popular culture as their source of everyday ethics. In this sense, more and more people are interpreting ethics through the Jedi philosophy of the Force rather than through the Bible. People construct complicated pastiches (or collages) of raw pop-cultural data to build their belief systems: snippets of ethics and norms taken from Hollywood blockbusters, 4chan, YouTube series, and an ungodly number of video games.
To ignore popular culture is to ignore this massive shift in how people understand the world around them. A great example of the power of vernacular popular culture and folklore is video games. To be frank, though there are amazing and powerful pillars of literature, I find myself struck with an overwhelming sense of catharsis when I play through a well-constructed video game. I’ve had oodles of discussions with friends who are willing to bracket off video games as an intellectual waste of time. However, such cultural artifacts are important to the aspiring digital folklorist simply because so many people play them. Furthermore, many people code them as well. Gamer culture spans a myriad of professional and independent games.
Just to demonstrate this, here are some statistics presented by the Entertainment Software Association at E3 (a big gaming conference) in 2015. 42% of Americans play video games regularly (at least 3 hours a week). The average age of a gamer is above 35 (so video games cross generations). Gamers consume more games than TV and movies. Consumers spent a grand total of 22.41 billion dollars on games in America in 2015. Video games are big! Lots of people use them, identify with them, and generate cultural groups around them. This is an eye-opener for a folklorist; it should certainly be an eye-opener for other social scientists.
Trevor Blank, a digital folklorist, observed in his introduction to digital folklore that “It bears noting that the fear of cultural displacement via mass culture is nothing new” (3). He demonstrates that following each innovation in media, cultural pundits criticized the emerging technology as destroying traditions and communication. They accused technological innovations of destroying the folk. However, another way of framing the changes brought about by these new forms of media is that they entailed new forms of folk. A problem with framing the media in overtly dystopic ways is that it creates a technological determinism that takes agency (choice) from those who participate in everyday life. These critics actually ignore the “folk” (and their practices) in their criticism. The vernacular has not disappeared into the heterogeneous mess of “mass” culture; it has changed form.
Blank explains, “New media technology has become so ubiquitous and integrated into users’ communication practices that it is now a viable instrument and conduit of folkloric transmission…” (4).
Folklore is the study of everyday life, and the digital has become a realm of everyday life. Cyberspace is just as important as actual space to the emerging generations of humans in consumer societies. Rich and poor, men and women (and trans* and queer folks), people of all races and ethnicities use the Internet in their everyday lives for all kinds of reasons. Popular culture in this context provides us with valuable new social contexts to study: new gateways into understanding human culture, society, and communication.
Blank, Trevor. 2012. “Introduction.” Pp. 1-24 in Folk Culture in the Digital Age: The Emergent Dynamics of Human Interaction, edited by Trevor Blank. Logan: Utah State University Press.
Brasher, Brenda. 1996. “Thoughts on the Status of the Cyborg: On Technological Socialization and Its Link to the Religious Function of Popular Culture.” Journal of the American Academy of Religion 64(4). Retrieved December 28, 2015 (http://www.jstor.org/stable/1465623?seq=1#page_scan_tab_contents).
Social media is neither good nor bad, though this doesn’t mean it’s neutral: it certainly has the potential to both exploit and empower. Nicole Costa’s rendition of her experiences and tribulations with Facebook in her recent article My online obsessions: How social media can be a harmful form of communication was incredibly touching. Her refusal and resistance to appearing on and contributing to the Facebook community is empowering. However, I believe it is also misleading. Social media and digital exchange are here to stay (save for some cataclysmic event that knocks out the electrical infrastructure), and because of this I believe we need to learn how to engage with them productively and ethically. We need to engage with social media in a way that doesn’t jump straight into a moralizing agenda, by which I mean illustrating social media as either the savior of humanity or a dystopian wasteland where communication collapses into self-absorbed decadence.
How do we maneuver through this politically charged, land-mine-addled cyberspace? First, we need to recognize that a great number (in the billions) of the human race use social media (of all sorts) for many reasons. But that is far too broad; let’s focus on Facebook. Facebook is among the most popular social media platforms, with over 1.5 billion users and growing. It is built into the very infrastructure of communication in the Western world. If you have a mobile phone, you very likely have Facebook. You might even use Facebook’s messenger service more than your text messaging. Facebook allows us to share information, build social movements, and rally people together in all sorts of grassroots wonders. As an activist, I’ve used Facebook to run successful campaigns. Why? Everyone uses it, and because of this it has the power (if used correctly) to amplify your voice. Facebook, and most social media, can be very empowering.
But hold your horses! Facebook is still terrifyingly exploitative. Its access to your personal and meta data is unprecedented. Furthermore, it actively uses the data you give it to haul in billions of dollars. Issues of big data and capitalism are finally coming to the forefront of academic and popular discussion, but the nature of such complicated structures is still shrouded in obscurity. The user sees the interface on their computer monitor, but Facebook sees electronic data points that represent every aspect of its users in aggregate. Through elaborate surveillance techniques, these data points are collected, organized, stored, and traded on an opaque big data marketplace. Furthermore, the user is not paid for their (large) contribution to the product being sold. They are exploited for their data and their labour, as everything you do on Facebook becomes part of the data that is commodified and sold.
At the same time Facebook (and other prominent social media platforms) allow for an unprecedented freedom and speed of communication. They have been embedded into our everyday ways of socializing with each other. New social media have become an invaluable and ubiquitous social resource that we engage in from the time we wake to the time we sleep. It has been used to organize events, rallies and protests. It is used to keep in touch with distant family and friends. It is used for romance, hatred, companionship, and debate. Facebook is playful and empowering.
So if you are like me, then you may be absolutely confounded about how to resolve the tension between Facebook (and other social media) being at once exploitative and empowering. We have gone too far down the rabbit hole of social media and digital communication to merely refuse to use it. It is now an intimate part of our social infrastructure. Those who resist through refusal may find themselves at multiple disadvantages in how they engage with the world. My own ethnographic research into why users refused Facebook showed that those who abandoned it may have felt empowered by overcoming the “addiction” of social media; however, they also felt excluded and alone. And it must be noted that almost everyone I talked to who had quit Facebook is now using it again. So clearly, refusal to use these services is not enough to meaningfully challenge the problematic features of social media.
The Luddites were historically textile workers opposed to the invasion of machines into their workplace, machines they figured would gouge away at their wages. Today, the term describes those who refuse to use certain technologies. In the realm of social media, Luddite resistance has proved incredibly ineffective. It is also important to note that this sort of refusal obscures ways of meaningfully resisting mass surveillance and the exploitation of user data.
I propose the complete opposite: the path of knowledge. We need to learn how to maneuver through social media and the Internet in ways that allow us access to anonymity, ways of asserting our right to anonymity. This is critical. We need to mobilize, teach, and learn through workshops. We need to scour the Internet for free resources on the technical workings of social media. We also need to spread awareness of this double-edged nature of social media. It is no use to take a stance of refusal, ignore the importance of social media, and thus remain ignorant of how it all works. When we do this, we actually empower these large capitalist corporations to exploit us that much more. The less we know about the calculus of social media and how it works at the level of algorithm, code, and protocol, the better able capitalists are to disguise and hide exploitation.
For those of us who have been reading science fiction for some time now, it becomes clear that SF has a strange propensity for becoming prophetic. Many themes from the science fiction classics are now used as overarching metaphors in mainstream surveillance studies. Most notable among these are Orwell’s Big Brother, Huxley’s Brave New World, and Kafka’s The Trial. Other common touchstones include Minority Report, Ender’s Game, and Gattaca.
Though I am not trying to claim that these classics aren’t good pieces of SF literature, they may not do a superb job of covering the issues implicit in contemporary surveillance. Imagine George Orwell coming to the realization that the Internet is one humongous surveillance machine with the power of mass, dragnet surveillance. Or imagine Huxley’s reaction to the lulling of consumer affect through branding and advertisement. The power of surveillance tools to control and shape large populations has become a prominent and dangerous feature of the 21st century.
As Richard Hoggart says,
“Things can never quite be the same after we have read—really read—a really good book.”
So let’s stop recycling old metaphors (if I read another surveillance book that references Big Brother or the Panopticon, I’m going to switch fields). Let’s look at the work of our own generation of writers and storytellers. What I think we might find is a rich stock of knowledge and cultural data that could offer some optics into our (post)human relationship with advanced technology.
The reason why I am using mixed media, as opposed to focusing on a singular medium, is that I believe that our relationship with media is not limited to one or the other. Novels, movies, video games, graphic novels and YouTube videos all offer us something in terms of storytelling. Part entertainment, part catharsis premised and constructed through the engagement with the story. Our generation of storytelling has shifted into the realm of mixed media engagement. What follows are some stories that I think are critically important to understanding the human condition in our own generational context.
P.S. They are in no particular order.
Disclaimer: though I have tried to be cautious not to forfeit any critical plot or character points, beware of spoilers.
SOMA is a survival horror video game released by Frictional Games, the developers of Amnesia (another terrifying game). It is a 2015 science fiction story that both frightens you and imparts an existential crisis as you struggle to find “human” meaning in the fusion of life and machine. After engaging in a neurological experiment, the main protagonist, Simon Jarrett, wakes up in an abandoned underwater facility called PATHOS-II. Instead of people, Jarrett finds himself trapped with the company of both malicious and benevolent robots, some of whom believe they are human. The interesting overlap with surveillance here is the focus on neurological surveillance. Scientists (in and out of the game) transform the biological brain into a series of data points that represent the original. From this, they hope to predict or instill behavior, or, in the case of this game, transform human into machine. This is done by literally uploading the data points of the brain, in aggregate, to a computer. The game instills a constant question: is there any difference between human consciousness and a copy of human consciousness? SOMA is more than just a scary game; it is a philosophical treatise on the post-human told through an interactive story.
Ready Player One
Ready Player One is a novel by Ernest Cline covering a wide breadth of themes, notably the uneasy relationship between surveillance and anonymity, visibility and hiding. Cline constructs a world that doesn’t seem very far off from our own: a world where people begin to embrace simulation through virtual reality (VR) as environmental disaster plagues the actual world. People hide in the sublime. The VR game OASIS, a world of many worlds, is home to many clever pop culture references, mostly music, video games, and movies, with an extra emphasis on science fiction. Embedded in this world of worlds are several “Easter eggs” (surprises hidden in video games) that act as a treasure trail to the late OASIS founder’s fortune and ultimate control over the virtual world. Anonymity is the norm in OASIS: a utopian world where the original, democratic ideal of the Internet is realized, a place where anyone can be anybody, without reference to their actual identity. However, this world is jeopardized as the corporation Innovative Online Industries is also searching for the Easter eggs, aiming to take over OASIS and remake it to generate capital. The theme of anonymity vs. mass surveillance for profit is arguably a major fuel for global debate, as all “places” of the Internet are surveilled in increasingly invasive ways. Anonymity has almost disappeared from the Internet, replaced with quasi-public profiles (Facebook and Google+) that exist to make billions of dollars off of people’s identities and user-generated content. The original dream of the Internet, sadly, has failed.
Nexus is a science fiction novel by Ramez Naam following characters engaged with a new kind of “nano-drug” that restructures the human brain so that people can connect mind to mind. There are those who support the drug and those who are against it, and this conflict unleashes a slurry of espionage that exposes the characters to incredible dangers. The theme of surveillance in Nexus reflects a new fixation on neuroscience: the ability to surveil the essential, bio-chemical features of the human mind, as well as to expose mind and memory to others participating in this new psychedelic (psychosocial) drug. This is a level of exposure that far supersedes our experiences with the Internet and social media. Imagine being hardwired into a computer network. The book also follows traditional surveillance themes as the main character, Kaden Lane, becomes entangled in a conflict between private corporations and state governments.
Social media in the 21st century has positioned Western society within a context of visibility and exposure. Most people are simultaneously engaged in self-exposure and participatory surveillance, as we post content about our lives and browse and read content about the lives of our friends and family. The Circle by Dave Eggers works this theme through a character named Mae Holland, who has just been hired by the world’s largest IT company, located at a place called the Circle. The Circle is a campus, much like a university’s, with literally everything on it. The place borders on utopia, a place where work and play blend. However, following the mantra “All that happens must be known”, social media penetrates the lives of those who exist in the Circle in pervasive and exposing ways. Very quickly, the utopic illusion slips away into dystopia.
Slenderman was, in bare skeleton form, introduced to the Internet by Eric Knudsen on the (in)famous Something Awful forum board for a paranormal photo editing contest. Within a year, however, Slenderman was sucked into a collective narrative construction across all media platforms. People blogged about it, tweeted about it, YouTubed about it. A massive, ever-changing (and unstable) urban legend (or fakelore) was constructed in the chaos of cyberspace. Slenderman, the paranormal creature, can be described as a tall man with unnaturally long arms and legs (and sometimes tentacles), wearing a black suit, with no face. It is usually depicted as a creature who watches, in other words surveils. It watches from obscure areas, slowly driving its victim to paranoia and insanity. Then the victim disappears without a trace. Slenderman is the contemporary boogeyman. But it also shares a narrative with dangerous, obscure, and mysterious secret police and intelligence agencies. As Snowden revealed to the public, governments, through mass surveillance techniques, watch everyone and everything. Could the Slenderman narrative be telling of a deep-seated cultural fear of government surveillance in the 21st century? There are many ways to tap into this story: blogs, Tumblr accounts, and Twitter accounts, but also YouTube series like Marble Hornets, EverymanHYBRID, and Tribe Twelve. Also check out the genre called creepypasta for an extra home-brewed thrill.