This is the third and final post in a brief, un-academic series about my personal experience of living in China’s troubled Xinjiang region, and the censorship both online and offline that it entailed. This functions largely as a final whimsical anecdote and a conclusion. You can read the background information here, and several other anecdotes from my time in China here.
I previously wrote about having my phone service shut down for using a Virtual Private Network to circumvent the ‘Great Firewall’ and use Facebook, Skype, and other foreign apps.
Well, eventually Pokémon Go was released, and several foreigners in my social circle downloaded and started playing it. Given that Pokémon Go relies on Google services to function, this was only possible by running the game through a VPN–the same kind that had gotten me shut down several months before.
Not eager to be an unwilling participant in a supposed clandestine mapmaking operation, but a childhood lover of Pokemon, I knew I had to get back online.
A friend helped me register my passport with a different cellphone carrier from the one that had shut me down, and I finally bought a new SIM card. By that time we knew I would be leaving China within a few months anyhow, so I went for broke and kept my VPN on 24/7. I didn’t end up getting shut down a second time, though it’s possible that if I had stayed it would have happened eventually.
What was curious to me was that while playing the game, I regularly found evidence of other players active in my area, despite having to use a VPN for it to work, and reports that it wasn’t supposed to function in China at all. One day I decided to use the in-game clues (active lure modules) to find others who were playing. After an hour of wandering from pokestop to pokestop, and setting a few lures of my own to draw out other players, I ran across three young guys in front of a movie theatre. It suddenly dawned on me that my Chinese vocabulary included exactly zero Pokemon terms. In the end I simply showed them my phone and smiled. They showed me theirs and laughed, and we all spent about ten minutes trying to get to an inconveniently placed pokestop.
I wish I could properly follow up on Pokémon Go in Xinjiang. The number of players I found evidence of there was initially surprising, but it shouldn’t have been. The Chinese are notorious for their zealous adoption of mobile games, and the restrictions on Pokémon Go were relatively easy to circumvent. I even had a ten-year-old ask me to recommend a VPN service one day after class.
I later learned that at that time Pokemon Go was unplayable even with a VPN in most of China, even in major cities like Beijing and Shanghai. But it was functioning well enough in Xinjiang, one of the more sensitive and closely-controlled regions. I never made sense of that.
I’ve now taken my Pokemon adventure (and the more mundane aspects of my life) out of China. But there are certain remnants of the surveillance and censorship apparatus that stick with you even outside the country.
When I visited my father over Christmas, for example, he picked me up from the airport and we went straight to a restaurant for breakfast. “What’s Xinjiang like, then? Do the people there want independence like in Tibet?*” he said. My stomach twisted and I instinctively checked the restaurant to see who might have heard. Of course nobody present cared.
(* – This is an oversimplification of the Tibet situation, but this post isn’t about that)
A Chinese Christian friend of mine related a similar experience she had: after years of fantasizing about boldly professing her religion, when she finally moved to America she simply could not feel comfortable praying without drawing the blinds first. Similarly, my girlfriend has physically recoiled once or twice when I spoke the name of a well-known Chinese dissident out loud in our thin-walled apartment. Every time she’s caught herself and said aloud “Oh, right. Nobody cares here.”
China is not Oceania; there is not really anything like thoughtcrime. But there are speechcrimes. And when certain things are spoken, especially in a full voice, you know in your stomach that those words could get someone in trouble if the speaker isn’t careful.
Before I moved to Xinjiang I had it in my mind that I might like to study Western China when I eventually return to school to pursue a Masters in Anthropology. But now I’m no longer certain that I can: as alluded to already, I met a wonderful woman in Xinjiang. We’ve been together for more than a year now, and we have since moved to the US so she can attend a graduate program. While we will certainly return to Xinjiang in the future, the continuing presence of her family there, as well as her Chinese passport, make me ever-conscious of the Chinese government’s attitude toward those who are critical. Even though I am against extremism of all kinds, and believe that independence would run counter to the interests of those living in Xinjiang, the caveats I would attach to those positions are likely unacceptable to the regime.
And so, perhaps even what I’ve written here is too much to say.
If you have questions or requests for clarification please don’t hesitate to comment below. And as a good friend regularly says: “Every day’s a school day,” so if you’d like to suggest a correction, or a resource or if you otherwise take issue with something I’ve said, please don’t hesitate to comment either. If there is interest, I would love to contribute to Socionocular again.
Let’s see some examples of how these companies describe their data collection. First, from Google’s privacy policy:
“We collect information to provide better services to all of our users – from figuring out basic stuff like which language you speak, to more complex things like which ads you’ll find most useful, the people who matter most to you online, or which YouTube videos you might like”.
Let’s look at Facebook.
“We give you the power to share as part of our mission to make the world more open and connected. This policy describes what information we collect and how it is used and shared. You can find additional tools and information at Privacy Basics”.
Another example of the good intentions of Facebook.
“We work with third party companies who help us provide and improve our Services or who use advertising or related products, which makes it possible to operate our companies and provide free services to people around the world”.
What is missing is the way these companies use user data to swing huge profit margins. This is arguably the most important thing that transparency is supposed to reveal. Both Facebook and Google, the two Silicon Valley tech giants, make a strong claim that they do everything to better serve their customer base–however, their intentions are geared more toward data monetization.
However, this does not mean that such data can’t be re-identified (for a more in-depth explanation, check out this revealing paper). It is often said in the surveillance studies community that metadata is more revealing than data. Metadata isn’t merely unidentifiable and arbitrary; if it were, why on earth would it be treated like the gold of the digital age?
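The re-identification point above can be made concrete with a toy sketch (all names, fields, and records here are invented for illustration): combining just a few “harmless” metadata fields from an “anonymized” dataset with a public record is often enough to single a person out.

```python
# Toy illustration of re-identification via quasi-identifiers.
# The dataset and field names are invented for this example.

anonymized_records = [
    {"zip": "K7L", "birth_year": 1990, "gender": "F", "diagnosis": "asthma"},
    {"zip": "K7L", "birth_year": 1990, "gender": "M", "diagnosis": "flu"},
    {"zip": "K7M", "birth_year": 1985, "gender": "F", "diagnosis": "diabetes"},
]

# A public source (say, a voter roll) links a name to the same fields.
public_record = {"name": "Alice", "zip": "K7L", "birth_year": 1990, "gender": "F"}

def reidentify(public, records, keys=("zip", "birth_year", "gender")):
    """Return the records whose quasi-identifiers match the public record."""
    return [r for r in records if all(r[k] == public[k] for k in keys)]

matches = reidentify(public_record, anonymized_records)
# A unique match re-identifies the "anonymous" record: Alice has asthma.
print(len(matches), matches[0]["diagnosis"])
```

No single field here identifies anyone, but the combination does — which is why metadata that looks arbitrary in isolation is treated as gold in aggregate.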
Though Google and Facebook are certainly important and high quality tools, these issues are still exceedingly problematic and must be addressed. One reason we should be concerned is that we rely on these social media tools several times a day to maneuver through our social and cultural lives.
It isn’t just a corporate product anymore; it is an indispensable piece of social/cultural capital. Don’t believe me? Try to quit Facebook and/or Google—I bet you can’t. It is very difficult to engage productively in our communities without using what these corporations have to offer. These corporations are silently bleeding us dry of our privacy in order to establish high profit margins. And I am willing to bet that most people don’t know the extent of all of these injustices.
The more privacy we allow these corporate entities to take from us, the more they will push to widen the gaping hole in our already transparent lives. This can be a terrifying prospect in terms of the politics of freedom of speech and expression. You may not be hiding something now—but we all hide something eventually. And we have a right to do so. Take, for example, the right to obscure your sexuality. This is a touchy subject, as the homophobic and heteronormative ideologies that lead to hate crimes tend to fluctuate. One day you are accepted for who you are; the next, you might be beaten to a pulp or socially stigmatized. You may lose your job. Your house. And in some terrible cases, your life. We need the right to hide. And we need the right to choose anonymity.
So what now? Don’t be so complacent. The tech giants are simulating transparency without telling the whole story. They know very well that if their surveillance capacities are pulled into the spotlight, they will be forced to change. But they also know very well that no one reads these documents anyway. And because no one reads them—they are left quite untouchable.
Technology and its surveillance capacities are constantly changing and improving. However, our laws and policies are not keeping up. That is why it is so important to speak out and inform those around you—as academics, as citizens, and as consumers.
Pokémon Go is lulling the world into a humongous augmented distraction—a distraction that is covering up some pretty intense politics. It is almost as if we have stepped into Ernest Cline’s Ready Player One, where distraction through virtual reality meets the war between anonymity and surveillance.
It has been well publicized that this new app, which is fueling a Pokemania (a nostalgic resurgence of interest in Pokémon every time a new rendition of the game is released), has some rather arbitrary and invasive access to your mobile phone’s data—particularly, unhinged access to your Google account and other features of your device.
What is Pokémon Go? The question almost seems pointless now, given the popularity of the game—but bear with me, for those of you who have not tuned in to the pokemania. Pokémon was a TV show released in the late 90s that became dream fuel for a generation of children and young adults. It featured a young boy, Ash Ketchum, who embarked on a journey to capture Pokémon in a technology known as the “pokeball,” under the direction of the Professor (a man who studies Pokémon). After catching a Pokémon, the young boy (and the thousands of other Pokémon trainers) would aspire to train it to battle other Pokémon.
Shortly after the show caught on, Nintendo released Pokémon Red and Blue for the Game Boy. These games became an absolute hit. I remember walking to school with my eyes glued to my little pixelated screen—traversing roads and dodging cars while battling Pokémon and trading them with my schoolyard peers, the game’s slogan repeating through my cranium: “Gotta Catch ’Em All”.
Nintendo has continued to release Pokémon games for its various platforms up to the present, and each successive release has been met with an obsessive and nostalgic excitement that takes over the gaming community—or at least anyone who grew up playing Pokémon Red and Blue and collecting the Pokémon cards.
Pokémon Go is a game played on a smartphone that uses geolocational data and mapping technologies to turn the phone into a lens peering into the Pokémon world. Through the interface of your mobile device, you can catch Pokémon wandering the “real” world, battle through gyms, and find items that will aid your journey. It augments the world around the user so that everything and everywhere becomes a part of the game.
Just like its predecessor, a game known as Ingress, many of the geo features in the game are set up around important places: art exhibits, cultural or historical sites, and parks. Following the maps leads you on a productive tour of a city’s geographical culture.
I want to explore the obsessive and nostalgic excitement through a techno-socio-cultural lens. I will unpack this critique into three parts: (1) the sociology of privacy, (2) Big data and algorithmic surveillance, and (3) the culture of nostalgia and the digital sublime.
Before I continue with this post, I want to assert that Pokémon Go is not an all-around terrible, megalomaniacal, Big Brother-type game. It is enabling new ways for people to engage in the social world—check out this sociological blog post exploring just that. However, it would be silly not to apply a critical perspective to it.
There are some restrictions I’d like to apply to my analysis: (1) Pokémon Go is not an immature or irrelevant activity; millions of people of all ages and cultural backgrounds are playing it, meaning it has a ton of significance. And (2) the people playing Pokémon Go are not zombies or passive consumers; they are intentional and unpredictable social actors who have the ability to understand their situation.
Sociology of Privacy
One thing that boggles the minds of surveillance studies scholars is how the vast population of people using social media and mobile applications does not seem to care about the invasive surveillance embedded in everything they use.
In my own interviews of Facebook users in 2014, many of my participants claimed, “I have nothing to hide”—a pervasive mentality that enables big corporate and governmental entities to gain access to and control over large swaths of data. This nonchalant attitude toward surveillance cedes massive ground in the dismantling of our rights to privacy. Such an attitude is not surprising, though, as the entire ecosystem of social media is set up to surveil.
David Lyon, in his book Surveillance After Snowden, asserts that privacy is generally seen as a natural and democratic right that should be afforded to all citizens—but admits that a problem lies in the fact that informational privacy is not as valued as bodily or territorial privacy, even though information, data, and metadata are much more revealing than bodily or territorial surveillance.
Lyon notes three important points about privacy that are all very relevant to the current epidemic of pokemania: 1) the collecting of information is now directly connected to risk mitigation and national security, implying that we are not safe unless we are surveilled; 2) everyone is now a target of mass surveillance, not just the criminal; 3) data collected through mass surveillance are used to create profiles of people—profiles that may be completely inaccurate depending on the data collected, but you will never know the difference.
I would like to add a fourth: how can the data be used to swing massive profits? Niantic, creator of Ingress and Pokémon Go, uses its privacy policies to legitimize the “sharing” (read: selling) of data with governments and third-party groups. Government surveillance is often the focus of criticism; capitalist corporations, however, are rarely held accountable to ethical practices. Who is selling this data? Who is buying it? And what is this monetized data being used for?
As Lyon asserts, privacy is not merely an individual concern—it is important socially and politically for a well-balanced democracy. Edward Snowden has been known to say, “It’s not really about surveillance, it’s about democracy”. While we continue to allow powerful groups to chip away at our privacy for entertainment, we give up our ability to criticize and challenge injustice.
Snowden reminds us that when we give up our democracy to the control room—there is zero accountability, zero transparency, and decisions are made without any democratic process.
So while we are distracted trying to catch a Snorlax at the park, we are giving away more and more of our lives to mysterious and complicated groups that want nothing but large profits and control. For a much more scathing review of this, see this blog post on surveillance and Pokémon.
Big Data and Algorithms
So what about the data? What is big data? First off, it’s all the craze right now, as data scientists, social scientists, policy makers, and business gurus scramble to understand how to use, abuse, and criticize such a thing. Big data draws on two large disciplines—statistics and computer science. It is the collection and analysis of unthinkably large amounts of aggregated data, carried out largely by computer software and algorithms.
Boyd and Crawford (2012) offer a much more precise definition. They assert that Big Data is a “cultural, technological, and scholarly phenomenon” that can be broken into three interconnected features:
Technology – Computer science, large servers, and complicated algorithms.
Analysis – Using large data-sets compiled from technological techniques to create social, political, cultural and legal claims.
Mythology – Widespread belief of the power of Big Data to offer a superior knowledge that carries immense predictive value.
The big problem that remains is how to find, generate, and collect all of this data. In terms of social media and video games, much of this has to do with offering a “free” service to consumers, who take on the role of the “prosumer”—a social actor who both produces and consumes the commodity they are “paying” for.
In terms of social media (like Facebook), as users interact with each other they produce affective or emotional data—through liking things, sharing things, and discussing things—which is then collected by algorithms and fed back into the system through targeted advertisements. The user is implicated in both the production and consumption of that data.
The user is given free access to the social media platform; however, they pay for it by giving the platform a transparent window into their lives that is then monetized and sold for large profits. People’s reactions to this form of surveillance vary: some offer scathing criticisms, others don’t give two shits, and some just act a little more cautiously.
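The prosumer feedback loop described above can be sketched in a few lines (a deliberately simplified toy, with all names and data invented): interactions become a profile, and the profile drives which ad gets served back.

```python
# Toy sketch of the like -> profile -> targeted-ad feedback loop.
# All user names, topics, and ad copy here are invented.
from collections import Counter

# Raw interaction events: (user, action, topic)
interactions = [
    ("alice", "like", "hiking"),
    ("alice", "share", "hiking"),
    ("alice", "like", "cameras"),
    ("bob", "like", "cooking"),
]

def build_profile(user, events):
    """Aggregate a user's likes/shares into an interest profile."""
    return Counter(topic for (u, _, topic) in events if u == user)

def pick_ad(profile, ad_inventory):
    """Serve the ad matching the user's strongest interest."""
    top_interest, _ = profile.most_common(1)[0]
    return ad_inventory.get(top_interest, "generic ad")

ads = {"hiking": "Buy boots!", "cooking": "Knife sale!"}
alice = build_profile("alice", interactions)
print(pick_ad(alice, ads))  # alice's hiking activity drives the ad choice
```

Real targeting systems are vastly more elaborate, but the structure is the same: the user produces the data, and the same user consumes the advertisement built from it.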
Why is this important for Pokémon Go? Because you trade your data and privacy for access to what Pokémon Go has to offer. It is incredibly clever of Niantic—using the nostalgic Pokemania to usher users into consenting to ridiculous surveillance techniques.
It gets worse. As Ashley Feinberg of Gawker identified, the people responsible for Niantic have some shady connections to the international intelligence community, causing some in the surveillance studies field to fear that Pokémon might just be an international intelligence conspiracy (it sounds crazy—but it makes complete sense).
David Murakami Wood coined the concept of “vanishing surveillance”: a phenomenon, both intentional and unintentional, in which the surveillance capacities of a device fade into the background, so that users are not aware, or at least not completely aware, that they are being watched. Pokémon Go, an innocent video game enabling new ways of being social in public, becomes an invisible surveillance device that may have international and interpersonal consequences. And it is the Pokémon themselves that allow the surveillance to vanish from sight and mind.
A Culture of Nostalgia
So what drives people to consent to all of this? What kinds of cultural patterns shape us into an almost fanatical state when a Pokémon game is released?
The first factor within the culture of Pokémon is its appeal to nostalgia. Jared Miracle, in a blog post on The Geek Anthropologist, talks about the power of nostalgia: it taps into the childhoods of an entire generation, and it even moves outside the obscure boundaries of gamer culture into the larger pop-cultural context. It wasn’t only geeks that played Pokémon; it was just about everyone. This might provide an explanation for why so many people are wandering around with their cell phones held out before them (I saw them wandering around Queen’s campus today, while I was also wandering around).
However, it is not all about nostalgia. I believe that nostalgia plays a role in a bigger process: the digital sublime and the mythologizing of the power of media.
What is a mythology? Vincent Mosco, in his book The Digital Sublime, defines myths as “stories that animate individuals and societies by providing paths to transcendence that lift people out of the banality of everyday life”. Myths are a form of reality that represents how people see the world from the perspective of everyday life.
Myths are also implicit in power. “’Myth’ is not merely an anthropological term that one might equate with human values. It is also a political term that inflects human values with ideology… Myths sustain themselves when they are embraced by power, as when legitimate figures… tell them and, in doing so, keep them alive”.
These myths, along with nostalgia for Pokémon paraphernalia, generate the digital sublime—a phenomenon that has us going head over heels for new technology. The mythologies that support it can be positive or negative.
Positive mythologies might sound a little like this: “Pokémon Go is allowing us to leave our homes and experience the world! We meet new people and we are empowered by new ways of interacting with each other. Hurrah!”.
Negative Mythologies are also important: “Pokémon Go is creating a generation of zombies. People are wasting their time catching those stupid Pokémon. They are blindly and dangerously wandering around, falling off cliffs, and invading private property. Damn those immature assholes”.
Both of these mythologies cross over each other to colour the experiences of those who play and those who watch.
We need to be careful of generating mythologies about the capacity for games to facilitate freedom, creativity, and sociality. We also need to be careful not to apply too much criticism. Such mythologies not only create a basic, overly simplistic way of understanding gaming, surveillance, and human culture; they also blind us to nuance and detail that may be important to a broader understanding.
Drawing things together—A Political Economy of Pokémon
Taking a techno-socio-cultural perspective allows us to engage with Pokémon Go with a nuanced understanding of its positive and negative characteristics. It is possible to look at how this media creates a complex ecosystem of social concerns, political controversies, and cultural engagements with nostalgia, mythologizing, and capitalist enterprise.
Pokémon Go is indeed enabling a ton of new ways of interacting and helping people with mental illness get out of their homes to experience the world—however, we can’t forget that it is also an advanced technology developed by those who have an interest in money and power.
Regardless of the benefits that are emerging from use of this application, there are still important questions about privacy and the collection and use of Big Data.
So Pokemon Go isn’t just enabling new ways of being social with the larger world. It is enabling new ways of engaging with issues of surveillance, neo-liberal capitalism, and social control through the least expected avenues.
As these problematics become more and more public—will we still trade off our freedom for entertainment?
Social media is neither good nor bad, though this doesn’t mean it’s neutral: it certainly has the potential both to exploit and to empower. Nicole Costa’s rendition of her experiences and tribulations with Facebook in her recent article My online obsessions: How social media can be a harmful form of communication was incredibly touching. Her refusal of, and resistance to, appearing in and contributing to the Facebook community is empowering. However, I believe it is also misleading. Social media and digital exchange and interaction are here to stay (save for some cataclysmic event that knocks out the electrical infrastructure), and because of this I believe that we need to learn how to engage with them productively and ethically. We need to engage with social media in a way that doesn’t jump straight into a moralizing agenda—by which I mean portraying social media as either the savior of humanity or a dystopian wasteland where communication collapses into self-absorbed decadence.
How do we maneuver this politically charged, landmine-addled cyberspace? First we need to recognize that a great number (in the billions) of the human race use social media of all sorts for many reasons. But that is far too broad; let’s focus on Facebook. Facebook is among the most popular of social media, with over 1.5 billion users and growing. It is built into the very infrastructure of communication in the Western world. If you have a mobile phone, you very likely have Facebook. You might even use Facebook’s messenger service more than your text messaging. Facebook allows us to share information, build social movements, and rally people together in all sorts of grassroots wonders. As an activist, I’ve used Facebook to run successful campaigns. Why? Everyone uses it, and because of this, it has the power (if used correctly) to amplify your voice. Facebook, and most social media, can be very empowering.
But hold your horses! Facebook is still terrifyingly exploitative. Its access to your personal and meta data is unprecedented. Furthermore, it actively uses the data that you give it to haul in billions of dollars. Issues of big data and capitalism are finally coming to the forefront of academic and popular discussion, but the nature of such complicated structures is still shrouded in obscurity. The user sees the interface on their computer monitor. But Facebook sees electronic data points that represent every aspect of its users in aggregate. Through elaborate surveillance techniques, these data points are collected, organized, stored, and traded on an opaque big data marketplace. Furthermore, the user is not paid for their (large) contribution to the product being sold. They are exploited for their data and their labour—as everything you do on Facebook is part of the data that is commodified and sold.
At the same time Facebook (and other prominent social media platforms) allow for an unprecedented freedom and speed of communication. They have been embedded into our everyday ways of socializing with each other. New social media have become an invaluable and ubiquitous social resource that we engage in from the time we wake to the time we sleep. It has been used to organize events, rallies and protests. It is used to keep in touch with distant family and friends. It is used for romance, hatred, companionship, and debate. Facebook is playful and empowering.
So if you are like me, then you may be absolutely confounded as to how to resolve the tension between Facebook (and other social media) being at once exploitative and empowering. We have gone too far down the rabbit hole of social media and digital communication to merely refuse to use it; it is now an intimate part of our social infrastructure. Those who resist through refusal may find themselves at multiple disadvantages in how they engage with the world. My own ethnographic research into why users refused Facebook showed that those who abandoned it may have felt empowered by overcoming the “addiction” of social media, but they also felt excluded and alone. And it must be noted that almost everyone I talked to who had quit Facebook is now using it again. So clearly, refusal to use these services is not enough to meaningfully challenge the problematics of social media.
The Luddites were historically textile workers opposed to the invasion of machines into their workplace—machines that they figured would gouge away at their wages. Today, the term is used for those who refuse to use certain technologies. In the realm of social media, Luddite resistance has proved incredibly ineffective. It is also important to note that this sort of refusal obscures ways of meaningfully resisting mass surveillance and the exploitation of user data.
I propose the complete opposite: the path of knowledge. We need to learn how to maneuver through social media and the Internet in ways that allow us access to anonymity—ways of asserting our right to anonymity. This is critical. We need to mobilize, teach, and learn through workshops. We need to scour the Internet for free resources on the technical workings of social media. We need to spread awareness of this double-edged nature of social media. It is no use to take a stance of refusal, to ignore the importance of social media, and thus remain ignorant of how it all works. When we do this, we actually empower these large capitalist corporations to exploit us that much more. The less we know about the calculus of social media and how it works at the level of algorithm, code, and protocol, the better the capitalists are at disguising and hiding exploitation.
For those of us who have been reading science fiction for some time now, it becomes clear that SF has a strange propensity for becoming prophetic. Many of the themes in science fiction classics are now used as overarching metaphors in mainstream surveillance studies. Most notable among these are Orwell’s Big Brother, Huxley’s Brave New World, and Kafka’s The Trial. Other common tropes we might refer to are Minority Report, Ender’s Game, and Gattaca.
Though I am not trying to claim that these classics aren’t good pieces of SF literature, they may not do a superb job of covering the issues implicit in contemporary surveillance. Imagine George Orwell coming to the realization that the Internet is one humongous surveillance machine with the power of mass, dragnet surveillance. Or imagine Huxley’s reaction to the lulling of consumer affect through branding and advertisement. The power of surveillance tools to control and shape large populations has become a prominent and dangerous feature of the 21st century.
As Richard Hoggart says,
“Things can never quite be the same after we have read—really read—a really good book.”
So let’s stop recycling old metaphors (if I read another surveillance book that references Big Brother or the Panopticon I’m going to switch fields). Let’s look at the work of our own generation of writers and storytellers. What I think we might find is a rich stock of knowledge and cultural data that could offer insight into our (post)human relationship with advanced technology.
The reason I am using mixed media, as opposed to focusing on a singular medium, is that I believe our relationship with media is not limited to one or the other. Novels, movies, video games, graphic novels, and YouTube videos all offer us something in terms of storytelling: part entertainment, part catharsis, premised on and constructed through engagement with the story. Our generation of storytelling has shifted into the realm of mixed media engagement. What follows are some stories that I think are critically important to understanding the human condition in our own generational context.
P.S. They are in no particular order.
Disclaimer: Though I have tried to be cautious not to forfeit any critical plot or character points, beware of spoilers.
SOMA is a survival horror video game released in 2015 by Frictional Games, the developers of Amnesia (another terrifying game). It is a science fiction story that both frightens you and imparts an existential crisis as you struggle to find “human” meaning in the fusion of life and machine. After taking part in a neurological experiment, the main protagonist, Simon Jarrett, wakes up in an abandoned underwater facility called PATHOS-II. Instead of people, Jarrett finds himself trapped with the company of both malicious and benevolent robots—some of which believe they are human. The interesting overlap with surveillance here is the focus on neurological surveillance. Scientists (in and out of the game) transform the biological brain into a series of data points that represent the original. From this, scientists hope to predict or instill behavior—or, in the case of this game, transform human into machine by literally uploading the data points of the brain in aggregate to a computer. The game poses a constant question: is there any difference between human consciousness and a copy of human consciousness? SOMA is more than just a scary game—it is a philosophical treatise on the post-human illustrated through an interactive story.
Ready Player One
Ready Player One is a novel by Ernest Cline that covers a wide breadth of themes, notably the uneasy relationship between surveillance and anonymity, visibility and hiding. Cline constructs a world that doesn’t seem very far off from our own: a world where people begin to embrace simulation through virtual reality (VR) as environmental disaster plagues the actual world. People hide in the sublime. The VR game OASIS, a world of many worlds, is home to many clever pop culture references, mostly to music, video games and movies, with an extra emphasis on science fiction. Embedded in this world of worlds are several “Easter Eggs” (surprises hidden in video games) that act as a treasure trail to the late OASIS founder’s fortune and ultimate control over the virtual world. Anonymity is the norm of OASIS, a utopian world where the original, democratic ideal of the Internet is realized: a place where anyone can be anybody, without reference to their actual identity. However, this world is jeopardized when the corporation Innovative Online Industries begins searching for the Easter Eggs too, hoping to take over OASIS and remake it to generate capital. The theme of anonymity vs. mass surveillance for profit arguably fuels a major global debate, as all “places” of the Internet are surveilled in increasingly invasive ways. Anonymity has almost disappeared from the Internet, replaced with quasi-public profiles (Facebook and Google+) that exist to make billions of dollars off of people’s identities and user-generated content. The original dream of the Internet, sadly, has failed.
Nexus is a science fiction novel by Ramez Naam following characters caught up with a new type of “nano-drug” that restructures the human brain so that people can connect mind to mind. There are those who support the drug and those who oppose it. This conflict is followed by a slurry of espionage that exposes the characters to incredible dangers. The theme of surveillance in Nexus reflects a new fixation on neuroscience: the ability to surveil the essential, bio-chemical features of the human mind, as well as the exposure of mind and memory to others participating in this new psychedelic (psychosocial) drug. This is a level of exposure that far supersedes our experiences with the Internet and social media. Imagine being hardwired into a computer network. The book also follows traditional surveillance themes as the main character, Kaden Lane, becomes entangled in the conflict between private corporations and state government.
Social media in the 21st century has positioned Western society within the context of visibility and exposure. Most people are simultaneously engaged in self-exposure and participatory surveillance, as we post content about our own lives and browse and read content about the lives of our friends and family. The Circle by Dave Eggers works this theme through a character named Mae Holland, who has just been hired by the world’s largest IT company, located in a place called the Circle. The Circle is a place, much like a university campus, with literally everything on it. This place borders on utopia, a place where work and play blend. However, following the mantra “All that happens must be known”, social media penetrates the lives of those who exist in the Circle in pervasive and exposing ways. Very quickly, the utopian illusion slips away into dystopia.
Slenderman was, in its bare skeleton form, introduced to the Internet by Eric Knudsen on the (in)famous Something Awful forum board for a paranormal photo editing contest. However, within a year, Slenderman was sucked into a collective narrative construction across all media platforms. People blogged about it, tweeted about it, YouTubed about it. A massive and ever-changing (and unstable) urban legend (or fakelore) was constructed in the chaos of cyberspace. Slenderman, the paranormal creature, can be described as a tall man with unnaturally long arms and legs (and sometimes tentacles), wearing a black suit, with no face. It is usually depicted as a creature who watches, in other words surveils. It watches from obscure areas, slowly driving its victim to paranoia and insanity. Then the victim disappears, without a trace. Slenderman is the contemporary bogeyman. But it also shares a narrative with dangerous, obscure, and mysterious secret police and intelligence agencies. As Snowden revealed to the public, governments, through mass surveillance techniques, watch everyone and everything. Could the Slenderman narrative be telling of a deep-seated cultural fear of government surveillance in the 21st century? There are many ways to tap into this story: Google blogs, Tumblr accounts, and Twitter accounts, but also YouTube series like Marble Hornets, EverymanHYBRID, and Tribe Twelve. Also check out the genre called creepypasta for an extra home-brewed thrill.
Originally appeared in the Queen’s Journal on November 13th, 2015.
“I am just a citizen. I was the mechanism of disclosure. It’s not up to me to say what the future should be — it’s up to you,” NSA whistleblower Edward Snowden told a packed house in Grant Hall.
Snowden, a globally polarizing figure, was invited by the Queen’s International Affairs Association (QIAA) as the keynote speaker for the Queen’s Model United Nations Invitational (QMUNi).
As the talk commenced at 6:30 p.m., Snowden was met with applause.
The buzz surrounding Snowden’s Google Hangout talk on Thursday at Grant Hall started early, as crowds began lining up to enter the building. Grant Hall quickly hit capacity.
Snowden began with a discussion of his motivations to disclose countless NSA confidential documents. He told the audience that he once believed wholeheartedly that mass surveillance was for the public good.
He came from a “federal family”, he said, with ties to both politics and the military. He said that once he reached the peak of his career in government intelligence, when he received the highest security clearance, he saw the depth of the problem.
After that realization came the release of classified documents to journalists in 2013, his defection from the NSA and his indefinite stay in Russia.
“Progress often begins as an outright challenge to the law. Progress in many cases is illegal,” he said.
However, he has made himself into more than just a whistleblower. Snowden has continued to push for and encourage discussion about mass surveillance.
“Justice has to be seen to be done,” he said.
“I don’t live in Russia, I live on the Internet,” he said at another point during the talk.
When asked about Bill C-51 — the controversial terror bill in Canada — Snowden said “terrorism is often the public justification, but it’s not the actual motivation” for the bill.
He continued to say that if you strip the bill of the word “terrorism”, you can see the extent to which the bill makes fundamental changes that affect civil rights.
Snowden’s talk was intended to encourage discussion about mass surveillance. QIAA had initially contacted Snowden’s lawyer and publishers, who handle Snowden’s public affairs, and after a long process of back-and-forth negotiations they secured Snowden as a keynote speaker.
Dr. David Lyon, director of the Surveillance Studies Center and author of the recent publication Surveillance After Snowden, acted as the moderator for the talk.
There were mixed opinions among audience members about Edward Snowden and his mass disclosures of National Security Agency (NSA) intelligence documents to journalists in 2013.
Some students, like Mackenzie Schroeder, Nurs ’17, said Snowden’s actions were gutsy but well-intentioned.
Another guest, Akif Hasni, a PhD student in political studies, said he thought Snowden’s actions were important, despite the problems associated with publishing that information.
Other guests at the event didn’t completely agree with Snowden’s whistleblowing.
“It’s a dangerous thing to tell newspapers about. The thing about guys like Edward Snowden is that no one is going to know if what he did was good, while the action itself may be,” Sam Kary, ArtSci ’15, said.
Kary referred to John Oliver’s Snowden interview, in which Oliver highlighted damage to national security caused by careless redacting of leaked documents by The New York Times.
The failure to properly redact leaked documents revealed the name of an NSA agent along with information on how the US government was targeting al-Qaeda operatives in Mosul in 2010.
It was recently announced that YouTube, owned and operated by Google, plans to release a paid subscription service. This would entail prioritizing services for those who can afford them and creating exclusive content for those willing to pay. This is all kinds of messed up, but the most nefarious aspect is that they are already making money off of you. Google uses you much like an (unpaid) employee. All of the content you generate, use, or provide “free” to Google, they organize and trade through complicated surveillance systems to swing a profit off of surplus value. This is why services like Facebook, Twitter and YouTube are free. They are funded by (and make ludicrous profits off of) your personal information.
“When you upload, submit, store, send or receive content to or through our Services, you give Google (and those we work with) a worldwide license to use, host, store, reproduce, modify, create derivative works (such as those resulting from translations, adaptations or other changes we make so that your content works better with our Services), communicate, publish, publicly perform, publicly display and distribute such content.”
They use complicated and automated means of surveillance to collect, organize, and monetize your data. They are also free to make use of your user-generated content, things you created with your own time and effort, though you are not paid for them. Regardless of how you understand your relationship with Google, you should understand that the relationship is framed within a capitalist system. You are a Google piggy bank.
The concept of the cyber prosumer is discussed by many political economists and surveillance theorists. Cohen (2008) introduces the concept of the prosumer in her work on surveillance and Facebook. The concept applies to any Web 2.0 social media application (Facebook, Twitter, Tumblr, etc.), and it is most certainly part of Google’s political economic structure. Cohen observes, “Business models based on a notion of the consumer as producer have allowed Web 2.0 applications to capitalize on the time spent participating in communicative activity and information sharing” (7). To call a social media user a prosumer is to say that they both produce and consume simultaneously while using Google services: they produce the user-generated content that is then sold to advertisers and used to target advertisements back at the prosumer.
In the process of Google capitalizing on this user-generated content, the prosumer is involved in ‘immaterial labour’. This concept was devised by Lazzarato (1996) to discuss the informational and cultural aspects of labour exploitation. Though the Internet looked far different in the 90s, his concept has become even more valuable with the advent of social media. Lazzarato (1996) elaborates that immaterial labour is “the labour that produces the informational and cultural content of the commodity” (1). He breaks this concept down into two components: informational content and cultural content (ibid 1). Informational content refers to the shift from physical labour to labour organized by computer and digital technology (ibid 1). Cultural content refers to the production of creative and artistic artifacts that were never (and still aren’t) considered part of the realm of labour (ibid 1).
This concept is incredibly useful for understanding the role of social media in capitalism, as immaterial labour, often experienced as the realm of the fun and the social, becomes the unrecognized exploitation of users as corporations utilize their creative potential for capital gain. Bauman and Lyon (2013) write, “The arousing of desires—is thereby written out of the marketing budget and transferred on to the shoulders of prospective consumers” (125). It should be noted, though, that this use of immaterial labour could be considered a fair trade-off for free use of Google’s services.
The troublesome part of all of this is that if Google begins to charge subscription fees for better (preferred) services, the exploitation doubles. First, the prosumer engages in immaterial labour by creating user-generated content that Google consolidates to produce surplus value, thus generating profit. Then, the prosumer is charged a subscription fee for use. In terms of labour, you will essentially have to pay to provide Google with the fruits of your own labour.
What may be even more troubling is that if Google succeeds with the implementation of YouTube Red, it will likely provide incentive for other social media sites, such as Facebook, to do similar things. This is a conversation we should not take lightly. Surveillance may have its benefits to society, but when used by social media sites within the capitalist framework, two issues come to mind: exploitation and control. We need to take a critical stance, or we might slide down the slope toward subscription social media.
Bauman, Zygmunt and David Lyon. 2013. Liquid Surveillance. Cambridge: Polity.
Cohen, Nicole S. 2008. “The Valorization of Surveillance: Towards a Political Economy of Facebook.” Democratic Communique 22(1):5-22.
Lazzarato, M. 1996. ‘Immaterial Labour.’ Generation Online. Retrieved November 5, 2015 (http://www.generation-online.org/c/fcimmateriallabour3.htm).
I suppose I should begin with a (very) brief introduction to the study of political economy (from a novice’s perspective) and then draw out its many connections to how we exchange and produce (big) data through our use of social media (Facebook, Instagram, Twitter, Tumblr, etc.). As far as the development of political economy in the social sciences is concerned, we begin with Hegel and Marx/Engels. So prepare your head for a quick review of the history of humanity. Ready? Go!
Hegel developed the philosophical concept of dialectics in history. He idealized history as the production of knowledge (thesis) that was then challenged by another form of knowledge (antithesis); through conflict and debate, a new body of knowledge (synthesis) was formed. Dialectics would continue to cycle like this, in a back and forth tension between bodies of knowledge, until we reached the pinnacle of all knowledge: the perfect society. The notion of a “perfect society” is thoroughly challenged in our era of academic thought. Nonetheless, Hegel inspired Karl Marx to (dialectically) develop the historical materialist methodology, which applied dialectical thought in a more empirical fashion. (The development of these ideas led to a fissure in academic thought between the idealists (Hegelians) and the materialists (Marxists).)
Karl Marx grounded his research in the development and history of capital (and capitalism). Through his empirical studies he theorized that the mode of production was the foundation of (just about) everything in society. This is the material base from which the superstructure arises. The superstructure is the heterogeneous mass of ideological thought (politics, law, social relations, etc.). It is from the superstructure, coordinated by the mode of production (and, some argue, the mode of exchange), that we get our (unstable and constantly changing) understanding of value. Furthermore, if the mode of production changes (as it certainly has in this case), the superstructure changes, along with the meaning of social relations and formations. It is from this conception of value, as understood by political economy, that I want to spring to understand how we exchange (big) data through the use of social media. I will use Facebook as the overarching example, because at this point we all have an intimate knowledge of Facebook. Facebook also owns other social media platforms (such as Instagram), and it is certainly the largest social network site today.
For the entire architecture (both physical and digital) of Facebook (and other forms of social media) to exist, value needs to be generated from information (big data). Facebook is a capitalist enterprise that seeks to generate profit from such information. Because of this, Facebook works to proliferate and expand its user base; the more its user base proliferates, the more data it has to draw from. I am going to highlight that Facebook achieves all of this through two fundamental forms of surveillance: participatory surveillance and capital surveillance.
First, value must be generated. Value is generated for big data through its production and consumption, and before we can understand how value is created, we need to talk about the prosumer. In the context of Facebook, the user produces and consumes the user-generated content and metadata that is then used as big data in aggregate. So essentially, producer and consumer are collapsed into the user prosumer (Cohen 2008:7). Value is generated because the fruits of the prosumer’s activity (data from biography, interaction, and Internet usage) are sold to advertisers, who then feed it back into the system as targeted advertisements. According to Fuchs (2012), the prosumer is packaged, commodified and sold (146):
“Facebook prosumers are double objects of commodification. They are first commodified by corporate platform operators, who sell them to advertising clients, and this results, second, in an intensified exposure to commodity logic. They are permanently exposed to commodity propaganda presented by advertisements while they are online. Most online time is advertisement time” (146).
This is obviously problematic. I think it is also important that we acknowledge that the role of prosumer positions the Facebook user as a free labour commodity. Cohen (2008) asserts, “Web 2.0 models depend on the audience producing the content, thus requiring an approach that can account for the labour involved in the production of 2.0 content, which can be understood as information, social networks, relationships, and affect” (8). In this process of production, Facebook repackages user-generated content and sells the data to generate intense profits (in the billions range). The user prosumer remains unpaid in this exchange. Interestingly enough, in my own qualitative research, participants believed that use of Facebook’s services qualified as a fair exchange for their data. An apt thread of thinking that could address this tension comes from van Dijck (2012), who observes, “Connectivity is premised on a double logic of empowerment and exploitation” (144). With this noted, I would like to focus on the production, consumption and monetization of user-generated content.
The content produced and consumed by the user prosumer is organized through two layers of surveillance. The first layer is participatory surveillance. Albrechtslund (2008), trying to address the overwhelmingly dystopic metaphors implicit in the discourse and study of surveillance, explains that the use of hierarchical models of surveillance (like Big Brother and the panopticon) obscures important sociological processes that occur through the mediation of social media (8). Furthermore, it treats users as passive agents, unable to resist the oppressive and repressive forces of Big Brother. He attempts instead to frame surveillance as a mutual, horizontal process that empowers users through the sharing of information and the creation of elaborate autobiographies. Albrechtslund elaborates that social media offer “new ways of constructing identity, meeting friends and colleagues, as well as socializing with strangers” (8). In this understanding of social media, the subject is not a passive agent under the oppressive gaze of Big Brother, but an active subject pursuing empowerment. Furthermore, Albrechtslund frames user-generated content specifically as sharing, not trading. In doing so, however, he ignores that these social media platforms are constructed, shaped and owned by capitalist corporations seeking profit. This is where the second layer of surveillance becomes important: capital surveillance.
While the user prosumer engages in participatory surveillance (in other words, produces and consumes user-generated content to share with others), the capitalist captures that data and repackages it to be sold to advertisers. This is done through complicated algorithmic software, which then stores the data in a large architecture of computer hardware, optic wires, and servers. The fruits of participatory surveillance are commodified (along with the prosumers themselves) and then traded to produce capital. This layer, the hierarchical and oppressive model of surveillance, organizes and shapes how user prosumers generate content. Thus van Dijck’s double logic of connectivity is realized. What is problematic here is that much of capital surveillance is rendered opaque or invisible to the user, who sees only the participatory aspects and the advertisements (repackaged user-generated content). Also problematic is that this entire process is automated, though that point will not be taken up in this article.
It is important to note that participatory surveillance is not inherently a capitalist endeavour. Cohen writes, “The labour performed on sites like Facebook is not produced by capitalism in any direct, cause and effect fashion… (it is) simply an answer to the economic needs of capital” (17). So where the user prosumer “shares” their user-generated content, the capitalist “trades” it. These are two interconnected, though fundamentally different, processes. We, the user prosumers, often don’t recognize the capital forms of surveillance occurring, because we are so intimately involved in the participatory forms. This, I believe, is the root of our apathy about the surveillance issues surrounding social media like Facebook. What needs to be devised next is how we can package these theories in a popular form and export them to those who are shaped by these forms of exploitative commodification. It is the work of social scientists to understand, and then to shape, the world around them.
Another lesson we should take from this is that not all surveillance is evil. We do not live in an inescapable dystopian society; to claim we do obscures many actual practices of surveillance that are beneficial, and renders the notion of resistance a practice in futility. Surveillance is a neutral phenomenon used, for better or worse, by a plethora of corporations, governments, non-governmental organizations, activists, and regular everyday people. But in saying this, we can’t ignore the potential abuse and exploitation that may come from the use of surveillance practices to increase the flow of capital.
Albrechtslund, Anders. 2008. “Online Social Networking as Participatory Surveillance.” First Monday 13(3). Retrieved Oct 9, 2015 (http://journals.uic.edu/ojs/index.php/fm/article/view/2142/1949).
Cohen, Nicole S. 2008. “The Valorization of Surveillance: Towards a Political Economy of Facebook.” Democratic Communique 22(1):5-22.
Fuchs, Christian. 2012. “The Political Economy of Privacy on Facebook.” Television & New Media 13(2):139-159.
van Dijck, José. 2012. “Facebook and the engineering of connectivity: A multi-layered approach to social media platforms.” Convergence: The International Journal of Research into New Media Technologies 19(2):141-155.