The report, which emerged out of a workshop organized by CSIS for the purposes of academic outreach, reflects a common attitude that state security and intelligence agencies hold towards social and environmental justice activists: flippant dismissal and demonization. Though the spy agency claims the report does not reflect an official position, it does reveal some of the logics underlying the surveillance of political activists. The report obscures the identities of the workshop's participants and the report's authors under the Chatham House Rule.
The immense popularity of social media and its omnipresence in how we communicate and share information has transformed the social and political landscape in ways that are only now being unveiled.
In the CSIS report, the authors collapse any distinction between activists, conspiracy theorists, and hostile foreign nationals into the category of “independent emergent activists” who are understood as “agents of disinformation”. This report asserts that activists distrustful of Western governments engage in the amplification of conspiracy theorists from the political left and right and are susceptible to being hijacked by foreign state disinformation organizations.
Instead of providing a nuanced approach to understanding emerging digital threats in our social media landscape, the report conflates the political left's opposition to violent military interventions and the exploitation of the global south with online conspiracy theories. There is a big difference between asserting that foreign nationals are able to influence how activists share news stories and asserting that activists are complicit in producing disinformation.
CSIS has muddied the water on the very issues it sought to address. At best, it provides vague and ambiguous background information that is unable to distinguish between activists and trolls. At worst, it has contributed to its own campaign of misinformation by failing to offer a sober, nuanced account of complex issues in social and environmental justice.
With that said, we can't minimize the impacts of disinformation and fake news on our media landscape. These concerns signal the emergence of forms of media manipulation that can be deployed en masse while targeting an individual's specific tastes and dispositions.
As the report observes, “With ‘fake news,’ the risk is not necessarily that it will overtake real news, but that democracy itself might drown in information.” If we are to approach this issue, we need to be careful not to fall into a state-policing bias that privileges security concerns over the ability to engage in political dissent, whistleblowing, and holding power to account.
The Lindsay Shepherd controversy has opened the Pandora’s Box once again on the notorious, vitriol-ridden “free speech” debate across Canada. It has largely consisted of tired arguments penned in op-eds claiming that the university has become home to left-wing authoritarians who muzzle the speech of those with whom they disagree. Such debates have become so politically noxious that Andrew Scheer, leader of the Conservative Party of Canada, has jumped on board, calling for a political response to the free speech crisis in universities. Furthermore, Lindsay Shepherd has become an alt-right darling amid the amplified calls for free speech on campus; she now has roughly nineteen thousand followers on Twitter and consistently feeds the fire with toxic tweets. One tweet reads, “Confirmed: WLU is a mental institution.”
Anon source: "profs cancel[led] our classes last Friday due to a free speech rally happening on campus & we feared for our own safety, & a lot of faculty members & grad students from the Communication program feel unsafe to come to campus"
Debates about free speech have a tendency to become unnuanced and flat, as they typically amount to blanket statements calling for the unbridled and unrestricted ability to say anything. As I explored in an article for Vice, such an understanding of free speech becomes complicated sociologically when superimposed on a society already stratified along vectors of identity. Free speech becomes even more nebulous when we read the subtext of free speech advocacy, which often cozies up with white supremacy, transphobia, and sexism.
I want to shift the discussion about free speech. Instead of focusing on why the academy needs free speech, I would like to ask how free speech is reasonably deployed in the scholarly pursuit of knowledge production. This analytical shift will allow us to move beyond romanticised notions of free speech and academic freedom and consider the various ways in which knowledge emerges and becomes entangled in institutional practices and professional obligations.
This debate has by and large ignored the ways in which knowledge is produced and shaped within the academy. I would like to suggest that the epistemological insights of science, technology, and society (STS) can provide a scaffolding for understanding the complexities of free speech in practice, as opposed to free speech in theory. Epistemology is the study of knowledge production. Despite the centrality of knowledge in all our social encounters, epistemological issues are often undervalued. Donna Haraway illustrates the importance of the structures of knowledge in this beautiful quote: “It matters what matters we use to think other matters with; it matters what stories we tell to tell other stories with; it matters what knots knot knots, what thoughts think thoughts, what descriptions describe descriptions, what ties tie ties.” The various shapes of the in-between matter that informs how we understand the world have consequences for how we frame free speech.
The production of knowledge in the academy is laden with formal and informal processes that shape how knowledge is produced, debated, disseminated, and taught. The sociologist John Law provides a useful framework for approaching knowledge production in the academy. He draws our attention to the messiness of the world in practice: how human emotions, scientific methods, institutional priorities, research ethics boards, peer-review committees, professional reputations, class syllabi, employment contracts, graduate student committees, and codes of conduct become entangled when we go about the business of saying or writing something. When we talk about free speech, these constraints are made opaque despite their centrality in shaping how we talk, write, and debate.
A graduate student, depending on their discipline and department, will typically take graduate courses, be employed as a teaching assistant (TA), and research their independent thesis work. In order to guide a budding scholar through the complexities of academic research and politics, a grad student works under a committee. Such committees are made up of professors who have been awarded PhDs and bring familiarity and experience as academics. One of these committee members is the grad student’s supervisor, who closely guides that student’s academic work. All research produced for the student’s thesis must be rigorously checked by their supervisor and committee. This leads to a painful process of sending in drafts and receiving back red marks. Such a process shapes what knowledge is reliable, rigorous, and fair, and what knowledge is inappropriate, poorly thought out, and indefensible. If a student ignores their committee, they will likely fail their thesis defense and not receive a degree.
If a grad student writes their own independent research, to give it credibility they need to submit it to an academic journal. All reputable journals use a peer-review process in which a committee of scholars assesses the quality, reliability, and credibility of academic work and rejects work that doesn’t meet academic standards. Poor research is sent back to the grad student to be revised or sent elsewhere, and some work is rejected outright for not meeting the reviewers’ criteria. Grad students need to have thick skin, as we get torn to pieces several times a year.
There are other ways in which academic knowledge is reasonably shaped. Research on human subjects is tightly controlled by General Research Ethics Boards (GREB) that are informed by federal policy and legislation. If a grad student ignores GREB, they could be expelled and have their credentials revoked. Academic conduct is governed by a Code of Conduct and other university policies that shape how scholars can interact with each other. And ultimately, the university must abide by the Criminal Code and the Charter of Rights and Freedoms, which protect people in the scholarly community against hate crimes and discrimination.
Finally, a grad student typically becomes a TA to help fund their studies. This is an admittedly precarious job that usually has students working overtime with no extra pay. The TA signs an employment contract, works under a professor who teaches the course, and does not have any authority to teach their own content. TAs do not have the same academic freedom as professors. Aadita Chaudhury, a PhD student at York University, penned an article that aptly delves into how Shepherd failed in her obligations as a TA. Grievances with professors are often mediated through a public-service union that advocates on behalf of the grad student.
These formal restrictions on how knowledge is produced are complemented by informal occupational norms enforced by students, faculty, and administration. This is the everydayness of the academy. A grad student can’t just write anything. Everything a grad student does in public (including their publications) is informally assessed by colleagues and professors. Miscalculations or poorly thought-out work can negatively impact the future of a grad student.
To engage in proper research in the academy is to maneuver through the tangled red tape of policies, expectations, institutions, regulations, and professional obligations. This has a grad student dancing and staggering back and forth through research and teaching and negotiating and compromising on the substance of their scholarly practices. This is the messiness of epistemology in practice. These processes are all swept away in popular debates around free speech in the academy. Such arguments are far too easy because they ignore how the academy functions as a complex institution and community.
And don’t get me wrong. None of these processes or practices are immune from criticism. But that is an entirely different discussion than the one being advocated by Lindsay Shepherd. Academic freedom is certainly important, but so are the ways in which it can and cannot be practiced. University administration, faculty associations, and student and labor unions are constantly in friction over how these limits should take shape. These are discussions that are always already happening and do not get near the press attention that Shepherd’s employment bungle has attracted. If the academy is in crisis, its critics are focusing on the wrong issues.
Anonymous communities can easily be mistaken for a thick mess of senseless social interactions. At least, that is how I saw this world when I first decided to study anonymous communities for my Master’s thesis. I thought I would study how surveillance operates in anonymous social media applications—specifically, a very popular (at the time) application called Yik Yak.
I first downloaded the app a month before I decided to dedicate two years to it, after my roommate had convinced me to check it out. What I found was a seemingly infinite central feed of anonymous comments sorted by a slurry of up-votes and down-votes. The Yak feed is tied to a geolocational system that connects the app to particular locations. My Yak was the Queen’s University Yak. It was a busy feed, and it was constantly changing. To me, it seemed a chaotic and nebulous thick tangle of associations—a fun challenge for a scholar following an Actor-Network-inspired philosophy.
The popular posts stood out from the unpopular posts thanks to the upvote/downvote feature. It was kind of like a mashup of Twitter and Reddit with a touch of anonymity.
After a stint of digital ethnographic work and a ton of interviews with enthusiastic and committed users, I began to see something else, something that, as an outsider, was invisible to me at first. There was an elaborately balanced Yik Yak community. As Gary T. Marx asserts, anonymity is entirely a social process. The only way for anonymity to occur is through a faceless interaction with another faceless person. This includes social regulation, exploitation, and oppression, but also playfulness and a culture of care.
I would like to play with a concept I’m calling (a)social. The ‘a’ can be read as a negation. It can also stand for anonymity. But mostly, it marks a society that remains almost entirely faceless: a community of people interacting around nothing more than posts from people who occupy a similar space and similar cultural values.
Though I have major problems with the corporate side of Yik Yak with their capitalist motives and try-hard branding schemes, their application has facilitated the construction of an elaborate community. It’s created an (a)social experiment. It is a community that both contains a culture of trolling and a culture of care.
All things are a collective endeavor. The (a)social communities are also a collective endeavor. In Donna Haraway’s most recent philosophical publication, Staying with the Trouble, she discusses her concept of sympoiesis—a collective unfolding of reality. This collective includes everything. All human, inhuman, and nonhuman components that are threaded into the collective mess.
When we load Yik Yak onto our mobile phones and post snippets of thought to the main feed (or engage in grueling arguments over all manner of controversies in the comments), we work with silicon, wires, code, telecommunication companies, algorithms, molecules, humans, bots, and entire scaffoldings of bureaucracies, legal frameworks, and governments. Interacting with the Yak spans the world over.
Furthermore, the Yak’s platform allows particular functions and blocks others, shaping its users to interact in particular ways. Yik Yak imposes standards through its Code of Conduct, which it enforces with algorithms that scan for offensive keywords. And it sometimes changes up everything in an update (as when it removed the app’s main feature, anonymity). These are the institutional forces that shape and provide stability to the community.
However, I have noticed that there is something more powerful at work in maintaining the community. The mess of interactions among users balances out particular norms and ways of acting. This happens through both the comments section and the up-vote/down-vote feature. These are the vernacular forces that generate norms and cultures. Certain topics, often offensive ones, are down-voted (a score of -5 deletes the comment from the feed). This vernacular power, though institutionally enabled, allows for the regulation of trolls and bullies without Yik Yak’s employees ever having to get involved.
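For the programmatically inclined, the interplay of these two forces can be sketched in a few lines of code. This is purely illustrative, since Yik Yak’s real moderation code was never public; the keyword list, the class names, and the helper functions here are my own inventions, but the sketch captures the two mechanisms described above: an institutional keyword filter and a vernacular down-vote threshold.

```python
# Illustrative sketch only: Yik Yak's actual implementation is not public.
# It models the two overlapping forces described above: an institutional
# keyword filter and a vernacular vote threshold (a score of -5 removes a post).

BANNED_KEYWORDS = {"slur1", "slur2"}   # hypothetical institutional blocklist
VOTE_THRESHOLD = -5                    # score at which a post disappears

class Yak:
    """One anonymous post in the feed."""
    def __init__(self, text):
        self.text = text
        self.score = 0

    def vote(self, delta):
        # Record an up-vote (+1) or down-vote (-1) from a community member.
        self.score += delta

def institutional_filter(yak):
    """The platform's enforcement: block posts containing banned keywords."""
    return not any(word in yak.text.lower() for word in BANNED_KEYWORDS)

def vernacular_filter(yak):
    """The community's enforcement: drop posts voted below the threshold."""
    return yak.score > VOTE_THRESHOLD

def visible_feed(yaks):
    """A post survives only if it passes both forces."""
    return [y for y in yaks if institutional_filter(y) and vernacular_filter(y)]

# Example: a troll's post is voted out of the feed by peers alone,
# without any employee or keyword list ever getting involved.
troll = Yak("offensive trolling")
banter = Yak("midterms are rough, eh?")
for _ in range(6):
    troll.vote(-1)
banter.vote(+1)
feed = visible_feed([troll, banter])
```

The point of the sketch is the last comment: the vernacular filter does most of the day-to-day work, while the institutional filter sits underneath as a backstop.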
(a)social sympoiesis initially looks like a senseless and dense knot of relations. It’s noisy and confusing. But once, as an ethnographer, you begin the arduous work of untangling these associations—it begins to look like every other community. Despite all of the contradictions, despite the arguments, the controversies, and the confusing faceless interactions—the Yak community is able to balance out, stabilize, and “hang together” as a coherent whole.
Such an (a)social collective is not shielded from the larger world, though. Once Yik Yak decided, for whatever reasons or motivations, that its users didn’t want to be anonymous and forced every user to adopt a handle (while suggesting they link their Facebook page)—the entire community collapsed. All that is left are groups of Yak “refugees” with nowhere to go but to be visible to the world.
This is the third and final post in a brief, un-academic series about my personal experience of living in China’s troubled Xinjiang region, and the censorship both online and offline that it entailed. This functions largely as a final whimsical anecdote and a conclusion. You can read the background information here, and several other anecdotes from my time in China here.
I previously wrote about having my phone service shut down for using a Virtual Private Network to circumvent the ‘Great Firewall’ and use Facebook, Skype, and other foreign apps.
Well, eventually Pokemon Go was released, and several foreigners in my social circle downloaded it and started playing. Given that Pokemon Go makes use of Google services to function, this was only possible by running the game through a VPN–the same kind that had gotten me shut down several months before.
Not eager to be an unwilling participant in a supposed clandestine mapmaking operation, but a childhood lover of Pokemon, I knew I had to get back online.
A friend helped me register my passport with a different cellphone carrier from the one that had shut me down, and I finally bought a new SIM card. By that time we knew I would be leaving China within a few months anyhow, so I went for broke and kept my VPN on 24/7. I didn’t end up getting shut down a second time, though it’s possible that if I had stayed it would have happened eventually.
What was curious to me was that while playing the game, I regularly found evidence of other players active in my area, despite having to use a VPN for it to work, and reports that it wasn’t supposed to function in China at all. One day I decided to use the in-game clues (active lure modules) to find others who were playing. After an hour of wandering from pokestop to pokestop, and setting a few lures of my own to draw out other players, I ran across three young guys in front of a movie theatre. It suddenly dawned on me that my Chinese vocabulary included exactly zero Pokemon terms. In the end I simply showed them my phone and smiled. They showed me theirs and laughed, and we all spent about ten minutes trying to get to an inconveniently placed pokestop.
I wish I could properly follow up on Pokemon Go in Xinjiang. The number of players I found evidence of in Xinjiang was initially surprising, but it shouldn’t have been. The Chinese are notorious for their zealous adoption of mobile games, and the restrictions on Pokemon Go were relatively easy to circumvent. I even had a ten year old ask me to recommend a VPN service one day after class.
I later learned that at that time Pokemon Go was unplayable even with a VPN in most of China, even in major cities like Beijing and Shanghai. But it was functioning well enough in Xinjiang, one of the more sensitive and closely-controlled regions. I never made sense of that.
I’ve now taken my Pokemon adventure (and the more mundane aspects of my life) out of China. But there are certain remnants of the surveillance and censorship apparatus that stick with you even outside the country.
When I visited my father over Christmas, for example, he picked me up from the airport and we went straight to a restaurant for breakfast. “What’s Xinjiang like, then? Do the people there want independence like in Tibet?*” he said. My stomach twisted and I instinctively checked the restaurant to see who might have heard. Of course nobody present cared.
(* – This is an oversimplification of the Tibet situation, but this post isn’t about that)
A Chinese Christian friend of mine related a similar experience she had: after years of fantasizing about boldly professing her religion, when she finally moved to America she simply could not feel comfortable praying without drawing the blinds first. Similarly, my girlfriend has physically recoiled once or twice when I spoke the name of a well-known Chinese dissident out loud in our thin-walled apartment. Every time she’s caught herself and said aloud “Oh, right. Nobody cares here.”
China is not Oceania; there is not really anything like thoughtcrime. But there are speechcrimes. And when certain things are spoken, especially in a full voice, you know in your stomach that those words could get someone in trouble if the speaker isn’t careful.
Before I moved to Xinjiang I had it in my mind that I might like to study Western China when I eventually return to school to pursue a Masters in Anthropology. But now I’m no longer certain I can: as alluded to already, I met a wonderful woman in Xinjiang. We’ve been together for more than a year now, and we have since moved to the US so she can attend a graduate program. While we will certainly return to Xinjiang in the future, the continuing presence of her family there, as well as my girlfriend’s Chinese passport, make me ever-conscious of the Chinese government’s attitude toward those who are critical. Even though I am against extremism of all kinds, and believe that independence would fly against the interests of those living in Xinjiang, the caveats I would attach to those positions are likely unacceptable to the regime.
And so, perhaps even what I’ve written here is too much to say.
If you have questions or requests for clarification, please don’t hesitate to comment below. And as a good friend regularly says, “Every day’s a school day,” so if you’d like to suggest a correction or a resource, or if you otherwise take issue with something I’ve said, please don’t hesitate to comment either. If there is interest, I would love to contribute to Socionocular again.
What follows are several anecdotes from my year and a half in China regarding the topic of internet (and other) censorship and the atmosphere of distrust and paranoia it fosters.
In my first year in Ürümqi I regularly attended a weekly English club with a close network of Chinese professionals. We ate in a private dining room at a small mom-and-pop restaurant, and it was an intimate-enough group that sometimes conversations turned political. There were a small few who spoke without hesitation, but without fail somebody would get up to close the door before having their say.
I remember it was during one of these conversations that a relative newcomer to the group asked me out of the blue if I ever called home to Canada.
“No,” I told him, “I just use the internet.”
“Good,” he replied, “Someone might listen, if you called.”
This sort of caught me by surprise, so I probed him: “What, like American spies?”
He shook his head no, but then thought about it. “Maybe them too. But I mean the Chinese police.”
There was also a (in my opinion) completely reasonable belief among many of the foreigners that our apartments might be bugged as well. A friend who attended college in China over a decade ago says he was told flat-out that the foreign student dormitory was being recorded, but not monitored. “If something happens,” he was told, “They can go back and review the recordings.”
I visited another friend’s apartment once, and it came up in conversation that he believed his place was definitely bugged. When I asked him how he knew, he said that another foreigner had lived in it before; he had been friends with him and spent a lot of time visiting. One day a young policeman in the neighborhood warned him to be careful what he said when he visited. Despite the ominous warning, he liked the neighborhood and tried to rent an apartment nearby, but was blocked by the police. I know others who were barred from this neighborhood as well. Finally, though, after his friend moved to a new home, he contacted the landlord and inquired about the old apartment. He was given permission to move in by the same police who had told him it was impossible before. The popular opinion was that they probably approved it because they wouldn’t need to go through the trouble of re-bugging the place.
Again, it isn’t to say that everyone’s apartment is bugged. But it’s very telling to me that whenever the topic came up, the verdict was usually ‘Yeah, could be,’ and never ‘No, stop being paranoid.’
It isn’t just the foreigners who are so concerned, either. In fact, many locals are subject to more immediate, more real, and more invasive surveillance. Early in my stay in Xinjiang I made friends with a university student who offered to show me around the city. We kept in contact and enjoyed chatting from time to time. On our third or fourth meeting he confessed to me that he was a practitioner of an illegal religion. He said he was never worried about telling foreigners this, because foreigners never seemed to care. “But,” he gestured to the students sitting at the table next to ours “If I told you this in Chinese I might get in trouble.” He recommended some reading materials I should look up, and then spent some time laying out his burdens: at college many of his classmates regularly had their phones confiscated and searched. He showed me both of his phones: one for storing his religious materials, and the other one ‘clean’ so he could hand it over to be searched without worry.
For all the presumed monitoring and censorship, most people I associated with largely got on with their lives without worrying about it too much. It was occasionally a topic of conversation, or a bit of a game (such as conspicuously whispering pro-China slogans into lamps), but it became a more immediate personal concern in the summer of 2015.
I only briefly mentioned the “Great Firewall” before this, partly because its reputation precedes itself. What some people don’t realize, though, is that most of the time it isn’t terribly difficult to circumvent. There are multiple free Virtual Private Network (VPN) options that will allow one to access Facebook and Youtube, and most foreigners I know made use of one or two VPNs regularly. Some Chinese people I knew also made use of VPNs, but others considered them too much of a hassle for too little benefit (I was once told, “Why go through so much trouble to use the foreign internet? The Chinese internet has everything I want.”). Word started to circulate that China Mobile, one of the larger cellular carriers, was shutting down service to those running VPNs on their phones. One night at a regular foreigner hang-out, a friend told me about his experience. He had his phone service cut and went to China Mobile to have it reconnected, but was referred to the Public Security Bureau. The PSB instructed him to delete all “foreign communication apps,” including Facebook and Skype, and submit his phone for inspection before it could be re-activated.
It was clear that the network could tell if a phone was making use of a VPN, but the shut-downs seemed random. A little less than half of my friends got shut down, and I myself continued to use my phone without issue for another six or seven months after that. At the end of January 2016, though, my phone service finally stopped without warning or explanation. At first I thought I had just run out of credit, so I sank some money into my account to put myself back in the black. This didn’t work, so I told my company’s foreigner-handler that I had been shut down. She took me to the Public Security Bureau where they asked me to hand over my phone so they could have a look. I had changed my SIM over from my iPhone to the cheap Nokia phone I had bought in my first week in China. They observed that there were no foreign apps and told me my phone would be re-activated in two weeks. The whole visit took less than fifteen minutes.
They must have figured out I was being less than honest, though, because two months later my phone still hadn’t been reactivated. Or perhaps they lost the paperwork, or some other institutional failure caused a problem. In any case, I gave up on getting my phone service back for a time, and satisfied myself with hopping from wifi hotspot to wifi hotspot around the city for a few months.
Though there are others, these anecdotes are roughly representative of my experience in China. I’ve avoided providing too much biographical info, and changed a few details where they are inconsequential, to guard against the unlikely situation where this post blips on the Chinese radar.
I will conclude in a third and final post to discuss one last anecdote concerning a personal vice of mine, and the lasting echo of censorship that rings even after leaving China.
Unmanned aerial vehicles (UAVs), otherwise known as ‘drones,’ have increased in popularity over the past decades for recreational and commercial purposes. The number of drone purchases has risen dramatically, and it is projected to continue rising in the years to come. This is due in part to the technology being affordable, with prices ranging anywhere from $20 to $1,000 depending on the size and capabilities of the quadcopter.
This technology has showcased its ability to provide vital aerial perspectives for photography, which real estate agencies and freelance videographers have begun to put to use. Other prominent groups within society have begun to incorporate drones into their repertoire.
Companies such as Amazon seek to use drones for package delivery; law enforcement agencies have begun to see the benefits of using drones, especially when it comes to crime scene photography and search and rescue missions; and now journalists have also begun to use drones in their arsenal for better reporting of events.
The potential benefits of this technology are almost limitless, but so are its potential consequences.
To begin with, it helps to understand what ‘drones’ really are. Fundamentally, the technology is just a platform with propellers; because of this, it can be fitted for just about any task, and its aerial mobility proves advantageous in accomplishing the job.
Since 1911, humanity has witnessed the power and control that comes with air superiority. Left unchallenged and unchecked, control of the skies allows those with the power to deploy drones to use them unhindered, asserting their superiority over those subjected to the drone’s presence, all while operating the device from a safe and secure location, often far from the scene.
Though this technology may seem relatively new, it has been around since the 1800s, when balloons were used to carry bombs to enemy cities. The technology really got going during World War I, when Dr. Elmer Ambrose Sperry’s work on gyroscopic stabilization allowed for more precisely targeted strikes. The technology subsided for a while and was picked up again in the 1950s, this time serving surveillance and reconnaissance roles in military operations.
As they’ve continued to develop, drones have transformed into hunter-killer devices in wartime settings and are now being operated for police surveillance in domestic areas. However, the lineage of these devices is not so clear cut, as hobbyists’ remotely piloted airplanes have also contributed to the sudden rise of this technology in recreational settings.
Now, technology itself is neither good nor bad; it is typically taken as neutral. What is important is who is using the technology and for what purpose, coupled with how the intended target perceives the drone. This is where the debate and controversy lie. With the constant maneuverability and all-encompassing visual surveillance that drones are capable of, questions emerge: who really benefits from this technology when it is in use? Which groups benefit when the police use them? When corporations use them? When journalists use them?
With the emerging trend of police drones, it will be no surprise to see security rhetoric used as a tool to convince the populace that we need drone surveillance to feel safer and to make it easier to catch the ‘criminal.’ This is exactly what was done with the implementation of CCTV (closed-circuit television) cameras. In reality, drones may be used as a tool of control that benefits the rich and powerful and is deployed against the disenfranchised, resulting in particular communities and populations being disproportionately exposed to police surveillance.
In addition to providing a hindrance and an interpretation of a violation of our privacy rights.
Drone technology has the capacity to violate our privacy rights. However, this violation can be obscured by the shaping of public understanding mentioned above. This illusion of security is known as ‘security theater,’ a term coined by Bruce Schneier to describe countermeasures that provide the feeling of improved security while doing little or nothing to actually achieve it.
What is surprising is that this is already happening, just in another medium: data is routinely collected on everyone. Government and corporate agencies can easily identify your behaviors and whereabouts, stored in carefully constructed profiles, simply by analyzing the data from your phone or computer.
A drone’s visceral presence amplifies how we perceive drones and their impact on our privacy. The thought of being watched, and the loss of control over surveillance, puts individuals in a state of unease. It’s right there, potentially watching you, and it may require a lot of effort to do anything about it.
The drones are here and their flight has begun. It will be important to note who is using these devices and for what purposes. The use of such technologies is often a reflection of their users.
I recently completed eighteen months of living in China’s far-western province of Xinjiang. As part of the coming-home process I contacted Kyle and offered to write a brief account of my experience in the ‘internet censorship laboratory of the world.’ What follows is a whirlwind of thoughts, opinions, and personal anecdotes that I will be the first to admit require much fact-checking and cross-referencing. Please consider them pages torn from my personal journal and shared with readers of Socionocular for their curiosity value.
One random day in mid-2014 three of my soon-to-be coworkers received text messages from the propaganda bureau of Ürümqi, the capital of Xinjiang Uyghur Autonomous Region in far western China. The messages reminded them that foreigners weren’t to be trusted, and that they must not share secrets with outsiders.
Which foreigners were these good Chinese citizens supposed to be wary of? And what secrets did three English teachers possess that could possibly compromise the safety of the nation? When later I asked these questions I betrayed my newcomer status. I would eventually conclude that all foreigners are suspect, especially in Xinjiang, and that the point is not so much to safeguard secrets as much as it’s to maintain the atmosphere of low-grade xenophobia.
The question that possessed my local friends was more pointed: why them? Broadcast text messages signed by the propaganda bureau weren’t uncommon, but this message was specific in its content and its recipients. For one, even though there were numerous foreigners working out of that office, only three of the more than two dozen Chinese staff got this particular message. As they chatted about it over lunch, they tried to work it out. One girl was dating a foreigner; the other was sleeping with one; the third was very close to a foreigner in a chaste, conservative Christian un-relationship that everyone could see through. But other staff had been so close with foreigners before. Besides, who would have been interested but inconspicuous enough to report these various liaisons to the propaganda bureau? And why would they bother?
The conclusion they eventually arrived at was that all three had used their ID cards to buy a SIM card for ‘their’ foreigner. That was the link.
And the phone company and Propaganda Bureau were evidently watching closely enough to notice.
To sign up for social media in China, most popular services require authentication with a mobile phone number. To get a mobile number, one must register their government ID with the phone company before being given an activated SIM card. If the pieces fit together correctly, anonymity is impossible on the Chinese internet. While I have friends who assure me that one can sever a link in this chain elsewhere in China, it’s much more difficult in Xinjiang.
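The chain described above (government ID to SIM card to social media account) amounts to a pair of database lookups. Here is a toy sketch of that join, with every handle, number, and ID invented for illustration, showing why buying a SIM for ‘your’ foreigner links their online activity back to your identity:

```python
# Toy sketch with entirely invented handles, numbers, and IDs: how
# real-name SIM registration can collapse online anonymity. Each
# database alone looks harmless, but a simple join walks from a social
# media handle back to the government ID used to buy the SIM card.

sim_registry = {                      # phone company: number -> registered ID
    "+86-131-0000-0001": "ID-0001",
    "+86-131-0000-0002": "ID-0002",
}

social_accounts = {                   # platform: handle -> verification number
    "@anonymous_blogger": "+86-131-0000-0001",
}

def deanonymize(handle):
    """Follow the chain: handle -> phone number -> government ID."""
    phone = social_accounts.get(handle)
    return sim_registry.get(phone) if phone else None

# The 'anonymous' account resolves to a real-world identity in one step:
print(deanonymize("@anonymous_blogger"))  # -> ID-0001
```

The point of the sketch is that severing any one link, such as an unregistered SIM, breaks the whole join, which is precisely why registration is enforced so tightly.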
The reason, I suspect, is that the stakes in Xinjiang are higher for the government, and so the fist is a little tighter. Like Tibet, Xinjiang is an autonomous region principally populated by China’s minorities, not the majority-everywhere-else Han. The Uyghurs who lend their name to the Xinjiang Uyghur Autonomous Region are a majority-Muslim Turkic ethnic group who share neither language nor culture with Beijing. The history of the region is complex and contested, and supporting the wrong narrative, or questioning the ‘right’ one, is considered subversion.
Twentieth-century Xinjiang was marked by episodes of pan-Turkic and separatist thought. Two abortive independent states were declared in the past century, both called East Turkestan; both collapsed quickly. In the 21st century Beijing has bundled separatism with extremism and terrorism, labeling them ‘The Three Evils’ which must be opposed at every level of society. The official line, packaged with China’s notorious control over the mainstream media, has had the effect of conflating each of the three evils with the others.
The result of the party’s stranglehold on most of the news-media in China (if you’re curious, read The Party Line by Doug Young) is that the really interesting stuff is happening online. In China, the internet and social media have become somewhat of a haven for off-message thinking, mostly in the form of jokes. As mentioned, true anonymity is difficult on Chinese social media, but the Chinese language’s rich ability to cast puns has been used as a tool to avoid automated censorship, and make subtle jabs at those in power.
But the government has some surprisingly grassroots-seeming tactics of its own, such as its ability to rouse patriots to comment on the internet in support of the party (mostly in Chinese, but also in other languages). The use of paid government commenters is also an open secret. These paid internet posters are derogatorily called 五毛 (‘wǔmáo’), meaning ‘five mao’ (a unit of currency), because that is supposedly the going rate for one internet post (0.5元 is about $0.07 USD).
I’m sure you can imagine that in this atmosphere it’s impossible to take others at face-value unless you are very close with them. Very often people will claim apathy or ignorance when asked uncomfortable questions, or echo the official line even if they roll their eyes while doing so. Contrary opinions are not shared easily, and paranoia is pervasive.
There is much I haven’t even touched on, some of which has been discussed at length by others (such as the Great Firewall blocking Chinese citizens’ access to many foreign websites). Instead of repeating myself (or others), I’ll conclude this introduction to the situation here. Shortly I will follow up with another post containing a series of anecdotes which touch on this self-censorship and paranoia.
Let’s look at some examples. First, Google’s privacy policy:
“We collect information to provide better services to all of our users – from figuring out basic stuff like which language you speak, to more complex things like which ads you’ll find most useful, the people who matter most to you online, or which YouTube videos you might like”.
Let’s look at Facebook.
“We give you the power to share as part of our mission to make the world more open and connected. This policy describes what information we collect and how it is used and shared. You can find additional tools and information at Privacy Basics”.
Another example of Facebook’s stated good intentions:
“We work with third party companies who help us provide and improve our Services or who use advertising or related products, which makes it possible to operate our companies and provide free services to people around the world”.
What is missing is the way these companies use user data to swing huge profit margins. This is arguably the most important issue that transparency is supposed to address. Both Facebook and Google, the two Silicon Valley tech giants, make a strong claim that they do everything to better serve their customer base; however, their intentions are geared more towards data monetization.
However, this does not mean that such data can’t be re-identified (for a more in-depth explanation, check out this revealing paper). It is often said in the surveillance studies community that metadata is more revealing than data. Metadata isn’t merely unidentifiable and arbitrary; if it were, why on earth would it be treated like the gold of the digital age?
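To make the re-identification point concrete, here is a toy illustration, with entirely invented users and records, in the spirit of published re-identification research (not the paper linked above): even a few ‘anonymous’ metadata points, such as which cell tower a phone pinged at which hour, can be enough to single out one person in a dataset.

```python
# Toy illustration (invented data): why metadata is so revealing.
# No names, no message content -- just (tower, hour) pings -- yet a
# couple of observed data points can isolate a single user.

call_metadata = {
    "user_a": {("tower_12", 8), ("tower_31", 12), ("tower_12", 18)},
    "user_b": {("tower_12", 8), ("tower_07", 13), ("tower_44", 19)},
    "user_c": {("tower_02", 9), ("tower_31", 12), ("tower_02", 17)},
}

def matching_users(observed_pings):
    """Return every user whose metadata contains all observed pings."""
    return [user for user, pings in call_metadata.items()
            if observed_pings <= pings]   # subset test

# Two observed (tower, hour) points already narrow three users to one:
print(matching_users({("tower_12", 8), ("tower_31", 12)}))  # -> ['user_a']
```

With realistically sparse human mobility data the same logic scales up: each additional observed point prunes the candidate set dramatically, which is why ‘anonymized’ metadata is rarely anonymous.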
Though Google and Facebook are certainly important and high quality tools, these issues are still exceedingly problematic and must be addressed. One reason we should be concerned is that we rely on these social media tools several times a day to maneuver through our social and cultural lives.
It isn’t just a corporate product anymore; it is an indispensable piece of social and cultural capital. Don’t believe me? Try to quit Facebook and/or Google. I bet you can’t. It is very difficult to engage productively in our communities without using what these corporations have to offer. These corporations are silently bleeding us dry of our privacy in order to establish high profit margins. And I am willing to bet that most people don’t know the extent of these injustices.
The more privacy that we allow these corporate entities to take from us, the more they will push to widen the gaping hole in our already transparent lives. This is a terrifying prospect for the politics of freedom of speech and expression. You may not be hiding something now, but we all hide something eventually. And we have a right to do so. Take, for example, the right to obscure your sexuality. This is a touchy subject, as the homophobic and heteronormative ideologies that lead to hate crimes tend to fluctuate. One day you are accepted for who you are; the next, you might be beaten to a pulp or socially stigmatized. You may lose your job. Your house. And in some terrible cases, your life. We need the right to hide. And we need the right to choose anonymity.
So what now? Don’t be so complacent. The tech giants are simulating transparency without telling the whole story. They know very well that if their surveillance capacities are pulled into the spotlight, they will be forced to change. But they also know very well that no one reads these documents anyway. And because no one reads them, they are left quite untouchable.
Technology and its surveillance capacities are constantly changing and improving. However, our laws and policies are not keeping up. This is why it is so important to speak out and inform those around you: as academics, as citizens, and as consumers.
Pokémon Go is lulling the world into a humongous augmented distraction, a distraction that is covering up some pretty intense politics. It is almost as if we stepped into Ernest Cline’s Ready Player One, where distraction through virtual reality meets the war between anonymity and surveillance.
It has been well publicized that this new app, which is fueling a Pokémania (the nostalgic resurgence of interest in Pokémon every time a new rendition of the game is released), has some rather arbitrary and invasive access to your mobile phone’s data, particularly unhinged access to your Google account and other features of your mobile device.
What is Pokémon Go? The question almost seems pointless now, given the popularity of the game, but for those of you who have not tuned in to the Pokémania: Pokémon was a TV show released in the late 90s that became dream fuel for a generation of children and young adults. It featured a young boy, Ash Ketchum, who embarked on a journey to capture Pokémon in a technology known as the “pokeball” under the direction of the Professor (a man who studies Pokémon). After a Pokémon is caught, the young boy (like thousands of other Pokémon trainers) aspires to train it to battle other Pokémon.
Shortly after the show caught on, Nintendo released Pokémon Red and Blue for the Game Boy. These games became an absolute hit. I remember walking to school with my eyes glued to my little pixelated screen, traversing roads and dodging cars while battling Pokémon and trading them with my schoolyard peers, the game’s slogan repeating through my cranium: “Gotta Catch ’Em All”.
Nintendo has continued to release Pokémon games for its various platforms up to the present. Each successive release brought an obsessive and nostalgic excitement that took over the gaming community, or at least anyone who had grown up playing Pokémon Red and Blue and collecting the Pokémon cards.
Pokémon Go is a game played on a smartphone that uses geolocational data and mapping technologies to turn the phone into a lens peering into the Pokémon world. Through the interface of your mobile device, you can catch Pokémon wandering the “real” world, battle through gyms, and find items that will aid your journey. It augments the world around the user so that everything and everywhere becomes part of the game.
Just like its predecessor, a game known as Ingress, many of the geo-features in the game are set up around important places: art exhibits, cultural or historical sites, and parks. Following the maps can lead you on a productive tour of a city’s geographical culture.
I want to explore the obsessive and nostalgic excitement through a techno-socio-cultural lens. I will unpack this critique into three parts: (1) the sociology of privacy, (2) Big data and algorithmic surveillance, and (3) the culture of nostalgia and the digital sublime.
Before I continue with this post, I want to assert that this is not an all-in-all terrible, megalomaniac, Big Brother type of game. Pokémon Go is enabling new ways for people to engage in the social world. Check out this sociological blog post exploring just that. However, it would be silly not to apply a critical perspective to it.
There are some caveats I’d like to apply to my analysis: (1) Pokémon Go is not an immature or irrelevant activity; millions of people of all ages and cultural backgrounds are playing it, which means it has a ton of significance. And (2) the people playing Pokémon Go are not zombies or passive consumers; they are intentional and unpredictable social actors who have the ability to understand their situation.
Sociology of Privacy
One thing that boggles the minds of surveillance studies scholars is how the vast population of people using social media and mobile applications does not seem to care about the invasive surveillance embedded in everything they use.
In my own interviews with Facebook users in 2014, many of my participants claimed, “I have nothing to hide”: a pervasive mentality that enables big corporate and governmental entities to gain access to, and control over, large swaths of data. This nonchalant attitude cedes massive ground in the dismantling of our right to privacy. Such an attitude is not surprising, though, as the entire ecosystem of social media is set up to surveil.
David Lyon, in his book Surveillance After Snowden, asserts that privacy is generally seen as a natural and democratic right that should be afforded to all citizens, but admits that a problem lies in the fact that informational privacy is not as valued as bodily or territorial privacy, even though information, data, and metadata can be much more revealing than bodily or territorial surveillance.
Lyon notes three important points about privacy that are all very relevant to the current epidemic of Pokémania: (1) the collection of information has now been directly connected to risk mitigation and national security, implying that we are not safe unless we are surveilled; (2) everyone is now a target of mass surveillance, not just the criminal; and (3) the data collected through mass surveillance are used to create profiles of people, profiles that may be completely inaccurate depending on the data collected, though you will never know the difference.
I would like to add a fourth: how can the data be used to swing massive profits? Niantic, the corporation behind Ingress and Pokémon Go, uses its privacy policy to legitimate the “sharing” (read: selling) of data with governments and third parties. Government surveillance is often the focus of criticism, but capitalist corporations are rarely held accountable to ethical practices. Who is selling this data? Who is buying it? And what is this monetized data being used for?
As Lyon asserts, privacy is not just an individual concern; it is important socially and politically for a well-balanced democracy. Edward Snowden has been known to say, “It’s not really about surveillance, it’s about democracy”. While we continue to allow powerful groups to chip away at our privacy for entertainment, we give up our ability to criticize and challenge injustice.
Snowden reminds us that when we give up our democracy to the control room—there is zero accountability, zero transparency, and decisions are made without any democratic process.
So while we are distracted trying to catch a Snorlax at the park, we are giving away more and more of our lives to mysterious and complicated groups that want nothing but large profits and control. For a much more scathing review of this, see this blog post on surveillance and Pokémon.
Big Data and Algorithms
So what about the data? What is Big Data? First off, it’s all the craze right now, as data scientists, social scientists, policy makers, and business gurus scramble to understand how to use, abuse, and criticize it. Big Data sits at the intersection of two large disciplines, statistics and computer science: it is the collection and analysis of unthinkably large amounts of aggregated data, carried out largely by computer software and algorithms.
boyd and Crawford (2012) offer a much more precise definition. They assert that Big Data is a “cultural, technological, and scholarly phenomenon” that can be broken into three interconnected features:
Technology – Computer science, large servers, and complicated algorithms.
Analysis – Using large data-sets compiled from technological techniques to create social, political, cultural and legal claims.
Mythology – Widespread belief of the power of Big Data to offer a superior knowledge that carries immense predictive value.
The big problem that remains is how to find, generate, and collect all of this data. In social media and video games, much of this is accomplished by offering a “free” service to consumers, who take on the role of the “prosumer”: a social actor who both produces and consumes the commodity they are “paying” for.
On social media (like Facebook), as users interact with each other they produce affective or emotional data, through liking, sharing, and discussing things, which is then collected by algorithms and fed back into the system as targeted advertisements. The user is complicit in both the production and consumption of that data.
The user is given free access to the social media platform but pays for it by giving the platform a transparent window into their life, which is then monetized and sold for large profits. People’s reactions to this form of surveillance vary: some offer scathing criticisms, others don’t give two shits, and some act a little more cautiously.
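The prosumer feedback loop can be sketched in a few lines of code. This is a hypothetical toy, with invented interaction categories and ad names, not any platform’s actual pipeline: interactions are logged, aggregated into an interest profile, and the profile is fed back as targeted advertising.

```python
# Toy sketch of the prosumer loop (all categories and ads invented):
# the user produces data while socializing, an algorithm aggregates it
# into a profile, and the profile is consumed back as targeted ads.

from collections import Counter

interaction_log = [                 # produced "for free" while socializing
    ("like", "fitness"), ("share", "fitness"),
    ("like", "travel"), ("comment", "fitness"),
]

def build_profile(log):
    """Aggregate raw interactions into an interest profile."""
    return Counter(topic for _action, topic in log)

def pick_ad(profile, inventory):
    """Feed the profile back: serve the ad matching the top interest."""
    top_interest, _count = profile.most_common(1)[0]
    return inventory.get(top_interest, "generic_ad")

profile = build_profile(interaction_log)
print(pick_ad(profile, {"fitness": "protein_powder_ad",
                        "travel": "flight_deal_ad"}))  # -> protein_powder_ad
```

The same few likes and shares thus sit on both sides of the transaction: the user produces the data and then consumes its monetized reflection.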
Why is this important for Pokémon Go? Because you trade your data and privacy for access to what Pokémon Go has to offer. It is incredibly clever of Niantic: using the nostalgic Pokémania to usher users into consenting to ridiculous surveillance techniques.
It gets worse. As Ashley Feinberg at Gawker identified, the people responsible for Niantic have some shady connections to the international intelligence community, causing some in the surveillance studies field to fear that Pokémon might just be an international intelligence conspiracy (it sounds crazy, but it makes complete sense).
David Murakami Wood coined the concept of “vanishing surveillance”: a phenomenon, intentional or unintentional, in which the surveillance capacities of devices fade into the background, leaving users unaware, or at least not fully aware, that they are being watched. Pokémon Go, an innocent video game that is enabling new ways of being social in public, becomes an invisible surveillance device that may have international and interpersonal consequences. And it is the Pokémon themselves that allow the surveillance to vanish from sight and mind.
A Culture of Nostalgia
So what drives people to consent to all of this? What kinds of cultural patterns allow and shape us to an almost fanatical state when a Pokémon game is released?
The first factor within the culture of Pokémon is its appeal to nostalgia. Jared Miracle, in a blog post on The Geek Anthropologist, talks about the power of nostalgia. It taps into the childhoods of an entire generation, even moving outside the obscure boundaries of gamer culture into the larger pop-cultural context. It wasn’t only geeks who played Pokémon; it was just about everyone. This might explain why so many people are wandering around with their cell phones held out before them (I saw them wandering around Queen’s campus today, while I was also wandering around).
However, it is not all about nostalgia. I believe that nostalgia plays a role in a bigger process: the digital sublime and the mythologizing of the power of media.
What is a mythology? Vincent Mosco, in his book The Digital Sublime, defines myths as “stories that animate individuals and societies by providing paths to transcendence that lift people out of the banality of everyday life”. They are a form of reality that represents how people see the world from the perspective of everyday life.
Myths are also implicit in power. “’Myth’ is not merely an anthropological term that one might equate with human values. It is also a political term that inflects human values with ideology… Myths sustain themselves when they are embraced by power, as when legitimate figures… tell them and, in doing so, keep them alive”.
These myths, along with nostalgia for Pokémon paraphernalia, generate the digital sublime: a phenomenon that has us going head over heels for new technology. The mythologies that support it can be positive or negative.
Positive mythologies might sound a little like this: “Pokémon Go is allowing us to leave our homes and experience the world! We meet new people and we are empowered by new ways of interacting with each other. Hurrah!”.
Negative Mythologies are also important: “Pokémon Go is creating a generation of zombies. People are wasting their time catching those stupid Pokémon. They are blindly and dangerously wandering around, falling off cliffs, and invading private property. Damn those immature assholes”.
Both of these mythologies cross over each other to colour the experiences of those who play and those who watch.
We need to be careful about generating mythologies about the capacity of games to facilitate freedom, creativity, and sociality. We also need to be careful not to apply too much criticism. Such mythologies not only create a basic, overly simplistic way of understanding gaming, surveillance, and human culture; they also blind us to nuance and detail that may be important to a broad understanding.
Drawing things together—A Political Economy of Pokémon
Taking a techno-socio-cultural perspective allows us to engage with Pokémon Go with a nuanced understanding of its positive and negative characteristics. It is possible to look at how this media creates a complex ecosystem of social concerns, political controversies, and cultural engagements with nostalgia, mythologizing, and capitalist enterprise.
Pokémon Go is indeed enabling a ton of new ways of interacting and helping people with mental illness get out of their homes to experience the world. However, we can’t forget that it is also an advanced technology developed by those with an interest in money and power.
Regardless of the benefits that are emerging from use of this application, there are still important questions about privacy and the collection and use of Big Data.
So Pokémon Go isn’t just enabling new ways of being social with the larger world. It is enabling new ways of engaging with issues of surveillance, neo-liberal capitalism, and social control through the least expected avenues.
As these problematics become more and more public, will we still trade off our freedom for entertainment?
We have all likely heard of the panopticon: an architectural design for a prison, thought up by Jeremy Bentham, that was supposed to maximize surveillance capacities so that prisoners always felt as if they were being watched, even when they weren’t. It consisted of cells arranged around a central guard tower that could watch every move of every prisoner, all the time. The guard tower, however, was made to be opaque, so the prisoners couldn’t watch the guards.
In 1975, Foucault borrowed this idea to illustrate his concept of disciplinary power in one of his most famous books—Discipline and Punish. The basic idea around Foucault’s use of the panopticon is that when people feel as if they are constantly being watched, they begin to self-discipline. The panopticon can refer to a prison. But it is meant to refer to society in general. Or many of the institutions in a society. The more people feel that they are being watched, the better they act. This watching could be through authorities, or even, your neighbors.
Though Foucault’s concept of disciplinary power is super important to many who study sociological theory—his example of the panopticon is overused and often misleading. It does not accurately represent the nature of surveillance in contemporary society.
The idea of the panopticon better characterizes a society of “total surveillance”: a complete, balls-to-the-walls, 1984, Big Brother-type dystopia. Thankfully, there is currently no technology on earth that allows for total surveillance. We may be a society of ubiquitous surveillance, but not a society of total surveillance.
So how do we “move beyond the panopticon”, as so many social and cultural theorists have been calling for? One useful theoretical framework builds on Foucault’s work: the concept of the oligopticon, proposed by Bruno Latour amid the incisive critiques of his Reassembling the Social.
Latour criticizes Foucault for drawing up a total surveillance “utopia” that is made of “total paranoia and total megalomania”.
“We, however, are not looking for utopia but for places on earth that are fully assignable. Oligoptica are just those sites since they do exactly the opposite of panoptica: they see much too little to feed the megalomania of the inspector or the paranoia of the inspected, but what they see, they see well…”
Latour is staunchly reminding us that something that is everything is nothing at all. The panopticon is made to be too perfect; it is made to see all. It is something that, as academics, we can’t possibly empirically record or understand. The oligopticon, by contrast, consists of countless scopes meant for watching, countless surveillance devices. They only see everything together, and because they rarely communicate, this could hardly be called “total surveillance”.
“From oligoptica, sturdy but extremely narrow views of the (connected) whole are made possible—as long as connections hold. Nothing it seems can threaten the absolutist gaze of the panoptica, and this is why they are loved so much by the sociologist who dream to occupy the center of Bentham’s prison: the tiniest bug can blind oligoptica”.
However, this does not entirely rule out the panopticon. As Kitchin and Dodge assert in their book Code/Space, the power of code and algorithms may someday be able to unite many streams of the oligopticon into a menacing panoptic machine. But given the unstable nature of scripting code, running code, and working hardware, such a machine would be liable to bugs, errors, and absolute mutiny. So don’t hold your breath.
The panopticon, for now, has its place—but it’s a more appropriate theme for a science fiction novel than a good work of social science or philosophy. It serves as a powerful reminder of where a ubiquitous surveillance society could lead us, but not as a very good characterization of surveillance today.