Mitigating the Negative Implications of Computing: Making Space for Debate

Tapan Parikh
Apr 7, 2018 · 7 min read

I read with great interest the recent set of recommendations by the ACM Future of Computing Academy (FCA) for mitigating (or, at least, articulating) the potential negative impacts of computing research on society. While I am very sure that those who framed this proposal only had the best of intentions, I think there are several dangerous aspects that must be considered before this recommendation becomes a standard in our community. Moreover, I believe the proposal as currently framed is insufficient for establishing appropriate discursive processes and frameworks for computing researchers to reason about and address these difficult and potentially contentious issues.

First of all, let me start with the positives. I think it is commendable that members of our community have started to grapple with these kinds of thorny challenges. I am also inspired that this initiative is being spearheaded by some emerging young leaders in our field. That this group is making this a core agenda item, and that they are taking a visible and vocal role in setting the future direction of computing research, both leave me with great hope that the future is in good hands.

I was also happy with the sections of the proposal that discuss the responsibility of computing researchers to consider and articulate the potential negative impacts of their research on society. It is absolutely necessary for this kind of critical reflection to occur more regularly, and not enough of us have been involved in doing so thus far. I also enjoyed the critique of the typical “green-washing” that occurs in many research papers and grant proposals — when interventions are motivated by a one-sided account of their potential positive impacts, rather than a balanced and objective discussion of both the negative and positive effects that could result from the implementation of a new idea or technology.

Where I became uncomfortable is when the proposal asks reviewers to assess grant proposals, papers and even community members based on their own determination of whether a particular research project or agenda is likely to do more harm than good. While it is reasonable to ask computing researchers to become more contemplative about these issues within their own work, what qualifies us to make these assessments of others? This is especially problematic within deliberative contexts that do not allow for open and transparent discussion to resolve differences of interpretation or opinion (like double-blind reviewing). I fear that giving reviewers the freedom to anonymously opine on areas outside their expertise risks giving them too much power to reject (or accept) ideas based on their own personal ideological commitments, which may not be informed by all relevant theoretical perspectives.

Overall, I find this proposal glib about the ability of individual reviewers (and researchers) to effectively assess these potential negative impacts. Suggestions like “if Kumail Nanjiani can do this, so can we” don’t necessarily help either. It is one thing to unleash a stream of tweets pointing out that technologists are hopelessly naive about these issues. It is quite another to address them in a theoretically informed and academically rigorous way that advances the state of our knowledge in the field. Isn’t the entire problem that computing researchers aren’t currently able to do this effectively? Asking them to begin a process of conscious reflection on their own work, on the basis of their limited prior knowledge, is one matter; asking them to make important decisions about the work of others is another entirely.

Nor is brushing up on some “social science literature” the solution to these woes. This ignores the fact that diverse social science disciplines, ranging from anthropology to economics, are far from neutral in their ideological and value commitments. The fact of the matter is that many of the issues we should be discussing are (or would be) hotly contested within and between these disciplines. Take, for example, the debate over modern, technologically advanced agricultural techniques (such as GMOs and large-scale industrial agriculture) versus more holistic, labor- and conservation-oriented approaches to agricultural production (including organic and agro-ecological techniques). This is a debate that has survived fifty years of theoretical and empirical reflection across multiple disciplines, including economics, anthropology, sociology, agronomy and geography (never mind that most of these disciplines don’t actually listen to one another). The idea that we can shortcut all of this hard work simply by introducing some new review criteria is naive at best, and dangerous at worst.

Which brings me to the question of whether review panels are really the right place to be having these potentially contentious discussions. The very nature of these panels is that they are typically limited to small numbers of “insiders”, that they are not subject to public scrutiny and dissection, and that the knowledge created within them is locked up within incredibly isolated and, dare I say, elitist social networks. Students usually do not benefit from these discussions, nor do practitioners, nor do community members who might be peripheral for various reasons. This seems to me exactly the wrong place to begin a vigorous and spirited debate about the societal implications of computing technology.

So, if not through review panels, then where should this debate occur? Well, for one thing, I would argue that it should happen within our research fora. I think we need to open up and expand the role of critique in computer science research. Too often this all-too-important critical work happens in one or another intellectual backwater — either as part of a community that “mainstream” computing researchers do not pay attention to, or addressing issues and topics in a way that does not speak directly to this research and these researchers. I mean, are there really researchers who are knowingly doing “anti-social” work?

We need to find ways to make critiques more visible to these researchers, and to make their work more accountable to these perspectives (and vice versa). This could happen by introducing critical tracks within existing computing venues, and by soliciting and encouraging work that takes a critical stance on current or prior research contributions. It would also mean expanding the methodological repertoire of these fields — including accepting theoretically and empirically motivated work that does not necessarily contribute any new “technical” ideas, but that can contribute to the debate and inform future work within the area. Another idea would be to include a “discussant” during conference sessions who would draw on critical perspectives — a common practice in other fields, including the social sciences. Not all of this needs to happen through peer-reviewed research either — blogs, opinion columns and editorials are also fine ways to begin a discussion of these kinds of issues, as amply demonstrated by researchers like Zeynep Tufekci, Danah Boyd, Quinn Norton and others.

Even more fundamentally, I would argue that we actually want researchers in the academy studying technologies with potentially negative implications. Otherwise, these technologies are bound to be invented behind closed doors within agencies like the NSA or companies like Palantir. I think it is wiser that this work happen in open research settings, where it can be subject to public debate and scrutiny (and, eventually, adequate regulation), rather than in secret, where it can be (and probably is being) developed and applied without the public’s knowledge. This is akin to security research that aims to demonstrate system vulnerabilities so that they can be patched and addressed, rather than pretending they don’t exist. Pandora’s box has already been opened — we can’t close it again; we can only be thoughtful and vigilant about what is let out.

Another question I would raise is whether the research process is where we have the most agency over these kinds of issues. One could argue that in some situations the potential for positive or negative impact is not inherent to the technology, but rather lies in how it is applied and regulated. This kind of rationale could be applied, for example, to nuclear fission, the steam engine, or even more fundamentally, to a primitive technology like fire — all of which have had both significant positive and negative impacts. In the realm of computing, many AI or robotics technologies, or even the Internet itself, might fall into this category. What to do, then, when the real issue is not the inherent value of the technology, but its judicious use and application?

In these cases, as I have argued before, there is ample room for computing researchers to become more involved in the social issues they feel strongly about. Many of us joined the academy because of our commitment to a set of ideals that are not well reflected in the commercial technology industry. Research is only one avenue through which to act on these ideals. As computing researchers, we have many channels for impact, including our teaching, community engagement, social entrepreneurship, public advocacy and direct involvement in policy-making processes. Previous generations of computer scientists were active participants in organizations like Computer Professionals for Social Responsibility (CPSR) and Computer People for Peace (CPP). More recently, researchers like Ed Felten and Helen Nissenbaum have had a profound impact on the policy sphere. Maybe it is time for us to recapture some of this spirit, and to work with like-minded people both inside and outside of the academy in support of initiatives that seek to improve the relationship between technology, society and human well-being.
