Together in Public

What Our Tech Ethics Crisis Says About the State of Computer Science Education

If you work in tech and you're not thinking about ethics, you're bad at your job



It seems as if hardly a day passes without some new tech ethics controversy. Earlier this year, the Cambridge Analytica scandal dominated news cycles, in part because it raised a host of ethical issues: microtargeting and manipulation of social media users, research ethics, consumer privacy, data misuse, and the responsibility of platforms like Facebook.

As an academic who studies both technology ethics and research ethics, I was active in the public discourse around the controversy (which mostly meant tweeting and talking to journalists). This was international news, and the world seemed blown away by tales of privacy violations, research misconduct, and mass manipulation of voters, all facilitated by a social media platform that in many ways has become embedded in the fabric of society. The level of surprise varied, but I heard a common refrain when it came to the root of the problem: all these software developers, tech designers, data scientists, and computer engineers are just so darn unethical.

At that time, a piece by former Google engineer Yonatan Zunger appeared in The Boston Globe under the headline “Computer science faces an ethics crisis.” The Cambridge Analytica fiasco was just more evidence of something we already know–that internet technologies (when working as intended!) regularly cause appreciable harm. As Zunger compared computer science to other fields of science and engineering that have had ethical reckonings–from eugenics to Tuskegee to nuclear bombs to collapsed bridges–he made a point that really resonated with me: “Software engineers continue to treat safety and ethics as specialities, rather than the foundations of all design.”

We see evidence of this frame of mind–that ethics is only a specialization, something that someone else does–every time anyone says “I just build things” when questioned about the ethics of a particular technology. Earlier this year, when asked about the “unintended side effects” of an algorithm that automates the process of identifying gang crimes, the computer scientist presenting the work responded, “I’m just an engineer.” The questioner, in turn, affected a thick German accent and responded with a lyric from a satirical song about the Nazi rocket scientist Wernher von Braun: “Once the rockets are up, who cares where they come down?”

Last year, Radiolab explored the emerging technology around fabricated video, including a neural network that studied hours and hours of video of President Obama until it could create extremely realistic video based on spliced-together audio. When one of the researchers was asked about potential downsides to the technology, she told the reporter that (to paraphrase) it was her job to build the tech and other people’s job to consider the implications. I was struck by how surprised she seemed by the question. In contrast, when I showed students in my ethics class the synthesized Obama video and asked them to imagine ways in which the technology might be used, the resounding answer was “Fake news.”

However, unlike Cambridge Analytica’s data-mining, which involved intentional bad actors, these technologies were most likely created with the best of intentions. Helping prevent crime is a laudable goal. The paper on “Synthesizing Obama” puts forth speech summarization as a potential use for this technology; you can also easily see the possible benefits for film editing. But good intentions aside, both of these technologies could have serious, even dangerous, drawbacks. We already know that predictive policing algorithms can inherit bias from their training data, and the consequences of false positives could be severe. And the implications of “deep fakes” are horrifying–not just for news but also for use cases like revenge porn.

The problem with “I’m just an engineer” isn’t the engineer’s inability to identify all relevant ethical implications–it’s that they don’t think it’s their job to do so. This view passes on all responsibility to someone else. It suggests that ethics is not a thing that everyone should be thinking about–not, as Zunger said, “the foundations of all design”–but is, instead, only a specialization. I just build things; someone else can think about the ethics.

There are ethics specialists, of course; I’m one of them, as are many of my collaborators who do research in the area of tech ethics. However, while a construction site has safety engineers who audit safety procedures, safety is also everyone’s job. You can’t leave live electrical wires lying around and then say, “Well, it was the safety engineer’s job to notice that and fix it.” Similarly, people who have studied how to think about ethics and who deeply understand the subject are critical–but that doesn’t let everyone else off the hook.

I am not surprised that this attitude of ethics-as-specialization is so prevalent in computer science–in part because we often teach it that way. For the past couple of years, a portion of my research has been devoted to exploring ethics education within computing. Ethics is often part of the computing curriculum; in order to earn accreditation, computer science programs at U.S. universities must provide students with “an understanding of professional, ethical, legal, security and social issues and responsibilities.” However, the most common model is a stand-alone ethics class, often taught at the end of a degree program–after a student has already spent years learning how to be a computer scientist.

Research on ethics education, not just in computer science but in other disciplines as well, suggests that “silo-ing” ethics is not a great model. Teaching the subject outside a technical context can result in students seeing it as irrelevant to them. Isolating it can make it appear as a side issue or as a public relations diversion. And silo-ing reinforces the idea that ethics is just a specialization–or, worse, not actually part of computer science at all.

In the early 1990s, a project funded by the National Science Foundation brought together experts on computing ethics to determine how best to teach it within computer science curricula. The conclusion was that ethics should be integrated into “the core of computer science”–that stand-alone classes within computer science are preferable to classes in other disciplines, but that integration of ethics into existing classes is even better. After these recommendations were published in Communications of the ACM, the leading professional computing magazine, the next issue saw a letter to the editor from a computer science department chair, stating, in part, “The most glaring problem is that proposed subject matter is not computer science. . . . discussing the social and ethical impact of computing is not doing computer science.” In other words, ethics isn’t for you to worry about, computing students; leave that to the philosophers.

I would like to think that attitudes have shifted favorably since then–and perhaps even more so in recent years, as we see a tech ethics controversy around every corner–but I still talk to plenty of tech professionals, teachers, and students who see ethics as a side gig. One solution I’ve heard is that product teams should all include someone who “does” ethics. To be fair, I think that hiring philosophers and social scientists into the tech industry is a wonderful idea–but I also think that ethics should be something that everyone is obligated to think about. Even abdicating that responsibility and saying that other people should do the work is itself a (poorly considered) claim about ethics. If you truly do think that ethics is someone else’s job, then at the very least you must make sure that someone else is doing it and actively engage with them.

How might attitudes change if, from day one, we taught consideration for ethical and social implications as an integral part of technical practice? What if we taught students, when they first learn to write code or build technologies, that thinking through the implications is a fundamental component of the work–and that skipping it is an error just as damaging as never learning to test or debug your code? When ethics only comes up in classes devoted to it, we reinforce the idea that ethics is an add-on; if we want computing professionals to think about ethics in their technical practice, it should also be part of technical classes.

Another problem with silo-ing ethics in a single class–particularly one required for computer science majors only–is that those who aren’t majors might then never be exposed to these frameworks. As an undergraduate, I took three programming courses as free electives, but I wasn’t a computer science major; I never heard about ethics or implications in any of these classes.

After all, you don’t need to have a degree to do harm with code–a reason why professional licensing for computer scientists would be challenging, despite how much intuitive sense such a measure makes. “Computer scientist,” or software engineer, or technologist, or whatever nomenclature you use, is a designation born of fuzzy boundaries. In the Cambridge Analytica scandal, many of the major actors might not have fallen under a licensing regime like this–or taken a computer science ethics class. Aleksandr Kogan, who developed the app that the company ultimately used to cull data, is a psychology researcher. Christopher Wylie, the whistleblower, is a self-taught data scientist. Even Mark Zuckerberg dropped out of Harvard before completing his degree.

I recently conducted an exploratory analysis of online data science courses and training programs, to see whether they mention ethics as part of their coursework. The answer, in short, is almost never. But I think that any training in computing, whether in boot camps, in online courses, or in elementary schools, should include some attention to the ethics or social implications of computing.

I know that this proposal comes with a huge set of challenges. For example, who is teaching this stuff? Is it really a good idea to insist that a bunch of computer science instructors (who themselves may not know much about ethics) suddenly integrate the subject into their classes? There is promising research in this area, though, and I’m glad to see new initiatives like the Responsible Computer Science Challenge, which provides funding for the conceptualization, development, and piloting of curricula that integrate ethics with undergraduate computer science training. For those interested in techniques and topics for teaching tech ethics, I maintain a spreadsheet of syllabi for mostly stand-alone courses on the topic, and I’ve also co-authored a paper about integrating ethics into a course on human-computer interaction.

I also recently wrote about creative speculation as an important component of teaching ethics, because examining hypotheticals helps us think through implications for technology beyond what is right in front of us. To tackle the problem that computing research can no longer be seen as having a “net positive impact on the world,” the Future of Computing Academy recently suggested that, in published papers, researchers be required to address the possible negative impacts of their work. That isn’t to say that we shouldn’t create new technologies (even if the obvious use case is, say, fake news). But when we do, we should always reflect on their impacts on the world–and, ideally, we should do what we can to mitigate negative effects. Technologists should do that, not just ethicists.

If I could wave a magic wand and change one thing about the culture of computing, it would be this: if you work in tech and you don’t think about ethics, you are bad at your job. Magic aside, one of the many steps toward this change happens at the level of education–along with whatever else we can do to ensure that no one is “just an engineer” anymore.


How We Get To Next was a magazine that explored the future of science, technology, and culture from 2014 to 2019. This article is part of our Together in Public section, on the way new technologies are changing how we interact with each other in physical and digital spaces.