Student Voices

From the get-go, students at the MCTS dive deep into a wide range of topics. They learn about and contribute to Science and Technology Studies (STS) and STS-related discourses such as ethics in AI, biomedical controversies, responsible innovation, and more. As part of their curriculum, they are often required to write papers, essays, and reports, or even to create content in the form of podcasts and blog posts. To do so, students work hard and do thorough research, and as a result produce highly informative contributions to the field of STS. However, Student Voices believes these contributions should not only be read by teachers and end up in the depths of their digital archives, but should be shared with a larger community.

What is Student Voices?

Student Voices is a student-run digital space for various media content on Science and Technology Studies (STS) topics, use cases, and life as a student at the MCTS. The platform is managed by a group of committed students from the MCTS M.A. RESET and STS programs, with the ultimate goal of helping students who are interested in publicizing their work to do so.

Contributions to Student Voices can take various forms, such as essays written by students, interviews with STS professors or scholars, blog posts about technology- and science-related topics, and commentary about being an M.A. Science and Technology Studies or Responsibility in Science, Engineering and Technology student.

As students in interdisciplinary master’s programs in the relatively new field of STS, we are accustomed to questions like “what exactly is it that you study?” In publishing our work, we hope to ultimately show you – the STS community, current or future MCTS students, and the greater public – what exactly we are up to during our studies.


Within both master’s programs, RESET and STS, students participate in core and elective courses on topics ranging from philosophy to policy to storytelling. As at most universities, students often write papers and other texts to express their ideas and their understanding of new topics. But since these programs are housed within a new and interdisciplinary institute, MCTS lecturers and professors push the bounds of “typical” essays and encourage students to also write memos, reflections, and vignettes alongside more traditional scientific writing. Some courses even leave traditional assignments behind, challenging students to write blog posts or create podcasts to share their learnings.

Whatever the format of the content, Student Voices was created as a space to highlight the work created at the MCTS by its students. Any student is free to submit their work to the editing team, and it is considered for publication after a double-blind peer review process. Whether published anonymously or under their own names, authors consent to the final version shown here to the public on the site.

So dive in! We are proud to showcase the many talented students of the MCTS through this blog and hope their work gives you a taste of the world of the MCTS and greater field of STS.

Interviewer: Devika Prakash, Student, Master of Arts in Science and Technology Studies.
Interview questions gathered from the MCTS Student Voices Blog Members and the Students of the MCTS’ two Master’s Programs.

Comment by Devika Prakash:

I often read the interviews that Joseph Satish conducts with various STS scholars on the stsgrad mailing list. Almost all of Joseph’s interview partners so far have been STS scholars situated outside of the Europe-North America academic circles. I found these interviews illuminating in that they provided a perspective outside of the ones I often read in our academic texts. I came across a text by Shiv Visvanathan on Cognitive Justice in the Science and Technology Policy class taught by Prof. Sebastian Pfotenhauer and thought he might make an excellent interview partner. Visvanathan defines Cognitive Justice as “the constitutional right of different systems of knowledge to exist as part of dialogue and debate” (2005).

Questions to ask Prof. Visvanathan were collected from members of the MCTS Student Voices blog team, and the call was later opened to all students of the MCTS’ two master’s programs. It was a pleasant surprise that the answers Prof. Visvanathan gave to our questions were very different from what I had somehow presumed they would be. Some of the scholars and debates he mentioned I had never heard of. It reminds me that we still need to read and think extensively outside of our comfort zones.


A Conversation with Prof. Shiv Visvanathan – STS scholar and social science nomad

When you started doing STS in the 1970s at DSE [Delhi School of Economics], it was still a fledgling field, right? How did you become an STS scholar and how did you discover it?

See, I have to make a confession, I didn’t know I was doing STS. I was just doing sociology of science. The idea of STS hadn’t yet entered the imagination. There was science policy, which was governmental, and the sociology of science, which was academic. Right? For years, I didn’t even know I was doing STS. I was too marginal, too isolated to discover all of that. So, let’s be candid, I only discovered I was doing STS probably five years down the line. Because there was no such field, and there was only one other person working on sociology of science, and the rest were science policy people, who were very governmental and had a different kind of – they all belonged to the left party groups in a certain kind of way. So, when I went in, I was just an ethnographer trying to understand a laboratory.

And all you had in the Delhi School of Economics Library was one book by Merton and one book by Bernal. So, you had to virtually invent the field for yourself. That was the fun of it.

You kind of sign off your articles by calling yourself sometimes a ‘social science nomad’. Would you still describe yourself as an STS scholar?

Yes. But I call myself a social science nomad for two reasons. One, all the institutions I work for don’t let me use their trademark because of my political activities. Two, I used to sign off as social science nomad because some of my colleagues got bored of trying to identify which organisation I belong to, because by the time the publication came, I had been dismissed from the previous organisation. It just became a kind of practical way of coping with my career.

And I’m still an STS scholar but in a wider sense. Because I think that westernised STS has been quite narrow in certain imaginations. Especially about democracy, about violence, and many things. So, you can call me a stray STS scholar.

Your most recent book, Theatres of Democracy, is a commentary on Indian democracy and the rise of Hindutva. We see a transition from laboratory studies in your first book in 1985 to a different kind of political commentary in 2016. Does STS influence how you write about politics in India?

Of course. Two things. While I was doing Theatres of Democracy, I was completing a book on thermodynamics and the Indian constitution – which should be out soon. So they were parallel activities. Two, I think the sociology of knowledge and the whole fight for a democratisation of knowledge did influence my work on democracy.

Within the current, global context, what challenges do you see for attaining true cognitive justice? What structures in your opinion propagate the dominant paradigm of modern western science?

Let me put it bluntly, I would like to lead a movement which secedes from the intellectual property system. Let me put it as blatantly as that, because there is no democratisation of knowledge and two, the control of patents seems kind of crude and unethical.

So, within that, I would say that, if you want to look at multiple knowledge systems, and you look at systems of coping, you want to look at how marginal groups interact, I would secede from the intellectual property system altogether.

Our next question relates to that. How do you envision better dialogue between different forms of knowledge – such as indigenous, informal, formal — with more equitable and democratic representation? Is it a similar answer?

No. You’ve got to understand one thing. People generally formulate this answer within a center-periphery model, where the margins look helpless and vulnerable. It’s the margins that want a confident dialogue with the center. It’s the margins, which want a dialogue with the establishment. So vulnerability and a certain cultural confidence go together. And in fact the idea of cognitive justice was suggested to us by tribal groups who wanted a seminar with science policy people and western psychiatrists.

Oh right. What you write about in your paper on cognitive justice – the groups who were suffering from sickle cell anemia.

Yeah. And in fact one of them, who in fact went to JNU [Jawaharlal Nehru University] for a while, said, “I don’t want any subaltern nonsense. I want something different theoretically.”

When discussing cognitive justice in the classroom, in the past semester, one student — who was herself from a developing country — disagreed with giving space to alternative medicine. So, you know, do you get these kind of reactions and critiques from within and outside the STS community?

Oh yes. I mean people describe you as fundamentalist, nativist, indigenist – whatever the current term of insult is, you get it. What we are arguing is that all of these belong to a dialogue of knowledges. So each belongs to the other, and each is incomplete without the other. You know? I think Raimundo Panikkar, who began as a chemist, put it brilliantly. He said, “Every dialogue is a pilgrimage where you experience the other to discover yourself.”

Of course there’s quackery, but I’d still think the kind of debates we had in, say, 1923, on indigenous medicine, which Srinivas Murthy wrote about, had tremendous confidence in epistemic sensitivity. You’ve got to be careful that this is not captured by fundamentalism or even by the Ramdevs, you know, who think it’s a kind of panacea for everything. You have to keep it experimental, you have to keep it open, you’ve got to keep it tentative, and you have to maintain the dialogue. So, actually, what this does is, to make sure that no system closes in on itself.

I’m glad you mentioned the last bit, because this is sort of where we were having discussions on living in a world with climate change deniers and alternative facts. And on the other hand, in India, you know, we started out in 1947 having an emphasis on scientific temper, and now there’s a dialogue about western hegemony but it’s held in the hands of those who want to promote native science, like Ayurveda and yoga, as well as continued attacks on established scientific institutions. I personally have trouble balancing these two things.

Let me make one qualification. The BJP doesn’t know anything about Ayurveda. It suggests that yoga is a set of techniques. So it has no understanding of the epistemology of yoga. So please don’t attribute literacy to the government in power when it actually sees all these as technological acts of one-upmanship. Okay? Second, while they see hegemony, this government actually is giving way to western models. After all, look at just the alliance between India and Israel on the defence scene. Okay? So I think one has to be careful. In terms of an epistemic understanding of knowledge problems, this government is quite illiterate. Sorry for putting it so bluntly.

No, not at all. That’s an interesting perspective on it that I hadn’t considered. Thank you for saying that. But, in terms of, you know, climate change denial and alternative facts, how do we –

No, but I think that came – it came a bit before. Because remember the real argument began with Sunita Narain and Anil Agarwal and others saying that climate change is an unequal game. That the responsibility for climate change should be fixed – uh – to the west. Which is partly true. I think the more interesting game now is that climate change is seen as a planetary responsibility. So we assume that we are trustees of the planet. So, if necessary, we will have to educate the West on climate change. So the model is more Gandhian. Swadeshi to Swaraj. And I think this is opening up in a very systematic way. In fact it’s very interesting – if you go down to the villages of India, I remember talking to some housewives during a recent investigation, and 3-4 of them grinned at me impishly and said, “Any time the government doesn’t understand something, they attribute it to climate change.”

Moving onto a few questions about STS as a field. The fellow students of my master’s program and I, we often ask what role we as STS scholars can play in transdisciplinary projects. How do you think STS scholars can contribute meaningfully?

See, my model of STS is Patrick Geddes. So, what we have here is, first, I think interdisciplinarity is playful. And the playfulness of STS rescues the university from being bureaucratized at a certain level. The attempt to create a playful university, which is also a cosmopolitan university, which is also a plural university, owes a lot to STS – if we follow what we dreamt of. The second kind of thing is, a search for a non-violent science. And Geddes in fact said, that if you were to create the new university, a post-Germanic university, it would be a university that was holistic and oriented to peace. So everything from pluralism to peace can be worked out within a configuration of STS scholars.

Okay. And I have a really long question from a student. He was asking how, when man-made or natural disasters strike in least-developed country contexts, people from far-flung countries and cultures often struggle with how they can help. His question is: when fundamental cultural asymmetries contribute to the disruption of these shared aspirations, and particularly when the health and lives of large populations are at risk, how can the conceptualization and enactment of cognitive justice help towards more successfully sharing tools with vulnerable communities?

See, three things. The disaster victim is not really a part of the theory of citizenship in most third world countries. As a marginal, he is seen as an object of study rather than a subject. He lacks agency. Second, I think you’ve got to understand that the survivor is a person of knowledge. The survivor’s ideas of suffering and knowledge have to be a part of any critical study. Whether you take Bhopal or you take the Orissa cyclone, I’ve worked on a lot of riots and natural disasters. And I have got to say that the role of the survivor as a person of knowledge, as someone who has a certain kind of memory, a certain language of suffering, a certain theory of the body, and a certain idea of how to cope with the aftermath of a disaster is absolutely critical to any notion of cognitive justice. The expert doesn’t have the phenomenological imagination for it.

Ok. Thank you. And my last question is – we are running this blog on STS and we are just setting it up as master’s students. And we were wondering if you had any comments in terms of what direction would we take? On what would be the most meaningful contribution we can make to STS and also to the larger public?

Look, I think a blog like yours should be a conscience for knowledge systems. Whether it’s war, violence, certain kinds of specialized expertise, iatrogeny, all these are important questions and a blog like yours can actually raise the question – the gossip of responsibility, the gossip of ethics in a world where specialization pretends it’s value-neutral. So I think it’ll be quite an exciting time. In fact it can be quite subversive and a lot of fun.


Visvanathan, Shiv (2005). Knowledge, Justice and Democracy. In Melissa Leach, Ian Scoones, & Brian Wynne (Eds.), Science and Citizens: Globalization and the Challenge of Engagement. University of Chicago Press.

Written by Thomas Roiss

Thomas Roiss, a master’s student in the MA STS program at the MCTS, shares an essay written for the course “Evidence Practices in Environmental Health.” The essay discusses BPA, a chemical from which many plastic products are made, its omnipresent role in society today, and how its toxicity is managed through evidence practices. Labelling and substitution are presented as strategies utilized by industry to reinforce a shift of responsibility from the governmental and scientific authorities charged with providing information to individual, responsible consumers.

Keywords: toxicity, chemical pollution, endocrine disruptors, BPA, consumerism, evidence practices, epidemiology

Underneath the Labels

What the Story of BPA can tell us about Evidence and Consumerism

It was not long ago that plastic seemed to be the greatest invention ever made, and the hopes and promises it held were never-ending. We started using these materials for all kinds of things, from floors to bulletproof vests, from clothing to baby bottles. When asked about potential toxicities of the chemicals used to produce these materials, we were reassured that it would not matter, because they were bound in those long molecular chains called polymers, and no individual molecules could possibly migrate. And even if they did, the concentrations would be so low that, in the classic toxicological model of dose-response relationships, they would be absolutely safe. Trusting the narratives of its safety, we used particular types of plastic, ones that contain Bisphenol A (BPA), for almost everything – food packaging, receipts, toys. Only decades later do we realize that BPA does migrate (Vandenberg, Maffini, Sonnenschein, Rubin, & Soto, 2009; Vogel, 2009). And that it did so for years and years, until it became ubiquitous in our environment as well as in our bodies – it has now reached detectable levels in more than 90% of different sample populations (Vandenberg, Hauser, Marcus, Olea, & Welshons, 2007). And what is more, we are realizing that it is a so-called endocrine disruptor, a chemical that can act similarly to a hormone and is therefore dangerous even in minute quantities (Vogel, 2009). As a consequence, we now have substitute chemicals and we have labels telling us when a product is “BPA free”. Finally, the world is alright again. Or is it?

And who is this “we” that I am constantly referring to? Who knows all those things, and how? Well, I would call this “we” society, made up of citizens who inform themselves, choose their sources, and form their own opinions – ideally. But what we really see is a different story. I want to argue that today our society, this diffuse “we”, is being framed as consisting not of citizens, but of consumers. Consumers that, even before they can inform themselves, are readily being informed by evidence provided by the market players. Consumers that are assumed – and expected – to have completely free choice based on this information. Welcome to the age of responsible consumerism.

How do these narratives influence regulations on potentially harmful chemicals? How are evidence practices enacted via the labelling of products, and what does this say about regulatory practices in a consumer-driven world? In this essay, I argue that BPA presents an exemplary case of the emergence of new evidence practices, such as labelling, in a world that is ever more perceived as purely economically driven. The concept of the consumer replaces the concept of the citizen, with the significant consequence that individual free choice is assumed and expected. I further argue that responsibilities are thereby shifted from governmental and scientific authorities, which provide information, to individuals, who are expected to inform themselves. The essay shows how, in anticipation of these changes, the industry’s strategy of substituting and labelling (and thereby providing information themselves as well as borrowing trust from other, established labels) is likely to be highly effective. I will conclude with a brief outlook, showing what these changes mean for our society and that, in the end, it is all of us who suffer from them, one way or another.

A short history of BPA
Allow me to give a bit of background information on the chemical BPA itself, its history, the debate about its safety, and its current regulatory status. While I will keep this as brief as possible, I believe it is absolutely vital in order to be able to follow the story in the next section.

Bisphenol A has a longer history than one might think. After it was first synthesized in 1891, the molecule received some attention about 40 years later, when British biochemist Charles Dodds discovered the estrogenic activity of BPA in his search for a “therapeutic treatment of numerous female ‘problems’ related to menstruation” (Vogel, 2009, p. 559). While he ended up using diethylstilbestrol (DES) – a drug that also caused much harm and was later banned – BPA found its renaissance in the 1950s, when the first epoxy resins and polycarbonates were made with it as one of the two starting chemicals. Its history as a synthetic estrogen seemed forgotten when the pure practicality of these materials was discovered and subsequently unfolded in a number of applications. The most relevant of these applications for our story (for there are many others where human exposure is much less likely) are the inner coatings of aluminium food cans, where epoxy resins are used, and the various food containers and toys that are made out of polycarbonates (Vandenberg et al., 2007; Von Goetz, Wormuth, Scheringer, & Hungerbühler, 2010). BPA is furthermore part of the coating of thermal paper, commonly used for receipts (Vandenberg et al., 2007).

So why is BPA bad? In order for any chemical to do harm, it takes two things – exposure and actual harmfulness of the chemical (at the level of exposure). Both of these aspects were obscured in different ways as the debate about BPA safety unfolded. The key term in the story of how this was possible is the monotonic dose-response relationship. This is the common, simple model in toxicology describing how the adverse effects of a harmful chemical get worse as the dose increases. Also included in this model are various safety levels: thresholds below which a concentration can be considered safe. And if no such monotonic dose-response relationship can be observed, it is also commonly assumed that the chemical is safe at low levels. However, endocrine disruptors do not follow this model and can indeed be harmful at very low levels of exposure (Welshons, Nagel, & vom Saal, 2006). Such low levels were first measured in 1993, when BPA leaking from laboratory equipment tainted the research of endocrinologists investigating yeast. The discovery that what they initially thought were high levels of estrogen were actually high levels of BPA brought attention to the matter and sparked a lot of subsequent research (Vogel, 2009). And yet to this day the debate is not settled. There are still arguments saying that animal studies are not sufficient to warrant regulation, that the levels are insignificant compared to our natural hormone levels (and fluctuations thereof), and that BPA is rapidly metabolized anyway, so that in fact it cannot even do harm (Vandenberg et al., 2007).
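The difference between the two models can be made concrete with a small numerical sketch. The Python snippet below is an illustration only – the curve shapes and all parameter values are hypothetical, not fitted to any BPA data. It contrasts a classic monotonic (Hill-type) dose-response curve, under which a low dose implies a small effect, with a toy non-monotonic, inverted-U curve of the kind discussed for endocrine disruptors, under which the effect can be largest at a low dose:

```python
import math

def monotonic_response(dose, max_effect=1.0, ec50=10.0, hill=1.0):
    """Classic sigmoidal (Hill) dose-response: effect rises with dose.

    ec50 is the dose producing half of max_effect (values are hypothetical).
    """
    if dose <= 0:
        return 0.0
    return max_effect * dose**hill / (ec50**hill + dose**hill)

def nonmonotonic_response(dose, peak_dose=1.0, peak_effect=1.0):
    """Toy inverted-U curve: maximal effect at a low dose, declining after.

    A log-normal-shaped bump centered at peak_dose (hypothetical shape).
    """
    if dose <= 0:
        return 0.0
    return peak_effect * math.exp(-(math.log(dose / peak_dose)) ** 2)

for d in [0.01, 0.1, 1.0, 10.0, 100.0]:
    print(f"dose={d:7.2f}  monotonic={monotonic_response(d):.3f}  "
          f"non-monotonic={nonmonotonic_response(d):.3f}")
```

Under the monotonic model, declaring low doses “safe” is internally consistent: the effect at a dose far below the threshold is negligible. Under the non-monotonic model, the same reasoning fails, because the largest effect sits precisely in the low-dose range that the threshold logic dismisses.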

The body of scientific literature around the safety of BPA is astounding. With this much research having been done, one might (naively) assume that regulation followed swiftly. However, as this is a high production volume chemical, the economic stakes are significant, and the regulatory process accordingly slow. Most often, calls for more research are made – new studies are funded, literature reviews commissioned, risks assessed, and of course task forces created (Vandenberg et al., 2009). While policy makers wait for science to present them with an easy solution to the bigger problem, smaller, more limited regulations have been made. In several places, BPA has been banned for use in food containers and materials that are likely to come into contact with young children, like baby bottles. Beyond this, Canada is currently the only country with broader, non-food-related restrictions on BPA (Flint, Markle, Thompson, & Wallace, 2012). Other regions, like the US and the EU, are still debating and have thus far come up with neither a bigger regulatory framework for endocrine-disrupting chemicals in general nor one for BPA specifically.

What makes the BPA case so different?
Of the numerous cases of chemical exposure that are being discussed, and especially of those discussed in the STS and Environmental History literature, the case of BPA seems to stand out. And it does so with a big difference – one that changes the whole story: it does not start with the discovery of symptoms in a population, or any other specific signs of exposure. Altman and coauthors (2008) write about “exposure experience” in the context of indoor pollution. They use this concept to describe how the people affected by it describe and experience their exposure to different chemicals, including symptoms, circumstances, and more. Take Multiple Chemical Sensitivity, for example, a condition in which people are hypersensitive to even the smallest amounts of generally harmless chemicals. It is a fascinating case, because there seems to be only the exposure experience, and oftentimes no scientific explanation (Dehart, 1998). The case of BPA exposure, however, is the complete opposite – there simply is no exposure experience.

With the concept of exposure experience not working in this case, we can also observe another important concept failing to capture the BPA story, namely that of popular epidemiology. Phil Brown (1993) describes this concept well in the case of toxic waste in Woburn, Massachusetts. It was local citizens who identified and described a local leukemia cluster and subsequently uncovered the whole story in the search for causes. Popular epidemiology is getting more attention these days, and high hopes seem to rest upon it. But it is also important to recognize where such an important concept fails, in order not to overlook those cases that it just cannot explain. Because not only is there no exposure experience for BPA – due to its ubiquitous nature, there is also no locality, no spatial dimension. While some scientists speculate that a multitude of growing diseases of modern society, such as cancer, diabetes, and infertility, are caused by endocrine-disrupting chemicals such as BPA, it is almost impossible to show direct links (WHO & UNEP, 2012). It is almost funny – precisely because it is everywhere, it is also almost completely invisible. Add to that the fact that the observable effects are usually only visible on a population level (e.g. a skewed ratio of females to males in newborn babies) and always show up with a significant latency (e.g. cancer or infertility).

All of these factors – latency, ubiquity, and lack of exposure experience – render BPA exposure almost invisible. The attention it is getting now started with a few concerned scientists, followed later by significant attention from the public media. And yet still, a regime of imperceptibility was created and is sustained by the producing industry, industry-sponsored science, and also regulatory agencies. Michelle Murphy (2006) introduced this concept of imperceptibility as something that can “render claims of chemical exposures uncertain”, both through inevitable limitations in experiment design and equipment and through purposeful narratives by industry-sponsored science (Murphy, 2006, p. 10). The idea is not to deny the existence of exposure, but instead to raise enough uncertainty about the science behind it. That way it is not perceived as a problem, and acting upon it by introducing new regulations is difficult at best, impossible at worst. In the case of BPA, one could argue that it is commonly perceived as a problem, and that therefore this strategy was not successful. However, a closer look reveals something different. BPA is now widely being substituted by other bisphenols, likely to be just as harmful (Karrer et al., 2018; Rosenmai et al., 2014). The regime of imperceptibility that was created was and is highly effective in this regard. It prevented broader regulations that might affect the whole class of chemicals instead of just BPA, and the attention that BPA is getting is making the potential risks of its substitutes nearly invisible (Aho, 2017). So, let’s take a closer look at how exactly this is done.

So far, I have tried to set the stage: I introduced our main actor and their backstory, and gave some context about the world this play is set in. With this in mind, I now want to talk about the final act, a story which could be considered an epilogue to the by now quite often told story of BPA. Because this story is far from over. Not only is there still no consensus on the effects of BPA (or endocrine disruptors in general, for that matter), and not only is its regulation half-hearted at best. It is also being substituted now. And while this may sound like a great victory, as if public pressure had indeed for once led to a great change, it couldn’t be farther from it. Because BPA is being replaced with different bisphenols, such as BPF, BPS, and BPAF. And while not many studies on the toxicology of these chemicals exist, the initial results show that they are no better than BPA and might even be worse (Karrer et al., 2018; Rosenmai et al., 2014). Yet the focus on BPA masks the potential dangers that come from these substitute chemicals. And what is more, it allows the labelling of products as “BPA-free”, suggesting safety when there really is just uncertainty at best, and harm at worst. Yet these labels work as a form of evidence: they readily inform consumers about what they are looking at and establish trust in the harmlessness of a product.

Is labelling the new regulation?
Established labels for products are incredibly common nowadays, especially in Germany. The most prominent example would be “Stiftung Warentest”, an independent institution that gives grades for products, which are then displayed in the form of labels on the products themselves. Another example that comes to mind is the “AMA Gütesiegel”, which labels meat that comes from Austria and proves that the animals have been kept and slaughtered according to Austrian laws. Such labels enjoy a trust that has been established over years. And they speak to the – perhaps stressed, perhaps lazy – consumer, who then does not need to inform themselves, but rather is being informed by a clear sign. Labelling practices are intensively debated in the context of genetically modified organisms (GMOs) in Europe. With concerns about their safety and a great deal of uncertainty, as well as the precautionary principle established in the European Union, the labelling of products containing GMOs is presented as the prime solution. And while most of the debate revolves around the details of how to label such products, there are some general arguments for and against labelling per se. Most of these arguments – on both sides – are of an economic nature, but there is one that is particularly interesting: “labelling of novel foods does not make sense as long as the majority of consumers does not know how to interpret the meaning of the labels” (Todt & Luján, 1997, p. 322).

I would argue that the same holds in the case of BPA. A “BPA-free” label cannot be correctly interpreted without knowledge about what is used instead. The general idea is that while consumers could take an extra step and look up or request more information, most of them would not do that. This is because the label acts as evidence that their biggest concern – BPA – is nothing to worry about. Providing this information upfront creates trust in the harmlessness of the product. What is more, a study has shown that even when people are informed about the uncertainty, ambiguity, and potential harmfulness of substitutes, they still prefer to buy the product labelled “BPA-free” (Scherer, Maynard, Dolinoy, Fagerlin, & Zikmund-Fisher, 2014). Another argument against labelling, one that has not previously been raised, is that there is a risk that progress just stops there.

Consumers, the new citizens?
Labelling practices are only one part of a larger trend that I see in issues of exposure to toxicants and uncertainty. The people who are exposed are constantly framed as consumers, almost never as citizens. But what does that mean, and why could it be dangerous? The confluence of the concepts of consumer and citizen can be observed in a number of cases over recent years. Alastair Iles (2004) described it in the context of sustainable seafood, and Robert Doubleday (2004) in the context of a Unilever campaign. Both viewed it positively that consumers are given citizen-like powers over producers: simply through their purchasing decisions, they can put pressure on companies, possibly leading to change. However, I want to argue that this positive view has a significant flaw and is especially inapplicable to the case of chemical exposure and its regulation.

Welcoming the fact that consumers are given citizen-like qualities seems somewhat cynical. It masks the fact that in our democracy it is the citizen who should have the power to make change happen. While it could easily be argued that there is no harm in having both the citizen and the consumer in a position to exert influence, I do not think it is that simple. The more power the consumer has, the less power the citizen has – it is not an addition, it is a trade-off. We as a society accept that working towards change in the role of a consumer is at times more effective than doing so as a citizen, but by accepting that role we enable and reinforce an ever more economically driven world. The case of bisphenols clearly shows the shortcomings of this. Labelling practices satisfy people's desire for change. The illusion of the consumer's free will – which in reality is constrained by a number of personal, social and economic factors – successfully silences the calls for regulatory agencies and governments to take action.

In this essay I have retold the story of the safety of BPA and offered a glimpse at what could be an epilogue to that story – one about uncertainty, substitution, labelling, and consumerism. In it, we saw that identifying a substance as harmful is really only the first step in a long journey. Not only are the economic stakes incredibly high in the case of high-production-volume chemicals such as BPA, but there is also uncertainty about how to replace it. Substituting harmful chemicals is never easy, and it is almost never clear whether the substitutes are better or worse – at first they are just new (V. J. Brown, 2012). And who should have the responsibility to come up with alternatives, and who should have to prove their safety? None of these questions can be easily answered, and yet I think it is of utmost importance that they are discussed. For we see what happens when we go only half of the way and then stop – the issue is perceived as resolved. Something that once received a great deal of attention is covered once again under a veil of invisibility and imperceptibility. Consumerism is cleverly employed to aid this cause. If we peeled off the “BPA-free” label and took a closer look behind it, we would see that really nothing has changed. I hope that what is now an epilogue will at some point be the beginning of a new story – a story of how we change our approach to toxic chemicals, bid farewell to this kind of consumerism and instead employ proper regulation, finally enacting the precautionary principle properly. Maybe it would not even be that difficult.


Aho, B. (2017). Disrupting regulation: understanding industry engagement on endocrine-disrupting chemicals. Science and Public Policy, 44(5), 698–706.

Altman, R. G., Morello-Frosch, R., Brody, J. G., Rudel, R., Brown, P., & Averick, M. (2008). Pollution comes home and gets personal: Women’s experience of household chemical exposure. Journal of Health and Social Behavior, 49(4), 417–435.

Brown, P. (1993). When the public knows better: Popular epidemiology challenges the system. Environment, 35(8), 16–41.

Brown, V. J. (2012). Why is it So Difficult to Choose Safer Alternatives for Hazardous Chemicals? Environmental Health Perspectives, 120(7), a280–a283.

Dehart, R. L. (1998). Multiple Chemical Sensitivity. American Family Physician, 58(3), 652–654.

Doubleday, R. (2004). Institutionalising non-governmental organisation dialogue at Unilever: framing the public as ‘consumer-citizens.’ Science and Public Policy, 31(2), 117–126.

Flint, S., Markle, T., Thompson, S., & Wallace, E. (2012). Bisphenol A exposure, effects, and policy: A wildlife perspective. Journal of Environmental Management, 104, 19–34.

Iles, A. (2004). Making seafood sustainable: merging consumption and citizenship in the United States. Science and Public Policy, 31(2), 127–138.

Karrer, C., Roiss, T., von Goetz, N., Gramec Skledar, D., Peterlin Mašič, L., & Hungerbühler, K. (2018). Physiologically Based Pharmacokinetic (PBPK) Modeling of the Bisphenols BPA, BPS, BPF, and BPAF with New Experimental Metabolic Parameters: Comparing the Pharmacokinetic Behavior of BPA with Its Substitutes. Environmental Health Perspectives, 126(07), 1–17.

Murphy, M. (2006). Sick Building Syndrome and the Problem of Uncertainty – Environmental Politics, Technoscience, and Women Workers. Durham and London: Duke University Press.

Rosenmai, A. K., Dybdahl, M., Pedersen, M., van Vugt-Lussenburg, B. M. A., Wedebye, E. B., Taxvig, C., & Vinggaard, A. M. (2014). Are structural analogues to bisphenol a safe alternatives? Toxicological Sciences : An Official Journal of the Society of Toxicology, 139(1), 35–47.

Scherer, L. D., Maynard, A., Dolinoy, D. C., Fagerlin, A., & Zikmund-Fisher, B. J. (2014). The psychology of ‘regrettable substitutions’: examining consumer judgements of Bisphenol A and its alternatives. Health, Risk & Society, 16(7–8), 649–666.

Todt, O., & Luján, J. L. (1997). Labelling of novel foods, and public debate. Science and Public Policy, 24(5), 319–326.

Vandenberg, L. N., Hauser, R., Marcus, M., Olea, N., & Welshons, W. V. (2007). Human exposure to bisphenol A (BPA). Reproductive Toxicology, 24(2), 139–177.

Vandenberg, L. N., Maffini, M. V., Sonnenschein, C., Rubin, B. S., & Soto, A. M. (2009). Bisphenol-A and the Great Divide: A Review of Controversies in the Field of Endocrine Disruption. Endocrine Reviews, 30(1), 75–95.

Vogel, S. A. (2009). The Politics of Plastics: The Making and Unmaking of Bisphenol A “Safety.” American Journal of Public Health, 99(S3), S559–S566.

Von Goetz, N., Wormuth, M., Scheringer, M., & Hungerbühler, K. (2010). Bisphenol A: How the most relevant exposure sources contribute to total consumer exposure. Risk Analysis, 30(3), 473–487.

Welshons, W. V., Nagel, S. C., & vom Saal, F. S. (2006). Large Effects from Small Exposures. III. Endocrine Mechanisms Mediating Effects of Bisphenol A at Levels of Human Exposure. Endocrinology, 147(6), s56–s69.

WHO, & UNEP. (2012). State of the science of endocrine disrupting chemicals, 2012.

Written by Susanne Hirschmann

In her vignette for the Master’s Blog, Susanne Hirschmann, a student of the M.A. program Responsibility in Science, Engineering and Technology (RESET) at the MCTS, discusses the recent political challenges for Barcelona’s water supply. Social movements and politicians criticize the role of formerly purely private actors who provided water for too long without even holding a legal contract to do so. In 2019, after much political pressure, there is hope that public dialogue will allow the water supply in the metropolitan area to be democratized (“remunicipalized”) in the near future.

Do pipelines have politics?

The remunicipalization of water in Barcelona

Water leaks from a rusty water pipe that runs next to me. Water drops hit the ground, I step into the small puddles and my sandals get damp. It is only the three of us in the illuminated tunnel that connects the two buildings of the museum “Casa de l’Aigua” (engl. “House of Water”), which is located on one of the hills of Barcelona: me, the rusty water pipe and the sound of leaking water. In the past, the House of Water, built in 1919, was a water treatment station that supplied Barcelona. Today, it is a museum that tells Barcelona’s history through the essence of life and, consequently, of city development: water and the infrastructure that makes it accessible to the population. Posters about the history of water supply in the Barcelona metropolitan area guide us to the other end of the tunnel. One poster in particular, which reads “Democratic management of water and the possible remunicipalization of Barcelona’s water supply”, catches my attention. I start to ask myself: does water have politics, and how can its management be democratic?

Barcelona’s water supply is limited, and a never-ending fear of running dry makes the secure supply of water one of the city’s main issues. The rivers Llobregat and Ter provide more than 70 percent of Barcelona’s drinking water, with the remainder coming from other rivers and a desalination plant. Together they provide more than 200,000 m³ per year for the citizens of the metropolitan area of Barcelona (AMB). The supply of water coming from rural areas into the AMB is in the hands of a public entity called “Agència Catalana de l’Aigua” (ACA). The management of potable water within the AMB is outsourced to the public-private company “Aigües de Barcelona” (AGBAR), with a private share of 85 percent and a public share of 15 percent.

Social movements like “Aigua es Vida” and political parties such as the “Candidatura d’Unitat Popular” (CUP) and “Barcelona En Comú”, among others, campaign to remunicipalize Barcelona’s water management and create a public company that would be in charge of Barcelona’s water supply. The main claim is that water is both a human right and a common good. For this reason, it has to be managed by public entities which serve only the citizens. These movements not only question the very technical issues of water supply, infrastructure and quality management, but also strive for democracy, public ownership and participation. This could be achieved, for instance, by creating a citizen observatory on water. It would not only assist in the initial process of remunicipalizing water, but also be useful once remunicipalization has been put into practice. Specifically, democratic processes and participation are required during the infrastructure’s design and construction phase. Once in operation, this infrastructure requires maintenance, supervision and transparency. Overall, members could decide on tariffs, get access to the new public company’s data and also decide on new investments in infrastructure.

However, the water supply remains in the private hands of AGBAR for now. The privatization of water is often criticized for turning water into a business that merely maximizes economic benefits. There is an inherent conflict between providing a basic good for the public and the goal of making a profit. Apart from general arguments against the privatization of water, AGBAR is criticized for two principal reasons. First, the company lacks transparency. In 2013, a public-private company was created in order to solve the problem that AGBAR had held a monopoly on water management services for 150 years without a valid contract [1]. When the public-private company was created, AGBAR declared active assets of 476 million euros, whereas another report put the real value closer to 130 million euros. The municipality of Barcelona is still waiting for a court order to uncover the truth. Second, AGBAR is not only a local company but also a subsidiary of the multinational Suez, active in more than 24 countries worldwide [2]. AGBAR focuses on urban water management [3]: the dense population of cities guarantees a large profit margin, whereas rural, less dense areas are served by public entities.

I leave the tunnel behind me, and a thousand thoughts about public and private ownership of water join me on my way out of the museum. Fresh air enters my lungs and my eyes observe Barcelona lying in front of me. Fog hangs over the city, which in turn covers the pipelines, taps and subterranean water providing the essence of the city’s life. In 2019, a public consultation on the remunicipalization of water in Barcelona will take place; let’s see if the fog will clear up.

Written by Maximilian Braun

What is at stake, for whom, and who is held responsible in a software apocalypse? Maximilian Braun reviews James Somers’ The Coming Software Apocalypse by drawing on STS notions of responsibility and responsiveness.

Keywords: infrastructure studies, platform studies, software, programming, code is law

Remedies for Dysfunctional Complexity

A Text Analysis of “The Coming Software Apocalypse” by James Somers

From Macro to Micro: Software as an Important Infrastructure
Dense is the new big: after ever more boundary-pushing construction endeavours throughout recent decades, our senses now witness the micro and nano scale of mankind’s technical innovation capabilities. Unlike the skyscrapers and roads of yesterday (Somers 2017: 3), which unfolded right before our eyes, Moore’s law (Moore 1965), centralized storage systems and distributed network technologies enable the establishment of unobservable, more intrusive architectures that guide our interactions with the reality that surrounds us. An abundance of software technologies renegotiates our understanding of coded programs: from convenient, situated solutions to a necessary infrastructure that forms an essential part of our everyday life. However, this new level of technological density comes at a cost: whereas former infrastructures mostly comprised perceivable physical components, the realm of software seems to confront us with uncanny opacity.

The reviewed text argues that coded structures have reached a degree of complexity that makes it impossible to fully avoid potentially dysfunctional behaviour, even for experienced programmers. The tragedies surrounding the “911 outage” (Somers 2017: 1) and Toyota’s “unintended acceleration incidents” (ibid.: 4) speak for themselves. However, a possible remedy is depicted as residing in the simple formula “Inventing on Principle” (ibid.: 7). This way of designing software adheres to the following imperative: make Integrated Development Environments (IDEs) and Software Development Kits (SDKs) – in brief, software programming frameworks that require a developer to hack thousands of lines of abstract code into a text editor – a thing of the past! Instead, create development frameworks that are “truly responsive” (ibid.: 9), i.e. that provide instant feedback by simulating how the system will behave when certain parameters change, preferably “with knobs, buttons and sliders that the user learns to play like an instrument” (ibid.: 8).

So far, so easy. But all this is a double-edged sword. We are not just talking about defective products or disappointing services. It is about safety, security and the reliability of regulations that protect our integrity when living in an infrastructure full of such complex systems. So, rather than introducing simpler ways of designing software and software infrastructures, we should think about who is held prospectively responsible for them, who determines their scope of function and whether these questions may only be asked to codes’ creators.

Superfluous Developers and Modularized Responsibility?
How can we ascribe responsibility for systems whose “complexity is invisible to the eye” (Somers 2017: 3)? We might find answers in the transboundary experiment between infrastructure and platform studies, two once opposing, now partly complementary domains of research related to Science and Technology Studies (STS). Plantin et al. (2018) try to reconcile the two disciplines: infrastructures are described as “heterogeneous systems and networks connected via sociotechnical gateways”, whereas platforms comprise a “programmable, stable core system” and “modular, variable complementary components” (ibid.: 1). Bringing the issue back to questions of responsibility, it still requires a special sort of human contribution to ensure stability, i.e. multiple instances of that type of engineer “whose work quietly keeps the internet running” (Somers 2017: 14).

In the light of the differentiation above, we might argue that Somers opts for a “platformization of infrastructures” (ibid.: 1) when he invokes Bret Victor’s conviction that the developer’s role is to make herself superfluous (ibid.: 10). Software tools created with such an intention resemble platforms that combine stability and variability with finite application possibilities, just as Victor demonstrated in his Mario-like Jump’n’Run framework (ibid.: 8). Here, jumping and running suffice as core functionalities of the program, and the effects of certain adjustments (e.g. higher gravity) are directly shown to the programmer in a separate window, without having to replay the game.
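The mechanics of such “truly responsive” tooling can be illustrated with a minimal Python sketch. All names here are hypothetical, invented for illustration only – this is not code from Victor’s demo or Somers’ text: a parameter object re-runs a small jump simulation every time its value changes, mimicking the slider-driven instant feedback instead of an edit-compile-replay cycle.

```python
def jump_trajectory(gravity: float, jump_velocity: float = 10.0,
                    steps: int = 20, dt: float = 0.1) -> list[float]:
    """Simulate the height of a jump over time for the given parameters."""
    heights, height, velocity = [], 0.0, jump_velocity
    for _ in range(steps):
        height = max(0.0, height + velocity * dt)  # clamp at the ground
        velocity -= gravity * dt
        heights.append(round(height, 2))
    return heights


class ResponsiveParameter:
    """Stand-in for a slider: assigning a new value triggers the callback."""

    def __init__(self, value, on_change):
        self._value, self._on_change = value, on_change
        on_change(value)  # show the initial state right away

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        self._value = new_value
        self._on_change(new_value)  # instant feedback, no replay needed


if __name__ == "__main__":
    gravity = ResponsiveParameter(
        9.8, lambda g: print(f"g={g}: peak height {max(jump_trajectory(g))}"))
    gravity.value = 20.0  # "turning the knob" re-simulates immediately
```

The point of the sketch is the inversion of control: the programmer never re-runs the program; changing the parameter is what triggers the simulation, so the consequences of “higher gravity” are visible the moment the knob moves.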

It might be a feasible approach to further platformize infrastructures and thereby split complementary components from vital core components, but deliberation must not fall short in this process. The main question is whether existing structures of “organized irresponsibility” (Beck 1995) are amendable enough to render them responsible. They must be equipped with a prospective view of responsibility, in which responsiveness (i.e. adaptive and deliberative capacities) figures as a key dimension (Somers 2017: 9). Otherwise, solutions fall short when dealing with hazards that cannot be tackled within a retrospective, knowledge-based responsibility framework (cf. Owen et al. 2013). The outcome would rather resemble Toyota’s line of reasoning after the unintended acceleration incidents: it could be poorly designed floor mats, sticky pedals or driver error, so why should it be our software (Somers 2017: 4)?

The Maintenance of Ever-evolving Programs
The bane and boon of software is its flexibility, which seems to effectively outperform all the benefits of solid and reliable hardware. Well-dried solder connections and firmly mounted electrical components on circuit boards are no longer an obstacle to the diverse application possibilities of the manifold software instances running on and between chips, memories and APIs. Instead, updates have become the maintenance work of today.

Somers mixes up different levels of abstraction in coding (e.g. high-level languages such as C and JavaScript versus the languages of programmable logic controllers, which work in a quite different fashion), which makes it hard to derive concrete solutions from his accounts. Moreover, additional functional requirements (like throwing pizza or digging holes in the case of the Mario example) will always require coding skills – and there the story begins anew. This leads to the same vicious procedure outlined above, only that it is not a monolithic program with “feature after feature piling on top of” it (Somers 2017: 5), but the framework itself. The maintenance work and the unobtrusive slippery slope of adding features to keep the system meeting new requirements is merely shifted from the program itself to its interfaces.

Questions of responsibility should specifically address the issue of modularization. It must be ensured that programs remain controllable and maintainable, are subject to regular updates and fall under the responsibility of a dedicated software development team. Otherwise one runs the risk that the frameworks become the same untameable monsters they were meant to avoid. Nor can it hold that the knowledge and competences needed to assess the difficulties arising from complex software systems are not represented in the committees in charge of soft-lawing practices (ibid.: 13). If “software ‘is eating the world’” (ibid.: 2), then at least our regulatory institutions should know how its metabolism works.


Beck, Ulrich (1995): Ecological Politics in an Age of Risk. Cambridge: Polity Press.

Moore, Gordon E. (1965): Cramming More Components onto Integrated Circuits. In Electronics (38), pp. 114–117.

Owen, Richard; Stilgoe, Jack; Macnaghten, Phil; Gorman, Mike; Fisher, Erik; Guston, Dave (2013): A Framework for Responsible Innovation. In Richard Owen, John Bessant, Maggy Heintz (Eds.): Responsible Innovation, vol. 31. Chichester, UK: John Wiley & Sons, Ltd, pp. 27–50.

Plantin, Jean-Christophe; Lagoze, Carl; Edwards, Paul N.; Sandvig, Christian (2018): Infrastructure Studies Meet Platform Studies in the Age of Google and Facebook. In New Media & Society 20 (1), pp. 293–310.

Somers, James (2017): The Coming Software Apocalypse. Edited by Hayley Romer. The Atlantic Monthly Group LLC. Washington.

Written by Annika Essmann

This article, by Annika Essmann of the M.A. RESET program, reviews the European Court of Justice’s decision on GMOs. It analyzes the ascription of responsibility and suggests an alternative framing for it. The paper was originally written as an assignment for the law module of the Technology and Society course.

Directed mutagenesis and the EU

Differentiation of the science and the responsibility of innovation


The Conseil d’État in France (the Council of State; the Conseil) referred questions to the European Court of Justice (ECJ): do organisms obtained by mutagenesis constitute GMOs, and which mutagenesis methods shall be excluded from Directive 2001/18/EC (the Directive), which aims to protect human health and the environment with regard to the release of GMOs (ECJ, 2018a, Art.4)? The ECJ ruled that all organisms obtained by mutagenesis are GMOs (ECJ, 2018a, Art.30). How is responsibility framed and distributed in this judgement?

Context of the decision

The Conseil referred its questions to the ECJ during proceedings in which it was specifically asked to rule on the methods of directed mutagenesis (ECJ, 2018a, Art.47). Generally, mutagenesis makes it possible to alter the genome of a living species without the insertion of foreign DNA. New mutagenesis techniques allow the production of organisms which are, for instance, resistant to certain herbicides (ECJ, 2018b). These new methods are called directed mutagenesis (ECJ, 2018a, Art.23).

The parties to the proceedings were a French agricultural union and eight associations (the associations) (ECJ, 2018b) on one side, and two French ministers on the other (ECJ, 2018a, Art.2). The associations contested the French legislation which was harmonised with the Directive: French law exempts organisms obtained by mutagenesis from the obligations laid down by the Directive. They argued that organisms obtained by mutagenesis, such as herbicide-resistant varieties, carry the same risks as GMOs (ECJ, 2018b). Consequently, they demanded (1) that mutagenesis be included as a technique which results in genetic modification, (2) that the cultivation and marketing of herbicide-resistant varieties obtained by mutagenesis be banned, and (3) that the ministers be ordered to introduce a moratorium on herbicide-resistant varieties (ECJ, 2018a, Art.20).

To understand these statements better, the Directive and French law are briefly summarised.

Directive 2001/18 and French law

The Directive aims to protect human health and the environment with regard to the release of GMOs (ECJ, 2018a, Art.4). Under the Directive, GMOs are organisms whose genetic material has been altered in a way that does not occur naturally (ECJ, 2018a, Art.5). When an organism falls under this definition, EU Member States must act in accordance with the precautionary principle (PP) (ECJ, 2018a, Art.7). Indeed, the PP is at the heart of the Directive (ECJ, 2018a, Art.4).

The Directive applies neither to organisms obtained through techniques which have conventionally been used or which have a long safety record (ECJ, 2018a, Art.3) nor to the techniques listed in Annex I B, one of which is mutagenesis (ECJ, 2018a, Art.10). This implies that organisms obtained through mutagenesis are not considered GMOs.

France largely harmonised their law with the Directive (ECJ, 2018a, Art.15-19). For instance, the definition of GMOs is comparable (ECJ, 2018a, Art.15).

Interpretation of Directive 2001/18

Against this legal background, the Conseil asked the ECJ whether organisms obtained by mutagenesis constitute GMOs and which of its methods shall be excluded from the Directive’s obligations (ECJ, 2018a, Art.25,1).

The ECJ ruled that all organisms obtained by mutagenesis are GMOs (ECJ, 2018a, Art.30). Its argumentation was two-fold. First, the risks of directed mutagenesis are similar to those of transgenesis, which is listed as a technique resulting in GMOs (ECJ, 2018a, Art.48). Second, if mutagenesis were excluded from the scope of the Directive, the Directive would fail to meet its protective intention (ECJ, 2018a, Art.51) and would further disregard the PP which it seeks to implement (ECJ, 2018a, Art.53). The ECJ thus viewed mutagenesis as resulting in GMOs, with only conventional mutagenesis techniques excluded (ECJ, 2018a, Art.54).

Framing and distribution of responsibility in the ECJ judgement

Description of the precautionary principle

The concept framing the judgement, indeed the entire Directive, is the precautionary principle (PP). Its basic premise is that governments should act to protect human health and the environment, regardless of the costs, when there is a threat of serious damage, even if scientific evidence of this harm is lacking. This principle thus imposes a substantive duty of care upon governments (Victor, 2001, p.315); at the same time, it is meant to help politicians make sound public policy decisions in the face of scientific uncertainty (Harremoës et al., 2001, p.15).

It is noteworthy that the PP differs from the prevention principle. The PP deals with uncertain risks; the prevention principle deals with known risks. Furthermore, the PP obliges states not to defer regulatory action despite missing scientific evidence, while the prevention principle obliges states to take action at the earliest possible stage (Van Calster, 2008, p.66). This implies that the PP is proactive in nature – action is taken before a dangerous situation occurs – whereas the prevention principle is reactive – action is taken after a danger has been identified (Tait, 2001, p.2). Interestingly, the application of the PP was partly an attempt to improve the ‘reactive’ prevention principle, and, even more interestingly, the European industry producing GMOs was the first to be regulated by the PP (Tait, 2001, p.3).

When the PP is applied, the main responsibility lies with policy makers, who must take precautionary measures, and with researchers producing GMOs, who must provide proof of safety. In the case of the ECJ judgement this means that Member States now also have to regulate directed mutagenesis techniques, and every institution working with these techniques has to abide by the Directive’s obligations. On the one hand, this can increase safety for society at large, but it also places a further burden on national policy makers and researchers. In short, the ECJ decision led to a shift of responsibilities towards policy and research.

Apart from these specific consequences of the PP, there are also general objections to the PP as a regulatory tool which are discussed next.

Criticism on the precautionary principle

Some critics oppose the PP because it interferes with principles of free trade, since it enables a nation to restrict the import of a product based on its subjective perception that the product poses risks (Victor, 2001, p.319-320). Indeed, the EU has come into conflict over precisely this issue: its precautionary approach to GMOs resulted in a dispute before the World Trade Organisation in 2006 (Peel, 2010, p.135-137) in which the EU was accused of trade protectionism (Morris & Spillane, 2008, p.500).

Other critics argue that the Directive will become an insufficient regulatory tool because it will struggle to account for similar risks posed by non-GM-based approaches, such as targeted mutagenesis. To them, it is problematic that the Directive’s current process of GM safety evaluation is not well balanced against that of ‘conventionally’ bred plants, partly because conventional methods are classified as such due to familiarity rather than scientific understanding (Morris & Spillane, 2008, p.502). In short, the Directive lacks a scientific foundation. This claim of inadequate distinction is especially interesting as it exemplifies what happened in the ECJ case: the Directive was unable to make a distinction, so the ECJ had to make one. This is remarkable because these worries were put forward in 2008, almost ten years before the court judgement.

Considering these oppositions, an alternative framing of the Directive and hence, the court judgement is proposed subsequently.

Alternative framing and distribution of responsibility

The PP as articulated in the Directive focuses on the end of a development process since it applies the PP to the release of a GMO, not its research. A different perspective further upstream might be more helpful.

The first concept that springs to mind here is ‘safer by design’. This approach aims to integrate knowledge of adverse effects into the design process of a new technology, with the intention of engineering these effects ‘out’ (Schwarz-Plaschg, Kallhoff, & Eisenberger, 2017, p.277). However, as promising as this might sound, the focus on safety could marginalise other issues, such as the societal desirability of innovation. It therefore remains important to explore other values to be included in an upstream evaluation (Schwarz-Plaschg et al., 2017, p.278).

Instead, the concept of responsible innovation (RI) appears more suitable. Owen et al. (2013) define it as a “collective commitment of care for the future through responsive stewardship of science and innovation in the present” (p.36). According to them, one major task of RI is to determine which futures we want science and innovation to create. This consideration, they argue, poses the challenge of accommodating a plurality of political and ethical views as input and then prioritising the different futures (Owen et al., 2013, p.37). To achieve RI, four dimensions should be integrated, namely:

  • Anticipation: describing and analysing (un)intended impacts through foresight
  • Reflection: reflecting on underlying purposes, uncertainties and risk
  • Deliberation: listening to perspectives from publics and diverse stakeholders
  • Responsiveness: setting the direction of innovation through reflexivity and learning
    (Owen et al., 2013, p.37-38)

This approach appears more useful because it not only considers anticipation, which is characteristic of the PP; deliberation also provides input for this anticipation, while reflection and responsiveness hold anticipation in check. For instance, a GMO regulation like the Directive would not simply accept precaution against hazards to humans and the environment as a regulatory measure whose perception can be manipulated by pressure groups benefiting from tighter regulation (Morris & Spillane, 2008, p.503). Instead, it would additionally consider other viewpoints and reflect upon the underlying purposes of those holding these views.

However, this model has a deficit which Ruggiu (2015) outlines. He states that the socio-empirical version of RI – Owen et al.’s model can arguably be considered one such version – is problematic in two ways. First, participation (deliberation) does not necessarily lead to more democratic or better outcomes, because only particular stakeholders in a particular mode of engagement can participate (Ruggiu, 2015, p.227). Second, the relationship between future visions – one of the main tasks of RI according to Owen et al. (2013, p.37) – and regulation is difficult, because one would still use current regulatory frameworks for the development of unforeseen processes; this, in turn, perpetuates inadequate regulations (Ruggiu, 2015, p.228).

To avoid these problems, Ruggiu argues that human rights should be considered first, as principles that are already legitimised and can guide RI (Ruggiu, 2015, p.229). Specifically, human rights belong to the person, while fundamental rights belong to the citizen (Ruggiu, 2015, p.230). This inclusion of human rights is important because, first, the socio-empirical version of RI does not necessarily produce respect for fundamental rights (Ruggiu, 2015, p.231). Second, nothing in EU law leads to the prioritisation of fundamental rights, meaning that in cases concerning innovations, such as directed mutagenesis, fundamental rights might be placed after the promotion of public interests, such as techno-scientific progress (Ruggiu, 2015, p.230). Instead, human rights should be included in the considerations of RI because they can ‘override’ public interests.

In sum, the alternative framing for the Directive proposed here is the RI model by Owen et al. (2013) complemented by human rights as principles guiding the deliberation and reflection process.

Personal opinion

I generally support the decision of the ECJ. First, I am convinced that one should always consider scientific development and make distinctions, such as between random and directed mutagenesis. I think it is paramount to promote consistency, especially in policy, by treating similarly risky endeavours equally. Second, I think that when considering small details, for instance specific GMO techniques, one should not forget the larger purpose. Indeed, I found it remarkable that the Directive’s objective to protect humans and the environment ‘overrode the Annexes’, and I am convinced that this is how it should be.

Despite my general agreement, I also have two points of critique. First, I find the definition of the conventional GMO methods exempted from the Directive’s obligations particularly vague. What counts as conventional, and according to whom? I think this could be formulated more concretely to prevent hazardous techniques from passing as conventional. Second, I think the role of experts is too prominent. Certainly, I value scientific progress, as mentioned above, but the policy process seems to be largely based on their expertise. This gives experts enormous power although their knowledge is far from objective (Jasanoff, 2003, p.160).

In summary, I support the alternative framing, and I would complement this view by sharpening the definition of conventional GMO methods and monitoring scientific influence.


ECJ. (2018a). Confédération paysanne and Others v Premier ministre and Ministre de l’Agriculture, de l’Agroalimentaire et de la Forêt (C‐528/16). Retrieved from

ECJ. (2018b). Organisms obtained by mutagenesis are GMOs and are, in principle, subject to the obligations laid down by the GMO Directive [Press release]. Retrieved from

Harremoës, P., Gee, D., MacGarvin, M., Stirling, A., Keys, J., Wynne, B., & Vaz, S. G. (2001). Late lessons from early warnings: the precautionary principle 1896-2000: European Environment Agency.

Jasanoff, S. (2003). (No?) Accounting for expertise. Science and Public Policy, 30(3), 157-162.

Morris, S. H., & Spillane, C. (2008). GM directive deficiencies in the European Union: the current framework for regulating GM crops in the EU weakens the precautionary principle as a policy tool. EMBO Reports, 9(6), 500-504.

Owen, R., Stilgoe, J., Macnaghten, P., Gorman, M., Fisher, E., & Guston, D. (2013). A framework for responsible innovation. In R. Owen, J. Stilgoe, P. Macnaghten, M. Gorman, E. Fisher, & D. Guston (Eds.), Responsible Innovation: Managing the Responsible Emergence of Science and Innovation in Society (pp. 27-50). Chichester, West Sussex: Wiley & Sons.

Peel, J. (2010). Science and risk regulation in international law. Cambridge University Press.

Ruggiu, D. (2015). Anchoring European governance: two versions of responsible research and innovation and EU fundamental rights as ‘normative anchor points’. NanoEthics, 9, 217-235.

Schwarz-Plaschg, C., Kallhoff, A., & Eisenberger, I. (2017). Making Nanomaterials Safer by Design? NanoEthics, 11, 277-281.

Tait, J. (2001). More Faust than Frankenstein: the European debate about the precautionary principle and risk regulation for genetically modified crops. Journal of Risk Research, 4(2), 175-189.

Van Calster, G. (2008). Risk regulation, EU law and emerging technologies: Smother or smooth? NanoEthics, 2(1), 61-71.

Victor, M. (2001). Precaution or Protectionism–The Precautionary Principle, Genetically Modified Organisms, and Allowing Unfounded Fear to Undermine Free Trade. Transnational Law, 14, 295. Available at:

Written by Anna Ajlani

Anna Ajlani writes on high-tech assistive technologies in educational contexts as a means of integration. What are the politics around disabilities and assistive technologies in schools? What is the role of traditional infrastructures, and how can technology advance inclusion and participation?

Disrupting Normality

Examining the Trajectories of Assistive Technologies in Inclusive Education

For decades, people who experience disabilities have been fighting for recognition and participation in a society and infrastructure designed for the able-bodied. Although German legislation and policy are supposed to guarantee people with disabilities participation in all social spaces, the Committee on the Rights of Persons with Disabilities, which monitors implementation of the United Nations Convention on the Rights of Persons with Disabilities, criticized Germany in 2015 for not sufficiently adhering to the convention, which the country had ratified in 2009 (Committee on the Rights of Persons with Disabilities 2015). One of the areas affected by this slow progress of inclusion is education. As of 2014, every tenth school for general education in Germany was a “Förderschule”[1] (Autorengruppe Bildungsberichterstattung 2014: 170). Children and adolescents with disabilities are still at a disadvantage compared to their able-bodied peers. As Gibson et al. (2017: 498) summarize, young people with disabilities “engage in less diverse leisure activities, more ‘passive’ recreational activities (such as watching television) and fewer social activities”. In light of the current debate about equality and feasibility in the context of inclusive education (see e.g. Die ZEIT 2018), this article will discuss which barriers continue to hinder inclusive education and how assistive technologies can be employed to empower children and adolescents with disabilities in integrative processes. First, the definition of disability and the infrastructures surrounding and enacting it are discussed. Following that, I will work out the politics and power relations that play an often invisible role in the trajectories assistive technologies take.
Finally, the results from the preceding discussion will be applied to the case of inclusion in the German education system in order to identify the requirements for an empowering introduction of assistive devices into classrooms. Assistive devices exist in both low- and high-tech forms (see Campbell et al. 2006); this article focuses on high-tech assistive devices, as they require more training as well as environmental adjustments. This article includes all kinds of disability, ranging from learning to motor disabilities, under the umbrella term, even if only some can be explicitly named and discussed.

Deconstructing (dis-)ability
The boundary between abled and disabled may be widely perceived as “natural” and unpolitical, but as Blume et al. (2013: 99) clarify, the definition of disability is “the outcome of a complex medico-bureaucratic process” with the goal of making (dis-)abilities manageable and governable. Admon-Rick (2014) elaborates on the infrastructures that have been constructed to encode, calculate, and classify disability using the example of the Azuhei Nehut, the Israeli Disability Percentages System. To her, the biomedical understanding of disability is a “form of technoscience” designed to simplify heterogeneity and, at the same time, embed cultural values into numerical figures, thus “stabilizing their existence” (ibid: 109; 112). Admon-Rick (2014: 114f) further points out that while the person evaluating the level of disability is usually assigned the role of a neutral observer, the component of human interaction is unveiled by the emergence of the attribute “ugly” when classifying scars. The same holds for the rare 100% rating that blindness receives: the emotionally charged notion of tragedy associated with being or becoming blind is projected and translated into an exceptionally high rating (ibid: 113f). Nonetheless, the high rating can also partly be attributed to the strong barriers non-sighted people face in a largely visually oriented infrastructure. Looking at these structures from a large-scale perspective based on the work of Star (1999), the classification of disabilities would not be useful if it weren’t for the economic and social orders requiring it. Modern lifestyles are formed around an education-employment-retirement pipeline, which emphasizes productivity that is exchanged for the currency needed to afford basic life necessities such as rent and food.
If these expectations of productivity, which is itself classified and normed, cannot be met due to disabilities, human rights legislation requires that some form of compensation be put in place to ensure that a person’s needs are met (Admon-Rick 2014: 118). In Germany, these compensations are categorized through social insurance legislation in the SGB IX. Without these larger systems in place, a classification and numeration of disabilities would not have any meaning. A strictly biomedical interpretation of disability frames it as a personal attribute rather than as the result of barriers and inaccessibility created by exclusionary infrastructures, as the WHO (2013) suggests. Mankoff et al. explain their advocacy for a postmodern model of disability, which includes both a medical and a social perspective, as follows:

Because some conditions may require medical attention and involve serious secondary problems, it is worth understanding (and perhaps improving upon) the medical model of “impairment.” At the same time, social models of disability should not be abandoned, as they reduce the risk of “blame the victim” social policies […]. Finally, a cultural understanding of disability is needed to avoid the mistaken assumption that the ultimate goal is “normality.” (Mankoff et al. 2010: 4)

Admon-Rick (2014: 121; 124) describes a duality of disability percentage systems: on the one hand, the individuals and the ways in which they are affected disappear behind a number; on the other, she remarks that receiving a classification symbolizes “recognition, a civil identity, and a gateway to receive government subsidy”. Wise (2012: 169) illustrates how the relationship between technology and disability can be equally ambivalent, using the examples of the telephone and computer or smartphone screens: both were massive societal innovations but at first exacerbated accessibility barriers for those with aural and visual impairments. The next chapter will show that whether technological innovations alleviate or worsen barriers for people with disabilities depends highly on the inclusion of users in a device’s early stages of development.

The Politics of Assistive Technologies
Ravneberg (2012: 261-267) breaks down the four stages through which users acquire a new assistive device, following a framework provided by Silverstone et al. (1994): first, there is an appropriation phase in which a device transitions from a market commodity to an object, followed by the objectification phase in which users evaluate the usability and aesthetics of the device. Here, gender inequalities unravel in Ravneberg’s (2012: 265f) re-narration of an interviewee’s experience with a wireless alert system enclosed in a wristwatch, which she perceived as clearly designed for men given its size and black color. When the device was introduced in a “female version”, it had lost its functionality as a wristwatch. The incorporation phase covers the stage in which users include the device in their everyday lives. If a device is so incompatible with users’ needs and expectations that it ends up unused, the potential consequences include a decrease in safety and availability (ibid: 266). The final conversion phase is characterized by the meanings the device takes on through both the users and their environments (Ravneberg 2012: 266f).

Assistive technology is not always welcomed by or suitable for those with disabilities. Blume et al. (2013: 100) recall the case of the cochlear implant, which was rejected by the deaf community, who saw it as another instrument for forcing spoken communication on them instead of recognizing sign language as a valid form of communication. Ravneberg (2012: 262) highlights the highly medicalized character of assistive technologies and notes that their design is often only re-shaped by users long after the creation and implementation stage. A feature that distinguishes assistive technologies from other high-tech devices is the way their market is composed. Assistive devices are often part of public disability benefits. In light of the “slimming” of welfare state funds taking place globally, users are rarely free to choose from a range of products, but are assigned “the most reasonably priced types of assistive aids that meet the user’s needs“ (ibid: 263). Keeping the correlation between disability and poverty (Hughes 2013) in mind, a lack of autonomy becomes visible in a social structure that places a high value on individual freedom and free markets. Ravneberg (2012: 263) suggests that this could also hinder flexibility and innovation. She describes another form of paternalism occurring in the distinction between “professionals” and “end-users” in producers’ advertising and websites (ibid).

According to Lupton and Seymour (2000: 1855), the definition of what constitutes assistive technologies is blurry: people with disabilities have named technologies such as air conditioning and keypads as relevant parts of their everyday lives, even though they were not developed with the key purpose of supporting those with disabilities. In order to capture the relation(s) that develop between a user and an assistive device, Winance (2006) draws on the Actor-Network Theory approach to map and depict the relationship of human and non-human actors in the case of people with neuromuscular impairments and wheelchairs. One key insight her work offers is that the disability and its contexts are fluid and highly susceptible to change. She describes the process of the user and the wheelchair adjusting to each other; through years of use and unique wear and tear, the wheelchair is not merely a wheelchair, it has become this specific user’s wheelchair that is like no other (Winance 2006: 58). Likewise, the user adjusts just as much to their device during years of usage. Winance refers to this connection as an emotional and material community (ibid). She offers an interpretation of assistive technology devices as mediators enabling a person who wants to perform an action to execute that desired action (Winance 2006: 60). Through Winance’s perspective, the social perception of the “body […] coded as a dysfunctional body” (Lupton and Seymour 2000: 1852) gives way to an understanding of the ability or disability to perform one certain task at a specific point in time (Winance 2006: 66). She concludes that “the person is made through the interactions he or she has with other entities (human or nonhuman)” (ibid: 67).

A similar image emerges when we turn to the trajectories of computers. As pointed out by Wise (2012) above, these technologies initially heightened barriers for people with visual impairments due to their highly visual design. It seems reasonable to assume that computers were invented for use by “everybody”; however, people with disabilities can become “invisible” through isolation and a stabilized perception of the “normal” body. The solutions that have emerged from this problem are copious; examples that come to mind are the “Be My Eyes” app[2], which both reduces isolation and enhances independence for people with low or no vision, and the technology that the first deaf-blind Harvard graduate, Haben Girma[3], uses to communicate and advocate for the disabled community. It is important to mention that these numerous advancements in assistive technology have also been brought forward by the emancipatory movement of people with disabilities, who have demanded a voice and space inside society for decades (see Mankoff et al. 2010). As Mankoff et al. (2010: 6) make clear, this perspective is indispensable, as “screen readers only work well if web pages are designed with them in mind”. After exploring the trajectories and ramifications assistive technologies can take on, the next chapter provides a framework for an adoption of assistive technologies into inclusive education that proves beneficial to the end-users.

Assistive technologies – A pathway into inclusive education?
At the time of writing this article, there were few studies to be found concerning a correlation between inclusion in education and assistive technologies; instead, a large part of the available literature centered on assistive technologies for the elderly. While this research tendency is understandable amidst the significant demographic changes that will challenge established care systems and policies in the near future, the progress of inclusionary education deserves more attention given the great impact education has on later life chances and outcomes. Despite having ratified the UN CRPD almost ten years ago, Germany continues to have one of the most segregative special education systems worldwide and has made little progress in educating children with and without disabilities together (Biermann 2019: 19f). Not all “Förderschulen” offer an official degree to graduate with (Autorengruppe Bildungsberichterstattung 2014: 181). Studies have determined a stark difference in acquired skills between students with disabilities who were schooled separately and those who attended integrative schools (Autorengruppe Bildungsberichterstattung 2014: 180). The authors do point out that this cannot be attributed exclusively to inclusion; other factors such as the severity of a disability certainly play a role in this outcome (ibid). Yet there are also positive developments observable in Germany in relation to measures taken to increase participation: the German disability sports youth organization has seen a significant increase in members under the age of 21 since 2001, and universities are making constant progress in removing barriers for students with disabilities (ibid: 184f). One example is the DoBuS department of the Technical University of Dortmund[4], which also offers a pool of assistive devices to students.

Drawing on Winance (2006), I argue that assistive technologies that remove or weaken barriers hindering children with disabilities from efficient learning can take on the role of “carriers” that ease those children’s “crossing” from segregated outgroup to ingroup in the setting of schools. This endeavor cannot easily be accomplished by merely supplying schools and/or families with the available technology, though; rather, a political component is needed to overcome the “structurally conservative” argumentation used against inclusionary measures by state actors and professional representatives (Biermann 2019: 22). Biermann (2019: 22f) unveils the underlying barriers created by conservatism around institutionalized knowledge. Debates in Germany only go as far as asking how special education expertise can be transferred into general education knowledge, while seldom suggesting the abolition of school segregation altogether (ibid). Assistive technologies might aid a reform of the German educational system by balancing out differences in learning speed and concentration capacities, as well as offering support for potentially challenging activities such as reading or writing through zoomable screens and customized keypads. A 2017 press release from the Ministry of Education and Research (Bundesministerium für Bildung und Forschung 2017) announces funding for digital tools to enable mobile attendance of lessons by people with disabilities, yet this funding is confined to occupational education and seems to be tied to cooperation with employers.

The structurally conservative standpoints that Biermann (2019) describes become quite distinguishable in the case study of the Athens Metro by Galis and Lee (2013). The authors introduced the terms distortion, estrangement, rejection, and disruption into the aforementioned concept of mapping all involved actors into a network in which they take on individual roles. These processes do not have to happen in chronological order (ibid: 154ff). The planning process of the Athens Metro demonstrates how one group of actors can increase their power by distorting the interests of another actor group and subsequently framing their agenda as irrelevant. In this case, distortion was accomplished by individualizing and depoliticizing disability. To counteract the disability movement’s efforts to participate in the Metro planning, a newly elected conservative government laid out a model of institutionalization under the guise of helping people with disabilities. The model envisioned rehabilitation centers and “houses equipped with accessibility technologies” instead of granting people agency and independence by making their environment accessible (Galis and Lee 2013: 160f). The ambiguous nature of assistive technologies becomes amplified in these events; they are not exclusively tools for inclusion and the removal of barriers but can confine people to their homes if the surrounding infrastructures remain inaccessible. The case of assistive technologies becoming weaponized in a power struggle is applicable to the field of education as well: actors who oppose a school system reform could use technology that enables students to attend classes from home as a method of stabilizing segregation, arguing that school buildings do not need to be made accessible if students can participate online. This emphasizes the relevance of the meanings assigned to artifacts and relationships in the debate that Biermann (2019) addresses.
Galis and Lee (2013: 165) further point out how alliances formed in order to push the disability movement out of the participation process, citing economic and aesthetic concerns and claiming that passengers with disabilities were too few to take into consideration. The disability movement refused to succumb to this rejection and attempted to “reproblematize” accessibility by creating a very public controversy. This reproblematization proved successful but also resulted in ontological boundaries being set by Metro employees and architects, who monopolized their knowledge and rejected disabled persons’ expertise as irrelevant, thus denying them “technopolitical participation” once again (Galis and Lee 2013: 166-170).

Having established the current state of affairs surrounding the German education system and the ambiguity of assistive technologies, the rest of this article will suggest a framework of optimal conditions for adopting assistive technologies to lessen segregation and expand agency. Campbell et al. (2006: 3) name two important factors: first, assistive technologies must include the services and human assistance needed to receive and adopt them, and second, children should be taught the use of these devices in their early years, before they enter the education system. Given the shortage of teaching personnel in Germany, which is cited as one of the reasons for opposition to inclusion in schools (see Die ZEIT 2018), teaching children with disabilities to use assistive devices only upon school entry would further strain the time teachers have to assist all students. Many of the studies discussed by Campbell et al. (2006) showcase how training in the use of assistive devices can be integrated into play activity. This can smooth the acclimatization process and let children learn to use the devices without directly associating them with (dis-)abilities. Valadão et al. (2011) confirm this, adding that assistive technologies – in this case, robotics – can counteract “learned helplessness” and encourage children with severe impairments to explore their surroundings more. Robots in particular, but not exclusively, should have an age-appropriate appearance in order not to trigger the “uncanny valley” effect (Watson 2014). Further design considerations include unobtrusiveness, size, and lightweight materials for easier transportation (Valadão et al. 2011).

The shortage of additional teachers that an inclusion process in “regular” schools might require is, as mentioned above, an issue of policy and public funding; however, assistive technologies in classrooms may bridge potential gaps in learning speed and concentration and enable teachers to distribute their attention and assistance more equally. Furthermore, if an education reform were realized in the form of abolishing special education schools, their expertise and personnel could be transferred into newly inclusive schools (Biermann 2019). Given the evidence that children with disabilities who learn together with able-bodied peers see better outcomes in skills acquired (Autorengruppe Bildungsberichterstattung 2014), integrating special education classes into regular school buildings and gradually dissolving them as assistive technologies are implemented may present a more easily feasible first step.

The goal of this article was to disrupt the linear narrative of disability as a deficit by exploring how traditional infrastructures construct exclusion and how assistive technology can act as an antagonist to barriers. There is a wide array of research concerning the adoption of assistive technology devices by the elderly; however, few studies and papers focus on the inclusion of children with disabilities into “regular” schools accompanied by assistive technologies. Campbell et al. (2006: 9) bring this issue up as well, stating that “[future] studies should align with current recommended practices and test intervention effectiveness not just for performance or improvement of isolated skills but for promoting children’s successful participation within a variety of everyday activities and routines”. Adding to this, Ravneberg (2012: 268) determines that the “aesthetical side of design, user satisfaction and user abandonment of devices are important but neglected issues”. As (dis-)ability and participation are also influenced by other social positions such as class and gender, it is essential to approach the topic with an intersectional perspective. The accessibility of technology for people experiencing disabilities should not be an afterthought. As Mankoff et al. (2010: 6) mention, a universal usability approach not only grants inclusivity but can spark innovation and improvement in existing technologies, which further demonstrates the blurred lines between assistive and “regular” technologies. Three areas which require change can be identified: first, there is a lack of research on the relationship between assistive technologies and inclusive education. Second, the discourse about school inclusion, as Biermann (2019) has described it, should give more space to members of the disability movement and broaden to consider a reform of the segregated school system.
The current segregated education system in Germany does not conform to Article 24 of the UN CRPD and requires reform in order to fulfill the commitment not to exclude anyone from the regular education system due to disability and to enable people with disabilities to participate fully in society (see Autorengruppe Bildungsberichterstattung 2014: 157). Third, people with disabilities should be included in the early creation stages of new assistive technologies. Examining medical technologies in general, and assistive technologies in particular, through the perspective of Science and Technology Studies can be especially valuable, as it unveils underlying power imbalances in the long tradition of critical disability and postcolonial studies. STS might build a bridge between these disciplines and provide considerations that help ensure that the growing possibilities to empower people with disabilities through technology, especially in the field of education, do not remain unexplored due to inflexible institutionalized structures.


[1] A school for children with special needs and disabilities which is rather segregative in nature.

[2], accessed 03/08/2019.

[3], accessed 03/08/2019.

[4], accessed 03/09/2019.

Admon-Rick, G. (2014). Impaired Encoding: Calculating, Ordering, and the “Disability Percentages” Classification System. Science, Technology, & Human Values, 39(1), 105–129.

Autorengruppe Bildungsberichterstattung. (2014). Bildung in Deutschland 2014: Ein indikatorengestützter Bericht mit einer Analyse zur Bildung von Menschen mit Behinderungen. Bildung in Deutschland: Vol. 2014. Bielefeld: Wbv, Bertelsmann.

Biermann, J. (2019). „Sonderpädagogisierung der Inklusion”: Artikel 24 UN-BRK und die Diskurse über die Entwicklung inklusiver Schulsysteme in Nigeria und Deutschland. Aus Politik Und Zeitgeschichte, 69(6-7), 19–23.

Blume, S., Galis, V., & Pineda, A. V. (2013). Introduction: STS and Disability. Science, Technology, & Human Values, 39(1), 98–104.

Bundesministerium für Bildung und Forschung. (2017). Digitale Medien als Helfer bei der Inklusion. Retrieved from

Podcasts today are omnipresent; for many, they have become the preferred, on-demand radio for a morning or afternoon commute. They are everywhere for good reason: listening to ideas in the form of interviews and stories can be an effective way to learn about and understand a new topic.

For the first time at the MCTS, Professor Ruth Mueller's seminar "Telling Responsible Stories – Telling Stories Responsibly" introduced students to the ins and outs of storytelling practices from an STS perspective. By studying storytelling practices, students examined how sustainable and responsible solutions are presented and constructed in contemporary society. The study of these grand narratives, inspired by the work of STS scholars such as Donna Haraway, questions how stories of solutions can be political, economic, or technological in nature. In the seminar, students were encouraged to rethink practices of storytelling and to tell the "story otherwise" in the narrative genre of podcasts. As the final project for the seminar, students produced their own podcasts, which are available to you here at Student Voices.

How to Contribute to Student Voices

Awesome to see that you are interested in contributing to Student Voices! There are two ways to support the group. You can either submit your writing to the blog or, even better, become a member of the working group. 

Submit your Writing

Become a Member

Did you write an assignment last semester that you think the world needs to read? Are you attending an event related to the MCTS/STS and want to report on it? Or is there a topic you have always wanted to write about? The format of Student Voices is quite open, so any type of writing is welcome!

We have regular open calls, but you are always welcome to submit your pieces throughout the semester. To do so, please send an e-mail with the subject "Submit" to receive instructions on the required format as well as our submission form, which you then send along with your text file to

Are you a student from the MCTS, the TUM, or another institute around Munich? Are you interested in STS-related topics? Do you want to get active and involved with Student Voices? Great, then let's get in touch and see what exactly you would like to take responsibility for.

As part of the Student Voices working group, members participate in regular meetings every other week (usually Wednesdays). Tasks for the group include writing, editing, publishing, web design, and more. If you are interested in joining, just drop us a message at or meet us in person during our info session. The next one will be on November 19th, 7:00 – 8:30 pm!

Please note: The Student Voices webpages are run and editorially supervised by STS and RESET master's students only. Any content or views represented on these webpages are personal and belong solely to the authors. They do not represent those of the MCTS as a whole.