Produced by Coding Rights. In many places around the world, facial recognition technologies are gradually being deployed in several moments of our lives: be it in surveillance cameras installed across the streets we normally walk around or when we need to authenticate our IDs to allow access to social services or to enter banks and other private services. But what happens when we use a binary algorithm to control and influence non-binary and very diverse lives and experiences? Who gets excluded? What historical oppressions are being exacerbated and automated?
Vanessa Koetz and Bianca Kremer, both Coding Rights fellows on Data and Feminisms, talked to two Brazilian experts: Mariah Rafaela Silva, a scholar and activist in transgender rights, and Pablo Nunes, a black scholar and activist specializing in public security and racism. Through this podcast, we will discuss the risks of implementing this kind of tech without an informed public debate about its potential consequences. We even spoke to an algorithm, Dona Algô (Misses Algô… short for algorithm)! Are you curious? Press play!
Links in this Episode:
Research and Interviews: Bianca Kremer and Vanessa Koetz
Concept and script: Juliana Mastrascusa and Joana Varon
Interviews: Pablo Nunes and Mariah Rafaela Silva
Fictional Character: @Malfeitona
Editing & Mixing: Ergi Shkëlzeni
Visual Design: Ura Design
Executive Producers for Privacy is Global: Laura Schwartz-Henderson and Laura Vidal
Sponsored by Internews and Heinrich-Böll-Stiftung Washington DC
Welcome to Privacy is Global, a podcast brought to you by Internews and Heinrich Böll Foundation. Through this podcast, we will look at what is happening in privacy and data protection debates all around the world.
In each episode, we will convene discussions on the development of Privacy politics and policy in Africa, Latin America, Asia, and the Middle East. As part of this podcast, we will be talking to activists in countries around the world. So, hear the stories behind the fight for better and more protective data-privacy laws and policies.
I’m Laura, Laura Schwartz-Henderson.
And I’m also Laura, but I’m Laura Vidal.
Welcome to the party!
Ok, Laura, let’s start from scratch. What is Privacy for you?
Privacy to me is being able to navigate my world and knowing I have some control over how I’m seen and perceived and over my own identity, I think.
Well, I love that you say “I think”, because Privacy is a lot of things for a lot of people.
Exactly! It’s also freedom. Freedom to navigate your own space and determine who you are, and not have the world determine who you are for you.
I once heard something that I liked a lot and that’s: “Privacy is an enabling right. So, it might be a little bit abstract, it might be a little bit invisible, but Privacy is what lets you participate or not participate in your own terms and in your own ways in whatever is happening around you.”
Yeah, and I also think, it’s so interlinked with your right to express yourself because if you feel like you have Privacy, you are less likely to self-moderate and you are more likely to navigate your world, searching online or saying something to your friend that’s authentically you, rather than worrying about where that information will go and what it means.
The thing is that now that we are so dependent and so used to these new technologies, the question about Privacy has changed quite a bit. Like, Privacy today, I feel is not the same thing that it was 30 years ago.
Absolutely! I mean, you can be sitting in your room alone and have lots of information being collected on you. They can know where you are and essentially what you’re doing, who you’re talking to, without having to go anywhere. And it’s easy for us to think it’s so small, like, every tiny piece of data doesn’t feel big. And it feels like a part of a big system that you can control and so, once it starts it’s hard to stop.
That’s a good point. There’s another thing. We’re really used to hearing debates and conversations and initiatives around data privacy that come from the same places that have really good representation. So, we know a bit already about what’s going on in conversations about Privacy in the US, and also in Europe. And that’s really important, especially in the US since a lot of the technologies were created having these populations in mind but these technologies are being used in so many places by so many people and in so many different ways, which means that there are many conversations about privacy that we are not really hearing about.
That’s exactly why we wanted to do the series. As everybody knows, there’s a GDPR in Europe, which is changing the way that a lot of people talk about privacy regulations. There are conversations about what a Federal Privacy Legislation could look like in the US and state legislation but there are a lot of things happening all over the world, as we all come to terms with this new technological and social way of living. And other stories that we need to hear from activists who are pushing for new laws and pushing for protection and trying to force us to think critically about how we give away our data and the relationships we have with technology.
Yeah, absolutely! I also think that governments are very much talking to each other and taking notes in whatever they do. So, I’m thinking that activists and people that care about these issues should also talk to each other and inspire each other in what they are doing.
Inspire each other but also really understand what is contextually specific. If we’re just copying regulations and laws, there’s a lot of differences in how governments operate and what people are capable of, and how things can be enforced. And so, it’s important to learn from the challenges that other jurisdictions are having, as we learn and grow and all collectively figure out how to govern in this new world.
Yeah. Why don’t you tell us a bit more about where this idea came from?
As part of our project that we’re running at Internews called the ADAPT project, we’re working with a number of organizations to think through what privacy advocacy looks like in each of these countries, knowing that a lot of people are starting at different points. Public awareness on these issues might be lower or higher in some places, there might already be legislation passed, and then the real challenge is figuring out how to enforce those regulations, how to make sure regulatory bodies are independent, how to make sure they’re well funded, how to deal with the politics of privacy laws. And all of these things are constantly changing and evolving and policymakers need to play “catch up” pretty much all the time and the activists are there, at the forefront of all of this, and we really wanted to hear their stories and their successes and highlight the work that they’re doing.
So to our listeners: You’ll be hearing from us, but we will be passing the mic to many amazing organizations around the world for each episode. You’ll hear from activists and experts as they take us to explore specific issues or as they help us understand the context of a particular country. For this first episode, we’re passing the mic to Coding Rights and we’re going to Brazil. Yeah, you’re going to have an amazing experience listening to our wonderfully innovative friends from Coding Rights talk about how data privacy impacts gender identity, how it impacts racial discrimination, and also the colonial history in Brazil and what that means for surveillance and privacy. They’ll also show us what it would be like to talk to an algorithm, and what the ideas behind algorithms mean for how they function and how they scale across societies.
I can’t wait to hear all that. Let’s go!
It is not very hard to imagine a reality where we use our faces as a means of identification to enter a building or to guarantee a fare reduction on public transportation. Maybe this is already happening where you live.
Way beyond cute filters on social media or the unlocking function of mobile apps, the implementation of facial recognition technologies is permeated by important debates on gender, race, and territory. I am Bianca Kremer, Data and Feminism fellow at Coding Rights.
From devices to bodies, in this episode, we will talk about facial recognition, and the dangers of applying this tech, under narratives of public security, or for authenticating identities, particularly of people who do not fit in a privileged, white, cis, hetero, normative profile.
I am Vanessa Koetz, also a Data and Feminism fellow at Coding Rights. Oh, and don’t worry if you don’t know much about facial recognition. This episode is full of daily life examples and we will also explain some key concepts about it.
If you have been following these debates for a while, take advantage of the next few minutes to get to know some very important research and productions that are being made here, in Brazil. The idea is to collectively advance in this debate. After all, it affects all of us as a society, doesn’t it?
3. Miss Algô Airport
Another calm day of work for you, Miss Algô, the airport assistance algorithm. Nothing strange, nothing new. Let’s go on to the next person.
“Male. White. Age: 20.”
Oh, what a nice young man.
“Single. French. Reason for travel: Study abroad.”
A dedicated student. This one may pass. Great profile to enter the country. Next.
Oh, I like her profile.
“Accountant. Age:30. German. Reason for travel: Business.”
Seems like a nice woman. Don’t even need to ask anything. Let her in. Next.
Wait, what? Who is this person? System, investigate him!
“Math teacher. Age: 35. Colombian.”
What is he trying to do here? Why is he threatening our borders?
“The reason for travel: Attending an academic event.”
Well, and who invited this guy over here?
“He was formally invited by the university.”
Oh, really? He still looks pretty suspicious to me. I cannot let him through without an extra check. He needs to present proof of invitation and then maybe, maybe I might allow him in.
“Well done. This is Algô. Another day keeping our borders secure.”
Oh, assistant. Thank you. You know, I’m just doing my job as I was programmed to do.
4. Intro Pablo
You have just listened to Miss Algô or, in Portuguese, Dona Algô, a fictional character that represents the algorithms of our everyday lives. She is our special guest for this episode.
Facial recognition technologies have proven to be powerful tools for mass surveillance that are being deployed under narratives about public security and innovation. But what are the consequences of deploying technologies that actually serve to reinforce discrimination against historically oppressed and persecuted populations?
That is precisely why, in order to discuss facial recognition, we need to understand the contexts, in which this tech is being deployed, and the bodies, and territories, that are being targeted by it.
For instance, Brazil has the third-largest incarcerated population in the world. Among the more than 770 thousand people imprisoned in the country, approximately 67% are black and brown. That is to say that two out of every three people imprisoned in Brazil are black or brown. This rate expresses a context of historical exclusion, a result of colonial violence, which, unfortunately, is likely to be similar in several places around the world.
In 2020, the unemployment rate among the black population in Brazil was 71% higher than the rate for the white population. Considering the intersectionality of identities, it is further estimated that unemployment could reach 40% in the LGBT+ community and 70% in the trans population.
A lot of data, yeah, we know. We are just trying to highlight that it is in this unequal and oppressive context that facial recognition is becoming popular in Brazil. Particularly since 2018, these technologies began to be widely implemented for policing public spaces from several regions in Brazil, especially at large metropolises like São Paulo, Rio de Janeiro, and Salvador.
In the northeastern Brazilian state of Bahia alone, the system flagged more than 200 suspects. And we already know that some people ended up at the police station by mistake. While the number of arrests is increasing, the amount of information about the application of these systems by governmental agencies decreases. And so, quietly, on the sly, the use of facial recognition for policing is being normalized. And even defended.
To talk about facial recognition and public security in Brazil, I interviewed Pablo Nunes, Ph.D. in Political Science and researcher on issues related to public security in Brazil for 13 years. Since 2018, he has been coordinating the Center for Security and Citizenship Studies (CESeC), where he develops research and activism on human rights, public security, policing and the fight against racism. He coordinates the Security Observatories Network, an initiative that brings together Observatories in five Brazilian states. He conducts research on media, violence, social media, and new technologies allied to policing. He is also the founder of Panopticon: a project within CESeC to monitor the adoption of facial recognition technology by public security institutions in Brazil.
5. Interview 1
Hi Pablo, thank you so much for joining us. You have an impressive trajectory on the debates about race and public security in Brazil, and from 2018 onwards facial recognition started to be part of your analysis as well. What’s the part you’re focused on in this specific field?
Pablo – Answer 1:
I’ve been researching and working on different aspects of public security since 2008. Since 2013, I’ve been working at the Center for Security and Citizenship Studies. CESeC is an institution with a 21-year history researching public security, violence, and media in Rio de Janeiro, as well as other issues related to public policies on security and how society debates and influences them. In 2018, we witnessed the federal intervention in Rio de Janeiro. It was a measure of power in which the federal government took upon itself part of the role of the state governor, at the time Luiz Fernando Pezão, who was left out of decisions related to the public security office. The federal interventor was a military man, General Braga Netto. Now, Braga Netto is the Defense Minister of the Bolsonaro government. Under his command, Rio de Janeiro witnessed several police operations, numerous cases of violence, massacres, and other human rights violations during the ten months the intervention lasted.
As the federal intervention was something that had never happened in Brazil, a civil society group got together to think about ways to monitor and follow up on this process. We did that so we could apply political pressure in cases of violations and violence, as well as cases of corruption involving public money. We created the Intervention Observatory, an initiative of CESeC, which existed for the ten months of the intervention. Our goal was to monitor the day-to-day of public security policies and create indexes that were not available among the data released by the police.
I addressed this point because Rio de Janeiro is a state where more than 10 police operations are held in slums and peripheral areas. Despite these operations being the core of the public security strategy and influencing the everyday lives of a significant part of the population of the state, Rio de Janeiro has never published data on them.
We know that without data, we cannot discuss public policies or influence them. And this is why we created a methodology to monitor these police operations. We published 10 monthly reports about the intervention in Rio. By the end of the federal intervention at the end of 2018, we already had Jair Bolsonaro as president-elect and Wilson Witzel as Rio de Janeiro governor-elect. Witzel was later deposed over corruption scandals. At that moment, we had two commanders, at the Rio de Janeiro state and federal levels, with very aggressive public security platforms based on law and order and violent responses to crime. Their public policy platforms also included the use of new technologies by the police, in pursuit of greater imprisonment rates and exacerbated violence.
When the intervention ended in 2019, we created the Security Observatories Network, with observatories in the Brazilian states of Rio de Janeiro, São Paulo, Bahia, Pernambuco, Ceará, and the two newest ones, Maranhão and Piauí. We started with the daily monitoring of events in these states. From 2019 onwards, we saw an increase, or rather an emergence, of arrests using facial recognition, first in Bahia and then spreading to other states. From that moment on, we focused our attention on this issue and used the same methodology from the Security Observatories Network to monitor cases of arrests.
At the end of 2019, we managed to identify 184 arrests. Of the arrests that included racial information, 90% of the people were black. This is a well-known scenario in public security, and in a way it reflects the larger data on the incarcerated population. From the end of that year, when we published this first report, the different police units in various states began to restrict access to information.
Consequently, we are no longer able to monitor these arrests, because the police started to create obscurity around this information. Upon reflection, we then thought about shifting our focus to monitoring the projects themselves. We tried to understand how these projects are structured, who finances them, who the key actors, companies, and suppliers involved in these facial recognition projects are, and which police forces are using the technology.
We know that not only the military and civil police but also the municipal guards have been using this type of technology. And we want to know where the money is coming from, whether it is from the federal government or from other sources. This is how Panóptico started. Panóptico’s purpose is to monitor facial recognition use in Brazil, and we have issued analyses of the uses that state police and municipal guards have made of facial recognition technology.
Brazil is among the countries with the largest prison populations in the world, behind only the US and China. Your research has shown that several states in Brazil are starting to deploy facial recognition under narratives of public security. In this scenario, what are the risks that you foresee?
Pablo – Answer 2:
Facial recognition has been used to reinforce and accelerate a logic that is well-known in Brazil, which has the third-largest prison population in the world. In other words, we have had decades of accelerated incarceration in the country. There was no sign that this accelerated incarceration is a strategy that aids the development of a citizen-oriented public security with the participation of society, or one that could create an atmosphere of security and the ability to fight crime in our society. Facial recognition systems have been used as a new element aiding incarceration and shaping how the government responds to public safety issues. Facial recognition is also a negative element when it comes to the violation of rights, arbitrary decisions, and violence. Despite what we know from history, this technology has been applied without critical reflection in this country. It has become normal without an open dialogue with society and, importantly, without transparency.
Pablo, we all know that most people incarcerated in Brazil are black. In a recent report by Panóptico, you indicated that 95% of people arrested through facial recognition technologies in the country were also black. Can you comment a little bit more on that, and on the insights from the report?
Pablo – Answer 3:
We created these methodologies, these ways of producing numbers, because the state is not transparent in relation to its own actions. There was previously nothing published about arrests based on facial recognition.
The state has not publicized the numbers relating to arrests or the profile of people arrested in states where a facial recognition project was used by the police. We tried several times to obtain data under the access to information law, but all requests were denied for different reasons. So we decided to start monitoring the data ourselves.
We know that what we can monitor is only a very small portion of what is really happening. In reality, numbers are probably way higher than what we can determine by monitoring the state’s actions. Nonetheless, in a scenario where there is literally no official information, any information we can get by monitoring the arrests may guide us in our analysis of these public policies and help us understand the impacts.
So we’ve been monitoring social media, the press, police blogs, and other websites for every case of arrest that we can identify. We then created a database and reached some conclusions. There were 184 people arrested, 90% of whom were black. They were usually arrested for nonviolent crimes, like petty theft and trafficking of small amounts of drugs, and the arrests happened in at least six different states in the country, with a strong prevalence in the state of Bahia. The analysis indicates something we already know: racism is ingrained in police actions and in the state as a whole. This is a very well-established profile. The promises that facial recognition could diminish racism in the police forces were actually proven wrong. In fact, we have seen that these technologies are being used to reinforce discrimination. They have been used to create spaces that segregate the black population, such as the Copacabana neighborhood, where the Rio de Janeiro government clearly based itself on this segregationist logic, and similar issues have been reflected in other states as well.
Following the narratives in the news and on social media, some might say there is a general acceptance of the use of facial recognition technologies for public security purposes. So, my last question is: What are the strategies to change this feeling? How can we expose the dangers of the massive use of facial recognition technologies in public security? Can you comment on what challenges you face as a researcher?
Pablo – Answer 4:
This topic is essential, important, and urgent. I believe much of what this portion of the population believes about facial recognition, including their trust in it and their acceptance of its use, comes from the fact that facial recognition is a black box. One cannot know exactly what it does, what the risks are, and what potential impacts these technologies may have on society. It is a challenge to show the population how these technologies operate and how they have been impacting society.
This educational approach, seeking to increase the number of people worried about the dangers of facial recognition, will only succeed if there is an open dialogue with those who study the topic. It is important that political science and law researchers, black activists, and other people already dedicated to this topic — not only facial recognition itself but also racism in society and in public safety, public security policies, privacy, and violations of rights — start talking to each other to generate a multidimensional understanding of the impact of facial recognition on society, and that this understanding is conveyed to the general public in a didactic way, in a way that captures the attention of the largest number of people. We at Panóptico are investing our efforts in first-person experiences to try to connect people to this concern about facial recognition. So we are working on an application with an algorithm that will show the biases and other issues related to facial recognition. We also have other initiatives, such as playbooks, leaflets, and other materials for social media, that also seem to be relevant to push this agenda forward. And it’s important that we speak about the issue of facial recognition, and that we speak to as many people as possible, so that we can increase the number of people concerned about it.
6. Miss Algô Public transport
A: Uh where’s my bus card? Found it.
A: What’s this?
Miss Algô: Uh excuse me! You don’t exist on my database. Next!
A: What do you mean I don’t exist on your database? Who are you?
Miss Algô: I’m the one who should be asking you. I’m Miss Algô, a transport system facial recognition algorithm, and I’ve lost way too much time with you.
A: Well, my card is fully charged and I have a right to get on the bus.
Miss Algô: Well, your gender record on my database doesn’t match the results of my camera’s facial recognition. I don’t think you are you, so I will not let you board.
A: Ok, I see what’s happening. Miss Algô, I updated my name and gender on my federal ID records. That should already be on your database.
[keyboard typing sound]
Miss Algô: Mmm, no. I haven’t found anything.
A: Please! Help me out!
Miss Algô: Sorry honey, can’t do that for you. You are unrecognizable.
A: I need to board, Miss Algô. I’ve got to get to work.
Miss Algô: I’m sorry, but I’m not responsible for this issue. Next!
Miss Algô: Let me tell you a little secret. I actually may be responsible for this.
As we can see, the implementation of facial recognition technologies by the public sector goes way beyond street surveillance. In Brazil, many governments are deploying this tech to authenticate identities in order to enable access to public services, such as social benefits or public transportation. But what is sold by governments as a shiny innovation to make our lives easier may in fact also bring another layer of exclusion, especially for those who do not fit into a privileged white male cisgender profile. That is most of us and many more people.
That happens because these technologies are designed from the worldview of those who design them, mostly white men from the global north. And this construction makes historically excluded populations, such as black and brown, trans, and broader LGBT+ populations, at the same time visible for surveillance and invisible when it comes to accessing rights. Complex? Our next guest, Mariah Rafaela Silva, develops these ideas.
In From Devices to Bodies, a web documentary produced by Coding Rights and available online, she explains how facial recognition technologies can chase trans people, as they are surveillance tools loaded with prejudices and stereotypes that label them as dangerous in certain contexts. Beyond surveillance, these technologies can also work to exclude the trans population from accessing basic services, such as reduced student fares for public transportation, if the system fails to recognize a person. In other words: visible for surveillance, invisible to access rights and services. Unfortunately, we must remember that Brazil is considered one of the world’s deadliest countries for trans communities, as it is the country that kills the most transgender people in the world.
In 2020, 127 transgender women were murdered, a 41% increase from the previous year. As we can see, the situation is already brutal and complicated.
Imagine what can happen if technologies that reinforce binary views of gender are added to a context that is already highly violent for trans people. It’s happening.
Can you grasp how big the problem is? I’m sure you’ll come away with your mind exploding, full of reflections, after our next interview.
Mariah Rafaela Silva is a black trans woman, activist, and researcher in gender, sexuality, art, and subjectivation processes. She holds a doctorate in social communication, a master’s in history and criticism of culture, and a bachelor’s in art history. She has experience in the critical analysis of art, violence and power relations, subjectivation, education policies, access to justice, and citizenship promotion within these groups. She also served as an advisor at the Superintendence of Individual, Collective, and Diffuse Rights of the state secretariat for social assistance and human rights between 2013 and 2015.
Interview – Mariah Rafaela Silva
Hi Mariah, thanks for joining us! You studied art history, sociology of art, and the production of subjectivity. What sparked your interest in investigating facial recognition, especially looking at trans identities?
So, Vanessa, I have a lot of thoughts about it. My education is interdisciplinary. I’m also involved with social movements. I was the director of Astra Rio, which is the Association of Transvestites and Transsexuals of Rio de Janeiro. I have more than ten years of experience in the field of promoting the human rights of transvestites and transsexuals, as well as of black and racialized people and those living in slums, which is where I’m from. And my education is, in fact, complementary to my life story, to my engagement in social and political fields. So, I’m doing the opposite of what is usual: some people use their academic background to work in the social and political field; I use my engagement in the social and political field to work in the academic space. That is my first point.
As an undergraduate, I had already researched things related to the aesthetics of art, specifically where the great artists, and the great trans artists, were in the history of art. Because, at the time I went to college, I didn’t have any such artist as a reference. We had all the references of the great geniuses of European art history, but we hardly talked about cisgender women. We never talked about transsexual women and transsexual men in art, and that bothered me. But everything changed when I started my master’s degree. Then I was specifically researching how the legacy of colonialism influences current violence. I wanted to understand this violence more deeply, from the perspective of cultural production, from the point of view of the production of subjectivity, and through a specific concept that we use a lot in psychology, called the “production of the colonial unconscious,” a term coined by Frantz Fanon but widely used by Gilles Deleuze and Félix Guattari in post-structuralist philosophy.
I obtained a Master’s Degree in Humanities, in History and Critical Theory of Culture, which is an area of the Human Sciences much closer to Sociology, and I went to England to research in a museum. And there I came across a whole universe where violence was produced in a structural, multidimensional way and, obviously, conditioning our way of being, thinking and acting in society.
This led me to my doctorate. I wanted to research structural violence and sex but, obviously, I was going through issues of expression. There are specific fields of communication: expression, languages, media. Mainly because I wanted to research violence in pornographic cinema. And I came across the issues of algorithms, biometrics, and the history of biometrics (how the history of biometrics also conditions a history of control and segregation at the end of the 19th century), and I ended up researching, as part of my Ph.D., subjects such as facial recognition, biometrics, and DNA mapping. These issues interest me, particularly because I see in them a new posture with regard to the control of large population groups, specifically those understood as minorities, and a structure that segregates society and separates individuals: a neo-segregationism, this time under the support of technological devices. This interests me deeply.
Those who listen to us may not know, but sadly Brazil is the country with the highest number of transgender people killed in the world. And, as we already know, the protection of LGBT+ rights is also collapsing under the current far-right government. Could you comment on how this context affects the deployment of facial recognition when targeting LGBT+ people?
Mariah – Answer
I think they relate in a lot of different ways. But the first question, and perhaps the most important one for us to address, is the following: transvestite and transsexual people, especially black and racialized people in peripheral contexts, have always been understood as threats by the Brazilian State. And this has always been an issue, since the foundation of what we understand today as Brazil with the invasion of the Portuguese. Xica Manicongo was treated as a gendered threat, a gendered terrorism. And this reached the 20th century, mainly in the period of the Military Dictatorship, gaining contours of intensification and an ostensible policing by the State to control transvestites and transsexuals. So there was already, within the mentality and modus operandi of the State, a policy of controlling these bodies through violence. Trans women and transvestites were collected from the cities, thrown into police cars which drove through the streets at very high speed, shaking them around inside, breaking their noses and their legs. There are reports of heels that pierced the thighs of other girls, given the violence with which they were treated.
And they were filed with the police. Literally cataloged, like in a laboratory where you tag an animal, put a chip on it and let it loose, so you can monitor its behavior. Of course, transvestites and transsexuals were not chipped, but they were recorded: the places where they walked, where they circulated, the people with whom they eventually communicated. Because there was a whole idea that they were a threat. When you begin to understand that, in fact, the threat is not transvestites and transsexuals, but the culture that forms around misogyny, cissexism and sexism, which makes these people be perceived as threats, there is, in fact, an inversion: the State becomes a threat to these people.
And I think this logic of violence inevitably comes down to technology. I think it has become a cliché to say that these technologies are not neutral, that they are produced by people. And once produced by people, they reproduce all the imagination, all the normativities and all the social hierarchies already established in the field of culture and society. This brings to technology a series of agendas that are profoundly violent, for example using facial recognition technology to predict who may or may not be a threat to society. And this is only one of the applications. This technology may also be used to deny services to marginalized people. Throughout our research we came to understand that there is a denial of services to transvestites and transsexuals. And what is this denial of service? For example, when you register on a platform and you cannot use your social name. So there is an issue of self-determination. It’s like when you are issued a bus card, but the photo on your document is different from how you express your gender, and that prevents you from using the service, which is a public service to aid students. It also prevents you from accessing public buildings because of an alleged threat you pose to the place, or to the morality of that place. In other words, it is correct to say that technologies have been moralized and that they carry this morality in their architectural engineering. They reflect all the apparatus of discrimination, of disgust and abjection already present in the scope of society. And they reproduce it. And, in this sense, they become deeply hostile to transvestites and transsexuals, especially the most precarious. And this is reflected in a situation of imminent killing, but also in a policy of incarceration and of exclusion from access to health services and other services that the State offers to the population. So, there are other ways of killing.
There are other ways for the State to cause a death, which is not necessarily literal or physical death, but also symbolic: social death, citizenship death.
In January 2021, you, as a researcher, together with the director of Coding Rights, launched the report “Facial recognition in the public sector and trans identities: techno-politics of control, surveillance and threats to gender diversity in its intersectionality with race, class and territory”. In this report you gave examples of how facial recognition has been used to authenticate identities and allow citizens to access public services, such as social security benefits and reduced fares on public transport, and even to issue a driving license. You also pointed out possible harms resulting from a machine error called a false negative. That is when the machine doesn’t match a face with the identity corresponding to that face. In other words, it doesn’t recognize that you are you. So, how does this affect particularly the LGBT+ population in all its intersectionality?
Mariah – Answer
I think the main issue, and it’s not an issue that affects just the LGBTQIA+ population, but the population as a whole, is, first of all, transparency and legislation. I think we urgently need to make an effort to transform that. To have legislation that protects users’ data. Legislation that protects privacy. More robust legislation that is capable of effectively mitigating digital crimes, especially against the most precarious people. And, obviously, this is reflected in an issue of transparency. We don’t know how States, and also companies (mainly Big Tech), effectively use this amount of data and what they do with it. Or even whether most of this data really is used for advertisements.
And we know it’s not. We know that, through this massive collection of information, a large map of behavior and, obviously, of the shaping of subjectivities has been made. And when you have access to subjectivities, you have access to human reserves in a way never seen before. When you access the production of subjectivity, you have inexhaustible access to human reserves. It means a lot. It means that you can literally lead a person, persuade a person, to buy a certain product, but also to commit a crime.
And this is particularly worrisome in a society that already understands transvestites and transsexuals as a threat. That already understands LGBTQIA+ people as a threat, and that already understands black people as a threat. Because every rhetoric, every speech will contribute to the intensification of a process of historical exclusion to which these people are already subjected.
So, this is a point that does not exclusively concern LGBTQIA+ people, but the entire population. I think these technologies, especially the false negative issue, are a contemporary matter, but one with implications that are historical and that we need to be deeply aware of, because they concern what you are and what you can be. They literally tell you what you should be. When a facial recognition machine, when a computer, an artificial intelligence, says that you are not what you claim to be, it is not just saying that you are not Vanessa, or that you are not Mariah, or that you are not Bruna, or not Tatiana. It is saying that you cannot be that, because that is not your name.
So, it reconditions the cultural and human character of our experience. And I think this is the big problem: a reconditioning of the human condition, starting from a mechanical condition, from an algorithmic engineering that reflects all the disgust and abjection already inserted in the scope of the population.
Obviously, as I said before, this will reflect on the exclusion of rights, but it also directly reflects on the production of public policies. Note: public policies in education, in transport, in safety, in health… Mainly in security, which is nowadays perhaps the niche where these technologies are most used. Ostensible policing, the kind that is done to prevent criminal situations, in which you map the people who present “risks”. If, on the one hand, this technology says you are not what you claim to be, on the other hand, it claims that you can be what poses a threat to the collectively produced fictional idea of security. And this is very worrying. This is deeply disturbing!
Again, this reconditions the human character of our humanity. That is, the whole set of human experiences is subjected to the scrutiny of these machines. And these machines literally calculate, produce metrics between what is possible, what is true, what is false, what is dangerous: but, above all, what represents an immediate risk and what obviously needs to be eliminated from social life. I think that’s the big risk, the big question and the big challenge of the 21st century when it comes to facial recognition, to false negatives and false positives. In the coming three decades we will have a profound debate in the field of law, but above all in the fields of aesthetics, culture and communication, because that is where the law comes from.
So, after this conversation, which was a masterclass, thank you, can you also point out some of the challenges we have as a society and how they are being transposed into the deployment of technologies such as facial recognition?
Mariah – Answer
I think we have a challenge in the field of legislation. I think the risk is that we reinforce stigmas through our legislation. When the machine, the facial recognition and artificial intelligence behind these devices and platforms, starts to say that those people are not that. When you try to access, for example, Biovalid to prove you are alive, and you can’t access it because you have changed.
I’m going to make a weird comparison, Vanessa, to get to a concrete example. In the past, to make a withdrawal or deposit a check at the bank, you needed to sign the same way as on your ID. But, over time, and this is normal, your signature changes. And then you arrive at the bank and your signature does not match what is registered in the bank’s database.
The same thing happens with facial recognition as far as trans people are concerned. This biometric signature changes. And not just with trans people. We know that this happens especially with black women, who are often not recognized as women. Machines still have this difficulty. It’s more or less like that. This signature, and I’ll call it a biosignature, changes. And this change is reflected mainly in the security policies that these institutions start to adopt, but also in the legislation that States start to adopt to guarantee security.
And this security is not dedicated to thinking about the best user or account holder experience. Rather, it is produced to protect the bank’s assets. So, there is, deliberately, the interest of privileging the corporation to the detriment of the human experience, of the human itself.
I think this is a big risk. It is a risk that, in fact, is already posed, and that is being reinforced. So, how can we think about the whole social and legal framework when it uses as a parameter, as a starting point, the same historical place of distrust, of threat, toward people who, a priori, are people, and who, as people, change in time and space? I think this is a big risk and, again, a big challenge. More than a risk, it is a great challenge, especially for transvestites and transsexuals.
And I’m not just talking about cosmetic surgery or hormonal therapies. I’m talking about basic issues like growing hair, which is part of the organic scope. The question of skin color and where you come from: we know that these technologies are much more hostile to poor people than to better-off people who know their specific uses, how they work, and who have more social passability.
I think the risks are many, it can’t be exhausted here, but I am deeply concerned about this issue of how we are going to think about a whole set of legislation for the 21st century, based on these principles.
Either we build efforts to reinforce difference and diversity, creating control mechanisms not for the population, but for the uses and applicability of facial recognition technologies, or we will enter a collision course with the very functioning of the State: the State building mechanisms that put its own democratic and political stability at risk.
What to do about all this? The cities of Boston and San Francisco, both places that host research centers and technological innovation, are experimenting with bans or moratoria on the use of facial recognition by police and law enforcement agencies. And this is a trend that is emerging in several places around the world.
Indeed. Campaigns for a global ban on the use of facial recognition are spreading and creating awareness about how these tools are able to enable mass surveillance and gender and racial discrimination. Last June, a letter signed by more than 170 human rights organizations around the world, including us at Coding Rights, highlighted some of these concerns and asked for a ban on biometric surveillance. The next step is to push for legislation in our countries and regions. But for that to happen we need a lot of people concerned about the topic. So, are you?
When it comes to the authentication of identities, we still have a lot to discuss. We need more transparency about the error rates and the implementation of these systems, including disaggregated data about the demographics of false negatives. Meaning, the cases where the system fails to identify you as being you. Furthermore, in order to avoid further exclusion, this kind of authentication should never be the only way to access services, and many safeguards need to be discussed and considered in terms of security and data protection. If a data leak compromising your password is already damaging, what happens if someone clones your leaked biometric data?
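To make concrete what disaggregated false-negative data could look like, here is a minimal sketch. Everything in it, the field names, the groups and the log entries, is invented for illustration; real audits would need far more care in how demographic categories are defined and collected.

```python
# Hypothetical sketch: computing false-negative rates per demographic group
# from face-authentication logs. All field names and data are invented.
from collections import defaultdict

def false_negative_rates(attempts):
    """attempts: list of dicts with 'group', 'genuine' (True if the person
    really is who they claim to be) and 'accepted' (the system's decision).
    Returns the share of genuine attempts that were wrongly rejected, per group."""
    totals = defaultdict(int)
    misses = defaultdict(int)
    for a in attempts:
        if a["genuine"]:                # only genuine attempts can be false negatives
            totals[a["group"]] += 1
            if not a["accepted"]:       # genuine user rejected -> false negative
                misses[a["group"]] += 1
    return {g: misses[g] / totals[g] for g in totals}

# Invented example log
log = [
    {"group": "A", "genuine": True, "accepted": True},
    {"group": "A", "genuine": True, "accepted": False},
    {"group": "B", "genuine": True, "accepted": True},
    {"group": "B", "genuine": False, "accepted": False},  # impostor attempt, ignored
]
print(false_negative_rates(log))  # group A: 0.5, group B: 0.0
```

Publishing exactly this kind of per-group breakdown, rather than a single aggregate accuracy number, is what would let the public see who is actually being locked out of a service.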
Our main takeaway here is that not every innovation is good or should be implemented just because it exists. What do you think? If you want to learn more about Coding Rights or find the materials we talked about in this episode, follow us on social media @codingrights.
This episode was produced by Coding Rights.