
What can science reveal about bias in our education, healthcare, and other social systems? It turns out, quite a bit. This series of short talks from experts in the fields of medicine, law, education, and business explores where bias comes from, the importance of facing the fraught history of bias, and how we might benefit from striving to be “good-ish” rather than “good.”
Podcast: Download | RSS | iTunes (29:08, 28.4 MB)
DOLLY CHUGH (NEW YORK UNIVERSITY STERN SCHOOL OF BUSINESS): Hello everybody. So I have a friend who was taking a taxi to the airport, and at the end of the taxi ride the driver said to her with complete sincerity, “I can tell you're a really good person.” And my friend, when she told me this story later, said she just felt this warm glow come over her; she couldn't stop thinking about how it made her feel that the taxi driver saw her as a good person. When she told me that story, it really rang true with what I call the “psychology of good people.” I did my PhD in the lab that developed the leading measure of unconscious bias, the implicit association test. And in the research on unconscious bias that my colleagues and I have done, we continually see this issue of people intending to do good but seeing a gap between their intentions and their impact. And my friend, in her reaction to the taxi driver, I think gives us a hint into the psychology there. She's describing what we call a central moral identity–a feeling that it's important to us that we be seen as, and feel like, good people.
And when psychologists measure this on a 1 to 7 scale, a lot of us put ourselves at a 6 or 7–meaning that identity, that moral identity, is important to us. So when we match that with the research on unconscious bias–which says that all of us have some unconscious bias that we're uncomfortable with, that doesn't match our intentions–it puts us in a really interesting corner. And so where my work has been going lately is trying to understand that corner we're in through the lens of the work on mindset that some of you may be familiar with, by Carol Dweck. Carol Dweck has developed the idea, and her colleagues have produced lots of empirical research, showing what happens when we think of ourselves as being at a fixed level in something–like, I think at public speaking I'm here, and I'm not going to get better with effort or coaching or feedback; or I think at math I'm here, and I won't get better if I make mistakes and get feedback on them.
If I view myself as being in a fixed mindset–not having room to grow, not being a work in progress–what I tend to do when I make mistakes is shut down. My brain activity actually reveals less learning, I quit sooner, I deny mistakes if I can cover them up, or I even cheat. Now match that up with the psychology of good people–we care deeply about being seen as good people, but unconscious bias is pervasive and can lead to real mistakes–and you can see the tight corner we've put ourselves in. What I've been fascinated by is how we can take Dweck's work on mindset and the contrast she makes between a fixed mindset, where we're not a work in progress, and a growth mindset, where we think of ourselves as a work in progress. When we have a growth mindset, we believe our abilities can be developed through effort and time, and in fact we work to actively notice mistakes. If you have children in school, many of them are working in classrooms where the teachers have actively been trained to foster a growth mindset, because you can imagine that's exactly what you would want if someone was trying to learn something hard in school.
Well, this is where the dilemma is. I've got my friend who cares deeply about being a good person, but we've got a way of thinking about being a good person in society that suggests it's free of bias–that we're free of unconscious bias. We have a hard time reconciling ideas like these: I care deeply about being a good person. I'm a professor, and I care deeply about being egalitarian in my classroom. And yet I sometimes confuse two students of the same race for each other–students who look nothing alike other than being the same race. And I do it in front of everybody. Or I assign a reading, and a student will email me to say that the reading was sexist, and I will go into red-zone defensiveness. I'm a feminist–what do you mean? And in fact, when I reread the reading, the sexism is staring right back at me; somehow I had normalized it.
And so this good-person identity is something we care deeply about–even when a stranger who's driving us to the airport says we're a good person, it's meaningful to us. And yet we know as social scientists that unconscious bias exists and that we are not free of it. We are not free of the systemic biases that are baked into the systems around us and that benefit us. So the argument I've been making, and the evidence I'm trying to build right now, is that we should let go of being good people. Just stop trying. Not stop trying totally, but stop trying to be “this” kind of good person. The reason is this: trying to be this kind of good person is preventing us from being better people. When we try to be better people, or what I call “goodish” people, we actually take on a learning mindset, we adopt that growth mindset, we're always getting better. When we make mistakes, especially when people call us out on our mistakes, we admit them–we own our blind spots, we take ownership of our own learning. We don't yet have a solution for unconscious bias. As social scientists, we haven't been able to crack that code yet. But we do know that when mistakes happen, we can do more to notice them, or we can deny them. Being a goodish person is a higher standard than being a good person, I argue, because a goodish person actually owns their blind spots rather than denying they exist.
I had the opportunity to interview over 40 people and match their stories with the science in our field, and what was most striking to me across all these stories was the candor with which people were willing to talk about what happens when they make mistakes. What the mistakes were, what happens afterwards, and how they've developed this growth mindset. Some of the stories in the book are from Tommy Kail, the director of Hamilton; Jodi Picoult, the best-selling author; and even Joe McNeil, one of the four college undergrads in the Greensboro Four who started the lunch counter sit-in movement. So even a civil rights icon like him talks openly about the blind spots he has had and how he's working to create a goodish mentality. I find their stories really inspiring. In the research, while we haven't figured out how to crack the code on unconscious bias, we have learned a lot about learning. And perhaps that's where the work can begin. We start with learning and growth, and give ourselves room to grow in this area, rather than putting ourselves in a tight corner where there's no room for that kind of growth. Thank you.
MARIANNE J. LEGATO (DIRECTOR, FOUNDATION FOR GENDER-SPECIFIC MEDICINE): Thank you very much, it's wonderful to be here. And Dolly, thank you–you've set me up for what I'm about to say next. I chose to tell you about two topics in biomedical investigation that illustrate being a good person with a closed mind in a tight corner–preconceptions that shut the window of opportunity to go forward and discover a bigger, wider, and more sound truth than the one we started with. The first concept came really from evolution and the perception of men as stronger, more daring, more able to take risks, with shorter lifespans as a result. Their primary function, from an evolutionary point of view, was to defend, protect, forage for resources, and make sure the family was kept intact. The female of the species was valued primarily for her ability to conceive, bear, and nourish children. That unconscious bias exists to this day, and it certainly impacted the way we did research.
Our tendency as physicians was to protect women from the risks of investigation, and the corresponding fear was that any kind of clinical investigation of women would disturb their ability to reproduce. And that is still a keystone of women's value, or self-value: their ability to reproduce. And so we assumed what I call the “bikini view” of women. We learned about breasts and how they functioned, and about reproductive physiology, but assumed that women were otherwise identical to men–until, believe it or not, the lay public, as it always does, drove us to really look at women. After World War II, women had been incredibly useful in fulfilling roles they had never really filled in society until the war, when all the men were away.
Women banded together after the war, and the great feminist revolution began. Women began to feel that we had not paid attention to their unique needs and physiology, and petitioned what was then the Public Health Service to dedicate some resources to asking: what did we really know about women? And it turned out that beyond the bikini view, we knew nothing at all. The whole science of differences between men and women really only began in the 1990s, and it began in cardiovascular medicine, when we realized that the very hearts of men and women, and their unique experiences of heart disease, differed as a consequence of sex. If the heart was different between men and women–that was my original investigation, at Columbia–I assumed that other organs were different too. And so we started out on a great journey of opening the window on what we had erroneously assumed: that, because women were “too fragile” and needed to be protected from clinical investigation, whatever we learned about men would be applicable to them. And so the science of what I call gender-specific medicine was born and nourished, and it's now an international phenomenon.
And we find that the very cells of your body, as you sit here, are sexed as male or female, and that their whole composition and function differ in important ways as a function of what sex you are. The second concept about which I would like to speak very briefly is this: as people interested in gender, we had also assumed that you are either male or female, and that all of humanity could be divided into two dyadic camps. Again the lay public, with increasing insistence, especially in the current climate, began to beat the drum: perhaps there is not a simple dyadic separation between men and women, but rather a continuum of individuals between purely one sex or the other–and that continuum, moreover, is fluid. I am now working on what I call the “plasticity of sex,” because since we deciphered the structure of the genome, we have begun to unravel the biomolecular evidence that gender is a fragile concept. There are many variations, and there is a molecular biology for anomalies or variations in genital anatomy. There is a molecular biology of gender identity, which may not be congruent with the individual's anatomy; and sexual preference, in fact, may have a molecular biology that we are just beginning to unravel. It's a new world, it's a bold concept, and I will leave you with the idea that many of my colleagues don't want to investigate this at all. Talk about being in a small space with a closed window–Dolly, as you point out, I need some more goodish colleagues who are willing to work on expanding this new biology of the plasticity of sex. So that, in summary, is my 30 years' work in the field of the differences between men and women. Thank you.
DANIEL BRAUNFELD (ASSOCIATE PROGRAM DIRECTOR FOR SPECIAL PROJECTS, FACING HISTORY AND OURSELVES): Thank you very much for having me, it's great to be here. And as a teacher–I used to play one of those–I always start with the question we were asked at the beginning, which was: can science help us eliminate bias and favoritism in education, medicine, the workplace, and legal and other social systems? And I'm here to say yes. Have a good night.
But the yes is not about how science can help us; it's about how facing the history of science and its impact can help us address the issues of bias in all of these places. To start, I'd like to read you a poem written by James Berry. It's called “What Do We Do with a Difference?”
What do we do with a difference?
Do we stand and discuss its oddity
or do we ignore it?
Do we shut our eyes to it
or poke it with a stick?
Do we clobber it to death?
Do we move around it in rage
and enlist the rage of others?
Do we will it to go away?
Do we look at it in awe
or purely in wonderment?
Do we work for it to disappear?
Do we pass it stealthily
or change route away from it?
Do we will it to become like ourselves?
What do we do with a difference?
Do we communicate to it,
let application acknowledge it
for barriers to fall down?
These questions have different answers and consequences in history, today, and for different communities. And I think about this idea from a Facing History student: “I've had math classes, English classes, six or seven science classes, art, Spanish–but in all the time I've been in school I've only had one class about being more human.” The question is, what do we mean by “more human,” and what does it mean to face history? Many of the biases we carry–around race, gender, ability–were actively supported and promoted by the scientific community for years, reaching into our very current times. This includes Thomas Jefferson's request, in Notes on the State of Virginia, for science to investigate what made Africans and Indians inferior to white people–and his ignoring of Benjamin Banneker's response that that was complete rubbish. It includes the faux-scientific idea of eugenics–the science of creating good people–which between 1914 and 1948 was included in 90% of high school biology textbooks and was taught at over 370 institutions of higher education, including Harvard and Cornell and lots of other places: the idea that you can use science to create not just good people but better people, perfect people. And that matters when it leaves the academy and influences public policy around medical experiments, miscegenation, sterilization, immigration, genocide here and abroad. I'm the uplifting speaker. But facing history is different from knowing history. I could give you a lecture on that history and we'd all be asleep pretty quickly. Facing that history is different. If we write those scientists off as monsters from another time, we miss the opportunity to reflect on our own role in perpetuating these ideas. What do we do with a difference?
Facing History's work is not in the academy; it's in schools, and we work with adolescents. So think for a moment–if it doesn't scare you too much–about yourself as an adolescent. What were your concerns as a 15-year-old? What pressures did you have? What biases did your teachers have about you and your classmates? What biases did you have about your classmates? What did you do with a difference? And what was done to you as a difference when you were an adolescent? Kids think about these questions all the time, and they're in this place called school eight hours a day, and we work there to help them confront, think about, and face that history. So we train teachers to create classrooms and schools where students are safe to ask difficult questions and to confront the blind spots we all have–we all walk around with them–so how do we invite those blind spots into the room? Beverly Daniel Tatum refers to them as “the smog that we all breathe in all of the time,” even if we don't want to admit to ourselves that we are smog breathers. How do we create spaces where these conversations are welcomed and not shut out at the door? How do we make implicit bias explicit so we can talk about it, and confront it, and face it? In short, how do we slow down school so we can talk about important issues?
But it isn't enough to just talk about creating those spaces; we also need to give teachers the training to facilitate those conversations and the curriculum to do so. Facing History creates curriculum for difficult history, where science and human behavior have led to some of the most catastrophic moments in history. And the questions of human behavior can be examined and applied not only by listening to a master narrative, but by looking at a full range of perspectives and voices. Because if we look at history and see where these biases came from, they're not only unconscious–they've been built up over hundreds of years. And if we can understand the history of where those biases came from, the roles they played, and how they've been infused into society, political structures, and systemic institutions, then our students become equipped to ask questions about those biases. Who benefits from this policy? Who is ostracized by it? Whose voice is missing? What assumptions are being made? What choices are being made? Slow down school.
And students need the opportunity to talk about their own experiences–experiences they have every day. How do we equip kids to draw on the expertise they have from confronting these biases in their own lives every single day, and use it so they understand that if they can empathize with the decisions made in history, they're better able to empathize with the decisions people are making today? Thank you.
JONATHAN KAHN (MITCHELL HAMLINE SCHOOL OF LAW): So in my few minutes here, I'll talk to you about my more recent book, Race on the Brain: What Implicit Bias Gets Wrong About the Struggle for Social Justice. It's sort of a cautionary tale. It's born of a concern–sort of three concerns, which I'll just briefly go through. But the overarching concern is that implicit bias–you know, it's a real thing, it's an important thing, it's a very worthy thing to study–but I feel that in the last 10 to 20 years, implicit bias has become sort of a master narrative for talking about issues of race and racial justice in this country in a way that can, in some cases, be rather problematic. So I have three clichés to describe my thesis–because I think all good theses should be reducible to clichés.
So the first one is “a place for everything.” The work on implicit or unconscious bias, which has especially blossomed in the last 20 years–a lot of it is really incredibly good, interesting, substantive, and valuable work about how our implicit biases operate. But as we're talking about engaging with issues of implicit bias and racial justice, we have to keep in mind that those are two distinct phenomena. They're not the same thing. Dealing with implicit bias is not always the same thing as dealing with racial justice. So the thing about the role of science in dealing with questions of bias or racism is that science can be a useful tool, but it's not a solution. Science won't solve the problem for us, right? It can help us solve the problem, but addressing issues of social and racial justice is hard work that everybody has to be engaged in. It's not something, again, that science can do for us. The second cliché is “too much of a good thing.” This is the idea of elevating implicit bias into that kind of master narrative, where, reflexively, when issues of race or racial injustice come up, too frequently people will refer to them as instances of implicit bias, kind of as an easy fallback. People feel more comfortable talking about implicit bias than talking about racism.
It was even a thing in the presidential debates, if you remember, in 2016. Hillary Clinton got a question about the recent Black Lives Matter issues and police brutality, and her response–which was, in many ways, a perfectly good response–was to say, oh yes, implicit bias is a problem for everybody. So she was dealing with this as a problem of implicit bias. And then more recently there was the Starbucks incident–last May, I guess it was–where two black guys came into a Starbucks and the manager called the cops on them, and they were just sitting there waiting for a third person. And again, Starbucks, to its credit, said afterward, oh, we're going to shut down our stores for a whole afternoon and have implicit bias training. On the one hand, more power to them; I'm glad they did it. On the other hand, what happened at Starbucks was not implicit bias, it was racism. And it's important to say that, because implicit bias is precognitive–you're not aware of it. This woman saw the guys, she talked to them, she picked up the phone, she called the cops, the cops came in, yadda yadda. This was not an instinct of, oh, I averted my eyes, I didn't make eye contact, I didn't smile at this person. And so that's really important to keep in mind: we can't deal with the problem if we're not able to name it–to be able to sort out the very real places implicit bias exists, but also the very real places racism still exists. Although I want to qualify that, because it's not just a matter of implicit bias or racism; it's not dyadic, just like gender's not dyadic. There's a range of attitudes, and feelings, and engagements, and responses, and exercises of power between unconscious bias and conscious racism. It's not just one or the other; there are gradations of different levels of misuse of power and authority around racial conditions and issues.
So finally, the final cliché is the road to hell being paved with good intentions. This goes back to the idea of the master narrative of implicit bias. I'm speaking particularly as a lawyer here, because a lot of legal scholars have taken up the work of implicit bias and tried to integrate it into legal argumentation as a way to further protection under the law–using the law to promote racial justice through the concepts of implicit bias–so we share many of the same goals and ideas. But again, this idea of relying on implicit bias to do the work for us, science to do the work for us, promotes the idea that there can be a technical fix to racism, right? And it comes back to the idea of science as a tool versus a solution. There is no technical fix to racism, because racism is not a problem of technology; it's a problem of society and history. Technology and science can again be useful tools for engaging it, but they can't do the work for us. And so finally, to wind up: the way I actually got into this project to begin with was a very kind of screwy study that came out of Oxford about five or six years ago, where people were taking this thing called the implicit association test–which is one of the foundational technologies Dolly mentioned for measuring implicit bias. And this guy had the idea: let's give people the implicit association test and measure their bias; what happens if we give them a beta-blocker, propranolol? Beta blockers are an antihypertensive medication–they lower your blood pressure, kind of chill you out. And lo and behold, they take the beta blockers, and whoa, they perform better–they show less bias.
So it raises this tantalizing possibility: is there a pill for racism, basically? As ridiculous as that is, there's a logical connection, because as racism becomes a function of biology–as it becomes reduced to a cognitive function in the brain that can be measured either by looking at it on fMRIs, or through millisecond measurements of your responses on a particular kind of test–it becomes susceptible to biological and technical interventions. And there are increasing numbers of studies out there looking at how taking propranolol affects your implicit bias, how exposure to the hormone oxytocin affects your implicit bias, how exposure to transcranial magnetic stimulation affects your bias. It's not like everybody's going to be trying to sell us anti-bias pills all of a sudden–although you can never tell–but it's the idea that we can fix this purely with technology, rather than doing the hard, messy slog of actually fighting for racial justice–because who needs a march on Washington if you can just take a pill? So I'll leave you with that happy thought.
Speakers include: Dolly Chugh, professor at New York University's Stern School of Business; Marianne J. Legato, physician and director of the Foundation for Gender-Specific Medicine; Daniel Braunfeld, Associate Program Director for Special Projects at Facing History and Ourselves; and Jonathan Kahn, the James E. Kelly Professor of Law at Mitchell Hamline School of Law.
This lecture took place at the Museum on November 28, 2018, under the title “New Science, New Solutions: The Biology of Bias and the Future of Our Species.”
This lecture is generously supported by the Abel Shafer Public Program Fund, a fund created by the Arlene B. Coffey Trust to honor the memory of Abel Shafer.