April 3, 2014

Competing With Bias

Research shows how discrimination works in the hiring process and suggests that science and tech firms that leave it unaddressed may overlook talented female candidates.


In 2005, economist Larry Summers, then president of Harvard University, gave a controversial speech in which he offered three possible reasons women are not as prevalent as men in science, technology, engineering, and mathematics (STEM) fields: women's abilities might simply be different from, and not as good as, men's; women might be uninterested in those fields and prefer not to enter them; or women might face simple discrimination, with skilled STEM women overlooked in favor of less skilled or comparably skilled STEM men.

Data show that young girls do express interest and show aptitude in STEM subjects on par with their male peers, and have comparable standardized test scores, evidence that should put to rest the first two hypotheses. But by the time they reach college, young women are not actively pursuing STEM majors and are outnumbered by male students in most STEM fields. If interest and aptitude aren’t initial barriers, could it be discrimination that’s keeping women out of STEM?

New research from Professor Ernesto Reuben asks that question. Working with Paola Sapienza of Northwestern University and Luigi Zingales of the University of Chicago, Reuben found that bias is thriving and has potentially far-reaching effects—but that it can, to some degree, be offset.

In a series of experiments, the researchers mimicked how employers get information about candidates during job interviews. First, they asked subjects to complete a task that men and women perform equally well but about which there is a pervasive stereotype that men perform better: correctly completing as many math problems as possible in four minutes. After subjects received their scores, the researchers randomly assigned these same subjects to play the role of either employer or candidate. (Employers earned more from participating in the experiment if they chose the candidates who had performed best on the first task, while candidates earned more if they were chosen by the employers.)

Employers each met two candidates in person at the same time and had to choose which one to hire to complete another set of math problems. In some variations, each candidate told the employer how well he or she expected to perform on the math task, much as a candidate might sell ability and skills in an interview, and then the employer made the hiring decision. In others, the researchers, a less subjective source of information, told employers about candidates' past performance before asking them to decide. In two further variations, employers first chose between candidates based on sight alone, before receiving any other information; after these initial choices, candidates either told employers their expected future performance or the researchers told employers the candidates' true past performance, and employers could revise their decisions.

As the last step in the study, all the subjects took the Implicit Association Test (IAT), an assessment that measures associations between words and images through response times. "The test assumes that if, for example, you're slower to associate the words 'female' and 'math' than the words 'female' and 'humanities,' then your brain is trying to resolve a conflict about the association between women and math," Reuben explains. "So the IAT gives us a measure of the relative bias of each of the subjects."

The results were not encouraging: both male and female employers were strongly biased against female candidates in all variations of the experiment, choosing women significantly less than half the time (half being the rate at which women would be expected to be chosen in the absence of discrimination). Even in the variations where employers could change their choices after learning more about candidates' performance, whether from the researchers or through self-reported scores, male candidates were still favored by at least 13 percentage points. And when candidates self-reported, nine times out of ten that an employer chose the lower-performing candidate, that candidate was a man. (This is likely because when candidates self-report, men tend to overestimate their future performance, a tendency Reuben has documented in earlier work.)
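
To make those numbers concrete, here is a minimal simulation sketch. It is not part of the study's analysis; the sample size and the 43.5 percent hiring rate for women are illustrative assumptions chosen so that men come out favored by roughly 13 percentage points, the gap reported above.

```python
import random

random.seed(0)

def simulate_hiring(n_pairs, p_choose_woman):
    """Simulate n_pairs mixed-gender candidate pairs; the employer picks the
    woman with probability p_choose_woman. Returns the share of women hired."""
    women_hired = sum(random.random() < p_choose_woman for _ in range(n_pairs))
    return women_hired / n_pairs

# Absent discrimination, equally skilled women and men should each be chosen
# about half the time.
baseline = simulate_hiring(10_000, 0.50)

# An illustrative biased scenario (assumed, not the study's data): women chosen
# about 43.5% of the time, so men are favored by roughly 13 percentage points.
biased = simulate_hiring(10_000, 0.435)

print(f"no-discrimination baseline: women hired {baseline:.1%} of the time")
print(f"biased scenario:            women hired {biased:.1%} of the time")
print(f"gap favoring men:           {1 - 2 * biased:+.1%} points")
```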

Generally, the employers whose IAT results indicated greater implicit bias were also among the most biased in the hiring experiment; even when they updated their decisions after receiving new information, they still chose inferior male candidates over superior female candidates with surprising frequency.

“Our experiment shows that discrimination can be costly to employers,” Reuben says. “You would think that in a marketplace competing for talent, firms that discriminate and don’t hire optimal talent would just disappear.”

It’s probably difficult to avoid discriminating, Reuben acknowledges. “You can’t not interview candidates, because you’d lose other valuable information—can they speak articulately and knowledgeably on the spot, are they a fit in terms of personality, and other considerations about someone’s potential success in a given firm’s culture.”

So what’s an HR department or a hiring manager to do? “Be aware. You can overcome your bias if you process the information you get properly—don’t give weight to stereotypes—and concentrate on the information you do have,” he says. “If you’re comparing two women to each other, you’re probably going to make a better decision than if you’re comparing a woman to a man. So, when possible, you should evaluate candidates separately by gender. And that implies that firms might even want to ultimately consider a quota system.”

Ernesto Reuben is assistant professor of management at Columbia Business School.

Read the Research

Ernesto Reuben, Paola Sapienza, and Luigi Zingales, "How Stereotypes Impair Women's Careers in Science," Proceedings of the National Academy of Sciences (2014).
