So many aid programs in low-income countries have set “empowering women” as their goal. They don’t just want to boost women’s incomes, health and education levels, but to give women the ability to make their own decisions over those aspects of their lives.
But how do you actually gauge how much control a woman has over her life?
“I don’t think it’s been more than five or six years since we’ve been trying to do that. And it’s actually very difficult,” says Mayra Buvinic. A former director for gender and development at the World Bank, she has helped pioneer a growing effort to measure women’s empowerment.
Buvinic, who is now with the Washington, D.C., think tank Center for Global Development, recently convened a meeting of researchers to discuss the challenges — and trade some creative strategies for getting around them, including experimental games and scenarios.
Many of the researchers at the gathering pointed to the difficulty of getting program participants to give an honest account of their views and experiences on a topic where traditional societal expectations can be so strong. “People tell you what they think you want to hear,” says Buvinic.
Joao Montalvao, an economist at the World Bank’s Africa Gender Innovation Lab, said a new study he has done of adolescent girls in Liberia points up the complexity of the problem. He found that when the girls were asked about their sexual behavior, their answers were strongly influenced by who was doing the interview. For instance, the girls were less likely to disclose their sexual activity to interviewers who personally held more traditional views about gender roles — even though the interviewers were following a strict script and had not verbally revealed these personal opinions.
Somehow, says Montalvao, the interviewers had nonetheless communicated their private views, “likely through non-verbal means — their gaze or their intonation in the way they asked the questions.”
How to get around that conundrum? One answer, of course, is careful vetting and accounting for the characteristics of the research assistants who are doing face-to-face interviews. But a new, practical guide to this type of research produced by the Cambridge, Mass.-based Abdul Latif Jameel Poverty Action Lab, or J-PAL, encourages researchers to get even more innovative.
One of the authors is Rachel Glennerster, former executive director of J-PAL and current chief economist for the United Kingdom’s foreign aid agency, the Department for International Development. She noted that the guide includes the example of a research team that presented their subjects with “vignettes.” These are essentially hypothetical scenarios that the subject is then asked to weigh in on.
In this case, says Glennerster, the researchers were studying a school program aimed at reducing discrimination against girls in northern India. So they presented the teens in the study with the story of a fictional 21-year-old village girl named “Pooja” — who is on the cusp of achieving her long-cherished dream of becoming a police officer.
Imagine, the researchers told the kids, that Pooja has just graduated from college, passed the police exam and gotten a job offer. But her parents believe this would be unsuitable work for a young woman. Far better, they say, for Pooja to get married to a husband from a good family who can support her while she takes care of the home and has children.
So Pooja’s parents found her a prospective groom — and told her that instead of taking the police job, she should get married. Did the teens agree with the parents’ decision?
The idea, says Glennerster, was to assess the extent to which the teens believed that women should be allowed to work outside the home. But asking such a general and charged question directly is unlikely to yield much insight. By posing the question within a specific context that resembled situations that many of the students’ own relatives and friends have faced, the researchers could elicit answers that were more honest and nuanced.
Glennerster also highlighted a second, equally daunting obstacle when it comes to measuring the effectiveness of programs intended to empower women. As Glennerster put it, “empowering women is about giving them meaningful choices — but we rarely observe decision-making directly.” And so, she says, researchers should consider trying to create conditions in which they can at least do a version of that kind of observation.
For instance, the guide recounts how a researcher from Dartmouth used a game to gauge how much power a group of married women in Kenya had over their money. The gist of it was that husbands and wives were given a small prize of about $7 and, through various rules and random drawings, the chance to decide jointly or separately how to spend it. The researchers wanted to see how much the decisions differed when the women were making them on their own as opposed to with their husbands.
A similar tactic, says Glennerster, is to create circumstances in which communities make real-world decisions without necessarily realizing that researchers are tracking the process. She described how she had used that approach in a recent experiment. It tested the effectiveness of a program in Sierra Leone that was supposed to increase women’s participation in village decision-making, in, say, figuring out how to spend aid money.
Glennerster arranged for the research assistants to tell each group being studied, “We’re going to be taking a lot of your time to do these surveys. So we want to compensate you with a gift. And we have two gifts at the back of the truck — we have salt and we have batteries — and we’d like to know which one you’d like.”
Then, says Glennerster, “we stepped back and watched.” They wanted to see: Did the village chief simply take the decision without consulting the rest of the assembled community? And if the villagers did hold a discussion, did women speak up? And for how long?
They found that about one-third of the time, the chief made the call on his own. And even when there was a village meeting, there was no difference in the number of women who participated in the deliberation compared to what happened in a control group that had not participated in the empowerment program.
In other words, said Glennerster, here was a program that “had just spent four years persuading [people in these villages] that you needed to listen to women. And [our finding] was a pretty good indicator that actually women’s voices hadn’t changed in these communities.”
This doesn’t mean programs like this aren’t potentially valuable, she says. But it underscores how vital it is to use rigorous research to make sure they’re actually accomplishing their goal.