There is a pressing need to foster inclusion and diversity in technology, and digital identity systems are no exception. In recent years, research has highlighted the absence of gender inclusivity in various computing scenarios [1,2] and found that users' problem-solving approaches often vary by gender. This has direct implications for software designed to assist users, as most software tends to favour methods statistically preferred by men.
Removing these gender biases is crucial to ensure everyone's ability to fully participate in and benefit from technology. Recognising this, MOSIP partnered with researchers at Oregon State University (OSU) to apply the GenderMag methodology, which the OSU team developed.
The researchers conducted a 12-month case study to identify gender-inclusivity issues in one of MOSIP's modules, Inji — a mobile app for storing and authenticating IDs offline. Additionally, they trained MOSIP software practitioners to lead GenderMag evaluation sessions independently.
What is GenderMag?
GenderMag, also known as the Gender-Inclusiveness Magnifier, is a usability inspection method for finding and fixing gender-inclusivity bugs in problem-solving software. Software practitioners can use it to evaluate the software they design and develop from a gender-inclusiveness perspective. The GenderMag method focuses on five facets of gender differences, brings them to life through three faceted personas, and encapsulates the use of these facets in a systematic process: a gender specialisation of the Cognitive Walkthrough (CW).
The five facets of GenderMag assess an individual's cognitive style, which influences their interaction with technology. These facets include:
1. Diverse motivations for using technology
2. Information processing styles
3. Computer self-efficacy
4. Attitude towards risk when using unfamiliar technology features
5. Learning style (learning by process vs. tinkering)
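As a purely illustrative sketch (not part of the official GenderMag materials), the five facets can be modelled as a small data structure that evaluators consult at each walkthrough step. The facet values assigned below to Abirami and Timir are guesses based only on the descriptions in this article, not on the actual GenderMag personas:

```python
from dataclasses import dataclass

@dataclass
class Persona:
    """A faceted persona for a GenderMag-style walkthrough (illustrative only)."""
    name: str
    motivation: str       # why they use technology
    info_processing: str  # how they gather information before acting
    self_efficacy: str    # confidence with unfamiliar software
    risk_attitude: str    # willingness to try unfamiliar features
    learning_style: str   # "by process" vs. "by tinkering"

# Values below are inferred from this article's descriptions, not official data.
abirami = Persona(
    name="Abirami",
    motivation="uses technology to get a task done, not for its own sake",
    info_processing="comprehensive: reads available guidance before acting",
    self_efficacy="low",
    risk_attitude="risk-averse: avoids clicking unfamiliar buttons",
    learning_style="by process",
)

timir = Persona(
    name="Timir",
    motivation="task-focused",
    info_processing="selective: acts on the first promising option",
    self_efficacy="medium",
    risk_attitude="willing to explore",
    learning_style="by tinkering",
)
```

During an evaluation, the team walks through each step of a task and asks whether a user with this persona's facet values would know what to do next; a "no" answer is recorded as a candidate inclusivity bug.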
The "inclusivity bugs" uncovered by GenderMag are instances where a technology product fails to fully accommodate the complete range of values associated with the five facets, leading to a disproportionate impact on individuals whose cognitive styles are unsupported. These bugs are also considered gender-inclusivity issues because the facets reflect statistical differences among gender preferences in cognitive styles.
Real-World Examples and Implications
Consider the persona of Abirami, a farmer with limited technology experience. She is less likely to take risks by clicking unfamiliar buttons. The study found inclusivity bugs such as vague error messages after entering a wrong passcode. Since Abirami is less likely to tinker and figure out the right next step without adequate guidance, this would frustrate her, thereby preventing her from obtaining an ID.
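To make this kind of bug concrete, here is a hypothetical before/after sketch (the actual Inji messages are not reproduced here). A vague message leaves a risk-averse, process-oriented user like Abirami with no next step, while an actionable one tells her what went wrong and how to recover:

```python
def passcode_error_vague() -> str:
    # Hypothetical example of the kind of message GenderMag flags:
    # it states failure but offers no guidance on what to do next.
    return "Authentication failed."

def passcode_error_actionable(attempts_left: int) -> str:
    # A more inclusive alternative: it says what went wrong, what to
    # do next, and what happens as attempts run out, so a user who
    # does not tinker can still recover without guesswork.
    return (
        f"The passcode you entered does not match. "
        f"You have {attempts_left} attempts left. "
        f"Tap 'Forgot passcode' to reset it."
    )
```

The second message supports Abirami's process-oriented learning style by spelling out the recovery path instead of assuming she will explore the interface to find it.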
Alternatively, consider Timir, a nurse who learns by tinkering but dislikes unclear terminology. The app contained bugs such as abstract industry jargon presented without explanation. This could hinder Timir from verifying his ID effectively, demonstrating how poor design choices can obstruct users who prefer hands-on learning but require clear and accessible language.
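One common remedy for this class of bug is to pair each technical term with a plain-language gloss wherever it appears in the interface. The glossary below is hypothetical; the terms are illustrative and not taken from the actual Inji UI:

```python
# Hypothetical glossary mapping technical terms to plain-language
# explanations; these entries are illustrative, not from Inji itself.
PLAIN_LABELS = {
    "VC": "digital ID card (verifiable credential)",
    "revoke": "cancel this ID so it can no longer be used",
    "issuer": "the organisation that created this ID",
}

def label_with_explanation(term: str) -> str:
    """Return a UI label that pairs a term with a plain-language gloss."""
    gloss = PLAIN_LABELS.get(term)
    return f"{term}: {gloss}" if gloss else term
```

A design like this lets a tinkering-oriented user such as Timir keep exploring without being blocked by unexplained jargon.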
These inclusivity bugs stem from unsupported cognitive styles, and they are also gender-inclusivity issues because cognitive styles and feature usage often vary by gender. For example, men may prefer tinkering to learn new features, while women may not. Software tends to support the cognitive styles of its developers, so when male-dominated teams develop apps, they can unintentionally introduce gender bias.
Real users with cognitive styles like Abirami and Timir would face barriers the app never intended to create, leading to frustration and reduced usability, ultimately hindering the adoption of the technology.
The GenderMag method leverages personas to act as stand-ins for diverse real-world users. Testing with these personas revealed design choices that exclude certain cognitive styles. Evaluation techniques like GenderMag offer a proactive model for assessing inclusivity, ensuring that design choices accommodate a broad range of cognitive styles.
It is crucial to consider the overarching goal of digital ID systems: to include all individuals. Bugs that disproportionately obstruct certain cognitive styles directly counteract this mission. Frustrated users result in lower adoption rates, thereby diminishing the system's overall value. Inclusive design is not just idealistic; it is essential for real-world success. Evaluating inclusivity early in the design process allows for affordable fixes before launch, and building in diversity from the outset increases user uptake and satisfaction later on.
A digital ID system must be accessible and functional for the full spectrum of users within a country, and this research highlights the subtle but serious barriers that exclusive design can create. Its lessons provide both moral and business cases for accommodating cognitive diversity. Diverse digital experiences are not a bonus; they are indispensable for achieving widespread adoption and satisfaction.
We hope these concrete examples motivate developers to prioritise gender inclusivity as they build, test, and refine their software solutions. Design choices that appear neutral can still exclude many real users. By keeping the needs of individuals like Abirami and Timir in mind, an ID system can better achieve its inclusive purpose, ensuring that it serves all users effectively and equitably.
References
[1] Executive Office of the President. 2013. Women and Girls in Science, Technology, Engineering, and Math (STEM). Retrieved September 24th, 2015 from www.whitehouse.gov/ostp/women
[2] National Center for Women & IT. 2014. By the numbers, Version 02282014. Retrieved September 24th, 2015 from https://ncwit.org/resources