We have conducted two experiments in which we used AI technology to uncover binary gender representations in our news publications. While the timing of our experiments likely influenced the results, we believe one thing is safe to say:
We can, and want to, do a lot better!
Why are we assessing gender bias?
As noted in our reflections on understanding newsroom biases, there are many different types of biases affecting society. During the LSE JournalismAI Collab – a global industry collaboration – we decided to focus on just one of them: gender bias.
While we do believe that gender identities entail more than the binary male/female, we decided to narrow the scope of this experiment to only assess binary gender representations.
Why did we run two different experiments?
We found that developing or utilising an AI tool that works across multiple written languages with consistent accuracy is challenging. As we work in different languages (Japanese, French, Spanish, Swedish, English, Norwegian, and German), we chose to collaboratively focus on a media expression we all share: images.
However, a few of us do publish in English. Through the Collab we got in touch with the great team at Gender Gap Tracker, and together we conducted a text-based analysis of gender representation as well!
You can learn more about our experiments here: