How we created Maru, a global anti-harassment chatbot

Feminist Internet
9 min read · Nov 25, 2020
A screenshot of the Maru avatar and chat bubbles that say ‘Hey, I’m Maru, a chatbot that helps you tackle online harassment’
Maru, a chatbot that helps tackle online harassment

When Plan International approached us about creating an anti-harassment technology in response to their recent research on online harassment, we knew what we wanted to do.

First, we wanted to design something with the young people who are so heavily impacted by this problem: Plan’s research found that 58% of girls have experienced online harassment, and 63% of girls who haven’t faced harassment themselves know a girl who has. One in four girls abused online feels physically unsafe as a result, and harassment is silencing girls’ voices.

Second, we wanted to experiment with conversation design, because we believe it’s a powerful medium for communicating complicated information in a sensitive, engaging and accessible way.

Luckily, we were able to do both. Plan connected us with a panel of 6 youth activists from Benin, Cameroon, Germany, Ghana, Nepal and South Africa, who all care deeply about promoting internet freedom and safety. We ran a series of 2-hour co-design workshops that allowed us to understand the online harassment experiences these activists had or had witnessed in their own contexts, and to design the chatbot as closely together as possible. The workshops were delivered in English and French. We set participants 1-hour assignments during each workshop and used Padlet to share images and research findings.

The result of this incredible collaboration was Maru, a chatbot designed to support and empower people who are experiencing, witnessing or fighting online harassment by providing real advice and resources from experts and activists. In its current iteration, Maru is only available in English, but we hope to roll it out in other languages soon, as this is key to making it accessible to a global audience. The process was informed throughout by our Feminist Design Tool. Here’s what it looked like:

Co-design Workshop 1 — Stakeholders & Purpose

Screenshot of a workshop slide asking ‘How could a feminist chatbot help young people address online harassment?’

We wanted to establish early on who we were designing for and what the exact purpose of our anti-harassment chatbot would be. We started by asking which kinds of harassment and which platforms participants were most concerned about. We explored the many types of harassment women and other marginalised groups face online and found that sexting, sexual harassment, hate speech, gender-based slurs and identity theft were a priority. The group wanted to focus on Instagram and Facebook, as these are popular in their countries and places where significant harassment is experienced. Plan International’s research found that 23% of girls and young women reported that harassment happens on Instagram, but it is on Facebook that they feel particularly unsafe: 39% of girls reported that they face harassment there.

We heard that providing definitions of harassment would help targets, bystanders and perpetrators of online harassment. It would help targets to identify when harassment is happening to them, bystanders to call out harassment being experienced by others, and perpetrators to see that what they are doing is wrong.

Participants agreed that providing clear information about the impacts of harassment would benefit all these groups as well, though targets and bystanders were seen as the main audience for the chatbot.

As our chatbot was going to be aimed at a global audience, we asked what we might need to consider about the countries participants are from. We heard that:

  • Memes or jokes may be context or country specific, so this should be considered during conversation design
  • Internet access is poor in some countries, so we should minimise video and streamed content
  • Pointing people to local and relevant resources would be most helpful

We then discussed how the group thought this chatbot could help. We agreed it could do so by raising awareness, providing practical support on how to protect yourself from and respond to harassment, encouraging allyship, and providing emotional support. The group agreed that after using the chatbot, people should know more about the topic of harassment and be able to:

  • Report harassment
  • Improve their online privacy & security
  • Help others experiencing harassment
  • Access relevant information on how to get help

The group wanted people who had used it to feel relieved, safe, empowered, protected, less alone, and supported. Above all, we agreed it was important to try to cultivate a sense of solidarity.

For the next session, we asked participants to conduct some stakeholder research: to talk to friends and family about the proposed chatbot and find out whether they felt it would be helpful, whether it would tackle online harassment, and what they would most like to learn or get from it.

Our two brilliant researchers, Safiya and Beatrice, started working on finding global case studies of harassment, gathering anti-harassment organisations and resources and building a glossary of terms informed by the Women’s Media Centre and PEN America.

Workshop 2 — Stakeholder Research, Feminist Conversation Design, Chatbot Personality

This workshop was about understanding the participants’ stakeholder research, introducing feminist approaches to conversation design, and thinking about the chatbot’s personality and character.

Having talked to their peers, participants reported that there was a lot of positive support for the idea of the chatbot, but that in some regions there isn’t a clear understanding of what chatbots are, so sharing examples of existing chatbots would be helpful in future conversations. Stakeholders also raised concerns about how the chatbot would be marketed, so we committed to working with Plan International to develop a marketing strategy to address this.

After discussing stakeholder research, we introduced some thoughts on feminist conversation design, which to us:

  • Considers how language can be an activist tool
  • Doesn’t try to fool anyone into thinking a bot is a human
  • Understands emotional connection as a contract that should not be abused
  • Is designed to respond adequately to harassment
  • Uses empathic language
  • Doesn’t make assumptions about a person’s gender

We discussed some key terminology, such as ‘victim’, ‘perpetrator’ and ‘bystander’, checking that these words were understood in both languages. We reflected on debates about the words ‘victim’ and ‘survivor’, and whether we wanted to use them. We agreed that ‘victim’ is helpful in the context of explaining what victim blaming is, which is necessary because it is a common problem and one reason why people are afraid to talk about harassment. We agreed to use the term ‘victim’ only in an impersonal/general context. If an individual is being described, we would use ‘a person experiencing harassment’ or ‘target of harassment’.

Screenshot of workshop slide asking participants to discuss the terms ‘Victim’, ‘Bystander’ and ‘Ally’

Next, we asked participants about how a feeling of solidarity could be created through our anti-harassment chatbot. They said that solidarity comes from feeling informed and knowing about other organisations or activists that are working in this space. It can also be cultivated by showing people how much they are needed and how important it is that they are safe to share their experiences. For me this discussion was crucial as it helped us see clearly that a huge part of this project is about bringing together and highlighting the work of activists and organisations already working in the anti-harassment space. Feeling a sense of solidarity with them as well as those going through the experience is a great source of strength and resilience.

Finally, we discussed the chatbot’s personality. The most popular words to describe it were ‘honest’, ‘strong’, ‘compassionate’ and ‘inspiring’. The group felt the chatbot should embody the character of an activist, that it should be represented in a cartoon style, and that it should not be assigned a gender.

Participants’ assignment this time was to find or create 1 or 2 images that visually represent the chatbot’s personality. We also asked them to consider a name.

With the insights gathered in the first two workshops, the conversation design process intensified, and we started to sketch out the chatbot’s main pathways so we could feed back to the group at the next workshop. We used LucidChart for mapping the conversation, and JavaScript to implement it.
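
To give a flavour of what that involves, here is a minimal sketch, in JavaScript but with hypothetical node names and messages rather than Maru’s actual content, of how a mapped conversation can be represented: each node holds a message and a set of labelled buttons pointing to the next node.

```javascript
// A minimal sketch of a button-driven conversation tree.
// Node names and messages here are illustrative, not Maru's real content.
const conversation = {
  start: {
    message: "Hey, I'm Maru. What would you like to do?",
    options: [
      { label: 'Report harassment', next: 'report' },
      { label: 'Improve my privacy & security', next: 'privacy' },
      { label: 'Help someone else', next: 'ally' },
    ],
  },
  report: {
    message: 'Which platform is the harassment happening on?',
    options: [
      { label: 'Instagram', next: 'reportInstagram' },
      { label: 'Facebook', next: 'reportFacebook' },
    ],
  },
  // ...further nodes for each pathway
};

// Advance the conversation when a button is pressed.
function nextNode(currentId, chosenLabel) {
  const node = conversation[currentId];
  const choice = node.options.find((o) => o.label === chosenLabel);
  return choice ? choice.next : currentId; // stay put on unknown input
}
```

Keeping the map as plain data like this means the flowchart and the code can mirror each other closely, which makes it easier to carry feedback from the workshops straight into the implementation.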

Workshop 3 — Name, prototype and user interface

We started this session by talking about the chatbot’s name — and what followed was one of the most delightful moments of the whole process. There were several great suggestions, but ‘Maru’ just stood out. It means ‘cloud space’ in Sesotho, and the whole team felt inspired by this name conceptually as well as visually, so Maru was officially born!

Next, we shared sections of the introductory conversation and asked for feedback on clarity, tone and a statement on how Maru deals with data (it doesn’t store any). The group responded extremely positively and gave valuable feedback about reducing the number of buttons people have to click to get to necessary information. They appreciated moments in the conversation design where terms like ‘intersectionality’ or ‘cis-gender’ were explained clearly.
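
On the data point: we haven’t reproduced Maru’s actual code here, but one simple way to honour a ‘no data stored’ commitment in a JavaScript chatbot is to keep the session state in an in-memory variable only, never sending it to a server or writing it to cookies or local storage, so closing the tab erases the whole chat. A rough sketch:

```javascript
// Illustrative sketch: conversation state lives only in memory.
// Nothing is persisted to a server, cookies or localStorage,
// so closing or refreshing the tab discards the entire chat.
let session = { currentNode: 'start', history: [] };

function recordStep(nodeId) {
  session.currentNode = nodeId;
  session.history.push(nodeId); // in-memory only, used for navigation
}
```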

We then moved on to exploring possible visual and graphic elements. Feminist Internet co-founder and designer Conor Rigby presented several graphic routes, and there was clear consensus that bold, graphic colours were more desirable than a more minimal palette. It was important that the chatbot’s visual identity reflected the intersectionality of the group and the vibrancy of all the regions they are from.

The assignment this time was for participants to gather examples of websites they love in terms of look and feel, tone of voice and experience. This was to inform the development of the Maru website, which has become a significant part of the project.

We continued to research, design the conversation, and work on website planning…

Workshop 4 — Final feedback and wrapping up

The final workshop came around fast, and we began by reviewing the websites participants had identified and sharing Conor’s revised design and user interface approaches. This included how the chatbot would look on mobile and desktop and what the avatar might look like. We gathered final feedback on graphic style, fonts and palette.

We also proposed that Maru’s accompanying website would contain a list of global anti-harassment organisations, resources, a glossary of terms, and a reference list that cited all the articles and references mentioned in Maru’s conversation. After all, citation is just good feminist practice :) This was well-received, and the only thing left was to share the latest iteration of the conversation design.

We walked participants through sections on Instagram privacy settings, reporting on Facebook, and how to respond if you have experienced non-consensual image sharing (sometimes problematically known as revenge porn). We showed how we had simplified the labels for the main pathways and created a persistent ‘restart chat’ button so that people can always get back to these main pathways wherever they are in the conversation.
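
As a rough illustration of the latter (hypothetical code again, building on the sketches above rather than Maru’s own implementation), a persistent restart control only needs to reset the session state to the root node, wherever the person has got to:

```javascript
// Return to the main pathways from anywhere in the conversation.
// Assumes the `session` and `conversation` objects sketched earlier;
// renderNode is a hypothetical helper that displays a node's message
// and its option buttons.
const restartButton = document.querySelector('#restart-chat');

restartButton.addEventListener('click', () => {
  session = { currentNode: 'start', history: [] };
  renderNode(conversation.start);
});
```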

We were all sad to see the co-creation workshops end, but had a genuine sense of accomplishment, even though we knew there was still a lot of work to do to get Maru ready for launch. Participants said they felt empowered and happy to have been able to work on a tangible response to the problem of harassment.

Post workshops

Following the co-design workshops, work on research, conversation design, website design and coding continued right up until a few days before launch. As we tested and refined, tested and refined, things finally came together. We added trigger warnings to all website content that needed it; we corrected the conversation design and simplified the flow; we contacted people whose stories we had included; we finalised the avatar; we wrangled with glitches in our code. We also finalised our choice of gifs — a crucial component of Maru’s tone of voice. This was challenging because none of them could contain text (tricky when the image/text relationship is the cornerstone of many a gif!), and they needed to hit the right note at all times, particularly as we knew that people chatting to Maru may be in distress.

Creating Maru has been an incredible journey. The chance to work with young activists from across the globe on a practical intervention to such a pervasive problem is the kind of work we dream about. Ultimately, Maru brings together the research and activism of an extraordinary global community of people fighting to end online harassment. We salute them all, and hope that this project provides another contribution towards creating an internet where women, girls and other marginalised groups are not silenced, threatened and harassed, but free to be online.

If you would like to talk to us about Maru, or are interested in working with Feminist Internet, please contact Charlotte: charlotte@feministinternet.com
