Mockup of Trigger Remover UI

Trigger Remover

Overview

Trigger Remover is a Chrome extension that aims to make using the web less stressful and less trigger-inducing for users with mental health conditions such as PTSD, anxiety, or eating disorders.

Role: Lead Designer

Duration: 4–29 April 2022

Tools: Figma, FigJam, Emojify


Background

This project began as an exploration into how users with mental health conditions experience the web and how that experience could be improved.

I interviewed a 19-year-old woman with paranoid schizophrenia (let's call her Janet) to gain better insight into how someone with a mental health condition uses the web and how their condition affects that experience. Janet described an instance where she saw an image of a monster online, and the monster later reappeared in her room as a hallucination during a subsequent schizophrenic episode.

After my interview with Janet, I interviewed a psychologist (let's call them Alex) about their knowledge of how people with mental health conditions use the internet. They explained how...



“...the internet is full of triggers, and apps and sites, such as most social media, can be dangerous or have negative implications for people who may be vulnerable to specific content.”


With the information Janet and Alex gave me, I defined the problem and decided on a concept:



Problem: Triggering content on the web.



Concept: Smart trigger warning.

Research

After defining the problem of triggering content, I began researching triggers. A trigger is a psychological stimulus, anything from a smell to a sound or a sight, that reminds a person of a traumatic experience (Trigger - GoodTherapy.org Therapy Blog, 2022).

I also read that merely thinking of a trigger can be enough to cause a response, and that people are so adept at word association that they cannot see a trigger warning without thinking of the triggers it names (Nast, 2022). This suggested to me that there was an opportunity not just to warn people about triggers, but to remove triggers entirely.

One interesting project I found was by student staff at PARC (Prevention, Advocacy & Resource Center) at Brandeis University: a collection of words and phrases that have negative connotations or violent histories, or could be triggering. The project, entitled “Suggested Language List”, aims to educate people on how the language they use can do harm and how they can account for this (PARC, 2022).

Ideation

A Crazy Eights exercise with the prompt “smart trigger warning” led the ideation phase.

Crazy eights sketches

Of these, removing named triggers from webpages, removing search results containing triggers, and a trigger-handling system all had enough merit to pursue and develop further.

I expanded each idea's scope and details, then merged them into one larger concept.

Sticky notes of the best ideas from the Crazy Eights exercise
The Idea

Merging the three concepts from the ideation phase produced the core idea: users input their triggers into a form, and an AI image- and text-recognition algorithm removes content containing those triggers from webpages.

Sketch of the idea made from the best concepts of the Crazy Eights exercise
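The text side of this concept can be sketched in a few lines of vanilla JS. Everything below is illustrative: the function names, the trigger list format, and the block-character replacement are my assumptions for this sketch, not the extension's actual implementation.

```javascript
// Sketch: check whether a piece of page text contains any of the user's
// triggers (assumed to be stored as a plain list of words or phrases).
function containsTrigger(text, triggers) {
  return triggers.some((t) => {
    // Escape regex metacharacters, then match on word boundaries so
    // "rat" does not flag "celebrate".
    const escaped = t.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
    return new RegExp(`\\b${escaped}\\b`, "i").test(text);
  });
}

// Sketch: replace any line of text containing a trigger with a cover.
// A real content script would walk DOM text nodes instead of lines.
function censorText(pageText, triggers) {
  return pageText
    .split("\n")
    .map((line) => (containsTrigger(line, triggers) ? "████████" : line))
    .join("\n");
}
```

In the finished concept this matching step would be backed by the AI text-recognition layer; plain word matching is only the simplest possible first pass.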
Before Prototyping

In preparation for building the first prototype, I ideated multiple methods of covering, disguising and hiding triggers.

Examples of methods I was considering using to hide text

I also defined levels of trigger removal based on the severity of the triggering content. This gives users more choice in what they see and reflects the fact that some content may affect them less than other content on the page.

5 levels of trigger severity and how they would be handled by the Trigger Remover
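A severity scale like this maps naturally onto a lookup from level to handling method. The five levels live in the design mock-up above, so the mapping below is a hypothetical reconstruction, not the shipped scale; only the "lowest level swaps words for synonyms" behaviour is confirmed later in the Proof of Concept section.

```javascript
// Hypothetical mapping from trigger severity to handling method.
// The real five-level scale is defined in the design mock-up.
const HANDLING_BY_SEVERITY = {
  1: "paraphrase", // mild wording swapped for synonyms
  2: "warn",       // inline trigger warning, content still visible
  3: "blur",       // content blurred until the user opts in
  4: "cover",      // content covered behind a multi-step reveal
  5: "remove",     // content removed from the page entirely
};

function handlingFor(severity) {
  // Fall back to a warning for anything outside the known scale.
  return HANDLING_BY_SEVERITY[severity] ?? "warn";
}
```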

The web also contains multiple types of content, from text to images to videos. I designed trigger removal and censoring methods for each, to ensure all bases were covered and the user wouldn't be caught off guard by an edge case while believing they were browsing safely.

How Testing Was Conducted

For testing the prototype, I wanted both qualitative and quantitative data to reinforce the user feedback given during and after each session. Testing consisted of moderated usability studies, so the facilitator (me) could note the user's body language and ask questions. Each test ended with a series of questions about the user's experience, how they found the interactions, and their opinion on the levels of triggering content. I gathered quantitative data using Emojify, an AI emotion tracker whose reading of the user's facial expressions gave insight into how they felt about specific content (Emojify, 2022).

User demonstrating prototype using Emojify to detect their emotional state

Before each test began, every tester was given a disclaimer that they would be seeing triggering and sensitive content, so they were aware they could stop, take a break at any time, or skip any content they didn't want to see. Seven participants tested each prototype, drawn from varying age ranges and with different mental health conditions (to protect their privacy, I will not share any further information about them).

Prototype & Testing - Round 1

The first prototype was built from a news article containing some graphic content, with the graphic content covered at different levels of resistance. Using Emojify, I established a baseline: participants registered as nervous for 30.2% of the time they spent reading the graphic content.

Prototype & Testing - Round 2

The second prototype addressed some of the problems the first prototype faced.

Prototype & Testing - Final Prototype

Participants registered unhappy emotions for an average of only 7.9% of the test duration: a 2.3% improvement over prototype 2 and a 13.1% improvement over prototype 1.

Results

22.3%

Reduction in the proportion of time users felt anxious, nervous, or unhappy while reading graphic content.

Takeaways

In the early stages of concept development, interviewing people with experience and industry professionals can be extremely beneficial to make sure the project goes in the right direction.

Using facial emotion recognition was an effective way to gather quantitative data to justify design decisions and demonstrate effectiveness. In future, I would consider additional methods of gathering quantitative data, such as using Hotjar in unison with other methods.

Proof of Concept

To ensure that the designs I created are viable and possible to build, I identified methods by which each could be implemented, at least as a first iteration.

Image Blurring: For the image-blurring trigger cover, I found ImageMagick, which Google Cloud Functions can run to blur images. Google Cloud includes documentation on how to set this up using the Google Cloud API in languages such as Node.js, Python, or Java. (My recommendation would be Node.js, as the other proofs of concept below use vanilla JS and JSON.) (Google Cloud, 2022)

Google Cloud ImageMagick tutorial page
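The Google Cloud tutorial wraps the standard ImageMagick command line, so the blur step reduces to building a `convert` invocation. This is a minimal sketch of that step only; the file paths and blur strength are illustrative assumptions, and a Cloud Function would pass these arguments to a child process rather than return them.

```javascript
// Build the ImageMagick CLI arguments for a Gaussian blur.
// "convert in.png -blur 0x8 out.png" is standard ImageMagick usage:
// a radius of 0 lets ImageMagick choose a radius for the given sigma.
function blurCommand(inputPath, outputPath, radius = 0, sigma = 8) {
  return ["convert", inputPath, "-blur", `${radius}x${sigma}`, outputPath];
}
```

In a Cloud Function, these arguments could be passed to something like `child_process.execFile`, blurring the flagged image before it is served back to the extension.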

Triggering Content Recognition: For triggers in images to be covered, some form of image recognition is needed that can also recognise whether the content is triggering. The Google Cloud Vision API includes a SafeSearch feature which recognises potentially unwanted properties or objects in images: in our case, triggers. (Google Cloud, 2022)

Google Cloud image object recognition page, displaying that no graphic content can be seen in image
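SafeSearch returns a likelihood string per category (`adult`, `violence`, `racy`, and so on), so the extension's decision step is a comparison against a threshold. The likelihood values below are the real Vision API enum; the `shouldCover` function, its field list, and the default threshold are assumptions for this sketch.

```javascript
// Likelihood values as returned by the Vision API SafeSearch annotation,
// ranked from least to most likely.
const LIKELIHOOD_RANK = {
  UNKNOWN: 0,
  VERY_UNLIKELY: 1,
  UNLIKELY: 2,
  POSSIBLE: 3,
  LIKELY: 4,
  VERY_LIKELY: 5,
};

// Decide whether to cover an image, given a SafeSearch annotation such as
// { adult: "VERY_UNLIKELY", violence: "LIKELY", ... }.
// The threshold and checked fields are assumptions for this sketch.
function shouldCover(annotation, threshold = "POSSIBLE") {
  const min = LIKELIHOOD_RANK[threshold];
  return ["adult", "violence", "racy", "medical"].some(
    (field) => LIKELIHOOD_RANK[annotation[field] ?? "UNKNOWN"] >= min
  );
}
```

Covered images would then be run through the blur step above, or removed outright at the higher severity levels.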

Paraphrasing Tool: To remove the lowest level of trigger in the prototype, words that could be triggering but aren't guaranteed to have an effect were swapped for synonyms. To do this on a live site, you would need access to a paraphrasing tool; I found multiple open-source paraphrasers on GitHub. (HealthyTechGuy, 2022)

Github repository of paraphrasing tool
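As a minimal stand-in for a full paraphrasing tool, the lowest severity level can be approximated with a plain synonym substitution pass. The word map and function below are hypothetical examples for this sketch, not PARC's list or the GitHub tool's API.

```javascript
// Swap potentially triggering words for softer synonyms.
// `synonyms` maps lowercase words to replacements, e.g. { died: "passed away" }.
function softenText(text, synonyms) {
  return text.replace(/\b\w+\b/g, (word) => {
    const replacement = synonyms[word.toLowerCase()];
    if (!replacement) return word;
    // Preserve a leading capital letter from the original word.
    return word[0] === word[0].toUpperCase()
      ? replacement[0].toUpperCase() + replacement.slice(1)
      : replacement;
  });
}
```

A real paraphraser would rewrite whole phrases in context; single-word substitution is only the simplest demonstration that the level-one behaviour is buildable.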

Removing Search Results: Removing triggers from search results entirely is the easiest of these tasks to accomplish. Dorking is a method of searching the web with greater specificity: if you append each trigger to the query with a minus sign in front of it (-trigger), no search results containing that trigger will be displayed.
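The search-exclusion step is just string manipulation on the query before it is submitted. The function below is a sketch under that assumption; quoting multi-word triggers so the whole phrase is excluded is a standard search-operator convention.

```javascript
// Append "-term" exclusion operators to a search query so results
// containing the user's triggers are filtered out by the search engine.
function excludeTriggers(query, triggers) {
  const exclusions = triggers.map((t) =>
    // Quote multi-word triggers so the whole phrase is excluded.
    t.includes(" ") ? `-"${t}"` : `-${t}`
  );
  return [query, ...exclusions].join(" ");
}
```

A browser extension could apply this rewrite transparently whenever the user searches, so triggering results never appear at all.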


References

Emojify.info. 2022. Emojify. [online] Available at: [Accessed 12 May 2022].

GoodTherapy.org Therapy Blog. 2022. Trigger - GoodTherapy.org Therapy Blog. [online] Available at: [Accessed 11 May 2022].

Google Cloud. 2022. ImageMagick Tutorial | Cloud Functions Documentation | Google Cloud. [online] Available at: [Accessed 12 May 2022].

Google Cloud. 2022. Vision AI | Derive Image Insights via ML | Cloud Vision API | Google Cloud. [online] Available at: [Accessed 12 May 2022].

HealthyTechGuy, 2022. GitHub - HealthyTechGuy/paraphrasingTool: A simple React app that uses a paraphrasing Tool API to rewrite text. [online] GitHub. Available at: [Accessed 12 May 2022].

Nast, C., 2022. What if Trigger Warnings Don’t Work?. [online] The New Yorker. Available at: [Accessed 11 May 2022].

PARC, 2022. Categories. [online] Sites.google.com. Available at: [Accessed 11 May 2022].