Image recognition: a scalable way for the City of Toronto to reduce trash mismanagement.
In Toronto, non-recyclable materials placed in the recycling bin cost the city millions of dollars annually. They can damage equipment, cause workplace injuries at the recycling facility, and contaminate otherwise perfectly good recyclables. The City of Toronto has been running initiatives to fight trash mismanagement. One of those efforts was an app that lets people find out which bin a waste item goes in: users search by the item’s name, and the app tells them where it belongs.
The app worked well and it did help me put trash in the right bin, but it didn’t work for my other family members. They are first-generation Chinese Canadians who know little English. They couldn’t find their answers because they didn’t know the English names of the items, so they ended up throwing trash in the wrong bin.
I knew I needed to make something that helps my family throw their trash in the right bin. So I asked myself: what is the minimum viable solution? I made two lists, in Chinese, of the most common trash items in our household, one for recycling and one for non-recycling, and printed them out on two pieces of paper. The names are in a big, bold font so they’re easy to read. Then I taped each list to the lid of the corresponding bin, so every time they throw something away, they are forced to see it.
It worked for my dad, but it didn’t work as well for my aunt. My hypothesis is that they have different levels of literacy: my dad is a college graduate, whereas my aunt never finished primary school. Words are harder for my aunt to process, which could explain why she kept misplacing trash even after seeing the list.
I replaced the text-only list with an improved version that pairs linear illustrations with the text. I chose linear illustrations over photos or icons because: 1. photos are hard to identify when printed in greyscale (thanks to my black-and-white printer), and 2. icons are too abstract for my aunt to understand. Linear illustrations are a good fit here: they print with full contrast and they aren’t too abstract. The list worked. My aunt was able to quickly find items by scanning the illustrations. Even when an item wasn’t on the list, she was able to make a guess based on its material.
Next stop: Scalability.
My low-cost solution was enough to significantly reduce trash mismanagement in my household. But then I asked myself: what about families that speak other languages? And every household’s trash differs. In Canada, 80% of the population speaks an immigrant language (a language other than English, French or an Aboriginal language) at home (Statistics Canada). With over 200 languages spoken in Canada, and thousands of trash types, I realised a print-out list isn’t enough to solve the problem at scale. The Waste Wizard app’s search tool already has over 2,000 items in its database. You could translate the whole database into 200 languages, but people might describe an item differently from its name in the database, or worse, sometimes they don’t know how to describe the trash at all. How might we make it so that people can search regardless of the language they speak?
Waste Wizard + image recognition
What if we could use image recognition instead of text input to search the database? Taking a picture doesn’t require people to describe the trash. My hypothesis is that if people can take a picture of an item and learn whether it’s recyclable, the app becomes accessible to users regardless of their language or literacy. Based on the current Waste Wizard app’s UX, I used Adobe XD to design what the task flow would look like.
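To make the idea concrete, here is a minimal sketch of how the pieces could connect: a photo goes to an image-recognition model, the model’s predicted label becomes the search key, and the database lookup returns the bin. The classifier and the database entries below are placeholders I made up for illustration; a real version would use a trained vision model and the full Waste Wizard database.

```python
from typing import Callable

# Hypothetical slice of the Waste Wizard database: item label -> bin.
WASTE_DATABASE = {
    "plastic bottle": "recycling",
    "banana peel": "green bin",
    "styrofoam cup": "garbage",
}

def classify_image(image_bytes: bytes) -> str:
    """Placeholder for an image-recognition model.

    A real implementation would return the model's top predicted
    label; here we return a fixed label so the flow can be shown.
    """
    return "plastic bottle"

def which_bin(image_bytes: bytes,
              classifier: Callable[[bytes], str] = classify_image) -> str:
    """Map a photo of an item to the bin it belongs in.

    Because the lookup key is the model's label, not the user's own
    words, the flow works no matter what language the user speaks.
    """
    label = classifier(image_bytes)
    return WASTE_DATABASE.get(label, "not found - try a text search")

print(which_bin(b"<photo bytes>"))  # prints "recycling"
```

The key design point is that language only appears at the very end, when the result is displayed, so the answer screen is the only part that needs translating.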
To be continued
I believe this solution will help not only the City of Toronto, but any city in the world to reduce trash mismanagement at scale.