👻 Image Anonymiser

Welcome! This project was developed as part of the Full Stack Deep Learning course (2022 edition). The aim of the project was to develop an ML-powered web application that lets users anonymise specific classes of objects in an image (e.g. faces, people, text). This problem is critical in many domains, with applications including preserving privacy, protecting confidential information, removing branding references, etc. The work we’ve done is open source and available on GitHub.

The aim of this document is to present our solution, the steps we took to build it, and the lessons we learnt.

The Team

First of all, this project was built by team_003:

Product overview

At a high level, our solution is based on a two-step approach: first, use deep learning models (object detection and segmentation) to locate the target object(s) in the input image; then let the user “customise” the way the anonymisation is applied.
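The two-step flow can be sketched as follows. This is a minimal illustration, not the project's actual code: the detector here is a stub standing in for a real detection/segmentation model, and the anonymisation step uses simple pixelation as one example of a user-customisable strategy.

```python
import numpy as np

def detect(image: np.ndarray) -> list[tuple[int, int, int, int]]:
    """Step 1 -- locate target objects. A stub standing in for a real
    detection/segmentation model; returns (x0, y0, x1, y1) boxes."""
    h, w = image.shape[:2]
    return [(w // 4, h // 4, 3 * w // 4, 3 * h // 4)]

def anonymise(image: np.ndarray, boxes, block: int = 8) -> np.ndarray:
    """Step 2 -- apply the user's chosen anonymisation (here: pixelation)
    inside each detected box."""
    out = image.copy()
    for x0, y0, x1, y1 in boxes:
        region = out[y0:y1, x0:x1]
        h, w = region.shape[:2]
        for y in range(0, h, block):
            for x in range(0, w, block):
                # Replace each block with its mean colour
                region[y:y + block, x:x + block] = region[y:y + block, x:x + block].mean(axis=(0, 1))
    return out

image = np.random.randint(0, 256, (64, 64, 3), dtype=np.uint8)
result = anonymise(image, detect(image))
```

Swapping the pixelation loop for a Gaussian blur, a solid mask, or an emoji overlay is what “customising” the anonymisation amounts to in this design.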

Our implementation is composed of three main blocks:

1) Web applications

The first MVP

Switching to Streamlit

Streamlit vs Gradio (which one to choose as a developer)

First of all, both Streamlit and Gradio are great tools, and we had no experience with either of them when we started. So the feedback below is based on the difficulties we encountered as beginners:

|  | Reasons to choose | Things to be aware of |
| --- | --- | --- |
| **Gradio** | Super easy to create and deploy your app if you have a simple UI. Great documentation. Easy to have a Singleton pattern (e.g. for loading models at startup). | The Blocks API, which lets you build more complex apps, is relatively new. It may be difficult to find solutions or examples when you face a problem. |
| **Streamlit** | UI that looks "professional" out of the box. Relatively big community, so you can find help, custom components, etc. | The way Streamlit re-executes your script can make it challenging in terms of latency and coding if your UI is complex. Its new caching mechanisms are still experimental and not always "stable". |

Main lesson learnt: When you want to build a complex UI with Streamlit or Gradio, deploy as soon as possible and test how your design impacts latency in a production setting (testing on localhost is a very misleading benchmark!); you may then have to switch to a more “traditional” JavaScript framework.

2) The backend

The backend consists of three main components:

a) The detector:
b) The anonymiser:
c) The data persistence module (FileIO)
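As a rough illustration of the persistence component, the sketch below saves one anonymisation request (the raw image plus its detections) to disk under a unique id. The on-disk layout here is purely illustrative, not the project's actual FileIO format.

```python
import json
import tempfile
from pathlib import Path
from uuid import uuid4

def save_request(out_dir: Path, image_bytes: bytes, boxes) -> str:
    """Persist one anonymisation request -- the raw image and the detected
    boxes -- under a unique id, so results can be inspected or re-used later."""
    req_id = uuid4().hex
    folder = out_dir / req_id
    folder.mkdir(parents=True)
    (folder / "image.png").write_bytes(image_bytes)
    (folder / "detections.json").write_text(json.dumps({"boxes": boxes}))
    return req_id

# Example usage with a temporary directory:
demo_dir = Path(tempfile.mkdtemp())
req_id = save_request(demo_dir, b"\x89PNG", [[1, 2, 3, 4]])
```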

Main lesson learnt: When your app relies on several deep learning models that may be large, decoupling the inference server from the web app (even for an MVP) should be considered from the beginning. In particular, the memory requirements (even if the models are loaded only once) can have a huge impact on the latency and functioning of the app.
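A minimal sketch of that decoupling, using only the standard library: the model lives behind an HTTP endpoint and the web app only sends requests, so it never holds the weights in memory. A real deployment would more likely use a dedicated serving framework (FastAPI, TorchServe, etc.), and `fake_detect` below is a placeholder for the actual model.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.request import Request, urlopen

def fake_detect(image_id: str):
    """Placeholder for the real detection model, which lives only in the
    inference server's process -- the web app never loads the weights."""
    return [[10, 10, 50, 50]]

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        payload = json.loads(self.rfile.read(int(self.headers["Content-Length"])))
        body = json.dumps({"boxes": fake_detect(payload["image_id"])}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):  # silence per-request logging
        pass

# Run the inference server in the background and call it like the web app would:
server = HTTPServer(("127.0.0.1", 0), InferenceHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

req = Request(
    f"http://127.0.0.1:{server.server_port}/detect",
    data=json.dumps({"image_id": "example.png"}).encode(),
    headers={"Content-Type": "application/json"},
)
boxes = json.loads(urlopen(req).read())["boxes"]
server.shutdown()
```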

3) The models

Main lesson learnt: Pre-trained models can be a great way to get started; however, be careful and make sure you understand them, especially the outputs they generate, their inputs and default parameters, the dataset they were trained on, etc.
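To illustrate the “understand the outputs” point: pre-trained detection models differ in whether they return normalised or pixel coordinates, how they encode boxes, and which confidence threshold they apply by default. The sketch below normalises one such (made-up) raw output format before the rest of the app relies on it.

```python
def standardise_detections(raw, img_w, img_h, score_thresh=0.5):
    """Convert a hypothetical model output -- normalised [x0, y0, x1, y1]
    boxes with confidence scores -- into pixel-coordinate boxes, dropping
    low-confidence detections. Coordinate conventions and sensible default
    thresholds vary between pre-trained models, so check the model card
    before reusing logic like this."""
    boxes = []
    for det in raw:
        if det["score"] < score_thresh:
            continue  # discard low-confidence detections
        x0, y0, x1, y1 = det["box"]
        boxes.append((int(x0 * img_w), int(y0 * img_h), int(x1 * img_w), int(y1 * img_h)))
    return boxes

raw = [
    {"box": [0.1, 0.1, 0.5, 0.5], "score": 0.92},
    {"box": [0.6, 0.6, 0.9, 0.9], "score": 0.30},  # below threshold: discarded
]
pixel_boxes = standardise_detections(raw, img_w=640, img_h=480)
# → [(64, 48, 320, 240)]
```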


Main lesson learnt: Even if you are still at an early stage and not yet using managed/orchestration services, there are simple optimisation tasks that can help massively with deployment, including:


This has been a very rewarding experience. There are obviously many things we wish we had had more time to do, and many mistakes we should have avoided, but we learnt a lot during this four-week period, and we hope this write-up gives you some useful insights if you are at the beginning of your ML-product-building journey 👻