Building a Facebook Chatbot using Cloud Run

Tags: docker, containers, gcp, google-cloud, serverless, computer-vision

In the quest to move everything to a pay-for-what-you-use model, and to maintain my enthusiasm for serverless, I was inspired by Nathan’s post over on DevOpStar about creating a chatbot using Facebook’s Messenger platform. With my past as a town planner, I’ve been wanting to combine my two skill sets — programming/DevOps and town planning — to help make the community better for a while, but I’d been struggling to find inspiration for an idea.

Until the other day, when I was with a friend musing about how people never seem to know which bin to put their waste into. Can I recycle this bottle? Can I put this fruit in the green waste? It’s organic, so it makes sense, right? Wrong. It turns out fruit and veggies don’t go into the green organics bin, but rather the red general waste bin (at least in my local council — realising different councils have different rules).

The idea of “Which Bin?” was born. Take a photo of something, send it to the chatbot, and get a response back saying which bin the item needs to go into, using computer vision.

Borrowing the chatbot code from Nathan’s post, I took the approach of utilising one of Google Cloud’s newest services, announced at Next ‘19: Cloud Run.

The first step was to set up continuous deployment by enabling automatic container builds on changes to the master branch. Using GCP for this task, the cloudbuild.yaml used for this project is as follows:

```yaml
steps:
  # Pull the previous image so its layers can seed the build cache;
  # "|| exit 0" keeps the step green on the very first build.
  - name: "gcr.io/cloud-builders/docker"
    entrypoint: "bash"
    args:
      - "-c"
      - |
        docker pull gcr.io/$PROJECT_ID/whichbin-chatbot:latest || exit 0
  - name: "gcr.io/cloud-builders/docker"
    args: ["build", "-t", "gcr.io/$PROJECT_ID/whichbin-chatbot:latest", "--cache-from", "gcr.io/$PROJECT_ID/whichbin-chatbot:latest", "."]
images:
  - "gcr.io/$PROJECT_ID/whichbin-chatbot"
```

If you’re my blog’s biggest fan, you’ll notice I’m borrowing from one of my previous posts on speeding up builds using the Docker cache.

The next piece of the puzzle was to connect the chatbot to the Google Cloud Vision API. We want to send the image we receive from the user to the Vision API to determine what it is, then present the result back to the user.

Using the code provided in Nathan’s example as a starting point, I wrote a new function which calls the Vision API and extracts the label descriptions:

```javascript
const vision = require("@google-cloud/vision");
const visionClient = new vision.ImageAnnotatorClient();

function getLabels(imageUrl) {
  return new Promise((resolve, reject) => {
    visionClient
      .labelDetection(imageUrl)
      .then(responseArray => {
        const [response] = responseArray;
        const labels = response.labelAnnotations;
        const descriptions = labels.map(label => label.description);
        console.log("Image processed. The labels are:", descriptions);
        resolve(descriptions);
      })
      .catch(error => reject(error));
  });
}
```
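To close the loop on the idea, here is a hypothetical sketch (not from the original post) of how the returned label descriptions could be mapped to bins. The keyword table is illustrative only — real rules vary by council, as noted earlier.

```javascript
// Illustrative label-to-bin lookup -- check your own council's rules.
const BIN_RULES = {
  bottle: "yellow recycling",
  plastic: "yellow recycling",
  fruit: "red general waste", // food scraps: not the green organics bin!
  vegetable: "red general waste",
  grass: "green organics"
};

// Return a reply for the first bin whose keyword appears in any label,
// or a fallback message if nothing matches.
function whichBin(descriptions) {
  for (const label of descriptions) {
    for (const [keyword, bin] of Object.entries(BIN_RULES)) {
      if (label.toLowerCase().includes(keyword)) {
        return `That looks like it goes in the ${bin} bin.`;
      }
    }
  }
  return "I'm not sure which bin that goes in, sorry!";
}
```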

Once we had our code ready and our container built, we just needed to deploy the container, grab the endpoint provided by the Cloud Run service, and provide that to the Facebook developer console so the chatbot knows how to communicate with our backend.
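As a rough sketch of that deploy step (service name and region here are illustrative, and depending on your gcloud version the commands may sit under `gcloud beta run`):

```shell
# Deploy the image built by Cloud Build, allowing unauthenticated
# requests so Facebook's webhook can reach the service.
gcloud run deploy whichbin-chatbot \
  --image "gcr.io/$PROJECT_ID/whichbin-chatbot:latest" \
  --platform managed \
  --region us-central1 \
  --allow-unauthenticated

# Print the HTTPS endpoint to paste into the Facebook developer console.
gcloud run services describe whichbin-chatbot \
  --platform managed \
  --region us-central1 \
  --format "value(status.url)"
```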

Cloud Run Backend

Adding this to the Facebook config, we can now communicate with the Vision API, sending it images for analysis. The best part? All of this is free under a certain limit, and you only pay for what you use after that. The very definition of serverless!
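For context on what the webhook actually receives: Messenger wraps events in an `entry[] → messaging[] → message.attachments[]` structure, so the backend needs to dig the image URL out before handing it to the Vision function. A minimal sketch (the helper name is mine, not from Nathan’s code):

```javascript
// Pull image URLs out of a Messenger webhook payload so they can be
// passed on to the Vision API for label detection.
function extractImageUrls(webhookEvent) {
  const urls = [];
  for (const entry of webhookEvent.entry || []) {
    for (const messaging of entry.messaging || []) {
      const attachments = (messaging.message && messaging.message.attachments) || [];
      for (const attachment of attachments) {
        if (attachment.type === "image" && attachment.payload && attachment.payload.url) {
          urls.push(attachment.payload.url);
        }
      }
    }
  }
  return urls;
}
```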

Chatbot Vision Response