Background Jobs with Que-Go
Last updated June 14, 2023
Web servers should focus on serving users as quickly as possible. Any non-trivial work that could slow down your user’s experience should be done asynchronously outside of the web process.
Worker queues are commonly used to accomplish this goal. For a more in-depth description of the worker queue architectural pattern, read the Worker Dynos, Background Jobs and Queueing article. Here we’ll demonstrate this pattern using a sample Go application using Que-Go and Heroku Postgres.
This article assumes that you have the Heroku CLI and the Go toolchain installed.
Que-Go is compatible with the Que Ruby library, enabling sharing of queues and jobs between the two languages.
Getting started
Follow the steps below to create a copy of the example application in your Heroku account:
To deploy via the CLI, run the following commands in a terminal:
$ git clone https://github.com/heroku-examples/go-queue-example.git
$ cd go-queue-example
$ heroku create
$ heroku addons:create heroku-postgresql
$ git push heroku master
$ heroku ps:scale worker=1
Application overview
The application has two processes:
queue-example-web: A Negroni-based application that accepts URLs in the form of a POST’d JSON document containing a url for indexing by a worker.

queue-example-worker: The worker that processes jobs containing a url that should be indexed.
Because these are separate processes, they can be scaled independently based on specific application needs. Read the Process Model article for a more in-depth understanding of Heroku’s process model.
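Both process types are declared in the app’s Procfile. A minimal sketch of what it likely contains, assuming the compiled binaries take the names of their cmd directories:

web: queue-example-web
worker: queue-example-worker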
There is some shared code in shared.go that focuses on setting up the database, the database connections, and the que client.
Web process
cmd/queue-example-web/main.go accepts the POST’d URLs and enqueues them as jobs for processing by the worker process. A sketch of the enqueue step follows.
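The core of the handler is a single Enqueue call against the que client. The following is a minimal sketch of that step, assuming the qc client set up in shared.go; indexRequest and handleIndex are illustrative names, not necessarily the sample’s exact code:

package main

import (
	"encoding/json"
	"net/http"

	"github.com/bgentry/que-go"
)

type indexRequest struct {
	URL string `json:"url"`
}

var qc *que.Client // initialized by the shared setup code

func handleIndex(w http.ResponseWriter, r *http.Request) {
	var ir indexRequest
	if err := json.NewDecoder(r.Body).Decode(&ir); err != nil {
		http.Error(w, "invalid JSON body", http.StatusBadRequest)
		return
	}

	args, err := json.Marshal(ir)
	if err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// Enqueue inserts a row into the que_jobs table; a worker dyno
	// later locks and processes it.
	if err := qc.Enqueue(&que.Job{Type: "IndexRequests", Args: args}); err != nil {
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}

	// 202 Accepted: the indexing happens outside the request cycle.
	w.WriteHeader(http.StatusAccepted)
}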
Worker process
cmd/queue-example-worker/main.go runs two worker goroutines that process jobs from the queue as they arrive. The workers don’t do anything interesting with the data; they just write what they receive to the log stream. This is where long-running processing logic would go. A sketch of the worker’s structure follows.
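With que-go, a worker maps job types to handler functions and runs them in a pool. A minimal sketch, again assuming the qc client from shared.go; indexURLJob is an illustrative name:

package main

import (
	"encoding/json"
	"log"
	"os"
	"os/signal"
	"syscall"

	"github.com/bgentry/que-go"
)

type indexRequest struct {
	URL string `json:"url"`
}

var qc *que.Client // initialized by the shared setup code

// indexURLJob handles "IndexRequests" jobs. Returning nil marks the job
// done; returning an error leaves it in the queue to be retried.
func indexURLJob(j *que.Job) error {
	var ir indexRequest
	if err := json.Unmarshal(j.Args, &ir); err != nil {
		return err
	}
	log.Printf("Processing IndexRequest! (not really) url=%s", ir.URL)
	return nil
}

func main() {
	// Map job types to handlers and run 2 worker goroutines.
	workers := que.NewWorkerPool(qc, que.WorkMap{"IndexRequests": indexURLJob}, 2)
	workers.Start()

	// Block until the dyno receives a shutdown signal, then stop cleanly.
	sig := make(chan os.Signal, 1)
	signal.Notify(sig, syscall.SIGINT, syscall.SIGTERM)
	<-sig
	workers.Shutdown()
}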
Shared logic
shared.go contains the initialization and setup logic used by both processes.
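The key steps are parsing DATABASE_URL (set by the Heroku Postgres add-on), opening a pgx connection pool that prepares que-go’s statements on each new connection, and creating the que client. A minimal sketch under those assumptions (que-go also needs the que_jobs table, which the sample’s database setup takes care of):

package main

import (
	"log"
	"os"

	"github.com/bgentry/que-go"
	"github.com/jackc/pgx"
)

var (
	qc      *que.Client
	pgxpool *pgx.ConnPool
)

func setupQue() {
	pgxcfg, err := pgx.ParseURI(os.Getenv("DATABASE_URL"))
	if err != nil {
		log.Fatal(err)
	}

	pgxpool, err = pgx.NewConnPool(pgx.ConnPoolConfig{
		ConnConfig:     pgxcfg,
		MaxConnections: 5,
		// que-go prepares its SQL statements on every new connection.
		AfterConnect: que.PrepareStatements,
	})
	if err != nil {
		log.Fatal(err)
	}

	qc = que.NewClient(pgxpool)
}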
Testing the application
You can watch the interaction between the two processes by observing the log stream:
$ heroku logs --tail -a <app name>
In a different terminal, we’ll use cURL to submit a URL for “indexing”:
$ curl -XPOST "https://<app name>-<random-identifier>.herokuapp.com/index" -d '{"url": "http://google.com"}'
Back in the first terminal you should see output like the following:
2015-06-23T18:29:35.663096+00:00 heroku[router]: at=info method=POST path="/index" host=<app name>-<random-identifier>.herokuapp.com request_id=84f9d369-7d6e-4313-8f16-9db9bb7ed251 fwd="76.115.27.201" dyno=web.1 connect=19ms service=31ms status=202 bytes=141
2015-06-23T18:29:35.623878+00:00 app[web.1]: [negroni] Started POST /index
2015-06-23T18:29:35.644483+00:00 app[web.1]: [negroni] Completed 202 Accepted in 20.586125ms
2015-06-23T18:29:37.750543+00:00 app[worker.1]: time="2015-06-23T18:29:37Z" level=info msg="Processing IndexRequest! (not really)" IndexRequest={http://google.com}
2015-06-23T18:29:37.753021+00:00 app[worker.1]: 2015/06/23 18:29:37 event=job_worked job_id=1 job_type=IndexRequests
Above, we can see the web process service the request and return a 202; about two seconds later, the worker picks up the queued job and “processes” it.
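Because the web and worker run as separate process types, you can scale them independently as load changes. For example, to run two workers:

$ heroku ps:scale worker=2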