Building Okayar (Part 1): How it Started + Back-end CRUD Serverless Lambda API in Go
Background
A couple of years ago, I made it my goal to just build something. I wanted to prove to myself that any time an idea or opportunity came, I’d be able to figure it out and make something happen. Building something entirely from scratch was uncharted waters for me; in my time as a software developer in the industry, I’d mostly worked on existing infrastructure and/or adding features to existing applications. It’s not a bad thing, just a natural consequence of being anything other than Engineer #1 at a company.
I’m writing this blog to both document and share what I learned, so that maybe someone in the future can use this as a framework for creating their own web application from scratch.
I decided to build Okayar, a personal OKR tracker. I’ve found that applying OKRs to my personal life has made me way, way better at setting and tracking attainable goals. For background on what an OKR is, why it’s useful, and how we can successfully apply it to our personal lives, please visit this page first!
I sat down for the first time to work on this in earnest near the beginning of COVID lockdown, in April 2020. Building something completely from scratch was daunting, so I started with what I felt I knew how to do: I sat down to build a CRUD API for my app.
Quick aside: What is a serverless application and what is AWS Lambda?
A “serverless application” is exactly what it sounds like: an architecture in which you build and run applications without managing any servers yourself. For anyone who’s ever had to deploy their own server, you know that it kinda just sucks. You have to pick an operating system (Windows vs Linux vs anything else) and an instance type it’ll run well on, come up with a Docker image that has exactly what your app needs, provision memory and CPU, and then monitor scaling and uptime for basically the rest of your life.
Serverless, which is basically synonymous with Function as a Service (FaaS), solves this problem. Cloud providers now have services like AWS Lambda, in which all you need to do to run application code is push the code itself! There’s no need to prepare a Dockerfile for the server to run on, or anything else complicated. You pick the runtime (say Python 3.8), push your code as a zip, tell Lambda which function to use as the entry point, and it simply runs that function on every invocation.
Once you’ve written a function, you then need a way to trigger it. AWS provides a service called API Gateway that integrates with Lambda. This API Gateway can be configured with a custom domain name (your API URL), and then will accept API calls with regular HTTP methods and forward them to the Lambda you specify.
Once you have that, you’ve basically recreated the functionality of a server. You have near-100% uptime with an API Gateway managed by AWS, you can accept HTTP requests to any path you specify, and you can process and return a response using whatever logic and integrations you need from the Lambda function. Oh, and your app can pretty much endlessly scale in the number of requests it can handle. That’s serverless in a nutshell.
New things I did: Golang & Serverless Framework
Since using AWS Lambda for the first time, I have never wanted to run a server again. Servers may have their place in the world, but in most cases, serverless applications hold advantages in maintainability, uptime, and speed of development. So, I decided to build a serverless application. For my learning, I decided to do 2 things that were new to me.
First, I used Golang. I’ve seen Golang increase in popularity over the past couple of years, and I wanted to be able to play ball if a situation arose where I needed it. So, mostly as a learning experience, I went with it. I’ll say right off the bat: I would not recommend this for serverless applications. While the language is great, you’ll run into tons of issues:
- You won’t be able to edit code through the AWS console because what you upload is the compiled binary, which slows down development.
- Developer tooling for Go Lambdas is sparser and less polished, since they see relatively little use in the industry.
- Some Serverless framework dependencies don’t play well with Golang.
- When you’re stuck, you’ll find less help on the internet.
So, while I’m glad I took some time to learn Golang, I would not do this again. Take it from me — just use Python.
Second, I used the Serverless Framework (the framework itself, not to be confused with the general concept of serverless) to build and deploy my Lambdas. Coming from an infrastructure-as-code background, it felt strange to have my application code repo also be in charge of defining and deploying the infrastructure around the Lambda. So I was hesitant, but Serverless turned out to be pleasantly easy to use. Its extensibility through community-published plugins also makes it easy to find tools for tasks adjacent to what Serverless itself is built to do. I’ll go into detail about my infrastructure setup in Part 2 of this series, but I should cover 2 plugins here that make back-end development much easier.
- serverless-offline. This plug-in is a lifesaver and is perfect for running a local dev environment you can make API calls against. It was also another reason to prefer Python in the future: at some point, first-class Go support was dropped from this plug-in, and I had to find a workaround to make it work for me. For anyone hunting online for how to make serverless-offline work with Golang, try using sls offline --useDocker --noPrependStageInUrl --printOutputs.
- serverless-dotenv-plugin. This one is self-explanatory; it lets you create files like .env.development and .env.production to separate your environment variables between your local env and your actual deployment. This is especially useful for connecting to local instances of your DB for testing.
How I built my back-end: Application Code
I didn’t use any kind of “industry standard” structure for my code, but I tried to create one that made sense to me. This meant that my code ended up in a few major sections, and therefore packages within the application:
- functions/
- controllers/
- db/
- models/
- helpers/
The flow (excluding auth), in summary, is like this: a request hits functions/ and, based on the HTTP method, is redirected to an appropriate function in controllers/. Then, controllers/ conducts quality checks on the data and makes calls to functions in db/ to store, extract, or delete the relevant data. models/ contains the structs that db/ uses to perform its actions. Finally, helpers/ contains one-off functions that don’t depend on the other packages.
Functions
A typical Golang lambda, as provided in examples by Serverless and AWS, looks something like this:
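A minimal sketch of that boilerplate, with a placeholder handler name and response body:

```go
package main

import (
	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
)

// Handler receives the API Gateway event and returns a response to send back.
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{
		StatusCode: 200,
		Body:       "Hello from Lambda!",
	}, nil
}

func main() {
	// lambda.Start registers Handler as the function's entry point.
	lambda.Start(Handler)
}
```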
The imported libraries from AWS are immensely useful here; events.APIGatewayProxyRequest and events.APIGatewayProxyResponse contain all the metadata you need to receive and send back data from the party calling your API. This will be useful for things like HTTP methods and status codes, as you’ll see below.
I built each function to serve as an entry point for any request to a specific endpoint. To make that happen, the serverless.yml file was configured like this:
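Roughly, the functions block looked like this (the exact paths and methods shown are illustrative; the important part is that every HTTP event points at the same handler):

```yaml
functions:
  objectives:
    handler: bin/objectives
    events:
      - http:
          path: objectives
          method: get
      - http:
          path: objectives
          method: post
      - http:
          path: objectives/{id}
          method: put
      - http:
          path: objectives/{id}
          method: delete
```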
As you can see, the handler bin/objectives actually handles every HTTP method that this endpoint accepts. Handling this in the function, then, is straightforward using a switch statement:
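A sketch of that dispatch logic (the controller function names and module path are my own placeholders):

```go
package main

import (
	"net/http"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"

	"okayar/controllers" // illustrative module path
)

// Handler routes each request to a controller based on its HTTP method.
func Handler(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	switch request.HTTPMethod {
	case http.MethodGet:
		return controllers.GetObjectives(request)
	case http.MethodPost:
		return controllers.CreateObjective(request)
	case http.MethodPut:
		return controllers.UpdateObjective(request)
	case http.MethodDelete:
		return controllers.DeleteObjective(request)
	default:
		// Anything else gets rejected outright.
		return events.APIGatewayProxyResponse{StatusCode: http.StatusMethodNotAllowed}, nil
	}
}

func main() {
	lambda.Start(Handler)
}
```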
Now, we’ve made our function handler dynamic. Based on the HTTP method, it will call a different method in controllers/.
Controllers
The controllers are fairly standard, so I’ll make this section quick. The most interesting thing I do in each controller function is:
dbConn, err := db.GetDBConnection()
defer dbConn.Close()
This gives me one dbConn to use for any and all database calls from within the controller. An example would be something like:
objective, err = dbConn.GetObjectiveByID(numericObjectiveID)
That’s pretty much it. In the db/ section below, I’ll go over how I set up the db.GetDBConnection method.
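Put together, a controller function in this layout ends up looking roughly like this (the JSON handling, the dbConn.GetObjectives call, and the error helpers are simplified placeholders; the helpers are covered further below):

```go
package controllers

import (
	"encoding/json"
	"net/http"

	"github.com/aws/aws-lambda-go/events"

	"okayar/db"      // illustrative module path
	"okayar/helpers" // illustrative module path
)

// GetObjectives handles GET requests: open a DB connection, fetch the rows,
// and serialize them back to the caller.
func GetObjectives(request events.APIGatewayProxyRequest) (events.APIGatewayProxyResponse, error) {
	dbConn, err := db.GetDBConnection()
	if err != nil {
		return helpers.ServerError(err)
	}
	defer dbConn.Close()

	objectives, err := dbConn.GetObjectives()
	if err != nil {
		return helpers.ServerError(err)
	}

	body, err := json.Marshal(objectives)
	if err != nil {
		return helpers.ServerError(err)
	}

	return events.APIGatewayProxyResponse{StatusCode: http.StatusOK, Body: string(body)}, nil
}
```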
DB
The most important thing in the db/ folder is this file:
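A minimal sketch of the idea, using the v1 gorm API (the wrapper type and environment variable names are placeholders):

```go
package db

import (
	"fmt"
	"os"

	"github.com/jinzhu/gorm"
	_ "github.com/jinzhu/gorm/dialects/postgres" // registers the Postgres dialect
)

// DB wraps the gorm connection so the rest of this package can hang
// query methods (GetObjectiveByID, CreateObjective, etc.) off of it.
type DB struct {
	*gorm.DB
}

// GetDBConnection opens a connection to the Postgres DB using credentials
// pulled from environment variables (which the dotenv plugin sets locally).
func GetDBConnection() (*DB, error) {
	connStr := fmt.Sprintf(
		"host=%s port=%s user=%s dbname=%s password=%s",
		os.Getenv("DB_HOST"),
		os.Getenv("DB_PORT"),
		os.Getenv("DB_USER"),
		os.Getenv("DB_NAME"),
		os.Getenv("DB_PASSWORD"),
	)

	conn, err := gorm.Open("postgres", connStr)
	if err != nil {
		return nil, err
	}
	return &DB{conn}, nil
}
```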
Now you can see the contents of GetDBConnection that I referenced in the Controllers section above. It creates a gorm connection to the Postgres DB, which the controller method can then use for any calls to the DB. All the methods in db/, then, use this connection. Here’s an example:
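A sketch of one such method (the receiver is the DB wrapper returned by GetDBConnection, and models is covered in the next section):

```go
// CreateObjective writes a new objective row; gorm fills in the generated
// ID and timestamps on the struct it is passed.
func (db *DB) CreateObjective(objective *models.Objective) error {
	return db.Create(objective).Error
}
```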
In this example, the db that the function is called on is the output of GetDBConnection. So, if you can imagine, a controller that needs to execute multiple database calls will call these functions using the same DB connection, ensuring that every API call made to my back-end only creates one DB connection.
Models
You may have noticed in the example above that the type being passed into CreateObjective is *models.Objective. The models/ folder contains the struct definitions that gorm uses to interact with the DB. The documentation for gorm is pretty helpful, and helps you put something together like this:
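A sketch of the Objective model (the field names are illustrative; gorm.Model supplies the ID and timestamp columns):

```go
package models

import "github.com/jinzhu/gorm"

// Objective maps to the objectives table.
type Objective struct {
	gorm.Model
	UserID      string `gorm:"not null" json:"user_id"`
	Description string `gorm:"not null" json:"description"`
	Quarter     string `gorm:"not null" json:"quarter"`
}
```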
This is how gorm knows what to do when you call db.Create. It knows that 3 of these fields can’t be null, and it knows that these are the 3 fields we are trying to write to the DB. From there, gorm handles the rest of the magic.
Helpers
helpers/ contains methods that don’t rely on other packages in my repo. An example of this is error handlers, which I use as wrappers throughout my application code.
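A sketch of what those wrappers can look like (the exact signatures are placeholders):

```go
package helpers

import (
	"log"

	"github.com/aws/aws-lambda-go/events"
)

// ClientError returns a 4xx response with a short message for the caller.
func ClientError(status int, message string) (events.APIGatewayProxyResponse, error) {
	return events.APIGatewayProxyResponse{
		StatusCode: status,
		Body:       message,
	}, nil
}

// ServerError logs the underlying error and returns a generic 500 so that
// internal details never leak back to the client.
func ServerError(err error) (events.APIGatewayProxyResponse, error) {
	log.Println(err.Error())
	return events.APIGatewayProxyResponse{
		StatusCode: 500,
		Body:       "Internal Server Error",
	}, nil
}
```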
Using this, pretty much anywhere in my code I can call helpers.ClientError or helpers.ServerError to return appropriate responses to the user.
How I built my back-end: Builds & Deployment
As I’ve mentioned plenty in this article, deploying a Lambda in Go isn’t easy. Like really, please just use Python. If you do go down the Golang path, though, I’ll leave some tips here on how to build and deploy your code.
First, it took me forever to figure out how to build a Go executable that AWS Lambda was willing to run. Some build types just don’t work with AWS, and the Lambda console will throw an error. For anyone out there looking for how to do this, here’s a build command that works:
env GOOS=linux GOARCH=amd64 go build -v -ldflags '-s -w' -o bin/objectives functions/objectives/main.go
Second, here’s how you spin up a dev environment using serverless-offline so you can test your API locally:
sls offline --useDocker --noPrependStageInUrl --printOutputs
Now, I did something a little weird: I used both a Makefile and a docker-compose file. I’m not sure if this is really standard, but I kind of just wanted to learn how to use a Makefile for the first time. I’ll leave both below, and explain how I use each:
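Trimmed-down sketches of the shape of each (the image names, the migration tooling, and the exact targets are placeholders):

```makefile
# Makefile: the individual build/deploy steps
clean:
	rm -rf ./bin

build:
	env GOOS=linux GOARCH=amd64 go build -v -ldflags '-s -w' -o bin/objectives functions/objectives/main.go

deploy: clean build
	sls deploy --verbose
```

```yaml
# docker-compose.yml: wraps those steps into runnable services
version: "3"
services:
  migrate:
    image: migrate/migrate            # placeholder migration tool
    volumes: [".:/app"]
    command: ["-path", "/app/migrations", "-database", "${DATABASE_URL}", "up"]
  quick_build:
    image: golang:1.15                # placeholder Go image
    volumes: [".:/app"]
    working_dir: /app
    command: make build               # build without cleaning first
  full_build:
    image: golang:1.15
    volumes: [".:/app"]
    working_dir: /app
    command: make clean build         # clean, then build
  deploy:
    image: golang:1.15                # the real service also needs the Serverless CLI
    volumes: [".:/app"]
    working_dir: /app
    command: make deploy              # clean, build, then sls deploy
```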
There are only 4 commands I ever need to use:
- docker-compose up migrate: to run migrations.
- docker-compose run quick_build: to quickly build new code without cleaning first. This sometimes worked while serverless-offline was running and sometimes didn’t; I didn’t read too much into it.
- docker-compose run full_build: to clean and then build new code. This would make sure that while serverless-offline was running, I could run my new code.
- docker-compose run deploy: to clean, then build, then push the code to AWS and “go live”.
Since it was just me working on this project, I didn’t build any CI/CD. If I were to do so, I’d probably use a version of what’s already in these two files above.
Conclusion
I built my back-end this way because I wanted to learn and use Go. I was successful in doing that, and this back-end is what’s powering Okayar today! In future posts in this series, I’ll be covering:
- Infrastructure, built using Terraform and Serverless
- UI, built using React
- Auth, handled via Firebase Auth, and integrated with both my back-end and front-end
Stay tuned and I’ll see you soon!