You got your htmx in my Django!

On Building a Side Project (Or: Why I Chose Django and htmx for a New Project in 2020)

James Pulec
9 min read · Feb 20, 2021

This is the second post about my journey launching a side project in 2020. The first post can be found here and the last post here.

Last year, I set a goal for myself: build, launch, and close a paying customer for a product that I built entirely on my own. I was working at a startup called Resource, and a daily problem I encountered was driving me crazy. We had all these PRs on GitHub that had been reviewed and were ready to merge, but we were left waiting for checks to pass and constantly hammering GitHub's 'Update Branch' button.

Thus MergeCaravan was born.

But before I could get started building, I had to figure out what tools and technologies I’d be using.

I'd been working at Resource for around 2 years, primarily doing development with JavaScript, both on the frontend and the backend. We used GraphQL with React and Apollo Client pretty heavily, and I'd become quite familiar with both. We'd also started playing around with Hasura. Before Resource, I worked with Django, DRF, and VueJS for 4 years. Knowing that I'd be working alone on MergeCaravan, I decided to optimize my technical choices for simplicity. Which led me to confront a gut feeling I've had for many years…

There’s something terribly wrong with this industry, isn’t there?

On numerous occasions during my time at Chewse, I felt like there was something terribly wrong with the direction of modern web development. I couldn't quite put my finger on it, but so often it felt like I was fighting the tools and technologies that we were using. Maybe it was because we used Django on our backend, which was initially designed for multi-page server rendered apps, but we were using it with Django Rest Framework for an Angular and Vue SPA. Maybe it was the difficulty we had in designing REST resources that made sense and didn't require us to do a lot of roundtrips to get all the data for a page. Maybe it was the level of complexity that went into properly configuring Webpack. (Configuring Webpack v1 was rough.)

Whatever it was, I always had this nagging feeling in the pit of my stomach that there had to be a simpler way.

One day, I stumbled upon Intercooler.js. Where other JS frameworks and libraries pitched owning the frontend completely, Intercooler described itself as enhancements to HTML. It was supposed to supplement your existing web application, rather than try to reinvent a lot of its functionality. I became very interested in the ideas it described, which sent me down the rabbit hole of reading a lot about REST and HATEOAS.

The Road to HATEOAS

Ask the average developer about REST, and you'll likely get some response about resources and using different HTTP verbs for different actions. It's not an incorrect answer, but it may be a bit of an incomplete one. There's been a lot written about what Roy Fielding outlined in the dissertation that introduced REST and what "RESTful" really means. What often isn't discussed is HATEOAS, or Hypermedia as the Engine of Application State. I won't try to explain too much, so go check out the Wikipedia article if you want a deeper dive. But at a high level, it suggests that when an application sends data over the network, it should also send the operations that can be performed on that data. Occasionally you find this with JSON REST APIs, often encoded as an object in a _links field.

However, HATEOAS doesn't have to be implemented with JSON APIs. It can also be implemented using HTML, and that's the approach IntercoolerJS took. The server returns not just the content to show the user, but also the actions that can be taken, encoded in the HTML as form and link tags. Intercooler embraces using the returned documents to give the user the application state along with the ways they can interact with it.
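To make that concrete, here's a rough sketch of what a hypermedia response could look like (the endpoint and markup are hypothetical, not taken from any real app): the data and the available action travel together in a single fragment.

```html
<!-- Hypothetical response to GET /orders/12345: the order's current state
     plus the one action this user is allowed to take right now. -->
<div id="order-12345">
  <h2>Order #12345</h2>
  <p>Status: awaiting shipment</p>

  <!-- The action ships with the data. If the user isn't allowed to cancel,
       the server simply leaves this form out. -->
  <form method="post" action="/orders/12345/cancel">
    <button type="submit">Cancel order</button>
  </form>
</div>
```

The client, whether that's a browser rendering a full page or a library like Intercooler swapping the fragment in, doesn't need to know the rules; it just renders what it's given.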

For the first time, that gut feeling I had about the pains of modern web development seemed to ease a bit. It all seemed to click. We could still use the same technologies and tools, and take advantage of native browser features with our apps. We didn’t have to throw away all the advancements we’d made in server side templating.

Focusing on simplicity

I thought back to my experience with Intercooler when I decided I was going to build MergeCaravan. Knowing that I would be building this project alone, I wanted to address a few concerns that I knew would come up right away.

Managing complexity is of paramount importance when you're a lone developer

Knowing that I didn't need to onboard other developers, I decided to work with tools that allowed me to execute as fast as possible. I have a high level of familiarity with Django, so it was an easy call for me to make. However, I was still left asking myself whether I should build a JSON REST API or just stick with Django's standard server rendered pages.

I was reminded of a conversation I had several years ago with a co-worker about the shift of application logic from the backend to the frontend. Gradually, as applications went from being server rendered to being client rendered, more logic found its way to the client, including things like authorization and validation rules. During a conference talk he gave, he presented the following slides highlighting this idea.

(Slides from the talk: "The Old Way" and "The New Way")

Intercooler promised a return to the old model, where application logic is handled wholly on the backend rather than split with the client. I'd no longer find myself duplicating validation logic in JavaScript and on a REST endpoint, or duplicating permission handling client-side to determine which UI elements should be visible to a given user.
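As a rough sketch of what that looks like on the Django side (the form and template names here are hypothetical, not from MergeCaravan), validation can live entirely in a form class, and the response is just the re-rendered HTML fragment, so there's nothing to mirror in JavaScript:

```python
# views.py — a minimal sketch; RepositorySettingsForm is a hypothetical ModelForm.
from django.shortcuts import render

from .forms import RepositorySettingsForm


def repository_settings(request):
    if request.method == "POST":
        form = RepositorySettingsForm(request.POST)
        if form.is_valid():
            form.save()
    else:
        form = RepositorySettingsForm()

    # The same template renders the initial form and any validation errors,
    # whether it's a full page load or a fragment swapped in by Intercooler/htmx.
    return render(request, "repositories/settings_form.html", {"form": form})
```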

Existing tools just work and provide lots of bang for your buck

It's not just my own proficiency that made me opt for a traditional server rendered app. Many existing tools and technologies are still easiest to get started with when you're building server rendered apps. Spinning up a traditional Django app on Heroku is a breeze. Everything just works, and HTTP caching is as simple as setting the right headers. Tools like Sentry require one line to get started, and then you have observability in your app. I appreciate how 'boring' and well understood these technologies are.
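To give a sense of how little wiring that takes, here's roughly what those two pieces look like in a Django project (the DSN, cache lifetime, and view are placeholders):

```python
# settings.py — error reporting with Sentry is essentially one call.
import sentry_sdk
from sentry_sdk.integrations.django import DjangoIntegration

sentry_sdk.init(
    dsn="https://examplePublicKey@o0.ingest.sentry.io/0",  # placeholder DSN
    integrations=[DjangoIntegration()],
)

# views.py — HTTP caching is just a response header, exposed as a decorator.
from django.http import HttpResponse
from django.views.decorators.cache import cache_control


@cache_control(public=True, max_age=300)
def landing_page(request):
    return HttpResponse("<h1>Hello</h1>")
```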

I knew up front that I didn’t need high interactivity.

I'm generally a firm believer in the distinction between web 'pages' and web 'apps'. In this case, I felt that MergeCaravan fell pretty squarely in the 'page' category. It doesn't really make sense to use it offline, since it relies so heavily on GitHub. It also just doesn't require a whole lot of user interactivity. There aren't many places where I need realtime data, or where autosaving form data or drag-and-drop would make sense. This made it easy to avoid large JS libraries and just focus on building my app.

But what is htmx?

When I started building MergeCaravan, I decided to use Intercooler for my frontend needs. Shortly after I began, htmx launched as the successor to Intercooler. Since I had written little code and was willing to build on a fairly new library, I ported my code over.

htmx is an absolute breath of fresh air. Getting started is as simple as adding a script tag. No Webpack builds or fiddling around with npm. It’s easy to add to an existing project, and progressively add interactivity as your application needs it.
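Here's roughly what that looks like (the endpoint and element IDs are made up; in practice you'd pin a specific htmx version in the script URL):

```html
<!-- One script tag, no build step. -->
<script src="https://unpkg.com/htmx.org"></script>

<!-- Clicking the button issues a GET and swaps the returned HTML fragment
     into the target element. -->
<button hx-get="/pull-requests/refresh" hx-target="#pull-request-list">
  Refresh
</button>

<div id="pull-request-list">
  <!-- Server-rendered list goes here. -->
</div>
```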

Okay. But aren't multi-page apps slower?

Aren't single page apps slow? I've read endless documentation about improving the performance of single page apps. There are a lot of different strategies and cool things you can do to get better performance out of an SPA. And yet the vast majority of SPAs I've used don't do these things. It takes a LOT of work to build performant SPAs, and unless SPA performance directly correlates with a company's bottom line (i.e. ads or e-commerce), improving it often isn't a high priority. It's very possible you'd get better performance by just sending HTML partials over the wire instead of JSON. HTML compresses quite well, and browsers have been optimized to render it very quickly.

Embrace HTTP

HTTP has a pretty simple model that’s well supported and well understood. GETs for fetching data. POST, PATCH, DELETE for modifying data. Browsers and caching infrastructure all understand these ideas. Browser security is built on top of this paradigm. You get so much for free by just embracing HTTP methods as they were intended.
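htmx keeps that mapping visible right in the markup. A small, hypothetical sketch (the endpoints are made up):

```html
<!-- GET to fetch a fragment. -->
<a hx-get="/queue" hx-target="#queue">Show my queue</a>

<!-- POST to create, DELETE to remove; the server replies with the updated fragment. -->
<form hx-post="/queue" hx-target="#queue">
  <input type="text" name="pr_url">
  <button type="submit">Add to queue</button>
</form>

<button hx-delete="/queue/42" hx-target="#queue">Remove #42</button>

<div id="queue"></div>
```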

Embrace HATEOAS

Coupling your UI with your data is a good thing. It simplifies authorization and keeps both concerns in the environment you control: the server. In the same way that separating concerns by technology (like splitting CSS from HTML) is a bad move, separating your data from your UI cuts across your concerns the wrong way.

Acknowledge that Security is Hard

Security for web apps is hard. So many inherent properties of the web as a platform make it incredibly challenging to get everything right. Whether it's SQL injection or XSRF exploits, there are a large number of ways to get things wrong. You definitely still need to deal with these issues in a server rendered app, but when you choose to go the JSON API or GraphQL route, you end up handling some of these concerns in multiple places. The more business logic you put in JS client code, the more likely it is that some constraint is encoded there but not in the relevant API code.

I’m sold. htmx is perfect, right?

Well, not quite. I’m quite happy with htmx, but there are certainly some shortcomings worth mentioning.

Best practices

This is probably my biggest complaint. As one might expect with a younger ecosystem, a standard set of best practices has yet to emerge. The htmx website provides a nice set of examples to help you implement some typical web patterns, but it can still be a bit difficult to figure out how to split up your application. You might start adding interactivity to an existing page and have a hard time deciding which parts of the DOM to swap. Should I swap large parts of the tree, or just the smallest pieces? These questions can be hard to answer at first, and it takes some trial and error to figure out what works best. Something like a full-fledged tutorial might help new users recognize these patterns sooner.

URL Organization

This isn't exactly a fault of htmx, since it's the responsibility of your server, but I still don't think I've figured out the best way to organize my URLs for an app with partials. Using a typical JSON REST API resource URL scheme doesn't quite feel right, since those routes are often best reserved as user-facing URLs for 'detail' pages for a given resource. I've also tried appending a string describing the partial's use case to the resource path. For example, a partial related to the status of an order would have the path /orders/12345/status. This is the best I've come up with, but I still suspect there's a better way. Similar to how there's a pretty well understood set of conventions for JSON REST APIs, I'd love to see some standardization around how URLs get organized for partials.
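For what it's worth, that convention maps to a Django urls.py something like this (the view names are illustrative):

```python
# urls.py — resource detail pages keep the clean user-facing path,
# and partials hang off the resource with a use-case suffix.
from django.urls import path

from . import views

urlpatterns = [
    path("orders/<int:order_id>/", views.order_detail, name="order-detail"),
    path("orders/<int:order_id>/status/", views.order_status_partial, name="order-status"),
]
```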

With htmx, the past is the future

htmx released version 1.0 not that long ago. In many ways, it takes a lot of the best parts of the web from the past and updates them for the needs of modern web applications. If you're also looking to alleviate that weird gut feeling you've had about modern web development, or if you're just looking for a simple way to build web apps, I'd recommend giving it a try, paired with your favorite server-side web framework (e.g. Django, Rails, or Laravel).

In the next and final post, I’ll discuss launching MergeCaravan, the aftermath, and what I learned during my journey of launching a side project. Check it out here.
