Earlier this year I deployed a new website that pulled content from Roam. I was on a pretty big automation kick, and per usual I didn’t end up using most of it.
One thing that has stuck is writing things down. I’m continually surprised how writing makes me more effective in all areas of my life.
My previous site attempted to solve the other half of the puzzle: sharing what I write. It failed. I experimented with deep integration with my personal graph, implementing everything from Roam block refs to focusing blocks and more. While I want to bring some of it back, this new site is much simpler.
I loved being able to write posts in the same place I write everything else, though. It felt frictionless, and I've found that to be key to continuing to write. This site is still written in my knowledge graph. The difference is I'm treating posts as simply separate pages that don't have any special features.
I’ve also switched to logseq. It’s far from perfect, but it’s more polished, fully local, and doesn’t suffer from eccentric founders.
I write content here and press Cmd+Shift+e to publish it (or click the airplane icon in the top right). My plugin then publishes it to my website.
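A rough sketch of what such a plugin could do under the hood, assuming pages are flagged public in their properties. Everything here is hypothetical (`Page`, `collectPublicPages`, `publish`, and the endpoint parameter are illustrative names, not the plugin's real internals):

```typescript
// Hypothetical shape of an exported page; real logseq page objects
// carry more fields than this.
interface Page {
  name: string;
  content: string;
  public: boolean;
}

// Keep only pages explicitly marked public, so private notes
// never leave the local graph.
function collectPublicPages(pages: Page[]): Page[] {
  return pages.filter((p) => p.public);
}

// Sketch of the publish step: POST the public subset to the site's
// endpoint. The request would also be signed (discussed below).
async function publish(pages: Page[], endpoint: string): Promise<void> {
  const body = JSON.stringify(collectPublicPages(pages));
  await fetch(endpoint, { method: "POST", body });
}
```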
One big difference between logseq and roam is logseq is fully local. Roam is hosted, and you'd think that would mean you could remotely query your graph. You can't. To get my graph, I had to set up a server with a headless instance of Chrome that logged in to Roam. It was terrible in every way.
My new setup doesn’t magically solve it: everything is local, but how does my server query the graph? Here’s how it works:
- It’s the inverse of before. I wrote a logseq plugin that locally extracts the public parts of the graph and then sends them to my server, which saves the new data. Previously, the server was asking the Roam instance for data.
- This is far simpler and removes the need for an intermediate server to make my data available.
- What about security? If I can post to an endpoint to set all the data on my website, can’t anybody?
- The endpoint is protected with a public/private key pair. Locally, I give logseq my private key, which it uses to sign the request; my server holds the public key and verifies the signature before saving anything. You need my private key to publish content.
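The signing scheme can be sketched with Node's built-in crypto. This is a minimal illustration, not the site's actual code: the key type (Ed25519 here) and wire format are my assumptions, and the keys are generated inline purely for the demo.

```typescript
import { generateKeyPairSync, sign, verify } from "node:crypto";

// In reality the private key lives with the local plugin and the
// public key on the server; generated together here for illustration.
const { publicKey, privateKey } = generateKeyPairSync("ed25519");

// Client side: sign the exact bytes being uploaded.
const payload = Buffer.from(JSON.stringify({ pages: ["Hello world"] }));
const signature = sign(null, payload, privateKey);

// Server side: verify the signature against the public key before
// writing anything. A mismatch means the request is rejected.
const ok = verify(null, payload, publicKey, signature);

// Any tampering with the payload invalidates the signature.
const tampered = Buffer.from(JSON.stringify({ pages: ["evil"] }));
const okTampered = verify(null, tampered, publicKey, signature);
```

Without the private key, no one else can produce a signature the server will accept, so the endpoint can stay publicly reachable.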
My goal with this website is simple: publish small pieces of content frequently. It’ll be unpolished and raw. As I learn things I’ll write them down and lightly edit them into posts.
I’m starting barebones just to get something out there, and will iterate in public.
I’ll also be writing more about what’s going on with Actual and other projects I’m managing. Not having a place to do this has been a big hit to my feeling of clarity.
Static vs dynamic sites
Prompted by a recent discussion with coworkers, I wanted to write about this a bit.
This site is dynamic; it’s hosted on fly.io, which makes it super easy to run a server. Requests are handled by a server that queries the graph and dynamically renders the content. Cloudflare caches the requests.
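The request flow can be sketched as a plain function: look the page up in the graph and render it on the fly, with a `Cache-Control` header so a CDN like Cloudflare can serve repeat hits. The in-memory `graph` map, the `handle` function, and the `max-age` value are all illustrative, not the site's real implementation:

```typescript
// Hypothetical in-memory graph: page name -> rendered HTML.
const graph = new Map<string, string>([
  ["hello", "<h1>Hello</h1>"],
]);

interface SiteResponse {
  status: number;
  headers: Record<string, string>;
  body: string;
}

// Handle a request by querying the graph and rendering dynamically.
// The Cache-Control header lets the CDN cache the response, so most
// requests never reach the origin server.
function handle(path: string): SiteResponse {
  const page = graph.get(path.replace(/^\//, ""));
  if (!page) {
    return { status: 404, headers: {}, body: "Not found" };
  }
  return {
    status: 200,
    headers: { "Cache-Control": "public, max-age=300" },
    body: page,
  };
}
```

Because content lives in the graph at request time, newly published pages show up immediately; the cache just needs a short TTL or a purge.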
Why not use a static site generator? The mental model of a static site generator has always felt more complex to me. I just want to handle a request and do stuff. The nice thing is the complexity can grow with my needs; when I’ve used static site generators before, they start simple but always turn into a headache for anything beyond simple markdown routes.
SSG also impedes the frictionless publishing workflow. I can write this content and press Cmd+Shift+e and see it immediately on my site. That’s amazing.
There’s no compile step to wait for. I just upload some new content and it’s immediately available.
SSG favors run-time simplicity over build-time complexity. That’s a tradeoff many are willing to make, but I find having to commit my content and push to GitHub just to publish way too much friction.
- Maybe you introduce an intermediate server where you can push markdown files and it automatically compiles and publishes them, but then you already have a server…