Summer time is a time for experimental projects. I’m bad at taking vacations, so I always end up building something. This time it was a minor project that accepts incoming webhooks, then stores and displays them. Fully anonymous and ephemeral.
Since I use it for work nowadays, I wanted to try something with the newest and greatest ASP.NET Core 2.1.
Data storage is handled by a Redis instance that’s set to memory only, meaning that whenever I deploy a new version, every received webhook gets wiped away, thanks to Docker. More on that below. I decided early on that I didn’t want to deal with storing anything for any length of time.
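For the curious: running Redis memory-only mostly means turning off its persistence features. Roughly this in redis.conf (a sketch, not my actual config):

```conf
# Run Redis purely in memory:
# disable RDB snapshots and the append-only file,
# so nothing is ever written to disk.
save ""
appendonly no
```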
In the interest of gaining a ton of new experience, I decided this should be deployed using Docker. Docker seems to be a great way to make a server run several small isolated messes, instead of turning into one giant mess.
Since the Docker ecosystem seems to change everything every other month, I probably used the wrong method by using Docker Compose to wire up three separate containers for this tiny app.
I have one container that runs

- nginx, serving static requests and forwarding other requests to the backend, which runs
- ASP.NET Core. The backend talks to the third container, running
- Redis.

All theoretically insulated from the rest of the machine, so any poor code can’t escape out and wreak havoc on the rest of my poor server. In theory. If I understood everything.
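The wiring looks roughly like this. This is a sketch from memory, not the real file; the service names, port, and image tags are made up for illustration:

```yaml
# docker-compose.yml (sketch)
version: "3"
services:
  nginx:
    image: nginx:alpine
    ports:
      - "80:80"          # only nginx is exposed to the outside world
    volumes:
      - ./nginx.conf:/etc/nginx/conf.d/default.conf:ro
    depends_on:
      - backend
  backend:
    build: .             # the ASP.NET Core app
    depends_on:
      - redis
  redis:
    image: redis:alpine
    # memory only: no RDB snapshots, no append-only file
    command: ["redis-server", "--save", "", "--appendonly", "no"]
```

Only nginx publishes a port; the backend and Redis are reachable solely on Compose’s internal network, which is where the “insulated” part comes from.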
So whenever I publish a new version, every container gets rebuilt from scratch, wiping out all the old data in the memory-only Redis instance. Anybody posted something sensitive? Just wipe it all! Great!
One interesting feature is that every IP address listed when viewing a hook is wrong, since the request passes through other containers on the way. Haven’t really bothered looking into that yet…
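If I ever do look into it, the usual fix is to have nginx pass the original client address along in headers when proxying. Something like this in the nginx config (untested sketch; `backend:5000` is a placeholder upstream):

```nginx
location / {
    proxy_pass         http://backend:5000;
    proxy_set_header   Host              $host;
    proxy_set_header   X-Real-IP         $remote_addr;
    proxy_set_header   X-Forwarded-For   $proxy_add_x_forwarded_for;
    proxy_set_header   X-Forwarded-Proto $scheme;
}
```

On the ASP.NET Core side, the forwarded headers middleware (`app.UseForwardedHeaders`) can then be told to trust those headers so the app sees the real client IP.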
Now go try it out!