
Fun (actually not fun at all) fact about Caddy:

This matcher, written with directive syntax, will be merged with AND:

@matcher {
    path /foo
    header Header-Name value
}

But this one will be merged with OR, despite being functionally identical:
@matcher {
    expression `path('/foo')`
    expression `header({'Header-Name': 'value'})`
}

Caddy has some cursed, barely-documented logic where matcher blocks always merge with AND unless two directives of the same type are adjacent. In that case, they may be merged with AND or OR depending on directive-specific logic, which is not publicly documented.


This results in completely different behavior depending on whether a matcher is defined using expression or directive syntax. Despite the docs implying that the two options are identical, they are not! You can have an existing, functional matcher with a mix of directives and expressions that suddenly breaks because one of the directives was replaced with an identical expression. It's extremely counter-intuitive.
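To make the failure mode concrete, here is a sketch of such a mixed matcher (the path and header name are placeholders). Replacing the path directive with its expression form puts two expression directives next to each other, flipping the merge from AND to OR:

```caddyfile
# Mixed form: directive + expression, merged with AND
@before {
    path /foo
    expression `header({'Header-Name': 'value'})`
}

# "Identical" rewrite: two adjacent expressions, now merged with OR
@after {
    expression `path('/foo')`
    expression `header({'Header-Name': 'value'})`
}
```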

#Caddy #PSA #ServerAdmin #SelfHost

New blog post: how to pull web logs from #Caddy into #Clickhouse using #Vector.

scottstuff.net/posts/2025/02/2

Clickhouse is an open-source (plus paid, as usual) columnar DB. It lets you run ad hoc SQL queries to answer questions, as well as build Grafana dashboards to show trends, etc.
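As a taste of the kind of ad hoc question this makes easy, here's a sketch of a query (the table and column names are hypothetical, not from the post):

```sql
-- Top referrers for one URL path over the last 7 days,
-- assuming access logs landed in a table called caddy_logs
SELECT referer, count() AS hits
FROM caddy_logs
WHERE path = '/posts/2025/02/some-post/'
  AND timestamp > now() - INTERVAL 7 DAY
GROUP BY referer
ORDER BY hits DESC
LIMIT 10;
```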

scottstuff.net · Getting Caddy logs into Clickhouse via Vector

As mentioned before, I've been using the Caddy web server running on a couple of machines to serve this site. I've been dumping Caddy's access logs into Grafana's Loki log system, but I haven't been very happy with it for web logs. It's kind of a pain to configure for small uses (a few GB of data on one server), and it's slow for my use case. I'm sure I could optimize it one way or another, but even without the performance issues I'm still not very happy with it for log analysis.

I've had a number of relatively simple queries that I've had to fight with both Loki and Grafana to get answers for. In this specific case, I was trying to understand how much traffic my post on the Minisforum MS-A2 was getting and where it was coming from, and it was easier for me to grep through a few GB of gzipped JSON log files than to get Loki to answer my questions. So maybe it's not the right tool for the job and I should look at other options.

I'd been meaning to look at Clickhouse for a while; it's an open-source (plus paid cloud offering) column-store analytical DB. You feed it data and then use SQL to query it. It's similar to Google BigQuery, Dremel, and dozens of other such systems. The big advantage of column-oriented databases is that queries that only touch a few fields can be really fast, because they can ignore all of the other columns completely. A typical analytic query can just do giant streaming reads from a couple of columns without any disk seeks, which means your performance mostly ends up being limited by your disks' streaming throughput. Not so hot when you want to fetch all of the data from a single record, but great when you want to read millions of rows and calculate aggregate statistics.

I managed to get Clickhouse reading Caddy's logs, but it wasn't quite as trivial as I'd hoped, and none of the assorted "how to do things like this" docs I found online really covered this case well, so I figured I'd write up the process I used.
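The broad shape of such a Vector pipeline, as a hedged sketch (the file path, endpoint, and table name are assumptions; the post covers the real details):

```toml
# Tail Caddy's JSON access log
[sources.caddy]
type = "file"
include = ["/var/log/caddy/access.log"]

# Each line is a JSON object; parse it into structured fields
[transforms.parse]
type = "remap"
inputs = ["caddy"]
source = '''
. = parse_json!(.message)
'''

# Ship the parsed events into a ClickHouse table
[sinks.ch]
type = "clickhouse"
inputs = ["parse"]
endpoint = "http://127.0.0.1:8123"
database = "default"
table = "caddy_logs"
skip_unknown_fields = true
```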

Oh, rofl. I just locked myself out of my own forge's web UI for an entire hour.

How? I was curious whether my HackerNews griefing snippet works, so I searched for git.madhouse-project.org on HN, followed a link, got a nice HTTP 418 Teapot, and all was fine.

But then I wanted to toot about this, and mention caddy-matcher-persistent-referrer, a small module that remembers the IP of visitors from a particular referrer, and continues to match them for some time.

I made this #Caddy module to stop HNers from simply copy-pasting links after seeing the initial 418, or just hitting Enter in the address bar. With this module, they're locked out for an hour.
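The plain, non-persistent version of that check is just a stock Caddy header matcher; a minimal sketch (the persistent variant needs the module above, whose config I'm not reproducing here):

```caddyfile
# Greet visitors arriving from Hacker News with a teapot
@hn header Referer https://news.ycombinator.com*
handle @hn {
    respond "I'm a teapot" 418
}
```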

...and so am I, because I tested it, with a visit referred from HN.

(Of course, I can ssh into my VPS, reload Caddy, and clear its in-memory cache, which I did. But nevertheless, it's funny!)

MadHouse Git Repositories · caddy-matcher-persistent-referrer: Caddy module to aggressively match referrers

As the next step in my quest to make it easier to poison AI crawlers, I present to you OCIocaine: a project where #DockerCompose meets #Caddy and #Iocaine to poison AI crawlers for all your sites, automatically.

The idea here is to provide a docker compose file that starts up Caddy and Iocaine, configured so that Caddy will reverse proxy for any and all services on the same docker network, as long as they have a few labels that tell it to do so. In addition, a Caddyfile snippet will be available for all of these, which takes care of routing bad visitors to Iocaine.

And if that's not enough, the whole thing comes preconfigured with a wordlist (a list of English words), training data (the complete works of Shakespeare), and a list of known AI crawlers (courtesy of ai.robots.txt).

All you have to do is copy the sample configuration, create a network, start it up, and deploy labeled containers into the same network, and OCIocaine takes care of the rest.

MadHouse Git Repositories · ociocaine: Docker Compose meets Caddy and Iocaine to poison AI for all your sites, automatically.

Tehehehehe.

  test:
    image: traefik/whoami
    networks:
      - iocaine
    labels:
      caddy: http://127.0.0.1:21080
      caddy.import: iocaine
      caddy.reverse_proxy: "{{upstreams 80}}"

The goal: create a docker network called iocaine, deploy containers within the network, and with just a few labels, have them wrapped, so they're shadowed by iocaine. Just one compose.yml for #caddy + #iocaine to make it all work.

Probably sounds less exciting than it really is. I'll explain more once it's ready.

#MiniFlux users, can anyone help?

Hi all. I'm having some issues with MiniFlux, a #SelfHosted #RSSReader, and hoping someone can help. MiniFlux was working fine until I tried to deploy ReactFlux on the same domain, rss.laniecarmelo.tech, on a subpath, /reactflux. This didn't work, so I removed ReactFlux. I also migrated MiniFlux from #Docker to the #Pacman package, thinking it would be easier on my system. This problem, or a similar one, was occurring before I did that, though.

Now, rss.laniecarmelo.tech loads the MiniFlux login page, but when I log in, it redirects to a blank page at rss.laniecarmelo.tech/login. I've added trusted proxies and cookie configuration to my miniflux.conf and headers to my Caddyfile, but I still have the issue.
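For reference, the Caddy side of a setup like this is usually just a bare reverse proxy; a minimal sketch (the upstream port here is a placeholder):

```caddyfile
rss.laniecarmelo.tech {
    reverse_proxy 127.0.0.1:8080
}
```

One thing worth double-checking with blank post-login redirects is that Miniflux's BASE_URL matches the public URL exactly; if they disagree, the redirect after login can land somewhere unexpected.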

I'm using #Caddy for #ReverseProxy and #Cloudflare for #SSO. Has anyone seen anything like this before? This is on a #RaspberryPi500 running #ArchLinuxARM.

I've checked MiniFlux logs, and it's getting the login requests and creating sessions. I'm not sure what's happening after that. Cloudflared and Caddy seem to be working normally.

#SelfHosting #Linux #RSS #RaspberryPi #RPi #tech #technology
@selfhost @selfhosted @selfhosting

🚨 Help Needed: #CORS and #Cloudflare Access Issues with #Nextflux + #MiniFlux Setup 🚨

Hi everyone! I’m struggling with a #SelfHosted setup and could really use some advice from the self-hosting community. Lol I've been trying to figure this out for hours with no luck. Here’s my situation:

Setup

  • MiniFlux: Running in #Docker on a #RaspberryPi500 (#Stormux, based on #ArchLinuxARM).
  • Nextflux: Hosted on Cloudflare Pages.
  • Reverse Proxy: #Caddy (installed via AUR).
  • Cloudflare Access: Enabled for security and SSO.
  • Cloudflared: Also installed via AUR.
  • CORS Settings in Cloudflare Access: Configured to allow all origins, methods, and headers.

What’s Working

  • MiniFlux is accessible from my home network after removing restrictive CORS settings in both Caddy and MiniFlux.
  • Nextflux is properly deployed on Cloudflare Pages.

The Problem

Nextflux cannot connect to MiniFlux due to persistent CORS errors and authentication issues with Cloudflare Access. Here are the errors I’m seeing in the browser console:

  1. CORS Error:

    Access to fetch at 'https://rss.laniecarmelo.tech/v1/me' from origin 'https://nextflux.laniecarmelo.tech' has been blocked by CORS policy: No 'Access-Control-Allow-Origin' header is present on the requested resource.
  2. Cloudflare Access Redirection:

    Request redirected to 'https://lifeofararebird.cloudflareaccess.com/cdn-cgi/access/login/rss.laniecarmelo.tech'.
  3. Failed to Fetch:

    Failed to fetch: TypeError: Failed to fetch.

What I’ve Tried

  1. Service Token Authentication:

    • Generated a service token in Cloudflare Access for Nextflux.
    • Added CF-Access-Client-Id and CF-Access-Client-Secret headers in Caddy for rss.laniecarmelo.tech.
    • Updated Cloudflare Access policies to include a bypass rule for this service token.
  2. CORS Configuration:

    • Tried permissive settings (Access-Control-Allow-Origin: *) in both Caddy and MiniFlux.
    • Configured Cloudflare Access CORS settings to allow all origins, methods, and headers.
  3. Policy Adjustments:

    • Created a bypass policy for my home IP range and public IP.
    • Added an "Allow" policy for authenticated users via email/login methods.
  4. Debugging Logs:

    • Checked Cloudflared logs, which show requests being blocked due to missing access tokens (AccessJWTValidator errors).
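For what it's worth, the permissive-CORS attempt on the Caddy side typically looks something like this (a sketch; the upstream port is a placeholder, and the origin would be locked down in practice):

```caddyfile
rss.laniecarmelo.tech {
    header {
        Access-Control-Allow-Origin "https://nextflux.laniecarmelo.tech"
        Access-Control-Allow-Methods "GET, POST, PUT, DELETE, OPTIONS"
        Access-Control-Allow-Headers "Authorization, Content-Type, X-Auth-Token"
    }
    # Answer preflight requests directly instead of proxying them
    @options method OPTIONS
    respond @options 204
    reverse_proxy 127.0.0.1:8080
}
```

Note that with Cloudflare Access in front, the preflight may never reach Caddy at all, which would match the AccessJWTValidator errors below.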

Current State

Despite these efforts:

  • Requests from Nextflux are still being blocked by Cloudflare Access or failing due to CORS issues.
  • The browser console consistently shows "No 'Access-Control-Allow-Origin' header" errors.

Goals

  1. Allow Nextflux (hosted on Cloudflare Pages) to connect seamlessly to MiniFlux (behind Cloudflare Access).
  2. Maintain secure access to MiniFlux for other devices (e.g., my home network or mobile devices).

My Environment

  • Raspberry Pi 500 running Arch Linux ARM.
  • Both Caddy and Cloudflared are installed via AUR packages.
  • MiniFlux is running in Docker with the following environment variables:

    CLOUDFLARE_SERVICE_AUTH_ENABLED=true
    CLOUDFLARE_CLIENT_ID=<client-id>
    CLOUDFLARE_CLIENT_SECRET=<client-secret>

Relevant Logs

From cloudflared:

ERR error="request filtered by middleware handler (AccessJWTValidator) due to: no access token in request"

From the browser console:

Access to fetch at 'https://rss.laniecarmelo.tech/v1/me' has been blocked by CORS policy.

Questions

  1. Is there a better way to configure CORS for this setup?
  2. Should I be handling authentication differently between Nextflux and MiniFlux?
  3. How can I ensure that requests from Nextflux include valid access tokens?

Any help or advice would be greatly appreciated! 🙏


Oh, another possibly interesting tidbit: this whole thing is hosted on a small CX22 VPS at Hetzner (2 VCPU, 4G Ram, €4.18/month). It's running #NixOS, #Caddy, iocaine, and has a WireGuard tunnel connecting it with a 2014 Intel Mac Mini at home. There is no other service running here, the entire purpose of this VPS is to front for my other servers that aren't on the public internet.

The heaviest applications on it are Caddy and Iocaine.

So far, even under the heaviest load, it didn't need to touch swap, and the 2 VCPUs were enough to do all the garbage generation. I didn't even notice Claude visiting, even though I was deploying new configurations while it was there.

I did notice that the load on the Mac Mini is a lot less, because AI bots no longer reach it. Not only do I save bandwidth by serving less to the crawlers, there's no WireGuard traffic for those requests either, which saves both VPS bandwidth and my own at home.

Pretty cool.

MadHouse Git Repositories · iocaine: The deadliest poison known to AI.

One reason I will have to do something like this is that I want to wire up my #Caddy to mount iocaine at, say, /index.php or somesuch, link there from the real site (with an explicit note briefly explaining that it is not for humans), and keep some stats about how much time various IPs and user agents spend there, across all sites.

This should help me discover new user agents or ip ranges to trap preemptively.
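The mounting part itself is plain Caddy; a sketch of what it could look like (the path and iocaine's listen address are placeholders):

```caddyfile
# Route the honeypot path to iocaine instead of the real site
handle /index.php* {
    reverse_proxy 127.0.0.1:42069
}
```

The per-IP and per-user-agent stats are the piece that would need extra tooling on top.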

In light of ongoing political challenges to democracy and freedom of speech, I would like to share an easy-to-follow tutorial on how to create a low-cost, light-weight, open-source, cross-platform, anonymous, secure social network.

Reference:
"Creating a low-cost, light-weight, anonymous, secure social network - A tutorial"
biosphere.wilmarigl.de/en/?p=4

#IRC #ErgoChat #Caddy

My second blog post is done! I worked very hard on it.

#Monitoring #Caddy with #FluentBit and #Prometheus

After deploying this website, as I described in my previous post, I was confronted with the question of what to do next. I have a list of potential next steps at the end of the post, but as I was working through them, one thing stood out. A lot of those next steps have to do with performance, and planning for the future where there might be greater loads on the system. Also, each next step adds complexity to the stack.

The most important tool for planning, for understanding the impact of changes, and for dealing with the consequences of complexity (← bugs) is the ability to measure what is actually happening. Therefore, we need to install some monitoring, so that we can see issues and make plans based on actual data.
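One plausible shape for such a pipeline, as a rough sketch (the log path, metric name, and port are assumptions here, not necessarily what the post itself does):

```ini
# Tail Caddy's JSON access log, count requests,
# and expose the counter for Prometheus to scrape
[INPUT]
    Name   tail
    Path   /var/log/caddy/access.log
    Parser json
    Tag    caddy.access

[FILTER]
    Name               log_to_metrics
    Match              caddy.access
    Tag                caddy.metrics
    Metric_mode        counter
    Metric_name        caddy_requests_total
    Metric_description Total HTTP requests seen in Caddy logs

[OUTPUT]
    Name  prometheus_exporter
    Match caddy.metrics
    Port  2021
```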

marctrius.net/monitoring-caddy

Please let me know if you have any feedback.
