my adventures in #selfhosting - day 212 (consolidating edition)
If you followed my (mis)adventures yesterday and all the issues I had with caching and #CDN for my #Wordpress site, well, I found a solution. Something that had been in front of me the whole time.
ZERO additional costs
Ta-da:
https://news.elenarossini.com/my-so-called-sudo-life/my-adventures-in-self-hosting-day-212-consolidating-edition/
#Ghost #VarnishCache
Update no. 2: It turns out I cannot install #VarnishCache on my shared hosting plan, because that requires #Nginx and my plan doesn't support it...
The only option I have - to manually install Varnish - is to move my #Wordpress site from my shared hosting plan to a #VPS.
I already have 2 VPSs, so it would cost me nothing, but the move takes a bazillion steps and I honestly don't want to do it: I love the dashboard / ease of use of my shared hosting plan vis-à-vis Wordpress.
Plan C is seeing if things are different with BunnyCDN.
Plan Z is moving all my blogging efforts to Ghost, but I don't want to do that. I'll try anything to protect my Wordpress site against the Mastodon stampede.
Edit: for context, I have had this Wordpress site since 2010 (15 years now!) so I don't want to mess with it.
cc: @cleantext and @ck0 (who asked about this)
Never a dull day in this #selfhosting journey: editing important #DNS records while your child is on summer holiday - and may come see you every few minutes - is a very interesting exercise in concentration.
Special thanks to nonna (grandma) for helping with childcare this morning
I'm hoping I'm successful in setting up a more solid #CDN for my personal website because I keep DDOS'ing myself (from a simple Mastodon reply to a federated Wordpress post - 8k followers will do that).
Wish me luck!
P.S.: another moment of gratitude / deep appreciation for #VarnishCache which has been providing rock solid caching to my #Ghost site. Now I need to take care of my #Wordpress site with a pro CDN solution (Varnish isn't an option sadly bc of the Wordpress setup / I don't have direct access to the server)
I'll take one for #varnishcache too!
my adventures in #selfhosting - day 186 (bandwidth edition)
A moment of gratitude for #VarnishCache and how incredibly well it has protected my self-hosted #Ghost blog from the so-called "Mastodon stampede" / "Mastodon hug of death":
Yesterday I published a page on my site with the French-language version of the Fediverse promo video https://news.elenarossini.com/fedivers-video/
Then I posted a message on my Mastodon account about it, asking people to boost it, so that people in the Francophone world could see it.
How many boosts did I get? 1300 so far (you people are amazing).
Well, my Ghost blog is still standing and super fast. Varnish is INCREDIBLE and I could not recommend it more.
Oh and my VPS with PeerTube is also still standing because I embedded the French version of the video on my Ghost site, so that hundreds of Mastodon servers attempted to fetch the cover image of the POST and not the cover image of the video.
Bandwidth consumption (for my VPS with GoToSocial and PeerTube) so far this month: 0.457 TB (my limit is 8 TB)
Bandwidth consumption for my VPS with Ghost: 0.06 TB (limit: 4 TB)
So far so good
#MySoCalledSudoLife
I'm 100% on board with this:
https://gitlab.gnome.org/GNOME/libxml2/-/issues/913
Both that the security theater is just that: theater. (In #VarnishCache we could not get a CVE under embargo because we did not have enough bugs calling for a CVE!)
But also that unpaid FOSS maintainers don't owe anybody anything:
#varnishcache uses miniobj.h by @bsdphk which puts an unsigned int magic value at the start of each "thing pointed to", which is really helpful to guard against stray pointers, use-after-free and whatnot.
today i ran sth like
od -A None -t x4 -w 4 | grep -E <all possible magics> | sort | uniq -c | sort -rn
on a 170gb core dump to make sure that i do not overlook a memory leak. not particularly efficient, but very reliable through simplicity.
@jorijn @inawhilecrocodile the built-in malloc based stevedore has various issues specific to the underlying implementation, but independent of that, it needs more memory than configured and it has an lru fairness issue. all of these issues are solved with https://gitlab.com/uplex/varnish/slash #varnishcache
@jorijn using uds with #kubernetes is not an issue. just configure a file system shared between multiple containers of the same pod and put the uds "file" there.
fwiw, this is also the way to use varnishadm/varnishstat/varnishlog from a different container than where varnishd runs.
learning curve: yes, but it also makes you more competent :)
#varnishcache
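The shared-volume idea sketched above could look roughly like this in a pod spec (image, paths and names are my assumptions, not a tested manifest):

```yaml
# minimal sketch: an emptyDir volume shared between the varnishd
# container and a sidecar, so a unix domain socket created by one
# is reachable from the other
apiVersion: v1
kind: Pod
metadata:
  name: varnish
spec:
  volumes:
    - name: uds
      emptyDir: {}
  containers:
    - name: varnishd
      image: varnish:stable
      # assumption: listen on a UDS placed on the shared volume
      command: ["varnishd", "-F",
                "-a", "/uds/varnish.sock",
                "-f", "/etc/varnish/default.vcl"]
      volumeMounts:
        - name: uds
          mountPath: /uds
    - name: sidecar
      image: varnish:stable
      command: ["sleep", "infinity"]
      volumeMounts:
        - name: uds
          mountPath: /uds
```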
@jorijn @monospace i did also use nginx and have no hard arguments against it besides "project governance" maybe. but a relevant benefit of using #haproxy in tcp mode is to avoid any double processing of http, which otherwise is prone to desync bugs. tcp mode simply adds/removes the tls pipe, nothing more, nothing less. all the http processing remains in #varnishcache only.
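A minimal #haproxy sketch of that tcp-mode arrangement (ports, cert path and backend address are assumptions about the layout):

```
# terminate TLS in tcp mode and hand the plain stream to varnishd;
# no HTTP parsing happens in haproxy at all
frontend fe_tls
    mode tcp
    bind :443 ssl crt /etc/haproxy/site.pem
    default_backend be_varnish

backend be_varnish
    mode tcp
    server varnish 127.0.0.1:8443 send-proxy-v2
```

With send-proxy-v2, varnishd would accept the client address via the PROXY protocol, e.g. a listen address like `-a :8443,PROXY`.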
@jorijn it's a long story with much detail. but there is one relevant argument: not to have complex tls code in the same address space as varnishd itself: http://varnish-cache.org/docs/trunk/phk/ssl.html and http://varnish-cache.org/docs/trunk/phk/ssl_again.html .
what we are working on right now (unpublished WIP) uses the keyless tls idea, which cloudflare made popular (but did not invent, iirc): https://www.cloudflare.com/en-gb/learning/ssl/keyless-ssl/
@jorijn yes, as of today, the recommended way is to use #haproxy as a combined tls onloader/offloader with the PROXY2 protocol such that haproxy has "zero" configuration: see http://varnish-cache.org/docs/trunk/users-guide/vcl-backends.html#connecting-through-a-proxy and .via in http://varnish-cache.org/docs/trunk/reference/vcl-backend.html#vcl-backend-7
this also works with dns: https://github.com/nigoroll/libvmod-dynamic/blob/master/src/vmod_dynamic.vcc
that said, we will do something about this eventually #varnishcache
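The .via mechanism from those docs looks roughly like this in VCL (hostnames and the socket path are made up for illustration):

```vcl
vcl 4.1;

# haproxy listening on a unix socket in tcp mode,
# speaking PROXY2 towards the TLS origin
backend haproxy_tls {
    .path = "/var/run/haproxy_tls.sock";
}

# the actual origin; varnishd connects to it *through* haproxy,
# which onloads TLS based on the PROXY2 information
backend origin {
    .host = "origin.example.com";
    .port = "443";
    .via = haproxy_tls;
}
```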
I am somewhat torn on bug-bounties, but we'll leave that for another day.
The combination of bug-bounties and AI generates a "make-money-fast" economic opportunity, at the cost of FOSS maintainers.
https://www.theregister.com/2025/05/07/curl_ai_bug_reports/
For the record: #VarnishCache does not pay out bug-bounties (even if we wanted to, we have no money) and this shit-show will certainly not make us start.
@tdp_org all #varnishcache based do.
That's an interesting metaphor and it raises so many questions:
1. Wouldn't it have been smarter to torque them correctly from the start?
2. How does one even determine the correct torque for any one bolt?
3. When somebody starts a FOSS project today, where do they acquire a torque-wrench?
And no, I'm not teasing you (this time); those are some of the questions I tried to find answers to with #varnishcache's "dial it to 11" code quality rule.
Your periodic reminder that #microsoft is not a competent or serious company:
https://blog.orange.tw/posts/2025-01-worstfit-unveiling-hidden-transformers-in-windows-ansi/
(@bagder is remarkably restrained in the quoted responses. I would have gone off the rails, but do not need to, because we decided on day one that #varnishcache would not run on Windows).
If you think that is not bad enough, read the Cyber Safety Review Board's report about the Microsoft Exchange clowncar:
vmod-dynamic, our #varnishcache module for dynamic backends from #dns (A/CNAME and SRV records) has received some bug fixes and workarounds for issues in Varnish-Cache 7.5 and 7.6.
The new wait_timeout and wait_limits parameters are now supported.
See the changelog for details: https://github.com/nigoroll/libvmod-dynamic/blob/master/CHANGES.rst#76-branch
A release branch for 7.6 has been created: https://github.com/nigoroll/libvmod-dynamic/tree/7.6
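For anyone new to the module, basic usage looks roughly like this (the domain and port are placeholders; see the vcc docs linked above for the full interface):

```vcl
vcl 4.1;

import dynamic;

# VCL still wants a static backend declaration, even though
# the director resolves real backends from DNS at runtime
backend dummy { .host = "0.0.0.0"; }

sub vcl_init {
    new d = dynamic.director(port = "80");
}

sub vcl_recv {
    # backends are created/refreshed from A/CNAME lookups
    set req.backend_hint = d.backend("origin.example.com");
}
```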
HAPPY 18TH BIRTHDAY #VarnishCache ! To celebrate this memorable occasion, we have just tagged Version 1.0.0-rc1 of https://gitlab.com/uplex/varnish/slash, which contains fellow, our advanced, #io_uring based, high performance, eventually persistent, always consistent #opensource storage engine.
Read the full announcement: https://varnish-cache.org/lists/pipermail/varnish-announce/2024-February/000762.html
And the changelog: https://gitlab.com/uplex/varnish/slash/-/blob/master/CHANGES.rst?ref_type=heads