• 0 Posts
  • 11 Comments
Joined 10 months ago
Cake day: June 7th, 2025

  • The simple, maybe unhelpful answer is that fail2ban needs to have two things at once: the logs, and a way to block the network traffic.

    Where exactly you want those things to coincide is really up to you. There might be only one point that has access to both, or several, depending on how your systems, services, and network are configured. If you're in a bad situation, you might find there's no single point where both things are simultaneously possible, in which case you'll need to reconfigure something until you have at least one point where they coincide again.

    As far as best practices go, I can't really say for sure, but one of the more convenient ways to run it is on the same system. I usually run it outside of Docker, on the host, which can pretty easily get access to the containers' logs if necessary, and let fail2ban block traffic for the whole system. For me, any system running a publicly accessible network service that allows password login gets a fail2ban instance.

    A whole-network approach where you block the traffic on the firewall is fine too, if that's what you prefer and want to work towards, but it's probably going to be significantly more complex to set up, because now you need to figure out either how to give fail2ban access to your firewall, or a way for your firewall to get the logs it needs.
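
    That host-level setup can be sketched as a jail definition. Here's a minimal, hypothetical jail.local, assuming the container's auth log is bind-mounted to a host path (the jail name, filter, and paths are all placeholders, not anything fail2ban ships with):

    ```ini
    # /etc/fail2ban/jail.local (hypothetical): a host-level jail
    # watching a containerized service's bind-mounted log file.
    [myservice]
    enabled  = true
    port     = 80,443
    # assumes a matching regex filter in filter.d/myservice.conf
    filter   = myservice
    logpath  = /srv/myservice/logs/auth.log
    maxretry = 5
    findtime = 10m
    bantime  = 1h
    ```

    With no banaction override, bans land in the host's default firewall backend, so the offending IP is blocked for every service on the machine, not just the one being watched.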


  • It’s literally the core foundation of my entire self-hosting configuration. I could not live without Forgejo. I can’t imagine being shackled to Github or some other hosted provider anymore for something as important as my git repositories.

    Gitea’s okay too in every practical respect, but Forgejo is the more community-led fork and, in my opinion, less likely to be corporatized and enshittified far in the future, so I’ve hitched my wagon there and couldn’t be happier. The forks are starting to diverge slowly, so it seems like direct migration is no longer possible. That said, git repositories are git repositories, and they carry most of the important history inside them already, so unless you’re super attached to forge-side stuff like issues, you can still migrate; you’ll just lose some of that metadata.
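
    The repo-level part of such a migration can be sketched with a plain mirror clone. The snippet below simulates both forges with local bare repositories so it runs offline; with real forges, the old.git and new.git paths would be your clone and push URLs:

    ```shell
    set -e
    tmp=$(mktemp -d)

    # Stand-in for the "old forge": a bare repo with one commit
    git init -q --bare "$tmp/old.git"
    git clone -q "$tmp/old.git" "$tmp/work"
    echo "hello" > "$tmp/work/README.md"
    git -C "$tmp/work" add README.md
    git -C "$tmp/work" -c user.name=demo -c user.email=demo@example.com \
        commit -qm "initial commit"
    git -C "$tmp/work" push -q origin HEAD:main

    # Stand-in for the "new forge": an empty bare repo
    git init -q --bare "$tmp/new.git"

    # The actual migration: a mirror clone grabs every ref (branches,
    # tags, notes), and push --mirror replays them all at the destination
    git clone -q --mirror "$tmp/old.git" "$tmp/mirror"
    git -C "$tmp/mirror" push -q --mirror "$tmp/new.git"

    git --git-dir="$tmp/new.git" log --oneline main
    ```

    A mirror clone carries everything stored in the repository itself, but nothing the forge keeps outside it (issues, pull requests, wiki settings), which is exactly the "you'll just lose some stuff" caveat.
    
    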


  • You don’t have any great options, but you do have some options. You’ll need dynamic DNS, which you can get for free from various providers. This will manage a “dynamic” DNS entry for your occasionally changing, non-static IP at home. The dynamic DNS entry won’t be on your own domain name, it will be on the provider’s domain name. But wait! That’s just step one.

    You can still get your own, fully-functional domain name, and you can have all the domains and subdomains you want, set up however you want, with one important restriction: you can’t point records directly at an IP address (because yours is dynamic, you’d have to update your domain every time it changed, and there would be delays and downtime while the change propagated).

    Instead, your personal domains have to use CNAME records. A CNAME aliases your name to a different domain, so resolvers pick up the IP from that other domain instead. So you CNAME every entry on your own fancy domain to point at your dynamic DNS name, which handles the dynamic part of the problem for you and always resolves to the real IP you need. Nobody sees the dynamic DNS name; it’s there, but it stays behind the scenes, and visitors still see your fancy personalized domain names.

    It’s still not going to be perfect: it won’t work well, or at all, for certain services like email hosting (self-hosting email is not for the faint of heart anyway) that are very strict about how their DNS and IP addresses are set up, but it will likely be good enough for 99% of the stuff you want to self-host.
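
    In zone-file terms, the CNAME arrangement might look something like this (the domain names and the dynamic DNS hostname here are made up; note that a CNAME can’t sit at the zone apex, so this works for subdomains):

    ```
    ; records on your own fancy domain
    git.example.com.    300  IN  CNAME  myhome.dyn-provider.example.
    cloud.example.com.  300  IN  CNAME  myhome.dyn-provider.example.

    ; the provider keeps the A record for myhome.dyn-provider.example
    ; updated with your current home IP, so lookups always resolve
    ```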




  • I think there’s room for a little bit of nuance that page doesn’t do a great job of describing. In my opinion there’s a huge difference between volunteer maintainers using AI PR checks as a screening measure to ease their review burden and focusing their actual reviews on PRs that pass the AI checks, and AI-deranged lone developers flooding the code with “AI features” and slopping out 10kloc PRs for no obvious reason.

    Just because a project uses AI code reviews or has an AGENTS.md doesn’t necessarily make it a red flag. A yellow flag, maybe, but the fact that the Linux kernel itself is on that list should serve as an example of why you can’t just knee-jerk anti-AI here. If you know anything about Linus Torvalds, you know he has zero tolerance for bad code, and the use of AI is not going to change that, despite everyone’s fears. If it doesn’t work out, Linus will be the first one to throw it under the bus.



  • It’s being built inch by inch. You won’t even know it’s there until you realize you can’t squeeze through it anymore. The trend is extremely obvious: TPM requirements, Secure Boot, Windows Store UWP applications, forced updates without consent, and intentional opt-outs that conveniently get ignored or forgotten when it’s convenient for Microsoft to force something. They are intent on taking full control of PCs and locking them down exactly the way Android phones are locked down. They’ll follow a few footsteps behind what Android is doing now by preventing third-party apps and app stores, but it’s obviously coming, because they are on exactly the same path for exactly the same reasons.

    I don’t imagine we can save everybody either. But that doesn’t mean it’s not worth trying. The more they tighten their grip, the more will slip through their fingers, and all I care about is that the rebellion against Windows grows large enough to survive indefinitely, if not thrive.