Common Server issues – FAQs and answers from those in the know
Rewrite rules to remove www for secure and non-secure, but it's not working
1 May 2026 @ 10:49 am
I spent a lot of time defining these rules, but they are not working. Something is wonky.
I have two A records on my first host (which is not Hostinger) that point www.automation.MYDOMAIN.com as well as automation.MYDOMAIN.com to the IP of my VPS on Hostinger.
I verified that both A records point to Hostinger and resolve properly. Pinging works and all is good.
I am not using directories because I am just exposing my n8n interface, running on port 5678, to the outside.
What I want is very simple, and I thought I had achieved it, but it's not working properly.
I want all my non-secure requests, with or without www, to be redirected to the secure version of that URL, which is why I configured this block:
<VirtualHost *:80>
ServerName automation.MYDOMAIN.com
ServerAlias www.automation.MYDOMAIN.com
Redirec
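For what it's worth, a minimal sketch of the pair of vhosts that usually achieves this (the hostnames are taken from the question; the Redirect/Rewrite lines and the proxy comment are assumptions about intent, not the asker's actual config):

<VirtualHost *:80>
    ServerName automation.MYDOMAIN.com
    ServerAlias www.automation.MYDOMAIN.com
    # Send every plain-HTTP request, with or without www, to the canonical HTTPS name
    Redirect permanent / https://automation.MYDOMAIN.com/
</VirtualHost>

<VirtualHost *:443>
    ServerName automation.MYDOMAIN.com
    ServerAlias www.automation.MYDOMAIN.com
    # Strip www on the secure side too; the certificate must cover both names
    RewriteEngine On
    RewriteCond %{HTTP_HOST} ^www\. [NC]
    RewriteRule ^ https://automation.MYDOMAIN.com%{REQUEST_URI} [R=301,L]
    # ... SSL directives and the ProxyPass to the n8n instance on 127.0.0.1:5678 ...
</VirtualHost>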
Is Thunderbird a secure alternative to Outlook from an IT perspective? [closed]
30 April 2026 @ 1:56 pm
I've been on this multi-year quest to convince our IT department to allow an alternative to Outlook (e.g., Thunderbird). Please understand that I'm not a sysadmin of any stripe, but merely a lowly user. Over the years I've gone from ambitious (allow Thunderbird on IT-managed devices), to personal devices only, and now to only myself as a test case. The request for personal devices came back with the response:
The request has been strongly considered, but ultimately, we are charged with reducing security risk whenever possible. Given the accelerating threat landscape driven by constant advancement in AI tooling, supply chain attacks, and the overall increase in risk, it would be preferred to maintain a more homogenous environment, with Outlook and its security integrations. The risk goes beyond just your personal email and data. This application is able to access your mailbox, but without the ability to interface with Defender for Endpoint, M365 Advanced Threat
Keycloak won't start on Azure Container App - getting killed by probes
30 April 2026 @ 10:40 am
I'm trying to start up a productionised Keycloak on Azure Container Apps. As far as I can tell, it's starting up fine but being shut down because the health probes think it isn't healthy. Here are the logs for the application, which show it starting and then being terminated ...
Connecting to stream...
2026-04-30T10:26:06.64790 Connecting to the container 's175d01-ca-keycloak'...
2026-04-30T10:26:06.70026 Successfully Connected to container: 's175d01-ca-keycloak' [Revision: 's175d01-ca-keycloak--0000004', Replica: 's175d01-ca-keycloak--0000004-d95459d4b-7wfph']
2026-04-30T10:25:58.6577656Z stdout F 2026-04-30 10:25:58,636 INFO [org.infinispan.CONTAINER] (main) ISPN000974: Virtual threads support: enabled
2026-04-30T10:25:59.7978463Z stdout F 2026-04-30 10:25:59,797 INFO [org.hibernate.orm.jdbc.batch] (JPA Startup Thread) HHH100501: Automatic JDBC statement batching enabled (maximum batch size 32)
2026-04-30T10:25:59.8935145Z stdout F 2026-04-30 10:25:59,893 WARN [io.
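A hedged sketch of the pieces that usually matter here (version-dependent assumption: on recent Keycloak releases the health endpoints are only exposed when KC_HEALTH_ENABLED is set, and they live on the management port 9000 rather than 8080; the probe values below are illustrative, not the asker's configuration):

containers:
  - name: keycloak
    env:
      - name: KC_HEALTH_ENABLED
        value: "true"
    probes:
      - type: Startup              # give Keycloak time to boot before Liveness starts killing it
        httpGet:
          path: /health/started
          port: 9000
        initialDelaySeconds: 30
        failureThreshold: 30
      - type: Liveness
        httpGet:
          path: /health/live
          port: 9000
      - type: Readiness
        httpGet:
          path: /health/ready
          port: 9000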
Setting up Hysteria 2 tunnel on 3X-UI + v2rayN (PC) [migrated]
29 April 2026 @ 6:10 pm
Goal: To bypass China's Great Firewall. Use Hysteria for all UDP traffic to increase speed for streaming videos and games. Then use VLESS for everything else (TCP).
I got VLESS + Reality set up and working with help mostly from Gemini AI:
3X-UI on Ubuntu 24 on a Hong Kong server with CN2 GIA (optimized connection) to China
No firewalls or security groups on the server
v2rayN on Windows 11
Now I want to take it to the next step and also add Hysteria 2, but it's hard to get correct info from AI and, unfortunately, there are very few setup guides (there are some Chinese videos, but no auto-translation).
So far what I got for Hysteria 2 Inbound on 3X-UI:
Port 4443 (3X-UI won't let me use 443 since VLESS is using that)
I clicked "Set Cert from Panel" to fill in the public/private keys
Everything else is blank or default: blank SNI, uTLS=chrome, ALPN=h3, etc.
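For the client side, a minimal sketch of the matching Hysteria 2 profile (placeholders throughout; this assumes the panel's certificate covers a name the client can verify, otherwise insecure has to be enabled):

server: your.server.ip:4443          # the inbound port chosen in 3X-UI
auth: the-password-from-the-inbound  # Hysteria 2 authenticates with a plain password, not a UUID
tls:
  sni: the.name.on.the.certificate
  insecure: true                     # only if "Set Cert from Panel" produced a self-signed cert
bandwidth:
  up: 50 mbps                        # Hysteria's congestion control wants honest link estimates
  down: 200 mbps

The equivalent share link that v2rayN can import is roughly hysteria2://<password>@<server>:4443/?sni=<name>&insecure=1 (again a sketch, adjust to the actual inbound values).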
Does an MTU of 65202 make sense in a PCIe-based cluster network?
27 April 2026 @ 3:40 pm
I'm migrating from an old stand-alone server to a 4-way cluster, whose nodes (and control board, which also acts as a router to the outside world) are networked over a backplane with a PCI Express packet switch (see the datasheet for details). Whilst fighting slow operation and instabilities, I found out that the manufacturer had set the MTU of the PCIe link to 65202, which may be normal for loopback connections, but not for a "real" network interface (irrelevant entries omitted):
mixtile@blade3n3:~$ ip addr show
[…]
6: pci0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65202 qdisc fq_codel state UP group default qlen 50000
link/ether 02:b9:24:b7:73:0a brd ff:ff:ff:ff:ff:ff
inet 10.20.0.13/24 metric 100 brd 10.20.0.255 scope global pci0
valid_lft forever preferred_lft forever
inet6 fe80::b9:24ff:feb7:730a/64 scope link
valid_lft f
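If the goal is just to rule the MTU in or out as the cause of the instability, a quick, non-persistent test is to drop it to the conventional Ethernet value (1500 is only a baseline to test against, not a recommendation for this backplane):

mixtile@blade3n3:~$ sudo ip link set dev pci0 mtu 1500     # takes effect immediately, lost on reboot
mixtile@blade3n3:~$ ip link show pci0 | grep mtu           # confirm the new value

Making the change persistent depends on how the vendor image configures the interface (netplan, systemd-networkd or ifupdown), so that part is left out here.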
What permissions is my user lacking for zfs send pool replication?
27 April 2026 @ 2:52 pm
Sending from a zraid0-1 on TrueNAS 26.0.0-BETA.1 (zfs-2.4.1-1 zfs-kmod-2.4.1-1) to a zfs zraid0-1 array on Zima's CasaOS (zfs-2.3.2-1 zfs-kmod-2.3.2-1).
I'm probably going to install TrueNAS 26.0.0-BETA.1 on the Zima (Zima is a hardware brand) host if I can't figure this out today. Thanks for any suggestions.
On the target (the receiving host) I set these permissions:
zfs allow -u supdog -d receive,create,mount,dedup,snapdir,copies,userprop,keyformat,keylocation,pbkdf2iters zima/xool
Then I sent using this command:
zfs send -w -c -R xool@rebalance | ssh [email protected] zfs receive -s -F zima/xool
The transfer ran for several hours, and towards the end I started seeing these errors:
cannot receive org.freenas:description property on zima/xool/supdog: permission denied
cannot receive copies property on zima/xool/supdog: permission denied
cannot receive snapdir property on zima/xool/.system: permission denied
cannot receive readonly property on zima/x
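One thing worth double-checking (a hedged suggestion, not a confirmed diagnosis): zfs allow -d grants permissions to descendant datasets only, and the grant above also doesn't include the readonly property that shows up in the errors. A variant to try on the receiving host, using the default local-plus-descendant scope and adding the missing property, would be:

sudo zfs allow -u supdog receive,create,mount,dedup,snapdir,copies,readonly,userprop,keyformat,keylocation,pbkdf2iters zima/xool
zfs allow zima/xool     # lists what is actually in effect, split into local and descendant permissions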
Scheduled Task set to run every X minutes does not work after server reboot
27 April 2026 @ 1:46 pm
I have a script set to run every 5 minutes in the Windows Server 2019 Task Scheduler, and after a server reboot it never just resumes at the next expected interval.
To fix it I have to edit the schedule, set it to the next expected runtime, then save (and re-enter the domain account password).
What's going on here?
Is it not maintaining the saved credentials across the reboot? Do I have some checkbox set wrong on the "Conditions" or "Settings" tab?
Am I missing a role?
To clarify, I'm using a scheduled trigger, set to "daily" at an arbitrary time (say midnight), with "repeat task every 5 minutes".
If it ran at 10:00 am and the server is rebooted at 10:02 am, shouldn't it know that it was next scheduled to run at 10:05 am? (This is how schedules work in SQL Server Agent, for example.)
Or will it not run until the following midnight?
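A commonly suggested workaround, sketched below (the task name is a placeholder, and whether Set-ScheduledTask prompts again for the domain password depends on how the task stores credentials): add a second trigger that fires at startup and carries the same 5-minute repetition, so the cycle restarts right after a reboot instead of waiting for the next daily trigger.

# Build a repetition pattern (the -Repetition* parameters only exist on -Once triggers,
# so the pattern is copied onto the startup trigger afterwards)
$rep  = (New-ScheduledTaskTrigger -Once -At (Get-Date) `
          -RepetitionInterval (New-TimeSpan -Minutes 5) `
          -RepetitionDuration (New-TimeSpan -Days 1)).Repetition
$boot = New-ScheduledTaskTrigger -AtStartup
$boot.Repetition = $rep

# Append it to the existing task's triggers and write the task back
$task = Get-ScheduledTask -TaskName "MyFiveMinuteScript"
$task.Triggers += $boot
$task | Set-ScheduledTask -User "DOMAIN\svc_account" -Password (Read-Host "Password")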
On Rocky Linux, how can I know *before* installing it if updating a package will require a reboot?
27 April 2026 @ 8:08 am
After updating one or more packages with dnf, I usually use the needs-restarting command to find out whether the server needs a reboot, but by the time dnf update finishes, the update has already been applied and I have to reboot anyway.
What I'd like is to know, before installing a package, whether that update will require a reboot. The reason is simple: to keep the system updated automatically and postpone updates that require a reboot until a later manual intervention.
I'd need something like:
[user@host ~]# needs-a-reboot-after <PackageName> [enter]
If you install/update "<PackageName>", you'll need to reboot the server.
[user@host ~]#
Is there already something out there that does this?
Thanks everyone...
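Nothing ships with exactly that behaviour as far as I know, but a rough heuristic can be scripted: do a dry run of the update and flag it when the transaction touches packages that conventionally require a reboot (roughly the list the needs-restarting plugin treats as reboot-worthy). A sketch, with the package list being an assumption to tune:

#!/bin/bash
# needs-a-reboot-after: heuristic sketch, not an existing tool.
pkg="$1"
reboot_pkgs='kernel|kernel-core|kernel-rt|glibc|linux-firmware|systemd|dbus|dbus-broker|microcode_ctl'
# --assumeno resolves and prints the transaction, then answers "no" so nothing is installed
if dnf --assumeno upgrade "$pkg" 2>/dev/null | grep -Eq "^[[:space:]]+(${reboot_pkgs})[[:space:]]"; then
    echo "If you install/update \"$pkg\", you'll need to reboot the server."
else
    echo "Updating \"$pkg\" should not require a reboot."
fi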
Ceph web dashboard can't display OSDs and devices
26 April 2026 @ 11:58 am
I've now got my Ceph cluster almost ready to use, but in the web dashboard I don't see any of the four OSDs I've created, nor do I find any of the NVMe drives the OSDs reside on:
Error message: No devices (HDD, SSD or NVME) were found. Creation of OSDs will remain disabled until devices are added.
Here is what I get on the command line:
mixtile@blade3n1:~$ sudo ceph osd tree
[sudo] password for mixtile:
ID CLASS WEIGHT TYPE NAME STATUS REWEIGHT PRI-AFF
-1 29.80798 root default
-9 7.45200 host blade3n1
3 ssd 7.45200 osd.3 up 1.00000 1.00000
-7 7.45200 host blade3n2
2 ssd 7.45200 osd.2 up 1.
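The dashboard builds its OSD and device inventory through the orchestrator module, so a hedged first check (assuming a cephadm-managed cluster; OSDs created by hand outside the orchestrator are a common reason for them not showing up) is whether the orchestrator itself knows about the hosts and drives:

mixtile@blade3n1:~$ sudo ceph orch status        # is an orchestrator backend configured and running?
mixtile@blade3n1:~$ sudo ceph orch host ls       # are all four blade nodes registered with it?
mixtile@blade3n1:~$ sudo ceph orch device ls     # does it see the NVMe drives at all?
mixtile@blade3n1:~$ sudo ceph mgr module ls      # are the dashboard and orchestrator modules both enabled?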
AWS PA-VM with GWLB gets no packets
24 April 2026 @ 1:39 pm
I have a Palo Alto PA-VM in AWS set up as a "bump-in-the-wire" firewall for traffic in the same region but in a different VPC and a different account, with a Gateway Load Balancer (GWLB) in between.
The short version of this question: does a proper GWLB setup (same region, different accounts) for a "hairpin", "bump-on-the-wire", "north-south" traffic inspection require extra pieces (such as a TGW or other intermediary step) for packets to actually reach the firewall? Is there another technical limitation I'm overlooking?
I tried this same setup in my test environment first (all in the same region using different VPCs; the main difference was that everything was in the same account) and it worked fine. I'm cheap, so I swapped the PA-VM for a Linux EC2 at that time.
The current setup will have traffic moving as follows:
random internet client --> IGW (data vpc) --> VPCendpoint (data vpc, for GWLB) --> GWLB (fw vpc) -->
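Two things worth verifying with the CLI in a cross-account GWLB chain like this (a hedged diagnostic sketch; the IDs and ARNs are placeholders): whether the endpoint service in the firewall account actually allows the data account's principals to use it, and whether the PA-VM is passing the GWLB's health checks, since an unhealthy target is a common reason packets never reach the appliance.

# In the firewall account: which principals may create endpoints against the service?
aws ec2 describe-vpc-endpoint-service-permissions --service-id vpce-svc-0123456789abcdef0
# In the firewall account: is the PA-VM healthy behind the GWLB target group?
aws elbv2 describe-target-health --target-group-arn <fw-target-group-arn>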