Common Server issues – FAQs and answers from those in the know
Is it possible to have custom reload/restart-like commands in systemctl for a daemon?
31 March 2026 @ 4:07 pm
I'm developing a daemon that runs under the control of systemd and has a code-reload feature that sits in between "systemctl reload" and "systemctl restart". Unlike "systemctl reload", which sends a signal to re-read just the configuration, it also reloads the code; but unlike "systemctl restart", it is not a true hard restart that forgets all state. Instead, the state is written to a file, a signal causes the daemon to replace its code with execve(), and the daemon then reads the old state back from the file.
Is there any feature in systemd that would allow me to add a custom command to systemctl, in between reload and restart, just for this daemon?
Technically the feature would be implemented by some signal such as SIGUSR1 or SIGUSR2.
I'm expecting that there could be cases where a hard restart is done instead of the lighter-weight "dump state + execve + reload state", so it would be useful to have
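systemd has no built-in verb between reload and restart, but two commonly used approaches are (a) pointing ExecReload at the lighter signal and (b) delivering the extra signal with "systemctl kill". A minimal sketch, assuming the daemon treats SIGHUP as config-reload and SIGUSR2 as the state-preserving re-exec (the unit name and both signal choices are assumptions):

```ini
# /etc/systemd/system/mydaemon.service (unit name is hypothetical)
[Service]
ExecStart=/usr/local/bin/mydaemon
# "systemctl reload" -> plain configuration re-read
ExecReload=/bin/kill -HUP $MAINPID
# The state-preserving re-exec can be triggered without a custom verb:
#   systemctl kill --kill-who=main --signal=SIGUSR2 mydaemon.service
# If the daemon re-execs itself with execve(), the main PID stays the
# same, so systemd does not treat the service as failed or restarted.
```

Wrapping the "systemctl kill" invocation in a small script (e.g. "mydaemon-reexec") gives operators a memorable command even though systemctl itself gains no new verb.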
Powering down idle HDDs in a ZFS pool to conserve power
31 March 2026 @ 8:35 am
I've got a storage server with some NVMe SSDs for the system and metadata cache, and 24 3.5" HDDs for storing historic data. Since this data is very infrequently accessed (at least once per day for synchronization; often that is the only access for several days), it would be good to spin down the drives the rest of the time to conserve power.
The system is running Debian (actually Proxmox, but no VMs there) and ZFS. The storage pool is made up of 3x 8-way raidz1, with a mirrored special device on NVMes for metadata storage. The disks are Seagate Exos SAS drives.
My questions:
Will the drive lifetime decrease (or increase) if they are only active maybe 1/4 of the time, but with at least one spin-down/spin-up every day?
Can or do I need to tell ZFS about this behavior, so that it isn't confused by very long initial access times when a drive needs to spin up first?
Does ZFS even let a drive go to sleep, or does
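For what it's worth, ZFS itself does not manage spin-down; people typically use an idle timer outside ZFS, such as the Debian hd-idle package. Note that SAS drives like the Exos often ignore the ATA standby timer set via hdparm -S, which is one reason a userspace tool (or sdparm/sg_start) is usually suggested for SAS. A sketch of the hd-idle defaults file, where the device names, the 1-hour timeout, and the exact variable names (check your installed /etc/default/hd-idle) are assumptions:

```
# /etc/default/hd-idle -- example values only
START_HD_IDLE=true
# -i 0 disables the catch-all timer; per-disk -a/-i pairs follow.
# Spin down sda and sdb after 3600 s of inactivity (names are assumptions).
HD_IDLE_OPTS="-i 0 -a sda -i 3600 -a sdb -i 3600"
```

Using stable /dev/disk/by-id/ names instead of sdX is generally safer on a 24-drive chassis, since sdX ordering can change across reboots.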
libnss-extrausers use cases and details
31 March 2026 @ 5:40 am
What are the use cases for libnss-extrausers? I've seen a couple of examples but the details are lacking.
If you use libnss-extrausers, don't you have to change all user and group IDs to avoid colliding with those in the corresponding /etc files? If so, then effectively only files with "world" access, in directories with "world" access, are available.
What about home directories, do you use a common directory such as /tmp or what do you do?
This is why I'm asking about the use cases to better understand the benefit.
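For context, libnss-extrausers simply adds a second set of passwd/group/shadow files that NSS consults after (or before, depending on ordering) the normal ones; entries listed first in nsswitch.conf win on lookup. A typical setup sketch, where the example UID and home directory are assumptions:

```
# /etc/nsswitch.conf -- consult /etc files first, then the extrausers files
passwd:  files extrausers
group:   files extrausers
shadow:  files extrausers

# /var/lib/extrausers/passwd -- same format as /etc/passwd;
# UIDs are normally taken from a range reserved to avoid collisions
deploy:x:20001:20001:Deploy user:/home/deploy:/bin/bash
```

Because the extrausers entries are ordinary passwd entries, home directories work exactly as for /etc/passwd users; nothing forces a shared directory like /tmp.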
Mutual TLS Abruptly Stopped Working on Tomcat 9 and 11 Servers
30 March 2026 @ 11:06 pm
I have several Java webapps running on 2 Tomcat servers--with various JDK (17 and 25) and Tomcat versions (9 and 11), in a test/development environment--where the server is configured to request a client certificate. When the browser prompts me, I select my CAC certificate in the browser popup and am then prompted to enter my PIN. The prompts for certificate selection and PIN entry have always occurred immediately after navigating to the URL (so there is no post-handshake authentication).
Unfortunately, this stopped working for me; one day it was working and the next it totally stopped. I am 100% certain that no server or personal configuration changed since the time it was working; no server, connector, JDK, or any other settings were manually changed. Even stranger, it is very inconsistent between users: it still works 100% for one team member, it works only in Firefox for another team member, and a third member can only sometimes get it to work in Chrome's p
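For reference, the "prompt immediately on navigation" behavior comes from the connector requesting the certificate during the initial handshake. In Tomcat 9/11 that is typically configured as below (keystore/truststore paths and passwords are placeholders). One hedged hypothesis worth testing when behavior differs per browser is TLS 1.3, where client-certificate handling varies between browsers and smart-card middleware; temporarily pinning the connector to TLS 1.2 is a common diagnostic step, not a fix:

```xml
<!-- server.xml sketch; paths and passwords are placeholders -->
<Connector port="8443" protocol="org.apache.coyote.http11.Http11NioProtocol"
           SSLEnabled="true" maxThreads="150">
  <SSLHostConfig certificateVerification="required"
                 truststoreFile="conf/truststore.p12"
                 truststorePassword="changeit"
                 protocols="TLSv1.2"> <!-- diagnostic only: rules out TLS 1.3 quirks -->
    <Certificate certificateKeystoreFile="conf/keystore.p12"
                 certificateKeystorePassword="changeit" />
  </SSLHostConfig>
</Connector>
```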
Strange permissions on shared Start Menu folders in Windows 10 IoT Enterprise LTSC
30 March 2026 @ 9:04 pm
I have a fleet of Dell OptiPlex 3000 machines that were purchased due to one reason only: legal license for Windows 10 IoT Enterprise LTSC.
I am getting a very strange set of permissions on a clean re-install, on certain folders related to the shared Start Menu:
C:\Users\All Users\Start Menu\Programs
S-1-5-21-3671523672-3566060235-3176437112-1000:(I)(OI)(CI)(DE,DC)
DESKTOP-UROR7BK\admin:(I)(OI)(CI)(DE,DC)
NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F)
BUILTIN\Administrators:(I)(OI)(CI)(F)
BUILTIN\Users:(I)(OI)(CI)(RX)
Everyone:(I)(OI)(CI)(RX)
C:\Users\All Users\Start Menu
Everyone:(DENY)(S,RD)
Everyone:(RX)
NT AUTHORITY\SYSTEM:(F)
BUILTIN\Administrators:(F)
C:\Users\All Users
NT AUTHORITY\SYSTEM:(OI)(CI)(F)
BUILTIN\Administrators:(OI)(CI)(F)
CREATOR OWNER:(OI)(CI)(IO)(F)
BUILTIN\Users:(OI)(CI)(RX)
BUILTIN\Users:(CI)(WD,AD,WEA,WA)
C:\Users
NT AUTHORITY\SYSTEM:(OI)(CI)(F)
BUILTIN\Administrators:(OI)(CI)(F)
BUILTIN\Users:(RX)
BUILTIN\Users:(OI)(CI)(IO)(
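One thing worth noting when comparing these ACL dumps: C:\Users\All Users is a junction to C:\ProgramData, so the real path behind the Start Menu entries is C:\ProgramData\Microsoft\Windows\Start Menu. If the inherited entries there really are wrong, a commonly used repair is to re-apply inheritance from the parent with icacls (run elevated; /reset replaces the ACL with purely inherited entries, so review the current ACL first):

```
icacls "C:\ProgramData\Microsoft\Windows\Start Menu" /reset /T /C
```

This is a sketch of a generic repair, not a statement that the dumped permissions are incorrect for this Windows 10 IoT Enterprise LTSC image.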
Delete Fails when Windows NFS mounted on Linux vm
30 March 2026 @ 5:37 pm
I have a Windows NFS share set up with AD, mounted on a Linux VM. I'm able to create and edit files, but not able to delete them. The user seems to be correctly mapped, but delete fails.
I have even given full control to the user but it still doesn’t work.
Can someone help me understand the possible causes for this?
The delete works correctly on the windows machine.
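One hedged thing to check: over NFS, deleting a file is an operation on the parent directory, not on the file itself, so the mapped user needs the equivalent of "Delete Subfolders and Files" on the containing folder; granting Full Control on the file alone is not enough. On the Windows side the folder ACL can be inspected with icacls (the path is a placeholder):

```
icacls D:\nfsshare
:: look for (DC) -- Delete Child -- or Full control (F) on the folder
:: for the AD account the Linux user maps to
```

This would also explain why delete works locally on the Windows machine if the local session runs under a different (more privileged) account than the NFS-mapped one.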
Exchange Server 2019 Root URL Gives 500 Error But OWA/ECP Work Fine
30 March 2026 @ 3:52 pm
We have a pair of Windows Server 2019 servers running Microsoft Exchange 2019 called mail01 and mail02.
I recently noticed that browsing to https://mail01.domain.com/ gives a 500 error (well, actually it shows "Server Error in '/' Application", but the logs record the 500 error). However, browsing to https://mail01.domain.com/owa or /ecp works fine. Additionally, browsing to https://mail02.domain.com/ also works fine.
So it is something in the standard root URL redirection on that one server.
The event viewer has the following error:
Could not load file or assembly 'Microsoft.Exchange.HttpUtilities, Version=15.0.0.0, Culture=neutral, PublicKeyToken=31bf3856ad364e35' or one of its dependencies.
Event code: 3008
Event message: A c
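Since the error is an assembly load failure in the root IIS web application only on one server, a commonly suggested repair path (hedged: back up the IIS configuration first, and the path assumes a default Exchange install location) is re-running the script Exchange ships for rebuilding the OWA/ECP virtual directory metadata:

```
cd "C:\Program Files\Microsoft\Exchange Server\V15\Bin"
.\UpdateCas.ps1
```

Comparing the root web.config and the bin contents of the default web site between mail01 and mail02 is another low-risk way to spot what diverged, since mail02 still redirects correctly.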
IPv6 address randomly drops on Ubuntu 24.04 (Oracle Cloud) after switching from ufw to nftables
29 March 2026 @ 9:09 am
I'm experiencing an intermittent IPv6 connectivity issue on an Oracle Cloud Infrastructure (OCI) instance after migrating my firewall from ufw to nftables.
Environment:
OS: Ubuntu 24.04
Cloud Provider: Oracle Cloud (OCI)
Local Network: Dual-stack (IPv4 and IPv6 available)
The Problem:
My SSH connection to the server via IPv6 is intermittent, while SSH via IPv4 remains perfectly stable.
Upon investigation, I noticed that when the connection fails, the IPv6 address completely disappears from the network interface (checked via ip addr). When the connection is working, the IPv6 address is present.
Troubleshooting Steps Taken:
I have already allowed all ICMPv6 traffic in the Oracle Cloud Web Console (Security Lists).
If I change my local nftables input chain default policy to accept, the IPv6 address reappears in ip addr after a few minutes, and SSH via IPv6 works perfectly again.
However, if I change the nftables defa
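The symptom pattern (address vanishes under a drop policy, reappears a few minutes after switching to accept) matches ICMPv6 Router Advertisements being dropped: the IPv6 address is maintained via RA/NDP, and when the RA-assigned lifetime expires the kernel removes the address. A minimal sketch of the input-chain rules that typically need to be accepted (table/chain layout assumed to match a common inet filter setup; adjust to your ruleset):

```
# nftables sketch: allow NDP/RA and the DHCPv6 client port under policy drop
table inet filter {
    chain input {
        type filter hook input priority 0; policy drop;

        ct state established,related accept
        iif "lo" accept

        # Neighbor Discovery + Router Advertisements keep the IPv6
        # address alive; without these the address eventually times out.
        icmpv6 type { nd-router-advert, nd-neighbor-solicit,
                      nd-neighbor-advert, nd-router-solicit } accept

        # OCI may also deliver the address via DHCPv6 (client port 546)
        udp dport 546 accept

        tcp dport 22 accept
    }
}
```

Allowing ICMPv6 in the cloud Security List is not enough on its own, since the host firewall filters the same traffic again.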
Nginx on Debian 13 serving Moodle
28 March 2026 @ 5:32 pm
So I am configuring nginx on a new server. The first part of the install was working great, but now I am getting errors related to the JS files. I have tried everything I read online. I could really use your help.
my nginx conf file:
server {
    #listen 80;
    #listen [::]:80;
    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/subdomainhere/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/subdomainhere/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    server_name subdomainhere www.subdomainhere;
    root /var/www/html/moodle;
    index index.html index.htm index.php;
    add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
    location / {
        try_files $uri $uri/ =404;
    }
    # PHP-FPM
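A likely culprit for the broken JS: Moodle serves JavaScript and CSS through PHP using "slash arguments" (URLs like /lib/javascript.php/1/...), so the PHP location block must pass PATH_INFO; a plain \.php$ match returns 404 or the wrong content for those URLs. A sketch of what the truncated PHP-FPM section typically looks like for Moodle (the php-fpm socket path is an assumption; check your pool configuration):

```nginx
# PHP-FPM location supporting Moodle slash arguments
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    fastcgi_pass unix:/run/php/php-fpm.sock;  # socket path: assumption
    fastcgi_index index.php;
    include fastcgi_params;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

The `try_files $uri $uri/ =404;` in `location /` is fine for static files, but the slash-argument URLs must match the PHP location above, not fall through to =404.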
File stuck in php-fpm opcache?
28 March 2026 @ 2:17 pm
I had a php file I was unable to update. The server was constantly returning an old version of the file. After deleting the file I got 404, but restoring the file again returned an old version of the file. All other files I tested worked as expected. Copying the file to a new file name worked as expected. Just that one file wouldn't update.
After calling opcache_reset it started working.
So it seems the cache was not correctly invalidated for that one file on change.
This is scary. Why would this happen? How can I prevent this from happening again, besides disabling opcache?
I found this other example of it happening to someone (though it doesn't specify it happening to just one file): Why Does PHP-FPM sometimes get stuck serving old files?
EDIT:
opcache.validate_timestamps is enabled and opcache.revalidate_freq is 2
EDIT: This could
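With validate_timestamps enabled and revalidate_freq=2, one known way a single file can stay stale is when it is rewritten with an unchanged mtime (e.g. restored from backup, or replaced within the same second), so the timestamp check keeps passing against the cached copy. This is a hedged hypothesis, not a confirmed diagnosis. For development, a common mitigation is to revalidate on every request:

```ini
; php.ini / OPcache settings -- development-oriented sketch
opcache.validate_timestamps=1
; check file mtimes on every request instead of every 2 s
opcache.revalidate_freq=0
; optionally ignore files modified less than N seconds ago,
; which helps when deploys touch files mid-request
opcache.file_update_protection=2
```

In production, keeping revalidate_freq higher and calling opcache_reset() (or reloading php-fpm) as part of the deploy step avoids both the staleness and the per-request stat cost.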