Common Server issues – FAQs and answers from those in the know
How can I block all but Cloudflare IPs to certain websites on one server but allow unfettered access to others?
2 April 2026 @ 11:02 pm
The only way I can see of doing it is using ipSecurity in web.config, but it's not working: it either errors with a 500 or allows everything through.
I've tried Windows Firewall, but it blocks access to all sites, and I have one site that needs to stay open. What solutions are there?
When I put the following in web.config, I get a 500 error:
<security>
  <ipSecurity allowUnlisted="true">
    <clear/>
    <add ipAddress="173.245.48.0/20" allowed="true" />
    <add ipAddress="103.21.244.0/22" allowed="true" />
    <add ipAddress="103.22.200.0/22" allowed="true" />
    <add ipAddress="103.31.4.0/22" allowed="true" />
    <add ipAddress="141.101.64.0/18" allowed="true" />
    <add ipAddress="108.162.192.0/18" allowed=&
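A common cause of the 500 is that the `system.webServer/security/ipSecurity` section is locked at the server level by default, so per-site web.config entries are rejected. Note also that `allowUnlisted="true"` allows every address not in the list, which matches the "allows everything through" symptom; blocking all but Cloudflare needs `allowUnlisted="false"`. A sketch of the usual setup, assuming the "IP and Domain Restrictions" role feature is installed (run the unlock once at the server level, then the per-site block takes effect):

```xml
<!-- One-time unlock at the server level:
     %windir%\system32\inetsrv\appcmd unlock config -section:system.webServer/security/ipSecurity -->
<security>
  <ipSecurity allowUnlisted="false">
    <clear/>
    <!-- allow only the Cloudflare ranges; everything unlisted is denied -->
    <add ipAddress="173.245.48.0/20" allowed="true" />
    <!-- ...remaining Cloudflare ranges... -->
  </ipSecurity>
</security>
```

Since these rules live in each site's own web.config, the site that needs unrestricted access simply omits the section.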
Spamassassin meta rule and URIBL
2 April 2026 @ 5:30 pm
I'm trying to understand a behavior of SpamAssassin meta rules.
If I set up a custom rule like this:
header __LOCAL_SOME_CONDITION xxxxxxxx
meta SOME_META (__LOCAL_SOME_CONDITION && !URIBL_ABUSE_SURBL)
describe SOME_META test meta rule
score SOME_META -10
The goal is to avoid some false positives when the condition is matched but the URIBL check is false; the details of the false positives are irrelevant, they are not the point of the question.
But what I get on mail header is:
1.9 URIBL_ABUSE_SURBL
-10 SOME_META
That makes no sense: how can SOME_META match if URIBL_ABUSE_SURBL matches?
The only suggestion so far came from a colleague via ChatGPT: the URIBL_ABUSE_SURBL check is a DNS check and runs asynchronously, so when the SOME_META check runs (before the async call has finished), URIBL_ABUSE_SURBL is still false.
This makes sense, but checking spamassassin doc/googling about meta rules I wasn't ab
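The theory is at least plausible: URIBL lookups are launched asynchronously and harvested as answers arrive, and a rule that has not (yet) fired counts as false inside a meta. Two commonly suggested mitigations, hedged as assumptions to verify against your SpamAssassin version, are to push the meta to a late priority (rules run in ascending priority order, and DNS lookups are started very early) and/or to give the lookups more time:

```
# sketch -- verify against your SpamAssassin version
priority SOME_META 1000   # evaluate the meta late, after DNS results are harvested
rbl_timeout 30            # allow slower URIBL/RBL answers to arrive (default 15s)
```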
Copy files between two DFS locations without involving client during copy operation?
2 April 2026 @ 3:38 pm
A user needs to be able to copy files from one location in DFS to another (hosted on different physical servers) without involving the client machine during the copy. Currently they use Windows File Explorer to open both locations and copy/paste the files, which works reasonably well when they are on-site, but performance is very poor when they work remotely, because the operation is managed from their machine, which connects through the corporate VPN. Is there a good way to initiate the file copy so that it runs directly from one server to the other, without involving the remote client machine during the operation?
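One approach is to run the copy on one of the file servers instead of on the client, so the data moves server-to-server and only the command crosses the VPN. A sketch using PowerShell Remoting (assumptions: WinRM is enabled on the server, the user has rights there, and the server and path names below are placeholders):

```powershell
# Run the copy on the file server itself; the client only sends the command.
Invoke-Command -ComputerName FILESERVER01 -ScriptBlock {
    robocopy '\\corp.example\dfs\Projects\Source' '\\corp.example\dfs\Archive\Dest' /E /COPY:DAT /R:2 /W:5
}
```

One caveat: if FILESERVER01 has to reach a share hosted on a second server, the classic Kerberos "double hop" restriction applies, and you may need constrained delegation, or to run the command on the server that actually hosts one of the two DFS targets.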
Return custom status code using php http_response_code() on 404 pages nginx
2 April 2026 @ 1:50 pm
I am using /default.php for 404 pages, and returning status code 200 using the config.
server {
    listen [::]:443 ssl;
    server_name www.mywebsite.com;
    ssl_certificate /.cert/cert.pem;
    ssl_certificate_key /.cert/key.pem;
    root /usr/share/nginx/html;
    index index.php;
    error_page 404 =200 @defaultblock;
    location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
    }
    location @defaultblock {
        try_files $uri $uri/ /default.php$is_args$args;
    }
    location / {
        try_files $uri $uri/ @extensionless-php;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}
However, I want to return the status code using PHP's http_response_code() or header() function. Current
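nginx is doing exactly what the config says: `error_page 404 =200` hard-codes the final status, overriding whatever PHP sends. If the `=` is given without a code, nginx instead uses the status returned by the FastCGI response, which lets http_response_code() in default.php decide. A sketch of the one changed line, assuming the rest of the config stays as above:

```
# "=" with no code: keep whatever status default.php itself returns,
# so http_response_code(404) / header(...) in PHP takes effect.
error_page 404 = @defaultblock;
```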
HAProxy & Percona XtraDB data loss
2 April 2026 @ 12:40 pm
We have a setup with 2 proxies and 4 MySQL servers. Our data team does not want their Oozie database on Percona, and we left it as is for years.
Last year I moved it over from the single server it currently sits on, and kept an eye on the tables. After I saw records being written real time I left it in place.
A day later I got a message stating data was missing. It transpires that some transaction data was not being written but it appears that the transaction may have been started.
I moved the DB back to a single host. I reviewed the percona setup and the proxy. The parameters seem to fall into line with what examples I could see out there. Client and server timeouts are 60 minutes and the proxy balance is roundrobin ("Each server is used in turns, according to their weights.")
I am mostly ignorant about HAProxy, but I have to assume that once the connection is made, all data with the same (session?) identification information would be purely between the sour
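Round-robin balancing of writes across all Percona XtraDB (Galera) nodes is a known way to lose transactions: two writes landing on different nodes can conflict at certification time, and one of them is rolled back at commit. The usual advice is to send writes to a single node at a time and keep the others as hot standbys. A sketch with placeholder addresses, assuming a `haproxy_check` MySQL user exists for health checks:

```
listen mysql-writes
    bind *:3306
    mode tcp
    option mysql-check user haproxy_check
    # one active writer; the others only take over on failure
    server db1 10.0.0.1:3306 check
    server db2 10.0.0.2:3306 check backup
    server db3 10.0.0.3:3306 check backup
```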
Quota limit reached - but I only have 1 schedule (Compute Engine API: Disk Snapshot Schedules)
2 April 2026 @ 11:14 am
I just noticed that I'm hitting the quota 20 / 20 for Compute Engine API: Disk Snapshot Schedules.
But I only ever created a single snapshot schedule and attached it to 18 persistent disks.
It clearly isn't counting the schedules themselves (I have 1), but it also seems too low to be related to disk backups (a 20-disk limit would be very low).
What's that quota about?


Is it possible to have custom reload/restart-like commands in systemctl for a daemon?
31 March 2026 @ 4:07 pm
I'm developing a daemon that runs under the control of systemd and has a code-reload feature that sits in between "systemctl reload" and "systemctl restart". Unlike "systemctl reload", which sends a signal to re-read just the configuration, it also reloads the code; but unlike "systemctl restart", it's not a true hard restart that forgets all the state: instead, the state is written to a file, a signal causes the daemon to replace its code with execve(), and it then reads the old state back from the file.
Is there any feature in systemd that would let me add a custom command to systemctl, in between reload and restart, just for this daemon?
Technically the feature would be implemented by some signal such as SIGUSR1 or SIGUSR2.
I'm expecting there could be cases where a hard restart is done instead of the lighter-weight "dump state + execve + reload state", so it would be useful to have
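systemd does not support adding custom verbs to systemctl, but `systemctl kill` can deliver an arbitrary signal to the unit, which covers the "in between" operation, while `systemctl reload` stays mapped to the config-only path via ExecReload. A sketch (unit name and signal choices are assumptions for illustration):

```
# mydaemon.service (fragment)
[Service]
ExecStart=/usr/local/bin/mydaemon
# "systemctl reload mydaemon" -> re-read configuration only
ExecReload=/bin/kill -HUP $MAINPID
```

The state-preserving restart is then invoked as `systemctl kill --signal=SIGUSR2 --kill-who=main mydaemon.service`. Since the daemon replaces itself with execve() and keeps its PID, systemd will not treat this as a restart, which is the desired behavior.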
Delete Fails when Windows NFS mounted on Linux vm
30 March 2026 @ 5:37 pm
I have a Windows NFS share set up with AD, and I mount it in a Linux VM. I'm able to create and edit files but not to delete them. The user seems to be correctly mapped, but delete fails.
I have even given the user Full Control, but it still doesn't work.
Can someone help me understand the possible causes for this?
The delete works correctly on the windows machine.
Edit:
This is how I have reproduced the issue:
Assigned my user Full Control permissions on the share; in this case I was able to create, edit and delete on both the server and the Linux VM.
Updated the user to have just RX; even in this case I was able to create, edit and delete on both Windows and Linux, because the user is part of BUILTIN\Users and this group has permissions.
Didn't change anything, just unmounted the share on the Linux VM and restarted the Windows server (complete restart). In this case it works on wi
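One detail worth checking: over NFS, unlinking a file is governed by permissions on the parent directory (write and execute on the directory; in Windows ACL terms, "Delete subfolders and files" on the folder, or "Delete" on the file itself), not by Full Control on the file alone. A couple of Linux-side checks, as a sketch (paths are placeholders):

```
# Does the mapped user have write access on the directory itself?
ls -ld /mnt/winshare/somedir

# How was the mount negotiated (version, sec=, etc.)?
nfsstat -m
```

If a server restart "fixed" it, cached ACL evaluations or a stale identity mapping in the Windows NFS service are also plausible suspects.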
Nginx on Debian 13 serving Moodle
28 March 2026 @ 5:32 pm
So I am configuring nginx on a new server. The first part of the install was working great, but now I am getting errors related to the JS files. I have tried everything I read online. I could really use your help.
my nginx conf file:
server {
    #listen 80;
    #listen [::]:80;
    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/subdomainhere/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/subdomainhere/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    server_name subdomainhere www.subdomainhere;
    root /var/www/html/moodle;
    index index.html index.htm index.php;
    add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
    location / {
        try_files $uri $uri/ =404;
    }
    # PHP-FPM
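Moodle serves JS and CSS through URLs with "slash arguments", e.g. /lib/javascript.php/1234/..., so a plain `location ~ \.php$` block 404s those requests, which matches the broken-JS symptom. A sketch of a PHP-FPM block that handles path info (the socket path is a placeholder; adjust to your PHP version):

```
location ~ ^(.+\.php)(/.*)?$ {
    # split /lib/javascript.php/1234/foo into script + path info
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    fastcgi_index index.php;
    fastcgi_pass unix:/run/php/php-fpm.sock;
    include fastcgi_params;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```

Note that a `try_files $uri =404;` line inside this block would break slash arguments, since $uri then includes the trailing path.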
Is it possible to nest HAPROXY settings (defaults)?
25 March 2026 @ 12:19 pm
The HAProxy documentation states that a named defaults is possible. The anonymous defaults are always used if a named version is not called.
If we have errorfile xxx /etc/haproxy/errors/errorsxxx.http for various error codes in our defaults (or even in a separate defaults http), and if we also have defined, e.g., defaults impala with various settings for that specific service, could defaults impala contain defaults http?
Or can we reference more than one defaults section for a single proxy within the config?
Otherwise a lot of duplication is likely to occur.
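HAProxy cannot literally nest one defaults section inside another, but chaining is possible: named defaults sections (HAProxy 2.4+) are referenced with `from`, and newer releases also accept `from` on a defaults section itself, so shared settings can be layered instead of duplicated. A sketch, to be verified against your HAProxy version:

```
defaults http
    errorfile 503 /etc/haproxy/errors/errors503.http

# inherits the errorfiles, then adds service-specific settings
defaults impala from http
    timeout client 60m
    timeout server 60m

frontend fe_impala from impala
    bind *:21000
```

A proxy section can name only one defaults section with `from`, so layering at the defaults level is the way to avoid the duplication.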