serverfault.com


Common Server issues – FAQs and answers from those in the know

Spamassassin meta rule and URIBL

2 April 2026 @ 5:30 pm

I'm trying to understand a behavior of SpamAssassin meta rules. If I set up a custom rule like this:

header   __LOCAL_SOME_CONDITION  xxxxxxxx
meta     SOME_META  (__LOCAL_SOME_CONDITION && !URIBL_ABUSE_SURBL)
describe SOME_META  test meta rule
score    SOME_META  -10

the goal is to avoid some false positives when the condition matches but the URIBL check is false (the details of the false positive are not the point of the question). But what I get in the mail header is:

1.9  URIBL_ABUSE_SURBL
-10  SOME_META

That makes no sense: how can SOME_META match if URIBL_ABUSE_SURBL matches? The only suggestion, found by a colleague with ChatGPT, is that URIBL_ABUSE_SURBL is a DNS check and runs asynchronously, so when the SOME_META check runs (before the async call completes) URIBL_ABUSE_SURBL is still false. This makes sense, but checking the SpamAssassin docs and googling about meta rules I wasn't ab
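One hedged workaround, assuming the asynchronous-DNS explanation is correct: SpamAssassin's `priority` directive controls when a rule is evaluated, and network rules are launched early (at a negative priority) with results harvested later. Pushing the meta rule to a late priority gives the URIBL lookup more time to complete before the meta is evaluated. This is a sketch, not a guarantee; a DNS lookup that times out will still leave the sub-rule false.

```
# Same rule as above, plus a late priority for the meta so the
# asynchronous URIBL_ABUSE_SURBL result is more likely to be in
# before SOME_META is evaluated. 1000 is an arbitrary late value.
header   __LOCAL_SOME_CONDITION  xxxxxxxx
meta     SOME_META  (__LOCAL_SOME_CONDITION && !URIBL_ABUSE_SURBL)
describe SOME_META  test meta rule
score    SOME_META  -10
priority SOME_META  1000
```

Negating a network rule inside a meta is a known footgun: a rule that has not yet run evaluates as false, so `!URIBL_ABUSE_SURBL` is true until the DNS answer arrives.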

Copy files between two DFS locations without involving client during copy operation?

2 April 2026 @ 3:38 pm

User needs to be able to copy files from one location in DFS to another (hosted on different physical servers) without involving the client machine during the copy activity. Currently they are using Windows File Explorer to open both locations and copy/paste the files, which works reasonably well when they are on-site, but performance is very poor when they are working remotely because the operation is being managed from their machine, which is connecting through the corporate VPN. Is there a good way to initiate the file copy so that it is done directly from one server to the other, without involving the remote client machine during the operation?
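One common approach, sketched here with hypothetical server and share names: use PowerShell remoting to run robocopy on the source file server itself, so the data travels server-to-server and the client only issues the command.

```shell
# Hypothetical names: FILESERVER1/FILESERVER2, D:\DFSRoots\Share1\Project.
# The copy runs on FILESERVER1; only the Invoke-Command call crosses the VPN.
Invoke-Command -ComputerName FILESERVER1 -ScriptBlock {
  robocopy 'D:\DFSRoots\Share1\Project' '\\FILESERVER2\Share2\Project' /E /COPY:DAT /R:2 /W:5
}
```

Caveat: writing to a second server from inside a remote session hits the Kerberos double-hop problem, so this typically needs delegation configured (e.g. resource-based constrained delegation or CredSSP), or the robocopy job scheduled to run as an account local to FILESERVER1.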

Return custom status code using php http_response_code() on 404 pages nginx

2 April 2026 @ 1:50 pm

I am using /default.php for 404 pages, and returning status code 200 using this config:

server {
    listen [::]:443 ssl;
    server_name www.mywebsite.com;
    ssl_certificate /.cert/cert.pem;
    ssl_certificate_key /.cert/key.pem;
    root /usr/share/nginx/html;
    index index.php;
    error_page 404 =200 @defaultblock;
    location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
    }
    location @defaultblock {
        try_files $uri $uri/ /default.php$is_args$args;
    }
    location / {
        try_files $uri $uri/ @extensionless-php;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

However, I want to return the status code using PHP's http_response_code() or header() function. Current
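A hedged sketch of one way to do this: `error_page 404 =200 @defaultblock;` hard-codes the 200, but nginx's `error_page` also accepts `=` with no code, in which case the response code is taken from whatever the fallback itself returns, letting the PHP script decide.

```
# In the server block: "=" with no code passes through the status
# that @defaultblock (i.e. /default.php) actually emits.
error_page 404 = @defaultblock;
```

Then /default.php can set whatever code it wants, for example:

```
<?php
http_response_code(404);   // or header('HTTP/1.1 410 Gone'), etc.
// ... render the error page ...
```

This assumes the 404 is raised by nginx (try_files miss) and handled by the FastCGI location; if your PHP app itself emits the 404, `fastcgi_intercept_errors` also comes into play.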

HAProxy & Percona XtraDB data loss

2 April 2026 @ 12:40 pm

We have a setup with 2 proxies and 4 MySQL servers. Our data guys do not want their Oozie database on Percona, and we left it as is for years. Last year I moved it over from the single server it currently sits on and kept an eye on the tables. After I saw records being written in real time, I left it in place. A day later I got a message stating data was missing. It transpires that some transaction data was not being written, but it appears that the transaction may have been started. I moved the DB back to a single host. I reviewed the Percona setup and the proxy; the parameters seem to fall in line with the examples I could find. Client and server timeouts are 60 minutes and the proxy balance is roundrobin ("Each server is used in turns, according to their weights."). I am mostly ignorant about HAProxy, but I have to assume that once the connection is made, all data with the same (session?) identification information would be purely between the sour
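One commonly recommended pattern for Percona XtraDB Cluster (Galera) behind HAProxy, sketched with hypothetical node addresses: send all writes to a single node and keep the others as hot standbys. Round-robin across a multi-writer Galera cluster can surface certification conflicts and deadlocks that look exactly like "started but never committed" transactions.

```
# Hypothetical backend for writes: one active node, two backups.
# Reads can use a separate roundrobin backend if desired.
backend mysql_write
    mode tcp
    option tcp-check
    server db1 10.0.0.11:3306 check
    server db2 10.0.0.12:3306 check backup
    server db3 10.0.0.13:3306 check backup
```

Whether this matches your data-loss symptom depends on whether the application retries on deadlock errors; Galera expects clients to handle certification failures, and many applications silently don't.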

Quota limit reached - but I only have 1 schedule (Compute Engine API: Disk Snapshot Schedules)

2 April 2026 @ 11:14 am

I just noticed that I'm hitting the quota 20 / 20 for Compute Engine API: Disk Snapshot Schedules. But I only ever created a single snapshot schedule and attached it to 18 persistent disks. It clearly isn't counting the schedules themselves (I have 1), but it's also way too low to be related to disk backups (a 20-disk limit would be very low). What is that quota about? The quota view The schedules list

Poor video quality when connecting live media between phones

2 April 2026 @ 7:21 am

Our company uses IP telephony based on an office PBX, with Cisco 8865 and Yealink SIP-T58W phone models. With a direct RTP connection (not via the PBX), the video transmitted from the Cisco phone deteriorates (pixelation appears), and the video from the Yealink phone narrows. If the traffic goes through the PBX, there are no problems with the video. The only thing that stands out is the high average/maximum packet delta in Wireshark (when analyzing the RTP streams), even though all phones are on the same voice VLAN and DSCP QoS is used. What could be the problem, given that there is no packet loss or jitter in the traffic dump and the payload does not change when traffic passes through the PBX server?
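For comparing the direct and PBX-relayed captures side by side, tshark's built-in RTP stream statistics can be run non-interactively; the capture filename here is hypothetical.

```shell
# Per-stream summary: packet count, lost packets, min/mean/max delta,
# jitter. Run once against the direct-call capture and once against
# the PBX-relayed capture and diff the delta/jitter columns.
tshark -q -z rtp,streams -r direct_call.pcap
```

If the max delta is high only on the direct path, a plausible suspect is that the phones negotiate a different codec, resolution, or packetization when talking peer-to-peer than when the PBX is in the media path, which a look at the SDP offer/answer in both captures would confirm or rule out.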

Best way to handle Zoom integration conflicts across multiple GoHighLevel accounts?

2 April 2026 @ 5:04 am

I’m working with GoHighLevel and trying to integrate Zoom for scheduling and meetings, but I’ve run into an issue where the Zoom account seems to already be connected to another sub-account. The error message indicates that the Zoom account is already integrated elsewhere, even after attempting to remove integrations from the current account settings. From what I understand, GoHighLevel (LeadConnector) only allows one active connection per Zoom account, but it’s not very clear where the original integration is stored (agency level vs sub-account level). I’ve already tried:
- Removing integrations from the current sub-account
- Switching calendar integrations
- Re-authorizing Zoom
Still facing the same issue. What would be the correct way to fully disconnect a Zoom account from all GoHighLevel instances so it can be reconnected cleanly? Also, is there a recommended workflow to avoid this conflict when managing multiple client account

How can I convert a PowerShell command into a batch file [migrated]

1 April 2026 @ 7:13 pm

The following command works in PowerShell:

.\bcs.ps1 -sites a.com,b.guide,c.com

But it produces an error when PowerShell is called from the command prompt. I removed the inner quotes and that doesn't work either.

powershell -file .\bcs.ps1 -sites "a.com","b.guide","c.com"

Any suggestions?
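A likely explanation, offered as a sketch: with `-File`, PowerShell passes the arguments as literal strings rather than parsing them as PowerShell syntax, so the comma-separated list never becomes an array. Using `-Command` makes PowerShell parse the whole line, which should behave like the interactive case.

```shell
:: In a .bat file (":: " is a batch comment): -Command tells PowerShell
:: to parse the line itself, so a.com,b.guide,c.com binds to the -sites
:: parameter as an array, just as it does at a PowerShell prompt.
powershell -NoProfile -Command ".\bcs.ps1 -sites a.com,b.guide,c.com"
```

This assumes `-sites` is declared as `[string[]]` in bcs.ps1; if it takes a single string and splits it internally, `-File` with one joined argument would also work.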

Is there any way to use URL masked serverless backends AND support GRPC?

1 April 2026 @ 4:40 pm

Looking through the docs, it's not clear to me if this is possible. I have previously used URL-masked serverless backends to route to my Cloud Run services. This is easy to configure on the GCP load balancer, it's a one-time config, and all future Cloud Run services that are deployed are automatically accessible via https://my-lb.mydom.com/service-abc. The problem is that gRPC does not support path-based routing, so I can't send gRPC requests to my-lb.mydom.com/service-abc:443. I don't want to use host-based routing, and I want to avoid Cloud Service Mesh and Traffic Director. Is there no way to support this with vanilla GCE load balancing? If I have to use Cloud Service Mesh and/or Traffic Director, does it work with URL-masked backends so that it automatically routes by Cloud Run service name?

use Nvidia L40s GPU passthrough on Proxmox [closed]

1 April 2026 @ 1:24 pm

How do I configure a Proxmox host that has one Nvidia L40S GPU for passthrough to a VM running on the same Proxmox host? I have an HPE ProLiant DL380 with Intel Xeon processors and one Nvidia L40S GPU card. I want to achieve PCIe passthrough to a VM running on Proxmox.
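A condensed sketch of the usual Proxmox PCIe passthrough steps for an Intel host; the PCI address 0000:41:00.0 and VM ID 100 below are hypothetical placeholders for your own values.

```shell
# Find the GPU's real PCI address first:
lspci -nn | grep -i nvidia

# 1) Enable the IOMMU in /etc/default/grub, then run update-grub and reboot:
#    GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# 2) Load the vfio modules at boot so the host kernel can hand the
#    device to the VM instead of binding its own driver:
printf 'vfio\nvfio_iommu_type1\nvfio_pci\n' >> /etc/modules

# 3) Attach the GPU to the VM as a PCIe device (VM 100 is a placeholder):
qm set 100 -hostpci0 0000:41:00.0,pcie=1
```

For `pcie=1` the VM should use the q35 machine type (and typically OVMF/UEFI firmware); you may also need to blacklist the host's nouveau/nvidia driver so vfio-pci claims the card at boot.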