serverfault.com

Common Server issues – FAQs and answers from those in the know

How can I get my KVM host and VMs that are on the same VLAN to communicate?

29 March 2026 @ 5:12 am

I've set up a KVM host using a VLAN trunk (using 2 bonded NICs) and then created several VMs on the host. The host is configured with an IP in the native VLAN. Some of the VMs are connected to VLAN 1, the native VLAN; others are connected to VLAN 20 and VLAN 50. I have communication between all the VMs, regardless of which VLAN they are connected to, and other devices on the network can communicate with all the VMs and with the host server. The host server and VMs connected to VLAN 20 or 50 can also communicate. However, there is no communication between the host server and the VMs connected to the native VLAN. I set up the KVM host as follows (using https://intelligentsysadmin.wordpress.com/2023/01/24/bridged-vlans-with-networkmanager/ as a general guide): Create the bond interface connection: nmcli con add type bond con-n
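A minimal sketch of the bridged-VLAN layout that guide describes, with interface names (eno1/eno2) and the host address as assumptions to adapt. One classic cause of exactly this symptom, the host being unable to reach VMs on the native VLAN, is the host IP sitting on the bond instead of on the native-VLAN bridge, so it is worth checking where the address is assigned:

```
# Bond the two NICs (interface names and bond mode are assumptions)
nmcli con add type bond con-name bond0 ifname bond0 bond.options "mode=802.3ad"
nmcli con add type ethernet con-name bond0-port1 ifname eno1 master bond0
nmcli con add type ethernet con-name bond0-port2 ifname eno2 master bond0

# Bridge for the native VLAN; the host IP belongs on br0, not on bond0
nmcli con add type bridge con-name br0 ifname br0 \
    ipv4.method manual ipv4.addresses 192.0.2.10/24 ipv4.gateway 192.0.2.1
nmcli con mod bond0 master br0 slave-type bridge

# One bridge per tagged VLAN for guest traffic (repeat for VLAN 50)
nmcli con add type bridge con-name br20 ifname br20 ipv4.method disabled ipv6.method disabled
nmcli con add type vlan con-name bond0.20 ifname bond0.20 dev bond0 id 20 master br20
```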

Configuring network in virt-manager/Qemu/KVM?

29 March 2026 @ 4:06 am

I'm running a Linux Mint Mate 22.3 host with a Linux Mint Mate 22.2 guest. The host is a brand-new install. The guest is running on a virtual disk that I moved over from a VMware configuration. The guest works well enough, but I'm having issues with networking. I can reach out to the Internet from the guest without issue. I can even ssh into the guest from the host. But I'm unable to access the guest from any other machine on my local network. My initial thought was that this might be because the guest networking is configured as NAT, and I've not been able to find current, non-contradictory instructions on configuring bridge mode. But thinking about it, I might be assuming too much. Ideas as to what the issue might be?
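For reference, a hedged sketch of one common way to get bridge-mode networking for virt-manager guests on a NetworkManager-based distro like Mint (the NIC name enp3s0 is an assumption; note that bridging generally does not work over Wi-Fi adapters):

```
# Create a bridge and enslave the wired NIC to it (the host IP moves to br0)
nmcli con add type bridge con-name br0 ifname br0
nmcli con add type ethernet con-name br0-port ifname enp3s0 master br0
nmcli con up br0
```

In virt-manager, the guest NIC's network source is then set to "Bridge device" with device name br0, which in the domain XML corresponds to `<interface type='bridge'><source bridge='br0'/></interface>`.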

How can I view connection attempts from non-allowlisted IPs in Azure OpenAI / Azure AI Foundry?

29 March 2026 @ 12:41 am

I am using Azure OpenAI and Azure AI Foundry with network restrictions enabled (IP allowlist). When requests originate from IP addresses that are not on the allowlist, they are blocked (hopefully all the time). However, I would like to audit or monitor those denied connection attempts, specifically: See the source IPs that attempted access but were not allowlisted. Count or analyze rejected requests over time. See the query content. Troubleshoot whether legitimate clients are being blocked. How can I view connection attempts from non-allowlisted IPs in Azure OpenAI / Azure AI Foundry?
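If diagnostic settings on the Azure OpenAI resource route its resource logs to a Log Analytics workspace, denied calls can usually be explored with a KQL query along these lines. The category and column names below are assumptions to adapt to what actually lands in the workspace; also note that prompt/query content is generally not included in these logs, and requests dropped before reaching the service will not appear at all:

```kusto
AzureDiagnostics
| where ResourceProvider == "MICROSOFT.COGNITIVESERVICES"
| where Category in ("Audit", "RequestResponse")
| where httpStatusCode_d == 403          // assumed column for the rejected status
| summarize attempts = count() by CallerIPAddress, bin(TimeGenerated, 1h)
| order by attempts desc
```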

Nginx on Debian 13 serving Moodle

28 March 2026 @ 5:32 pm

So I am configuring nginx on a new server. The first part of the install was working great, but now I am getting errors related to the JS files. I tried everything I read online. I could really use your help. My nginx conf file:

server {
    #listen 80;
    #listen [::]:80;
    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/subdomainhere/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/subdomainhere/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    server_name subdomainhere www.subdomainhere;
    root /var/www/html/moodle;
    index index.html index.htm index.php;
    add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
    location / {
        try_files $uri $uri/ =404;
    }
    # PHP-FPM
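The truncated `# PHP-FPM` section is likely where the trouble lies: Moodle serves JS and CSS through PHP scripts with "slash arguments" (URLs like /lib/javascript.php/1/...), which a plain `\.php$` location never matches. A hedged sketch of a Moodle-friendly PHP-FPM block, with the socket path as an assumption:

```nginx
# Match .php followed by a slash or end-of-URI so slash arguments work
location ~ [^/]\.php(/|$) {
    fastcgi_split_path_info ^(.+\.php)(/.*)$;
    fastcgi_index index.php;
    fastcgi_pass unix:/run/php/php8.4-fpm.sock;   # adjust to your PHP-FPM socket
    include fastcgi_params;
    fastcgi_param PATH_INFO $fastcgi_path_info;
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
}
```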

File stuck in php-fpm opcache?

28 March 2026 @ 2:17 pm

I had a php file I was unable to update. The server was constantly returning an old version of the file. After deleting the file I got 404, but restoring the file again returned an old version of the file. All other files I tested worked as expected. Copying the file to a new file name worked as expected. Just that one file wouldn't update. After calling opcache_reset it started working. So it seems the cache was not correctly invalidated for that one file on change. This is scary. Why would this happen? How can I prevent this from happening again, besides disabling opcache? I found this other example of it happening to someone (though it doesn't specify it happening to just one file): Why Does PHP-FPM sometimes get stuck serving old files?
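One plausible mechanism, offered as an assumption rather than a diagnosis: with the default `opcache.revalidate_freq=2`, opcache trusts a cached stat for up to two seconds, and change detection relies on the file's mtime, so a file replaced quickly, or with an unchanged mtime (as some deploy tools and rsync modes can do), may be missed. Settings along these lines make staleness less likely, at some performance cost:

```ini
; php.ini sketch: trade a little performance for prompt invalidation
opcache.validate_timestamps=1   ; keep checking files for changes
opcache.revalidate_freq=0       ; re-stat on every request, no 2s grace window
```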

Trigger scripts via dovecot imapsieve without actually touching the read-only mailbox

27 March 2026 @ 11:15 pm

Configuring a Sieve script to run after IMAP flag changes like so:

[..]
dovecot_config_version = 2.4.0
protocol imap {
  mail_plugins {
    acl = yes
    imap_acl = yes
    imap_sieve = yes
  }
}
sieve_plugins {
  sieve_imapsieve = yes
}
sieve_script script_name {
  cause = flag
  driver = file
  name = script_name
  # content does not matter, empty file sufficient
  path = /etc/dovecot/file-name.sieve
  type = after
}
[..]

in conjunction with an ACL of lookup/read/write/write-seen (no insert/post!) on a mailbox gives me errors like this on each and every IMAP flag change:

imap(redacted@example)<123> Error: sieve: Execution of script 'script_name/file-name' failed with unsuccessful implicit keep

Is there a more proper (not relying on dovecot hand
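The error suggests it is the implicit keep, not the script body, that needs write access to the mailbox. One possible workaround, hedged since it depends on what the triggered action should ultimately do: have the script cancel the implicit keep explicitly, so nothing is written back:

```sieve
# file-name.sieve: an explicit discard cancels the implicit keep, so the
# imapsieve trigger fires without writing to the read-only mailbox
discard;
```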

Apache redirect all ports from subdomain to back end server

27 March 2026 @ 4:20 pm

I have multiple servers with different services. I have Apache on a publicly accessible server, pointing different subdomains to different servers (*.example.com, *.serv1.example.com, *.serv2.example.com). One problem is that when I go to serv1.example.com:5320, it goes to example.com:5320. Here is the Apache server config:

<VirtualHost *:80>
    ServerName serv1.example.com
    ServerAlias *.serv1.example.com
    ProxyPreserveHost On
    ProxyPass / http://100.10.20.30/
    ProxyPassReverse / http://100.10.20.30/
</VirtualHost>

I tried using <VirtualHost *:*> but that didn't route traffic through in general. The back-end servers are also running nginx, so that I could give them their own subdomains; the main server is running Ubuntu and the back-end servers run Arch Linux. Any help would be much appreciated.
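A `<VirtualHost *:80>` block only ever sees connections to port 80, so a request to serv1.example.com:5320 never reaches this proxy; Apache must also listen on each extra port and have a matching vhost, since it has no true wildcard-port proxying. A sketch for one port, with the backend address and port as assumptions:

```apache
Listen 5320
<VirtualHost *:5320>
    ServerName serv1.example.com
    ServerAlias *.serv1.example.com
    ProxyPreserveHost On
    ProxyPass / http://100.10.20.30:5320/
    ProxyPassReverse / http://100.10.20.30:5320/
</VirtualHost>
```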

How to configure dovecot to handle one specific user special?

27 March 2026 @ 1:13 pm

My dovecot setup looks like this:

# 2.4.2 (0962ed2104): /etc/dovecot/dovecot.conf
# Pigeonhole version 2.4.2 (767418c3)
# OS: Linux 6.12.0-160000.25-default x86_64
# Hostname: eagle
# 9 default setting changes since version 2.4.0
dovecot_config_version = 2.4.0
dovecot_storage_version = 2.4.0
listen = *
protocols = imap lmtp
ssl = required
protocol imap {
  imap_idle_notify_interval = 60 secs
  mail_max_userip_connections = 10
}
ssl_server {
  cert_file = /etc/dovecot/certs/cert.pem
  key_file = /etc/dovecot/certs/privkey.pem
}
namespace inbox {
  mail_driver = mbox
  mail_inbox_path = /var/mail/%{user}
  mail_path = ~/Mail
  inbox = yes
  separator = /
}
mbox {
  read_locks = fcntl
  write_locks = fcntl
}
passdb pam {
  service_name = dovecot
}
userdb passwd {
  use_worker = yes
}
service imap-login {
  inet_listener imap {
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
}
namespace inbox {
  inbox = yes
  separator = /
  mailbox Drafts {
    special_use

Email from Google to Microsoft 365 fails, but no error messages

27 March 2026 @ 12:55 pm

Due to a migration process, I need to adjust our inbound mail configuration to transition to Exchange Online. The old configuration (Config A):

MX 10 mymail-1.mydomain.com.
MX 20 mymail-2.mydomain.com.

The currently desired configuration (Config B) is something like:

MX 5 mydomain.mail.protection.outlook.com.
MX 10 mymail-1.mydomain.com.
MX 20 mymail-2.mydomain.com.

After enabling this, everything seemed to work fine: as expected, almost all incoming mail entered via Exchange Online, and every now and then some incoming mail used the fallback or even the fallback of the fallback. There was but one(?) exception that did not work: mail coming from Google customers. From what I can assess from my side, Google perhaps tried to go via the Microsoft route and failed, but never tried the fallback or second fallback. Even after several hours, neither did the mail arrive her
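A couple of hedged checks from outside the network may help narrow this down. Senders generally fall back to lower-priority MX hosts only when the higher-priority host is unreachable, not when it accepts the connection and then rejects the message, so a 5xx rejection from the Exchange Online endpoint (e.g. because the domain is not yet an accepted domain there) would explain mail that neither arrives nor falls back:

```
# What does public DNS serve for the MX set? (domain is a placeholder)
dig +short MX mydomain.com

# Does the Exchange Online MX host resolve?
dig +short mydomain.mail.protection.outlook.com
```

A manual SMTP test against mydomain.mail.protection.outlook.com, watching what happens at RCPT TO, would show whether the Microsoft side is rejecting recipients outright.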

Puppet fails with Cannot allocate memory - fork(2) on Debian Trixie VMs (Ganeti) unless RAM is increased to 8 GB

27 March 2026 @ 3:59 am

I am facing a memory-related issue on Debian Trixie VMs running on Ganeti. These VMs are used exclusively as PostgreSQL database servers. The same Puppet configuration works fine on Debian Bullseye and Bookworm, but consistently fails on Trixie.

Environment:
Hypervisor: Ganeti
Guest OS: Debian Trixie
VM RAM: 4 GB (fails), works only at 8 GB
Workload: PostgreSQL + Puppet agent
Puppet version: Puppet 7
PostgreSQL version: 14

Problem: When running Puppet (runpuppet), I get multiple failures like:

Error: Could not evaluate: Cannot allocate memory - fork(2)
Error: Could not prefetch mount provider 'parsed': Cannot allocate memory - fork(2)
Error: Could not prefetch sysctl provider 'augeas': Cannot allocate memory - fork(2)

Example full output:

Error: /Stage[main]/Ssh/Exec[/bin/systemctl enable systemd-networkd-wait-online.service]: Could not evaluate: Cannot allocate memory - fork(2)
Error: /Stage[main]/Profiles::Monitor
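A hedged line of investigation, since dedicated PostgreSQL hosts are often tuned with strict overcommit: with `vm.overcommit_memory=2`, `fork(2)` must be able to commit the parent's entire address space a second time, so a large Ruby/Puppet process can fail with ENOMEM even though plenty of memory looks free. Worth checking on an affected Trixie VM:

```
# Strict accounting? (2 = never overcommit)
sysctl vm.overcommit_memory vm.overcommit_ratio

# How close is committed memory to the commit limit?
grep -E '^Commit' /proc/meminfo
```

If Committed_AS sits near CommitLimit at 4 GB, that would explain why only adding RAM (or relaxing overcommit) makes the fork failures disappear.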