Common Server issues – FAQs and answers from those in the know
Remove a line only if it is outside a blockinfile block
25 December 2025 @ 12:43 am
Using Ansible, I am looking to add or update multiple entries in a Postfix main.cf file. I want to use ansible.builtin.blockinfile, since some of my entries will be multiline and/or have comments.
Is there a way to remove lines only if they are outside of a blockinfile block?
Example
This example is simplified; in my actual situation, I need to replace ~7 entries.
Original main.cf:
mynetworks = 127.0.0.0/16
Desired (blockinfile markers are acceptable, of course):
mynetworks = cidr:/etc/postfix/mynetworks.cidr
        hash:/etc/postfix/mynetworks.fqdn
Implementation (not working correctly):
- ansible.builtin.lineinfile:
    path: /etc/postfix/main.cf
    regexp: '^mynetworks.*'
    state: absent

- ansible.builtin.blockinfile:
    path: /etc/postfix/main.cf
    marker: "# {mark} ANSIBLE MANAGED BLOCK test"
    block: |
      mynetworks = cidr:/etc/postfix/mynetworks.cidr
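One way to stop the two tasks from fighting on repeat runs is to make the removal regexp skip the managed entry itself, so lineinfile never deletes what blockinfile maintains. A minimal sketch, assuming the managed value always starts with cidr: (ansible.builtin.lineinfile uses Python regular expressions, so a negative lookahead works):

- name: Remove mynetworks entries that are not the managed one
  ansible.builtin.lineinfile:
    path: /etc/postfix/main.cf
    # matches "mynetworks =" lines whose value does not begin with cidr:
    regexp: '^mynetworks\s*=(?!\s*cidr:)'
    state: absent

- name: Ensure the managed block is present
  ansible.builtin.blockinfile:
    path: /etc/postfix/main.cf
    marker: "# {mark} ANSIBLE MANAGED BLOCK test"
    block: |
      mynetworks = cidr:/etc/postfix/mynetworks.cidr
              hash:/etc/postfix/mynetworks.fqdn

On the first run the old mynetworks line is removed and the block is added; on later runs neither task matches anything outside its remit, so the play stays idempotent.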
Reverse Proxy supporting UDP with Proxy Protocol headers
24 December 2025 @ 11:21 pm
I want to host a Minecraft server on my home PC for both Java Edition (TCP) and Bedrock Edition (UDP).
Because my home connection is behind CGNAT, I cannot expose the server directly, so I’m using a very low-resource VPS as a public entry point and forwarding traffic to my PC over Tailscale.
The basic setup works using a simple reverse proxy (NGINX with the stream module), but there is a major limitation: on the backend server, all player connections appear to come from the VPS IP, not from the players’ real IP addresses.
I learned that this can be solved for TCP by enabling PROXY protocol, which prepends the original client IP to the connection. With NGINX acting as the proxy and the backend configured to accept PROXY protocol, this works perfectly for Minecraft Java Edition (TCP).
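For reference, a minimal nginx stream sketch of the working TCP path, assuming the backend is reachable over Tailscale at 100.64.0.2 (placeholder address) and has been configured to accept PROXY protocol:

stream {
    server {
        listen 25565;                  # Java Edition port on the VPS
        proxy_pass 100.64.0.2:25565;   # backend over Tailscale (placeholder)
        proxy_protocol on;             # prepend the real client IP
    }
}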
However, Minecraft Bedrock Edition uses UDP, and this is where I’m stuck:
NGINX only supports PROXY protocol over TCP, not UDP.
I tested HAProxy Community Edition…
DeepWiki-Open Docker image ignores OPENAI_API_KEY from .env, no LLM models loaded
24 December 2025 @ 11:05 am
I’m deploying DeepWiki-Open using Docker Compose on a RHEL 9 server and the application starts correctly, but no LLM models are available in the UI, preventing wiki generation.
Environment
OS: RHEL 9
Docker + Docker Compose
Image: deepwiki-open-deepwiki:latest
Issue
UI error:
“Failed to load model configurations. Using default options.”
Model Provider and Model Selection dropdowns are empty
No LLM backends appear to be registered
Logs
Container logs show:
OPENAI_API_KEY is not set
GOOGLE_API_KEY is not set
This indicates the container is not receiving environment variables from the .env file, despite documentation suggesting .env support.
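Worth noting: Docker Compose reads .env only to substitute variables inside the compose file itself; the values reach the container only if they are passed through explicitly. A minimal compose sketch, assuming the service is named deepwiki (placeholder name):

services:
  deepwiki:
    image: deepwiki-open-deepwiki:latest
    env_file:
      - .env                # inject every KEY=value pair from .env into the container
    # or pass individual keys through from the shell/.env:
    environment:
      - OPENAI_API_KEY=${OPENAI_API_KEY}
      - GOOGLE_API_KEY=${GOOGLE_API_KEY}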
Result
DeepWiki UI and backend are operational
Model configuration never loads
Wiki generation is impossible
Question
What is the correct way to pass API keys (OpenAI or compatible providers) to the DeepWiki-Open container?
Error 0x80070001 When Deploying 3rd Party MDR Agent Through Intune Apps
23 December 2025 @ 10:43 pm
I am trying to upload a Win32 app to Intune and deploy it to Intune-joined devices. When the test device receives the app, the install fails with the following Event Log entries:
Event ID 1040: Beginning a Windows Installer transaction
Event ID 11708: Installation failed
Event ID 1033: Installation success or error status: 1603
Intune reports the device install status details as Status: Failed, Status details: 0x80070001.
The app package is configured as instructed by the app's documentation. Research on the install error suggests it is permission-based, which seems odd to me, as the Intune manager should have full control, correct? Research on the status code given by Intune only finds things related to VMs and images. This is not relevant to this issue because the te…
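Since error 1603 is a generic Windows Installer failure, one way to narrow it down is to reproduce the install in the same context Intune uses (SYSTEM, by default) and capture a verbose MSI log; a minimal sketch, assuming the package wraps an MSI called agent.msi (placeholder name) and Sysinternals PsExec is available:

:: Launch a SYSTEM-context shell (Intune runs Win32 app installs as SYSTEM by default)
psexec -s -i cmd.exe
:: Inside that shell, run the installer with verbose logging
msiexec /i agent.msi /qn /l*v C:\Windows\Temp\agent-install.log

The resulting log usually names the exact action that returned 1603, which is far more specific than the Intune status code.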
Split VPN per user with wireguard and policy based routing
23 December 2025 @ 10:24 pm
There is a box that is connected to a VPN via a WireGuard link. I would like to implement a split-VPN setup such that traffic from specific system users' processes goes via the VPN. All other traffic should be routed via the default internet gateway.
Traffic from user vpner should be routed via the VPN network.
What I already have, which does not seem to work:
/etc/wireguard/vpn.conf
[Interface]
PrivateKey = xxx
FwMark = 0x7148 #29000
Address = 10.9.0.2/32
Table = 29000
PostUp = /etc/wireguard/ws.sh %i PostUp
PreDown = /etc/wireguard/ws.sh %i PreDown
[Peer]
PublicKey = xxx
AllowedIPs = 0.0.0.0/0, ::/0
Endpoint = xxx
/etc/wireguard/ws.sh
#!/bin/bash
set -e
IF=$1
MODE=$2
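A minimal sketch of what ws.sh could do from there, assuming user vpner and an otherwise unused mark 0x1 (deliberately different from WireGuard's own FwMark 0x7148, so the encrypted UDP packets still leave via the default gateway instead of looping into the tunnel):

case "$MODE" in
  PostUp)
    # mark all packets generated by processes running as user vpner
    iptables -t mangle -A OUTPUT -m owner --uid-owner vpner -j MARK --set-mark 0x1
    # send marked packets to the WireGuard routing table (Table = 29000)
    ip rule add fwmark 0x1 table 29000
    ;;
  PreDown)
    ip rule del fwmark 0x1 table 29000
    iptables -t mangle -D OUTPUT -m owner --uid-owner vpner -j MARK --set-mark 0x1
    ;;
esac

Depending on the distribution, net.ipv4.conf.all.rp_filter may also need to be set to 2 (loose) so the marked traffic passes reverse-path filtering.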
What is the difference between a Tri-mode RAID Controller and an Entry RAID Adapter?
23 December 2025 @ 9:13 pm
I currently have a system with a Tri-Mode RAID Controller. The device is EOL, and I am looking at an Entry RAID Adapter.
I need to determine the difference between each one. Here is what I know so far:
The Entry RAID Adapter does not require a Reserved Memory Region Reporting (RMRR) structure.
The Entry RAID Adapter uses the host CPU for RAID processing.
The Tri-Mode RAID Controller has its own processor and handles RAID without using the system's CPU.
ceph orch always spits out error "ENOENT: Module not found" no matter which command. Why?
23 December 2025 @ 6:57 pm
I'm now adding OSDs to my Ceph cluster. Creating the OSDs themselves worked on all nodes:
sudo ceph osd create
0
But once I tried to add the SSD on my nodes to the cluster, I got this:
mixtile@blade3n1:~$ sudo ceph orch apply osd --all-available-devices
Error ENOENT: Module not found
mixtile@blade3n1:~$ sudo ceph orch daemon add osd node-01:/dev/nvme0n1
Error ENOENT: Module not found
Other commands ended up with exactly the same error message, so I'm afraid there's something wrong with the orchestrator itself. Ceph commands that do not use the orchestrator apparently do work:
mixtile@blade3n1:~$ sudo ceph -s
cluster:
id: 3d540960-ce05-11f0-a5b1-0e281c59af8c
health: HEALTH_WARN
1 MDSs report slow metadata IOs
1/4 mons down, quorum blade3n4,blade3n2,blade3n3
Reduced data availability: 2 pg
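"Error ENOENT: Module not found" from ceph orch generally means no mgr orchestrator backend is active, for example because the cephadm mgr module is disabled or the active mgr daemon is down. A minimal check-and-enable sketch, assuming the cluster was deployed with cephadm:

sudo ceph mgr module ls | grep -i cephadm    # is the cephadm module enabled?
sudo ceph mgr module enable cephadm          # enable it if not
sudo ceph orch set backend cephadm           # select it as the orchestrator backend
sudo ceph orch status                        # should now report the cephadm backend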
Kerberos Authentication vs Domain Controller Authentication – superseded templates and RSA key length
23 December 2025 @ 4:56 pm
I currently have two certificates installed on my Domain Controllers:
Kerberos Authentication
  Validity: 1 year
  Key length: RSA 2048
  Hash: SHA-256
Domain Controller Authentication
  Validity: 5 years
  Key length: RSA 1024
  Hash: SHA-256
I want to fully move to Kerberos Authentication (RSA 2048) and deprecate the legacy Domain Controller Authentication certificate.
My questions are:
If I edit the Kerberos Authentication certificate template and add only the “Domain Controller Authentication” template under Superseded Templates, is that sufficient to ensure auto-enrollment replaces it?
Since the two templates use different RSA key lengths (2048 vs 1024), does this difference affect or block the supersedence behavior in any way?
The goal is …
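After editing the superseded-templates list, re-enrollment can be triggered and checked from a DC with built-in tools; a minimal sketch:

:: Trigger an autoenrollment pass immediately
certutil -pulse
:: List machine certificates; verify the template and key length
certutil -store My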
Calico default Pod CIDR 192.168.0.0/16 vs VirtualBox VM network 192.168.56.0/24 (kubeadm) — safe or must change? [migrated]
23 December 2025 @ 12:29 pm
I’m building a Kubernetes homelab in VirtualBox using kubeadm (1 control-plane + 2 workers) and Calico as the CNI. Each VM has two NICs:
NIC1: NAT (DHCP) for internet access
NIC2: Host-Only (static IPs) for node-to-node traffic
The Host-Only network is 192.168.56.0/24. Calico’s default IPPool CIDR in custom-resources.yaml is 192.168.0.0/16. Since 192.168.0.0/16 includes 192.168.56.0/24, the Pod network overlaps the VM/node network.
Example configuration:
VirtualBox Host-Only network: 192.168.56.0/24
VirtualBox NAT network: 10.0.2.0/24 (DHCP)
Nodes (Host-Only NIC / static IPs):
k8s-cp1: 192.168.56.21
k8s-w1: 192.168.56.22
k8s-w2: 192.168.56.23
Pod CIDR options:
Calico default IPPool: 192.168.0.0/16 (possible overlap)
Alternative Pod CIDR: 10.244.0.0/16 (non-overlapping example)
kubeadm init command options:
Option A (keep Calico default 192.168.0.0/16) …
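A minimal sketch of the non-overlapping route (the 10.244.0.0/16 alternative above); the advertise address and CIDR values are taken from the example configuration, everything else is a common default:

# initialise the control plane on the host-only NIC with a non-overlapping Pod CIDR
sudo kubeadm init \
  --apiserver-advertise-address=192.168.56.21 \
  --pod-network-cidr=10.244.0.0/16

# custom-resources.yaml: change the Calico IPPool cidr to match
apiVersion: operator.tigera.io/v1
kind: Installation
metadata:
  name: default
spec:
  calicoNetwork:
    ipPools:
      - cidr: 10.244.0.0/16
        encapsulation: VXLANCrossSubnet
        natOutgoing: Enabled
        nodeSelector: all()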
How to assess the trustworthiness of cloud browser automation platforms before trusting them with cookies/tokens/accounts and running scraping [closed]
23 December 2025 @ 12:11 pm
I’m doing browser automation for data extraction and partially for user-like scenarios such as logging in, filling out forms, and exporting data from a personal account area. I want to use a third-party cloud browser automation platform so I don’t have to maintain my own servers.
The problem is that I need to understand how much I can trust such platforms, because in the process I will have:
cookies / session tokens,
sometimes a login/password (or one-time codes),
proxies,
page results that may contain sensitive information.
At the same time, anti-bot checks and CAPTCHAs pop up almost everywhere (often reCAPTCHA/Turnstile). This kind of scenario doesn’t really qualify as “clean traffic”, so I’ll be integrating a third-party bypass/solver solution.
Questions:
By what practical signs/criteria should I evaluate the trustworthiness of these platforms (without “marketing”): what must I ask/check (logging, data storage, session handling, …)?