Common Server issues – FAQs and answers from those in the know
kube-api doesn't correctly resolve webhook service name using internal DNS
3 February 2026 @ 6:12 pm
I have an issue with kube-apiserver, which tries to resolve an audit-log webhook service name via an external DNS server (192.168.2.23, a DNS server on another LAN defined in /etc/resolv.conf) instead of via the internal CoreDNS service. This causes the webhook call to fail:
root@master01:/etc/kubernetes/manifests# crictl logs 6e5eaedc6391d 2>&1 | grep "webhook" | tail -5
2026-02-03T17:45:38.106154746Z AUDIT: id="9899a656-48a9-4a85-84ca-d855082123fb" stage="ResponseComplete" ip="10.2.10.71" method="watch" user="system:serviceaccount:argocd:argocd-application-controller" groups="\"system:serviceaccounts\",\"system:serviceaccounts:argocd\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" user-agent="argocd-application-controller/v0.0.0 (linux/amd64) kubernetes/$Format" n
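This behaviour is expected for a default kubeadm layout: kube-apiserver runs as a static pod with hostNetwork: true, so it resolves names through the node's /etc/resolv.conf rather than through CoreDNS. One possible workaround, sketched here under the assumption that the cluster DNS Service IP is 10.96.0.10 (a placeholder; check `kubectl -n kube-system get svc kube-dns` for yours), is to give the static pod an explicit dnsConfig:

```yaml
# /etc/kubernetes/manifests/kube-apiserver.yaml (fragment, illustrative)
# dnsPolicy "None" plus an explicit dnsConfig makes the pod query the
# cluster DNS Service directly; 10.96.0.10 is a placeholder.
spec:
  dnsPolicy: "None"
  dnsConfig:
    nameservers:
    - 10.96.0.10
    searches:
    - svc.cluster.local
    - cluster.local
```

Beware the bootstrap caveat: kube-apiserver can start before CoreDNS is up, so many setups instead point the webhook config at the Service's ClusterIP, or add a static hosts entry on the control-plane node.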
mariadbd: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled ... and the user ... not a member of sysctl kernel.io_uring_group
3 February 2026 @ 2:25 pm
I've been fighting to silence the following warning for an hour now:
[Warning] mariadbd: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup).
create_uring failed: falling back to libaio
2026-02-03 13:43:43 0 [Note] InnoDB: Using Linux native AIO
As, "somehow", suggested in this post, I tried to add capabilities and ulimits:
docker run -d --name mariadb \
--network lempnet --ip 172.40.0.120 \
--volume datadb:/var/lib/mysql \
--volume /sys/fs/cgroup/memory.pressure:/sys/fs/cgroup/memory.pressure \
--env MARIADB_ROOT_PASSWORD=pipo \
--restart unless-stopped \
--ulimit memlock=128000 --ulimit nproc=128000 \
--cap-add CAP_SYS_ADMIN
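Capabilities and ulimits inside the container cannot fix this EPERM: kernel.io_uring_disabled is a host-wide sysctl (2 disables io_uring entirely; 1 restricts it to members of the group named by kernel.io_uring_group), and the warning even says so. A sketch of the host-side fix, assuming you are comfortable re-enabling io_uring globally:

```ini
# /etc/sysctl.d/99-io-uring.conf (on the host, not in the container)
# 0 = io_uring allowed for all processes; the warning reported value 2,
# which disables io_uring entirely and cannot be overridden in-container.
kernel.io_uring_disabled = 0
```

Apply with `sysctl --system` (or `sysctl -w kernel.io_uring_disabled=0` for a one-off), then restart the container. The warning is otherwise harmless; as the log shows, mariadbd falls back to libaio.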
How to go about debugging Pleroma federation issues?
3 February 2026 @ 12:15 pm
TL;DR
I'm seeing ssl_verify_hostname:verify_cert_hostname in my logs, and my posts don't reach foreign servers. I don't know how to debug that.
longer story
what I did to get here
I've been running my personal pleroma-instance for like a year now.
Since that's kinda outdated, I tried to do an upgrade to the current 'stable' branch.
So I checked out that branch and tried to run it; that failed because my Elixir version was too old.
So I decided to make things even worse and upgraded my Debian setup from Bookworm to Trixie.
That out of the way, I ran migrations and started pleroma (some problems with broken gopher support, so I disabled that)… after a while it ran and I was using it for 3-4 days.
I noticed near complete lack of any interactions and tried to debug why. Turns out since the upgrade none of my posts federated to other instances. (I fetched their posts fine though)
What I tried to debug
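For context, ssl_verify_hostname:verify_cert_hostname comes from Erlang's TLS stack rejecting a peer certificate whose SAN doesn't cover the hostname being connected to. You can reproduce exactly that kind of check outside Pleroma with openssl. A self-contained sketch using a throwaway certificate (all names hypothetical):

```shell
# Generate a throwaway cert whose SAN is pleroma.example
openssl req -x509 -newkey rsa:2048 -nodes -keyout key.pem -out cert.pem \
  -days 1 -subj "/CN=pleroma.example" \
  -addext "subjectAltName=DNS:pleroma.example"

# -checkhost runs the same hostname-matching logic the TLS handshake uses
openssl x509 -in cert.pem -noout -checkhost pleroma.example   # matches
openssl x509 -in cert.pem -noout -checkhost other.example     # does not
```

Against a live remote instance, `openssl s_client -connect remote.example:443 -servername remote.example` shows the certificate actually served, which is worth comparing to the hostname Pleroma dials; a missing SNI name or a changed OTP verification default after the Trixie upgrade are plausible suspects.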
DNS Server And Email Server on same machine [closed]
3 February 2026 @ 12:13 pm
There is a DNS server and an email server, along with an email client app and two users.
The private setup consists of: a DNS server, an email server, an email client app, and a domain name (with at least two email IDs/users).
Explain the functions and the process of how these components work together.
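On the DNS side, the glue between the pieces is an MX record pointing at the mail server's A record. A minimal zone fragment for a hypothetical private domain example.lan:

```zone
; zone file fragment for example.lan (all names and IPs hypothetical)
example.lan.        IN  SOA  ns1.example.lan. admin.example.lan. (1 3600 600 86400 300)
example.lan.        IN  NS   ns1.example.lan.
ns1.example.lan.    IN  A    192.168.1.10   ; the DNS server
example.lan.        IN  MX   10 mail.example.lan.
mail.example.lan.   IN  A    192.168.1.11   ; the email server
```

The flow then is: the client app submits mail via SMTP to the mail server, the server looks up the MX for the recipient's domain (here, itself), delivers to the recipient's mailbox, and the second user's client retrieves it over IMAP or POP3.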
In what scenarios has adopting a cloud service added operational complexity rather than reducing it? [closed]
3 February 2026 @ 8:40 am
I’ve personally seen managed databases, auto-scaling, and complex networking setups backfire—sometimes costing hours or even days of troubleshooting. Curious what real-life experiences others have had and what lessons you all have learned.
Identifying Legacy SCSI RAID Configuration from Drives [closed]
3 February 2026 @ 2:25 am
I have two Ultrawide320 36GB drives from 2001 that appear to have been part of a RAID configuration, not sure which RAID type.
The previous owner doesn't have any details about the system they were deployed in. I am working up an analysis strategy to see if the data on these drives can be recovered.
I was planning on purchasing a PCI Ultrawide SCSI card to see what I can glean.
Is there a Linux tool that can help me determine what RAID configuration these two drives were part of, so I can mount them and retrieve the data?
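Two tools worth trying first: mdadm recognizes Linux software-RAID (md) superblocks, and dmraid knows many vendor BIOS/firmware RAID metadata formats of that era. A first-pass inspection sketch, assuming the drives appear as /dev/sda and /dev/sdb (hypothetical device names; run as root, read-only operations only):

```shell
# Look for Linux md superblocks; if present, this prints RAID level,
# member count, and array UUID for each drive
mdadm --examine /dev/sda /dev/sdb

# Look for vendor firmware RAID metadata (Promise, Adaptec HostRAID, ...)
dmraid -r

# If md metadata was found, attempt a read-only assemble
mdadm --assemble --readonly --scan
```

If the drives came out of a dedicated hardware RAID controller from 2001, the metadata may be in a proprietary on-disk format neither tool recognizes; in that case the usual path is to image both drives with ddrescue and reconstruct the stripe order and chunk size from the images.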
DNSSEC NSEC black lies break aggressive NSEC caching?
2 February 2026 @ 11:12 pm
I am currently experiencing an issue with hosting my DNSSEC-enabled zone at Bunny.net's DNS service:
Sporadically, DNSSEC-aware resolvers return SERVFAIL or NXDOMAIN responses for records that should exist.
An example is given by this RIPE Atlas measurement:
https://atlas.ripe.net/measurements/152625241/results
It shows that the anycast node with NSID dns4eu-fra-1 returns NXDOMAIN while the rest of the resolver network returns the correct reply.
My provider, Bunny.net, employs NSEC black lies to prevent zone enumeration.
I somehow have the feeling that something is wrong here, as my resolver returns an extended error message indicating that the NXDOMAIN response was cached ("2NEP: synthesized from aggressive cache"):
Shared secrets with CSI secret sync enabled
2 February 2026 @ 5:33 pm
I would be interested to understand how to handle shared secrets used by many resources (e.g. deployments) by using CSI Secret Provider Classes.
At the moment I have many Helm releases in the same namespace that reference and sync the same secret from Azure Key Vault. Looking at the YAML of a secret referenced more than once, I see many owner references: obviously, the ReplicaSets of all the Helm releases' revisions.
kind: Secret
metadata:
  creationTimestamp: "2026-01-20T17:22:24Z"
  labels:
    secrets-store.csi.k8s.io/managed: "true"
  name: rabbitmq-password
  namespace: stackit
  ownerReferences:
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: serveragent-7bb68c7d49
    uid: 23d5e5b9-7ee7-4b59-ac0b-04f32974d7e5
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: serveragent-c59895896
    uid: 45841770-e66f-48ac-b164-81a30835439b
  - apiVersion: apps/v1
    kind: ReplicaSet
    name: device-5fb776cfc4
    uid: 19e59d53-4b24-4644-8d0f-
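Those multiple owner references are how the Secrets Store CSI driver's sync feature works: the synced Secret is created when the first pod mounting the SecretProviderClass starts, every consuming workload is added as an owner, and the Secret is garbage-collected only when the last owner goes away. Sharing is therefore done at the SecretProviderClass level; a sketch of one class that several deployments can mount (vault name, tenant ID, and object names are hypothetical):

```yaml
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: shared-rabbitmq-password
  namespace: stackit
spec:
  provider: azure
  secretObjects:                      # sync into a regular Kubernetes Secret
  - secretName: rabbitmq-password
    type: Opaque
    data:
    - objectName: rabbitmq-password   # object name inside Key Vault
      key: password                   # key in the resulting Secret
  parameters:
    keyvaultName: my-keyvault                              # hypothetical
    tenantId: "00000000-0000-0000-0000-000000000000"       # hypothetical
    objects: |
      array:
        - |
          objectName: rabbitmq-password
          objectType: secret
```

Each deployment then adds a CSI volume with `driver: secrets-store.csi.k8s.io` and `volumeAttributes: {secretProviderClass: shared-rabbitmq-password}`; they all end up co-owning the one synced Secret, exactly as in the YAML above.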
How to set up 802.1X with EAP-TLS certificates to restrict switch access to authorized computers only? [closed]
2 February 2026 @ 1:17 pm
I am setting up a lab where network devices such as switches and access points only allow management from computers that are authorized with certificates and 802.1X authentication.
Specifically, I need help on:
Setting up the certificate authority and issuing client certificates.
Configuring 802.1X authentication on switches from vendors such as Cisco, Aruba, etc.
Configuring a RADIUS server (Windows NPS or FreeRADIUS) to work with the certificates.
Ensuring that only client devices with valid certificates can access the switch and configure it.
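The first item in the list, the CA and client certificates, can be sketched with plain openssl before any RADIUS or switch configuration exists. All names below are hypothetical lab values:

```shell
# Create a lab CA
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -keyout ca.key -out ca.crt -subj "/CN=Lab 802.1X CA"

# Issue a client certificate for an authorized machine
openssl req -newkey rsa:2048 -nodes -keyout client.key \
  -out client.csr -subj "/CN=admin-laptop"
openssl x509 -req -in client.csr -CA ca.crt -CAkey ca.key \
  -CAcreateserial -days 365 -out client.crt

# Verify the chain, which is essentially what the RADIUS server
# does during the EAP-TLS handshake
openssl verify -CAfile ca.crt client.crt
```

From there, FreeRADIUS's EAP module is pointed at ca.crt so only certificates it signed validate, and each switch is configured with the RADIUS server as its 802.1X authenticator; the vendor-specific switch commands differ, which is where Cisco/Aruba documentation takes over.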
I am looking for a step-by-step lab guide or tutorial, preferably one that shows the complete flow from certificate issuance to switch access restriction.
Any links to videos, guides, or detailed instructions would be very helpful.
Thank you!
How to use a different IP address on a wireguard interface point to point connection
2 February 2026 @ 11:35 am
I have a WireGuard tunnel between two Debian 12 machines, host1 and host2. They use "tunnel IPs" with a /31 mask, meant only for this point-to-point connection and not to be used elsewhere.
Host    IP           wg0-IP
host1   172.21.0.1   172.31.0.0/31
host2   172.22.0.1   172.31.0.1/31
host3   172.22.0.2   -
host1 can ping host2. The packets originate from the tunnel-IP 172.31.0.0, and host2 has a route back to host1.
host1 fails to ping host3. The packets arrive at host3, but host3 does not have a route back to 172.31.0.1/31, so the replies do not return.
host3 can ping host1, so static routes for 172.21.0.0/16 are set up correctly.
What is the most robust and reasonable way to set up the tunnel to be used onl
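One reasonably robust approach, sketched here with the addresses from the table above, is to keep the /31 purely for the two endpoints and pin the source address on routes toward host2's LAN, so traffic host1 originates there carries its own LAN address, which host3 already knows how to reach:

```shell
# On host1: route host2's LAN via the tunnel, but source locally
# originated packets from host1's LAN address (172.21.0.1) instead
# of the tunnel address, so host3 can reply over its existing
# 172.21.0.0/16 route.
ip route replace 172.22.0.0/16 dev wg0 src 172.21.0.1

# The /31 remains only for endpoint-to-endpoint traffic:
ip route replace 172.31.0.0/31 dev wg0
```

This assumes host2's WireGuard peer entry for host1 has AllowedIPs covering 172.21.0.1 (e.g. 172.21.0.0/16), since WireGuard drops inner packets whose source address isn't listed there.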