Common Server issues – FAQs and answers from those in the know
How to set up 802.1X with EAP-TLS certificates to restrict switch access to authorized computers only?
2 February 2026 @ 1:17 pm
I am trying to set up a lab environment where only computers that are authorized with certificates can log in to and configure a network device (switch, AP, etc.). I want to use 802.1X authentication.
Specifically, I would like guidance on:
Setting up the certificate authority and issuing client certificates (a minimal openssl sketch follows this list).
Configuring 802.1X authentication on the switch (any vendor: Cisco, Aruba, etc.).
Configuring a RADIUS server (Windows NPS or FreeRADIUS) to work with the certificates.
Ensuring that only clients with valid certificates can access the switch and configure it.
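For the CA step, here is a minimal sketch of what I have in mind, done with openssl; all file names and subjects are placeholders, and I understand a real EAP-TLS deployment would also add the proper key-usage extensions:

# Lab root CA (self-signed, lab use only)
openssl genrsa -out lab-ca.key 4096
openssl req -x509 -new -key lab-ca.key -sha256 -days 365 \
    -subj "/CN=Lab 802.1X CA" -out lab-ca.crt

# Client key + CSR, signed by the lab CA for EAP-TLS
openssl genrsa -out client1.key 2048
openssl req -new -key client1.key -subj "/CN=client1.lab.local" -out client1.csr
openssl x509 -req -in client1.csr -CA lab-ca.crt -CAkey lab-ca.key \
    -CAcreateserial -days 365 -out client1.crt

My understanding is that lab-ca.crt then gets configured as the trusted CA on the RADIUS server, and client1.crt/client1.key go onto the supplicant.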
I am looking for a step-by-step lab guide or tutorial, preferably one that shows the complete flow from certificate issuance to switch access restriction. Any links to videos, guides, or detailed instructions would be very helpful.
Thank you!
How to use a different IP address on a wireguard interface point to point connection
2 February 2026 @ 11:35 am
I have a Wireguard tunnel on two Debian 12 machines, host1 and host2. They use "tunnel IPs" with a /31 mask, intended only for this P2P connection and not to be used anywhere else.
Host     IP           wg0 IP
host1    172.21.0.1   172.31.0.0/31
host2    172.22.0.1   172.31.0.1/31
host3    172.22.0.2   -
host1 can ping host2. The packets originate from the tunnel IP 172.31.0.0, and host2 has a route back to host1.
host1 fails to ping host3. The packets arrive at host3, but host3 does not have a route back to 172.31.0.1/31, so the replies never return.
host3 can ping host1, so the static routes for 172.21.0.0/16 are set up correctly.
What is the most robust and reasonable way to set up the tunnel so that these IPs are used only for this point-to-point connection?
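For reference, the two workarounds I can see, sketched with standard iproute2/nftables commands (addresses taken from the table above):

# Option 1, on host3: add a return route for the tunnel /31 via host2
ip route add 172.31.0.0/31 via 172.22.0.1

# Option 2, on host2: masquerade forwarded traffic sourced from the
# tunnel /31, so other hosts never see the tunnel addresses at all
nft add table ip nat
nft add chain ip nat postrouting '{ type nat hook postrouting priority srcnat ; }'
nft add rule ip nat postrouting ip saddr 172.31.0.0/31 masquerade

Option 1 would need the route on every host that has to reply, which is what I would like to avoid.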
Asking advice on using PBR with kubernetes
2 February 2026 @ 9:32 am
My cluster info :
Kubernetes version: 1..3.7
Deployment: bare metal
Installation method: kubespray
Host OS: Rocky Linux 9
CNI: Calico v3.30.5 (VXLAN); kube-proxy in ipvs mode (later nftables) with strictARP.
CRI: containerd v2.1.5
IP forwarding was enabled
rp_filter was disabled
Here is the story:
I am learning Kubernetes, but when configuring my POCs I try to stay close to a real-life deployment.
For example, in my POC I have:
A firewall (OPNsense) that fronts the whole platform; it is the entry point of my POC.
1 VM used as a deployer (Ansible) machine.
3 VMs as control plane nodes.
3 VMs as workers and a Rook-Ceph cluster.
I’ve created several networks :
A management network, where I can access the platform with ssh and kubectl; this network is for admins. In my mind it can also be reached from the LAN.
A pod-to-pod network (Calico).
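The policy-based-routing part I have experimented with so far is plain Linux policy routing on the nodes, sketched below; the interface name and subnets are hypothetical placeholders for my management network:

# Reply to management traffic via the management NIC (placeholders:
# eth1 is the management interface, 10.0.10.0/24 the management subnet)
echo "100 mgmt" >> /etc/iproute2/rt_tables
ip route add default via 10.0.10.1 dev eth1 table mgmt
ip rule add from 10.0.10.0/24 lookup mgmt priority 1000
ip rule show    # priority 1000 is consulted before the main table (32766)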
Ubuntu Crontab missing sbin in PATH var
2 February 2026 @ 9:07 am
My default logrotate config uses this postrotate block to reload nginx when the logs rotate:
postrotate
invoke-rc.d nginx rotate > /dev/null 2>&1
endscript
What I have found is that when it runs via crontab, invoke-rc.d is not found.
This is because $PATH does not contain /usr/sbin, but just /usr/bin:/bin.
I have no custom crontab setup, just the one that comes with Ubuntu and Logrotate.
How can I fix this so that cron sees /usr/sbin too?
I am using Docker to build an Ubuntu 22.04 environment.
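The two fixes I am weighing, as sketches (the PATH value is the stock Ubuntu one):

# Option 1: declare PATH at the top of the crontab, so cron jobs can
# find binaries in the sbin directories as well
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

# Option 2: sidestep PATH entirely and call the binary by absolute path
postrotate
    /usr/sbin/invoke-rc.d nginx rotate > /dev/null 2>&1
endscript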
If the cloud is “highly available,” why do outages still take entire regions down? [closed]
2 February 2026 @ 5:53 am
Cloud platforms promise scalability, agility, and operational efficiency, but no cloud is perfect.
If the cloud is “highly available,” why do outages still take entire regions down?
Are we designing for failure—or just hoping our cloud provider won’t fail today?
When your cloud goes down, is it an incident… or a design flaw?
Robocopy to a network drive in task scheduler
2 February 2026 @ 2:50 am
I'm trying to do a simple robocopy to a network drive via PowerShell and call that from Task Scheduler.
I can't get the network drive to be visible to PowerShell in Task Scheduler. The scripts I test run fine from CMD or in the PowerShell ISE.
This is the command:
robocopy "E:\Backups\Offsite" "Y:\" /MIR /MT:64 /E
This works fine, except in Task Scheduler.
If I use the UNC path then I get "Error 161 ... The specified path is invalid":
robocopy "E:\Backups\Offsite" "\\<obscured>.com.au" /MIR /MT:64 /E
I have tried pushd and NET USE, but none of those solve the problem when run via Task Scheduler.
This is on Windows Server 2022. I have "Run whether user is logged on or not" checked, "Run with highest privileges" checked, and nothing else checked.
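For completeness, the workaround I am about to try, sketched with a placeholder server/share. Drive letters mapped in my interactive session do not exist in the task's own session, and I suspect my bare \\<obscured>.com.au destination may also be missing a share name:

REM Map the share inside the script, copy, then clean up
REM (\\fileserver.example.com.au\backups is a placeholder path)
net use Y: \\fileserver.example.com.au\backups /persistent:no
robocopy "E:\Backups\Offsite" "Y:\" /MIR /MT:64
net use Y: /delete

(/MIR already implies /E, so I have dropped the extra switch here.)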
Why does the same lvreduce command have such different execution times on consecutive runs?
1 February 2026 @ 8:25 pm
I have a VM with a large 6 TB disk. The VM has been booted from an Ubuntu 24 live ISO. I am incrementally shrinking the logical volume with the command time lvreduce --resizefs --size 1T /dev/vgname/lvname.
The underlying LUNs in the datastore saw little to no activity while this test was conducted.
I have so far run this command 4 consecutive times and got quite different execution times:
1st run: real 280m
2nd run: real 327m
3rd run: real 160m
4th run: real 437m
The end goal is to replace the 6 TB disk with a smaller 1 TB disk while affecting the OS as little as possible.
Why are the execution times so different?
Is this the optimal approach for reducing the size of a logical volume?
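For reference, the explicit two-step equivalent I considered, assuming the filesystem is ext4 (device names are placeholders); it would let me time the filesystem shrink separately from the LV shrink:

e2fsck -f /dev/vgname/lvname        # mandatory check before an offline shrink
resize2fs /dev/vgname/lvname 1T     # shrink the filesystem first
lvreduce -L 1T /dev/vgname/lvname   # then shrink the LV down to match

The order matters: shrinking the LV below the filesystem's size would destroy data, which is why lvreduce --resizefs does both in one step.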
Can I shrink an mdadm raid5 partition that is larger than the others?
1 February 2026 @ 2:08 pm
I have a 4-drive mdadm raid5 array comprising 3x 2TB drives and 1x 4TB drive which recently replaced a failed drive. I plan to make all subsequent replacements 4TB drives so eventually I will enlarge the array, but this first one simply went into the array as a single 4TB partition, with the headroom "dead space" for the meantime. (I've done the array enlargement process once already with 1TB -> 2TB drives.)
This time, I forgot to leave ~100 MB of slack space at the end of the drive, in case subsequent drives are slightly smaller. In this instance I'm quite likely to buy the same model for the remaining 3x 4TB drives, but I've seen it suggested that even then the drives' capacities may not be identical.
What are my options for reducing the size of the new partition? I know I can always fail+remove the new drive and start over, but I'd like to save the time and disk wear if I can.
From the manpage it appears I could --fail
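What I am hoping is possible instead, as a sketch (device names are placeholders; I would verify the used size and the metadata data offset before touching anything):

mdadm --detail /dev/md0 | grep 'Used Dev Size'   # space actually used per member
mdadm --examine /dev/sdd1 | grep 'Data Offset'   # where the data starts (v1.2 metadata)
# then shrink /dev/sdd1 with parted/fdisk to anything comfortably larger
# than data offset + used dev size

My assumption is that the array only ever touches the used portion of the oversized member, so an in-place partition shrink above that point should be safe, but I would like confirmation.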
NPM cannot connect to server using Websocket
31 January 2026 @ 9:07 pm
I have a web application that communicates with a server using Websockets. When I access it directly, it works without problems. Unfortunately, when I access it through Nginx Proxy Manager, I get the following message:
Cannot connect to server: timeout
Check if server is reachable at
ws://talker.srv:8000/_event
I have read the documentation about Websocket proxying at:
https://nginx.org/en/docs/http/websocket.html
I have set the Websocket Support to "on", and in the "Custom Locations" tab, I have put in the following:
Location: /_event/
Scheme: http
Forward Hostname/IP: <My-Systems-IP-Address>
Forward Port: 8000
And I have added the following to the location:
location /_event/ {
    proxy_pass http://0.0.0.0:8000;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}
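For comparison, the reference snippet in the nginx documentation linked above also forces HTTP/1.1 on the upstream connection, which WebSocket proxying requires; adapted to my location it would be this sketch:

location /_event/ {
    proxy_http_version 1.1;    # WebSockets need HTTP/1.1 to the upstream
    proxy_pass http://0.0.0.0:8000;
    proxy_set_header Upgrade $http_upgrade;
    proxy_set_header Connection "upgrade";
}

I am not sure whether Nginx Proxy Manager's "Websocket Support" toggle already injects these directives, so this may be redundant.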
RAID1 and RAID6 arrays disappeared in Dell PERC6, all disks are listed as available
31 January 2026 @ 4:51 pm
We have a Dell PowerEdge R740xd server with a PERC H730P Mini RAID controller. It has 2x 1.28 TB SAS hard drives, 1x 12 TB SAS drive, and 6x 12 TB SATA hard drives installed.
RAID1 was assembled from the 2x 1.28 TB SAS drives, with the 12 TB SAS drive as a hot spare; the Windows Server OS lived on this array. RAID6 (a data archive) was assembled from the 6x 12 TB SATA drives.
We were then tasked with adding a RAID5 array on 3x 14 TB SATA drives to store information. While creating the RAID5 via Dell Lifecycle Controller -> Hardware Configuration -> Configure RAID, we made a fatal error: we checked the "Select all available disks" option, and all of the listed disks turned out to be available. As a result, after the new RAID5 was created, only that one RAID5 appeared in the Virtual Disk Management menu; RAID1 and RAID6 had disappeared. Moreover, the SAS drives did not end up in the RAID5; they are in Ready mode. The RAID5 array was fast-initialized.
Realizin