serverfault.com

Common Server issues – FAQs and answers from those in the know

IPv6 Auto Configuration Assigning IPv6 Address

4 February 2026 @ 6:53 am

I am trying to debug an issue where the auto-assignment of an IPv6 address works in one setup but not in another; hence my post, to understand how to find what the issue is. We have an IPv6 firmware device whose MAC address is 00:XX:XX:00:8C:94. When we power on the drive, a Raspberry Pi setup acting as a DHCPv6 server is connected to the firmware over the network. My findings for the working scenario: I see in the Wireshark logs and firmware logs that the firmware has subscribed to the IPv6 multicast address FF02::1:FF00:8C94. The RPi then sends data to that same multicast address FF02::1:FF00:8C94 with a Router Advertisement ICMPv6 option (Prefix Information: 2001:1200:1100:1000::/64), so the firmware gets itself assigned the IPv6 address 2001:1200:1100:1000:xxxx:xxxx:xxxx:xxxx. Non-working scenario: Firmware M
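For reference, FF02::1:FF00:8C94 is the solicited-node multicast group derived from the last 24 bits of the unicast address, which under EUI-64 SLAAC equal the last 24 bits of the MAC (00:8C:94 here). A minimal Python sketch of both derivations, using a placeholder MAC in place of the anonymized one above, shows what the non-working unit should be subscribing to; if that unit forms privacy or stable-privacy interface IDs instead of EUI-64, its solicited-node group will not match the MAC-derived one.

import ipaddress

def eui64_interface_id(mac: str) -> bytes:
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                       # flip the universal/local bit
    return bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])   # insert FF:FE in the middle

def slaac_address(prefix: str, mac: str) -> ipaddress.IPv6Address:
    net = ipaddress.IPv6Network(prefix)
    return ipaddress.IPv6Address(int(net.network_address) | int.from_bytes(eui64_interface_id(mac), "big"))

def solicited_node(addr: ipaddress.IPv6Address) -> ipaddress.IPv6Address:
    low24 = int(addr) & 0xFFFFFF                       # last 24 bits of the unicast address
    return ipaddress.IPv6Address(int(ipaddress.IPv6Address("ff02::1:ff00:0")) | low24)

mac = "00:11:22:00:8c:94"                              # placeholder for the anonymized MAC
addr = slaac_address("2001:1200:1100:1000::/64", mac)
print(addr, solicited_node(addr))                      # ...:8c94 and ff02::1:ff00:8c94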

HMAC key error on Invidious

4 February 2026 @ 3:08 am

Following the Invidious installation guide, I created the following Docker Compose file:

services:
  invidious:
    image: quay.io/invidious/invidious:latest
    # image: quay.io/invidious/invidious:latest-arm64 # ARM64/AArch64 devices
    restart: unless-stopped
    # Remove "127.0.0.1:" if used from an external IP
    ports:
      - "3000:3000"
    environment:
      # Please read the following file for a comprehensive list of all available
      # configuration options and their associated syntax:
      # https://github.com/iv-org/invidious/blob/master/config/config.example.yml
      INVIDIOUS_CONFIG: |
        db:
          dbname: invidious
          user: kemal
          password: kemal
          host: invidious-db
          port: 5432
        check_tables: true
        invidious_companion:
          # URL used for the internal communication between invidious and invidious companion
          # There is no need to change that except if Invidious companion
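For what it's worth, the usual cause of an HMAC key error is a missing hmac_key entry in the Invidious config (the INVIDIOUS_CONFIG block above). A random value can be generated with a one-liner; a sketch in Python:

import secrets
print(secrets.token_hex(20))   # paste into the config as: hmac_key: "<output>"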

GPO for Folder Redirection fails intermittently for subset of users

3 February 2026 @ 11:59 pm

We have a Folder Redirection Group Policy that redirects the Desktop and Documents folders to the network share \\fileserver\users$. For roughly 10% of users on Windows 11 23H2, the folders fail to redirect at logon with error ID 112 in the event log, leaving the local folders in place. This is intermittent; logging off and on sometimes fixes it. Could this be a timing issue with drive mapping or network resource availability during logon? Should we implement a logon script delay or use the "Wait for network" policy, and if so, what's the current best practice?
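Intermittent failures like this are commonly a race between logon and network/DC availability, for which the usual first step is the "Always wait for the network at computer startup and logon" policy rather than a logon-script delay. To correlate failures with logon timing, a small diagnostic sketch (the provider name is an assumption; adjust it to match the source shown on your ID 112 events):

import subprocess

# Pull the 20 most recent Folder Redirection events from the Application log,
# newest first, so failure timestamps can be lined up with logon and network
# readiness on the affected machines.
query = "*[System[Provider[@Name='Microsoft-Windows-Folder Redirection']]]"
out = subprocess.run(
    ["wevtutil", "qe", "Application", f"/q:{query}", "/f:text", "/c:20", "/rd:true"],
    capture_output=True, text=True,
).stdout
print(out)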

Copy Scheduled Tasks to another server

3 February 2026 @ 10:38 pm

The following script (copied from a Copilot response):

Get-ScheduledTask | ForEach-Object {
    $taskName = $_.TaskName
    $taskPath = $_.TaskPath
    $exportPath = "C:\Tasks\$($taskName).xml"
    Export-ScheduledTask -TaskName $taskName -TaskPath $taskPath -OutputFile $exportPath
}

returns this error:

ForEach-Object : A parameter cannot be found that matches parameter name 'OutputFile'.
At C:\Users\myUserName.ENT\exportScheduledTasks.ps1:2 char:21
+ Get-ScheduledTask | ForEach-Object {
+                     ~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidArgument: (:) [ForEach-Object], ParameterBindingException
    + FullyQualifiedErrorId : NamedParameterNotFound,Microsoft.PowerShell.Commands.ForEachObjectCommand
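The error arises because Export-ScheduledTask has no -OutputFile parameter (so PowerShell ends up trying to bind it to ForEach-Object); the cmdlet instead writes the task's XML definition to the pipeline. A corrected sketch, assuming C:\Tasks already exists:

Get-ScheduledTask | ForEach-Object {
    # Export-ScheduledTask emits the task definition as an XML string;
    # capture it and write it to a file named after the task.
    $xml = Export-ScheduledTask -TaskName $_.TaskName -TaskPath $_.TaskPath
    $xml | Out-File -FilePath ("C:\Tasks\{0}.xml" -f $_.TaskName) -Encoding Unicode
}

Note that task names can repeat across task folders and may contain characters that are invalid in file names, so incorporating $_.TaskPath (sanitized) into the output file name may be worthwhile.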

Windows Server 2019 SMB client loses access to Hitachi HNAS share (BizTalk workload)

3 February 2026 @ 9:23 pm

I have Windows Server 2019 running BizTalk on the client side, which reads and writes files in a NAS folder on a Hitachi HNAS running version 14.9. BizTalk writes, for example, to a shared folder mapped as domain.int.local.xxx\sharefolder\test\test2. My problem is that every day (after less than 16 hours) the shared folder becomes unavailable, and I have to restart the Workstation service to release the SMB sessions/connections. I am troubleshooting this SMB stability issue between Windows Server 2019 (BizTalk) and the Hitachi HNAS 14.x.

Environment:
- Windows Server 2019 (SMB client)
- BizTalk Server (high file read/write workload)
- Hitachi NAS (HNAS) 14.x
- SMB protocol: mostly SMB 3.1.1, some 3.0.2
- Authentication: Kerberos
- NIC: VM (vmxnet3)
- NAS accessed via a single SMB alias/share

Problem: under load, the SMB share hosted on the Hitachi NAS becomes unavailable from the Windows Server 2019
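To pin down exactly when the share drops (and correlate that with the SMB client operational logs on the server and the session tables on the HNAS side), a crude availability probe can help; a sketch in Python, using the example path from the question:

import datetime, time

UNC = r"\\domain.int.local.xxx\sharefolder\test\test2\smb_probe.txt"   # example path from the question

while True:
    stamp = datetime.datetime.now().isoformat()
    try:
        # Append a timestamp to a file on the share every 30 seconds;
        # the first failure marks the start of the outage window.
        with open(UNC, "a") as f:
            f.write(stamp + " ok\n")
    except OSError as e:
        print(f"{stamp} share unreachable: {e}")
    time.sleep(30)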

Managing nssdb across multiple machines

3 February 2026 @ 7:25 pm

I'm looking into using nssdb for Apache certificate management, as opposed to certificate/key files. I've used nssdb only rarely and am going through the docs. One aspect I am trying to work out is how to distribute certificates in nssdb across multiple hosts. As it stands, I have both client and CA certs that I maintain in an RPM and am able to update via patching. I'm curious whether anyone is aware of a strategy to automatically ensure that certificates are up to date in a standard nssdb location across multiple hosts. I'd like to continue using RPM to manage certificates if at all possible, but I am unsure how I can use RPM or a script to validate the contents of nssdb.
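One way to keep using RPM is to ship the certificates as files and have a %post scriptlet import them with certutil -A, then verify the result. A sketch of the verification half, listing nicknames with certutil -L and diffing against the expected set (the database path and nicknames here are hypothetical):

import subprocess

NSSDB = "sql:/etc/pki/nssdb"                        # hypothetical standard location
EXPECTED = {"my-client-cert", "my-ca-cert"}         # nicknames the rpm is expected to install

out = subprocess.run(["certutil", "-L", "-d", NSSDB],
                     capture_output=True, text=True, check=True).stdout
# certutil -L prints a header, then one line per certificate:
# "<nickname>   <trust flags>". Skipping the first 4 lines is a heuristic
# for that header; the trust flags are the last whitespace-separated token.
installed = {line.rsplit(None, 1)[0].strip() for line in out.splitlines()[4:] if line.strip()}
missing = EXPECTED - installed
if missing:
    raise SystemExit(f"nssdb missing certificates: {missing}")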

kube-api doesn't correctly resolve webhook service name using internal DNS

3 February 2026 @ 6:12 pm

I have an issue with kube-apiserver, which tries to resolve an audit-log webhook service name using external DNS (192.168.2.23 is an external DNS server from another LAN, defined in /etc/resolv.conf) instead of the internal CoreDNS service. This causes the webhook call to fail:

root@master01:/etc/kubernetes/manifests# crictl logs 6e5eaedc6391d 2>&1 | grep "webhook" | tail -5
2026-02-03T17:45:38.106154746Z AUDIT: id="9899a656-48a9-4a85-84ca-d855082123fb" stage="ResponseComplete" ip="10.2.10.71" method="watch" user="system:serviceaccount:argocd:argocd-application-controller" groups="\"system:serviceaccounts\",\"system:serviceaccounts:argocd\",\"system:authenticated\"" as="<self>" asgroups="<lookup>" user-agent="argocd-application-controller/v0.0.0 (linux/amd64) kubernetes/$Format" n
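Note that kube-apiserver typically runs as a static pod on the host network, so it resolves names through the node's /etc/resolv.conf rather than through CoreDNS; Service DNS names therefore won't resolve unless the node's resolver can answer them. A quick way to confirm which resolver knows the name (the Service name and CoreDNS ClusterIP below are placeholders; `kubectl -n kube-system get svc kube-dns` shows the real ClusterIP):

import subprocess

NAME = "audit-webhook.logging.svc.cluster.local"   # hypothetical webhook Service name
for resolver in ("10.96.0.10", "192.168.2.23"):    # CoreDNS ClusterIP vs the external DNS from resolv.conf
    r = subprocess.run(["dig", "+short", f"@{resolver}", NAME],
                       capture_output=True, text=True)
    print(resolver, "->", r.stdout.strip() or r.stderr.strip() or "no answer")

Common workarounds include pointing the webhook kubeconfig at the Service's ClusterIP directly, or making the node's resolver able to answer cluster.local queries.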

mariadbd: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled ... and the user ... not a member of sysctl kernel.io_uring_group

3 February 2026 @ 2:25 pm

I've been fighting to silence the following warning for an hour now:

[Warning] mariadbd: io_uring_queue_init() failed with EPERM: sysctl kernel.io_uring_disabled has the value 2, or 1 and the user of the process is not a member of sysctl kernel.io_uring_group. (see man 2 io_uring_setup). create_uring failed: falling back to libaio
2026-02-03 13:43:43 0 [Note] InnoDB: Using Linux native AIO

As, "somehow", suggested in this post, I tried to add capabilities and ulimits:

docker run -d --name mariadb \
  --network lempnet --ip 172.40.0.120 \
  --volume datadb:/var/lib/mysql \
  --volume /sys/fs/cgroup/memory.pressure:/sys/fs/cgroup/memory.pressure \
  --env MARIADB_ROOT_PASSWORD=pipo \
  --restart unless-stopped \
  --ulimit memlock=128000 --ulimit nproc=128000 \
  --cap-add CAP_SYS_ADMIN
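Capabilities and ulimits won't help here: the message is about the host kernel's io_uring sysctls (added around kernel 6.6), which apply regardless of what the container is granted. A quick Python check of the two values the warning names, run on the host:

paths = {
    "kernel.io_uring_disabled": "/proc/sys/kernel/io_uring_disabled",
    "kernel.io_uring_group": "/proc/sys/kernel/io_uring_group",
}
for name, path in paths.items():
    try:
        with open(path) as f:
            print(name, "=", f.read().strip())
    except FileNotFoundError:
        print(name, "not present on this kernel")
# io_uring_disabled: 0 = enabled for everyone, 1 = restricted to members of
# kernel.io_uring_group, 2 = disabled entirely. Setting it back to 0 on the
# host (sysctl -w kernel.io_uring_disabled=0) is what actually silences the
# warning.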

How to go about debugging Pleroma federation issues?

3 February 2026 @ 12:15 pm

TL/DR: I'm seeing ssl_verify_hostname:verify_cert_hostname in my logs, my posts don't reach foreign servers, and I don't know how to debug that. Longer story, what I did to get here: I've been running my personal Pleroma instance for about a year now. Since that's kinda outdated, I tried to upgrade to the current 'stable' branch. So I checked that out with git and tried to run it; that failed because of a too-old Elixir version. So I decided to make things even worse and upgraded my Debian setup from Bookworm to Trixie. With that out of the way, I ran the migrations and started Pleroma (some problems with broken Gopher support, so I disabled that)… after a while it ran and I was using it for 3-4 days. I noticed a near-complete lack of any interactions and tried to debug why. It turns out that since the upgrade none of my posts have federated to other instances (I fetched their posts fine, though). What I tried to debug
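Since ssl_verify_hostname:verify_cert_hostname comes from Erlang's TLS hostname verification, a useful first split is checking whether outbound TLS from the same host verifies cleanly outside the BEAM; a sketch (the target host is a placeholder, pick an instance your posts fail to reach):

import socket, ssl

host = "mastodon.social"                 # placeholder target instance
ctx = ssl.create_default_context()       # verifies the chain and the hostname
with socket.create_connection((host, 443), timeout=10) as sock:
    with ctx.wrap_socket(sock, server_hostname=host) as tls:
        print("OK:", tls.version(), tls.getpeercert()["subject"])

If this succeeds while Pleroma still fails, the suspect is more likely the Erlang/OTP CA configuration or TLS options after the distro and Elixir upgrade than the network path.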

DNS Server And Email Server on same machine [closed]

3 February 2026 @ 12:13 pm

There is a DNS server and an email server, plus an email client app with two users. The private setup consists of a DNS server, an email server, an email client app, and a domain name (with at least two email users). Explain the functions and the process of how these components work together.