serverfault.com


Common Server issues – FAQs and answers from those in the know

Does an MTU of 65202 make sense in a PCIe-based cluster network?

27 April 2026 @ 3:40 pm

I'm migrating from an old stand-alone server to a 4-way cluster, whose nodes (and control board, which also acts as a router to the outside world) are networked by a backplane with a PCI Express packet switch (see the datasheet for details). Whilst fighting slow operation and instabilities, I found out that the manufacturer had set the MTU of the PCIe link to 65202, which may be normal for loopback connections, but not for a "real" network interface (irrelevant entries omitted):

mixtile@blade3n3:~$ ip addr show
[…]
6: pci0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 65202 qdisc fq_codel state UP group default qlen 50000
    link/ether 02:b9:24:b7:73:0a brd ff:ff:ff:ff:ff:ff
    inet 10.20.0.13/24 metric 100 brd 10.20.0.255 scope global pci0
       valid_lft forever preferred_lft forever
    inet6 fe80::b9:24ff:feb7:730a/64 scope link
       valid_lft f
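For what it's worth, 65202 is within the hard limit IP imposes: the 16-bit length field caps a usable MTU at 65535, with 68 as the IPv4 minimum, so a huge MTU on a point-to-point PCIe virtual NIC is legal even if unusual. A minimal sketch for sanity-checking a value and for trying a conventional one on the interface (the `ip link` commands are illustrative, need root, and use the `pci0` name from the output above):

```shell
# Sanity-check an MTU value against the IP limits (68 is the IPv4 minimum,
# 65535 the largest size a 16-bit IP length field can describe).
mtu_sane() {
  [ "$1" -ge 68 ] && [ "$1" -le 65535 ]
}

mtu_sane 65202 && echo "65202 is a legal IP MTU"

# To test whether the huge MTU is behind the instabilities, try a
# conventional value on the interface (requires root):
#   ip link set dev pci0 mtu 9000    # jumbo-frame size
#   ip link set dev pci0 mtu 1500    # standard Ethernet size
```

If stability improves at 1500 or 9000, the problem is more likely fragmentation/offload handling in the PCIe NIC driver than the MTU value itself being out of spec.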

What permissions is my user lacking for zfs send pool replication?

27 April 2026 @ 2:52 pm

Sending from a zraid0-1 on TrueNAS 26.0.0-BETA.1 (zfs-2.4.1-1 zfs-kmod-2.4.1-1) to a zfs zraid0-1 array on Zima's CasaOS (zfs-2.3.2-1 zfs-kmod-2.3.2-1). I'm probably going to install TrueNAS 26.0.0-BETA.1 on the Zima (Zima is a hardware brand) host if I can't figure this out today. Thanks for any suggestions.

On the target (recipient) host I set these permissions:

zfs allow -u supdog -d receive,create,mount,dedup,snapdir,copies,userprop,keyformat,keylocation,pbkdf2iters zima/xool

Then I sent using this command:

zfs send -w -c -R xool@rebalance | ssh [email protected] zfs receive -s -F zima/xool

The transfer ran for several hours, and towards the end I started seeing these errors:

cannot receive org.freenas:description property on zima/xool/supdog: permission denied
cannot receive copies property on zima/xool/supdog: permission denied
cannot receive snapdir property on zima/xool/.system: permission denied
cannot receive readonly property on zima/x
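Two gaps stand out in the grant itself. First, `zfs allow -d` scopes the delegation to descendant datasets only; omitting the scope flag (or using `-ld`) covers the dataset and its descendants. Second, `readonly` appears in the last error but not in the granted list at all. A quick runnable check of the granted set against the properties named in the errors (the `zfs allow` command in the output is only an example of how the missing grant might be added, not a verified fix):

```shell
# Compare the permissions granted in the question against the properties
# the receive was denied on. readonly turns out to be absent from the grant.
granted="receive,create,mount,dedup,snapdir,copies,userprop,keyformat,keylocation,pbkdf2iters"
for p in snapdir copies readonly; do
  case ",$granted," in
    *",$p,"*) echo "$p: granted" ;;
    *)        echo "$p: missing - e.g. zfs allow -u supdog -ld $p zima/xool" ;;
  esac
done
```

`zfs allow zima/xool` (no other arguments) on the target prints the effective delegation and its scope, which is the quickest way to confirm what the receiving user actually holds.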

Scheduled Task set to run every X minutes does not work after server reboot

27 April 2026 @ 1:46 pm

I have a script set to run every 5 minutes in the Windows Server 2019 Task Scheduler, and after a server reboot it never just resumes at the next expected interval. To fix it I have to edit the schedule, set it to the next expected run time, then save (and re-enter the domain account password). What's going on here? Is it not maintaining the saved credentials across the reboot? Do I have some checkbox set wrong on the "Conditions" or "Settings" tab? Am I missing a role? To clarify, I'm using a scheduled trigger, set to "daily" at an arbitrary time (say midnight), with "repeat task every 5 minutes". If it ran at 10:00am and the server is rebooted at 10:02am, shouldn't it know that it was next scheduled to run at 10:05am? (This is how schedules work in SQL Server Agent, for example.) Or will it not run until the following midnight?
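By default the Task Scheduler simply skips any start it missed; the behaviour you're expecting is gated by the "Settings" tab checkbox "Run task as soon as possible after a scheduled start is missed". In a task's exported XML definition that checkbox is the `StartWhenAvailable` element (fragment of a task definition, not a complete file):

```xml
<Settings>
  <!-- "Run task as soon as possible after a scheduled start is missed" -->
  <StartWhenAvailable>true</StartWhenAvailable>
</Settings>
```

It's worth testing on your box whether this also catches up a missed *repetition* of a trigger mid-pattern, or only a missed start of the daily trigger itself; the documentation is vague on that distinction. The password re-prompt on save is normal editing behaviour for tasks stored with domain credentials, not a sign the credentials were lost across the reboot.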

Clarification on MACC Eligibility & Reference Architecture for Hybrid SaaS (Azure Marketplace)

27 April 2026 @ 12:43 pm

We are currently in the process of listing our hybrid SaaS solution on the Azure Marketplace as a transactable offer and would like clarification on the path toward MACC eligibility. Our understanding of the progression is:

1. Publish SaaS offer on Azure Marketplace
2. Achieve Co-sell Ready status
3. Qualify for Azure IP Co-sell eligibility
4. Become eligible for MACC-aligned deals

We have a few specific questions regarding hybrid SaaS scenarios.

Reference Architecture Diagram (RAD) Requirements

For Azure IP Co-sell eligibility, we understand that a Reference Architecture Diagram demonstrating Azure service utilization is required. In our case, the product is a hybrid SaaS solution with limited direct Azure workload hosting. Most of our Azure interaction is through:

- Azure APIs / integration endpoints
- Azure Marketplace SaaS fulfillment A

On Rocky Linux, how can I know *before* installing it if updating a package will require a reboot?

27 April 2026 @ 8:08 am

After updating one or more packages with dnf, I usually use the needs-restarting command to find out if the server needs a reboot, but by the time the dnf update command finishes, the update has already been done and I have to reboot. What I'd like is to know, before installing a package, whether that update will require a reboot. The reason is simple: to keep the system updated automatically and postpone updates that require a reboot until a later manual intervention. I'd need something like:

[user@host ~]# needs-a-reboot-after <PackageName> [enter]
If you install/update "<PackageName>", you'll need to reboot the server.
[user@host ~]#

Is there already something out there that does this? Thanks everyone...
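As far as I know, `needs-restarting` decides after the fact by comparing running processes against updated files, but for core packages its verdict effectively reduces to a fixed list (kernel, glibc, systemd, and friends). A sketch of the asked-for command under that assumption; the list here is abridged and hypothetical, so check the `needs_restarting` plugin source shipped with dnf-utils on your system for the authoritative set:

```shell
# needs-a-reboot-after: sketch of the asked-for check, assuming a fixed list
# of core packages whose update implies a reboot (list abridged; verify
# against dnf's needs_restarting plugin on your host).
needs_a_reboot_after() {
  reboot_pkgs="kernel kernel-core kernel-rt glibc glibc-common systemd dbus linux-firmware microcode_ctl"
  for p in $reboot_pkgs; do
    if [ "$p" = "$1" ]; then
      echo "If you install/update \"$1\", you'll need to reboot the server."
      return 0
    fi
  done
  echo "No reboot expected for \"$1\"."
  return 1
}

needs_a_reboot_after vim-enhanced || true   # example non-core package
needs_a_reboot_after kernel-core
```

For services (as opposed to full reboots), `dnf needs-restarting` post-update remains the reliable answer, since whether a daemon must restart depends on which of its files the update actually touched.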

Ceph web dashboard can't display OSDs and devices

26 April 2026 @ 11:58 am

I've now got my Ceph cluster almost ready to use, but in the web dashboard I don't see any of the four OSDs I've created. Neither do I find any of the NVMe drives the OSDs reside on. Under Expand cluster → OSDs I get the error message:

No devices (HDD, SSD or NVME) were found. Creation of OSDs will remain disabled until devices are added.

Here is what I get on the command line:

mixtile@blade3n1:~$ sudo ceph osd tree
[sudo] password for mixtile:
ID  CLASS  WEIGHT    TYPE NAME          STATUS  REWEIGHT  PRI-AFF
-1         29.80798  root default
-9          7.45200      host blade3n1
 3    ssd   7.45200          osd.3          up   1.00000  1.00000
-7          7.45200      host blade3n2
 2    ssd   7.45200          osd.2          up   1.
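The CLI clearly sees the OSDs, which points at the dashboard's data source rather than the cluster: the "Expand cluster" device page is populated by the orchestrator module, not by the OSD map, so a cluster deployed without a working orchestrator backend (an assumption here is that yours was meant to use cephadm) shows an empty device list even while `ceph osd tree` is healthy. A sketch of what to check, with a runnable line confirming the OSD map excerpt itself:

```shell
# The dashboard's device list comes from the orchestrator backend, so check
# it directly on a mgr node (shown as comments - these need a live cluster):
#   sudo ceph orch status            # expect e.g. "Backend: cephadm"
#   sudo ceph orch device ls         # what the dashboard's device page queries
#   sudo ceph mgr module ls | grep -i orch
#
# Meanwhile, counting OSD entries straight from the `ceph osd tree` excerpt
# in the question shows the OSD map itself is fine:
osd_tree=' 3   ssd  7.45200  osd.3  up  1.00000  1.00000
 2   ssd  7.45200  osd.2  up  1.'
printf '%s\n' "$osd_tree" | grep -c 'osd\.'
```

If `ceph orch status` reports no backend, enabling/repairing the orchestrator (or creating OSDs via the CLI and treating the dashboard page as cosmetic) is the usual way forward.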

After a while, /etc/resolv.conf stops using /etc/netns/X/resolv.conf

18 February 2022 @ 11:44 am

My setup:

- /etc/ns-shared-resolv.conf is written to regularly with "nameserver x.x.x.x", updated from a script
- /etc/netns/ag2/resolv.conf is a symlink to the above (along with ag3, ag4, ...) for central DNS settings in the root netns
- a long-running service in the ag2 netns (via ip netns exec ag2 ..., launched from a systemd service)

What happens: everything works fine... for some arbitrary number of hours. After that, DNS requests fail. Using tcpdump I can see DNS requests going to "the wrong place": the DNS server from the root /etc/resolv.conf, NOT the netns one. At the same time, while that's not working, ip netns exec ag2 cat /etc/resolv.conf shows the correct settings. If I start a new ip netns exec ag2 bash shell, it gets the "corre
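One plausible mechanism worth ruling out: `ip netns exec` works by bind-mounting `/etc/netns/ag2/resolv.conf` over `/etc/resolv.conf` in the process's private mount namespace, and a bind mount pins an *inode*. If the updating script ever replaces the shared file via write-temp-then-rename (the pattern resolvconf-style tools use), mounts established earlier keep pointing at the dead inode, while a fresh `ip netns exec` shell gets a new, correct mount, which matches your symptoms exactly. A minimal demonstration of the inode difference, using stand-in temp files rather than the real paths:

```shell
# Demonstration: a rename-based update changes the file's inode (breaking any
# bind mount made earlier), while an in-place rewrite keeps the inode.
f=$(mktemp)
echo "nameserver 192.0.2.1" > "$f"
before=$(stat -c %i "$f")

printf 'nameserver 192.0.2.2\n' > "$f"    # in-place rewrite: same inode
[ "$(stat -c %i "$f")" = "$before" ] && echo "in-place write: inode kept"

tmp=$(mktemp)
printf 'nameserver 192.0.2.3\n' > "$tmp"
mv "$tmp" "$f"                            # rename: NEW inode
[ "$(stat -c %i "$f")" = "$before" ] || echo "rename: inode changed"
rm -f "$f"
```

To confirm it live, run `ip netns exec ag2 findmnt /etc/resolv.conf` from inside the long-running service's mount namespace (e.g. via `nsenter -t <pid> -m`) when DNS breaks; if the mount is gone or stale there but present in a fresh shell, the writer's update style is the culprit, and rewriting the file in place fixes it.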

chronyd on debian could not open IPv6 NTP socket

22 November 2019 @ 2:39 am

$ sudo journalctl -fu chrony
-- Logs begin at Thu 2019-11-14 03:51:16 UTC. --
Nov 22 01:46:40 miranda-ntp-server-01 chronyd[5984]: Selected source 169.254.169.123
Nov 22 02:29:29 miranda-ntp-server-01 systemd[1]: Stopping chrony, an NTP client/server...
Nov 22 02:29:29 miranda-ntp-server-01 systemd[1]: Stopped chrony, an NTP client/server.
Nov 22 02:29:29 miranda-ntp-server-01 systemd[1]: Starting chrony, an NTP client/server...
Nov 22 02:29:29 miranda-ntp-server-01 chronyd[9999]: chronyd version 3.0 starting (+CMDMON +NTP +REFCLOCK +RTC +PRIVDROP +SCFILTER +SECHASH +SIGND +ASYNCDNS +IPV6 -DEBUG)
Nov 22 02:29:29 miranda-ntp-server-01 chronyd[9999]: Could not open IPv6 NTP socket : Address family not supported by protocol
Nov 22 02:29:29 miranda-ntp-server-01 systemd[1]: Started chrony, an NTP client/server.
Nov 22 02:29:29 miranda-ntp-server-01 chronyd[9999]: Frequency 29.566 +/- 0.024 ppm read from /var/lib/chrony/drift
Nov 22 02:29:29 miranda-ntp-server-01 chronyd[9999]: U
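"Address family not supported by protocol" from a `socket()` call almost always means the kernel was booted with IPv6 disabled (`ipv6.disable=1` on the kernel command line) rather than anything wrong with chrony itself; chronyd logs the message and carries on IPv4-only, as the subsequent "Started chrony" lines show. A quick check, plus the benign way to silence the message (`-4` is a real chronyd option; the Debian `/etc/default/chrony` location is my assumption for this packaging):

```shell
# If the kernel has IPv6 enabled, /proc/net/if_inet6 exists; when booted
# with ipv6.disable=1 it does not, and every AF_INET6 socket() call fails
# with "Address family not supported by protocol".
if [ -e /proc/net/if_inet6 ]; then
  echo "kernel IPv6 enabled - check sysctl net.ipv6.conf.all.disable_ipv6 instead"
else
  echo "kernel IPv6 disabled - harmless warning; run chronyd IPv4-only with -4"
fi
# Debian packaging: DAEMON_OPTS="-4" in /etc/default/chrony
```

Note the difference between the two ways of disabling IPv6: the `disable_ipv6` sysctl leaves the address family compiled in (sockets open but addresses are removed), while `ipv6.disable=1` removes it entirely, which is what produces this exact error string.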

adfs giving error on authnrequest message

29 January 2019 @ 8:22 pm

I have a SAML2 service provider and am trying to set up SSO with an ADFS identity provider. Currently my service provider is only working with Okta and OneLogin. When they initiate the authentication (send me a Response message), it succeeds, but when authentication is initiated from my side (sending them an AuthnRequest message), ADFS is erroring. I'm unable to determine why, and I'm not very familiar with ADFS. The error logs provided by the identity provider (anonymized) have this:

Verbose,1/25/2019 8:37:10 AM,AD FS Tracing,70,None,"
Message after decoding:
<?xml version=""1.0"" standalone=""yes""?>
<samlp:AuthnRequest
    xmlns:samlp=""urn:oasis:names:tc:SAML:2.0:protocol""
    xmlns:saml=""urn:oasis:names:tc:SAML:2.0:assertion""
    Destination=""https://exmaple.com/adfs/ls/""
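ADFS is strict about a few AuthnRequest details that Okta and OneLogin tolerate: the `Destination` must exactly match the ADFS endpoint URL (trailing slash included), the `Issuer` must exactly match the relying-party trust's identifier, and a signed request is rejected unless the trust holds your signing certificate with a matching signing requirement. The concrete failure reason usually lands in the ADFS Admin event log (typically event ID 364), which is more readable than the tracing log. For comparison, a minimal AuthnRequest of the shape ADFS generally accepts (all entity IDs and URLs below are placeholders, not values from your logs):

```xml
<samlp:AuthnRequest
    xmlns:samlp="urn:oasis:names:tc:SAML:2.0:protocol"
    xmlns:saml="urn:oasis:names:tc:SAML:2.0:assertion"
    ID="_unique-request-id"
    Version="2.0"
    IssueInstant="2019-01-25T08:37:10Z"
    Destination="https://adfs.example.com/adfs/ls/"
    AssertionConsumerServiceURL="https://sp.example.com/saml/acs"
    ProtocolBinding="urn:oasis:names:tc:SAML:2.0:bindings:HTTP-POST">
  <saml:Issuer>https://sp.example.com/metadata</saml:Issuer>
</samlp:AuthnRequest>
```

Diffing your decoded request against a shape like this, attribute by attribute, plus reading the paired event 364, is usually the fastest way to find which field ADFS objects to.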

Error deploying large WAR file on tomcat 9, at startup

21 November 2018 @ 7:13 pm

JRE version: 1.8.0_191-b12
Tomcat version: 9.0.13
Windows 10

I have a large WAR file (300MB) with several hundred files, classes, Struts actions, etc. When I start Tomcat 9.0.13 from a Windows Service, I get the following error when I try to access the application via a URL:

21-Nov-2018 12:49:42.544 SEVERE [http-nio-9090-exec-1] org.apache.catalina.core.StandardHostValve.invoke Exception Processing /workflow/
 java.lang.SecurityException: AuthConfigFactory error: java.lang.reflect.InvocationTargetException
    at javax.security.auth.message.config.AuthConfigFactory.getFactory(AuthConfigFactory.java:85)
    at org.apache.catalina.authenticator.AuthenticatorBase.findJaspicProvider(AuthenticatorBase.java:1239)
    at org.apache.catalina.authenticator.AuthenticatorBase.getJaspicProvider(AuthenticatorBase.java:1232)
    at org.apache.catalina.authenticator.AuthenticatorBase.invoke(AuthenticatorBase.java:481)
    at org.apache.catalina.core.StandardHostValve.inv
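`AuthConfigFactory.getFactory` instantiates whatever class the JASPIC `authconfigprovider.factory` security property names; the `InvocationTargetException` means that instantiation itself failed, which points at a competing JASPIC implementation jar bundled inside the WAR or a stale factory setting, not at the WAR's size. One hypothesis worth testing is pinning the property to Tomcat 9's own implementation (treat the exact placement as an assumption to verify against your `conf/catalina.properties`; the class name is Tomcat 9's bundled JASPIC factory):

```properties
# conf/catalina.properties, or as -Dauthconfigprovider.factory=... in the
# Windows service's Java options
authconfigprovider.factory=org.apache.catalina.authenticator.jaspic.AuthConfigFactoryImpl
```

It is also worth unzipping the WAR and checking for a `javax.security.auth.message` / JASPIC API jar in `WEB-INF/lib`; Tomcat already provides that API, and a second copy on the webapp classpath can produce exactly this failure at first request.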