serverfault.com


Common Server issues – FAQs and answers from those in the know

Harbor fails to list artifacts: "400 OK" error in UI

23 January 2026 @ 5:13 am

I am using Harbor v2.14.1-f1393edc with Harbor helm chart version 1.18.1. I deployed Harbor and created a Docker Hub proxy cache, backed by Nebius object storage, which is S3-compatible. The Docker Hub proxy cache seems to work well, and files are written to Nebius object storage successfully (see screenshot). However, when I click one artifact I get an error, for example the one in the second screenshot. Here are my Harbor helm chart values:

```yaml
expose:
  type: clusterIP
  tls:
    enabled: false
externalURL: https://harbor.example.com
existingSecretAdminPassword: hm-harbor-secret
existingSecretAdminPasswordKey: HARBOR_ADMIN_PASSWORD
core:
  replicas: 3
jobservice:
  replicas: 3
portal:
  replicas: 3
registry:
  repli
```
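With an S3-compatible backend, one thing worth double-checking is the registry storage section of the chart values. A hedged sketch of the relevant keys, with placeholder bucket, region, and endpoint values that are not from the question:

```yaml
persistence:
  imageChartStorage:
    type: s3
    s3:
      bucket: my-harbor-bucket                      # placeholder
      region: eu-north1                             # placeholder
      regionendpoint: https://storage.example.com   # S3-compatible endpoint (placeholder)
      secure: true
```

If the endpoint or region does not match what the S3-compatible service expects, the registry can fail on list/read operations even though writes appear to succeed.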

Azure Budget creation: Why is "Actual" cost greyed out in Alert conditions?

22 January 2026 @ 11:45 pm

I am attempting to set up a new Budget in the Azure Portal under Cost Management -> Budgets for an Azure subscription. When I reach the Alert conditions tab to set my alert thresholds, the Type dropdown for "Actual cost" is greyed out/disabled, and I am only able to select "Forecasted." Why?

Details of my environment:
- Scope: Azure Subscription
- Offer type: Pay-As-You-Go
- Permissions: I have Owner permissions on this scope.
- I have confirmed that there is active spending and historical data visible in the Cost Analysis views for this scope.

Screenshot of the issue: (not shown)
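As a cross-check, the same budget can be created outside the Portal; a hedged sketch using the Azure CLI (name, amount, and dates are placeholders, and flag spellings should be verified against your CLI version):

```shell
# Create a monthly cost budget on the current subscription (placeholder values).
az consumption budget create \
  --budget-name monthly-cap \
  --amount 100 \
  --category cost \
  --time-grain monthly \
  --start-date 2026-01-01 \
  --end-date 2026-12-31
```

If creation succeeds via the CLI but the Portal still greys out "Actual cost", that points at a Portal-side restriction for the scope or offer type rather than missing data.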

Problems with RCU in Ubuntu virtual machine

22 January 2026 @ 7:03 pm

I am creating multiple VMs on Proxmox VE with Terraform using Telmate/terraform-provider-proxmox: 13 VMs with Ubuntu 22.04 and 13 with Astra Linux. I tried various settings for the cpu section in Terraform, but they had no effect. The issue occurs on VMs with 8 vCPUs: sometimes one or two virtual machines fail to boot due to a strange error (see screenshot). I added these kernel arguments:

```
rcu_nocbs=0-31 nohz=full
```

Now the errors have become less frequent, but they have changed (see second screenshot).
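For reference, a minimal sketch of the CPU-related attributes on the Telmate provider's `proxmox_vm_qemu` resource; the resource and attribute names below are assumptions that may differ between provider versions, and the VM/node names are placeholders:

```hcl
resource "proxmox_vm_qemu" "ubuntu_node" {
  name        = "ubuntu-node-01"  # placeholder
  target_node = "pve1"            # placeholder

  # Passing the host CPU model through to the guest (instead of the
  # default emulated model) is a common first step when guests show
  # timekeeping or RCU-stall symptoms under load.
  cpu     = "host"
  sockets = 1
  cores   = 8
}
```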

Adding LUKS encrypted disk to a LUKS+LVM root filesystem

22 January 2026 @ 5:01 pm

I have the following problem: how can I add storage to a machine already using LUKS+LVM without being asked for a passphrase for every added disk?

Goal: add space to the logical volume (LV) lv-var mounted on /var. I'm using Ubuntu 24.04 LTS.

What I tried:
- Added a new disk to the VM. It's detected as sdb.
- Encrypted the disk with LUKS and opened it as luks_sdb.
- Opened the new disk's LUKS container and created a physical volume (PV) on it.
- Added this PV to the volume group (VG) ubuntu-vg, where lv-var lives (using vgextend).
- Extended lv-var (using lvextend).
- Resized the filesystem with resize2fs, because it's ext4.

To avoid needing a separate passphrase prompt for each disk, I created a LUKS key:
- Created a keyfile and added it to the LUKS container.
- Updated /etc/crypttab and added an entry for the new disk with luks_sdb.
- Ran update-initramfs -u.

Everyth
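The steps above can be sketched as a command sequence; device, VG, and LV names are taken from the question, while the keyfile path is an assumption:

```shell
# Encrypt the new disk and bring it into the existing VG (run as root).
cryptsetup luksFormat /dev/sdb
cryptsetup open /dev/sdb luks_sdb
pvcreate /dev/mapper/luks_sdb
vgextend ubuntu-vg /dev/mapper/luks_sdb
lvextend -l +100%FREE /dev/ubuntu-vg/lv-var
resize2fs /dev/ubuntu-vg/lv-var

# Keyfile so the disk unlocks at boot without a second prompt.
dd if=/dev/urandom of=/root/luks_sdb.key bs=512 count=8
chmod 0400 /root/luks_sdb.key
cryptsetup luksAddKey /dev/sdb /root/luks_sdb.key
echo 'luks_sdb /dev/sdb /root/luks_sdb.key luks' >> /etc/crypttab
update-initramfs -u
```

One design note: referring to the disk by a stable identifier (e.g. /dev/disk/by-uuid/...) in /etc/crypttab is safer than /dev/sdb, since device names can change between boots.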

Grafana on Kubernetes - Notification duplicate in a HA setup

22 January 2026 @ 4:37 pm

I've set up Grafana by deploying the official helm chart with ArgoCD, and I have 3 Grafana pods running. In order to achieve HA and to avoid duplicate notifications, I set up the unified_alerting part in grafana.ini like so:

```yaml
unified_alerting:
  enabled: true
  ha_listen_address: 0.0.0.0:9094
  ha_peers: "grafana-headless.grafana.svc.cluster.local:9094"
  ha_peer_timeout: "30s"
```

I see no errors in the logs, and the alertmanager metrics show the following:

```
alertmanager_cluster_members 3
alertmanager_cluster_failed_peers 0
alertmanager_cluster_health_score 0
```

So I can tell that the configuration works. However, when an alert is firing, I receive 2 notifications and I can't figure out why. This happens with multiple notification policies (it happens with both email notifications and Teams notifications). Sometimes I receive the duplicate a few seconds after the first one (in fact the time set for ha
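One quick sanity check for a gossip-based HA setup like this is that the headless service actually resolves to all three pod IPs, so every peer can see the others. A hedged sketch, assuming the namespace and service names from the question:

```shell
# Should print three Grafana pod IPs for the headless service.
kubectl -n grafana run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup grafana-headless.grafana.svc.cluster.local
```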

Mariadb-Dump wrong socket-path

22 January 2026 @ 3:39 pm

When I try to run mariadb-dump, I get an error message saying that the socket cannot be found:

```shell
mariadb-dump -uroot -p --databases mysql >> mysql_dump.sql
mariadb-dump: Got error: 2002: "Can't connect to local server through socket '/run/mysql/mysql.sock' (2)" when trying to connect
```

I don't understand where the path is determined, because different paths are defined in my.cnf:

```ini
[mysql]
port                  = 3306
socket                = /var/lib/mysql/mysql.sock
default-character-set = utf8mb4  # Default

[mysqld]
sql_mode = NO_ENGINE_SUBSTITUTION,STRICT_TRANS_TABLES,NO_ZERO_DATE,NO_ZERO_IN_DATE,ERROR_FOR_DIVISION_BY_ZERO
default_storage_engine     = InnoDB  # Default
default_tmp_storage_engine = InnoDB  # Default

# GENERAL #
server_id = 117
user      = mysql
socket
```
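For context: the `[mysql]` group applies only to the interactive mysql/mariadb client, and `[mysqld]` only to the server; mariadb-dump instead reads the `[client]` and `[mysqldump]`/`[mariadb-dump]` option groups. A hedged sketch of an addition that points all client tools at the server's socket, assuming the server really listens on /var/lib/mysql/mysql.sock:

```ini
# Read by all client programs, including mariadb-dump
[client]
socket = /var/lib/mysql/mysql.sock
```

Alternatively, the path can be given per invocation with `--socket=/var/lib/mysql/mysql.sock`.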

Help: CPE220 / Loco M2 get no internet access as WiFi clients of a UAP-LR in a very congested 2.4 GHz environment (these are not connections to the UAP-LR itself) [closed]

22 January 2026 @ 12:14 pm

I'm asking the community because I've already tried many variants and want to confirm whether I'm facing a technical radio-frequency limitation or a specific configuration error. The goal is not to blame any device, but to understand the observed behaviour and to know whether it is to be expected in this kind of scenario.

Scenario:
- UniFi UAP-LR (2.4 GHz only, few associated clients, no significant traffic load).
- Extremely congested environment, with more than 80 visible SSIDs spread across all 2.4 GHz channels (1-13), with high levels of interference and retransmissions.
- Approximate distance to the UAP-LR: between 100 and 120 metres, with 1 or 2 walls in between (no clean line of sight).
- MikroTik router as gateway and DHCP server, network 192.168.88.0/24, working correctly with other WiFi and wired clients.

Observed situation: a home router (a common Tenda / TP-Link), configured as a client

Consumer M.2 NVMe is not recognized in server with U.2 [closed]

22 January 2026 @ 11:14 am

I have a DL380 Gen10 server with U.2 slots, and it currently works with two enterprise-level U.2 NVMe drives. I also want to use it with standard consumer NVMe drives, of which I have a lot, so I got M.2 -> U.2 adapters. Both the enterprise U.2 drive and an M.2/U.2 consumer drive work the same way in a desktop computer via a PCIe -> U.2 adapter, and I can't imagine there being any difference for a host to use either of them. However, the server works fine with the U.2 drive and refuses to deal with the M.2/U.2 consumer drive. What seems completely wrong to me is that no PCI device is detected at all. In my understanding there is nothing between the PCIe lanes and the drive, so a PCIe device has to be detected. My question is: is there any known detectable difference in using U.2 drives in a server? Maybe it is a power issue, like server drives dealing with 12 V only or something like that. The question is asked explicitly for better understanding of the technology used in the mentioned h
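A few standard checks for whether the drive enumerates on the PCIe bus at all; a hedged sketch to run on the server's OS (the grep patterns are illustrative, and `nvme` requires the nvme-cli package):

```shell
# Does any NVMe controller show up on the bus?
lspci -nn | grep -i -e nvme -e 'Non-Volatile'

# If the controller is there, list its namespaces.
nvme list

# Kernel messages about link training / slot power can also help.
dmesg | grep -i -e pcie -e nvme
```

If `lspci` shows nothing for the adapter slot, the problem is below the OS (backplane signalling, slot power, or firmware drive allow-listing) rather than a driver issue.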

Why top and vmstat don't help to isolate rogue physical IO processes? [closed]

22 January 2026 @ 6:59 am

I'm sharing the source code for pio (Process I/O), a utility that displays I/O statistics for processes. This appears to be older code designed for Solaris systems (2002-2004 era) that uses the /proc filesystem.

What this program does: pio reads process I/O statistics from the Solaris /proc filesystem and displays them in a tabular format. It shows:
- PID (process ID)
- InpBlk (input blocks read)
- OutpBlk (output blocks written)
- RWChar (read/write characters)
- MjPgFlt (major page faults)
- Comm (command name with arguments)

Key technical details: Solaris procfs. This code is specific to Solaris/Illumos systems, which expose process information through /proc/<pid>/usage and /proc/<
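The utility itself is Solaris-specific, but the underlying idea — per-process I/O counters read from procfs — also exists on Linux via /proc/&lt;pid&gt;/io. A minimal, hedged sketch of the Linux analogue (this is not the original pio code, and it returns nothing useful on non-Linux systems):

```python
import os

def proc_io(pid: int) -> dict:
    """Parse /proc/<pid>/io into a dict of integer counters.

    Returns an empty dict if the file is missing (non-Linux)
    or unreadable (insufficient permissions for another user's pid).
    """
    try:
        with open(f"/proc/{pid}/io") as fh:
            # Lines look like "read_bytes: 12345".
            return {k: int(v) for k, v in
                    (line.split(": ", 1) for line in fh)}
    except OSError:
        return {}

if __name__ == "__main__":
    # Show our own counters, analogous to one row of pio's table.
    stats = proc_io(os.getpid())
    print(stats.get("read_bytes"), stats.get("write_bytes"))
```

Sorting such dicts by `read_bytes` + `write_bytes` across all PIDs gives a rough "rogue physical I/O" ranking, which is essentially what pio's table provides on Solaris.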

Issues with uid in rush.rc config

22 January 2026 @ 3:52 am

I have an existing rush config on an Ubuntu (cough, cough) 18.04 server which I am trying to replace with 24.04. This service allows another server to upload log files via scp; the files are then processed by a process which watches the directory for new files. On the new server:

rush.rc:

```
rule default
acct on
umask 002
env - USER LOGNAME HOME PATH

# Uncomment this to activate the notification subsystem:
# (Also install 'rush-notifier' or a similar script.)
#
#post-socket inet://localhost

# fall-through

######################
# File moving services
######################

# Scp requests: only putting, no fetching.
#
# The server host needs the paths
#
# /srv/rush/srv/incoming/{alpha,ftp}
#
# and that they be writable! A specific
# group can be assigned to all users
# expected to gain access via GNU rush.
rule scp-to
command ^scp (-v )?-t( --)?
set[0] /usr/bin/scp
chroot /data/upload
chdir "/"
```

/etc/passwd:

```
upload:x:988:988::/data/
```