serverfault.com

Common Server issues – FAQs and answers from those in the know

Best way to handle Zoom integration conflicts across multiple GoHighLevel accounts?

2 April 2026 @ 5:04 am

I’m working with GoHighLevel and trying to integrate Zoom for scheduling and meetings, but I’ve run into an issue where the Zoom account seems to already be connected to another sub-account. The error message indicates that the Zoom account is already integrated elsewhere, even after attempting to remove integrations from the current account settings. From what I understand, GoHighLevel (LeadConnector) only allows one active connection per Zoom account, but it isn’t clear where the original integration is stored (agency level vs. sub-account level). I’ve already tried:

- removing integrations from the current sub-account
- switching calendar integrations
- re-authorizing Zoom

I’m still facing the same issue. What would be the correct way to fully disconnect a Zoom account from all GoHighLevel instances so it can be reconnected cleanly? Also, is there a recommended workflow to avoid this conflict when managing multiple client account

How can I convert a PowerShell command into a batch file [migrated]

1 April 2026 @ 7:13 pm

The following command works in PowerShell:

.\bcs.ps1 -sites a.com,b.guide,c.com

But it produces an error when PowerShell is called from the command prompt. I removed the inner quotes and that doesn't work either.

powershell -file .\bcs.ps1 -sites "a.com","b.guide","c.com"

Any suggestions?
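For context, when a script is launched with `-File`, the arguments are bound as literal strings, so PowerShell never parses `"a.com","b.guide","c.com"` as an array. A sketch of the usual workarounds (assuming `-sites` is declared as `[string[]]` in bcs.ps1; the `-split` helper shown is illustrative):

```powershell
# Let PowerShell itself parse the command line by using -Command instead
# of -File, so the comma-separated list binds as an array:
powershell -NoProfile -Command ".\bcs.ps1 -sites a.com,b.guide,c.com"

# Or keep -File, pass one delimited string, and split it inside bcs.ps1:
#   param([string]$sites)
#   $siteList = $sites -split ','
powershell -NoProfile -File .\bcs.ps1 -sites "a.com,b.guide,c.com"
```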

Is there any way to use URL masked serverless backends AND support gRPC?

1 April 2026 @ 4:40 pm

Looking through the docs, it's not clear to me if this is possible. I have previously used URL masked serverless backends to route to my Cloud Run services. This is easy to configure on the GCP load balancer; it's a one-time config, and all future Cloud Run services that are deployed are automatically accessible via https://my-lb.mydom.com/service-abc. The problem is gRPC does not support path-based routing, so I can't send gRPC requests to my-lb.mydom.com/service-abc:443. I don't want to use host-based routing, and I want to avoid Cloud Service Mesh or Traffic Director. Is there no way to support this with vanilla GCE load balancing? If I have to use Cloud Service Mesh and/or Traffic Director, does it work with URL masked backends so that it automatically routes by Cloud Run service name?

Use Nvidia L40S GPU passthrough on Proxmox [closed]

1 April 2026 @ 1:24 pm

How do I configure a Proxmox host that has one Nvidia L40S GPU for passthrough to a VM running on the same Proxmox host? I have an HPE ProLiant DL380 with Intel Xeon processors and one Nvidia L40S GPU card. I want to achieve PCIe passthrough to a VM running on Proxmox.
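For reference, the usual PCIe-passthrough preparation on an Intel-based Proxmox host looks roughly like the following sketch; the PCI device IDs are placeholders, not the actual L40S IDs (find yours with `lspci -nn`):

```
# /etc/default/grub -- enable the IOMMU on Intel hardware
GRUB_CMDLINE_LINUX_DEFAULT="quiet intel_iommu=on iommu=pt"

# /etc/modules -- load the VFIO modules at boot
vfio
vfio_iommu_type1
vfio_pci

# /etc/modprobe.d/vfio.conf -- bind the GPU to vfio-pci instead of the
# nvidia/nouveau driver; replace 10de:xxxx with your card's vendor:device
# pair from `lspci -nn`
options vfio-pci ids=10de:xxxx
softdep nvidia pre: vfio-pci
```

After `update-grub`, `update-initramfs -u -k all`, and a reboot, the card can be attached to a guest with something like `qm set <vmid> -hostpci0 <bus:dev.fn>,pcie=1`.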

Is it possible to have custom reload/restart-like commands in systemctl for a daemon?

31 March 2026 @ 4:07 pm

I'm developing a daemon that runs under the control of systemd and has a code-reload feature that sits in between "systemctl reload" and "systemctl restart". Unlike "systemctl reload", which sends a signal to re-read just the configuration, it also reloads the code; but unlike "systemctl restart", it's not a true hard restart that forgets all the state. Instead, the state is written to a file, a signal causes the daemon to replace its code with execve(), and the daemon then reads the old state back from the file. Is there any feature in systemd that would allow me to add a custom command to systemctl, in between reload and restart, just for this daemon? Technically the feature would be implemented by some signal such as SIGUSR1 or SIGUSR2. I'm expecting that there could be cases where a hard restart will be done instead of the lighter-weight "dump state + execve + reload state", so it would be useful to have
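systemctl has no user-extensible verbs, but the signal-driven behaviour described above can be wired into a unit file; a minimal sketch, assuming the daemon performs its dump-state/execve/reload-state cycle on SIGUSR2 (the unit name, binary path, and signal choice are illustrative):

```ini
# mydaemon.service -- illustrative unit
[Service]
ExecStart=/usr/local/bin/mydaemon
# Make "systemctl reload mydaemon" trigger the re-exec path instead of
# a plain config re-read. execve() keeps the same PID, so systemd keeps
# tracking the main process across the swap.
ExecReload=/bin/kill -SIGUSR2 $MAINPID

[Install]
WantedBy=multi-user.target
```

If "reload" should keep its conventional config-only meaning, the signal can instead be delivered ad hoc with `systemctl kill --signal=SIGUSR2 mydaemon.service`, though that doesn't add a named verb.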

Powering down idle HDDs in a ZFS pool to conserve power

31 March 2026 @ 8:35 am

I've got a storage server with some NVMe SSDs for the system and metadata cache, and 24 3.5" HDDs for storing historic data. Since this data is very infrequently accessed (at least once per day for synchronization; often that is the only access at all for several days), it would be good to spin down the drives during the remaining time to conserve power. The system is running Debian (actually Proxmox, but no VMs there) and ZFS. The storage pool is made up of 3x 8-way raidz1, with a mirrored special device on NVMes for metadata storage. The disks are Seagate Exos SAS drives. My questions:

- Will the drive lifetime decrease (or increase) if they are only active maybe 1/4 of the time, but with at least one spin-down/spin-up every day?
- Can or do I need to tell ZFS about this behavior, so that it isn't confused by very long initial access times when a drive needs to spin up first?
- Does ZFS even let a drive go to sleep, or does

libnss-extrausers use cases and details

31 March 2026 @ 5:40 am

What are the use cases for libnss-extrausers? I've seen a couple of examples but the details are lacking. If you use libnss-extrausers, don't you have to change all user and group IDs to keep from colliding with the corresponding /etc files? If this is done then effectively only files with "world" access in directories with "world" access are available. What about home directories, do you use a common directory such as /tmp or what do you do? This is why I'm asking about the use cases to better understand the benefit.
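For context, libnss-extrausers just adds a second lookup source to NSS, read from /var/lib/extrausers instead of /etc. A minimal setup sketch (paths are the Debian defaults; the example user is illustrative):

```
# /etc/nsswitch.conf -- consult the extrausers database after the /etc files
passwd:         files extrausers
group:          files extrausers
shadow:         files extrausers

# /var/lib/extrausers/passwd -- same format as /etc/passwd; choosing
# UIDs/GIDs outside the ranges already used in /etc/passwd avoids the
# collision problem asked about above
alice:x:20001:20001:Alice:/home/alice:/bin/bash
```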

Mutual TLS Abruptly Stopped Working on Tomcat 9 and 11 Servers

30 March 2026 @ 11:06 pm

I have several Java webapps running on two Tomcat servers, with various JDK (17 and 25) and Tomcat (9 and 11) versions, in a test/development environment, where the server is configured to request a client certificate. When the browser prompts me, I select my CAC card certificate in the browser popup and am then prompted to enter my PIN. The prompts for certificate selection and PIN entry have always occurred immediately after navigating to the URL (so there is no post-handshake authentication). Unfortunately, this stopped working for me; one day it was working and the next day it totally stopped. I am 100% certain that no server or personal configuration changed since the time it was working; no server, connector, JDK, or any other settings were manually changed. Even stranger, it is very inconsistent between users: it still works 100% for one team member, it works only in Firefox for another team member, and a third member can only get it to sometimes work in Chrome's p

Strange permissions on shared Start Menu folders in Windows 10 IoT Enterprise LTSC

30 March 2026 @ 9:04 pm

I have a fleet of Dell OptiPlex 3000 machines that were purchased for one reason only: a legal license for Windows 10 IoT Enterprise LTSC. I am seeing a very strange set of permissions on a clean re-install on certain folders that relate to the shared Start Menu:

C:\Users\All Users\Start Menu\Programs
  S-1-5-21-3671523672-3566060235-3176437112-1000:(I)(OI)(CI)(DE,DC)
  DESKTOP-UROR7BK\admin:(I)(OI)(CI)(DE,DC)
  NT AUTHORITY\SYSTEM:(I)(OI)(CI)(F)
  BUILTIN\Administrators:(I)(OI)(CI)(F)
  BUILTIN\Users:(I)(OI)(CI)(RX)
  Everyone:(I)(OI)(CI)(RX)

C:\Users\All Users\Start Menu
  Everyone:(DENY)(S,RD)
  Everyone:(RX)
  NT AUTHORITY\SYSTEM:(F)
  BUILTIN\Administrators:(F)

C:\Users\All Users
  NT AUTHORITY\SYSTEM:(OI)(CI)(F)
  BUILTIN\Administrators:(OI)(CI)(F)
  CREATOR OWNER:(OI)(CI)(IO)(F)
  BUILTIN\Users:(OI)(CI)(RX)
  BUILTIN\Users:(CI)(WD,AD,WEA,WA)

C:\Users
  NT AUTHORITY\SYSTEM:(OI)(CI)(F)
  BUILTIN\Administrators:(OI)(CI)(F)
  BUILTIN\Users:(RX)
  BUILTIN\Users:(OI)(CI)(IO)(

Delete Fails when Windows NFS mounted on Linux vm

30 March 2026 @ 5:37 pm

I have a Windows NFS server set up with AD, and I've mounted the share in a Linux VM. I'm able to create and edit files but not delete them. The user seems to be correctly mapped, but delete fails. I have even given the user Full Control, but it still doesn't work. Can someone help me understand the possible causes? Deleting works correctly on the Windows machine.