Common Server issues – FAQs and answers from those in the know
Containerized PostgreSQL collation
26 February 2026 @ 4:47 am
I have recently become aware of a possible danger with long-running containerized Postgres instances: collation issues. When updating to a newer minor-version container, the collation versions can get out of sync, causing the warning database "postgres" has a collation version mismatch and related issues. This seems to sometimes be ignored and sometimes directly cause problems, depending on the operation.
I realize you can simply go into the container and run ALTER DATABASE <database> REFRESH COLLATION VERSION;, but I didn't know if there was a more automated / better way to handle this in a hands-off environment (one that simply deploys the latest major-version-locked Postgres image and pulls new images)? I know I could likely run a command to iterate over the present databases, but again, I wanted a line on best practices. I also know that realistically best practice might be to version-lock to a minor version, b
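One hands-off approach, assuming a standard `postgres` superuser and nothing exotic per-database, is a small post-upgrade hook that emits a `REFRESH COLLATION VERSION` statement for every connectable database and pipes the result back into `psql`. A minimal sketch (the database list is hard-coded here for illustration; in a real hook it would come from `pg_database`, as noted in the comments):

```shell
#!/bin/sh
# Sketch of a post-upgrade hook: emit one REFRESH COLLATION VERSION
# statement per database. In a live container the list would come from:
#   psql -U postgres -At -c "SELECT datname FROM pg_database WHERE datallowconn"
# and the emitted statements would be piped back into psql.
emit_refresh() {
  for db in "$@"; do
    printf 'ALTER DATABASE "%s" REFRESH COLLATION VERSION;\n' "$db"
  done
}

emit_refresh postgres template1 app_db
```

In a Docker entrypoint-style setup this could run from an init/upgrade script after the server is up; treat it as a sketch to adapt, not a blessed upstream mechanism.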
How do I provision a secure (SHA256128/AES256) IKEv2 VPN using a Provisioning Package?
25 February 2026 @ 10:41 pm
Using Windows Configuration Designer I am able to make a package to deploy a VPN as per https://learn.microsoft.com/en-us/windows/configuration/wcd/wcd-connectivityprofiles#vpn.
This VPN defaults to the insecure SHA1/modp1024 algorithms, which no longer work in 2026, and to make the VPN work you need the following additional PowerShell command:
Set-VpnConnectionIPsecConfiguration -ConnectionName "VPN Helsinki" -AuthenticationTransformConstants SHA256128 -CipherTransformConstants AES256 -EncryptionMethod AES256 -IntegrityCheckMethod SHA256 -DHGroup Group14 -PfsGroup PFS2048 -Force
What modifications must I make to the provisioning package to set the algorithms above?
This is documented as possible in the VPNv2 CSP, but there appears to be no documented way to embed a VPNv2 CSP payload into a provisioning package.
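For reference, the VPNv2 CSP models these settings as a `CryptographySuite` node under `NativeProfile`. A profile XML along these lines (server name and values other than the ones from the PowerShell command above are placeholders) is what would ultimately need to reach the device, however it is delivered:

```xml
<VPNProfile>
  <NativeProfile>
    <Servers>vpn.example.com</Servers>
    <NativeProtocolType>IKEv2</NativeProtocolType>
    <CryptographySuite>
      <AuthenticationTransformConstants>SHA256128</AuthenticationTransformConstants>
      <CipherTransformConstants>AES256</CipherTransformConstants>
      <EncryptionMethod>AES256</EncryptionMethod>
      <IntegrityCheckMethod>SHA256</IntegrityCheckMethod>
      <DHGroup>Group14</DHGroup>
      <PfsGroup>PFS2048</PfsGroup>
    </CryptographySuite>
  </NativeProfile>
</VPNProfile>
```

If the provisioning package cannot carry this CSP node directly, one commonly suggested workaround is to run the `Set-VpnConnectionIPsecConfiguration` command shown above from the package's ProvisioningCommands section after the profile is created. Both routes are sketches to validate against the VPNv2 CSP documentation, not a confirmed recipe.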
.NET server app running on Linux as a service with SQLite - how do I set it up?
25 February 2026 @ 10:06 pm
I am in the process of packaging an application for Azure Marketplace.
The application is a dotnet server that uses a local SQLite database.
Azure Marketplace, in the process of creating the offer, insists that the image contain no custom users - image validation fails if I create a dedicated user to run my server as a systemd service.
In fact the last step in the preparation is to run
$ sudo waagent -force -deprovision+user
which deletes the user I am logged in as.
Since I don't have a dedicated user for my service, I tried using DynamicUser=yes
The limitation, however, is with my SQLite database - I need it to remain in place, or to use a preexisting one if the customer copied it in. Dynamic users are restricted from creating and writing files by default, and a directory configured via StateDirectory= is created under /var/lib/private and exposed through a symlink.
What is my best option? Is it ok to use some of the existing users (not root) - like da
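For the DynamicUser route, a hedged sketch of a unit (service name, paths, and the DB_PATH variable are placeholders): with `DynamicUser=yes`, systemd creates `StateDirectory=` under `/var/lib/private/<name>`, symlinks it from `/var/lib/<name>`, and re-chowns it to the dynamic user on every start, so a SQLite file kept there survives restarts even though the UID changes:

```ini
[Unit]
Description=Example dotnet service with SQLite state
After=network.target

[Service]
DynamicUser=yes
# Created as /var/lib/private/myapp, symlinked from /var/lib/myapp,
# and chowned to the dynamic user on every start.
StateDirectory=myapp
WorkingDirectory=/var/lib/myapp
ExecStart=/usr/bin/dotnet /opt/myapp/MyApp.dll
Environment=DB_PATH=/var/lib/myapp/app.db

[Install]
WantedBy=multi-user.target
```

A preexisting customer database would need to be copied (or moved) into the state directory once, since files elsewhere stay unwritable to the dynamic user.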
PC has Public network profile and has DCOM error 1068 [closed]
25 February 2026 @ 3:48 pm
Here's my edit: kick me off Stack Exchange. Your moderation has gone to shit and the number of people volunteering to address questions has plummeted. Also, they are rude. Reddit from now on.
Fuck right off.
A PC shows the current network profile as Guest or Public in Control Panel > Advanced Sharing Center. Network and Sharing Center shows only one (non-expandable) entry, "Unknown".
Event Viewer has these events every 1-2 seconds:
Error 10005, DistributedCOM
DCOM got error "1068" attempting to start the service netprofm with arguments "Unavailable" in order to run the server:
{A47979D2-C419-11D9-A5B4-001185AD2B89}
When I look at NLA (Network Location Awareness) in Services, the message is Error 1075: The dependency service does not exist or has been marked for deletion.
Now what?
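Error 1075 on NLA usually means a service in its dependency chain has been deleted or renamed, which would also explain the DCOM 1068 spam while netprofm fails to start. A hedged checklist from an elevated prompt (the default dependency list varies by Windows build, so compare against a known-good machine rather than trusting the example names below):

```bat
:: Show the configured dependencies of Network Location Awareness
sc qc NlaSvc

:: Compare with a healthy machine, then restore the list if an entry
:: is missing or points at a nonexistent service. Example list taken
:: from one Windows 10 machine -- verify before applying:
sc config NlaSvc depend= NSI/RmSvc/TcpIp/Dhcp/Eventlog
sc start NlaSvc
```

If a listed dependency is itself "marked for deletion", a reboot is typically needed before the chain can start again.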
Dovecot is not allowing global sieve extensions
24 February 2026 @ 11:06 pm
I'm running dovecot-2.4.1-4 and postfix-3.10.5-1 on my Debian 13 machine. These are the default dovecot and postfix versions which got installed via "apt".
Everything is working fine with this email server, except for the fact that sieve thinks that global extensions are not enabled.
However, I have done everything that I can think of in order to enable the use of global extensions.
In conf.d/90-sieve.conf ...
sieve_script personal {
driver = file
path = /var/lib/dovecot/sieve
active_path = /var/lib/dovecot/sieve/default.sieve
}
sieve_script default {
type = default
name = default
driver = file
path = /var/lib/dovecot/sieve/default.sieve
}
sieve_global_extensions = +vnd.dovecot.pipe +vnd.dovecot.execute
sieve_plugins = sieve_imapsieve sieve_extprograms
sieve_pipe_bin_dir = /usr/share/dovecot-pigeonhole/sieve
In conf.d/90-sieve-extprograms.conf ...
sieve_pipe_socket_dir = sieve-pipe
sieve_filt
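One detail that trips people up here: `sieve_global_extensions` only takes effect for scripts Dovecot treats as admin-controlled (global or default scripts), never for a user's personal script, so a personal script requiring `vnd.dovecot.pipe` will still be rejected. A sketch of a default-location script that would be allowed to use the extension (the filter name is a placeholder; it must exist under `sieve_pipe_bin_dir`):

```sieve
require ["vnd.dovecot.pipe"];

# hypothetical filter living in sieve_pipe_bin_dir
pipe "spam-filter.sh";
```

If the failing script is the one under the user's personal location, that alone would explain the "extension not enabled" errors despite the configuration above.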
Sugon openbmc: how to reset password?
24 February 2026 @ 9:00 pm
I have a board (kgpe-d16) with sugon-openbmc as bmc.
I want to access via http.
This is the actual situation:
ipmitool user list 1
ID  Name    Callin  Link Auth  IPMI Msg  Channel Priv Limit
1           false   false      true      CALLBACK
2   root    true    false      true      CALLBACK
3   admin   true    true       true      ADMINISTRATOR
ssh access works
HTTP does not, and returns "Login failed. Please try again."
Any idea?
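Since SSH works, one hedged option is to set a fresh password for the web UI account over IPMI and make sure the account is enabled; in the listing above, user ID 3 is the `admin` account (the password below is of course a placeholder):

```shell
# Set a new password for user ID 3 (admin) and enable the account.
# Run locally over the working SSH session, or remotely with
# ipmitool -I lanplus -H <bmc-ip> -U <user> if LAN access is configured.
ipmitool user set password 3 'NewStrongPass'
ipmitool user enable 3
```

Whether the BMC's web login uses the same IPMI user database is vendor-specific, so treat this as a first thing to try rather than a guaranteed fix for Sugon's OpenBMC build.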
YouTube stream with ffmpeg getting squared 1:1
24 February 2026 @ 7:11 pm
I'm streaming to YouTube with ffmpeg. This is the command:
ffmpeg -probesize 32 -analyzeduration 0 -thread_queue_size 64 -f x11grab -draw_mouse 0 -video_size 1920x1080 -framerate 30 -use_wallclock_as_timestamps 1 -i :101+0,0 -thread_queue_size 64 -f pulse -ac 2 -ar 44100 -i auto_null.monitor -c:v libx264 -preset veryfast -tune zerolatency -b:v 13500k -maxrate 13500k -bufsize 18000k -pix_fmt yuv420p -g 60 -x264opts keyint=60:scenecut=0 -vf setsar=1:1,setdar=16/9 -c:a aac -b:a 160k -ac 2 -ar 44100 -af aresample=async=1:first_pts=0 -fflags nobuffer -flags low_delay -max_muxing_queue_size 512 -f flv -flvflags no_duration_filesize rtmp://a.rtmp.youtube.com/live2/KEY
YouTube displays the stream as a square, with black bars on top and below. You can see it here:
https://www.youtube.com/watch?v=GdUlCYkcs_4
In stats for nerds you will notice the resolution at 1080x1
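A display like that usually points at the sample/display aspect ratio metadata rather than the encoded frame size. Since 1920x1080 already is 16:9 with square pixels, one hedged fix is to drop the `setdar` override and keep only `setsar=1`, letting the DAR be derived from the frame dimensions:

```shell
# Replace the original filter chain
#   -vf setsar=1:1,setdar=16/9
# with just:
-vf setsar=1
# DAR is then computed from 1920x1080 and square pixels (= 16:9),
# instead of a forced DAR interacting badly with a mis-tagged SAR.
```

If the problem persists, checking what `ffprobe` reports for SAR/DAR on the outgoing stream would show whether the metadata or YouTube's player is at fault.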
Apache prefork overload when Googlebot crawls thousands of subdomains via vhost rewrite
24 February 2026 @ 1:33 pm
Environment
VPS: 2 CPU / 4 GB RAM / 1 IP
OS: CentOS 7.4
Web server: Apache (Sentora) + PHP 5.4 prefork
DNS/CDN: Cloudflare Free + Flexible SSL
~10 main domains, ~24k mini-sites (subdomains)
Architecture overview
I use one central domain (maindomain.com) to handle routing
for all subdomains of all other domains.
On Cloudflare, every secondary domain has:
CNAME * maindomain.com
All requests to subdomains are rewritten at the Apache global VirtualHost level (not in .htaccess; AllowOverride is disabled) and mapped dynamically to folders under:
/app/sites/{unique-sub-domain-slug}/
Each main domain itself has its own independent VirtualHost and works normally.
Symptoms
Everything works correctly under low traffic.
However, when
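Truncated as the question is, the classic failure mode on a 2-CPU / 4 GB prefork box is MaxRequestWorkers left high enough that a crawler burst forks the machine into swap. A hedged starting point (the numbers are illustrative; size MaxRequestWorkers to free RAM divided by the resident size of one PHP 5.4 child, measured on this VPS):

```apache
<IfModule mpm_prefork_module>
    StartServers             4
    MinSpareServers          4
    MaxSpareServers          8
    # e.g. ~3 GB usable / ~50 MB per PHP child => ~60 workers;
    # measure your own child size before settling on a value
    MaxRequestWorkers        60
    MaxConnectionsPerChild   1000
</IfModule>
```

Capping workers turns an overload into queued/slow requests instead of swap death; rate-limiting Googlebot via robots.txt crawl settings or Cloudflare rules is the complementary half of the fix.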
Permissions denied inside podman volume
24 February 2026 @ 1:08 pm
I'm running Synapse inside a podman compose setup, but inside the container the service runs into errors because it cannot access a file inside the mounted volume:
File "/usr/local/lib/python3.13/site-packages/synapse/media/media_storage.py", line 233, in store_into_file
os.makedirs(os.path.dirname(media_filepath), exist_ok=True)
~~~~~~~~~~~^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "<frozen os>", line 218, in makedirs
File "<frozen os>", line 218, in makedirs
File "<frozen os>", line 218, in makedirs
File "<frozen os>", line 228, in makedirs
PermissionError: [Errno 13] Permission denied: '/data/media_store/remote_content'
Inside the container (with podman exec -it <container> bash) I can see that the directory is owned by root:
#
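With rootless podman, the usual cause is that the volume contents are owned by a host UID that maps to an unexpected ID inside the container's user namespace. Two commonly suggested, hedged fixes, assuming Synapse runs as UID 991 inside the container (check your image's Dockerfile; the `./data` path is a placeholder for the host directory backing the volume):

```shell
# Option 1: chown the host directory from inside podman's user
# namespace, so it maps to the container user:
podman unshare chown -R 991:991 ./data

# Option 2: let podman chown the volume to the container user at
# mount time with the :U option (plus :Z for SELinux relabeling),
# i.e. in the compose file use a mount like:
#   ./data:/data:Z,U
```

Option 2 is more hands-off across re-creations of the container, at the cost of a recursive chown on every start for large media stores.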
Monitor expiration of OpenSSH certificates (users and hosts)
24 February 2026 @ 10:52 am
Using OpenSSH certificates for host keys and user keys, I wonder whether their expiration could be monitored automatically in advance (before they actually do expire):
Of course the issuing CA could monitor expiration, but it's not clear whether all issued certificates are actually in use
When the user logs in using a certificate, the ID, serial number and fingerprint are logged, so that could be cross-checked with the CA
When logging in to a host that presents a host certificate, nothing about the certificate is displayed (only with ssh -v will you see the certificate, ID, serial number, issuing CA, and validity)
Say you have a central monitoring system, how could host and user certificates be monitored?
Specifically user certificates used for automatic processing (like clusters, configuration, backup, monitoring) would be interesting.
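For the certificate files themselves, the expiry is readable without any login: `ssh-keygen -L -f cert.pub` prints a `Valid: from ... to ...` line. A minimal sketch of the time arithmetic a monitoring check would do on that timestamp (GNU `date` assumed; collecting the certificate files into an inventory is left out):

```shell
#!/bin/sh
# Given a reference epoch and a "valid to" timestamp in the form
# ssh-keygen -L prints (e.g. 2027-02-24T10:52:00, interpreted here
# as UTC), return the number of whole days remaining. A check would
# alert when this drops below a threshold, say 30.
days_remaining() {
  now_epoch=$1
  expiry=$2
  exp_epoch=$(date -u -d "$expiry" +%s)
  echo $(( (exp_epoch - now_epoch) / 86400 ))
}
```

In practice the expiry string would come from something like `ssh-keygen -L -f host-cert.pub | sed -n 's/.* to //p'`, run centrally against certificates gathered from hosts and automation users; that covers exactly the unattended cluster/backup/monitoring certificates where an interactive warning would never be seen.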