Common Server issues – FAQs and answers from those in the know
Why does one Foundry resource get a *.services.ai.azure.com custom subdomain while another in the same resource group has *.api.cognitive.microsoft.com?
9 May 2026 @ 11:58 pm
I have two Azure AI Foundry projects under the same subscription, same resource group, same region (eastus2), both visible in the Foundry portal at ai.azure.com. Their "Azure AI model inference endpoint" URLs differ in structure:
Project A: https://<resource-A-name>.services.ai.azure.com/models
Project B: https://eastus2.api.cognitive.microsoft.com/models
Resource A has a custom subdomain matching the project name. Project B falls back to the shared regional Cognitive Services endpoint.
Why does one Azure AI Foundry project get a *.services.ai.azure.com custom subdomain while another in the same resource group uses the regional *.api.cognitive.microsoft.com endpoint?
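In my experience this usually comes down to whether the underlying Cognitive Services account was created with a custom subdomain: newer portal flows set one automatically, while accounts created earlier or through some ARM/CLI paths may not have one, leaving only the shared regional endpoint. A way to check and fix this, sketched with the Azure CLI (the resource and group names are placeholders):

```shell
# Show whether the account has a custom subdomain (empty => regional endpoint only)
az cognitiveservices account show \
    --name <resource-B-name> --resource-group <rg> \
    --query "properties.customSubDomainName"

# Assign one. Note: a custom subdomain is also required for Entra ID
# authentication, and once set it cannot be removed, only changed.
az cognitiveservices account update \
    --name <resource-B-name> --resource-group <rg> \
    --custom-domain <resource-B-name>
```

After the update, the account's endpoint should switch from the regional `*.api.cognitive.microsoft.com` form to the per-resource `*.services.ai.azure.com` form.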
Can a service have multiple of the same type of port in /etc/services?
9 May 2026 @ 11:21 pm
I'm running dedicated game servers on my homelab. Many of these servers communicate with multiple ports on the same protocol (for example, Conan Exiles uses tcp/7777, tcp/25575, udp/7777, udp/7778, and udp/27015). I want to add the ports needed for a given server to /etc/services to make firewall management with ufw a little less verbose - it's easier to specify a service to allow than to specify each port, and it's self-documenting in a way.
Because of how IANA assigns ports, /etc/services has many services with both a TCP and a UDP port defined. However, I don't think I see any instances where a service specifies multiple TCP or UDP ports under it. The manpage for services(5) doesn't suggest that this can be done, nor does it suggest that it can't either. Could I do something like this at the end of the file to define a "conanexiles" service?
conanexiles 7777/tcp
conanexiles 25575/tcp
conanexiles 7777/udp
conanexiles 7778/udp
conanexiles 27015/udp
Linux share mounted from Windows shows R/O even though shared R/W from Linux host
9 May 2026 @ 7:50 pm
I have an old Windows 2012 server which saves daily DB backups off to a share on linux.
The automated backup script is intermittently failing.
When I manually copy files as a domain admin user, I get an error: "You need permission to perform this action."
No details available, just three buttons: Try again, Skip and Cancel. Getting properties on the remote folder shows "RO (only applies to files in folder)", even though it is shared RW in the Linux host's /etc/exports file. I tried exportfs -ra just in case, but still the same.
None of my admin accounts can copy the file over, even though they're part of the SQL backup user group. I also get a strange error when I try to modify the R/O attribute on the remote folder I want to copy into. When I uncheck the Read-Only attribute and click Apply, I get: "An error occurred applying attributes to the file: \\server\path\folder\some-file-inside. An unexpected network error occurred."
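A frequent cause of exactly this picture (rw export, yet Windows-side admins can't write and attribute changes fail with network errors) is UID squashing on the NFS server: the Windows user is mapped to an unprivileged or anonymous UID, so writes are denied regardless of the rw flag, and Windows surfaces that as a bogus read-only attribute. A couple of server-side checks, sketched (the share path is a placeholder):

```shell
# Show the effective options for each export - look for root_squash/all_squash
# and check that the client's address actually matches the export entry:
exportfs -v

# Check that the exported directory itself is writable by the squashed UID:
ls -ld /srv/backups

# If squashing is the culprit, mapping clients to a writable UID in
# /etc/exports is one conventional fix, e.g.:
#   /srv/backups  winserver(rw,sync,all_squash,anonuid=1001,anongid=1001)
# then re-export with: exportfs -ra
```

The intermittent failures fit this too: whichever files the squashed UID happens to own succeed, the rest fail.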
Ubuntu host in Azure cannot reach Internet after restart [closed]
8 May 2026 @ 6:39 pm
I have an Ubuntu host in Azure running virtualmin for a web host. After restarting, the device cannot reach the Internet, either by IP or DNS.
Several packages were updated, however no additional packages were installed. I can't ping the gateway (expected, I believe ICMP is disabled to the gateway) but I also can't ping 8.8.8.8. IPv6 is disabled on this device, and it has an attached public IP.
I found information about SSH breaking, but this is more than SSH. I cannot install packages or reach any running services inbound. Traceroute is not installed, and I can't reach apt to install it.
Nothing changed with the NSG rules, but here are the rules:
Network configuration in Azure:
The host receives the IP address using DHCP. I did not do any manual config other than disabling IPv6, wh
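Since SSH is down as well, checks have to go through the Azure serial console or boot diagnostics. A sketch of what to look at first on the VM (the interface name eth0 is an assumption; adjust to yours):

```shell
# Did DHCP hand out the private IP at all?
ip -4 addr show eth0

# Is there a default route via the subnet gateway?
ip route show

# Is DNS configured? Azure's default resolver is 168.63.129.16.
resolvectl status
```

The IPv6-disable step is a common culprit after a restart: if it was done by editing netplan or GRUB and the change also touched the interface configuration, DHCP can stop coming up on boot, which matches "no IP connectivity at all" better than an NSG change would.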
How can I set up a double nginx reverse proxy?
8 May 2026 @ 12:48 pm
I want to chain two nginx reverse proxies:
client <--> first nginx reverse proxy <--> second nginx reverse proxy <--> web server
1st nginx server IP: 111.222.333.444. Second nginx server IP: 555.666.777.888. IP of the web server where the site is hosted: 999.000.111.222. The domain is test.domain.com.
1st nginx config file (111.222.333.444.conf):
server {
    listen 80;
    server_name test.domain.com;

    location / {
        proxy_pass http://555.666.777.888;
        proxy_cache off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
        proxy_redirect off;
        charset off;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}

server {
    listen 443 ssl;
    server_name test.domain.com;
    ssl_certificate /etc/nginx/conf.d/ssl.test.domain.com.pem;
    ssl_certificate_key /etc/nginx/conf.d/ssl.test.domain.com.key;

    location / {
        proxy_pass https://555.666.777.888;
        proxy_cache off;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header Host $host;
    }
}
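The missing piece is usually just a matching config on the second hop. A sketch of the second proxy (555.666.777.888), which the question does not show, so this is an assumption: it accepts traffic from proxy 1 and forwards to the web server, appending itself to X-Forwarded-For so the origin can still recover the real client IP:

```nginx
server {
    listen 80;
    server_name test.domain.com;

    location / {
        proxy_pass http://999.000.111.222;
        proxy_set_header Host $host;
        # $proxy_add_x_forwarded_for appends this hop to the XFF chain
        # that proxy 1 started, instead of overwriting it.
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```

On the web server, trust the chain only from the second proxy's IP (e.g. nginx's real_ip module with `set_real_ip_from 555.666.777.888;`), otherwise clients can spoof X-Forwarded-For.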
Nginx 1.18 Proxy: Backend response alignment issues with Apache 2.4 (Leaking adjacent response headers)
8 May 2026 @ 6:30 am
I am troubleshooting a production issue where my Nginx 1.18 reverse proxy appears to be desynchronizing from an Apache 2.4.41 backend.
The Problem:
During high-concurrency testing or when using specifically crafted Transfer-Encoding: chunked requests, the Nginx frontend seems to lose track of request/response boundaries. I am seeing cases where the response for Request B contains the headers or body fragments intended for Request A (which was an internal-only GET request).
Specifically, I’ve seen Base64-encoded CSS strings from the backend appear in the body of an unrelated 404 response.
My Setup:
Frontend: Nginx 1.18.0 (Ubuntu)
Backend: Apache 2.4.41 via proxy_pass
Protocol: HTTP/1.1 with Keep-Alive enabled.
The Goal:
I need the backend to strictly isolate responses so that smuggled or malformed requests in the pipeline don't "bleed" into the next legitimate response.
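Cross-request bleed like this is the classic signature of HTTP request smuggling across reused backend connections, so the bluntest mitigation is to stop reusing them until both parsers agree. A config sketch for the nginx side (the upstream name is a placeholder):

```nginx
location / {
    proxy_pass http://backend;
    # Speak unambiguous HTTP/1.1 to Apache and close the backend
    # connection after every request, so a desynchronized parser
    # cannot attribute leftover bytes to the next request.
    proxy_http_version 1.1;
    proxy_set_header Connection "close";
    proxy_set_header Host $host;
}
```

This narrows the window but doesn't remove the bug: nginx 1.18.0 and Apache 2.4.41 are both several years old and predate a number of published request-smuggling fixes, so upgrading both ends is the actual isolation guarantee.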
How can we fix SQL Server performance? [closed]
8 May 2026 @ 2:38 am
What's the best way to handle SQL Server performance drops?
We have SQL Server 2008 R2, migrated from SQL Server 2000. The database is in SQL Server 2000 compatibility mode. The server has 32 GB RAM, and is 10 years old.
The performance decreases day by day. RAM is 90% used, CPU usage is 10% to 30%, the database is 50 GB.
I asked Anthropic's AI, and it gave me this SQL script:
SELECT TOP 20
    wait_type,
    wait_time_ms / 1000 AS wait_time_seconds,
    waiting_tasks_count,
    signal_wait_time_ms / 1000 AS signal_wait_seconds
FROM sys.dm_os_wait_stats
WHERE wait_type NOT IN (
    'SLEEP_TASK',
    'BROKER_TASK_STOP',
    'SQLTRACE_BUFFER_FLUSH',
    'CLR_AUTO_EVENT',
    'CLR_MANUAL_EVENT',
    'LAZYWRITER_SLEEP',
    'RESOURCE_QUEUE',
    'SLEEP_SYSTEMTASK',
    'WAITFOR',
    'LOGMGR_QUEUE',
    'CHECKPOINT_QUEUE',
    'REQUEST_FOR_DEADLOCK_SEARCH'
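Before chasing wait stats, it's worth ruling out that the "90% RAM used" is simply the buffer pool doing its job, which is normal and not a fault. A sketch of two quick checks on 2008 R2 (run in SSMS; left uncapped, SQL Server will happily take nearly all 32 GB):

```sql
-- Is max server memory capped, or at the default (2147483647 MB)?
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)';

-- Page life expectancy: sustained low values (a few hundred seconds)
-- indicate real buffer-pool pressure rather than a memory "leak".
SELECT cntr_value AS page_life_expectancy_sec
FROM sys.dm_os_performance_counters
WHERE counter_name = 'Page life expectancy'
  AND object_name LIKE '%Buffer Manager%';
```

Given the database is still in SQL Server 2000 (level 80) compatibility mode, raising the compatibility level and updating statistics is also a common, low-risk win on migrated databases.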
Decoding the 22-char salt of a password (PHP/MySQLi) [closed]
7 May 2026 @ 3:24 pm
This is my current code:
if (!$row['is_verified']) {
    $message = 'Verify your email first.';
} elseif (password_verify($postPass, base64_decode($row['PassPhrase1']))) {
This decodes the salt of the password using base64_decode (the salt is the 22-char long REMEMBER VARCHAR(22) of the password)
But it does not decrypt the actual hash that it is stored with, that was created using password_hash("Code_of_Conduct", PASSWORD_ARGON2ID);
Thanks in advance!
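The premise here is the problem: password_verify() expects the full, unmodified string that password_hash() produced, and there is no salt to decode, since argon2id embeds its salt and parameters inside the hash string itself. Base64-decoding a 22-character column fragment can never verify. A corrected sketch (column and variable names assumed; requires PHP built with Argon2 support):

```php
<?php
// Store the FULL output of password_hash() - argon2id strings are roughly
// 97 characters, so use a VARCHAR(255) column - and pass it to
// password_verify() untouched. The string already embeds the algorithm,
// salt and cost parameters, e.g.:
//   $argon2id$v=19$m=65536,t=4,p=1$<salt>$<digest>
$hash = password_hash('Code_of_Conduct', PASSWORD_ARGON2ID);

var_dump(password_verify('Code_of_Conduct', $hash)); // true
var_dump(password_verify('wrong-password', $hash));  // false
```

Nothing is "decrypted" at verify time: password_verify() re-runs argon2id on the candidate password with the embedded salt and parameters and compares the digests.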
How do I get dnssec auto policy signing to output readable files?
7 May 2026 @ 1:37 pm
By default, dnssec automatic signing produces 'raw' files as output. These are unreadable binary files.
I do not care about the couple of extra megabytes the normal text-format output takes, and being able to tell what my DNS server is broadcasting by cat-ing a file to the terminal, rather than using complicated tools and online checkers, makes the crazy complexity of DNSSEC a little less brain-mushifying. How do I get it to output the signed zone in a format humans can read?
I.e.: By default it does the automated 'semi-equivalent' (this command doesn't work, I don't know one that does*, the records are missing their values, but I hope I get the point across; I'm not interested in manually signing but I am interested in readable output) of
cd /var/named/run-root/var/
dnssec-signzone -O raw -S -K keys/site.com site.com Ksite.com.+014+37707.key
but I want the equivalent of:
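Assuming this is BIND with dnssec-policy: inline signing writes the signed zone in raw format by default, but you can either convert a copy on demand or tell named to keep the file as text. A sketch (paths follow the question's chroot layout):

```shell
# One-off conversion of the raw signed zone to readable text:
named-compilezone -f raw -F text -o site.com.signed.txt \
    site.com /var/named/run-root/var/site.com.signed

# Or make named maintain the signed file in text form permanently,
# in the zone block of named.conf:
#   zone "site.com" {
#       ...
#       masterfile-format text;
#   };
```

The `masterfile-format text;` route costs a little disk and reload time but gives you exactly the cat-able file you're after; the named-compilezone route leaves the running configuration untouched.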
Certificates for https [closed]
7 May 2026 @ 11:02 am
What is the best way to get certificates for HTTPS on Linux nowadays?
I need a free certificate for a public web site (with an API on a subdomain).
It would be good if the tool did not need root access.
Edited
Dear moderators, please answer my question before closing it.
Closing without an answer is rude.
Any IT question can be classified as an off-topic product recommendation.
For example, if you ask about nginx, that is a recommendation of nginx.
If you ask about Apache, that is a recommendation of Apache.
=====
I'm asking about a client and its setup on the server to get a free HTTPS certificate.
I want to know the possible options.
Not self-signed; free; covering a subdomain; with limited access on the server.
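One setup that fits all four constraints is an ACME client with Let's Encrypt. A sketch using acme.sh (an assumption on my part; certbot, lego and dehydrated are alternatives), which is plain shell and runs fine as a dedicated non-root user; the domains and paths below are placeholders:

```shell
# Install per-user, no root needed:
curl https://get.acme.sh | sh -s email=admin@domain.example

# Issue one certificate covering the site and the API subdomain,
# using webroot validation (only needs write access to the webroot):
~/.acme.sh/acme.sh --issue -d www.domain.example -d api.domain.example \
    -w /var/www/html

# Copy the cert where the web server expects it; only the reload
# command needs elevated rights, which can be a narrow sudoers entry:
~/.acme.sh/acme.sh --install-cert -d www.domain.example \
    --key-file /home/acme/certs/key.pem \
    --fullchain-file /home/acme/certs/fullchain.pem \
    --reloadcmd "sudo systemctl reload nginx"
```

Renewal is handled by the cron job acme.sh installs for its own user, so the only privileged step in steady state is the reload command.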