explainshell.com

Rating: 6.0/10 (1 vote cast)

Match Linux command-line arguments to view their help text.

stackshare.io

Rating: 8.0/10 (1 vote cast)

Dev / Production stacks for all to see. Handy tool to see what software is trending today.

aws.amazon.com

Rating: 7.7/10 (3 votes cast)

Amazon’s cloud computing & web hosting service.

Apache Spark troubleshooting and upgrade agents now available as Kiro powers

3 April 2026 @ 9:49 pm

The Apache Spark troubleshooting agent and upgrade agent for Amazon EMR are now available as Kiro powers, bringing one-click access to AI-assisted Spark operations directly in Kiro. With these powers, data engineers can reduce troubleshooting time from hours to minutes and compress Spark version upgrades from months to weeks. When a Spark job fails, the troubleshooting power identifies the root cause by analyzing logs, metrics, and configurations across EMR on EC2 and EMR Serverless, and provides specific code recommendations for PySpark applications. The upgrade power automates Spark version upgrades, such as moving from EMR 6.5 to EMR 7.12, by handling code transformation and dependency resolution through remote validation and data quality comparison on EMR. Both powers connect to Spark agents through MCP Proxy for AWS with IAM role-based authentication, and all actions are recorded in AWS CloudTrail for full auditability.

AWS Glue Schema Registry is now available in three more AWS regions

3 April 2026 @ 9:04 pm

You can now use the AWS Glue Schema Registry, a serverless and free feature of AWS Glue, in the Asia Pacific (Jakarta), Europe (Spain), and Europe (Zurich) regions to validate and control the evolution of streaming data using registered Apache Avro, JSON, and Protobuf schema formats. The Schema Registry acts as a centralized repository for managing data format and structure between decoupled applications in data streaming systems. By using it, you can eliminate data validation logic and cross-team coordination, improve streaming data quality, and reduce downstream application failures. Through Apache-licensed serializers and deserializers, the Schema Registry integrates with C# and Java applications developed for Apache Kafka/Amazon Managed Streaming for Apache Kafka, 

Amazon SageMaker Data Agent introduces charting capabilities and support for materialized views

3 April 2026 @ 8:30 pm

Amazon SageMaker Data Agent now supports interactive charting, SQL analytics on Snowflake data sources, and materialized view management in Amazon SageMaker Unified Studio notebooks. Data Agent now provides a complete analytics workflow that goes beyond code generation, enabling you to explore AWS and external data sources, visualize results, and optimize query performance, all with natural language prompts. You can ask "plot monthly revenue trends by region for 2025" and Data Agent generates an interactive chart directly in your notebook, where you can hover over data points and modify the chart without writing code. When your analysis spans AWS and Snowflake, you can query Snowflake tables through external connections and join them with your AWS Glue Data Catalog data in a single prompt. Additionally, you can ask "analyze my notebook and suggest which queries would benefit from materialized views" and the agent recommends optimizations based on your query patterns, creates the view

Amazon Bedrock Guardrails announces general availability of cross-account safeguards

3 April 2026 @ 7:15 pm

Amazon Bedrock Guardrails now enables centralized enforcement of safety controls across all AWS accounts within an organization through cross-account safeguards. Amazon Bedrock Guardrails offers configurable safeguards that help block up to 88% of harmful multimodal content from both input prompts and model responses, while filtering hallucinated responses from foundation models. Central security teams and administrators can now automatically implement these controls for all foundation model interactions in Amazon Bedrock across their organization, eliminating the operational overhead of manually configuring guardrails for each account. With cross-account safeguards, you can specify a guardrail ID from your management account in a new Amazon Bedrock policy that automatically enforces configured safeguards across all member entities including organizatio

Partner Revenue Measurement now supports AWS Marketplace Metering for certain AWS Marketplace products

3 April 2026 @ 6:55 pm

Today, AWS announces the launch of Partner Revenue Measurement integration with AWS Marketplace Metering for Amazon Machine Image (AMI) and Machine Learning (ML) products listed in AWS Marketplace. Partner Revenue Measurement allows Partners to better understand their AWS revenue impact and product consumption patterns. The AWS Marketplace Metering capability automatically measures AWS service consumption when customers purchase and use AMI and ML products via AWS Marketplace. Partners can now gain visibility into how their solutions impact Amazon Elastic Compute Cloud (Amazon EC2) and Amazon SageMaker AI service consumption across partner-managed and customer-managed accounts. This method complements Partner Revenue Measurement’s Resource Tagging and User Agent string capabili

Partner Revenue Measurement now supports User Agent string for certain AWS services

3 April 2026 @ 6:55 pm

Today, AWS announces the general availability of Partner Revenue Measurement User Agent string — a new capability that enables AWS Partners to measure AWS service consumption driven by their solutions using AWS APIs and SDKs. Partner Revenue Measurement allows Partners to better understand their AWS revenue impact and product consumption patterns. The User Agent string capability allows Partners to embed a unique product code from their AWS Marketplace listing as a user agent to quantify and measure the AWS revenue impact of that solution across certain services. Partners can now add a user agent (format APN_1.1/pc_<AWS Marketplace product-code>$) in their application to enable AWS service consumption measurement by solution across partner-managed and customer-managed accounts. Partners can also set an environment variable in their SDKs or configure a setting in their AWS shared configuration file to automatically apply the User Agent string to all AWS service
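A minimal sketch of building that string in Python, using the format the announcement documents. The product code `prod-1234abcd` is made up for illustration; the boto3/botocore `user_agent_extra` option shown in the comment is a real SDK setting, but whether it is the sanctioned channel for this capability is an assumption to verify against the Partner Revenue Measurement documentation:

```python
def prm_user_agent(product_code: str) -> str:
    """Build the documented User Agent suffix: APN_1.1/pc_<product-code>$"""
    return f"APN_1.1/pc_{product_code}$"

# "prod-1234abcd" is a hypothetical AWS Marketplace product code.
ua = prm_user_agent("prod-1234abcd")
print(ua)  # APN_1.1/pc_prod-1234abcd$

# One way to attach a custom suffix to every SDK request (real botocore option,
# shown here as a sketch only):
#   from botocore.config import Config
#   client = boto3.client("s3", config=Config(user_agent_extra=ua))
```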

AWS Secrets Manager console now supports custom input for AWS KMS keys

3 April 2026 @ 6:00 pm

AWS Secrets Manager console now allows you to specify a custom customer managed AWS Key Management Service (KMS) key when creating secrets. You can now provide a KMS key Amazon Resource Name (ARN) directly in the console, in addition to selecting from the pre-populated list of KMS keys in your current account. Previously, when creating a secret through the AWS Secrets Manager console, you could only select customer managed KMS keys from a dropdown list that displayed keys within the same AWS account. With this enhancement, you can now enter a KMS key ARN to use a key from a different account, aligning the console experience with the existing API capabilities. This simplifies cross-account encryption workflows and provides greater flexibility in managing your encryption keys across multiple accounts. This feature is available in all AWS Regions where AWS Secrets Manager is available. To learn more about using customer managed KMS keys with AWS Secrets Manager, visit t

Amazon CloudWatch introduces PromQL querying with Query Studio Preview

3 April 2026 @ 7:00 am

Amazon CloudWatch announces Query Studio in public preview, a unified query and visualization experience that brings native PromQL querying to CloudWatch for the first time. Query Studio combines PromQL and CloudWatch Metric Insights in a single interface, enabling you to query AWS vended metrics and OpenTelemetry metrics using the language you prefer without switching between consoles. Query Studio provides a visual form builder with autocomplete and a code editor with syntax highlighting, making it accessible to both new and experienced users. For example, a team running applications on Amazon EC2 can correlate their custom OpenTelemetry application metrics with EC2 vended metrics side by side, quickly spot issues across their stack, and create alarms or add charts to dashboards directly from their query results. Amazon CloudWatch Query Studio is available in public preview in US East (N. Virginia), US West (Oregon), Asia Pacific (Sydney), Asia Pacific (Singapore

Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity

2 April 2026 @ 9:00 pm

Amazon ElastiCache Serverless now supports IPv6 and dual stack connectivity, expanding beyond the IPv4 connectivity that was previously available. This gives you greater flexibility in how your applications connect to your Serverless caches. When creating an ElastiCache Serverless cache, you can now choose from three network type options — IPv4, IPv6, or dual stack. With dual stack connectivity, your cache accepts connections over both IPv4 and IPv6 simultaneously, making it ideal for migrating to IPv6 gradually while maintaining backward compatibility with applications connecting over IPv4. IPv6 connectivity enables you to use IPv6-only subnets with your Serverless caches, eliminating the need for IPv4 addresses and helping you meet compliance requirements for IPv6 adoption. IPv6 and dual stack connectivity for ElastiCache Serverless is available in all AWS Regions, including

Amazon CloudWatch launches OTel Container Insights for Amazon EKS (Preview)

2 April 2026 @ 8:41 pm

Amazon CloudWatch introduces Container Insights with OpenTelemetry metrics for Amazon EKS, available in public preview. Building on the existing Container Insights experience, this capability provides deeper visibility into EKS clusters by collecting more metrics from widely adopted open source and AWS collectors and sending them to CloudWatch using the OpenTelemetry Protocol (OTLP). Each metric is automatically enriched with up to 150 descriptive labels, including Kubernetes metadata and customer-defined labels such as team, application, or business unit. Curated dashboards in the Container Insights console present cluster, node, and pod health with the ability to aggregate and filter metrics by instance type, availability zone, node group, or any custom label. For deeper analysis, customers can write queries using the Prometheus Query Language (PromQL) in CloudWatch Query Studio. The CloudWatch Observability EKS add-on provides one-click installation through the Amazon EK

networkworld.com

Rating: 6.0/10 (1 vote cast)

Information, intelligence and insight for Network and IT Executives.

French government take Bull by horns for €404 million

3 April 2026 @ 4:46 pm

French supercomputer company Bull is under new ownership. IT outsourcer Atos sold Bull to the French government this week for a mere €404 million (around $460 million), after acquiring the company for €620 million (then $860 million) in May 2014. The government was acting to protect national security interests: Bull’s supercomputers are used by the French national nuclear weapons research laboratory, CEA-DAM. It’s the second time that Bull has been nationalized: the first, in 1982, was to save it from bankruptcy. Atos has had financial troubles of its own. In August 2024, it tried — and failed —

CERT-EU blames Trivy supply chain attack for Europa.eu data breach

3 April 2026 @ 4:37 pm

The European Union’s Computer Emergency Response Team, CERT-EU, has traced last week’s theft of data from the Europa.eu platform to the recent supply chain attack on Aqua Security’s Trivy open-source vulnerability scanner. The attack on the AWS cloud infrastructure hosting the Europa.eu web hub on March 24 resulted in the theft of 350 GB of data (91.7 GB compressed), including personal names, email addresses, and messages, according to

Cisco: Latest news and insights

3 April 2026 @ 12:47 pm

Cisco (Nasdaq:CSCO) is the dominant vendor in enterprise networking, and under CEO Chuck Robbins, it continues to shake things up.  Cisco is focusing on strategic AI initiatives and partnerships across various regions to build and power AI data centers and ecosystems. This includes collaborations with major players like BlackRock, Global Infrastructure Partners, Microsoft and Nvidia to drive investment and scale AI infrastructure. The networking giant continue

Cisco fixes critical IMC auth bypass present in many products

2 April 2026 @ 10:32 pm

Cisco has released patches for a critical vulnerability in its out-of-band management solution, present in many of its servers and appliances. The flaw allows unauthenticated remote attackers to gain admin access to the Cisco Integrated Management Controller (IMC), which gives administrators remote control over servers even when the main OS is shut down. The vulnerability, tracked as CVE-2026-20093, stems from incorrect handling of password changes and can be exploited by sending specially crafted HTTP requests. This means servers with their IMC interfaces exposed directly to the local network — or worse, to the internet — are at immediate risk. The Cisco IMC is a baseboar

Kyndryl service targets AI agent automation, security

2 April 2026 @ 9:53 pm

A newly launched service from Kyndryl is designed to help businesses automate and control agentic workflows across the enterprise. The Agentic Service Management package combines a maturity model, structured assessments, implementation blueprints, and a phased roadmap aligned to emerging standards, including ISO 42001, in a single service delivered by the Kyndryl Consult division. With the service, customers get an evaluation of their organization’s AI implementations to spot gaps across service management, AI governance, security

Google Research touts memory-compression breakthrough for AI processing

2 April 2026 @ 9:43 pm

Memory prices are falling, and stock prices of memory companies took a hit, following news from Google Research of a breakthrough that will greatly reduce the amount of memory needed for AI processing. AI is notorious not only for processing requirements but also for high memory requirements. Vast amounts of memory are needed to process large language models and perform inferencing, which has led to a considerable shortage of available memory as AI data centers have consumed all of the supply. Enter Google Research and

Why can’t we have nice routers anymore?

2 April 2026 @ 9:02 pm

The Trump-dominated FCC is under the delusion that it can magically restore US-made Wi-Fi manufacturing by blocking all foreign-made consumer routers. The Federal Communications Commission (FCC) banned essentially all new model consumer Wi-Fi routers built outside the US when there are no—none, nada—US router OEMs. In case you haven’t noticed, the US gave up manufacturing tech goods ages ago because American workers were always annoyingly demanding a living wage. The nerve of some people! According to the FCC, this must be done because (drum r

Amazon Middle East datacenter suffers second drone hit as Iran steps up attacks

2 April 2026 @ 9:01 pm

Iranian drones have targeted Amazon’s largest Middle East datacenter in Bahrain for the second time in a month, part of what appears to be a planned strategy to disrupt the region’s digital economy. According to press reports, the ME-SOUTH-1 (Bahrain) AWS site, operated by telecom company Batelco, was hit by the latest drone attack on April 1. Bahrain’s interior minister confirmed to the FT that the attack had caused a fire. On April 2, the

New tool on AWS makes it easier to develop quantum error correction

2 April 2026 @ 1:06 pm

Google just moved up its timeline for quantum computers to 2029 because of improvements in quantum computer hardware, quantum error correction, and algorithms. In 2019, Google estimated it would take 20 million qubits to break RSA encryption. By May of 2025, Google revised those estimates down to 1 million. This February, researchers at Australia’s Iceberg Quantum said in a pre-print report that only 100,000 physical qubits we

IBM, Arm team up to bring Arm software to IBM Z mainframes

2 April 2026 @ 12:08 pm

IBM and Arm have announced a plan to develop hardware that can run both IBM and Arm-based workloads, to let Arm software run on IBM mainframes. The two companies plan to work on three things: building virtualization tools so Arm software can run on IBM platforms; making sure Arm applications meet the security and data residency rules that regulated industries must follow; and creating common technology layers so enterprises have more software options across both platforms, IBM said in a statement.

forensicswiki.org

Rating: 8.0/10 (1 vote cast)

Computer forensic tools and techniques used by investigators

cyberciti.biz

Rating: 6.0/10 (2 votes cast)

Online community of new and seasoned Linux / Unix sysadmins.


heartinternet.co.uk

Rating: 8.3/10 (3 votes cast)

Hosting packages for an initial web presence

How to Check for Available Domains

31 March 2026 @ 1:48 pm

The post How to Check for Available Domains appeared first on Heart Internet.

SSL Certificates are changing. Here’s what you need to know.

17 March 2026 @ 10:12 am

The rules around SSL certificates are changing across the whole internet. The good news is that for most customers, very little will change on your side. This is an industry-wide...

Hosting VPS Linux vs Windows VPS

9 March 2026 @ 3:03 pm

The post Hosting VPS Linux vs Windows VPS appeared first on Heart Internet.

Domain Name Transfer Checklist: Everything You Need to Know

3 March 2026 @ 2:56 pm

The post Domain Name Transfer Checklist: Everything You Need to Know appeared first on Heart Internet.

Heart Internet Win Gapstars Innovation Award 2026

23 February 2026 @ 11:57 am

We’re incredibly proud to celebrate our Site Reliability Engineering team, who have won the Gapstars Innovation Award for their outstanding work improving platform stability, security, and visibility across our shared...

A/B Testing Explained: A Practical Guide To Better Results | Part 1

20 February 2026 @ 8:32 am

If you want to improve your website you probably need to do A/B testing, otherwise known as split testing. Instead of guessing, A/B testing allows you to experiment more scientifically...

How to enable two-factor authentication (2FA) on your Heart Internet account

28 January 2026 @ 12:37 pm

Account security matters, and switching on two-factor authentication (2FA) is a quick win. 2FA adds a second check during the sign-in process, so even if someone compromises your password, they still can’t get in. To enable 2FA: Step 1: Open your...

How to Choose the Perfect Domain Name for Your Business

9 July 2025 @ 9:30 am

Get Your Name Right – The Internet Never Forgets Choosing a domain name might sound simple – until you realise it’s the online equivalent of naming your child. No pressure...

What is a VPS? And is it Time You Got One?

25 June 2025 @ 9:30 am

Discover what a VPS server is, how VPS hosting works, and why it’s ideal for small businesses. Learn the benefits and explore VPS plans with Heart Internet.

We’re Now Certified by the Green Web Foundation

11 June 2025 @ 9:30 am

💚 Hosting that works hard, treads lightly. Big news: Heart Internet is now officially listed with the Green Web Foundation. That means our hosting services are recognised as being...

serverfault.com

Rating: 6.0/10 (1 vote cast)

Common Server issues – FAQs and answers from those in the know

How can I block all but Cloudflare IPs to certain websites on one server but allow unfettered access to others

2 April 2026 @ 11:02 pm

The only way I can see of doing it is using ipSecurity in web.config, but it's not working; it either errors with 500 or allows everything through. I've tried to use the firewall, but Windows Firewall will block access to all sites, and I have one site that needs to stay open. What solutions are there? I've tried to put this in web.config, but I get a 500 error:

<security>
  <ipSecurity allowUnlisted="true">
    <clear/>
    <add ipAddress="173.245.48.0/20" allowed="true" />
    <add ipAddress="103.21.244.0/22" allowed="true" />
    <add ipAddress="103.22.200.0/22" allowed="true" />
    <add ipAddress="103.31.4.0/22" allowed="true" />
    <add ipAddress="141.101.64.0/18" allowed="true" />
    <add ipAddress="108.162.192.0/18" allowed=&
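A 500 at this point is classically caused by the ipSecurity section being locked at the server level, or by the IP and Domain Restrictions role feature not being installed; another thing worth checking is that IIS's ipSecurity historically takes an ipAddress plus subnetMask pair rather than CIDR notation. These are assumptions to verify, not a confirmed diagnosis. A minimal well-formed sketch (ranges abbreviated; the real list must come from Cloudflare, and for a block-all-but policy allowUnlisted should be false):

```xml
<!-- web.config: allow only the listed ranges, deny everyone else -->
<configuration>
  <system.webServer>
    <security>
      <!-- allowUnlisted="false" denies every address not explicitly allowed -->
      <ipSecurity allowUnlisted="false">
        <clear />
        <!-- subnetMask form of 173.245.48.0/20 -->
        <add ipAddress="173.245.48.0" subnetMask="255.255.240.0" allowed="true" />
        <!-- ...remaining Cloudflare ranges... -->
      </ipSecurity>
    </security>
  </system.webServer>
</configuration>
```

If the section is locked, unlocking it at the server level (e.g. via appcmd's unlock config on system.webServer/security/ipSecurity, or in IIS Manager's Configuration Editor) is usually required before a site-level web.config may set it.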

Spamassassin meta rule and URIBL

2 April 2026 @ 5:30 pm

I'm trying to understand a behavior of SpamAssassin meta rules. If I set up a custom rule like this:

header __LOCAL_SOME_CONDITION xxxxxxxx
meta SOME_META (__LOCAL_SOME_CONDITION && !URIBL_ABUSE_SURBL)
describe SOME_META test meta rule
score SOME_META -10

The goal is to avoid some false positives when the condition matches but the URIBL check is false; details about the false positives are not the point of the question. But what I get in the mail header is:

1.9 URIBL_ABUSE_SURBL
-10 SOME_META

And that makes no sense: how can SOME_META match if URIBL_ABUSE_SURBL matches? The only suggestion, found by a colleague with ChatGPT, is that URIBL_ABUSE_SURBL is a DNS check and runs async, so when the SOME_META check runs (before the end of the async call) URIBL_ABUSE_SURBL is still false. This makes sense, but checking the SpamAssassin docs/googling about meta rules I wasn't ab

Return custom status code using php http_response_code() on 404 pages nginx

2 April 2026 @ 1:50 pm

I am using /default.php for 404 pages, and returning status code 200 using this config:

server {
    listen [::]:443 ssl;
    server_name www.mywebsite.com;
    ssl_certificate /.cert/cert.pem;
    ssl_certificate_key /.cert/key.pem;
    root /usr/share/nginx/html;
    index index.php;
    error_page 404 =200 @defaultblock;
    location @extensionless-php {
        rewrite ^(.*)$ $1.php last;
    }
    location @defaultblock {
        try_files $uri $uri/ /default.php$is_args$args;
    }
    location / {
        try_files $uri $uri/ @extensionless-php;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_pass 127.0.0.1:9000;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }
}

However, I want to return the status code using the PHP http_response_code() or header() function. Current
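One nginx-level detail worth noting here, as a sketch rather than a confirmed fix for this exact setup: `error_page 404 =200 @defaultblock;` pins the response code to 200 before PHP ever runs, so anything set via http_response_code() is overridden. nginx's documented bare `=` form (no code after it) instead passes through whatever status the internally redirected FastCGI response returns:

```nginx
# Bare "=" means: use the status code returned by the redirected
# request itself (here, default.php served via FastCGI).
error_page 404 = @defaultblock;

location @defaultblock {
    try_files $uri $uri/ /default.php$is_args$args;
}
```

With that in place, a call such as http_response_code(410) inside default.php should reach the client (410 is just an illustrative value). Whether this interacts cleanly with the extensionless-PHP rewrite in the original config is untested here.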

HAProxy & Percona XtraDB data loss

2 April 2026 @ 12:40 pm

We have a setup with two proxies and four MySQL servers. Our data guys do not want their Oozie database on Percona. We left it as is for years. Last year I moved it over from the single server it currently sits on, and kept an eye on the tables. After I saw records being written in real time, I left it in place. A day later I got a message stating data was missing. It transpires that some transaction data was not being written, but it appears that the transaction may have been started. I moved the DB back to a single host. I reviewed the Percona setup and the proxy. The parameters seem to fall into line with the examples I could find. Client and server timeouts are 60 minutes and the proxy balance is roundrobin ("Each server is used in turns, according to their weights."). I am mostly ignorant about HAProxy, but I have to assume that once the connection is made, all data with the same (session?) identification information would be purely between the sour

Quota limit reached - but I only have 1 schedule (Compute Engine API: Disk Snapshot Schedules)

2 April 2026 @ 11:14 am

I just noticed that I'm hitting the 20/20 quota for Compute Engine API: Disk Snapshot Schedules. But I only ever created a single snapshot schedule and attached it to 18 persistent disks. It clearly isn't counting the schedules (I have 1), but it's way too low to be related to disk backups (a 20-disk limit would be very low). What is that quota about?

Delete Fails when Windows NFS mounted on Linux vm

30 March 2026 @ 5:37 pm

I have a Windows NFS setup with AD, and on mounting it in a Linux VM I'm able to create and edit files, but not able to delete them. The user seems to be correctly mapped, but delete fails. I have even given the user full control, but it still doesn't work. Can someone help me understand the possible causes? Delete works correctly on the Windows machine. Edit: this is how I have reproduced the issue. Assigned my user full control permissions to the share; in this case I was able to create, edit, and delete on both the server and the Linux VM. Updated the user to just have RX; even in this case I was able to create, edit, and delete on both Windows and Linux, because the user is part of BUILTIN\Users and this group has permissions. Didn't change anything, just unmounted the share on the Linux VM and restarted the Windows server (complete restart). In this case it works on wi

Nginx on Debian 13 serving Moodle

28 March 2026 @ 5:32 pm

So I am configuring nginx on a new server. The first part of the install was working great, but now I am getting errors related to the JS files. I tried everything I read online. I could really use your help. My nginx conf file:

server {
    #listen 80;
    #listen [::]:80;
    listen 443 ssl; # managed by Certbot
    listen [::]:443 ssl ipv6only=on; # managed by Certbot
    ssl_certificate /etc/letsencrypt/live/subdomainhere/fullchain.pem; # managed by Certbot
    ssl_certificate_key /etc/letsencrypt/live/subdomainhere/privkey.pem; # managed by Certbot
    include /etc/letsencrypt/options-ssl-nginx.conf; # managed by Certbot
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem; # managed by Certbot
    server_name subdomainhere www.subdomainhere;
    root /var/www/html/moodle;
    index index.html index.htm index.php;
    add_header 'Content-Security-Policy' 'upgrade-insecure-requests';
    location / {
        try_files $uri $uri/ =404;
    }
    # PHP-FPM

Is it possible to nest HAPROXY settings (defaults)?

25 March 2026 @ 12:19 pm

The HAProxy documentation states that a named defaults section is possible; the anonymous defaults is used whenever a named one is not referenced. If we have errorfile xxx /etc/haproxy/errors/errorsxxx.http for various error codes in our defaults (or even in a separate defaults http), and if we have also defined, e.g., defaults impala with various settings for that specific service, could defaults impala contain defaults http? Or could a proxy pull in more than one defaults section within the config? Otherwise a lot of duplication is likely to occur.

DNS fails to resolve records for _msdcs.xyz.lan zone

26 June 2025 @ 4:12 pm

I have two Windows Server 2019 DNS servers and Domain Controllers connected with a site-to-site VPN, and a client in a third location. The client can resolve the FQDN and hostname values for the servers. Dcdiag shows the DNS servers are clean. The _ldap._tcp.dc._msdcs.xyz.lan record exists on the DNS servers, and is resolvable and pingable on the domain controllers. nslookup for _ldap._tcp.dc._msdcs.<domain>.lan from the client fails. I see queries to the root servers (a.root-servers.net). Wireshark shows the query went to the correct DNS server, but the DNS server returns non-existent domain. This is preventing computers from joining the domain. I'm not using external forwarders or DNS servers. All other records for the domain resolve. What is puzzling is that in the DNS server there are two zones: xyz.lan, with a single _msdcs stub that contains nothing else, and _msdcs.xyz.lan, which there ar

High I/O NFS writes causes system hang

16 April 2025 @ 9:19 pm

Downloading from my NAS using NFS works at low volume, but during a large file transfer the system hangs and is non-responsive to input over SSH or the GUI. After the transfer is complete, it acts normal again. The system doesn't reboot or crash. However, when copying files using SFTP/SMB/SCP/rsync, the problem does not occur, only with NFS. fstab:

XX.XX.XX.XX:/mnt/BigMomma /mnt/BigMomma nfs auto,hard,intr,vers=4.2,rsize=4096,wsize=4096,noatime,fsc,rdirplus,tcp,actimeo=1800

Running Linux Mint 22.2, NFS to TrueNAS over a WireGuard site-to-site tunnel. Small files work better than large ones when operating over WAN/VPN. The optimal size is 1396, which is the Ethernet packet size minus various overhead, but then IOPS increases. Anyone know the cause?
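Not a confirmed diagnosis, but one thing that stands out in that fstab line: rsize/wsize of 4096 is far below what NFSv4.2 clients normally negotiate (commonly up to 1048576 bytes), and tiny block sizes over a WireGuard WAN multiply round trips. A sketch of a mount entry with larger buffers to test against the hang; the values are assumptions to tune, not known-good settings for this NAS:

```
# Larger rsize/wsize; "intr" dropped (a no-op on modern kernels);
# "fsc" dropped unless cachefilesd is actually running.
XX.XX.XX.XX:/mnt/BigMomma /mnt/BigMomma nfs auto,hard,vers=4.2,rsize=1048576,wsize=1048576,noatime,tcp,actimeo=1800 0 0
```

If the hang persists, comparing `nfsstat` and the effective mount options in /proc/mounts before and after would show whether the server is clamping the negotiated sizes.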

poundhost.com

Rating: 6.7/10 (3 votes cast)

Cheap dedicated server hosting

tagadab.com

Rating: 8.0/10 (1 vote cast)

Cheap developer VPS hosting from £10