explainshell.com

Rating: 6.0/10 (1 vote cast)

Match Linux command-line arguments to their help text.

stackshare.io

Rating: 8.0/10 (1 vote cast)

Dev / Production stacks for all to see. Handy tool to see what software is trending today.

aws.amazon.com

Rating: 7.7/10 (3 votes cast)

Amazon’s cloud computing & web hosting service.

Amazon SageMaker Unified Studio supports an aggregated view of data lineage

17 March 2026 @ 9:10 pm

Amazon SageMaker Unified Studio now provides an aggregated view of data lineage, displaying all jobs contributing to your dataset. Previously, SageMaker Unified Studio showed the lineage graph as it existed at a specific point in time, which is useful for troubleshooting and investigating specific data processing events. The aggregated view adds a complete picture of data transformations and dependencies across multiple levels of the lineage graph, helping you quickly identify all upstream sources and downstream consumers of your datasets and understand the full scope of jobs impacting them. The aggregated view is available as the default lineage view in Amazon SageMaker Unified Studio.

AWS Blu Insights is now AWS Transform for mainframe refactor

17 March 2026 @ 6:30 pm

AWS Blu Insights capabilities are now available as part of AWS Transform, enabling customers to launch mainframe refactoring projects from the AWS Transform console. This launch unifies all three mainframe modernization patterns (refactor, replatform, and reimagine) within AWS Transform for mainframe. Code transformation is now offered at no cost, replacing the previous lines-of-code-based pricing model. With this launch, you can access AWS Transform for mainframe refactor directly from the AWS Transform console using your existing AWS credentials. The mandatory three-level certification requirement to access the Transformation Center has been removed, lowering the friction of exploring refactor projects. Self-paced training content remains available within the application for those who want to build deeper knowledge. AWS Transform for mainframe refactor is available in 18 AWS Regions.

SageMaker Training Plans now supports extending existing capacity commitments without workload reconfiguration

17 March 2026 @ 6:30 pm

SageMaker Training Plans allows you to reserve GPU capacity within specified time frames in cluster sizes of up to 64 instances. Today, Amazon SageMaker AI announces that Training Plans can now be extended when your AI workloads take longer than anticipated, ensuring uninterrupted access to capacity. You can extend plans in 1-day increments up to 14 days, or 7-day increments up to 182 days (26 weeks). Extensions can be initiated via API or the SageMaker console. Once the extension is purchased, the workload continues to run uninterrupted without you needing to reconfigure it. SageMaker AI helps you create the most cost-efficient training plan that fits within your timeline and AI budget. Once you create and purchase your training plans, SageMaker automatically provisions the infrastructure and runs the AI workloads on these compute resources.
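
The announcement names an API for extensions but does not spell out its shape, so the following is a hypothetical sketch only: the subcommand and both parameters are invented placeholders showing roughly what a 7-day extension call might look like, not the real interface.

# Hypothetical sketch; the actual Training Plans extension API may differ.
aws sagemaker update-training-plan \
    --training-plan-name my-plan \
    --extension-duration-days 7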

Amazon MSK expands Express brokers to Africa (Cape Town) and Asia Pacific (Taipei) regions

17 March 2026 @ 5:00 pm

You can now create provisioned Amazon Managed Streaming for Apache Kafka (Amazon MSK) clusters with Express brokers in the Africa (Cape Town) and Asia Pacific (Taipei) Regions. Express brokers are a new broker type for Amazon MSK Provisioned designed to deliver up to 3x more throughput per broker, scale up to 20x faster, and reduce recovery time by 90% compared to standard Apache Kafka brokers. Express brokers come pre-configured with Kafka best practices by default, support all Kafka APIs, and provide the same low-latency performance that Amazon MSK customers expect, so you can continue using existing client applications without any changes. To get started, create a new cluster with Express brokers through the Amazon MSK console or the AWS CLI, and read our Amazon MSK Developer Guide for more information.
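
For the CLI route, a minimal sketch follows; the express.m7g.large instance type, Kafka version, and the subnet/security-group IDs are illustrative assumptions, so check the Developer Guide for the values supported in each Region.

# Sketch: provisioned MSK cluster with Express brokers in Cape Town via the AWS CLI.
aws kafka create-cluster-v2 \
  --region af-south-1 \
  --cluster-name demo-express \
  --provisioned '{
    "KafkaVersion": "3.6.0",
    "NumberOfBrokerNodes": 3,
    "BrokerNodeGroupInfo": {
      "InstanceType": "express.m7g.large",
      "ClientSubnets": ["subnet-aaaa1111", "subnet-bbbb2222", "subnet-cccc3333"],
      "SecurityGroups": ["sg-0123456789abcdef0"]
    }
  }'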

Amazon Bedrock AgentCore Runtime now supports shell command execution

17 March 2026 @ 4:34 pm

Amazon Bedrock AgentCore Runtime now supports InvokeAgentRuntimeCommand, a new API that lets you execute shell commands directly inside a running AgentCore Runtime session. Developers can send a command, stream the output in real time over HTTP/2, and receive the exit code, without building custom command execution logic in their containers. AI agents often operate in workflows where deterministic operations such as running tests, installing dependencies, or executing git commands need to run alongside LLM-powered reasoning. Previously, developers had to build custom logic inside their containers to distinguish agent invocations from shell commands, spawn child processes, capture stdout and stderr, and handle timeouts. InvokeAgentRuntimeCommand eliminates this undifferentiated work by providing a platform-level API for command execution. Commands run inside the same container, filesystem, and environment as the agent session, and can execute concurrently with agent invocations.
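
The excerpt names the API but not its CLI shape, so the sketch below is hypothetical: the subcommand and parameter names are guesses derived from the InvokeAgentRuntimeCommand name and the existing invoke-agent-runtime call, not the documented interface.

# Hypothetical sketch of running a shell command inside a live session.
aws bedrock-agentcore invoke-agent-runtime-command \
    --agent-runtime-arn arn:aws:bedrock-agentcore:us-east-1:111122223333:runtime/my-agent \
    --runtime-session-id my-session-id \
    --command "git status" \
    response.json
# stdout/stderr are expected to stream over HTTP/2, with the exit code in the
# final event.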

Amazon Corretto 26 is now generally available

17 March 2026 @ 3:00 pm

Amazon Corretto 26, a Feature Release (FR) version, is now available for download. Amazon Corretto is a no-cost, multi-platform, production-ready distribution of OpenJDK. You can download Corretto 26 for Linux, Windows, and macOS from our downloads page. Corretto 26 will be supported through October 2026. Highlights include:
HTTP/3 Support - Java applications can now use the latest HTTP/3 protocol, which is faster and more efficient than older HTTP versions (JEP 517)
Ahead-of-Time Object Caching - Applications can start up faster by pre-caching commonly used objects, working with any garbage collector (JEP 516)
Enhanced Pattern Matching - Developers can write cleaner code when checking...
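
A quick install sketch for Linux x64; the artifact URL follows the pattern Corretto has used for earlier versions and is an assumption here, so confirm it on the downloads page.

# Sketch: fetch and unpack Corretto 26, then confirm the runtime version.
curl -LO https://corretto.aws/downloads/latest/amazon-corretto-26-x64-linux-jdk.tar.gz
tar -xzf amazon-corretto-26-x64-linux-jdk.tar.gz
./amazon-corretto-26*/bin/java -version    # should report an OpenJDK 26 build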

Amazon RDS enhancements for SQL Server Developer Edition

17 March 2026 @ 7:30 am

Amazon Relational Database Service (Amazon RDS) for SQL Server now supports Additional Storage Volumes, Resource Governor, and SQL Server 2019 with SQL Server Developer Edition. SQL Server Developer Edition is an ideal choice for building and testing applications because it includes all the functionality of Enterprise edition and is free of license charges for use as a development and test system, though not as a production server. You can add Additional Storage Volumes to your Amazon RDS for SQL Server Developer Edition instances, which provide up to 256 TiB, 4x more storage. You can also use SQL Server Resource Governor, which lets you manage workload and resource consumption by defining resource pools and workload groups to control CPU and memory usage, enabling more realistic performance testing. Amazon RDS for SQL Server Developer Edition now also supports SQL Server 2019 (CU32 GDR - 15.0.4455.2).
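
A hedged sketch of creating a Developer Edition instance from the CLI; the sqlserver-dev engine identifier and the engine-version string are assumptions modeled on RDS's existing sqlserver-ee/se/ex/web naming, so check the RDS documentation for the real values.

# Hedged sketch; engine name and version string are assumed, not verified.
aws rds create-db-instance \
    --db-instance-identifier dev-test-mssql \
    --engine sqlserver-dev \
    --engine-version 15.00.4455.2.v1 \
    --db-instance-class db.m5.large \
    --master-username admin \
    --manage-master-user-password \
    --allocated-storage 100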

Simplified permissions for Amazon S3 Tables and Iceberg materialized views

17 March 2026 @ 4:00 am

AWS Glue Data Catalog now supports AWS IAM-based authorization for Amazon S3 Tables and Apache Iceberg materialized views. With IAM-based authorization, you can define all necessary permissions across storage, catalog, and query engines in a single IAM policy. This capability simplifies the integration of S3 Tables or materialized views with any AWS Analytics service, including Amazon Athena, Amazon EMR, Amazon Redshift, and AWS Glue. You can also opt in to AWS Lake Formation at any time to manage fine-grained access controls using the AWS Management Console, AWS CLI, API, and AWS CloudFormation. This feature is now available in select AWS Regions. To learn more, visit the S3 Tables documentation and the AWS Glue Data Catalog documentation.
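
As a rough illustration of "all necessary permissions in a single IAM policy", the sketch below attaches one inline policy to a query role; the action names and wildcard resource are indicative placeholders, not a complete or verified permission set.

# Hedged sketch: one policy spanning storage (s3tables) and catalog (glue)
# access; narrow the wildcard Resource in real use.
aws iam put-role-policy \
  --role-name analytics-query-role \
  --policy-name s3tables-unified-access \
  --policy-document '{
    "Version": "2012-10-17",
    "Statement": [{
      "Effect": "Allow",
      "Action": [
        "s3tables:GetTable",
        "s3tables:GetTableData",
        "glue:GetDatabase",
        "glue:GetTable"
      ],
      "Resource": "*"
    }]
  }'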

Amazon Bedrock is now available in Asia Pacific (New Zealand)

17 March 2026 @ 1:35 am

Starting today, customers can use Amazon Bedrock in the Asia Pacific (New Zealand) Region to easily build and scale generative AI applications using a variety of foundation models (FMs) and powerful supporting tools. Amazon Bedrock is a fully managed service that offers a choice of high-performing large language models (LLMs) and other FMs from leading AI companies such as AI21 Labs, Anthropic, Cohere, Meta, Mistral AI, OpenAI, and Stability AI, as well as Amazon, via a single API. Amazon Bedrock also provides a broad set of capabilities customers need to build generative AI applications with security, privacy, and responsible AI built in. These capabilities help you build tailored applications for multiple use cases across different industries, helping organizations unlock sustainable growth from generative AI while maintaining privacy and security.

Amazon CloudWatch introduces organization-wide EC2 detailed monitoring enablement

16 March 2026 @ 11:10 pm

Amazon CloudWatch now allows customers to automatically enable Amazon Elastic Compute Cloud (EC2) detailed monitoring across their AWS Organization. Customers can create enablement rules in CloudWatch Ingestion that automatically enable detailed monitoring for both existing and newly launched EC2 instances matching the rule scope, ensuring consistent metrics collection at 1-minute intervals across their EC2 instances. EC2 detailed monitoring enablement rules can be scoped to the whole organization, specific accounts, or specific resources based on resource tags to standardize the configuration across EC2 instances. For example, a central DevOps team can create an enablement rule to automatically turn on detailed monitoring for EC2 instances with specific tags, e.g., env:production, and ensure Auto Scaling policies respond quickly to changes in instance utilization. CloudWatch's auto-enablement capability is available in all AWS commercial Regions.
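
The enablement rules themselves are configured in CloudWatch; for comparison, this is the existing per-instance call that such a rule effectively automates across an organization (the instance ID is a placeholder).

# Existing single-instance equivalent of what an enablement rule automates:
aws ec2 monitor-instances --instance-ids i-0123456789abcdef0
# Switches the instance to detailed monitoring (1-minute metric granularity).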

networkworld.com

Rating: 6.0/10 (1 vote cast)

Information, intelligence and insight for Network and IT Executives.

Nvidia targets inference as AI’s next battleground with Groq 3 LPX

18 March 2026 @ 1:46 am

2026 is predicted to be the year that AI moves from pilot to production, becoming measurably useful across the enterprise. But while many businesses are ready, the underlying infrastructure doesn’t seem to be, particularly when it comes to next-stage inferencing. Nvidia says it has overcome these limitations, achieving what it calls a “milestone” in accelerated computing. The chip company today unveiled the Nvidia Groq 3 LPX inference accelerator for Vera Rubin GPUs. The combined architecture is optimized for “trillion-parameter models and million-token context” and, Nvidia claims, can deliver up to 35x higher inference throughput per megawatt and up to 10x more revenue...

HPE, Nvidia expand AI partnership

17 March 2026 @ 6:43 pm

HPE and Nvidia have boosted their partnership, adding a new server blade, GPU support, enhancements to HPE’s turnkey private AI package, and services targeting enterprise customers with growing AI workloads. A considerable portion of the HPE-related news coming from Nvidia’s GTC event is aimed at the high end of the AI workload spectrum and targets service providers and neocloud operators. For example, HPE announced its Nvidia Vera Rubin NVL72 rack-scale system.

Nvidia: Latest news and insights

17 March 2026 @ 5:25 pm

With its legacy of innovation in GPU technology, Nvidia...

2026 network outage report and internet health check

17 March 2026 @ 4:51 pm

ThousandEyes, a Cisco company, monitors how ISPs, cloud providers and conferencing services are handling any performance challenges and provides Network World with a weekly roundup of events that impact service delivery. Read on to see the latest analysis, and stop back next week for another update on internet and cloud traffic performance. Note: We have archived prior-year outage updates, including our reports from 2025, 2024...

Cato Networks unveils GPU-powered SASE with native AI security controls

17 March 2026 @ 12:55 pm

Cato Networks this week launched two additions to its secure access service edge (SASE) platform that the company says will address security challenges enterprises are facing now: protecting the AI tools end users rely on, while also using AI to defend against sophisticated threats. Cato Neural Edge deploys Nvidia GPUs across the...

Chip wafer shortage will run through 2030 as AI demand overwhelms supply: SK Hynix chief

17 March 2026 @ 12:29 pm

The global shortage of semiconductor wafers will not ease before the end of the decade, SK Group Chairman Chey Tae-won said, delivering one of the most definitive long-range forecasts yet from the executive of the world’s leading supplier of high-bandwidth memory chips. Speaking to reporters on the sidelines of Nvidia’s GTC Conference in San Jose, California, Chey said the industry faces a wafer deficit of more than 20% and that at least four to five years of capacity building lie ahead before supply can match demand. “The current shortage could continue until 2030,” Chey said.

Why Nvidia’s DGX Rubin NVL8 runs on Intel Xeon 6

17 March 2026 @ 11:22 am

Nvidia has selected Intel’s Xeon 6 processors as the host CPUs for its Nvidia DGX Rubin NVL8 systems. The DGX Rubin NVL8 is part of Nvidia’s next flagship AI system portfolio, designed to help companies accelerate agentic AI adoption. The DGX Rubin NVL8 systems are designed for large-scale AI workloads, combining eight Rubin GPUs with high-bandwidth memory and interconnects to support high-throughput inference and data movement. The systems are powered by Intel Xeon 6776P processors as host CPUs. The platform also uses NVLink technology to enable fast communication between GPUs for parallel processing. The Xeon 6 CPU will provide architectural continuity and scalability.

Nvidia announces Vera Rubin platform, signaling a shift to full-stack AI infrastructure

17 March 2026 @ 10:35 am

Nvidia introduced its Vera Rubin platform, which combines compute, networking, and data processing into rack-scale deployments for large AI data centers, underscoring a shift in hyperscale environments toward more tightly integrated infrastructure. The company said the platform integrates its Vera CPU, Rubin GPU, NVLink 6 switch, ConnectX-9 SuperNIC, BlueField-4 DPU, and Spectrum-6 Ethernet switch, along with the newly added Groq 3 LPU, into a single system designed to operate as an AI supercomputer. The architecture is designed to support all stages of AI workloads, from large-scale training and post-training to real-time inference.

Available’s $5B Project Qestrel aims to roll out 1,000 AI-ready edge data centers by year’s end

17 March 2026 @ 1:53 am

Hyperscaler data center projects are plagued by a host of issues: delayed time-to-market, capacity constraints, and supply chain issues. However, secure edge infrastructure provider Available Infrastructure aims to offer a different experience: a “nationwide fleet of cybersecure, private neocloud edge data centers.” Through a new $5 billion initiative, Project Qestrel, the company has an ambitious plan to bring 1,000 locations in 100 US cities and 30-plus states online by the end of this year. Each site will be equipped with high-performance compute (HPC) infrastructure and AI inferencing capabilities, which the company says will bring sites online “in weeks to months”...

Cisco: Latest news and insights

17 March 2026 @ 1:49 am

Cisco (Nasdaq:CSCO) is the dominant vendor in enterprise networking, and under CEO Chuck Robbins, it continues to shake things up. Cisco is focusing on strategic AI initiatives and partnerships across various regions to build and power AI data centers and ecosystems. This includes collaborations with major players like BlackRock, Global Infrastructure Partners, Microsoft and Nvidia to drive investment and scale AI infrastructure.

forensicswiki.org

Rating: 8.0/10 (1 vote cast)

Computer forensic tools and techniques used by investigators

cyberciti.biz

Rating: 6.0/10 (2 votes cast)

An online community of new and seasoned Linux / Unix sysadmins.


heartinternet.co.uk

Rating: 8.3/10 (3 votes cast)

Hosting packages for an initial web presence

SSL Certificates are changing. Here’s what you need to know.

17 March 2026 @ 10:12 am

The rules around SSL certificates are changing across the whole internet. The good news is that for most customers, very little will change on your side. This is an industry-wide...

Hosting VPS Linux vs Windows VPS

9 March 2026 @ 3:03 pm


Domain Name Transfer Checklist: Everything You Need to Know

3 March 2026 @ 2:56 pm


Heart Internet Win Gapstars Innovation Award 2026

23 February 2026 @ 11:57 am

We’re incredibly proud to celebrate our Site Reliability Engineering team, who have won the Gapstars Innovation Award for their outstanding work improving platform stability, security, and visibility across our shared...

A/B Testing Explained: A Practical Guide To Better Results | Part 1

20 February 2026 @ 8:32 am

If you want to improve your website you probably need to do A/B testing, otherwise known as split testing. Instead of guessing, A/B testing allows you to experiment more scientifically...

How to enable two-factor authentication (2FA) on your Heart Internet account

28 January 2026 @ 12:37 pm

Account security matters, and switching on two-factor authentication (2FA) is a quick win. 2FA adds a second check during the sign-in process, so even if someone compromises your password, they still can’t get in. To enable 2FA: Step 1: Open your...

How to Choose the Perfect Domain Name for Your Business

9 July 2025 @ 9:30 am

Get Your Name Right – The Internet Never Forgets. Choosing a domain name might sound simple – until you realise it’s the online equivalent of naming your child. No pressure...

What is a VPS? And is it Time You Got One?

25 June 2025 @ 9:30 am

Discover what a VPS server is, how VPS hosting works, and why it’s ideal for small businesses. Learn the benefits and explore VPS plans with Heart Internet.

We’re Now Certified by the Green Web Foundation

11 June 2025 @ 9:30 am

💚 Hosting that works hard, treads lightly. Big news: Heart Internet is now officially listed with the Green Web Foundation. That means our hosting services are recognised as being...

What is Web Hosting and Why Does Your Business Need It?

6 May 2025 @ 4:54 pm

Without web hosting, your website would not be visible or accessible to users! It is crucial to host your website with a website hosting service to ensure that your business...

serverfault.com

Rating: 6.0/10 (1 vote cast)

Common Server issues – FAQs and answers from those in the know

My VM won't turn on [closed]

17 March 2026 @ 6:01 pm

Since this morning, my two VMs, installed in different areas of Santiago, Chile, won't turn on. What can I do? The error says: "A e2-standard-4 VM instance is currently unavailable in the southamerica-west1-a zone. Alternatively, you can try your request again with a different VM hardware configuration or at a later time. For more information, see the troubleshooting documentation."

Showing a webpage via SSH only

17 March 2026 @ 4:34 pm

I am setting up an Ubuntu server on EC2 that will be used as a production web server. I already installed PHP, Apache, and MariaDB. I also installed phpMyAdmin. Please note phpMyAdmin is located at /var/www/html/phpmyadmin (as a symlink). Assuming that the IP address of this server is 1.2.3.4, if the user goes to https://1.2.3.4/phpmyadmin then this returns the phpMyAdmin page (because, as you are aware, the default page is served from /var/www/html). However, I want to limit this functionality, and be able to access the phpMyAdmin page only if I create an SSH tunnel from my local computer. In other words, from my terminal on my local computer, I want to open an SSH tunnel as such: ssh -i my-private-key.pem user@1.2.3.4 -N -L 8888:127.0.0.1:80 So when using the browser, and entering...
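
One way to get there, sketched under the assumption of Apache 2.4's default layout on Ubuntu: allow the phpmyadmin location only for requests arriving via the loopback interface, which is exactly what the tunnel delivers.

# Sketch: restrict /phpmyadmin to loopback, then reach it through the tunnel.
sudo tee /etc/apache2/conf-available/phpmyadmin-local.conf >/dev/null <<'EOF'
<Location /phpmyadmin>
    Require local
</Location>
EOF
sudo a2enconf phpmyadmin-local && sudo systemctl reload apache2
# From the local machine (username is a placeholder):
ssh -i my-private-key.pem user@1.2.3.4 -N -L 8888:127.0.0.1:80
# then browse to http://127.0.0.1:8888/phpmyadmin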

Kerberos and Hadoop UI

17 March 2026 @ 12:11 pm

I have a small number of servers among hundreds which will not open the local Hadoop (datanode) UI (port 1006). I use the NameNode UI to access datanodes and can see data on most but, for these few, I get 401 Unauthorized Access. This is not the same as having no Kerberos ticket, when the message is 'Authorization required'. I tested all the other nodes and they function as expected. The browser is Firefox and, due to security measures (and no local admin access for me), I am unable to use another browser. I asked our hardware guys and they inform me: the server hardware is all the same; the nodes are all in the same datacentre; the nodes are in the same rack and on the same switch(es). From the OS side, ALL servers (working or not) have the same krb5.conf. I also checked timezones, times and NTP configurations. Everything is consistent across all servers, working or not. As for me, my authentication is via Active Directory (AS) using my general login and a window...
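
A way to take the browser out of the equation (a sketch; the datanode hostname is a placeholder): curl can run the same SPNEGO negotiation against one of the failing nodes using the current ticket cache.

kinit
curl -v --negotiate -u : http://dn42.example.com:1006/
# A 401 here as well points at server-side SPNEGO or auth_to_local mapping on
# those few nodes; a 200 would point back at the browser's negotiate settings.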

opendkim-testkey fails on newly generated key

17 March 2026 @ 12:40 am

I generated my key using opendkim-genkey -s 2026 -b 1024 and fixed the ownership and permissions on the two files generated in /etc/dkimkeys/. I updated my local DNS and flushed the caches.

; <<>> DiG 9.18.39-0ubuntu0.24.04.2-Ubuntu <<>> -t txt 2026._domainkey.xcski.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 22647
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1
;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 65494
;; QUESTION SECTION:
;2026._domainkey.xcski.com. IN TXT
;; ANSWER SECTION:
2026._domainkey.xcski.com. 6690 IN TXT "v=DKIM1; h=sha256; k=rsa; " "p=MIGfMA0GCSqGSIb3DQEBAQUAA4GNADCBiQKBgQDU01FgEdmrNcuuPKYAAG7Ktt3TnSsIeza46y6+746tCZCYHmXdEOPPa+OtqKxlEH/dQkVEK/zHTh0elPChmIhWzQuypJTnLGBZAQQ7TILPe0Zewnf7sYUuKM9inFEFG2dIv0/G5BcwZhCsyBFBYNQFn5E7Dce0JIZ8U/ix28ekrwIDAQAB"
;; Query time: 0 msec
;; SERVER: ...
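
With the record resolving, the key itself can be exercised directly; a sketch assuming the default key path written by opendkim-genkey:

opendkim-testkey -d xcski.com -s 2026 -k /etc/dkimkeys/2026.private -vvv
# -vvv prints each verification step, which usually names the exact failure:
# key file permissions, an "unsafe" ownership complaint, or a key/record mismatch.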

Route update on gateway IP address change

16 March 2026 @ 8:38 pm

I have a server running Linux with two network cards, eth1 and eth2. These two interfaces are connected to gateway1 and gateway2, respectively, and there are two default routes. For a specific host, I want to always use the interface eth2. To achieve this, I define a route to this host, by using "ip route add IP_of_host via IP_of_gateway2 dev eth2". The settings of eth2 are obtained via DHCP. Occasionally, the IP address of gateway2 changes. When this happens, does the Linux kernel automatically update the IP_of_gateway2 in the previous route? Or do I need to delete the old route, and recreate a new one with the new IP address of gateway2?
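
The kernel does not rewrite manually added routes when DHCP hands out a new gateway, so the route keeps pointing at the old address; a dhclient exit hook can refresh it (a sketch; the pinned host IP is a placeholder):

# /etc/dhcp/dhclient-exit-hooks.d/pin-host -- sourced by dhclient-script,
# which exports $interface and $new_routers on lease events.
HOST=192.0.2.50
if [ "$interface" = "eth2" ] && [ -n "$new_routers" ]; then
    # replace = add-or-update, so this is safe to run on every renewal
    ip route replace "$HOST" via "${new_routers%% *}" dev eth2
fi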

How should I configure nginx on an Azure Ubuntu VM to access an HSM-protected private key in Azure Key Vault?

16 March 2026 @ 2:12 pm

Our client has a requirement that the private key for their SSL certificate be protected by an HSM. We will be doing this using the option in Azure Key Vault, as the website will be hosted on an Azure Virtual Machine (running Ubuntu). However, the documentation on how to then configure the virtual machine is conflicting. Some sources, e.g. https://docs.azure.cn/en-us/virtual-machines/extensions/key-vault-linux, state that using the Azure Key Vault Extension will suffice to allow the VM to access the vault and allow services on the VM, such as nginx, to access the private key. However, I've seen other articles and comments that suggest this would only work for keys that have been set up in the vault as exportable, which HSM-protected keys are not. Which is correct? What is the best practice for accessing an AKV private key from nginx?
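
A quick check that settles which article applies to a given key (a sketch assuming the Azure CLI is installed; vault and key names are placeholders):

az keyvault key show --vault-name my-vault --name site-key --query key.kty -o tsv
# "RSA-HSM" on a non-exportable key means no extension can hand nginx a key
# file; the key must be used in-vault, e.g. through a PKCS#11 provider wired
# into OpenSSL, or by terminating TLS on a service with native Key Vault
# integration.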

ZONE_RESOURCE_POOL_EXHAUSTED error while creating instance in asia-south1-b

16 March 2026 @ 11:17 am

We are encountering a ZONE_RESOURCE_POOL_EXHAUSTED error while attempting to create an instance in our project.

Project ID: sabbpe-uat-free
Instance Group: sabbpe-uat-app-server-gv-instance-group-ncq5
Zone: asia-south1-b
Timestamp: Mar 16, 2026, 4:19:09 pm UTC+05:30
Error Message: "Instance 'sabbpe-uat-app-server-gv-instance-group-ncq5' creation failed: The zone 'projects/sabbpe-uat-free/zones/asia-south1-b' does not have enough resources available to fulfill the request. Try a different zone, or try again later."

This instance is part of our application deployment and is required for maintaining our service availability. Request: ...
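
The standard workaround is to retry in a sibling zone of the same region; a sketch using the names from the error above (machine type assumed):

gcloud compute instances create sabbpe-uat-app-server-gv-1 \
    --zone=asia-south1-a \
    --machine-type=e2-standard-4
# For instance groups, a regional MIG spread across asia-south1-a/b/c avoids
# pinning capacity to a single zone.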

Unable to purge unused Ubuntu Linux kernels

15 March 2026 @ 10:11 pm

How can I proceed to avoid the following error?

$ sudo apt-get autoremove --purge
Reading package lists... Done
Building dependency tree... Done
Reading state information... Done
0 upgraded, 0 newly installed, 0 to remove and 21 not upgraded.
2 not fully installed or removed.
After this operation, 0 B of additional disk space will be used.
Setting up linux-modules-nvidia-590-6.17.0-19-generic (6.17.0-19.19~24.04.2) ...
linux-image-nvidia-6.17.0-19-generic: constructing .ko files
/usr/bin/ld.bfd: warning: --package-metadata is empty, ignoring
(the warning above repeats several times)
/usr/bin/ld.bfd: cannot open linker script file /usr/s...
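
autoremove cannot proceed until the two half-configured NVIDIA module packages finish or go away; a sketch of the usual recovery order (the package name is taken from the output above):

sudo dpkg --configure -a        # retry the pending configure steps
sudo apt-get install -f         # pull in anything the build was missing
# If the module build keeps failing, remove that kernel's NVIDIA module
# package and retry the purge:
sudo apt-get purge linux-modules-nvidia-590-6.17.0-19-generic
sudo apt-get autoremove --purge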

How to mitigate DDoS (syn flood) attack?

15 March 2026 @ 2:01 pm

I am getting a SYN flood of about 30K-50K pps, with bandwidth of ~10-20 Mbps out of a 200 Mbps network link. Because of it, I have over 90% packet loss to my VPS. I had an nf_conntrack table full error, which I solved by raising nf_conntrack_max. Now a ksoftirqd process is consuming 75% CPU and high packet loss still prevents normal server functioning. syncookies is set to 2. After iptables -I INPUT ! -i lo -p tcp --dport 80 -j DROP, packet loss dropped to 2% and ksoftirqd consumes 33% CPU, but I also drop legit traffic; furthermore, iptraf shows bandwidth increased 10x to > 130 Mbps, 250K pps!!! Why? Any ideas how to drop the malicious traffic inside the VPS to decrease packet loss without the bandwidth overuse? There is no external firewall. I've tried to block the traffic by country using nftables, but it did not solve the packet loss problem. This test was performed at a different server in a cloud where my domain A record was pointed (and the DDoS attack target migrated to that IP)...
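
One kernel-side option that drops spoofed SYNs before they occupy conntrack entries is SYNPROXY; a sketch for port 80 with the commonly documented values:

# Exempt SYNs from connection tracking, then let SYNPROXY validate handshakes.
iptables -t raw -A PREROUTING -p tcp --dport 80 --syn -j CT --notrack
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate INVALID,UNTRACKED \
    -j SYNPROXY --sack-perm --timestamp --wscale 7 --mss 1460
iptables -A INPUT -p tcp --dport 80 -m conntrack --ctstate INVALID -j DROP
sysctl -w net.netfilter.nf_conntrack_tcp_loose=0
# Only clients that complete the handshake reach the listener, so legitimate
# traffic passes while the flood stops inflating conntrack and ksoftirqd load.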

WireGuard VPN server in Cudy WR3000 router doesn't work, but OpenVPN does

11 March 2026 @ 7:56 am

I set up an OpenVPN server on a Cudy WR3000 router, but I can't get WireGuard to work. The WireGuard handshake on the client shows "Sent" bytes but "0 Received" bytes. What I tested:
OpenVPN works: I enabled the OpenVPN server on the Cudy using port 1194. After forwarding 1194 on the ISP router, it works perfectly. This proves my static IP and port forwarding logic are correct.
Cross-port testing: I tried moving the WireGuard listen port to 1194 instead of the default (after disabling OpenVPN), but still no handshake.
MTU adjustments: I lowered the MTU to 1280 on both server and client to account for potential fragmentation/ISP overhead.
Peer settings: On the Cudy, I set the peer "Remote Subnet" to 0.0.0.0/0 and "Allowed IPs" to 0.0.0.0/0.
The client's .conf file, as automatically generated by the Cudy, is: ...
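
"Sent but 0 received" usually means the UDP packets never reach the WireGuard listener; a capture on the router separates a forwarding problem from a key/peer problem (a sketch, assuming shell access and tcpdump on the Cudy, and the default port):

tcpdump -ni any udp port 51820
# Nothing arriving while the client retries => the ISP router's UDP forward is
# at fault; packets arriving with no replies => the Cudy-side keys/peer config is.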

poundhost.com

Rating: 6.7/10 (3 votes cast)

Cheap dedicated server hosting

tagadab.com

Rating: 8.0/10 (1 vote cast)

Cheap developer VPS hosting from £10