StackOverflow.com

Random snippets of all sorts of code, mixed with a selection of help and advice.

Power Query: Is there a workaround for a website that limits its data to 30 rows?

17 January 2026 @ 7:35 am

I'm trying to scrape a betting website (https://sports.williamhill.com/betting/en-gb/football/matches/date/today/match-betting) that lists football (soccer) games by time, but in Power Query the data is limited to only 30 rows. I'm guessing that I need to learn coding (Python etc.), but I just wanted to have it confirmed that Power Query (into Excel) cannot bypass this website restriction. Thanks
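
Worth noting: Power Query's Web connector only sees what the server returns up front, so if the remaining rows are loaded by client-side JavaScript from a JSON feed, requesting that feed directly is the usual workaround. A minimal sketch in Python, where the endpoint URL and response shape are purely hypothetical (the real one, if it exists, would be found in the browser's network tab):

    import requests

    # Hypothetical endpoint and response shape; inspect the site's network
    # traffic to find the actual JSON feed behind the page, if any.
    URL = "https://sports.williamhill.com/api/matches"  # placeholder
    resp = requests.get(URL, params={"date": "today"}, timeout=10)
    resp.raise_for_status()
    for match in resp.json().get("matches", []):
        print(match)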

C: what is the practical reason to declare `const UInt32 x = ((UInt32)1 << 12)`, rather than simply `.. = 4096` or `.. = (UInt32)4096`?

17 January 2026 @ 7:34 am

This is a real example from the latest 7z source code; it is in the file C/LzmaEnc.c, inside the function LzmaEncProps_Normalize(), at lines 70-110, namely on line 86:

    void LzmaEncProps_Normalize(CLzmaEncProps *p)
    {
      /* ... */
      if (p->dictSize > p->reduceSize)
      {
        UInt32 v = (UInt32)p->reduceSize;
        const UInt32 kReduceMin = ((UInt32)1 << 12);  /* <--- THIS LINE --- */
        if (v < kReduceMin)
          v = kReduceMin;
        if (p->dictSize > v)
          p->dictSize = v;
      }
      /* ... */
    }

The question is: why write ((UInt32)1 << 12) when, in my understanding, it merely evaluates to the value 4096? Why not just (UInt32)4096, given that we need an explicit cast?
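
One common reading, offered as a general observation rather than the 7z authors' stated intent: both spellings are compile-time constants and generate identical code, but a power of two written as a shift documents its exponent and is harder to mistype than a string of digits. Illustrated in Python for brevity (kReduceMax is a made-up companion constant):

    # 1 << 12 reads as 2**12 (4 KiB): the exponent is visible at a glance.
    kReduceMin = 1 << 12
    assert kReduceMin == 4096
    # Scaling such a constant means editing one exponent, not retyping digits:
    kReduceMax = 1 << 26   # hypothetical companion constant (64 MiB)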

AWS RDS PostgreSQL DB - WriteLatency spikes every 3hrs

17 January 2026 @ 7:00 am

Our PostgreSQL (14.17) DB has WriteLatency spikes every 3 hrs (12:10 PM, 3:10 PM, 6:10 PM), with no corresponding increase in WriteIOPS or WriteThroughput. DiskQueueDepth tracks WriteLatency 1:1. There have been no recent changes in our application logic that would cause this; the spikes started occurring around 8 Dec 2025. Ruled-out suspects:
- Increased number of rows: the DB did not have any significant increase in rows around the time the behaviour started.
- Storage IOPS bottleneck: we are using gp3 SSDs with 3000 IOPS provisioned, and we are nowhere near filling our storage.

Post v11, PostgreSQL RDS instances can have write-latency spikes (Average) every five minutes. This is usually related to the DB not being busy enough, causing more expensive writes like checkpoints (which run every 5 minutes by default) to inflate the Average calculation and trigger the metric alarm (…
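
If checkpoints are the culprit, the counters in pg_stat_bgwriter will show it. A quick polling sketch with psycopg2 (connection details are placeholders; these columns exist in pg_stat_bgwriter on PostgreSQL 14):

    import time
    import psycopg2  # connection details below are placeholders

    conn = psycopg2.connect(host="your-rds-endpoint", dbname="postgres",
                            user="admin", password="...")
    with conn.cursor() as cur:
        for _ in range(2):
            # Sample the checkpoint counters before and after a spike
            # window and compare the deltas.
            cur.execute("SELECT checkpoints_timed, checkpoints_req, "
                        "checkpoint_write_time, checkpoint_sync_time "
                        "FROM pg_stat_bgwriter")
            print(cur.fetchone())
            time.sleep(300)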

MKL module not found while trying to run Atomate2 lithium insertion workflow on VASP

17 January 2026 @ 6:40 am

I am running an atomate2 workflow for lithium insertion into my material on bridges2, but VASP is not running at all. I am running atomate2 0.0.22 on Python 3.10, since 0.0.23 had issues with Python 3.11 and a pymatgen library. I got through a long string of module issues, but now I am stuck: I have the correct MKL module loaded, yet it says mkl not found and VASP does not run at all (no vasprun.xml). I am relatively new to computational workflows, so I am sure I made a mistake somewhere, but I have spent the past several days unsuccessfully trying to resolve this. Some troubleshooting I have tried and module info are below, if it helps figure out the issue. My Python code:

    #!/jet/home/PATH
    """
    Automated Li insertion workflow for MOFs using Atomate2 v0.0.22.
    """
    from pathlib import Path
    from pymatgen.core import Structure
    from atomate2.vasp.sets.core import RelaxSetGenerator, StaticSetGenerator
    from atomate2.vasp.jobs.core import RelaxMaker, StaticMa…
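
One quick check worth running inside the same job environment that launches VASP: ask the dynamic linker which MKL shared objects it can resolve for the VASP binary, since compute nodes often see different modules than the login node. A diagnostic sketch (the binary name vasp_std is an assumption; match it to whatever your atomate2 config actually invokes):

    import shutil
    import subprocess

    # "vasp_std" is a guess at the binary name; use your configured VASP command.
    vasp = shutil.which("vasp_std")
    print("vasp binary:", vasp)
    if vasp:
        # Any "not found" lines show exactly which MKL libraries
        # the loader cannot resolve in this environment.
        out = subprocess.run(["ldd", vasp], capture_output=True, text=True)
        print(out.stdout)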

How does WordPress store custom post type data and metadata in the database?

17 January 2026 @ 6:40 am

I am a beginner learning WordPress and trying to understand how WordPress stores data internally in the database. I know that WordPress uses MySQL and that default posts and pages are stored in the wp_posts table. I would like to understand:
- When a custom post type is created, is it stored in the same wp_posts table?
- How does WordPress differentiate between posts, pages, and custom post types?
- Where are custom fields or metadata related to a custom post type stored?
- Are additional database tables involved, or does WordPress rely mainly on wp_posts and wp_postmeta?

I have read some documentation, but I am looking for a clear explanation of the database structure behind custom post types. Any explanation or references to official documentation would be helpful.
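
For orientation: custom post types share wp_posts, with the post_type column as the discriminator, and their custom fields live in wp_postmeta keyed by post_id. A sketch that makes the layout visible (the wp_ table prefix, a post type named "book", and the connection details are all assumptions):

    import mysql.connector  # assumes the default wp_ prefix; adjust to your install

    conn = mysql.connector.connect(host="localhost", user="wp",
                                   password="...", database="wordpress")
    cur = conn.cursor()
    # Custom post types share wp_posts; post_type distinguishes them,
    # and wp_postmeta holds their custom fields via post_id.
    cur.execute("""
        SELECT p.ID, p.post_title, m.meta_key, m.meta_value
        FROM wp_posts p
        LEFT JOIN wp_postmeta m ON m.post_id = p.ID
        WHERE p.post_type = 'book' AND p.post_status = 'publish'
    """)
    for row in cur.fetchall():
        print(row)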

How to convert a region to a Polygon?

17 January 2026 @ 6:37 am

I have a matrix from which I can extract colour regions as std::vector<std::pair<int, int> >, and I want to convert them into Boost Polygon objects. I have no idea yet how to proceed. Here is a matrix example with a region of 3s and 0s and the matching polygon I am looking for: [Matrix example]. Here is a sample of my code:

    #include <boost/geometry.hpp>
    #include <boost/geometry/geometries/point_xy.hpp>
    #include <boost/geometry/geometries/polygon.hpp>
    #include <Eigen/Core>

    template <typename Matrix>
    bool validIndex(Matrix const& a, std::pair<int, int> const& at)
    {
        return (0 <= at.first && at.first < a.rows() &&
                0 <= at.second && at.second < a.cols());
    }

    inline std::vector<std::pair<int, int> > neighbors…
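
One approach is to treat every cell of the region as a unit square and union the squares; the union's outer ring is the polygon you want. The idea sketched in Python with Shapely (a stand-in here, since the question targets Boost.Geometry, whose union operations support the same strategy), with a made-up cell list:

    from shapely.geometry import box
    from shapely.ops import unary_union

    # (row, col) cells of one colour region, as in the question; values made up.
    cells = [(0, 0), (0, 1), (1, 0), (1, 1), (2, 1)]

    # Each cell becomes a unit square; the exterior ring of their union
    # is the outline polygon of the region.
    squares = [box(col, row, col + 1, row + 1) for row, col in cells]
    region = unary_union(squares)
    print(list(region.exterior.coords))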

Parallelizing REST API requests in Databricks

17 January 2026 @ 6:37 am

I have a list of IDs and want to make a GET request to a REST API for each of the IDs and save the results in a DataFrame. If I loop over the list it takes far too long, so I tried to parallelize using ThreadPoolExecutor, which reduced the execution time significantly. But then I read about pandas UDFs and RDDs and wondered if I could improve my approach even further. Since I have never really worked with either, I cannot tell which approach is best for my use case. The approaches I considered were: RDDs; a pandas UDF that takes the ID column as a pandas Series as input and returns a pandas Series of the resulting JSONs; and a pandas UDF that takes an iterator of pandas Series as input (what exactly is the difference between the iterator and Series variants?). Or is it possible to use the whole DataFrame as input for the pandas UDF and return the desired output DataFrame? Does anyone know what the best practice for my use case would be, and could go a little bit into detail about the…
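
On the Series-vs-iterator question: the iterator variant receives the column in batches, so per-executor setup (an HTTP session, an auth token) can run once and be reused, instead of repeating for every batch. A sketch of that variant (the API URL and the "id" column name are placeholders):

    from typing import Iterator

    import pandas as pd
    import requests
    from pyspark.sql.functions import pandas_udf

    @pandas_udf("string")
    def fetch_json(batches: Iterator[pd.Series]) -> Iterator[pd.Series]:
        # Setup runs once per executor and is reused for every batch;
        # this is the main practical difference from the plain Series variant.
        session = requests.Session()
        for ids in batches:
            yield ids.map(lambda i: session.get(
                f"https://api.example.com/items/{i}", timeout=10).text)

    result = df.withColumn("payload", fetch_json("id"))  # assumes df has an "id" column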

How to attach Python code execution to an Azure AI Foundry Agent after it generates a payload?

17 January 2026 @ 6:33 am

I am using Azure AI Foundry Agents (via the azure-ai-projects Python SDK) to generate structured JSON payloads from agent instructions. My current setup works like this:
- An Azure AI Foundry Agent is created using PromptAgentDefinition
- The agent reads its instructions, generates a JSON payload (e.g., a quote request), and optionally calls OpenAPI tools
- I invoke the agent using AIProjectClient.agents.runs.create(...)
- I receive the agent output as text / JSON

This part is working correctly. What I want to achieve: after the agent generates a structured payload (for example, a quote request JSON), I want to pass that payload into custom Python code and continue execution of my own Python logic.
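
Since the output already arrives as text/JSON, the simplest bridge is to parse it in your own process and call your own function; the agent does not need to execute the Python itself. A sketch where continue_pipeline, the sample payload, and the run_output_text variable are all hypothetical (how you extract the final message text varies with the azure-ai-projects version):

    import json

    def continue_pipeline(payload: dict) -> None:
        """Stand-in for your own downstream Python logic."""
        print("processing quote for:", payload.get("customer"))

    # run_output_text stands in for the agent's final text output; how you
    # read it from the completed run depends on your SDK version.
    run_output_text = '{"customer": "ACME", "items": [{"sku": "A1", "qty": 3}]}'

    payload = json.loads(run_output_text)   # agent output -> Python dict
    continue_pipeline(payload)              # hand off to your own code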

Citrix ADC/Netscaler Logs

17 January 2026 @ 6:32 am

Does anyone know if there is a way to manipulate logs sent through syslog from Citrix NetScaler/ADC? I'm trying to remove some content from the logs before they get sent to the third-party receiver, for privacy reasons. Also, does using the command below, as mentioned in https://docs.netscaler.com/en-us/citrix-adc/current-release/system/audit-logging/configuring-audit-logging.html (using RFC 5424), enable octet framing (each log message includes an octet count at the beginning that states how long the message is)?

    add audit syslogAction [-serverPort ] -logLevel [-dateFormat ( MMDDYYYY | DDMMYYYY )] [-transport ( TCP | UDP )] [-syslogcompliance ]

Appreciate any guidance.
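
On the octet-framing half of the question: RFC 6587 octet counting simply prefixes each RFC 5424 message with its byte length and a space. What a framed message looks like on the wire, sketched in Python (the receiver address and sample message are made up):

    import socket

    # RFC 6587 octet counting: "<length> <message>" per log record.
    msg = b"<134>1 2026-01-17T06:32:00Z ns01 NSLOG - - - example audit line"
    framed = str(len(msg)).encode("ascii") + b" " + msg  # e.g. b"NN <134>1 ..."

    with socket.create_connection(("collector.example.com", 514)) as s:  # placeholder
        s.sendall(framed)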

How to properly add LocalBusiness schema to a WordPress service website?

17 January 2026 @ 6:28 am

I have a WordPress website for a local service business and I want to add LocalBusiness schema so Google can better understand the business details (type of service, phone number, service area, etc.). I'm not sure what the best approach is for a simple site:
- Use an SEO plugin's built-in schema
- Add JSON-LD schema manually
- Use a separate schema plugin

For a small local service website, which method is usually recommended, and why? Also, should the schema focus on one main city or on multiple service areas? I want to keep the setup clean and avoid unnecessary complexity. Here is my website: https://junkremovalasheville.org/
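
For reference, the manual JSON-LD option from the list above is a single block in the page head. Everything below except the site URL is a placeholder to be swapped for the real business details:

    <script type="application/ld+json">
    {
      "@context": "https://schema.org",
      "@type": "LocalBusiness",
      "name": "Example Junk Removal",
      "telephone": "+1-828-555-0100",
      "url": "https://junkremovalasheville.org/",
      "address": {
        "@type": "PostalAddress",
        "addressLocality": "Asheville",
        "addressRegion": "NC"
      },
      "areaServed": { "@type": "City", "name": "Asheville" }
    }
    </script>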