Random snippets of all sorts of code, mixed with a selection of help and advice.
Failure to infer one or more parameters on Middleware Singleton
14 October 2025 @ 4:56 pm
ASP.NET Core Minimal APIs - .NET 8
I'm trying to register a singleton service of type ConcurrentDictionary<> and use it inside a middleware.
I get "Failure to infer one or more parameters" on the session parameter when firing a request.
The dataSource parameter works fine when issuing database calls.
var builder = WebApplication.CreateBuilder(args);

builder.Services.AddSingleton<ConcurrentDictionary<String, BigInteger>>();
builder.Services.AddSingleton<NpgsqlDataSource>(provider =>
{
    return NpgsqlDataSource.Create("Host=localhost;Username=postgres;Password=postgres;Database=postgres");
});

var app = builder.Build();
app.UseHttpsRedirection();

app.Use(async (context, next) =>
{
    var sessions = context.RequestServices.GetRequiredService<ConcurrentDictionary<String, BigInteger>>();
    var dataSource = context.RequestServices.GetRequiredService<NpgsqlDataSource>();
    // ... (snippet truncated in the original post)
    await next(context);
});
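Since the middleware resolves both services explicitly, the "Failure to infer one or more parameters" error more likely comes from a route handler that takes the dictionary as a parameter. A low-risk first test is forcing service binding with [FromServices]; a sketch, not from the post (the /session endpoint and parameter name are illustrative, and it requires using Microsoft.AspNetCore.Mvc;):

    // Binding inference can fail when the framework can't map a handler
    // parameter to route, body, or services; [FromServices] removes the ambiguity.
    app.MapGet("/session", ([FromServices] ConcurrentDictionary<string, BigInteger> sessions) =>
        Results.Ok(sessions.Count));

If that makes the error go away, the problem is binding-source inference on the handler, not the singleton registration itself.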
How do I print to application output window when debugging python in Qt Creator IDE?
14 October 2025 @ 4:56 pm
I'm debugging a Python project (pyproject) in Qt Creator using the built-in debugger.
Python's print() outputs only to the Debugger Log window, where it's mixed with so much actual debugger output that it's very hard to find.
Is that behavior expected, or is there an issue with my environment?
Is it possible to somehow output text to Application Output, or at least to a terminal window?
Additional details
I've tried Qt Creator 4.11.0 (apt) and 13.0.2 (snap) on Ubuntu 20.04.6; the problem exists in both.
If I run the project without debugging, print() outputs to the Application Output window correctly.
The built-in Python debugger runs pdb via Qt's pdbbridge.py and otherwise works fine: it breaks on breakpoints and shows variables.
sys.stderr.write("test123") also outputs only to Debugger log
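Until the cause is found, one workaround sketch (nothing Qt-specific is assumed; debug.log is an arbitrary path): mirror stdout into a file and watch it from a terminal with tail -f, bypassing the Debugger Log entirely.

    import sys

    class Tee:
        """Write to several streams at once (a sketch, not a Qt Creator API)."""
        def __init__(self, *streams):
            self.streams = streams
        def write(self, data):
            for s in self.streams:
                s.write(data)
                s.flush()
        def flush(self):
            for s in self.streams:
                s.flush()

    log = open("debug.log", "a")
    sys.stdout = Tee(sys.stdout, log)

    print("test123")  # also lands in debug.log; watch with: tail -f debug.log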
Entity with compound primary key also creates an index on a single column
14 October 2025 @ 4:55 pm
I just added an EF entity called PurchaseOrderProduct, which creates a many-to-many link between my PurchaseOrders and Products.
[PrimaryKey(nameof(PurchaseOrderId), nameof(ProductId))]
public class PurchaseOrderProduct
{
    public int PurchaseOrderId { get; set; }
    public PurchaseOrder PurchaseOrder { get; set; }
    public int ProductId { get; set; }
    public Product Product { get; set; }
}
However, I don't quite understand the generated migration script. I want the primary key to be a compound key based on the PurchaseOrderId and ProductId columns. But why does it also create an additional IX_PurchaseOrderProducts_ProductId index on my ProductId column?
/// <inheritdoc />
protected override void Up(MigrationBuilder migrationBuilder)
{
    migrationBuilder.CreateTable(
        name: "PurchaseOrderProducts",
        columns
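The migration is cut off above, but it presumably continues roughly like this (a sketch; the column types, principal key names, and delete behavior are assumptions based on EF Core's defaults):

    migrationBuilder.CreateTable(
        name: "PurchaseOrderProducts",
        columns: table => new
        {
            PurchaseOrderId = table.Column<int>(type: "int", nullable: false),
            ProductId = table.Column<int>(type: "int", nullable: false)
        },
        constraints: table =>
        {
            // the compound key you asked for
            table.PrimaryKey("PK_PurchaseOrderProducts", x => new { x.PurchaseOrderId, x.ProductId });
            table.ForeignKey(
                name: "FK_PurchaseOrderProducts_PurchaseOrders_PurchaseOrderId",
                column: x => x.PurchaseOrderId,
                principalTable: "PurchaseOrders",
                principalColumn: "Id",
                onDelete: ReferentialAction.Cascade);
            table.ForeignKey(
                name: "FK_PurchaseOrderProducts_Products_ProductId",
                column: x => x.ProductId,
                principalTable: "Products",
                principalColumn: "Id",
                onDelete: ReferentialAction.Cascade);
        });

    // the extra index in question
    migrationBuilder.CreateIndex(
        name: "IX_PurchaseOrderProducts_ProductId",
        table: "PurchaseOrderProducts",
        column: "ProductId");

As for the "why": the compound PK index leads with PurchaseOrderId, so it only helps queries filtering on that column. EF Core indexes every foreign key column that isn't already the leading column of an existing index, and ProductId isn't, hence IX_PurchaseOrderProducts_ProductId.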
Can I configure Linux or my Linux ELF file to deliberately disallow unaligned accesses on arm64?
14 October 2025 @ 4:55 pm
AFAIK, aarch64 usually supports unaligned memory accesses, but they are slower than aligned ones. If that is incorrect, please let me know.
arm32 has a way to trap on unaligned memory accesses via /proc/cpu/alignment (https://stackoverflow.com/a/16549476/2033557), and x86/x86_64 has a way to trap unaligned memory accesses via the AC processor flag (https://stackoverflow.com/a/17748435/2033557). Does aarch64 have a similar option, either in Linux configuration or in the ELF file itself, to trap or at least warn on unaligned accesses?
Note that we are not using C or libc, so we don't need to worry about interfering with that.
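As far as I know, the architectural alignment-check bit on aarch64 (SCTLR_EL1.A) is EL1 state, and Linux doesn't expose a userspace toggle for it the way arm32's /proc/cpu/alignment does. A warn-at-least fallback that works in freestanding code is an explicit check before the access; a sketch in aarch64 assembly (the label names are made up):

    check_load:
        tst     x0, #7              // any low bit set => not 8-byte aligned
        b.ne    .Lunaligned
        ldr     x1, [x0]            // aligned access, proceed normally
        ret
    .Lunaligned:
        brk     #0                  // deliberate trap, visible under a debugger

It costs two instructions per guarded access, so it's more of a debug-build instrumentation technique than a production setting.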
Validating pointer-based Delta comparison architecture using flatMapGroupsWithState in Structured Streaming
14 October 2025 @ 4:52 pm
I’m leading an implementation where we’re comparing events from two real-time streams — a Source and a Target — in Databricks Structured Streaming (Scala).
Our goal is to identify and emit “delta” differences between corresponding records from both sides based on a common naturalId.
Here’s the high-level architecture we’ve designed:
Both Source and Target streams (from Kafka/Event Hubs) are read as structured streaming datasets.
Each event is parsed, hashed (SHA-256), and persisted as full JSON to Delta Lake (for durability, auditability, and replay).
Only lightweight metadata (key, hash, timestamp, Delta pointer) is kept in Spark state.
We use flatMapGroupsWithState with event-time timeout + watermarking to hold state per key until both sides arrive.
Once both Source and Target events for a given key are available, we fetch their corresponding JSONs from Delta using the stored pointers, perform the comparison, and emit a “delta” record.
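The pairing step is concrete enough to sketch. A minimal version of the state function, assuming a Dataset[SideEvent] with spark.implicits._ in scope; every class and field name below is illustrative, not from the original design:

    import java.sql.Timestamp
    import org.apache.spark.sql.streaming.{GroupState, GroupStateTimeout, OutputMode}

    case class SideEvent(naturalId: String, side: String, hash: String, eventTime: Timestamp, deltaPath: String)
    case class PairState(source: Option[SideEvent], target: Option[SideEvent])
    case class DeltaOut(naturalId: String, hashesMatch: Boolean, sourcePath: String, targetPath: String)

    def pairAndCompare(key: String, events: Iterator[SideEvent], state: GroupState[PairState]): Iterator[DeltaOut] = {
      if (state.hasTimedOut) {          // one side never arrived before the watermark passed
        state.remove()
        return Iterator.empty           // or emit an "unmatched" record here
      }
      var s = state.getOption.getOrElse(PairState(None, None))
      events.foreach { e =>
        s = if (e.side == "source") s.copy(source = Some(e)) else s.copy(target = Some(e))
      }
      (s.source, s.target) match {
        case (Some(src), (Some(tgt))) =>
          state.remove()                // both sides seen: compare and drop state
          // on hash mismatch, fetch the full JSONs from Delta via deltaPath and diff them (not shown)
          Iterator.single(DeltaOut(key, src.hash == tgt.hash, src.deltaPath, tgt.deltaPath))
        case _ =>
          state.update(s)               // keep only the lightweight metadata in state
          state.setTimeoutTimestamp(state.getCurrentWatermarkMs() + 30 * 60 * 1000L)
          Iterator.empty
      }
    }

Wired up as:

    val deltas = events
      .withWatermark("eventTime", "30 minutes")
      .groupByKey(_.naturalId)
      .flatMapGroupsWithState(OutputMode.Append, GroupStateTimeout.EventTimeTimeout)(pairAndCompare _)

The design point this validates: state stays O(metadata) per key, and the expensive full-JSON fetch only happens on the comparison path.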
inversion_clear H as pat
14 October 2025 @ 4:49 pm
How can I write a tactic to do inversion_clear H as pat which expands to inversion H as pat; clear H?
Tactic Notation "inversion_clear" ident(H) "as" simple_intropattern :=
inversion H as simple_intropattern; clear H.
yields Error: Disjunctive/conjunctive introduction pattern expected.
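The nonterminal likely needs to bind a name that the body then refers to; a sketch (untested):

    Tactic Notation "inversion_clear" ident(H) "as" simple_intropattern(pat) :=
      inversion H as pat; clear H.

Note that inversion_clear already exists as a built-in tactic, so if Coq rejects the redefinition, a fresh name (e.g. inv_clear) should sidestep that.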
No Longer Getting Card Management History from API?
14 October 2025 @ 4:49 pm
I've been using this documentation: https://jackhenry.dev/open-api-docs/admin-api/api-reference/v0/history/history-events-retrieval/
For the last several months, I have been using this call:
/a/history/api/v0/institutions/{institutionId}/person-events?from={from}&to={to}&personId={personId}&change=CardStatusChanged
I'm trying to identify if a given customer has used card controls in the last 24 hours. For example, if you lock and then unlock a card, it shows up in this history.
In the past, this endpoint has worked fine. I know it was working up to 9/16/2025. Since 9/22/2025, this is no longer working. When I query this API, the response contains other history events, but nothing to do with card controls.
Has something changed in the API? Is there somewhere else I can find this information? T
What should I do if such restrictions are being applied to my code?
14 October 2025 @ 4:48 pm
So basically there is this code I have to write while staying within the restrictions, but I can't find a way around them. Let me show you the code snippet:
cout << "Enter your Role (S for Scavenger, M for Medic, E for Engineer): ";
cin >> role;
(role == 'S' || role == 'M' || role == 'E') ? 0 : (cout << "Invalid Role Entered, Exiting Program", exit(0), 0);
[This code moves forward if input is valid but terminates if it is invalid]
Task: The program should terminate completely and not take any further input if the value given to this variable is invalid (role is a char).
Restrictions:
Must use the ternary operator (no if, switch, or loops)
Can't use exit() or void
Can't use libraries other than iostream
(return 0 can't be used inside the ternary operator)
Is there a way to work around this? I'm hoping there is.
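One common workaround under exactly these restrictions: let the ternary choose between the rest of the program and an error path, and end the process by returning from main. The return sits outside the ternary, so the "no return 0 in the ternary" rule is respected. A sketch; runProgram is a hypothetical stand-in for whatever follows the role prompt:

    #include <iostream>
    using namespace std;

    // Hypothetical helper: the rest of the program lives here and
    // returns the process exit code when it finishes normally.
    int runProgram(char role) {
        cout << "Role accepted: " << role << "\n";
        // ... further input handling would go here ...
        return 0;
    }

    int main() {
        char role;
        cout << "Enter your Role (S for Scavenger, M for Medic, E for Engineer): ";
        cin >> role;
        // The ternary picks which branch runs; returning from main ends the
        // process, so no exit(), if, switch, or loop is needed.
        return (role == 'S' || role == 'M' || role == 'E')
                   ? runProgram(role)
                   : (cout << "Invalid Role Entered, Exiting Program\n", 1);
    }

The comma expression (cout << "...", 1) prints the message and yields an int, so both ternary branches have the same type and only iostream is needed.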
Trigger a workflow if a previous workflow exists and succeeded, or doesn't exist
14 October 2025 @ 4:47 pm
I am trying to find a way to trigger this workflow under two different conditions:
the workflow Build Docker Image has been triggered and succeeded, or
the workflow Build Docker Image has not been triggered on the push at all.
The Build Docker Image workflow only runs when the Dockerfile has changed, so sometimes there is no run to wait for.
Here is my YAML:
name: CI - Tests & Quality

on:
  push:
    branches: [ main, develop ]
  pull_request:
    branches: [ main, develop ]
  workflow_run:
    workflows: [ "Build Docker Image" ]
    types: [ completed ]
  workflow_dispatch:

concurrency:
  group: ${{ github.workflow }}-${{ github.head_ref || github.run_id }}
  cancel-in-progress: true

env:
  DOCKER_IMAGE: ${{ secrets.DOCKERHUB_USERNAME }}/vision

jobs:
  verify-image:
    name: Verify Docker Image
    runs-on: ubuntu-latest
    if: |
      github.event_name !
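The condition is cut off above, but given the two stated goals it presumably wants to be something like this (a sketch):

    if: |
      github.event_name != 'workflow_run' ||
      github.event.workflow_run.conclusion == 'success'

Two caveats worth checking: GitHub only fires workflow_run for workflow files on the default branch, so pushes to develop may never produce that event; and a push that changes the Dockerfile would run this job twice (once for the push, once when Build Docker Image completes) unless the push trigger gets a paths-ignore guard or an extra condition.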
I am using a Quectel eg8000-cn module with its own SDK. Why don't both queue releases happen? Is there a limit?
14 October 2025 @ 4:46 pm
The problem: the queue release keeps failing. I use two queue releases, one for the socket connection and one for sending the string.
Once the timer count reaches 30, the socket gets connected, but the string send never executes because the second queue release fails.
void ql_pim_test_cb(unsigned int param)
{
    static int timer_cnt = 0;
    timer_cnt++;
    test_log("timer_cnt:%d\r\n", timer_cnt);
    if (timer_cnt >= config_sys_data.PIN)  /* fires once the counter reaches the configured limit (presumably 30) */
    {
        timer_cnt = 0;
        test_log("timer_cnt:%d\r\n", timer_cnt);
        queue_send_msg = SOCK_SEND_DATA;
        // if (socket_connected)
        // {
        //     queue_send_msg = SOCK_SEND_DATA;
        // }
        // else
        // {
        //     queue_send_msg = SOCK_CONNECT;
        // }
        int ret = ql_rtos_queue_release(sock_task_queue, sizeof(queue_send_msg), &queue_send_msg, QL_NO_WAIT);
        if (ret < 0)
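The snippet cuts off at the return-code check. One thing worth verifying (an assumption, since the queue creation isn't shown): with QL_NO_WAIT, a release into a full queue fails immediately, so if the consumer task is still blocked handling the earlier SOCK_CONNECT message when the timer fires, the second message has nowhere to go. Logging the concrete code makes the failure mode visible:

    /* Sketch, not from the original post: surface the actual failure code. */
    int ret = ql_rtos_queue_release(sock_task_queue, sizeof(queue_send_msg), &queue_send_msg, QL_NO_WAIT);
    if (ret != 0)
    {
        test_log("queue release failed, ret=%d\r\n", ret);  /* queue full? invalid handle? */
    }

If the queue was created with room for only one message, increasing its depth or waiting briefly instead of QL_NO_WAIT would be the natural things to try.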