Random snippets of all sorts of code, mixed with a selection of help and advice.
Neo4j Full-Text Search extremely slow on large dataset (300M nodes)
30 March 2026 @ 6:28 pm
I'm working with a graph containing around 300 million nodes of a single label (Person). I have created a full-text index on the name property to handle text searches.
My goal is simply to get the total count of people matching a very common name (e.g., "PEDRO"). However, my query is extremely slow and often results in timeouts.
Here is the query I am using:
CALL db.index.fulltext.queryNodes("pessoa_nome_fulltext", "PEDRO") YIELD node
RETURN count(node) AS total_matches;
What's causing this problem? What do I need to do in order to fix it?
Unable to understand certain logic in the solution
30 March 2026 @ 6:10 pm
I am new to Stack Overflow, so sorry if I made any errors while asking this question.
I was practicing this DSA question, and my university provided a solution too, but unfortunately I am unable to understand why we take range(M-N+1) instead of just range(P+1).
Question: This is the link to the question.
The solution provided by the university:
def find_Min_Difference(L, P):
    L.sort()
    N = P
    M = len(L)
    min_diff = max(L) - min(L)
    for i in range(M - N + 1):
        if L[i + N - 1] - L[i] < min_diff:
            min_diff = L[i + N - 1] - L[i]
    return min_diff

L = eval(input().strip())
P = int(input())
print(find_Min_Difference(L, P))
Now, why are we running the loop over range(M-N+1)?
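A small illustration (not part of the original post) of where M-N+1 comes from: after sorting, the N closest values must be N consecutive elements, so the algorithm slides a window of size N over the list. The example list and values below are made up for demonstration.

```python
# Counting the positions a sliding window of size N can occupy
# in a sorted list of length M.
def window_starts(M, N):
    """Return the valid start indices for a window of N consecutive items."""
    # A window starting at i covers indices i .. i+N-1. The last window
    # must end at index M-1, so the last valid start is M-N.
    return list(range(M - N + 1))

L = [3, 7, 12, 18, 25]          # M = 5, already sorted
N = 3                           # pick N consecutive values
print(window_starts(len(L), N)) # [0, 1, 2] -> exactly M-N+1 = 3 windows
```

Using range(P+1) instead would try window starts past the end of the list (or too few, depending on M), because the number of windows depends on both the list length M and the window size N, not on N alone.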
calling flutter from Python with --name
30 March 2026 @ 6:07 pm
I use this in a bash script:
flutter test --coverage --branch-coverage ut --name tp001
And it works when I invoke the script as do_ut tp001: it only runs tests in the group "tp001". Great, this is what I want.
But I would like to run that command from a python script:
cmd = ['flutter', 'test', '--coverage', '--branch-coverage', 'ut', '--name', 'tp001']
print('HERE0', cmd)
result = subprocess.run(cmd, stdout=subprocess.PIPE, stderr=subprocess.STDOUT, text=True)
And I get ALL the test cases, not just the tests in group tp001:
HERE0 ['flutter', 'test', '--coverage', '--branch-coverage', 'ut', '--name', 'tp001']
DBG result:
00:00 +0: loading languages/test_flutter/ut/tp010_model_test.dart
00:01 +0: loading languages/test_flutter/ut/tp010_model_test.dart
<snip>
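One hedged first debugging step: confirm that the argument list passed to subprocess.run really matches the word splitting bash performs on the working command line. The sketch below uses shlex.split, which reproduces POSIX shell tokenization; if the lists match (as they appear to here), the difference more likely lies elsewhere, e.g. the working directory the Python script runs in, or a flutter wrapper script picked up from a different PATH.

```python
import shlex

# The command line that works from the bash script (copied from the question).
bash_command = 'flutter test --coverage --branch-coverage ut --name tp001'

# shlex.split tokenizes the string the same way the shell would, so this
# list is exactly what subprocess.run(cmd) should hand to flutter.
cmd = shlex.split(bash_command)
print(cmd)
```

If the lists are identical, comparing `shutil.which('flutter')` inside Python against `which flutter` in the shell, and running subprocess.run with an explicit `cwd=`, are the next things worth checking.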
How to do composite unique constraints using SQLAlchemy and Alembic?
30 March 2026 @ 6:06 pm
I'm trying to figure out how to add a composite unique constraint to a table, where the combination must be unique but the individual fields can be the same.
Here is how my migration looks (it uses batch operations because the dev database is SQLite):
def upgrade() -> None:
    """Upgrade schema."""
    with op.batch_alter_table("authors") as batch_op:
        batch_op.create_unique_constraint(
            "uq_author_fullname", ["firstname", "lastname"]
        )


def downgrade() -> None:
    """Downgrade schema."""
    with op.batch_alter_table("authors") as batch_op:
        batch_op.drop_constraint("uq_author_fullname", type_="unique")
This is what Alembic creates (I assume I need to get rid of UNIQUE (lastname) and UNIQUE (firstname)).
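For reference, a sketch of the intended semantics using plain sqlite3 from the standard library (since the dev database is SQLite). The table and constraint names follow the migration; the row values are made up. The pair (firstname, lastname) is unique, while each column may repeat on its own:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE authors (
        id INTEGER PRIMARY KEY,
        firstname TEXT,
        lastname TEXT,
        CONSTRAINT uq_author_fullname UNIQUE (firstname, lastname)
    )
""")
conn.execute("INSERT INTO authors (firstname, lastname) VALUES ('Ada', 'Lovelace')")
# Same firstname, different lastname: allowed by a composite constraint.
conn.execute("INSERT INTO authors (firstname, lastname) VALUES ('Ada', 'Byron')")
try:
    # Exact same pair: rejected.
    conn.execute("INSERT INTO authors (firstname, lastname) VALUES ('Ada', 'Lovelace')")
except sqlite3.IntegrityError as e:
    print("rejected duplicate pair:", e)
```

If Alembic autogenerate is emitting separate UNIQUE (firstname) and UNIQUE (lastname) constraints, those usually come from `unique=True` on the individual Column definitions in the model, which is a different (stricter) constraint than the composite one above.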
How to make sure users read and modify only their own data in Supabase?
30 March 2026 @ 6:05 pm
I'm connecting an Android app to Supabase. The Supabase project includes users and tasks tables. I would like to allow users to read, update, and delete only their own tasks. From what I understand, the Supabase documentation states that applying appropriate Row Level Security (RLS) policies on the tables will limit who has access to what.
So if I apply the following policy to the tasks table, the Android app will select and access only the rows of the logged-in user, is that right?
create policy "Anonymous and permanent users can read their own tasks only"
on public.tasks
for select to authenticated
using ((select auth.uid()) = user_id);
My question is, when I define a database function to be executed on the tasks table, does it need to include a conditional clause like where tasks.user_id = user_id and be passed the user_id as a parameter like in the example below?
How does Neovim populate the runtimepath at launch?
30 March 2026 @ 6:04 pm
I am trying to upgrade from Neovim version 0.10.0 to 0.12.0. However, the runtimepath is still configured for version 0.10.0. When I run lua print(vim.inspect(vim.api.nvim_list_runtime_paths())) the output is as follows:
{ "/home/lucas/.config/nvim",
  "/home/lucas/Application-Data/nvim/0.10.0/nvim-linux64/share/nvim/runtime",
  "/home/lucas/Application-Data/nvim/0.10.0/nvim-linux64/share/nvim/runtime/pack/dist/opt/matchit",
  "/home/lucas/Application-Data/nvim/0.10.0/nvim-linux64/lib/nvim" }
This is a problem because, when I try to open (say) a lua file, I get a bunch of errors corresponding to deprecated functions.
I've tried to manually delete the extra runtimepath entries using lua vim.opt.runtimepath:remove(), but they are repopulated each time I launch Neovim.
It's been a while since I set the whole thing up, so I'm wondering how the runtimepath is set at launch, and what I can modify so that it points to the new version.
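Assuming the stale 0.10.0 entries come from how Neovim resolves its runtime at startup (the runtime portion of runtimepath is derived from the launched binary's install prefix, unless a $VIMRUNTIME environment variable overrides it), a small Python sketch of the two things worth checking first: which nvim binary is actually on $PATH, and whether $VIMRUNTIME is exported in a shell profile.

```python
import os
import shutil

def nvim_launch_info():
    """Collect the two inputs that usually explain a stale runtimepath."""
    return {
        # If this still points into the 0.10.0 install tree, the old
        # binary is being launched, and it brings its own runtime along.
        "nvim_on_path": shutil.which("nvim"),
        # If this is set (e.g. exported in .bashrc), it overrides the
        # runtime directory derived from the binary.
        "VIMRUNTIME": os.environ.get("VIMRUNTIME"),
    }

info = nvim_launch_info()
print(info)
```

Removing entries with `vim.opt.runtimepath:remove()` inside Neovim won't help here, since the path is rebuilt from these inputs on every launch.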
.venv not activating on vsc linux
30 March 2026 @ 5:56 pm
mansur@mansur:~/Documents/CS$ source /home/mansur/Documents/CS/.venv/bin/activate
bash: /home/mansur/Documents/CS/.venv/bin/activate: No such file or directory
I switched to Linux about a month ago, and it seemed pretty nice because of its privacy and comfort for programming. Then I wanted to include some libraries in my Python project, but I couldn't install them. I think this problem is caused by my .venv error.
Any ideas why this happens?
SOLUTION
Thanks for answering my question, but I found the solution. Here is a little documentation on how to do it the CORRECT way.
Install the essentials
python3 --version
pip3 --version
If they are not installed:
Debian/Ubuntu
sudo apt update
sudo apt install python3 python3-pip python3-venv
Create a virtual environment (VERY important)
python3 -m venv .venv
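The same environment can also be created programmatically with the stdlib venv module. The sketch below (paths are illustrative) also verifies that bin/activate exists afterwards; its absence is exactly the "No such file or directory" symptom from the question, which typically happens when the python3-venv package is missing and the .venv directory was only half-created.

```python
import os
import tempfile
import venv

# Create a venv in a throwaway location (with_pip=False keeps the sketch
# fast; a real project venv would normally include pip).
target = os.path.join(tempfile.mkdtemp(), ".venv")
venv.create(target, with_pip=False)

# On Linux/macOS the activation script lands in bin/ (Scripts\ on Windows).
activate = os.path.join(target, "bin", "activate")
print("activate script present:", os.path.exists(activate))
```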
Why does my on-prem OCR + RAG pipeline design lead to poor retrieval performance?
30 March 2026 @ 5:51 pm
I am building an on-prem document processing system using:
- OCR for scanned PDFs
- Embeddings for indexing
- RAG for retrieval and question answering
However, I’m seeing issues with retrieval quality and latency when combining OCR output with embeddings.
For example:
- OCR text is noisy and affects embedding quality
- Retrieval results are inconsistent across similar queries
- Performance degrades with larger document sets
What are the common causes of these issues in such a pipeline, and how can they be addressed?
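One common mitigation for the first symptom is a normalization pass over the OCR output before computing embeddings. The function below is a hypothetical cleanup step, not part of the original pipeline: it repairs typical OCR artifacts (soft hyphens, words hyphenated across line breaks, runs of whitespace) that can push embeddings of near-identical text apart.

```python
import re

def clean_ocr_text(text: str) -> str:
    """Normalize common OCR noise before embedding."""
    text = text.replace("\u00ad", "")        # drop soft hyphens
    text = re.sub(r"-\n(\w)", r"\1", text)   # re-join words hyphenated at line breaks
    text = re.sub(r"[ \t]+", " ", text)      # collapse runs of spaces/tabs
    text = re.sub(r"\n{3,}", "\n\n", text)   # collapse runs of blank lines
    return text.strip()

print(clean_ocr_text("docu-\nment  processing\n\n\n\npipeline"))
# -> "document processing\n\npipeline"
```

Chunking strategy (splitting on layout-aware boundaries rather than fixed character counts) and re-ranking retrieved candidates are the usual next levers for the consistency and scaling issues.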
Micro benchmarking: how to measure multi-threaded CPU time?
30 March 2026 @ 5:45 pm
I'm attempting to benchmark a custom queue implementation with X consumer and Y producer threads. A single run takes on average ~68 ms, and I've noticed that my benchmarking library sometimes produces false positive performance regression warnings on code that hasn't been changed. I believe this is likely due to adverse thread scheduling, or to thread creation taking slightly longer than usual.
I've decided that measuring CPU time would be a better way to benchmark this code; the problem is that I'm not sure how to measure it. To my understanding, using CLOCK_PROCESS_CPUTIME_ID returns the sum of all threads' CPU time, which is fine when comparing run to run, but it can be a little odd for a human to read. I'm also not sure how I would derive related numbers such as throughput in items/s without just using wall-clock time again.
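A minimal sketch of the two-clock approach, in Python for illustration: time.process_time() reads the process CPU clock (CLOCK_PROCESS_CPUTIME_ID on Linux), summed across all threads, while time.perf_counter() is wall clock. Reporting both gives a scheduling-insensitive metric for run-to-run comparison and a wall-clock figure for throughput. The workload and thread counts below are made up, not the original queue benchmark.

```python
import threading
import time

def burn(n):
    """Illustrative CPU-bound work, standing in for a queue benchmark run."""
    total = 0
    for i in range(n):
        total += i * i
    return total

wall0, cpu0 = time.perf_counter(), time.process_time()
threads = [threading.Thread(target=burn, args=(200_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
wall = time.perf_counter() - wall0   # wall clock: use for items/s
cpu = time.process_time() - cpu0     # CPU time, summed over all threads

items = 4 * 200_000
print(f"wall: {wall:.4f}s  cpu: {cpu:.4f}s  throughput: {items / wall:,.0f} items/s")
```

One way to make the summed CPU figure easier to read is to also report it divided by the thread count (average CPU time per thread), while keeping the raw sum as the regression-detection metric.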
Battery capacity affected by temperature?
30 March 2026 @ 5:14 pm
I ran this command yesterday and today:
upower -i /org/freedesktop/UPower/devices/battery_BAT1
and yesterday it said the capacity was 91.something%, while now it's at 92.something%.
Since I'm currently in a cold environment (room temperature is 12°C), I was wondering whether the actual capacity is affected by the temperature or only the reading, and whether having/using my laptop in cold environments decreases battery life.
If this is the wrong place to ask, please direct me to an appropriate site.