Random snippets of all sorts of code, mixed with a selection of help and advice.
How to prioritize technical debt vs feature development in Agile sprint planning?
27 April 2026 @ 10:21 am
We follow a standard Agile sprint cycle (2-week sprints) using a backlog that includes both feature requests and technical debt tasks.
A recurring challenge is deciding how much capacity to allocate to technical debt versus new feature development during sprint planning. If we focus too much on features, technical debt accumulates and slows future development. If we prioritize technical debt heavily, feature delivery gets delayed.
Currently:
We use a single backlog for both features and technical debt
Prioritization is mainly driven by business requirements
There is no fixed rule for allocating effort between the two
Our question is:
What specific, repeatable method can be used during sprint planning to allocate capacity between technical debt and feature work?
I'm particularly interested in methods used during sprint planning and in how teams decide when technical debt should take priority over feature work.
Where can I find documentation, benchmarks, and implementation details for the Async HBase Client?
27 April 2026 @ 10:21 am
I am currently exploring the Async HBase Client (official version) and trying to understand its capabilities, performance benefits, and internal architecture. However, I am having trouble finding comprehensive resources.
Could anyone point me in the right direction regarding the following points?
1. Documentation: Is there any official documentation, user guide, or API reference specifically focused on how to properly use the Async HBase client?
2. Benchmarks: Are there any known benchmarks that demonstrate the performance improvements (e.g., throughput, latency) of the Async client compared to the traditional synchronous client?
3. Implementation Details: Is there any design document, architectural overview, or blog post that explains the underlying implementation and how the asynchronous operations are handled under the hood?
Any links to official docs, JIRA tickets, mailing list threads, or blog posts would be appreciated.
How to adapt 'import random'?
27 April 2026 @ 10:15 am
I want to code something like a small game. I first wrote it in Python, and now I want to convert it to C++. In my Python code I use import random, randint, etc. What is the C++ equivalent of that?
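The closest standard C++ equivalent is the <random> header, which pairs a random engine with a distribution. A minimal sketch (the engine and distribution chosen here are the common defaults, not the only options):

    #include <iostream>
    #include <random>

    int main() {
        std::random_device rd;      // non-deterministic seed source
        std::mt19937 gen(rd());     // Mersenne Twister engine

        // Equivalent of Python's random.randint(1, 6): both bounds inclusive
        std::uniform_int_distribution<int> dist(1, 6);

        std::cout << dist(gen) << '\n';
    }

For Python's random.random() (a float in [0, 1)), use std::uniform_real_distribution<double> dist(0.0, 1.0) with the same engine.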
Github action deletes entire server
27 April 2026 @ 10:13 am
I have a GitHub Action that deploys a Vue project to a shared host. But this script wipes out the entire server: mails, SSH keys, other projects, absolutely everything on the server.
I don't understand how that is possible?
!!!! Do not use this code !!!
name: Deploy to Staging to server

on:
  push:
    branches:
      - STAGING # Trigger the deployment on push to the 'staging' branch

concurrency:
  group: deploy-staging
  cancel-in-progress: false

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # Step 1: Checkout the repository
      - name: Checkout code
        uses: actions/checkout@v4

      # Step 2: Set up Node.js environment
      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: '22'
          cache: 'npm'

      # Step 3: Install dependencies
      - name: Install dependencies
        run: npm ci

      # Step 4: Build the Vue app
      - name: Build Vue app
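The snippet is cut off before the actual deploy step, which is almost certainly where the deletion happens. Purely as a hypothetical illustration (none of these paths or variables come from the original workflow), a deploy step like the following can wipe a shared host, because rsync --delete removes everything on the remote side that is not present in the local source folder:

      # HYPOTHETICAL deploy step, for illustration only
      - name: Deploy via rsync
        run: |
          # If DEPLOY_PATH is empty or resolves to the account root,
          # --delete erases mail, SSH keys, and every other project there.
          rsync -az --delete dist/ "deploy@example.com:$DEPLOY_PATH"

Checking what the real (omitted) deploy step passes as its remote path would be the first thing to verify.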
Call different bpf helpers depending on kernel version?
27 April 2026 @ 10:12 am
I need to get the current system time in my BTF probe. How can I call bpf_ktime_get_boot_ns() on kernels 5.8 and newer, and bpf_ktime_get_ns() on older kernels where bpf_ktime_get_boot_ns() is not yet available?
If possible I'd like to avoid building sets of probes for multiple kernel versions, since there are other similar helpers not available in all kernels, like bpf_probe_read_kernel_str().
I need to get the time from multiple probes, so maybe defining different functions that wrap the bpf_ktime... helpers might work but it doesn't look trivial to implement.
And disassembling the bpf bytecode and patching the call opcodes also doesn't look very appealing.
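If the probes are compiled as CO-RE objects with libbpf, one approach that avoids separate builds is to test at load time whether the helper's ID exists in the target kernel's enum bpf_func_id; libbpf resolves the check to a constant and the verifier dead-code-eliminates the branch that calls the missing helper, so it never has to verify it. A sketch, assuming libbpf's bpf_core_read.h (which provides bpf_core_enum_value_exists) is available:

    #include "vmlinux.h"
    #include <bpf/bpf_helpers.h>
    #include <bpf/bpf_core_read.h>

    /* Boot time on kernels that have it (>= 5.8), monotonic time otherwise. */
    static __always_inline __u64 get_time_ns(void)
    {
        if (bpf_core_enum_value_exists(enum bpf_func_id,
                                       BPF_FUNC_ktime_get_boot_ns))
            return bpf_ktime_get_boot_ns();
        return bpf_ktime_get_ns();
    }

The same pattern should extend to bpf_probe_read_kernel_str() versus its older counterpart, and one wrapper like this can be shared across all the probes.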
How to correctly specify models for mediation analysis with cmest in CMAverse (R packages)?
27 April 2026 @ 10:12 am
I am currently performing a mediation analysis using the cmest function from the CMAverse package in R. I noticed that the results differ depending on the estimation and inference options used (see figure below), especially for the PNIE.
With estimation = "paramfunc" and inference = "delta", the confidence intervals are symmetric, as shown in the forest plots.
However, with estimation = "imputation", inference = "bootstrap", and boot.ci.type = "bca", the confidence intervals become asymmetric, the estimates are no longer centered in the forest plot, and the statistical significance changes (the PNIE becomes significant).
Could someone help clarify:
The main differences between these approaches, and why I am getting different results?
Whether asymmetric confidence intervals in this context are expected and valid to report?
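For reference, the two configurations being compared would look roughly like this; the data frame, variable names, regression types, and mediator value below are placeholders, not taken from the actual analysis:

    library(CMAverse)

    # Closed-form estimation with delta-method inference: symmetric CIs
    res_delta <- cmest(data = df, model = "rb",
                       outcome = "Y", exposure = "A", mediator = "M",
                       basec = c("C1", "C2"), EMint = TRUE,
                       mreg = list("linear"), yreg = "linear",
                       astar = 0, a = 1, mval = list(0),
                       estimation = "paramfunc", inference = "delta")

    # Imputation-based estimation with BCa bootstrap: CIs come from the
    # resampled distribution, so they can sit asymmetrically around the estimate
    res_boot <- cmest(data = df, model = "rb",
                      outcome = "Y", exposure = "A", mediator = "M",
                      basec = c("C1", "C2"), EMint = TRUE,
                      mreg = list("linear"), yreg = "linear",
                      astar = 0, a = 1, mval = list(0),
                      estimation = "imputation", inference = "bootstrap",
                      boot.ci.type = "bca", nboot = 1000)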
Cantera module not found
27 April 2026 @ 10:06 am
I have been successfully using Cantera 3.2 in Spyder. I closed Spyder, and when I re-opened it, Cantera wasn't found. I tried reinstalling Cantera using pip at the cmd prompt, but Python still doesn't find the module that it was finding yesterday...
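A common cause of this symptom is that Spyder has started using a different Python interpreter than the one pip installed Cantera into. A quick check from Spyder's own IPython console:

    import sys
    print(sys.executable)  # the interpreter Spyder is actually running

If that path differs from the python that cmd's pip belongs to, install into Spyder's interpreter explicitly, e.g. "<path-from-above> -m pip install cantera", where the path is whatever sys.executable printed.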
Feedback on development
27 April 2026 @ 10:06 am
This is my first time posting on Stack Overflow, so I don't know if this is allowed. The context is that I am developing a website for my final year of college, and as part of the criteria I am required to gather feedback. If anyone can spare a few minutes to go through the site and answer the survey questions, it would be greatly appreciated.
Link to site - https://greenfield-local-hub.xo.je/index.php
Link to GitHub repo - https://github.com/Artsexam07/Master-codebase
Link to survey - https://forms.office.com/e/Z53ddbzwT3
Thank you
Regarding MONAI's WarmupCosineSchedule with AdamW: should scheduler.step() be called per batch, and does the optimizer lr define the peak lr?
27 April 2026 @ 10:05 am
I am training a PyTorch segmentation model and using:
torch.optim.AdamW
monai.optimizers.WarmupCosineSchedule
My optimizer:
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=1e-4,
    weight_decay=1e-5,
    betas=(0.9, 0.999),
    eps=1e-8,
)
My scheduler:
from monai.optimizers import WarmupCosineSchedule
from monai.optimizers import WarmupCosineSchedule

scheduler = WarmupCosineSchedule(
    optimizer=optimizer,
    warmup_steps=1000,
    t_total=10000,
    end_lr=0.0,
    cycles=0.5,
    warmup_multiplier=0.01,
)
My training loop:
for batch in train_loader:
    images, labels = batch  # assuming each batch yields an (images, labels) pair
    optimizer.zero_grad()
    outputs = model(images)
    loss = criterion(outputs, labels)
    loss.backward()
    optimizer.step()
    scheduler.step()
My first question is as follows: since this scheduler is step-based (similar to HuggingFace warmup cosine schedules), is it correct to call scheduler.step() once per batch, right after optimizer.step()?
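One way to check both points empirically is to drive the scheduler on a dummy parameter and record the learning rate at each step; a sketch using only the arguments I am sure WarmupCosineSchedule accepts (warmup_steps, t_total):

    import torch
    from monai.optimizers import WarmupCosineSchedule

    p = torch.nn.Parameter(torch.zeros(1))
    opt = torch.optim.AdamW([p], lr=1e-4)
    sched = WarmupCosineSchedule(optimizer=opt, warmup_steps=1000, t_total=10000)

    lrs = []
    for _ in range(10000):
        lrs.append(sched.get_last_lr()[0])  # LambdaLR subclass, so this exists
        opt.step()   # no-op without gradients, but lets the scheduler advance
        sched.step()

    print(max(lrs))  # equals 1e-4 if the optimizer lr is indeed the peak lr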
How to make developers accountable?
27 April 2026 @ 10:00 am
I’m looking for practical ways to make developers more accountable for the code they write and the decisions they make within a team or organization.
In many projects, issues like missed deadlines, poor code quality, lack of documentation, or unaddressed bugs often don’t have clear ownership. This can lead to frustration, technical debt, and difficulty maintaining systems over time.
What strategies, tools, or processes have you found effective in improving accountability among developers? For example:
Are there specific workflows (code reviews, pull request policies, CI/CD practices) that help?
How do you ensure ownership of features or bugs without creating a blame culture?
What role do documentation, testing, or issue tracking systems play?
Are there any team structures or management approaches that work particularly well?
I’m especially interested in approaches that balance accountability with a healthy, blame-free team culture.