StackOverflow.com

Random snippets of all sorts of code, mixed with a selection of help and advice.

.nut script does not pass data to SQL database when params[3] in the script is set to 1

27 December 2025 @ 2:42 am

I have a problem with a .nut file that's part of a game server that's written in c++. I did not write these server files, they're part of an open source project that is no longer available to download, otherwise I would link to the github page. The .nut script is supposed to create/modify an entry in an SQL database, with the following information: UID: ID of the entry Character: the ID of the player attached to this entry, if any. RelatedTo: The ID of the player's clan, if any. Type: The ID of the counter. Counter: The value of the counter. PreExpireType: I'm not 100% sure what this value does. GroupCounter: I'm not 100% sure what this value does. TimeStamp: Timestamp of when counter was changed. The .nut script that's called by the server's event system looks lik

C# TSS Interacting with the TPM

27 December 2025 @ 1:35 am

public static AESTPMKey OpenOrCreateAesRootKey()
{
    const uint persistentHandleValue = 0x81000001;
    var persistentHandle = new TpmHandle(persistentHandleValue);

    // 1️⃣ Connect to TPM
    var tpmDevice = new TbsDevice();
    tpmDevice.Connect();
    var tpm = new Tpm2(tpmDevice);

    bool recreateKey = false;

    // 2️⃣ Check if persistent key exists
    try
    {
        var existingPub = tpm.ReadPublic(persistentHandle, out _, out _);
        if (existingPub.parameters is SymDefObject sym && sym.Algorithm == TpmAlgId.Aes)
        {
            MessageBox.Show("Persistent AES key exists. Reusing key.", "Info",
                MessageBoxButton.OK, MessageBoxImage.Information);
            return new AESTPMKey(tpm, persistentHandle);
        }
        else
        {
            // Existing key is not AES → evict
            tpm._AllowEr

Failure to enumerate USB device with DWC2 USB host

27 December 2025 @ 1:33 am

I am working on an embedded RTOS system for a 32-bit MIPS SoC, which supports DWC2 OTG. (It can run Linux well.) I ported an open-source USB stack to this system by following the porting guidance.

const struct dwc2_user_params param_test = {
    .phy_type = DWC2_PHY_TYPE_PARAM_UTMI,
    .phy_utmi_width = 16,
#ifdef CONFIG_USB_DWC2_DMA_ENABLE
    .device_dma_enable = true,
#else
    .device_dma_enable = false,
#endif
    .device_dma_desc_enable = false,
    .device_rx_fifo_size = (2048 - 16*16),
    .device_tx_fifo_size = {
        [0] = 16,  // 64 byte
        [1] = 16,  // 64 byte
        [2] = 16,  // 64 byte
        [3] = 16,  // 64 byte
        [4] = 16,  [5] = 16,  [6] = 16,  [7] = 16,
        [8] = 16,  [9] = 16,  [10] = 16, [11] = 16,
        [12] = 16, [13] = 16, [14] = 16, [15] = 16
    },
    .device_gccfg = 0,
    .host_gccfg = 0,
    .host_rx_fifo_size = 1096, // (reg: 0x24)
    .host_nperio_

Resolving itanium-abi demangling for a template <type> with a <substitution>

27 December 2025 @ 1:20 am

The Itanium ABI BNF gives two possible demangling resolutions for a <type> entity that is given as <substitution><template-args>. (E.g.: in "_Z1aSaIcE", "SaIcE" must be a <type>, and it's made up of a <substitution> ("Sa") plus a single-term <template-args> ("c" = char).)

<type> = <substitution><template-args> can be demangled as either:

(A) <type>
    -> <template-template-param><template-args>
    -> <substitution><template-args>

(B) <type>
    -> <class-enum-type>
    -> <name>
    -> <unscoped-template-name><template-args>
    -> <substitution><template-args>

How to automate versioning via CI in a Python project?

27 December 2025 @ 1:07 am

I have a Python project using pyproject.toml and I want to figure out the best method to automate incrementing the version. The requirements are:

- Each time the package is modified (e.g. src/**, pyproject.toml, or requirements.txt), automatically increment the patch version and publish a new package.
- Allow for manually incrementing the major/minor versions when necessary.

My solution so far has been to have a $NEXT_VERSION variable set in the repo settings. This defines what version to use the next time the package is published. Then my pyproject.toml contains:

[tool.hatch.version]
source = "env"
variable = "NEXT_VERSION"

Each time I merge a change in the specified fil
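For context, the automatic patch increment described above can be sketched as a small Python helper. This is a minimal sketch, not part of hatch or any tool mentioned in the question; bump_patch is a hypothetical name, and a real CI step would additionally read and write the $NEXT_VERSION repo variable.

```python
def bump_patch(version: str) -> str:
    """Increment the patch component of a 'major.minor.patch' string."""
    major, minor, patch = (int(part) for part in version.split("."))
    return f"{major}.{minor}.{patch + 1}"

print(bump_patch("1.4.9"))  # -> 1.4.10
```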

printf() not working on colab while running a CUDA c++ code

26 December 2025 @ 9:58 pm

This is my first time working with CUDA programs, so I just wrote a simple hello world program.

#include <stdio.h>

__global__ void hello() {
    printf("Hello block: %u and thread: %u\n", blockIdx.x, threadIdx.x);
}

int main() {
    hello<<<2,2>>>();
    cudaDeviceSynchronize();
}

I compiled this using nvcc hello.cu -o hello and ran it using ./hello.

%%shell
nvcc --version
nvcc: NVIDIA (R) Cuda compiler driver
Copyright (c) 2005-2024 NVIDIA Corporation
Built on Thu_Jun__6_02:18:23_PDT_2024
Cuda compilation tools, release 12.5, V12.5.82
Build cuda_12.5.r12.5/compiler.34385749_0

I'm running it on Google Colab using the T4 GPU. When I run the code, I do not get any output printed. Any ideas on how to fix it?

How does attribute access (.) actually work internally?

26 December 2025 @ 9:33 pm

When I write obj.attribute in Python, what is the sequence of operations and lookups that Python performs to resolve this attribute? A step-by-step explanation would really help me understand how Python's objects work.
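Not a full answer, but the core of the lookup order the question asks about (data descriptors found on the type win over the instance __dict__, which wins over plain class attributes, with __getattr__ as the last resort) can be demonstrated with a short sketch; the class names here are made up for illustration:

```python
class LoudDescriptor:
    """A data descriptor: defines both __get__ and __set__."""
    def __get__(self, obj, objtype=None):
        return "from data descriptor"
    def __set__(self, obj, value):
        pass

class A:
    x = LoudDescriptor()   # data descriptor on the class
    y = "class attribute"  # plain (non-descriptor) class attribute

    def __getattr__(self, name):
        # Fallback: only called when normal lookup fails entirely.
        return f"fallback for {name}"

a = A()
a.__dict__["x"] = "instance value"  # shadowed by the data descriptor
a.__dict__["y"] = "instance value"  # shadows the plain class attribute

print(a.x)  # -> from data descriptor  (data descriptor beats instance dict)
print(a.y)  # -> instance value        (instance dict beats plain class attr)
print(a.z)  # -> fallback for z        (__getattr__ as last resort)
```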

Being too reliant on AI, roast the crap out of my coding please

26 December 2025 @ 8:30 pm

Alright, I feel like this needs to be said. Lately I've become way too dependent on AI as a junior developer, and honestly, it's starting to hurt my growth. Instead of properly thinking through problems, I've been outsourcing my brain, and that's on me. That's exactly why I want this code to be absolutely roasted. No sugarcoating, no polite feedback. I haven't touched C# in a while, and combined with relying too much on AI, my fundamentals have clearly gotten rusty. This exercise is meant to force me to confront that and actually improve as a programmer. Here's the code.

using System;

namespace MyApp
{
    internal class Program
    {
        static void Main(string[] args)
        {
            Generator generator = new Generator();
            generator.numberGenerator();
        }
    }

    class Generator()
    {
        public void numberGenerator()
        {
            Console.WriteLine("Give the difficulty of the game: +" +

Why does statement expression ({...}) not exist in the C standard? [closed]

26 December 2025 @ 7:44 pm

I have encountered an issue in the C language. In GCC and Clang there is a feature called statement expressions, which is often used in macros. Its main feature is that several statements can be placed in a single block inside a macro; the difference from do-while(0) is that with do-while the macro cannot have a return value, whereas in ({ ... }) the last statement is used as the value of the expression. On the other hand, functions are not always a suitable replacement, because macros can accept arguments of unspecified type, and with constructs like _Generic one can call a specific function based on the input type. Consider the following example:

#define FIND(haystack, needle, n_occurrence) ({ \
    STR_EXPECT_STRING_OR_ARR_PTR(haystack); \
    STR_EXPECT_STRING_OR_ARR_PTR(needle); \
    internal_find_fnc_arr(INTERNAL_AUTO_CHANGE(haystack), INTERNAL_AUTO_CHANGE(needle), n_occurrence); \
})

The function prototype is:

Ranking and Comparing Beta Distributions [closed]

26 December 2025 @ 7:36 pm

What I have: I built a Thompson sampler which, given a data set, classifies data with a key and determines whether it is considered a success or not, so that each key has an alpha and beta value for a Beta distribution. With this it is easy, given n keys, to take a random sample from each one and pick the one with the highest value (this is what the method was made for). The Problem: Given this configuration of n Beta distributions (built with the observed a & b values), I want to rank them from best to worst and calculate a value that represents "how much better (i.e. how much more probable it is to obtain a favorable value)" one distribution is compared to the others (so I can say something like "this distribution is x times better than this other one"). My Solution: My first approach was ranking them by expected value. The problem with this is that for beta distributions the size o
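One common way to make "distribution A is better than B" concrete is the probability P(X_a > X_b) that a draw from one Beta beats a draw from the other, which is easy to estimate by Monte Carlo sampling. This is a suggested sketch, not the asker's code; the function name and parameters are made up for illustration:

```python
import random

def prob_a_beats_b(a1, b1, a2, b2, n=100_000, seed=0):
    """Estimate P(X_a > X_b) for X_a ~ Beta(a1, b1), X_b ~ Beta(a2, b2)."""
    rng = random.Random(seed)
    wins = sum(
        rng.betavariate(a1, b1) > rng.betavariate(a2, b2)
        for _ in range(n)
    )
    return wins / n

# A key with 80/20 successes vs. one with 50/50:
p = prob_a_beats_b(80, 20, 50, 50)
print(p)  # close to 1.0: the first distribution is almost surely better
```

Ranking the n keys by pairwise P(X_a > X_b) (or by each key's probability of beating a common reference) also naturally accounts for how concentrated each distribution is, which a plain expected-value ranking ignores.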