Random snippets of all sorts of code, mixed with a selection of help and advice.
How to configure automatic data expiration (TTL) for time-series containers in GridDB using the Java API?
13 March 2026 @ 3:48 am
I am experimenting with GridDB for an IoT application where sensors continuously send temperature and humidity readings.
Each record contains a timestamp and the measurements. The container is defined in my Java application like this:
class SensorData {
    @RowKey
    Timestamp timestamp;
    double temperature;
    double humidity;
}
Sensor data is inserted continuously using the Java API:
TimeSeries<SensorData> container = store.getTimeSeries("sensor_data", SensorData.class);
SensorData row = new SensorData();
row.timestamp = new Timestamp(System.currentTimeMillis());
row.temperature = 22.5;
row.humidity = 60;
container.put(row);
Because the system stores continuous sensor readings, the dataset will grow very quickly over time. In the production environment we only need to keep the last 30 days of data.
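As far as I know, GridDB supports server-side row expiration for time-series containers, but it has to be configured when the container is created (via TimeSeriesProperties), not added to an existing container afterwards. The sketch below is based on the GridDB Java client API and is unverified against any particular version; check the exact signatures in your version's Javadoc before relying on it.

```java
import com.toshiba.mwcloud.gs.TimeSeries;
import com.toshiba.mwcloud.gs.TimeSeriesProperties;
import com.toshiba.mwcloud.gs.TimeUnit;

// Row expiration must be set at container creation time.
TimeSeriesProperties props = new TimeSeriesProperties();
props.setRowExpiration(30, TimeUnit.DAY);   // keep only the last 30 days

// putTimeSeries creates the container with the given properties
// (or returns the existing one if the schema is compatible).
TimeSeries<SensorData> ts =
        store.putTimeSeries("sensor_data", SensorData.class, props, false);
```

Note that this fragment assumes a live GridStore instance named store; expired rows are removed by the server in the background, so no application-side cleanup job is needed.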
Spring Batch rollback issue
13 March 2026 @ 3:44 am
I have a data migration to do using Spring Batch.
I created a processor of 10 steps, in which the first 3 steps are API calls that create the new transaction in the system with the data from the source.
But if a rollback takes place after that, the whole process starts again from step 1, leading to duplication of the transaction, and the API is not idempotent.
Please help: how can I overcome this problem?
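A common way around a non-idempotent downstream API is to make the calling step idempotent yourself: persist an idempotency key (e.g. the source record's primary key) together with the created transaction ID, and skip any record whose key is already marked as done, so a restart replays safely. Here is a minimal, self-contained sketch of that guard in plain Java; all names are hypothetical, and in a real Spring Batch job the map would be a database table written in the same transaction as the step's bookkeeping, so a rollback also rolls back the marker.

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of an idempotency-key guard around a non-idempotent API.
class IdempotentCaller {
    private final Map<String, String> processed = new HashMap<>();
    int apiCalls = 0;  // counts real downstream calls, for illustration only

    // Derive a stable key from the source record (e.g. its primary key).
    String process(String sourceRecordId) {
        String existing = processed.get(sourceRecordId);
        if (existing != null) {
            return existing;           // already created: skip the API call
        }
        String txnId = callCreateApi(sourceRecordId);
        processed.put(sourceRecordId, txnId);
        return txnId;
    }

    private String callCreateApi(String sourceRecordId) {
        apiCalls++;                    // stand-in for the real API call
        return "txn-" + sourceRecordId;
    }
}
```

With this in place, a job restart that replays already-processed items becomes harmless: the second call returns the recorded transaction ID instead of hitting the API again.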
How to debug/mitigate RocksDB WAL sync stalls (P99.9 latency) on NVMe when disk I/O is not saturated?
13 March 2026 @ 3:09 am
I am investigating a performance bottleneck in a C++ application using RocksDB v8.5.x. We are seeing extreme P99.9 latency spikes (up to 500ms) specifically during the Write-Ahead Log (WAL) flush and sync operations, even though the underlying NVMe RAID-0 array reports low utilization (~15%).
From a development perspective, I’ve narrowed it down to rocksdb::WritableFileWriter::Sync or fflush calls, but I am struggling to understand why the WAL write path is stalling when the device bandwidth and IOPS are nowhere near their limits.
The Setup & Configuration
RocksDB Options:
sync=true (for critical write paths)
use_direct_io_for_flush_and_compaction = true
wal_bytes_per_sync = 0 (default)
Environment: Ubuntu 22.04 (Kernel 5.15), ext4 (noatime, data=ordered).
Storage: RAID-0 Samsung 980 Pro NVMe.
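Low device utilization with high sync latency usually means the stall is in the flush path, not in bandwidth: each fsync/fdatasync waits for the drive to flush its volatile cache and for the ext4 journal commit, and with data=ordered the journal commit can force writeback of unrelated dirty data first. RAID-0 doesn't help here, since a sync completes only when the slowest member has flushed. One way to confirm the hypothesis is to measure raw per-sync latency on the same mount, outside RocksDB. The following is a hedged Java probe (a stand-in for an equivalent C++ fdatasync loop) that mimics a WAL append+sync cycle; on Linux, FileChannel.force(false) maps to fdatasync. Look at the max, not the mean.

```java
import java.io.IOException;
import java.nio.ByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardOpenOption;

// Times N small append+fdatasync cycles, mimicking a WAL write path.
public class SyncLatencyProbe {
    public static long[] probe(Path file, int iterations) throws IOException {
        long[] micros = new long[iterations];
        try (FileChannel ch = FileChannel.open(file,
                StandardOpenOption.CREATE, StandardOpenOption.WRITE,
                StandardOpenOption.APPEND)) {
            ByteBuffer buf = ByteBuffer.allocate(4096);
            for (int i = 0; i < iterations; i++) {
                buf.clear();
                ch.write(buf);                 // 4 KiB append, like a WAL record
                long t0 = System.nanoTime();
                ch.force(false);               // fdatasync on Linux
                micros[i] = (System.nanoTime() - t0) / 1_000;
            }
        }
        return micros;
    }

    public static void main(String[] args) throws IOException {
        Path f = Files.createTempFile("wal-probe", ".bin");
        long[] lat = probe(f, 200);
        java.util.Arrays.sort(lat);
        System.out.println("p50=" + lat[100] + "us max=" + lat[199] + "us");
        Files.deleteIfExists(f);
    }
}
```

If the probe reproduces multi-millisecond outliers, the stall is below RocksDB (device flush or journal), and the usual mitigations are a nonzero wal_bytes_per_sync to smooth sync bursts, a separate wal_dir, or storage with power-loss-protected cache; if it doesn't, the contention is likely inside RocksDB's write group.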
Agent activation failed when trying to start a Live Share session in Visual Studio 2026
13 March 2026 @ 3:03 am
I'm trying to start a live share session in VS 2026 to collaborate on a C++ project with my classmates, but I keep getting the same error message: "Failed to create a collaboration session. Agent activation failed."
It only shows up when I hit Start a Session. However, I haven't gotten a chance to go any further than the Invite Link window when choosing the Join a Session option.
After quite a few hours of trial and error, I've narrowed down the location of the error: specifically, the User Account setting under Tools > Options > Live Share > General > Authentication. It is currently set to Personalization Account, and an error window saying "Agent activation failed" appears when I try to change it.
All the forums I can find are at least 3 years old, and only one seems to mention a problem similar to mine, with no clear solution:
Implementing the Repository Pattern with complex Sequelize associations
13 March 2026 @ 2:41 am
I am working on a university project using Node.js, TypeScript, and Sequelize. I've implemented a Repository layer to decouple my database logic from the Service layer.
Initially, my repository was very clean:
static async create (payload: EstateInput): Promise<EstateOutput> {
    const estate = await Estate.create(payload);
    return estate;
}
Now, the requirements have grown. When creating an Estate, I need to handle several associations simultaneously:
Address: 1:1 association (nested creation).
Amenities: N:M association (linking existing IDs or creating new ones).
Photos: 1:M association (multiple uploads).
If I move this logic to the Service Layer, the service starts "knowing" too much about Sequelize internals (like transaction, include, o
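A common compromise is to keep all the ORM-specific work (nested creates, include options, association linking) inside the repository, and let the service own only the transaction boundary through a narrow interface. The sketch below shows the shape of that boundary; it is language-agnostic and shown here in Java for illustration (in the actual project the unit-of-work would wrap sequelize.transaction), and all names are hypothetical.

```java
import java.util.List;
import java.util.function.Function;

// The service owns the transaction *boundary* via a narrow UnitOfWork
// interface, while everything ORM-specific stays inside the repository.
interface UnitOfWork {
    <T> T inTransaction(Function<Tx, T> work);
}
interface Tx {}  // opaque transaction handle (would wrap the ORM transaction)

record EstateInput(String name, String address, List<String> amenityIds) {}
record EstateOutput(long id, String name) {}

interface EstateRepository {
    // One repository method handles Estate + Address + Amenities + Photos.
    EstateOutput createWithAssociations(Tx tx, EstateInput input);
}

class EstateService {
    private final UnitOfWork uow;
    private final EstateRepository repo;

    EstateService(UnitOfWork uow, EstateRepository repo) {
        this.uow = uow;
        this.repo = repo;
    }

    EstateOutput create(EstateInput input) {
        // The service knows *that* the operation is atomic,
        // not *how* the ORM achieves it.
        return uow.inTransaction(tx -> repo.createWithAssociations(tx, input));
    }
}
```

The trade-off is that the repository grows richer (a createWithAssociations method per aggregate) but the service never imports ORM types, which keeps it testable with fakes.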
Set daemonset environment variable value based on node label value
13 March 2026 @ 2:22 am
Suppose a node with the label grafana-map= and my container requires the environment variable key INSTANCE=
I expect something like this...
- name: INSTANCE
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName.grafana-map.value
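As far as I know, the downward API's fieldRef can only reference fields of the pod itself (for example metadata.name, metadata.labels['<key>'], spec.nodeName, status.hostIP), so a fieldPath reaching into node labels, as sketched above, will not validate. What does work is exposing the node name and resolving the node's labels at startup, e.g. from an init container or the entrypoint querying the API server (which needs RBAC permission to get nodes). A hedged config fragment for the supported part:

```yaml
# Supported: the node *name* via the downward API.
- name: NODE_NAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
# Node *labels* are not reachable via fieldRef; the entrypoint (or an
# init container) can fetch the Node object named $NODE_NAME from the
# API server and export INSTANCE from its grafana-map label.
```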
Create Iceberg table in HDFS
13 March 2026 @ 2:09 am
I have installed hadoop-3.4.2 and hive-4.1.0.
Hive is configured to store tables at /user/hive/warehouse; using Beeline I was able to create several tables. Now I'm trying to create an Iceberg table with the following statement:
CREATE TABLE x (i int) STORED BY ICEBERG;
The x directory is created on HDFS (without Iceberg files) but I'm getting the following error:
0: jdbc:hive2://localhost:10000> CREATE TABLE x (i int) STORED BY ICEBERG;
INFO : Compiling command(queryId=spark_20260313010955_97c79a9f-5d00-4879-bbd4-2ab19d4bc6fe): CREATE TABLE x (i int) STORED BY ICEBERG
INFO : Semantic Analysis Completed (retrial = false)
INFO : Created Hive schema: Schema(fieldSchemas:null, properties:null)
INFO : Completed compiling command(queryId=spark_20260313010955_97c79a9f-5d00-4879-bbd4-2ab19d4bc6fe); Time taken: 1.574 seconds
INFO : Concurrency mode is disabled, not creating a lock manager
INFO : Executing command(queryId=spark_2026
How to correctly extract the CLS token from a Keras Hub ViT backbone, and clarify preprocessor usage and pretraining dataset?
13 March 2026 @ 2:08 am
I’m working with a Vision Transformer (ViT) backbone from Keras Hub and building my own classification head. My code looks like this:
def get_vit_model(model_variant='vit_base',
                  input_shape=(256, 256, 3),
                  num_classes=3,
                  train_base_model=True):
    preset_path = "/home/ahmed/ct_brain_project/models"
    back_bone = keras_hub.models.Backbone.from_preset(preset_path)
    back_bone.trainable = train_base_model
    inputs = layers.Input(shape=input_shape, name='input_layer')
    features = back_bone(inputs, training=train_base_model)
    # Extract CLS token
    cls_token = features[:, 0, :]  # (batch, embed_dim)
    x = layers.Dense(128, use_bias=False)(cls_token)
    # rest of code of the classification head
    model = Model(inputs=inputs, outputs=outputs)
    return model
From the config I downloaded (vit_base_patch16_224_imagenet from
Fix narrowing warning [duplicate]
12 March 2026 @ 8:51 pm
Building my program on gcc (some old version), I got:
narrowing conversion of '(schemaName.std::__cxx11::basic_string<wchar_t>::length()+ 2)' from 'std::__cxx11::basic_string<wchar_t>::size_type' {aka 'long unsigned int'} to 'int'[-Wnarrowing]
How to fix it? Or should I just silence it?
This is the line in question:
int length1[2] = { schemaName.length() + 2, tableName.length() + 2 };
length1 array will be passed to PQprepare: https://www.postgresql.org/docs/current/libpq-exec.html, which is looking for an int[].
P.S.: Is there a way to turn this warning on in MSVC 2017?
Designing a database for a login and brute force attack detection system
12 March 2026 @ 4:23 pm
The goal of this project is to build a user authentication system (Web Login) capable of self-protection against brute force attacks by combining technical measures such as rate limiting, account locking, and traffic monitoring.
So, as the title says: being a student in college, I'm wondering what entities the database should have. I hope I can get your point of view and your experience with this problem. Also, it is not homework; I'm doing it for experience and skills. Thanks for your help!
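For this kind of system, a typical minimal entity set is User, LoginAttempt (one row per attempt, success or failure, with IP and timestamp), and either an AccountLock entity or lock fields on User; rate limiting and lockout then reduce to "count recent failed attempts per user (or per IP) inside a sliding window". Below is a hedged, self-contained Java sketch of those entities and the lockout check; all names and thresholds are illustrative, not a prescription.

```java
import java.time.Duration;
import java.time.Instant;
import java.util.List;

// Illustrative entity sketch for a brute-force-aware login system.
record User(long id, String username, String passwordHash) {}
record LoginAttempt(long userId, String ip, Instant at, boolean success) {}

class LockoutPolicy {
    private final int maxFailures;
    private final Duration window;

    LockoutPolicy(int maxFailures, Duration window) {
        this.maxFailures = maxFailures;
        this.window = window;
    }

    // An account is treated as locked when it has >= maxFailures failed
    // attempts inside the sliding window -- the in-memory equivalent of
    // SELECT COUNT(*) FROM login_attempt
    //  WHERE user_id = ? AND success = false AND at >= now - window.
    boolean isLocked(long userId, List<LoginAttempt> attempts, Instant now) {
        Instant cutoff = now.minus(window);
        long recentFailures = attempts.stream()
                .filter(a -> a.userId() == userId)
                .filter(a -> !a.success())
                .filter(a -> !a.at().isBefore(cutoff))
                .count();
        return recentFailures >= maxFailures;
    }
}
```

Storing every attempt (rather than just a counter) is the design choice that also enables the traffic-monitoring requirement: the same table answers "failed logins per IP in the last hour" for rate limiting and alerting.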