Flickr.com

Rating: 8.5/10 (2 votes cast)

Almost certainly the best online photo management and sharing application in the world

Hot

16 May 2025 @ 12:03 pm

KARMAZZ Bean

29 January 2025 @ 9:19 pm

Cyclades

26 January 2025 @ 8:29 am

Hot Water Beach Coromandel NZ

9 January 2025 @ 7:31 pm

Mechanics at Work (2/2)

5 July 2024 @ 4:38 am

Hungry Bee

28 October 2023 @ 12:38 pm

Sunset

18 October 2023 @ 12:53 am

Firefighters on a Quick Break (1/2)

29 August 2023 @ 8:19 am

Rollercoaster in Tokyo

7 August 2023 @ 2:20 pm

MySpace.com

Rating: 4.0/10 (1 vote cast)

Social mix of music, videos and friends

Facebook.com

Rating: 4.0/10 (2 votes cast)

The largest corporate social network on the web, with censorship at its heart.

x.com

Rating: 8.8/10 (27 votes cast)

The home of free speech. Social outbursts, news and comments, all within 140 characters. Musk has saved Twitter from bots and corporate censorship.

Metacafe.com

Rating: 6.0/10 (1 vote cast)

Online Video Entertainment – Free video clips for your enjoyment

StackOverflow.com

Rating: 8.5/10 (13 votes cast)

Random snippets of all sorts of code, mixed with a selection of help and advice.

When should I use threads or processes with asyncio instead of trying to make everything awaitable?

2 February 2026 @ 6:41 am

I am building a Python service using asyncio and I am trying to keep my code fully async. However, my real workload includes CPU-heavy tasks like parsing large files and data transformations, plus some third-party libraries that are blocking and do not provide async APIs. I know I can use loop.run_in_executor or asyncio.to_thread for blocking operations, and I could also use multiprocessing. But in practice, how do experienced developers decide which parts should stay fully async, which should use a thread pool, and when it is better to isolate work into a separate process or service? I am looking for practical decision rules and patterns that work in production, not just toy examples. What are the tradeoffs?
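Not part of the question itself, but as a rough sketch of the split most production code settles on (the function names below are placeholders): keep network-bound coroutines fully async, push merely-blocking library calls onto a worker thread with asyncio.to_thread, and hand genuinely CPU-bound work to a ProcessPoolExecutor so it cannot stall the event loop.

```python
import asyncio
import time
from concurrent.futures import ProcessPoolExecutor

def cpu_heavy(n: int) -> int:
    # Stand-in for CPU-bound work such as parsing or transforming a large file.
    return sum(i * i for i in range(n))

def blocking_library_call(query: str) -> str:
    # Stand-in for a third-party blocking call that has no async API.
    time.sleep(0.5)
    return f"result for {query}"

async def main() -> None:
    loop = asyncio.get_running_loop()

    # Blocking but cheap: a worker thread keeps the event loop responsive.
    result = await asyncio.to_thread(blocking_library_call, "users")

    # CPU-heavy: a separate process sidesteps the GIL and protects the loop.
    with ProcessPoolExecutor() as pool:
        total = await loop.run_in_executor(pool, cpu_heavy, 10_000_000)

    print(result, total)

if __name__ == "__main__":
    asyncio.run(main())
```

A common rule of thumb: stay async for waiting on the network, use a thread for anything that merely blocks, and move to a process (or a separate service) once the work is heavy enough to hog a core.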

How to send object references to function via std::span

2 February 2026 @ 6:40 am

I have this function: bool TmrwRenderImpl::copyTexture( std::optional<std::pair<std::unique_ptr<tmrw::opengl::TmrwPassRenderer>, std::unique_ptr<tmrw::opengl::TmrwPassRenderer>>>& pass_copier, const opengl::TmrwTexture& src_texture, const opengl::TmrwTexture& dst_texture, bool flip) { tmrw::opengl::TmrwPassRenderer* renderer = flip ? pass_copier->second.get() : pass_copier->first.get(); std::string uniform_texture = "src_texture"; //std::span<const std::string&> named_uniform_values = { uniform_texture }; return renderer->Render({ std::cref(src_texture) }, { std::cref(dst_texture) })); //, named_uniform_values)); } Here renderer->Render should accept std::span object references, I tried to declare it in many ways: 1. virtual bool Render(std::span<std::reference_wrapper<const TmrwTexture>> src_texture, std::span<

Implementing Callouts and Cards and Accordion [closed]

2 February 2026 @ 6:24 am

I’m planning to add custom block components to Tiptap—specifically callouts (info, success, warning, tip), and later cards and accordions. Since Markdown doesn’t natively support these kinds of blocks, I’m looking for guidance on the cleanest long-term approach. Overview of what I’m trying to do Use Tiptap to author content with callouts and other custom block components Persist the content as Markdown (for portability and static rendering) Render the same Markdown consistently on hosted/static sites Current idea (high level) At a conceptual level, I’m considering representing callouts in Markdown using a custom, HTML-like wrapper, while keeping the inner content as standard Markdown. For example: Markdown content inside *italic* and **bold** On the rendering side, this would be transformed into a styled HTML block (e.g. a with classes), while ensuring the inner Markdown remains untouched and renders normally.

Why isn't my ESP32 sending the full payload to my python server [closed]

2 February 2026 @ 5:34 am

I'm recently working on a project that has an ESP32 as the client and Python FastAPI as the server. After the connection, there's an authentication process, which requires them to communicate using WebSocket (messages are in JSON format). The problem is that the ESP32 said it sent out the message: {"type":"auth_response","id":"CHY","data":{"hash":"f6bb6993e5a44bcde621d6bdd4a3dcb7fcfa5ac0b9a8137d88bed81510849fed","time":19726}} but the server only received {. The server didn't receive the whole message sent from the ESP32. I was expecting the server to receive the whole message, but it didn't. I've used Postman to test the server, and it's working fine, so I suspect the problem lies in the ESP32. This is the code for ESP32. void webSocketEvent(WStype_t type, uint8_t *payload
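For reference, a minimal FastAPI receiving side is sketched below (the /ws path and the auth_ok reply are assumptions, not taken from the question). receive_text() hands back one complete text frame, with fragmented frames reassembled by the underlying WebSocket library, so if only { arrives, the truncation almost certainly happened in the frame the ESP32 sent rather than in the server's read.

```python
import json

from fastapi import FastAPI, WebSocket, WebSocketDisconnect

app = FastAPI()

@app.websocket("/ws")
async def auth_socket(websocket: WebSocket):
    await websocket.accept()
    try:
        while True:
            # One complete text frame per call; continuation frames are merged
            # before this returns, so a lone "{" means the client sent a 1-byte frame.
            raw = await websocket.receive_text()
            message = json.loads(raw)
            if message.get("type") == "auth_response":
                await websocket.send_text(json.dumps({"type": "auth_ok"}))
    except WebSocketDisconnect:
        pass
```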

Sending emails through local proxy

2 February 2026 @ 5:22 am

I need to bypass Iran's internet censorship to send emails using R's emayili package. The standard gmail() function fails because the regime blocks the default SMTP protocol. My goal: Use local HTTP/HTTPS/SOCKS proxies to route emayili's email traffic through a local proxy server (192.168.1.50:8080). Current setup that doesn't work: message_content <- "something" smtp <- gmail( username = sender_email, password = sender_email_pass # Gmail app password ) smtp(message_content) # Fails due to blocking What I've tried without success: # Direct proxy specification smtp <- gmail( username = sender_email, password = sender_email_pass, proxy = "https://192.168.1.50", proxyport = 8080 ) # Colon-separated format smtp
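The question is about R's emayili, but the sticking point is language-agnostic: SMTP is not HTTP, so an HTTP proxy setting aimed at web traffic generally won't carry the mail session unless it tunnels raw TCP; a SOCKS proxy or an SSH tunnel works at the socket level. As a hedged illustration of that idea in Python (the SOCKS type and port 1080 are assumptions, and the addresses are placeholders), PySocks can wrap the default socket so smtplib flows through the proxy:

```python
# pip install pysocks
import smtplib
import socket

import socks  # PySocks

# Route every new socket through a local SOCKS5 endpoint (address/port assumed).
socks.set_default_proxy(socks.SOCKS5, "192.168.1.50", 1080)
socket.socket = socks.socksocket

with smtplib.SMTP_SSL("smtp.gmail.com", 465) as server:
    server.login("sender@example.com", "app-password")  # Gmail app password
    server.sendmail("sender@example.com", ["to@example.com"],
                    "Subject: test\r\n\r\nsomething")
```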

How to systematise a checking of "a quite equals to b" (either with round method or | a - b | ~ 0) for any value of a?

2 February 2026 @ 4:53 am

My context I have implemented an equals(Object o) method for a speed object (Vitesse here) that when it faces a comparand being in m/s when it is itself in km/h (in example) puts the comparand in km/h too before comparing the values. And I'm testing it in this test where it is triggered by the last assertEquals(...) statement: @Test @DisplayName("Ex E1, Benson Ch. 3") void ex_Benson_Ch3_E1() { LOGGER.info("Asafa Powell (Jamaïque) court 100m en 9.74s. Quelle est sa vitesse moyenne ?\nSerait-il en infraction dans une zone scolaire où la vitesse est limitée à 30 km/h ?"); Vitesse vitesseMoyenne = new Vitesse(100, 9.74); // Par défaut, en mètres secondes assertEquals(10.3, vitesseMoyenne.vitesse(), 0.1, "La vitesse moyenne du coureur n'est pas la bonne, en m/s"); // Vérifier les conversions, étape
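The code in question is Java, but the pattern being asked about, normalise both values to one unit and then compare with a tolerance instead of exact equality, is compact enough to sketch in Python (the Speed class and method names below are illustrative, not the asker's API):

```python
import math

class Speed:
    """Toy value object: unit is either 'm/s' or 'km/h' (illustrative only)."""

    def __init__(self, value: float, unit: str = "m/s"):
        self.value = value
        self.unit = unit

    def in_mps(self) -> float:
        # Normalise to a single unit before comparing anything.
        return self.value / 3.6 if self.unit == "km/h" else self.value

    def approx_equals(self, other: "Speed", tol: float = 1e-9) -> bool:
        # |a - b| is checked against relative and absolute tolerances via math.isclose.
        return math.isclose(self.in_mps(), other.in_mps(), rel_tol=tol, abs_tol=tol)

runner = Speed(100 / 9.74)        # average speed over 100 m in 9.74 s, ~10.27 m/s
limit = Speed(30, unit="km/h")    # school-zone limit, ~8.33 m/s
print(runner.approx_equals(limit))                                # False: far apart
print(runner.approx_equals(Speed(36.96, unit="km/h"), tol=1e-3))  # True at 1e-3 tol
```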

Insert Data from Java to MySql

2 February 2026 @ 4:37 am

I don't know how to convert Java string numbers to insert them into MySQL table. The field 'peso' in MySQL is declared as float. public void Conferma() { // Creo un oggetto Prodotto Prodotti prod = new Prodotti(txtCod.getText(), txtDescr.getText(), txtDicAdr.getText(), txtUdm.getText(), Float.parseFloat(txtPeso.getText())); // inserisce il record nel database try (Connection conn = DriverManager.getConnection("jdbc:mysql://localhost:3306/DDTFornitori", "root", "mA19720306#")) { // Crea l'istruzione PreparedStatement insStmt = conn .prepareStatement("insert into tblProd (idprod, descr, dicituraADR, udm, peso) values(?,?,?,?,?)"); // Specifico il valore dei parametri insStmt.setString(1, prod.getCodice()); insStmt.setString(2, prod.getDescrizione()); insStmt.setString(3, pro

How do I bypass "erasableSyntaxOnly" error when using React Testing Library with Jest

2 February 2026 @ 4:06 am

I'm trying to use React Testing Library with Jest and I'm not sure how to address it. I'm trying to test React components with Jest and RTL and I keep getting caught with erasableSyntaxOnly errors. Here are my configurations: package.json: { "name": "06-jest-setup", "private": true, "version": "0.0.0", "type": "module", "scripts": { "dev": "vite", "build": "tsc -b && vite build", "lint": "eslint .", "preview": "vite preview", "test": "jest" }, "dependencies": { "react": "^19.2.0", "react-dom": "^19.2.0" }, "devDependencies": { "@babel/core": "^7.29.0", "@babel/preset-env": "^7.29

R: How to ensure that text does not intersect with a ggplot graph?

2 February 2026 @ 2:32 am

I have the following R code that makes a visualization using ggplot: library(ggplot2) set.seed(42) n <- 100 time <- 1:n baseline <- 10 + rnorm(30, sd = 2) intervention <- 10 + seq(0, 5, length.out = 40) + rnorm(40, sd = 2) post <- 15 + rnorm(30, sd = 2) values <- c(baseline, intervention, post) df <- data.frame(time = time, value = values) intervention_start <- 31 intervention_end <- 70 shaded_region <- data.frame( xmin = intervention_start, xmax = intervention_end, ymin = -Inf, ymax = Inf, Period = "Intervention Period" ) vlines <- data.frame( xintercept = c(intervention_start, intervention_end), Line = "Intervention Start/End" ) ggplot(df, aes(x = time, y = value)) + geom_rect(data = shaded_region, aes(xmin = xmin, xmax = xmax, ymin = ymin, ymax = ymax, fill = Period), inherit.aes = FALSE, alpha = 0.3) + geom_line(aes(color = "Observed Data"), linewidth = 0

Can arbitrary precision integer increment in Brainfuck be done in O(1) code size?

2 February 2026 @ 1:29 am

I was fiddling with the Brainfuck esolang over the past few days, and tried to implement an increment operation on an N-byte-wide integer in big-endian format. Note that I am imposing a structural constraint on the data. Consider BF tape on the left and right side of this int to be infinite and all initialized to 0. The exact integer value is not known and the data pointer starts at the LSB. The very important additional constraints are that the bytes other than the N integer bytes must be reset before end and the pointer must end at LSB. I have managed to establish 2 algorithms which generate O(N) instructions for this increment, using ripple carry, which are structurally very different. Unrolled: We hardcode the int length, by opening N - 1 loops which after increment, check if the current position is zero (overflow occurred) and if then so, move forward, which we then close at the exact same position it was opened and from there we m
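Not Brainfuck, and not an answer to the O(1) code-size question, but a short Python model of the ripple-carry behaviour described above may help pin down what the O(N) programs compute: big-endian bytes, wrap-around at 256, and a carry that stops at the first byte that does not overflow.

```python
def increment_big_endian(cells: list[int]) -> list[int]:
    """Ripple-carry +1 on an N-byte big-endian integer, bytes wrapping mod 256."""
    out = cells[:]
    i = len(out) - 1              # start at the least significant byte (rightmost)
    while i >= 0:
        out[i] = (out[i] + 1) % 256
        if out[i] != 0:           # no wrap-around, so the carry stops here
            break
        i -= 1                    # byte wrapped to 0: carry into the next byte up
    return out

print(increment_big_endian([0x00, 0xFF, 0xFF]))  # [1, 0, 0]
print(increment_big_endian([0xFF, 0xFF, 0xFF]))  # [0, 0, 0] (wraps within N bytes)
```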

HootSuite.com

Rating: 7.0/10 (2 votes cast)

Professional Twitter, Facebook, MySpace & LinkedIn client online

blogTV.com

Rating: 5.0/10 (1 vote cast)

Watch Live Internet TV and webcam video chat

Delivr.com

Rating: 6.0/10 (1 vote cast)

Effortless sharing with a tricked-out, mobile-friendly URL.

Scribd.com

Rating: 7.0/10 (1 vote cast)

Scribd is the largest social publishing company in the world