00000000000000000001b3ff8b13e57c3ec1eca3ba7d2937edbd9f219eb2d9f3 (AntPool) at height 941881
00000000000000000000c81cbf94a12ca498e72eb8530f7061c8746cf9687b2e (ViaBTC) at height 941882
While Foundry mined the following blocks to win the race:
00000000000000000000bd4930a5982911e7749eb491886206e71abdc1ec0cc6 (Foundry) at height 941881
00000000000000000000724eac69a18c6699c9f7aaab24bcf18beb2723ccadd2 (Foundry) at height 941882
000000000000000000009c9acd0bc3207fa181f79f8573bf27d8a81d1ef3aa8e (Foundry) at height 941883
I found it interesting that boerst’s stratum-work did not see any stratum jobs from Foundry mining on their winning blocks. See e.g. 941882 on stratum.work.
Maybe they were only mining on them in one region and stratum-work is connected to a different one, or they kept quiet about them (i.e. selfish mining), or it’s a bug in stratum-work.
Yup, I am just hoping these pool operators are running sensible clocks, if not using NTP.
I looked into this because I am interested in the uncle probability in p2poolv2. If we look at the fork monitor, it looks like blocks found within 15-30 seconds of each other result in a fork. This makes me wonder whether we can get the mean/stddev of the timestamp deltas on fork blocks, again assuming clock drift is not too high.
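Once the fork-pair timestamps are collected, the stats are a one-liner. A minimal sketch with made-up header timestamps (the real values would come from the fork monitor’s data):

```python
from statistics import mean, stdev

# Hypothetical (timestamp_a, timestamp_b) pairs for competing blocks at the
# same height, taken from the two block headers (Unix time). Placeholder
# values only; substitute real fork-monitor data.
fork_pairs = [
    (1774280000, 1774280012),
    (1774293400, 1774293421),
    (1774301100, 1774301105),
]

deltas = [abs(b - a) for a, b in fork_pairs]
print(f"mean delta: {mean(deltas):.1f} s")  # mean of 12, 21, 5
print(f"stddev:     {stdev(deltas):.1f} s")
```

Header timestamps are miner-supplied, so this measures claimed find times, not arrival times; clock drift folds straight into the stddev.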
It’ll be fascinating to see how this delta changes between different pairs of pools. Detect friends and enemies
Yeah, but they should also send a new job immediately on seeing a new block. Unless of course they use some strategy to decide how to react to blocks from different pools. Might be interesting to uncover.
Thanks for that link to the data. We don’t see the pool name there, right? Do you think we can include that in the data from now on? Or it might be in another data set somewhere that I can join. We want to get the pool names for the orphan blocks too.
Sorry, am new to all your work, so probably asking naive questions.
I’m not sure what the maths would be to figure out what to expect there. Naively I would have thought you’d square the frequency of one-block reorgs (since you need two blocks in a row to be found almost simultaneously to avoid everyone agreeing on a single tip), which would lead to expecting about a million blocks between 2-block reorgs if we’re seeing 1-block reorgs every thousand blocks. But perhaps you get different results if you model internal delays in distributing jobs/work between the pool and ASICs separately from network block propagation.
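The back-of-envelope version of that squaring argument, assuming the two near-simultaneous finds are independent events:

```python
# Naive independence model: a 2-block reorg needs two near-simultaneous
# finds in a row, so its rate is the 1-block stale rate squared.
p_one_block = 1 / 1000          # observed ~1-in-a-thousand stale rate
p_two_block = p_one_block ** 2  # naive independence assumption

blocks_between = 1 / p_two_block
print(f"expected blocks between 2-block reorgs: {blocks_between:,.0f}")
print(f"roughly {blocks_between / 144 / 365:.0f} years at 144 blocks/day")
```

That comes out to about a million blocks, or on the order of 19 years, which is why seeing one in practice suggests the independence assumption is doing a lot of work here.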
My timings for the headers in question were:
2026-03-23T15:49:53.982265Z Saw new cmpctblock header hash=00000000000000000001b3ff8b13e57c3ec1eca3ba7d2937edbd9f219eb2d9f3 height=941881 peer=14899 peeraddr=45.32.83.173:54302
2026-03-23T15:51:47.008245Z Saw new header hash=00000000000000000000c81cbf94a12ca498e72eb8530f7061c8746cf9687b2e height=941882 peer=38718 peeraddr=124.156.199.113:36184
2026-03-23T15:55:03.500660Z Saw new header hash=000000000000000000009c9acd0bc3207fa181f79f8573bf27d8a81d1ef3aa8e height=941883 peer=36252 peeraddr=65.109.99.249:35498
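The inter-arrival gaps can be read straight off those log timestamps; a quick sketch:

```python
from datetime import datetime

# Arrival times copied from the debug.log lines above (UTC).
arrivals = [
    (941881, "2026-03-23T15:49:53.982265Z"),
    (941882, "2026-03-23T15:51:47.008245Z"),
    (941883, "2026-03-23T15:55:03.500660Z"),
]

times = [(h, datetime.fromisoformat(t.replace("Z", "+00:00")))
         for h, t in arrivals]

for (h1, t1), (h2, t2) in zip(times, times[1:]):
    print(f"{h1} -> {h2}: {(t2 - t1).total_seconds():.1f} s")
```

That gives roughly 113 s between 941881 and 941882, and about 196 s before 941883 arrived, at which point the whole Foundry branch showed up at once.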
I didn’t see Foundry’s racing blocks until the reorg. (Which might suggest that 1-block stale blocks are much more common than 1-in-a-thousand, but they just aren’t visible)
FWIW, I’m not collecting Foundry’s templates on https://stratum.work/ - I could never get a working account from them.
From the pools that I do collect, I can see that none of the other pools were sending work building on top of Foundry’s 941881 and 941882 blocks. Everyone else was seemingly following the AntPool/ViaBTC chain.
I looked a bit at the header and block arrival times in the debug.logs of my monitoring nodes. Here, “header” means we either got an INV for this block, saw a cmpctblock or a header. Full block means we received the block or successfully reconstructed the block from a compact block.
Foundry released their 1st block (header) immediately after the others’ 2nd block was found. According to their own timestamps on their 1st and 2nd blocks, they found them a few seconds after the others’ 1st and 2nd blocks. They released their 2nd and 3rd at the same time, exactly when the 3rd block was found, winning the race. This is effectively a selfish-mining attack by definition, even if it happened by “accident” or network hiccup (to the extent “attack” doesn’t imply intent). But it all seems to point to it being intentional.
Intentional or not, other miners should ignore two of Foundry’s blocks. Without enforcing accurate timestamps, this is how selfish mining can be made a “nothing burger”: >50% of hashrate needs to collude to disincentivize future attacks. If Foundry stops displaying its name to prevent this, the only solution is to enforce more accurate timestamps. For example, the rule would be to ignore a header for 600 seconds if its timestamp is more than 15 seconds away from local time (rounded to the nearest 10 s interval) when it arrives. This works because the attacker has to assign a timestamp before knowing when they will need to release the block. People complain that this could cause real problems during actual network partitions, and it’s true a chain split would last longer in that case, but it would eventually resolve itself based on PoW and whichever partition has the most hashrate. Rounding to the nearest 10-second interval prevents an attack on the interval boundary. Clocks shouldn’t be off by more than 2 seconds, which is the other big complaint. Also, miners would have to update the timestamp every 2 seconds.
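To make the proposed rule concrete, here is a minimal sketch of the header check described above (my own illustration of the idea, not any deployed consensus logic; the constants are the ones proposed in the text):

```python
import time

ROUND_S = 10        # round incoming timestamps to the nearest 10 s interval
TOLERANCE_S = 15    # max allowed deviation from local clock
PENALTY_S = 600     # how long to ignore an offending header

def round_to_interval(ts: int, interval: int = ROUND_S) -> int:
    """Round a Unix timestamp to the nearest `interval` seconds."""
    return int(round(ts / interval)) * interval

def header_penalty(header_ts: int, now: float) -> int:
    """Seconds to ignore a header for: 600 s if its rounded timestamp
    deviates from local time by more than 15 s at arrival, else 0."""
    deviation = abs(round_to_interval(header_ts) - now)
    return PENALTY_S if deviation > TOLERANCE_S else 0

# A withheld block carries a stale timestamp by the time it is released:
now = time.time()
print(header_penalty(int(now), now))        # fresh header -> 0
print(header_penalty(int(now) - 120, now))  # 2 min stale -> 600
```

Note how the penalty is what forces the attacker’s hand: the timestamp is fixed when the block is mined, but the check runs when the header arrives, so withholding it past the tolerance window costs 600 seconds of invisibility.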
Concerning not being able to get Foundry’s templates, Grok says:
Yes, restricting public or easy access to block templates can serve Foundry (or any large pool) if they were engaging in illicit or strategically deviant behavior like selfish mining or block withholding—primarily by reducing detectability and transparency.