Many connections to bitproject.io nodes?

edit (@b10c): the title was originally “/Satoshi:29.1.0(dont-spam-me-bro)/ nodes?”.

Noticed these connections in both the outbound and inbound direction on various nodes: “/Satoshi:29.1.0(dont-spam-me-bro)/”

They appear to behave fairly normally, relaying 0-fee TRUC transactions in a package (which rules out stock Knots 29.1), but I don’t know anything else about them.

https://blockchair.com/bitcoin/nodes reports 1200 of these.

I couldn’t find any results with google-fu on github.

Let me know if there’s any information I can gather. Would like to know who’s propping these up.

2 Likes

oh. 3 for 3 looking these up

edit for clarity (@b10c): whois mentions bitprojects.io (which is known from Antoine Poinsot - Misbehaving nodes investigation). Seems like they are back, now with a custom user agent. Marking this as solved.

I noticed that two of the outbound slots of len (demo.peer.observer) are connected to them. instagibbs mentioned to me that he has 3 of his outbound slots connected to them on one of his nodes. This is worrying: filling a node’s outbound slots should be hard, precisely to avoid eclipse attacks.

ASMap would help with this.

1 Like

The last time I saw dont-spam-me-bro connecting to me was on 2024-10-24. They now appear as /Satoshi:29.2.0(not-your-file-server)/Knots:20251010/.

1 Like


back, in knots form

Doing IBD on a fresh node, I see 4 connections to /Satoshi:29.2.0(not-your-file-server)/Knots:20251010/ nodes. Let’s see if they remain after IBD…


$ bitcoin-cli -netinfo 3
Bitcoin Core client v30.0.0 - server 70016/Satoshi:30.0.0/ - services nwcl2

↔   type   net   serv  v  mping   ping send recv  txn  blk  hb addrp addrl  age id version
out  block  ipv4   nwl2  1     19   7465    1    1    *    0         .         24  8 70016/Satoshi:29.2.0(not-your-file-server)/Knots:20251010/
out   full  ipv4   nwl2  2     22     50    1    1         0      1027         24  3 70016/Satoshi:29.1.0/
out   full  ipv4   nwl2  2     27    511    1    0         0      1008         24  0 70016/Satoshi:29.2.0(not-your-file-server)/Knots:20251010/
out   full  ipv4   nwl2  2     59   2039    0    0         0      1006         24  6 70016/Satoshi:29.2.0(not-your-file-server)/Knots:20251010/
out   full  ipv4    nwl  1     78     90    0    0         0      1028         24  4 70016/Satoshi:25.0.0/
out   full  ipv4   nwl2  2     81   7772    0    1         0      1006         24  7 70016/Satoshi:29.2.0(not-your-file-server)/Knots:20251010/
out   full  ipv4   nwl2  2    105   1195    0    0         0      1027         24  2 70016/Satoshi:27.1.0/
out   full  ipv4   nwl2  2    109   2033    1    0         0      1038         24  1 70016/Satoshi:28.0.0/
out  block  ipv4   nwl2  1    128    948    0    0    *    0         .         23  9 70016/Satoshi:29.0.0/
out   full  ipv4    nwl  1    156    156    1    0         0      1023         24  5 70015/Satoshi:0.20.0/
ms     ms  sec  sec  min  min                  min

        ipv4    ipv6   total   block
in          0       0       0
out        10       0      10       2
total      10       0      10

edit: that node is not running with ASMap enabled.

I’m not sure ASMap would be a substantial improvement over the current /16 mechanism. There are 10 peers to choose from some 10k available /16’s. That gives roughly a (n / 10_000)^10 probability of connecting only to an attacker controlling n of those /16’s.
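For concreteness, here is a quick back-of-the-envelope version of that estimate (a sketch only; the ~10k /16 count and 10 outbound slots are the assumptions stated above):

```python
# Probability that ALL outbound slots land on the attacker, assuming
# peers are drawn uniformly from distinct /16 netgroups (the model
# assumed above).
def eclipse_probability(n_attacker: int, total_groups: int = 10_000,
                        slots: int = 10) -> float:
    return (n_attacker / total_groups) ** slots

for n in (12, 100, 1_000):
    print(f"{n:>5} attacker /16's -> {eclipse_probability(n):.1e}")
# 12 /16's -> ~6.2e-30: effectively zero under this model, which is
# what makes the observed connections so surprising.
```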

That said, this doesn’t seem to be what happens in practice. The Bitprojects guy controls only 12 /16’s. The probability that a node which has heard about all available /16’s would pick any of those 12 for 4 of its slots is essentially nil. Could it be that he somehow found a way to bias the DNS seeds towards his nodes, such that a node joining the network does not hear about many more /16’s than his?

1 Like

Should we expect this effect to dissipate as a node runs for longer?

I’ve set up some data collection on the number of outbound connections I have to the bitprojects IPs (also inbound, but I don’t seem to have gotten any inbounds from them).

Currently, three of my nodes have two of their outbounds to bitprojects:

Seeing three outbounds from alice to bitprojects and four from paul (which is a newly set up node).

Or maybe his addresses get rumoured a lot more? If the addresses a node returns to a GETADDR are randomly sampled among all known addresses, instead of first sampling by /16, then it seems that his nodes would be overrepresented.

We don’t sample based on /16 - we sample based on address (and abort/get another sample if another outbound is from the same range). For example, if we have an addrman with 99 addresses from /16 range A and 1 address from /16 range B, we’ll pick an addr from A for our first peer with 99% probability.

So controlling a large number of addresses from a given /16 range does increase chances of being picked even if other nodes will only make one connection to that range.
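A small simulation of this effect (a toy model with made-up address counts, not AddrMan’s actual bucket logic):

```python
import random

# Hypothetical population: 12 attacker netgroups with 1,000 addresses
# each, vs. 9,988 honest netgroups with 10 addresses each.
POOL = [f"attacker{i}" for i in range(12) for _ in range(1_000)]
POOL += [f"honest{i}" for i in range(9_988) for _ in range(10)]

def pick_outbounds(pool, slots=10):
    """Pick one address at a time, uniformly over all known addresses,
    retrying when the candidate's netgroup is already in use -- a
    simplified model of the behavior described above."""
    chosen = []
    while len(chosen) < slots:
        group = random.choice(pool)
        if group not in chosen:
            chosen.append(group)
    return chosen

trials = 500
hits = sum(sum(g.startswith("attacker") for g in pick_outbounds(POOL))
           for _ in range(trials))
print(f"average attacker slots: {hits / trials:.2f} of 10")
# With ~11% of all addresses but only 12 netgroups, the attacker still
# averages about one outbound slot per node; stuffing more addresses
# into each group pushes that average higher.
```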

1 Like

FWIW they don’t seem to be advertising with the custom subver anymore. Also they’re having major latency issues.

I discussed this with a few people out of band, but just to share here: this explains why we are seeing so many of our peers connected to the Bitprojects guy. My previous assumption on how this logic works was incorrect, as Martin explained above. So it seems that if he was controlling just a few more /16’s, as well as a few thousand more “node” IPs, then he would have a decent chance of eclipsing a node.

I find it surprising that AddrMan doesn’t work the way I expected it to, because sampling by netgroup first would make the probability of controlling all of a node’s outbound connections decrease exponentially in the number of connection slots. As far as I understand (and what I got from a quick discussion at the office), this is due to architectural limitations in AddrMan’s implementation. I’ve started looking into what it would take to implement what I suggest.
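For illustration, a sketch of that alternative (sample netgroups uniformly first, then an address within the chosen group; a toy model using the same hypothetical numbers as above, not a concrete patch):

```python
import random

GROUPS = ([f"attacker{i}" for i in range(12)]
          + [f"honest{i}" for i in range(9_988)])

def pick_by_netgroup(groups, slots=10):
    """Sample uniformly over *netgroups* first, so the number of
    addresses an attacker stuffs into each group no longer matters."""
    return random.sample(groups, slots)

trials = 5_000
hits = sum(sum(g.startswith("attacker") for g in pick_by_netgroup(GROUPS))
           for _ in range(trials))
print(f"average attacker slots: {hits / trials:.4f} of 10")
# ~0.012 on average, and the probability of capturing all 10 slots
# falls back to (12 / 10_000) ** 10 -- exponentially small in the
# number of connection slots.
```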

2 Likes

This seems quite problematic and defeats almost any gains asmap affords?

2 Likes

So it seems that if he was controlling just a few more /16’s, as well as a few thousand more “node” IPs

FWIW, getting control of a /16 and all its IPs costs millions of USD. But using IPv6 should make this much easier to scale.

This seems quite problematic and defeats almost any gains asmap affords?

Why would it defeat all the gains? You can easily get hundreds of IPs across 13 different /16’s on AWS or any other large hoster. Getting many ASNs and making hundreds of IPs available through them is a much higher bar. Also, there are fewer than 100k ASNs in use in global routing, and getting a large number of them without a legitimate use case would be a considerable bureaucratic hurdle.

The issue is that with the current peer selection algorithm you don’t need a large number of them, just as many as we make outbound connections by default, or slightly more. The Bitprojects guy already controls 12 /24’s spread across /16 boundaries and 3 or 4 AS’s. It does not seem completely unrealistic to me that he would be able to spin up a large number of (maybe fake) nodes in more than 10 netgroups (either /16’s or AS’s).
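To put a rough number on it: under the current per-address sampling, if an attacker controls a fraction f of all addresses in AddrMan, spread over at least as many netgroups as we make outbound connections, each successive pick lands on him with probability roughly f (hypothetical figures, ignoring retry second-order effects):

```python
# Approximate probability of capturing all 10 outbound slots when the
# attacker controls a fraction f of addrman across >= 10 netgroups.
for f in (0.10, 0.50, 0.90):
    print(f"f = {f:.0%}: P(full eclipse) ~ {f ** 10:.1e}")
# f = 10% -> 1.0e-10, f = 50% -> 9.8e-04, f = 90% -> 3.5e-01
```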

1 Like

you don’t need a large number of them

more than 10 netgroups

And I am saying that 10 is a large number for ASNs, at least relatively speaking. There are 2^32 /32’s in IPv6 (each containing 2^96 addresses), almost all of them available to use if you just put down a little bit of money.

2 Likes

An updated screenshot of my dashboard:

2 Likes

A lot fewer outbound connections to bitprojects at the moment, but they still seem to be active.