.. timeouts in some cases.
In case it isn't obvious, I'm envisaging a utxo whose script (in taproot) looks like:
10/10 with no timelock, OR
9/10 (subset 1) with timelock 1, OR 9/10 (subset 2) with timelock 2 ....
and so on, possibly even for every combination (about 1000 in total).
As we can see, this needs a preferential ordering among the users, since the 9/10 branches can't all have the same timelock in case they conflict (is that right?).
These are just some offhand thoughts for now. Has this been looked into?
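To flesh out that structure, a toy sketch in plain Python of enumerating staggered-timelock fallback branches. The policy strings, the function name `staggered_policies`, the locktime values, and the cutoff at more than n/2 signers are all just illustrative assumptions, not real miniscript or taproot leaves:

```python
from itertools import combinations

def staggered_policies(pubkeys, base_locktime, step):
    """Sketch: enumerate k-of-n fallback branches with staggered
    timelocks, for k = n-1 down past n/2. Each subset of signers
    gets its own leaf; a later (larger) timelock means a
    lower-priority fallback, giving the preferential ordering."""
    n = len(pubkeys)
    # the happy path: everyone signs, no timelock
    policies = [("all", f"multi({n},{','.join(pubkeys)})", None)]
    tl = base_locktime
    for k in range(n - 1, n // 2, -1):   # e.g. 9-of-10, 8-of-10, ...
        for subset in combinations(pubkeys, k):
            policies.append(
                (f"{k}-of-{n}",
                 f"and(multi({k},{','.join(subset)}),after({tl}))",
                 tl))
            tl += step   # stagger: no two leaves share a timelock
    return policies

keys = [f"K{i}" for i in range(10)]
leaves = staggered_policies(keys, base_locktime=500, step=10)
```

For 10 keys down to 6-of-10 this already gives 386 leaves (1 + 10 + 45 + 120 + 210), which shows how quickly "every combination" approaches the ~1000 figure.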
... sensibly think about large trees of conditions.
So suppose the nature of the contract was such that states can be reached where, say, 2/10 of the participants no longer care/have no incentive. It's logical that they would no longer keep themselves "live" (e.g. run the software).
At first glance it looks like having clauses "only if M out of N care" can only work with two limitations:
* The participants have a preferential ordering
* Staggered timeouts will mean very long .. (2/n)
The multisig "only if you care" paradigm:
Say 10 participants are engaged in off-chain contracting; as we know there is the "replace by agreement" paradigm: the contract is "represented" on chain by a 10/10 multisig. This requires a backout, at least for a funder, but in general a lot more than that.
Has anyone tried to formalize how backouts can work for multiple parties based on liveness of other parties?
I mention this in the context of Taproot (ie MAST), because now we can .. (1/n)
Related: BIP78 JM receiver compatibility with btcpayserver sender now confirmed after merge of https://github.com/JoinMarket-Org/joinmarket-clientserver/pull/708
(0.7.2 fix release prob will come very soon - this bug was on our side .. well, *me*, to be specific :) .. we also have QR code support for payjoins too now).
... I think it is getting more useful/practical. But tbh it just exists mainly as demonstration until one of the bigger user wallets does it (Electrum seems the obvious one).
Just to flesh out something above a bit more clearly:
Key 1 sends R_1,1, R_1,2, R_1,3
Key 2 sends R_2,1, R_2,2, R_2,3
But Key 1's actual nonce will be something like:
H(1, message, pubkey, *all the R_ij of both parties*)R_1,1
H(2, message, pubkey, *all the R_ij of both parties*)R_1,2
H(3, message, pubkey, *all the R_ij of both parties*)R_1,3
... just fixing everything to prevent "separation" as mentioned.
Notice this is still 2 rounds, but not 3.
Still reading the paper.
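A toy sketch of that nonce combination, using integers mod a prime in place of curve points and SHA256 as the hash-to-scalar; the names (`coeff`, `effective_nonce`) and encodings are mine for illustration, not the paper's actual scheme:

```python
import hashlib

P = 2**255 - 19   # toy modulus; the real scheme works over an EC group

def coeff(j, msg, pubkey, all_nonces):
    """b_j = H(j, message, pubkey, *all the R_ij of both parties*),
    reduced to a scalar. Toy hash-to-integer via SHA256."""
    data = repr((j, msg, pubkey, sorted(all_nonces))).encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % P

def effective_nonce(my_nonces, msg, pubkey, all_nonces):
    """A party's final nonce: sum over j of b_j * R_j. Every b_j
    commits to ALL parties' pre-nonces -- the "fixing everything"
    that prevents separating the nonces out."""
    return sum(coeff(j, msg, pubkey, all_nonces) * R
               for j, R in enumerate(my_nonces, start=1)) % P

r1 = [11, 22, 33]        # Key 1's pre-nonces (toy integers)
r2 = [44, 55, 66]        # Key 2's pre-nonces
everything = r1 + r2
R1 = effective_nonce(r1, b"msg", "P_agg", everything)
```

The point being: change any single pre-nonce of either party and every coefficient changes, so an attacker can't grind their own contribution independently.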
.. into signing a message they didn't intend to, *if* we tried to use a 2-round protocol, i.e. not bother to commit to the nonces upfront (3 rounds)), is because Wagner's attack kind of looks like:
a + b + c + d = e
where the attacker receives a, b, c, d as hash outputs he can't predict, but is trying to make them add up to 'e', some fixed target value.
As the paper points out, the attack only works because e is some fixed constant; if it varies with a,b,c,d it of course can't work.
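A toy of why the fixed target matters: over a deliberately tiny modulus, a simple meet-in-the-middle (standing in for Wagner's real k-sum algorithm, which scales to big groups) finds four hash outputs summing to a fixed e. All the sizes here are purely illustrative:

```python
import hashlib

N = 2**16   # tiny group, so brute force can stand in for Wagner

def h(tag, x):
    """Toy unpredictable hash output in Z_N."""
    return int.from_bytes(
        hashlib.sha256(bytes([tag, x])).digest(), "big") % N

# Four lists of hash outputs the attacker can grind but not predict:
A = [h(1, x) for x in range(256)]
B = [h(2, x) for x in range(256)]
C = [h(3, x) for x in range(256)]
D = [h(4, x) for x in range(256)]

e = 12345   # the FIXED target: this is what makes the search possible

# meet-in-the-middle: table all (a+b), then look up e-(c+d)
ab = {}
for a in A:
    for b in B:
        ab[(a + b) % N] = (a, b)
solution = None
for c in C:
    for d in D:
        need = (e - c - d) % N
        if need in ab:
            a, b = ab[need]
            solution = (a, b, c, d)
            break
    if solution:
        break
```

If instead e itself depended on a, b, c, d (as the varying-target fix ensures), the precomputed table is useless: every candidate quadruple moves the target.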
... solid security proof, is actually a very strong echo of the original solution for the pubkeys (i.e. it's also a kind of "delinearization").
Basically, each participant, rather counterintuitively, sends a *set* of nonces as their contribution rather than 1. Let's say M nonces each, then their *final* nonce will be such a "delinearized" combination of their starting nonces.
The reason this defeats Wagner's attack (which would otherwise allow the attacker to dupe the other signer(s) ...
.. delinearization. More specifically it meant hashing *the entire keyset* into each key to prevent this "separating out".
The problem was that for a rigorous security proof, the first stage of the multisignature protocol had to include a fixing of all participants' nonces (R_i) ahead of time. That made the protocol three rounds of communication, which is pretty icky (albeit fine in some scenarios).
The approach this new paper takes to remove that first round, while still retaining a (3/n)
It's very interesting. A crude description:
The problem with the naive musig design was that trying to fix each participant's public key by hashing it, as H(Pi)Pi, didn't prevent an attacker from separating it out in such a way that they could perform Wagner's attack on it (you can see a description of this near the end of my blog post: https://joinmarket.me/blog/blog/avoiding-wagnerian-tragedies/ ), an attack that is extremely practical for large numbers of keys.
Now, that was fixed up with what is vaguely referred to as (2/n)
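A toy integer version of that delinearization (curve points replaced by integers mod a prime; the function names are mine, not MuSig's):

```python
import hashlib

P = 2**255 - 19   # toy modulus; real MuSig works over an EC group

def H(*parts):
    """Toy hash-to-scalar."""
    return int.from_bytes(
        hashlib.sha256(repr(parts).encode()).digest(), "big") % P

def naive_agg(pubkeys):
    # naive "H(Pi)*Pi": each coefficient depends only on that one
    # key, so a term can be separated out and attacked
    return sum(H(pk) * pk for pk in pubkeys) % P

def delinearized_agg(pubkeys):
    # the fix: hash the ENTIRE keyset L into every coefficient,
    # so no key's contribution can be separated out
    L = tuple(sorted(pubkeys))
    return sum(H(L, pk) * pk for pk in pubkeys) % P

honest = [1234567, 7654321]
agg = delinearized_agg(honest)
# an attacker adding a chosen extra key changes every honest
# coefficient, not just their own term:
agg2 = delinearized_agg(honest + [999])
```

Note how in `naive_agg` an honest key's coefficient H(pk) is the same no matter what other keys join, which is exactly the "separating out" property the attack exploits.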
If you're at all interested in the fiasco of Facebook's despicable measures to dominate and control VR, this article is worth a read, and I particularly recommend reading the comments too:
Bitcoin Optech newsletter #119 is here:
- relays LND security warning
- summarizes LN upfront payments discussion
- describes taproot bech32 addresses thread
- links to proposal for alternate way to secure LN payments
- details the signet PR Review Club
The crowd only has wisdom when it's not really a crowd.
The central limit theorem requires that the variables be *independent* (and, in the basic form, identically distributed, though that part can be relaxed).
What makes crowds of humans strong is their direct feedback and interconnectedness. However, this also takes away the otherwise magical discriminatory power you get from the law of large numbers; such crowds are actually stupid, not wise.
Modern society is now governed by crowds, on social media.
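A quick simulation of the point (all parameters arbitrary): independent guessers' errors average out roughly as sigma/sqrt(N), while a shared "feedback" bias doesn't average out at all:

```python
import random
import statistics

random.seed(7)
TRUTH = 100.0
N, TRIALS = 1000, 200

def crowd_error(shared_weight):
    """Average error of the crowd's mean guess. shared_weight=0:
    fully independent guessers; near 1: a common bias (the social
    feedback) dominates every guess."""
    errs = []
    for _ in range(TRIALS):
        common = random.gauss(0, 10)   # the shared "groupthink" term
        guesses = [TRUTH + shared_weight * common
                   + (1 - shared_weight) * random.gauss(0, 10)
                   for _ in range(N)]
        errs.append(abs(statistics.fmean(guesses) - TRUTH))
    return statistics.fmean(errs)

independent = crowd_error(0.0)   # errors cancel: ~ 10/sqrt(1000)
correlated = crowd_error(0.9)    # the shared bias never cancels
```

With these numbers the independent crowd's mean guess lands within a fraction of a unit of the truth, while the correlated crowd's error stays on the order of the shared bias itself, no matter how large N gets.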