List:       bitcoin-dev
Subject:    Re: [bitcoin-dev] Towards a means of measuring user support for Soft Forks
From:       ZmnSCPxj via bitcoin-dev <bitcoin-dev@lists.linuxfoundation.org>
Date:       2022-04-30 6:14:45
Message-ID: kfX31euUWC2GP3A1aUwRECN4R9G-hTAmB2sOrvmwnOT3ChmO4G1SOje88cTu53JZqHRw-3pjrQp3s8M5r8unxDlcClV62QZiW48t1NRa1J0=@protonmail.com

Good morning Billy,

> @Zman
> > if two people are perfectly rational and start from the same
> > information, they *will* agree
> I take issue with this. I view the word "rational" to mean basically
> logical. Someone is rational if they advocate for things that are best
> for them. Two humans are not the same people. They have different
> circumstances and as a result different goals. Two actors with
> different goals will inevitably have things they rationally and
> logically disagree about. There is no universal rationality. Even an
> AI from outside space and time is incredibly likely to experience at
> least some value drift from its peers.

Note that "the goal of this thing" is part of the information that both of us
"start from" here.

Even if you and I have different goals, if we both think about "given this
goal, and these facts, is X the best solution available?" we will both agree,
even though our goals may differ from each other's, and from "this goal" in
the question. What matters is simply that the laws of logic are universal: if
you include the goal itself as part of the question, you will reach the same
conclusion --- but you may refuse to act on it (and even oppose it) because
the goal is not your own.

E.g. "What is the best way to kill a person without getting caught?" will probably \
have us both come to the same broad conclusion, but I doubt either of us has a goal \
or sub-goal to kill a person. That is: if you are perfectly rational, you can \
certainly imagine a "what if" where your goal is different from your current goal and \
figure out what you would do ***if*** that were your goal instead.

Is that better now?

> > 3. Can we actually have the goals of all humans discussing this
> > topic all laid out, *accurately*?
> I think this would be a very useful exercise to do on a regular basis.
> This conversation is a good example, but conversations like this are
> rare. I tried to discuss some goals we might want bitcoin to have in a
> paper I wrote about throughput bottlenecks. Coming to a consensus
> around goals, or at very least identifying various competing groupings
> of goals would be quite useful to streamline conversations and to more
> effectively share ideas.


Using a futures market has the attractive property that, since money is
often an instrumental sub-goal toward many of your REAL goals, you can get
reasonably good information on people's goals without them having to actually
reveal those goals. Also, irrationality on the market tends to be punished
over time, and a human who achieves better-than-human rationality can gain
quite a lot of funds on the market, thus automatically re-weighting their
thoughts higher.
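
As a toy illustration of that re-weighting effect (this is only a sketch of
mine, not part of any concrete proposal --- the trader names, the fixed market
price, and the betting rule below are all made up for illustration), here is a
small Python simulation in which traders whose probability estimates are less
noisy tend to end up holding more of the funds after many rounds of betting:

    import random

    # Each trader's probability estimate strays from the true probability
    # by up to "noise"; lower noise means better calibration.
    traders = {
        "well_calibrated": {"wealth": 100.0, "noise": 0.02},
        "mildly_biased":   {"wealth": 100.0, "noise": 0.15},
        "very_irrational": {"wealth": 100.0, "noise": 0.40},
    }

    def clamp(p, lo=0.01, hi=0.99):
        return max(lo, min(hi, p))

    PRICE = 0.5  # simplistic fixed price; a real market forms this from bids
    for _ in range(2000):
        true_p = random.random()            # true probability of this event
        outcome = random.random() < true_p  # whether the event happens
        for t in traders.values():
            estimate = clamp(true_p + random.uniform(-t["noise"], t["noise"]))
            stake = 0.01 * t["wealth"]      # risk a small fraction of wealth
            if estimate > PRICE:
                # Thinks "yes" is underpriced: buy stake/PRICE "yes" shares.
                gain = stake * (1.0 / PRICE - 1.0)
                t["wealth"] += gain if outcome else -stake
            else:
                # Thinks "no" is underpriced: buy "no" shares at (1 - PRICE).
                gain = stake * (1.0 / (1.0 - PRICE) - 1.0)
                t["wealth"] += gain if not outcome else -stake

    for name, t in traders.items():
        print(name, round(t["wealth"], 2))

Run it a few times and the most irrational trader reliably ends up with much
less wealth than the well-calibrated one, so whatever beliefs the latter
pushes on the market subsequently carry more weight --- which is the sense in
which the market automatically "re-weights" the more rational participant.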

However, persistent irrationalities embedded in the design of the human mind
will still be difficult to break (it is like a program attempting to escape a
virtual machine). And an uninformed market is still going to behave pretty
much randomly.

Regards,
ZmnSCPxj
_______________________________________________
bitcoin-dev mailing list
bitcoin-dev@lists.linuxfoundation.org
https://lists.linuxfoundation.org/mailman/listinfo/bitcoin-dev

