Holochain Forum

Debate on Trust with Hibryda from BitLattice

Re: strawman, I am not refuting your and @Brooks's argument, because I do not agree that there is a dispute. You are both right; you simply argue for two separate types of social organization. I agree with you both, so calling that a strawman is not correct.

The conversation that came from that is simply about your claim that BitLattice does not rely on social consensus (trust), and that I have no access to proof of that claim, while you do. Instead of meeting around that (or debating it, or whatever), you actually used a few strawmen. I have replied to those, but they are not the debate.

To keep those strawmen from dancing around the argument, I replied to the rest here, https://textuploader.com/1r68p, as it is not that relevant to the debate.

We are debating two separate, mutually exclusive approaches to trust. They neither intersect nor coexist in the same context. Use of one cancels the other.

We often say that we trust someone half and half. I even used a similar valuation in one of the posts above, using a percentage measure. But when one breaks down (which I did) the perception of a complex phenomenon like internalized trust, one will find that it is composed of atomic operations with a binary choice only: trust/proof. They may aggregate into said half and half, but at the base level trust cannot be fractioned.

Therefore, you cannot agree with both stances in general. You may agree that the stances that @Brooks and I represent exist. You may even agree that they are applicable in their contexts. That way, however, you bring no useful input to the debate. We know that the phenomena we discuss objectively exist (as far as we can tell) and that they are applicable. You merely confirm the fact: an act of zero value. The only benefit of your replies is subjective to me, as I have fun writing responses.

For reasons unknown to me, while you have nothing of value to add, you try to redefine our debate (“No I disagree with the trust/proof dichotomy”). I don’t know whether you do it intentionally or not, but building a straw man is a fact. Remember “it’s just a meme”? Okay, as it’s fun to watch, I’ll let you have your pet dummies.

The conversation that came from that, is just about that you claim that BitLattice does not rely on social consensus (…)

Do you ever read what you wrote before (not to mention my replies)? Really read, not just scan the letters. When you claimed that the consensus mechanism first used in Bitcoin is a social consensus, I countered that, since it can perform without any society involved, it is a machine consensus. Just that. I further clarified that BitLattice extends that mechanism, further abstracting any human input out of the inner workings of the network, because you compulsively had to refer to real-world applications as if that mattered.
You stubbornly bring up BitLattice even though you know I consider that rather bad behavior, given the place we are in and the subject of this debate. That further convinces me that you read our debate through glasses I have no access to.

you actually used a few strawmen

You’re a master of floating on the surface and avoiding giving any actual arguments. You often mention something, but seldom justify it with any prerequisites. However, I’d love to hear more about the strawmen I supposedly used, with arguments substantiating that they were in fact strawmen.

As to the content you moved out into a separate document: yes, it’s irrelevant to the debate, so I ignore it.

Nevertheless, I enjoy debating. However, I’d love to discuss trust/proof more than just fence over irrelevant matters.

I am generally very interested in this discussion. But it has run in different directions, making it hard for me to remember its starting point. So I went back and re-read the original post. @Hibryda, you stated on Twitter:

I do like your mathematical approach here. And I do see your point: If it is impossible to cheat, we are forced into a regime of honesty.

To use an everyday example (a bad one, though, since it’s mainly emotional): in the UK, people adhere to the rule of queuing up. If someone tries to jump the queue, they will be made aware of their transgression immediately. This “forced honesty” gives me a great deal of relaxation, since I can simply rely on being treated fairly.

When @Brooks replied with his tweet…

… I believe he was taking a more emotional standpoint, wanting to point out that we should strive to bring more humanity to the internet of the future.

I do see where those two definitions clash, and I have a feeling these are two different topics. I will push Brooks’ angle to the side for now and leave a few comments on “trust” that might be close to @Hibryda’s definition.

Absolutely. That’s why we should try to take as much guesswork out of the equation as possible and make the trust algorithm as precise as we can.

We should try to gather as much verifiable data as possible and treat subjective data with care.

When ordering something for example, several datapoints can be tracked rather objectively. Time of payment and time of delivery are two easy ones.

With others, we could try to “embrace subjectivity” by asking non-judgemental questions. For an Uber car ride, don’t ask “Was the temperature too cold?” but offer a slider instead: “Cold - Hot”.
Then we might even introduce deliberate “confirmation bias”… If I found one particular car to be cold and another rider found the same car to be cold, we seem to have a similar perception. If more and more shared data-points align, my app could favour this user’s opinion…
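A rough sketch of how that favouring could work, purely hypothetical: the function name, the tolerance value, and the agreement measure below are my own choices for illustration, not anything from Holochain or an existing rating app.

```python
# Hypothetical sketch: weight another rider's opinion by how often our past
# slider ratings of the same cars agreed (a simple agreement ratio, my choice).

def agreement(mine: dict[str, float], theirs: dict[str, float],
              tolerance: float = 0.2) -> float:
    """Fraction of shared data points where two riders' ratings roughly align."""
    shared = set(mine) & set(theirs)
    if not shared:
        return 0.0
    matches = sum(abs(mine[c] - theirs[c]) <= tolerance for c in shared)
    return matches / len(shared)

# Slider values in [0, 1]: 0 = cold, 1 = hot.
my_ratings    = {"car_a": 0.1, "car_b": 0.8, "car_c": 0.5}
their_ratings = {"car_a": 0.2, "car_b": 0.9, "car_d": 0.3}

weight = agreement(my_ratings, their_ratings)
print(weight)  # 1.0 — both shared cars (a and b) agree within tolerance
```

An app could then scale this other user’s future opinions by `weight` when aggregating scores, which is the “deliberate confirmation bias” idea in miniature.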

Now I am getting sidetracked, though. It’s going beyond trust now…

At the very least:

To sum up my ramblings: I am with you, @Hibryda. The less trust we need (trust being educated guesswork), the better.

And I am with you @Brooks: It is so important to get more humanity into the online world and try to foster real, human connections.
I am not sure though, if “trust” is the best word for it.


I side with both your and @Brooks's points of view; based on that, I claim that BitLattice has a social coordination mechanism. I do not have access to proof of that, while you do. Instead of presenting verifiable proof, you divert into strawmen.

I respect your integrity as an inventor and as a thought leader. I also respect my own integrity, and therefore I’m clear that the reason this conversation is not a conversation is that I do not have access to proof, while you do.

To keep the focus on the debate, I put my replies to the diversions here: https://textuploader.com/1rjrt

Whatever.
As to BitLattice: you don’t need to have proof, as:

  • this debate isn’t about Bitlattice;
  • this debate isn’t about whether any blockchains and derivatives incorporate social or machine consensus (and what actually that means);
  • apart from the point above, I already inferred in a logically consistent manner that said consensus is a machine one;

As to the purported strawmen I used: predictably, you cannot provide any consistent evidence.

I should also remind you that we have corresponded since April 2018, and back then I tried to explain to you how consensus in BL is achieved. Apparently to no avail. I doubt that repeating the same process would yield a different outcome. Then as now, you are more interested in the implementation of your protocol than in how consensus is reached in a particular network.

I respect your integrity as well, so I suggest that we come back to the subject of the debate without constantly derailing it. Consequently, I will ignore your unrelated input in the linked document.

It’s encouraging that at least one person finds this thread somewhat inspiring.
While our debate got a little off track, I’m really keen on getting back to the subject.

I’ll start by commenting on what you wrote at the end of your post.
The word “trust” can suit us very well, as long as we are able to define it as precisely as possible and all agree on the definition.
So, let me start.
“Trust” is a social interaction tool, specific to more developed animals, that exhibits the following properties:

  • its aim is to provide a fair assessment of opportunities related to another member of society, or another entity, with minimal waste of resources (including time);
  • it depends on hints. Hints can have a form of:
    • singular or repetitive behavior of the other party confirming expectations;
    • perceived events matching expectations;
    • feelings/intuition.
  • it’s subjective and irrational. “Irrational” here means that there is no requirement for the hints to be provably true.

Of course, I’m open to having the definition extended and refined.

Solely by analyzing the above, one can easily infer that simple, algorithmic, deterministic computer systems cannot provide a good interface for that trust mechanism. They can be used to store and retrieve some trust-related indices, but their use in no way makes the mechanism itself more reliable.
On the other hand, neural networks could cope with it. Again, that changes little, as together with the ability to deal with somewhat fuzzy logic comes an equally fuzzy result. Neural networks are also prone to disturbances and harder to verify due to their stochastic behavior.

@Brooks and I possibly started from different standpoints, but standpoints related to the same phenomenon. I still favor the view that we represent the two sides of the trust =/= proof equation.

I also understand the need for more humane technology; I’m not a robot (or at least I pass the Turing test). However, what I often observe is that people favor an all-or-nothing approach to the subject, ignoring obvious facts and limitations. While the Bambi effect with respect to other humans can be considered heartwarming, it often leads to complete blindness when it comes to real issues. That has an important negative side effect: people don’t differentiate between what they can and cannot do within the limits set by a system of choice.

As I said above, it’s impossible to cast/emulate trust-related issues in the simple deterministic systems that blockchains actually are. Therefore, any such representation will always serve only as an approximation, not real data. Let’s illustrate some issues. If we are to store a reputation score (for illustrative reasons I’ll refer to the reputation systems previously discussed) in a blockchain database, what type of variable should we use to depict it? Should it be a 64-bit integer or a float, and if one of the two, why? One factor only, or many? Should we limit its range, and if so, to what span? When should we expect float-related accumulative errors to start playing a role? Do we need any normalization performed from time to time, and if so, how strong and why? And so on, and so on. Those questions must be both asked and answered. Answered with care given to facts, research, and good practices.
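As a minimal illustration of the float question (my own sketch; the numbers are arbitrary): repeated floating-point updates to a score drift away from the exact value, which is one reason the choice of representation, and periodic normalization, matter.

```python
# Illustrative only: accumulate a 0.1 "rating" ten thousand times.
score = 0.0
for _ in range(10_000):
    score += 0.1               # each update adds an exact-looking 0.1

# 0.1 has no exact binary representation, so rounding error accumulates.
print(score == 1000.0)         # False
print(abs(score - 1000.0))     # tiny but nonzero drift

# A common remedy: store the score as a scaled integer instead.
score_tenths = 10_000          # ratings counted in tenths
print(score_tenths / 10 == 1000.0)  # True
```

Harmless at this scale, but in a system where scores are compared against thresholds over millions of updates, such drift is exactly the kind of question that must be answered up front.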

There’s more. Storage is just the easiest part of the equation. Beyond it, we have the algos. How should they understand a user’s input? How should they quantify it? Should they try to detect malicious actors by analyzing the convergence of scoring, or just leave the system open to manipulation?

The deeper, the worse. The community. As we know from research analyzing the wisdom-of-the-crowd phenomenon, it is very sensitive to the composition of the crowd in question, to internal and external influence, and to the complexity of tasks. Therefore, we should answer questions about how to take care of proper composition, limit influence, and properly define tasks.

To sum up my ramblings: I consider dependence on computer solutions that record/process trust harmful to our ability to properly deal with reality and social issues. However, I cannot ignore that devoted believers exist, and they are a part of the society I live in. Thus, if they must depend on tools as reliable as lotteries, I feel compelled to discuss what we could do to make them as good as possible. That’s why I’m genuinely interested in the solution proposed by @sidsthalekar and how it will evolve.

However, the necessary condition for really planning a concise and effective approach to the subject is to enter my domain of cold reasoning, leaving behind emotions and beliefs, instead of trying to logically defend something that cannot be defended that way (yep, Aquinatus). Because the interface must be implemented on the machine side. One doesn’t have to embrace it, or even like it, but it’s a hard requirement.

A monumental book that I’ve found exceedingly useful on this subject is ‘The Speed of Trust’ by Stephen M.R. Covey. Anyone following this thread with interest but unfamiliar with the book would do themselves (and eventually all of us! :wink: ) a favour by reading and practising its advice.

@prosidual I spent some minutes reading the content of the website and watching clips with the author. My first impression doesn’t come anywhere near your enthusiasm.

To sum up what I got so far:

  • the author repeats “trust” constantly, but never goes any deeper into its meaning and mechanism. Maybe he does in his book; however, as he seems to deeply believe that trust answers all questions, it’s rather certain that he won’t destroy his narrative by trying to uncover the less favorable sides of the phenomenon;
  • both the supporting companies and the style of presentation suggest that it’s about conditioning the workforce, a collection of HRM tricks. Companies don’t need questioning workers, they need obedient ones; trust is a great complementary tool for eliminating dissenters;
  • putting innovation and trust together, which the author does repeatedly, is a kind of abuse of logic. Innovation takes root in a lack of trust, in asking questions, in trying to change the status quo.

All the above makes me think that buying the book would be the worst-spent $16 ever.

Do you have any arguments for why that book isn’t the mob fodder it seems to be?

What I mean is: you base your stance, as against mine, on the claim that BitLattice does not have a social coordination mechanism. I agree with both your viewpoint and @Brooks’s viewpoint; you disagree with that, and do so based on information that only you have access to. I respect your integrity.

The reason Bitcoin and Ethereum have fragmented (socially) is that their social consensus mechanism has not been good enough. To me, it seems like the “proof-of-structure” in BL is consensus across shards (not needed in a blockchain, because there is only one dimension).

The Nakamoto consensus uses popular vote. In our correspondence over the past two years, I have not seen how BL selects which state to follow, or what the popular-vote mechanism in BL is (in the same way that there is one in Bitcoin and Ethereum).

I said once that it seems to me like you have invented everything but the social consensus mechanism. The “hash-linking” (lattice structure + proof-of-structure) seems like a leap forward from Bitcoin and Ethereum. To me, it is beautiful, even though I have only had it described. The lattice cryptography also seems like a leap. But those are leaps in technology that predates Bitcoin. Craig Wright invented the Nakamoto consensus; his focus was social; his invention is the leader of a tribe.

Anything remotely related to the subject of this debate maybe?

Having second thoughts about the proposal by @prosidual, I decided to suggest some books, possibly more relevant.
It’s going to be a kind of situational joke, but only partly. I’ll explain why further on.

I’d suggest two books; both treat extensively of trust as a tool, from the perspective of those who use it to control an organization. They refer to many use cases confirmed by further experiments. They are used in certain academies as textbooks. They are also free, as the IP rights to them are no longer in force.
So, the books are:

As you can see, I use a reductio ad absurdum here, to emphasize certain issues.
While the book suggested by @prosidual claims the support of 100 CEOs from the biggest companies, the books I mentioned are supported by millions of bones scattered around the globe that once belonged to guys whose superiors never read those books, or never understood them.
More importantly, though, their authors no longer had to pretend or hide their true thoughts, as by the time they wrote their works they had already made sure that most of their enemies took the fast lane into eternity. Therefore, they offer an insight into a maybe cynical, but unvarnished, description of the tools in whose use they excelled.

Every hierarchical, totalitarian organization, like an army, a church, a bigger company, or a political movement, needs to use two tools: violence and trust. The first is obvious, although it may differ in form, from physical violence to economic and/or psychological pressure.
The second is more interesting to us. And from here, the reductio ad absurdum is no longer so absurd. Trust is a vital tool, and its most prominent use can be observed in armies, where it is induced with force or manipulation.

There are many reasons why trust is irreplaceable, but three are the most important:

  • there is no way to ensure that all members have the same access to information. It could even be detrimental to the organization, as it would put the hierarchy in question. Therefore, in the absence of knowledge, trust is the only option;
  • it’s a built-in mechanism of our brains and therefore easy to use. Because even in a diverse organization most people are “equipped” with that feature to some extent;
  • there is a disparity in numbers between “trusters” and “testers”, in favor of the former (I mentioned it before). Thus, trust is useful in homogenizing an organization, as dissent gets calmed down.

A view into how armies use that tool gives an insight into how it works in less extreme cases.
For instance, armies execute search-and-rescue operations seemingly against logic, at high risk and cost, even when they know they are going to collect a mangled body. Because soldiers must trust that someone will come, no matter what. The same applies to churches (for instance, shielding pedophiles) and companies (shielding fraudsters). When banks in a year of crisis took another path and there was the famous year of window jumpers, it backfired later with a wave of whistleblowers (the recent money-laundering scandal).

My point here is simple. The Bambi effect projected onto humans is naive (as naive as when projected onto other animals). When planning a system that is meant to reflect trust, it should always be kept in mind that this tool is a tasty bite for many interest groups. Therefore, to be sure that it is hard to abuse, designers must take into consideration all vulnerabilities, which span from technical matters to sociological ones. Or, if one specifically wants to design a tool of control, studying the extreme cases may be useful as well.

I totally agree with designing systems that are resilient against abuse. In my limited understanding and belief, the first step is to not think in hierarchies. That is just what my gut and 39 years of life experience told me :slight_smile:


Your gut tells you right. The ideal society would be composed of similarly conscious, independent individuals being able to infer their decisions through reason. Without any hierarchy.

Sadly, biology acts against that. Animal herds are a way in which certain species maximize survival while minimizing expenses. Herds, to work efficiently, must have a hierarchy, as the flow of information is always a bottleneck and the decision-making process cannot be entrusted to an assembly of all members, simply because their cognitive abilities differ along a bell curve. Democracy, while an attractive idea, works poorly in terms of efficiency. Apart from the fact that it is seldom real democracy in real life.

So, your gut is right. But that doesn’t solve a thing, because our world is far from ideal. Non-hierarchical structures can only grow to a limited scale. One of the largest examples was GitHub, before being taken over. But to look for a working solution, I’d suggest looking at possibilities for forcing participants of a system, via specific stimuli, to choose the hierarchy-less approach. Because naturally they will tend to look for a leader, or for someone who tells them what to think.


Good to hear my gut is still functional :slight_smile:

Robert Sapolsky did some research in this area. I will have to look it up again. Could temporary hierarchies offer new solutions? I can imagine this will run into the same scaling issues. And how can you design a system with this in mind?

EDIT:
When a hierarchy functions, the people on top are in a good state and the people at the bottom in a worse one. When a hierarchy is unstable, the people on top experience more stress and the people at the bottom see more opportunities. Something I just read… kind of interesting, and it makes sense in a way. I wonder how this oscillation works with temporary hierarchies. :thinking:

Good question. Temporary hierarchies could be a potential solution; however, I’m afraid people will tend to make them permanent.

Below are a few ideas I can share at the moment. They all depend on context and have both advantages and disadvantages. Besides, the issues I mentioned in my previous posts should also be taken into consideration, as they are no less important than the hierarchy one. Eliminating hierarchy is just a step in a good direction in some select use cases.

  • pushing for full anonymity. By full I don’t mean random nicks; I mean a random identity designator on every contact/vote/whatever. That, however, creates a risk of abuse through the creation of fake identities (the most common attack on rating systems at the moment). When constant designators are in use, that risk is lower, as other members can spot manipulation by observing patterns in behavior;
  • pushing toward non-conformist behavior. So, a user is rewarded for standing apart from the community; the further, the better. The first impression is that people will go to extremes. But if a proper measure is used (dense means with a local, narrow scope, for instance), going to extremes en masse won’t be beneficial. Then people will be forced to think about their choices several times, to find a way to exploit the system while also expressing their point of view in the most truthful way. A kind of game. That approach is very tricky, but also very promising, even if counter-intuitive.
  • pushing toward skepticism and rational thinking. The hardest approach, because people are lazy. While I can envision a system where every decision must be logically substantiated (using, for instance, a simplified language to manage predicates), I doubt it would attract people. Too many things to do before one can push the “the guy is an idiot” button. Too much thinking. On the other hand, it’s the only way to actually equalize the cognitive differences in a population.
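To make the second idea concrete, here is my own sketch (not anything from the thread) of what a local, narrow-scope non-conformity reward could look like: a rating earns more the further it sits from the local mean of nearby ratings, so a lone dissenter is rewarded, but a herd piling into one extreme just moves the local mean there and the reward evaporates.

```python
# Hypothetical non-conformity reward: distance from the local mean of
# neighbouring ratings, capped so runaway extremes don't pay off.

def nonconformity_reward(rating: float, neighbour_ratings: list[float],
                         cap: float = 1.0) -> float:
    """Reward = distance from the local mean, capped at `cap`."""
    if not neighbour_ratings:
        return 0.0
    local_mean = sum(neighbour_ratings) / len(neighbour_ratings)
    return min(abs(rating - local_mean), cap)

# A lone dissenter far from the local consensus earns the capped reward:
print(nonconformity_reward(5.0, [1.0, 1.5, 2.0]))   # 1.0
# A herd already sitting at the extreme earns nothing:
print(nonconformity_reward(5.0, [5.0, 5.0, 5.0]))   # 0.0
```

The cap and the “narrow neighbourhood” are the tricky knobs: tuned badly, the system either rewards trolling or flattens honest dissent.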

Disclaimer: the above are just rough ideas, missing many factors and oriented toward the hierarchy issue you find important.

As to your edit: with temporary hierarchies, the frequency changes :smile:. I don’t think anything else does. But we have TensorFlow, for instance; maybe someone will spare some time, link that variable with population metrics, and run it through some genetic algo?


To expand a little on a model that could shed some light on temporary hierarchy.

The first step would be to define some boundaries that the population works within. I’d suggest the following axioms:

  1. constant population size;
  2. an evolutionarily modeled dependence between the satisfaction of followers and the triggering of change (say, initially leaders-80/followers-20, and boom);
  3. constant followers’ satisfaction;
  4. a constant “life-time” for each run; otherwise, as there may be some stable situations, the whole thing will produce an infinite frequency;
  5. a variable frequency, which is the value we aim to optimize.

The model of the population and its internal interactions goes in TensorFlow (with generated training data). A genetic algo handles inheritance between rounds and random adjustments of the second point. The fitness function should depend on a steady frequency of change of leadership. (So two layers: TF and GA.)

It’s a very simplified model, but it could act as a hint when it comes to the question of what the best conditions for the temporary-hierarchy solution are.
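A toy, single-file version of that two-layer idea, heavily hedged: plain Python stands in for the TensorFlow population model, the satisfaction/change dependence of axiom 2 is reduced to a single threshold, and a minimal genetic algorithm evolves that threshold toward a target change frequency. Every name and number below is my own illustrative choice.

```python
import random

random.seed(0)

ROUNDS = 200        # constant "life-time" of each run (axiom 4)
TARGET_FREQ = 0.10  # the leadership-change frequency we optimize toward (axiom 5)

def run_population(threshold: float) -> float:
    """One simulated run: followers' satisfaction decays each round, and a
    leadership change fires when it drops below `threshold` (a crude stand-in
    for axiom 2). Returns the observed frequency of change."""
    satisfaction, changes = 1.0, 0
    for _ in range(ROUNDS):
        satisfaction -= random.uniform(0.0, 0.1)  # dissatisfaction accumulates
        if satisfaction < threshold:
            changes += 1
            satisfaction = 1.0                    # a new leader resets satisfaction
    return changes / ROUNDS

def fitness(threshold: float) -> float:
    """The closer the observed frequency is to the target, the fitter."""
    return -abs(run_population(threshold) - TARGET_FREQ)

# Genetic layer: each generation inherits from the best thresholds,
# with small random adjustments (the "random adjustments of the second point").
population = [random.random() for _ in range(20)]
for _ in range(30):
    population.sort(key=fitness, reverse=True)
    parents = population[:5]
    population = [min(1.0, max(0.0, random.choice(parents) + random.gauss(0, 0.05)))
                  for _ in range(20)]

best = max(population, key=fitness)
print(f"evolved change-trigger threshold: {best:.2f}")
```

Since satisfaction decays by about 0.05 per round on average, a change frequency near the 0.10 target corresponds to thresholds around 0.5, so the GA should settle somewhere in that region; the real model would of course replace the scalar threshold with the full TF-modeled interaction layer.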

Thanks, this is pretty cool and could scale when temporary hierarchies can somehow create temporary clusters. I like the life-time and satisfaction rules. Each hierarchy can dissolve after a goal is reached or if the satisfaction level diminishes. I can imagine this could be gamified within Holochain at the application level. Thanks for the great feedback. I am going to play with this idea for a bit and think of use cases where this could work.

With respect to modeling, I’d go even further. Given that the debate uncovered some problems with trust and the techniques trying to contain it (I hope), and given that your Hackathon is going to take place in Prague a week from now, I suggest you use the chance to prepare at least a theoretical basis for what you actually want in terms of a minimally reliable system based on trust. A conceptual model. Further on, I’ll describe what you should focus your attention on. Then you have two possible paths.

First, to prepare a model and a mutation mechanism on your own. I don’t know whether there are members of your community capable of doing it, but if there are, let them write a simplified version first, during your event. Simplified, because the more interconnected dimensions are involved, the more computing power is needed. But even with a simplified one, you could possibly spot some regularities.

The other path is to get some professionals interested in it. Your event may be a chance to meet them. By professionals I mainly mean academics. They may come from the university in Prague, for instance. My suggestion would be to invite people from the MPI in Tübingen, in particular their “Intelligent Systems” division. Of course, you can invite people from other academic centers around. They often have resources and knowledge you don’t, and maybe some will be willing to take a weekend trip to Prague. That approach is more reasonable, as any more complex model will require machines you probably have no access to. Also, such a cooperation could provide you with a solution that at least has some merit, not to mention the label of being scientifically substantiated.
But to take that path, you must prepare something more than just a wish list.

So, let’s see what you need to have your problem properly described.
Before you start reading further, please see this clip, as I’ll refer to it.

First of all, decide what base system you’d like to model. I mean the most abstract one, the one that could potentially be a base for other, more use-case-oriented systems. In the clip I wanted you to see, it would be the bipedal walker.

The next step should address defining the set of things the system must cover. So: whether there should be a rating (I assume yes), whether it should address hierarchy and leadership (@AdriaanB), whether it should be as tamper-resistant as possible or only to an extent limited by usability, whether it should force behaviors or respond to them (or both), etc. Analogy in the clip - two legs, an erect body, muscles attached to joints, locking joints, etc.

Next step - defining goals. This is your fitness function. Analogy in the clip - it walks firmly, doesn’t topple, and is able to navigate obstacles. In your case, it could be the fact that a user feels some kind of satisfaction. But pay attention to the fact that you must be able to measure it somehow. So, maybe the time spent using the system? The number of interactions? That’s your job. But the set of values must be strictly numerical and as concise as possible. So, no social-science mumbo-jumbo and no multitude of factors.

Next step - defining an environment. Your environment is our population. Because it’s complex, you must build a simplified model. In the clip, these are the bumps on the path. In your case, these could be popularity fluctuations, for instance.

Next step - defining your environmental threats. In the clip - boxes thrown at the walkers. In yours - malicious actors, aspiring leaders, a non-diverse community, etc.

Next step - defining the interactions between the different elements of the puzzle. In the clip, these are the constraints and routing (dependencies) of the muscles. In your case… well, we talked about that. Human behaviors aren’t very simple, but when broken down into atomic operations, as I tried to explain above, some factors are clearly dependent on others. So, for instance, our attitude toward a person, and our relation with that person, impact our willingness to trust. It’s probably not linear, but there is a correlation. Maybe there are some papers trying to grasp the scale of that correlation; look for them. There are other relations as well, for instance between stratification and satisfaction (hierarchy). And many more. Some have been researched. Find numbers.

Next step - define the size of your population. Start from small numbers. Analogy in the clip - muscles and bones. Mind that you must take malicious actors into consideration as well.

Next, you should decide which factors are constant, which are variable, and which a GA should fill with optimal values. Analogy in the clip - the value to search for was the sequence of firing muscles (or its generalized model). You can prepare different scenarios with different constants, variables, and values to be searched for.

Before you call it done, one more caveat. Whenever possible, try to find figures in research papers. There are lots of papers focused on social issues, but concentrate on those that contain numbers and an analytical approach. No mumbo-jumbo, remember? If there is no paper on a particular matter, look for things that may be related. For instance, trust isn’t deeply researched, but belief is: from social, neurological, and biological perspectives. Another example: the wisdom of the crowd has been researched in many papers. While not directly related to your use case, some rules and values might be useful.

After completing the above steps, you’re more or less ready to prepare your model and its evolution. Evolution, because you’d like to find the form that works the best way possible (that optimally satisfies the fitness function).

Then you just write the code, run it, and, depending on equipment and complexity, wait from hours to weeks to see the result.

I address this post to all of you: notably to @Brooks, who seemed to care about the subject, to @jakob.winter, who certainly cares, to @AdriaanB, who wants to solve at least one issue, and to whoever is interested in the subject of this debate. You don’t have to like me, you can even be emotionally distressed by what I’ve written so far, but this advice is free. There is nothing better than free candy, right?

What I mean by saying that I respect your integrity is that I do. You can reject my perspective; I still think BitLattice could benefit from a better social consensus mechanism. Bitcoin and Ethereum have both fragmented socially, and I think you make a false argument when you claim that Bitcoin and Ethereum (and most likely BitLattice) do not rely on trust in their social consensus mechanism, because they do.

For a longer answer, see https://textuploader.com/1r4d8, but since this is a social disagreement and not a technical one, it is not that relevant.