Debate on Trust with Hibryda from Bitlattice

The debate between me and @Brooks is about understanding each other and presenting arguments that support our points of view. There is no need to meet one another in the middle. That is even close to impossible, as a skeptic and a believer can seldom work out a mutually satisfying solution. We debate (apart from the etymology, “argument” sounds more aggressive than “debate”) to present arguments, understand each other, and maybe find out something about our own stances that wouldn’t be possible to spot in a homogeneous community.
It’s possibly tribal, possibly basal, but trust is also basal and close to our instincts. You have your own tribe - the one that tries to approach everything in a universal way, even if it goes against logic.

Blockchains, and BL, can be managed by bots. You seemingly overestimate the complexity of our behaviors and underestimate algorithms. As to uploading to the cloud - a silly idea at the moment, and probably hard to perform in the coming decades. But not impossible - it’s just a matter of tools.
Machine consensus has little to nothing to do with our social organization, as there must be an interface between society and machines, and such interfaces are far from ideal. Bitcoin was invented to mitigate deficiencies of the financial system, nothing more (there was a social factor involved, but detached from the tool). Blockchains and derived technologies are just databases, agnostic when it comes to the environment they operate within.

How you have managed to link microtubules, the limbic system and neurosis escapes me. However, as it changes literally nothing when it comes to information processing, I can simply ignore it. I already explained why the issue of microtubules (which you greatly exaggerate, by the way) is unimportant to the question of whether we can simulate neural networks. We can, and we do. Efficiency is just a matter of tools and scale.

You are both just arguing for the benefits of whichever system you are focused on, BitLattice on one end and Holochain on the other, finding a dispute where there is none. The “sceptics vs believers” dichotomy is a belief in itself; it is just a meme, a story that enforces social stratification (having no other function besides that). The human condition is to have beliefs, memes. You cannot get around that and keep the ability to reason needed to practice the “art of reason”, λογῐκός.

The Nakamoto consensus is not machine consensus. Bitcoin added a social mechanism; the rest (digital signatures and hash-linking) was invented long before. BitLattice seems to have a new “hash-linking” in the lattice structure, and new digital signatures with the lattice cryptography - both machine consensus. I have not had access to how it does social consensus.

Neural networks are computer algorithms, loosely inspired by the human brain, no? It is understandable that people have assumed that neurons were transistors; I did, until a few years ago when I discovered the idea that microtubules are computers. I was sceptical at first, because my conditioned belief was that they are just the cytoskeleton; I was taught that in med school. That CaMKII fits perfectly onto the tubulin lattice, providing a mechanism for switching state by phosphorylation, was an example of evidence that got my attention. I was also taught in med school that CaMKII is used to form long-term memories, and that it magically did so by being activated. So I would say that my interest in microtubules is grounded.


What a deep insight into our motivations. Let me, however, correct you. My motivation to be part of this debate is mostly the pleasure I feel while discussing, analyzing facts, and countering arguments. In old times it was called rhetoric (it still is, but as it’s a dying art, referring to old times is justified). My other motivations I have already stated. The only person who brought up the issue of our networks was you (and I quickly countered that attempt).

It seems that you love building straw men - you do that every time you’d like to change the course of a dialogue, usually when your arguments aren’t heard by your interlocutor. In the previous post I explained to you why we debate (by “we” I present my stance and the perceived stance of @Brooks; I assume he’d correct me if I misunderstood his attitude). Apparently, that wasn’t something you wanted to hear (or read). So you grabbed a bale of hay and some old clothes and started painstakingly creating your next straw man.

Your new creation is “no skeptics/believers dichotomy - just a meme”. I could counter it quickly by saying that memes don’t have to be detached from facts/phenomena. But then you’d continue your tirade, trying to convince me that this isn’t the case here. As I cannot watch someone torture a hay-stuffed dummy, let’s burn it down so you can build another one.

First of all, as with most straw men, you present a true prerequisite. There usually must be one, with the others carefully left unsaid. So, yes, ONE of the human conditions is to have beliefs. You seemingly forgot that it’s not the only condition. There are, of course, many more. As animals, we exhibit an affinity toward beliefs - be it trust, or higher forms like religious beliefs or ideologies. Higher, because they engage our ability to think abstractly, which we have the most developed among animals. But trust isn’t our, human, domain. Other complex animals trust as well. That shouldn’t be surprising - trust is a shortcut that saves energy.

However, both we and other complex animals exhibit another condition as well: vigilance. We analyze the environment to spot threats and opportunities. It’s probably a root cause of our intelligence (plus some random mutations and environmental pressure). Vigilance leads to reason.

There are more conditions as well, but the two I mentioned fit into this debate. In animal herds, human society for instance, the balance between trust and vigilance is shifted toward trust. There are very good reasons behind this, one being herd stratification. In individual members of a herd the balance varies: some trust more; some are more vigilant, inclined to analyze more and believe less.

The dichotomy is natural. It is a meme, but it doesn’t change anything. Αιτιολογία :smile:.

The Nakamoto consensus is not machine consensus.

The fact that you wrote something doesn’t make it any more true. Especially when what you wrote next can’t even count as an argument. The time frame of the creation of Bitcoin and its components doesn’t matter. What matters is that the algorithm could be performed without humans at all. I could write, probably in minutes, a simple program that would send transactions back and forth through a blockchain - indefinitely, as long as there were zero fees, free electricity and indestructible machines. Mining blocks and performing proofs would work without conscious actors. Therefore, it’s a machine consensus. The fact that it can play a social role is secondary.

Consequently, Bitlattice doesn’t perform a social consensus, only a machine one. It goes even further by abstracting that process from human influence via internal mid-actors. But, as I said, I’m here to discuss trust. If I’m to advertise something, it will be Holochain, as this thread seems to be the most visited one on this forum (apart from the introductory “hello all” one). So we’ve attracted a good deal of people to Holochain.

Neural networks are computer algorithms, loosely inspired by the human brain, no?

The key word here is “loosely”. Your interest in microtubules may be grounded; your far-reaching conclusions about our ability to build complex neural networks and AIs aren’t. It’s true that memory can probably be stored in a more sophisticated manner than we previously thought. Maybe in microtubules. Maybe they also play a role in processing.

No matter how it actually works in our heads, however, it has no impact on how it can be performed on machines. The base logic is always the same and is derived from the causality of our universe, from entropy and other factors (which I do not wish to discuss here, just sayin’).

Let me show you a simplified, even naive, analogy. Birds fly at up to 404199 furlongs per fortnight. The fastest (officially confirmed) plane flew at 5895300 fpf - about 14.5 times more. Birds have constraints: they must conserve energy, they must survive. Planes don’t have all the same constraints. Actually, the only constraints are the laws of physics, the same as for birds. But if we need to, we can waste energy (and we do it way too often). So, while birds had to evolve many energy-saving, performance-limiting solutions, our machines aren’t so limited and can achieve far better results. The same applies to computers. Not very efficient, but we can put megawatts into them, stack them into miles-long racks, and I assure you that we’ll surpass human brains by many lengths. Not now, but in the near future.
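For the curious, the furlong-per-fortnight figures quoted above convert to more familiar units as follows (a quick sketch; the unit constants are standard, the speeds are the ones from the text):

```python
# Convert the furlong-per-fortnight figures quoted above into SI units.
FURLONG_M = 201.168           # metres in one furlong
FORTNIGHT_S = 14 * 24 * 3600  # seconds in one fortnight

def fpf_to_ms(fpf):
    """Furlongs per fortnight -> metres per second."""
    return fpf * FURLONG_M / FORTNIGHT_S

bird_fpf, plane_fpf = 404199, 5895300
bird_ms = fpf_to_ms(bird_fpf)    # ~67 m/s, i.e. ~242 km/h
plane_ms = fpf_to_ms(plane_fpf)  # ~980 m/s, i.e. ~3530 km/h
print(round(plane_ms / bird_ms, 1))  # ~14.6x
```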

Re: strawman, I am not refuting yours and @Brooks’ argument, because I do not agree that there is a dispute. You are both right; you just argue for two separate types of social organization. I agree with you both, so to call that a strawman is not correct.

The conversation that came from that is just about your claim that BitLattice does not rely on social consensus (trust), and that I have no access to proof of that claim, while you do. Instead of meeting around that (or debating it, or whatever), you actually used a few strawmen. I have replied to those, but they are not the debate.

To avoid those strawmen dancing around the argument, I replied to the rest here, as it is not that relevant to the debate.

We debate about two separate, mutually exclusive, approaches to trust. They neither intersect, nor coexist in the same context. Use of one cancels the other.

We often say that we trust someone half and half. I even used a similar valuation in one of the posts above, using a percentage measure. But when one breaks down (as I did) the perception of a complex phenomenon that internalized trust is, one will find that it’s composed of atomic operations with a binary choice only - trust/proof. They may aggregate into the said half and half, but at the base level trust cannot be fractioned.
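The claim above - trust as atomic, binary decisions that only look fractional in aggregate - can be caricatured in a few lines (purely illustrative, nothing more):

```python
# Illustrative only: each atomic decision is binary (trust vs. demand proof);
# only the aggregate looks fractional.
atoms = [True, False, True, True, False, False]  # six atomic trust/proof choices

aggregate = sum(atoms) / len(atoms)  # fraction of choices resolved by trust
print(aggregate)  # 0.5 -> the "half and half" impression
```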

Therefore, you cannot agree with both stances in general. You may agree that the stances @Brooks and I represent exist. You may even agree that they are applicable in their contexts. That way, however, you bring no useful input to the debate. We know that the phenomena we discuss objectively exist (as far as we can tell) and that they are applicable. You just confirm the fact - an act of zero value. The only benefit of your replies is subjective to me, as I have fun writing responses.

For reasons unknown to me, while you have nothing of value to add, you try to redefine our debate (“No, I disagree with the trust/proof dichotomy”). I don’t know whether you do it intentionally or not. But building a straw man is a fact. Remember “it’s just a meme”? Okay, as it’s fun to watch, I’ll let you have your pet dummies.

The conversation that came from that, is just about that you claim that BitLattice does not rely on social consensus (…)

Do you ever read what you wrote before (not to mention my replies)? Really read, not just scan the letters. When you claimed that the consensus mechanism first used in Bitcoin is a social consensus, I countered that, since it can perform without any society involved, it is a machine consensus. Just that. I further clarified that Bitlattice extends that mechanism, abstracting any human input away from the inner workings of the network, because you had to compulsively refer to real-world applications as if that mattered.
You stubbornly bring up BitLattice even though you know I consider that rather bad behavior, given the place we are in and the subject of this debate. That further convinces me that you read our debate through glasses I have no access to.

you actually used a few strawmen

You’re a master of floating on the surface and avoiding any actual arguments. You often mention something but seldom justify it with any premises. However, I’d love to hear more about the straw men I allegedly used - with arguments substantiating that they were, in fact, straw men.

As to the content you moved outside into a separate document - yes, it’s irrelevant to the debate, so I ignore it.

Nevertheless, I enjoy debating. However I’d love to discuss trust/proof more than just fencing over irrelevant matters.

I am generally very interested in this discussion. But it ran in different directions, making it hard for me to remember its starting point. So I went back and re-read the original post. @Hibryda, you stated on Twitter:

I do like your mathematical approach here. And I do see your point: If it is impossible to cheat, we are forced into a regime of honesty.

To use an everyday example (a bad one though, since it’s mainly emotional): In the UK people adhere to the rule of queuing up. If someone tries to jump the queue, they will be made aware of their transgression immediately. This “forced honesty” gives me a great deal of relaxation, since I can simply rely on being treated fairly.

When @Brooks replied with his tweet…

… I believe he was taking a more emotional standpoint, wanting to point out that we should strive to bring more humanity to the internet of the future.

I do see where those two definitions are clashing and I have a feeling those are two different topics. I will push Brooks’ angle to the side for now and leave a few comments on “trust” that might be close to @Hibryda’s definition.

Absolutely. That’s why we should try to take as much guesswork out of the equation and make the trust-algorithm as precise as possible.

We should try to gather as much verifiable data as possible and treat subjective data with care.

When ordering something for example, several datapoints can be tracked rather objectively. Time of payment and time of delivery are two easy ones.

With others we could try to “embrace subjectivity” by asking non-judgemental questions. For an Uber car ride, don’t ask “Was the temperature too cold?” but have a slider instead “Cold - Hot”
Then we might even introduce deliberate “confirmation bias”… If I found one particular car to be cold and another rider found the same car to be cold, we seem to have a similar perception. If more and more shared data-points align, my app could favour this user’s opinion…
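The alignment idea could look something like this in code - a hedged sketch, where the function name, the tolerance threshold and the sample readings are all made-up assumptions:

```python
# A sketch of the "deliberate confirmation bias" idea: weight another rider's
# opinion by how often their past slider readings agreed with mine.
# Function name, tolerance and sample data are illustrative assumptions.

def agreement(mine, theirs, tolerance=0.15):
    """Fraction of shared slider readings (0.0-1.0) that roughly align."""
    pairs = list(zip(mine, theirs))
    if not pairs:
        return 0.0
    return sum(abs(m - t) <= tolerance for m, t in pairs) / len(pairs)

# Readings for the same cars on a "Cold (0.0) - Hot (1.0)" slider
my_readings    = [0.2, 0.3, 0.8, 0.25]
their_readings = [0.25, 0.35, 0.4, 0.3]

weight = agreement(my_readings, their_readings)
print(weight)  # 0.75: three of four shared readings align
```

An app could then scale this user’s future opinions by `weight` when ranking them for me.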

Now I am getting sidetracked though. It’s going beyond trust now…

At the very least:

To sum up my ramblings. Yes, I am with you @Hibryda: The less trust we need (trust being educated guesswork) the better.

And I am with you @Brooks: It is so important to get more humanity into the online world and try to foster real, human connections.
I am not sure though, if “trust” is the best word for it.


I side with both your and @Brooks’ points of view. Based on that, I claim that BitLattice has a social coordination mechanism. I do not have access to proof of that, while you do. Instead of presenting verifiable proof, you divert into strawmen.

I respect your integrity as an inventor and as a thought leader. I also respect my own integrity, and therefore I am clear that the reason this conversation is not a conversation is that I do not have access to proof, while you do.

To keep focus on the debate, I put my replies to the diversions here.

As to Bitlattice - you don’t need to have proof, as:

  • this debate isn’t about Bitlattice;
  • this debate isn’t about whether any blockchains and derivatives incorporate social or machine consensus (and what actually that means);
  • apart from the point above, I already inferred in a logically consistent manner that said consensus is a machine one;

As to the purported straw men I used - predictably, you cannot provide any consistent evidence.

I should also remind you that we have corresponded since April 2018, and back then I tried to explain to you how consensus in BL is achieved. Apparently to no avail. I doubt that repeating the same process would yield a different outcome. Then, as now, you are more interested in the implementation of your protocol than in how consensus is reached in a particular network.

I respect your integrity as well, so I suggest that we come back to the subject of the debate without constantly derailing it. Consequently, I ignore your unrelated input in the linked document.

It’s encouraging that at least one person finds this thread somewhat inspiring.
While our debate got a little off track, I’m really keen on getting back to the subject.

I’ll start by commenting on what you wrote at the end of your post.
The word “trust” can suit us very well as long as we are able to define it as precisely as possible and all agree as to the definition.
So, let me start.
“Trust” is a social interaction tool, specific to more developed animals, that exhibits the following properties:

  • its aim is to provide a fair assessment of opportunities related to another member of society or another entity, with minimal waste of resources (including time);
  • it depends on hints. Hints can have a form of:
    • singular or repetitive behavior of the other party confirming expectations;
    • perceived events matching expectations;
    • feelings/intuition.
  • it’s subjective and irrational. “Irrational” here means that there is no requirement for the hints to be provably true.

Of course, I’m open to having the definition extended and refined.

Solely by analyzing the above, one can easily infer that simple, algorithmic, deterministic computer systems cannot provide a good interface for that trust mechanism. They can be used to store and retrieve some trust-related indices, but their use in no way makes the mechanism itself more reliable.
On the other hand, neural networks could cope with it. Again, that changes little, as together with the ability to deal with somewhat fuzzy logic comes a fuzzy result. Neural networks are also prone to disturbances and harder to verify due to stochastic behavior.

We, me and @Brooks, possibly started from different standpoints, but standpoints related to the same phenomenon. I still favor the view that we represent two sides of trust=/=proof equation.

I also understand the need to have more humane technology; I’m not a robot (or at least I pass the Turing test). However, what I often observe is that people favor an all-or-nothing approach to the subject, ignoring obvious facts and limitations. While the Bambi effect with respect to other humans can be considered heartwarming, it often leads to complete blindness when it comes to real issues. That has an important negative side effect - people don’t differentiate between what they can and cannot do within the limits set by a system of choice.

As I said above, it’s impossible to cast/emulate trust-related issues in the simple deterministic systems blockchains actually are. Therefore, any such representation will always serve only as an approximation, not real data. Let’s illustrate some issues. If we are to store a reputation score (for illustrative reasons I’ll refer to the reputation systems previously discussed) in a blockchain database, what type of variable should we use to represent it? Should it be a 64-bit integer or a float, and if one of the two - why? One factor only, or many? Should we limit its range, and if so, to what span? When should we expect float-related accumulative errors to start playing a role? Do we need any normalization performed from time to time, and if so, how strong and why? And so on. These questions must be both asked and answered - answered with care given to facts, research, and good practices.
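One of those questions - float-related accumulative error - is easy to demonstrate. This is a standard IEEE-754 effect, not specific to any blockchain:

```python
# Adding 0.1 ten thousand times in 64-bit floats drifts away from the exact
# result, while an integer "fixed-point" counter (tenths) stays exact.
score_float = 0.0
score_fixed = 0          # store tenths as integers instead
for _ in range(10_000):
    score_float += 0.1
    score_fixed += 1     # one tenth

print(score_float == 1000.0)       # False: drift has accumulated
print(score_fixed / 10 == 1000.0)  # True: exact
```

The drift here is tiny, but in a system that aggregates millions of updates, the choice of representation starts to matter.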

There’s more. Storage is just the easiest part of the equation. Further on we have algorithms. How should they understand a user’s input? How should they quantify it? Should they try to detect malicious actors by analyzing the convergence of scoring, or just leave the system open to manipulation?
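A naive version of such a “convergence of scoring” check might look like this - the data, names and threshold are illustrative assumptions, not a vetted detector:

```python
# Flag raters whose scores for the same items consistently diverge
# from the per-item median (a crude stand-in for "consensus").
from statistics import median

ratings = {            # rater -> scores for the same five items
    "alice":   [4, 5, 4, 4, 5],
    "bob":     [5, 4, 4, 5, 4],
    "mallory": [1, 1, 5, 1, 1],  # pushes one item, buries the rest
}

items = list(zip(*ratings.values()))
consensus = [median(scores) for scores in items]

def divergence(rater):
    """Mean absolute distance of a rater's scores from the consensus."""
    return sum(abs(s - c) for s, c in zip(ratings[rater], consensus)) / len(consensus)

flagged = [r for r in ratings if divergence(r) > 1.5]
print(flagged)  # ['mallory']
```

A real system would need robustness against colluding raters, who can shift the median itself.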

The deeper, the worse. The community. As we know from research analyzing the wisdom-of-the-crowd phenomenon, it’s very sensitive to the composition of the crowd in question, internal and external influence, and the complexity of tasks. Therefore, we should answer questions about how to take care of proper composition, limit influence, and properly define tasks.

To sum up my ramblings, I consider dependence on trust-recording/processing computer solutions harmful to our ability to properly deal with reality and social issues. However, I cannot ignore that devoted believers exist and that they are part of the society I live in. Thus, if they must depend on tools as reliable as lotteries, I feel compelled to discuss what we could do to make them as good as possible. That’s why I’m genuinely interested in the solution proposed by @sidsthalekar and how it will evolve.

However, the necessary condition for really starting to plan a concise and effective approach to the subject is to enter my domain of cold reasoning, leaving behind emotions and beliefs, instead of trying to logically defend something that cannot be defended that way (yep, Aquinas). Because the interface must be implemented on the machine side. One doesn’t have to embrace it, or even like it, but it’s a hard requirement.

A monumental book I’ve found exceedingly useful around this subject is ‘The Speed of Trust’ by Stephen M.R. Covey. Anyone following this thread with interest, but unfamiliar with this book, would do themselves (and eventually all of us! :wink: ) a favour by reading and practising the advice.

@prosidual I spent some minutes reading the content of the website and watching clips with the author. My first impression isn’t anywhere near your enthusiasm.

To sum up what I got so far:

  • the author repeats “trust” constantly, but never goes any deeper into the meaning and mechanism. Maybe he does in his book; however, as he seems to believe deeply that trust answers all questions, it’s rather certain he won’t destroy his narrative by uncovering the less favorable sides of the phenomenon;
  • both the supporting companies and the style of presentation suggest that it’s about conditioning the workforce, a collection of HRM tricks. Companies don’t need questioning workers; they need obedient ones. Trust is a great complementary tool for eliminating dissenters;
  • putting innovation and trust together, which the author does repetitively, is a kind of abuse of logic. Innovation takes root in a lack of trust, in asking questions, in trying to change the status quo.

All the above makes me think that buying the book would be the worst spent $16 ever.

Do you have any arguments why that book isn’t the mob’s fodder it seems to be?

What I mean is, you base your stance against mine on the claim that BitLattice does not have a social coordination mechanism. I agree with both your viewpoint and @Brooks’ viewpoint; you disagree with that, and do so based on information that only you have access to. I respect your integrity.

The reason Bitcoin and Ethereum have fragmented (socially) is that their social consensus mechanisms have not been good enough. To me, it seems like the “proof-of-structure” in BL is consensus across shards (not needed in a blockchain, because there is only one dimension).

The Nakamoto consensus uses popular vote. In our correspondence over the past two years, I have not seen how BL selects which state to follow - what the popular-vote mechanism in BL is (in the way that there is one in Bitcoin and Ethereum).

I said once that it seems to me like you have invented everything but the social consensus mechanism. The “hash-linking” (lattice structure + proof-of-structure) seems like a leap forward from Bitcoin and Ethereum. To me, it is beautiful, even though I have only had it described. The lattice cryptography also seems like a leap. But those are leaps in technology that predates Bitcoin. Craig Wright invented the Nakamoto consensus; his focus was social, and his invention is the leader of a tribe.

Anything remotely related to the subject of this debate maybe?

Having second thoughts about the proposal by @prosidual, I decided to suggest some books, possibly more relevant.
It’s going to be a kind of situational joke, but only partly; I’ll explain why below.

I’d suggest two books, both of which deal extensively with trust as a tool, from the perspective of those who use it to control an organization. They refer to many use cases confirmed by further experiments. They are used in certain academies as textbooks. They are also free, as the IP rights to them are no longer in force.
So, the books are:

As you can see, I use a reductio ad absurdum here, to emphasize certain issues.
While the book suggested by @prosidual claims the support of 100 CEOs from the biggest companies, the books I mentioned are supported by millions of bones scattered around the globe, once belonging to guys whose superiors never read those books or never understood them.
More importantly, though, their authors no longer had to pretend or hide their true thoughts, as by the time they wrote their works they had already made sure that most of their enemies took the fast lane into eternity. Therefore, they offer an insight - cynical maybe, but free of any correctness - into the tools in whose use they excelled.

Every hierarchical, totalitarian organization - army, church, bigger companies, political movements - needs two tools: violence and trust. The first is obvious, although it may differ in form, from physical violence to economic and/or psychological pressure.
The second is more interesting to us. And from here on, the reductio ad absurdum is no longer so absurd. Trust is a vital tool; its most prominent use can be observed in armies, where it’s induced with force or manipulation.

There are many reasons why trust is irreplaceable, however three are the most important:

  • there’s no way to ensure that all members have the same access to information. It could even be detrimental to the organization, as it would put the hierarchy in question. Therefore, in the absence of knowledge, trust is the only option;
  • it’s a built-in mechanism of our brains and therefore easy to use. Because even in a diverse organization most people are “equipped” with that feature to some extent;
  • there is a disparity in numbers between “trusters” and “testers”, in favor of the former (I mentioned it before). Thus, trust is useful in homogenizing an organization, as dissent gets calmed down.

A view into how armies use that tool gives insight into how it works in less extreme cases.
For instance, armies execute search-and-rescue operations seemingly against logic, at high risk and cost, even when they know they are going to collect a mangled body - because soldiers must trust that someone will come, no matter what. The same applies to churches (shielding pedophiles, for instance) and companies (shielding fraudsters). When banks in a crisis year took another path and there was a famous year of window jumpers, it backfired later with a wave of whistleblowers (the recent money-laundering scandal).

My point here is simple. The Bambi effect projected onto humans is naive (as naive as when projected onto other animals). When planning a system meant to reflect trust, it should always be kept in mind that this tool is a tasty bite for many interest groups. Therefore, to make sure it’s hard to abuse, designers must take into consideration all vulnerabilities, spanning from technical matters to sociological ones. Or, if one specifically wants to design a tool of control, studying extreme cases may be useful as well.

I totally agree with designing systems that are resilient against abuse. In my limited understanding and belief, the first step is to not think in hierarchies. That is just what my gut and 39 years of life experience told me :slight_smile:


Your gut tells you right. The ideal society would be composed of similarly conscious, independent individuals being able to infer their decisions through reason. Without any hierarchy.

Sadly, biology acts against that. Animal herds are a way certain species maximize survival while minimizing expenses. To work efficiently, herds must have a hierarchy, as the flow of information is always a bottleneck and the decision-making process cannot be entrusted to an assembly of all members, simply because their cognitive abilities differ along a bell curve. Democracy, while an attractive idea, works poorly in terms of efficiency - apart from the fact that it’s seldom real democracy in real life.

So, your gut is right. But that doesn’t solve a thing, because our world is far from ideal. Non-hierarchical structures can only grow to a limited scale. One of the largest examples was GitHub, before it was taken over. But to look for a working solution, I’d suggest looking at possibilities to nudge participants of a system, via specific stimuli, toward a hierarchy-less approach - because naturally they’ll tend to look for a leader, or someone to tell them what to think.


Good to hear my gut is still functional :slight_smile:

Robert Sapolsky did some research into this area. Will have to look it up again. Could temporary hierarchies offer new solutions? I can imagine this will run into the same scaling issues. And how can you design a system with this in mind?

When a hierarchy functions, the people on top are in a good state and the people at the bottom in a worse one. When a hierarchy is unstable, the people on top experience more stress and the people at the bottom see more opportunities. Something I just read… kind of interesting, and it makes sense in a way. I wonder how this oscillation works with temporary hierarchies. :thinking:

Good question. Temporary hierarchies can be a potential solution, however I’m afraid people will tend to make them permanent.

Below are a few ideas I can share at the moment. They all depend on context and have both advantages and disadvantages. Besides, the issues I mentioned in my previous posts should also be taken into consideration, as they are no less important than the hierarchy one. Eliminating hierarchy is just a step in a good direction in some select use cases.

  • pushing for full anonymity. By full I don’t mean random nicks; I mean a random identity designator for every contact/vote/whatever. That, however, creates a risk of abuse through fake identities (the most common attack on rating systems at the moment). When constant designators are in use, that risk is lower, as other members can spot manipulation by observing patterns in behavior;
  • pushing toward non-conformist behavior, so that a user is rewarded for standing apart from the community - the further, the better. The first impression is that people will go to extremes. But if a proper measure is used (dense means with a local, narrow scope, for instance), going to extremes en masse won’t be beneficial. Then people will be forced to think about their choices several times, to find a way to exploit the system while also expressing their point of view in the truest way. A kind of game. This approach is very tricky, but also very promising, even if counter-intuitive;
  • pushing toward skepticism and rational thinking. The hardest approach, because people are lazy. While I can envision a system where every decision must be logically substantiated (using, for instance, a simplified language to manage predicates), I doubt it would attract people. Too many things to do before I can push the “the guy is an idiot” button. Too much thinking. On the other hand, it’s the only way to actually equalize cognitive differences in the population.

Disclaimer - the above are just rough ideas, missing many factors and oriented on the hierarchy issue you find important.
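The second idea - rewarding distance from a local mean, so that herding into one extreme stops paying off - could be sketched roughly like this (function name, window size and sample data are all illustrative assumptions):

```python
# Reward each participant's score by its distance from a LOCAL (windowed)
# mean rather than the global mean, so local means follow any mass rush
# to an extreme and the reward for it vanishes. Purely illustrative.
def nonconformity_reward(scores, window=2):
    """Reward each score by its distance from the mean of its neighbourhood."""
    rewards = []
    for i, s in enumerate(scores):
        lo, hi = max(0, i - window), min(len(scores), i + window + 1)
        local_mean = sum(scores[lo:hi]) / (hi - lo)
        rewards.append(abs(s - local_mean))
    return rewards

# If everyone rushes to the same extreme, the local means follow and
# the rewards all collapse to zero:
print(nonconformity_reward([10, 10, 10, 10, 10]))  # all zeros
# With genuine dissenters present, the dissenting 2 and 3 earn the
# largest rewards:
print(nonconformity_reward([10, 2, 10, 3, 10]))
```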

As to your edit - with temporary hierarchies, the frequency changes :smile:. I don’t think anything else does. But we have TensorFlow, for instance; maybe someone will spare some time, link that variable with population metrics, and run it through some genetic algorithm?


To expand a little on a model that could shed some light on temporary hierarchy.

The first step would be to define some boundaries the population works within. I’d suggest the following axioms:

  1. constant population size;
  2. evolutionarily modeled dependence between followers’ satisfaction and the triggering of change (say, initially leaders-80/followers-20, and boom);
  3. constant followers’ satisfaction;
  4. constant “life-time” of each run - otherwise, as there may be some stable situations, the whole thing would produce an infinite frequency;
  5. a variable frequency of leadership change, which is our goal to optimize.

The model of the population and its internal interactions goes in TensorFlow (with training data generated); a genetic algorithm handles inheritance and random adjustments of point 2 for every round. The fitness function should depend on a steady frequency of change of leadership (so two layers: TF and GA).

It’s a very simplified model, but it could act as a hint when it comes to the question of what the best conditions for the temporary-hierarchy solution are.
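For anyone who wants to play with the idea, here is a dependency-free toy version: a plain simulation stands in for the TensorFlow layer, and a small genetic algorithm tunes the satisfaction threshold (axiom 2) toward a target frequency of leadership change (axiom 5). Every constant and function name is an illustrative assumption, not a proposed implementation:

```python
import random

LIFETIME = 200      # axiom 4: constant "life-time" of each run
TARGET_FREQ = 0.10  # desired leadership changes per step (axiom 5's target)

def run(threshold, seed=0):
    """Simulate one run; return the observed frequency of leadership change."""
    rng = random.Random(seed)
    satisfaction, changes = 1.0, 0
    for _ in range(LIFETIME):
        satisfaction -= rng.uniform(0.0, 0.1)  # leaders slowly erode satisfaction
        if satisfaction < threshold:           # axiom 2: change is triggered
            changes += 1
            satisfaction = 1.0                 # a new leader resets satisfaction
    return changes / LIFETIME

def evolve(generations=30, pop=20, seed=1):
    """Evolve the trigger threshold toward the target change frequency."""
    rng = random.Random(seed)
    genomes = [rng.random() for _ in range(pop)]  # candidate thresholds
    for _ in range(generations):
        genomes.sort(key=lambda t: abs(run(t) - TARGET_FREQ))  # fitness: closeness
        elite = genomes[: pop // 4]                            # keep the best quarter
        genomes = elite + [
            min(1.0, max(0.0, rng.choice(elite) + rng.gauss(0, 0.05)))
            for _ in range(pop - len(elite))
        ]
    return genomes[0]

best = evolve()
print(round(best, 2), round(run(best), 3))
```

With average erosion of 0.05 per step, a threshold near 0.5 gives roughly one change every ten steps, so the GA should settle in that neighbourhood.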