The War on Sense-making - Daniel Schmachtenberger



Is there a transcription of this? Asking for a Deaf friend.


I liked the above podcast very much. Also, there’s another podcast from Daniel Schmachtenberger on YouTube.


Just saw this & thought I’d link to https://otter.ai/ as I discovered it a couple of months back & tried it out. It’s pretty good & has a free tier of 3 audio-to-text transcriptions per month. Any further questions about it please direct to them, I’m just adding the link, the above is all I know lol!

Thanks for posting this @kanta - wow, what an incredible discussion! My visual brain is a little annoyed that the image puts each person’s name over the other person, though I guess if you know the show or the people then you probably wouldn’t notice lol!

With great minds like these I often transcribe the parts I want to understand further, and as it’s pretty long I thought it might help to post them here so others get a glimpse of the discussion. Of course it’s only the small part I selected, however at the time I listened to this the Holochain community was very much on my mind, so hopefully there’s some relevance. I picked up on three quotes:

  • “The model is never the thing - it’s the best epistemically we can do at that moment”
  • “Wisdom is the difference between the optimisation function and the right choice”
  • “They’ll do it and they won’t even realise necessarily that that’s what they’re doing”

Here’s my transcript from those - don’t laugh at my guesses on the names they were referencing, I’m not that educated on Real World stuff, spent my life playing with zeros and ones :wink: Thanks again @kanta!

1:57:29 - “The model is never the thing - it’s the best epistemically we can do at that moment”

Markets can do a good job with the “how” but not the “what” which is the “is”/“ought” distinction that comes up in science

Science can do a good job at what “is” but not what “ought”, which means applied science, i.e. technology, i.e. markets, can do a good job of changing “is”, but not in the direction of “ought”. And so that is ethics, which is to be the basis of jurisprudence and law, which is why you bring those things together. And it’s because “is” is measurable: third-person measurable, verifiable, repeatable. [host] “It’s objective” [guest] It’s objective, right. Whereas “ought” is not measurable - you can do something like Sam Harris does in The Moral Landscape and say it relates to measurable things, but it doesn’t relate to a finite number of measurable things; there’s a Gödelian proof that whatever the finite number, there are some other things that we end up finding later that are also relevant to the thing, that weren’t part of the model we were looking at. And so, erm, the thing that is worth optimising for - you talked about the blue and the fast would be part of the same thing - the thing that is worth optimising for is not measurable. It includes measurables, but it is not limited to a finite set of measurables that you can run optimisation theory on and have an AI optimise everything for us.

[host] Yeah, I agree. You will have a long list of characteristics that you can measure, and as you go from the most important to the least important you will eventually drop below some threshold of noise where you’re not noticing things that contribute. So yes, you’ve got a potentially infinite set of things that matter less and less, and you will inherently concentrate on the biggest, most important contributors up top, and that’s natural. It’s an issue of precision at some level, but one where we shouldn’t convince ourselves that we’re solving the puzzle completely at a mathematical level. An engineering solution is not a complete mathematical solution.

[guest] Right - OK, so now I’m coming back to the waxing mystical thing. I don’t think it has to be thought of that way. I think the way that Einstein was doing it - and he says Spinoza’s God is my God - I’m happy to do it that way. So the first verse of the Tao Te Ching is that the Tao that is speakable is not the eternal Tao, right. “The optimisation function that is optimisable with a narrow AI is not the thing to optimise for” is a corollary statement. And the Jewish commandment about no false idols is that the model of reality is never reality, so take the model as “this is useful, it’s not an absolute truth”. The moment I take it as an absolute truth and become some kind of weird fundamentalist who stops learning, who stops being open to new input, and then optimise the model where the model is different than reality, I can harm reality and then defend the model. So I always want to hold the model with “this is the best we currently have and in the future we’ll see that it’s wrong”. And we want to see that it’s wrong, we don’t want to defend it against its own evolution. And so what we’re optimising for can’t be fully explicated, and that’s what wisdom is. Wisdom is the difference between the optimisation function and the right choice.

[host] Oh I love this, this is great!
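
(A side note from me, not the podcast: that point about optimising a finite set of measurables is basically Goodhart’s law, and since I think in zeros and ones, here’s a tiny made-up Python sketch of it. Everything below - the option names, the numbers - is invented purely for illustration: an optimiser that can only see the measured proxies happily picks the option that an unmeasured factor makes worse overall.)

```python
# Toy illustration (my own, not from the podcast): a narrow optimiser
# ranks options by the measurables it can see, and misses the factor
# it cannot measure.

options = {
    # name: (measured_proxy_score, unmeasured_factor) - made-up numbers
    "option_a": (9.0, -6.0),  # looks best on the metrics, quietly harmful
    "option_b": (7.0,  2.0),  # slightly worse metrics, genuinely better
}

def proxy_score(option):
    measured, _unmeasured = options[option]
    return measured  # all the optimiser can "see"

def actual_value(option):
    measured, unmeasured = options[option]
    return measured + unmeasured  # what we actually care about

chosen = max(options, key=proxy_score)    # -> "option_a"
better = max(options, key=actual_value)   # -> "option_b"
print(f"narrow optimiser picks {chosen}, but {better} was the right choice")
```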

2:06:01 - Everyone who has a vested interest in the increasing asymmetries has an interest in decreasing people’s comprehensive education in the science of government.

So now let’s look at the education changes that happened following World War II in the U.S. There is a theory - a story that I buy - that the U.S. started focusing on STEM education - Science, Technology, Engineering, Math - super heavily partly because it was an existential risk: look what happened with the STEM the Germans did, and now we know that a lot of the German scientists that we didn’t get in Operation Paperclip the Russians got, and Sputnik, and so it’s an existential risk to not dominate the tech space, so we really need to double down on STEM and we need all the smartest guys, we need every von Neumann and Turing and to find them, so the smarter you are, the more we want to push you into STEM so you can be an effective part of the system. That’s part of the story - but also the thing that Washington said about education in the science of government: we started cutting civics radically, and I think it was because social philosophers of the time like Marx were actually problematic to the dominant system. And I’m not saying that Marx got the right ideas, I’m saying that OK, we have a system where: let’s have the only people who really think about social philosophy be children of elites who go to private schools and learn the classics, and otherwise let’s have people not fuck the system up as a whole but be useful to the system by being good at STEM. I think this is a way of being able to simultaneously advance education and retard the kind of education that would be necessary to have a self-governing system.

[host] That’s fascinating. That’s fascinating. Because of course if you have the elites effectively in charge of governance they can do exactly what you would imagine the elites would hope for, which is to govern well enough that the system continues on no matter what, but to continue to look out for the distribution of wealth and power and make sure nothing upends it, right. They’ll do it and they won’t even realise necessarily that that’s what they’re doing.

Maintain and progress

[host] Something marvellous requires a very careful arrangement of conditions in order for it to survive, and I’m wondering what you make of that in light of this discussion. I guess it’s not hard to make an argument for why those two things go together - capacity and fragility - but what are we to do about it going forward, because surely we are trying to build these states but to do so in a robust form.

[guest] Again it’s because of synergy - you as a whole have properties that none of your cells on their own have. There’s a synergy of those cells coming together that creates emergent properties at the level of you as a whole thing, but if I run all the combinatorial possibilities of the ways those 50 to 100 trillion cells could come together, very few of them produce the synergy of you. Most of them are just piles of goo, right, and so it’s a very narrow set of things that actually has the very high synergies, and lots of things that are pretty entropic. Entropy is also obviously easier - I can take this house down in five minutes with a wrecking ball and it took a year to build, and I can kill an adult in a second but it takes 20 years to grow one. So this is why the first ethic of Hippocrates and of so many ethical systems is first try to do no harm, then try to make shit better. But first do no harm. If you can succeed at the maintenance function you can maintain your progress functions.