At odds with agent-centrism!

a server API is also a protocol, and server APIs have many authentication and authorisation techniques

just because you know a protocol doesn’t mean you have unlimited access

If you’re saying in relation to this:

Then by “public data sacrificed”, I mean the public data not just of the cheater, but of everyone!

But the web APIs you’re talking about do so via a centralized firm running the API servers that serve those requests strategically! That’s the good old world, not Holochain’s world!

i’m not sure what your question is

most applications compose many services with different utility, access patterns and security

the same is true here, some data makes sense to be open and some makes sense to have restricted access, and the combination is what achieves a good user experience

yes, once you trust someone to have access to something, they have access to it and could share it. i personally have had access to plenty of databases full of user data across many systems

Okay, in brief:

how do you implement collaborative filtering on Holochain (or any agent-centric system)?

i’m explaining some tools

for example you can:

  • encrypt data for a specific recipient
  • encrypt data based on a shared key
  • send data directly to a user over an encrypted channel
  • use any encryption scheme that compiles to wasm
  • allow users to RPC call into your device behind an access token
  • save data on your chain such that it is not shared with the network
  • create private networks with trusted participants behind a security challenge
  • implement validation rules restricting who can write data and the shape of the data
  • create a short-term DHT that participants use for a specified activity or length of time, then stop using

and mix and match these
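One of the items above (“allow users to RPC call into your device behind an access token”) can be sketched in a few lines. This is purely illustrative Python, not the Holochain API; `issue_token` and `authorize_rpc` are made-up names:

```python
import hashlib
import hmac
import secrets

# Hypothetical sketch: the device owner issues a token to a trusted
# caller and later verifies it in constant time before serving an RPC.
SECRET = secrets.token_bytes(32)  # held only by the device owner

def issue_token(caller_id: str) -> str:
    """Derive an access token bound to a particular caller."""
    return hmac.new(SECRET, caller_id.encode(), hashlib.sha256).hexdigest()

def authorize_rpc(caller_id: str, token: str) -> bool:
    """Constant-time check that the presented token matches the caller."""
    expected = issue_token(caller_id)
    return hmac.compare_digest(expected, token)

tok = issue_token("alice")
print(authorize_rpc("alice", tok))    # True
print(authorize_rpc("mallory", tok))  # False
```

The same challenge/response shape underlies several of the other bullets (private networks behind a security challenge, membrane proofs, etc.).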


And I’ve given it hours of thought, but I still can’t see how to collaboratively filter relevant data for oneself in an agent-centric system using the above tools alone, without making the private browsing history of the app’s users public! That route would be obvious, but isn’t it a horrible idea?

Yeah, and I guess that’s how you’re aiming to garbage-collect a bit of stale data (like that of async conversations)… Clever! Keep up the good work!

i personally haven’t designed a collaborative filtering system before

i don’t want to pretend to know all the deep nuances there

my naive thinking is that you have a challenge that filters out garbage bots ‘at the door’ but then the participants each only copy in anonymised and whitelisted history that they want to contribute and have cross-referenced in return for better recommendations
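That naive flow can be sketched concretely. Assuming each participant contributes an anonymised, whitelisted slice of history under a pseudonym (here just sets of item ids), a toy similarity-based recommender might look like this — made-up names and data, not a real collaborative-filtering library:

```python
def jaccard(a: set, b: set) -> float:
    """Similarity of two item sets: |intersection| / |union|."""
    return len(a & b) / len(a | b) if (a | b) else 0.0

def recommend(my_items: set, pool: dict, k: int = 2) -> list:
    """Rank unseen items by votes from the k most similar contributors."""
    ranked = sorted(pool.items(),
                    key=lambda kv: jaccard(my_items, kv[1]),
                    reverse=True)
    votes = {}
    for _, items in ranked[:k]:
        for item in items - my_items:
            votes[item] = votes.get(item, 0) + 1
    return sorted(votes, key=votes.get, reverse=True)

# Anonymised, whitelisted histories contributed by other participants.
pool = {
    "anon-1": {"rust", "wasm", "dht"},
    "anon-2": {"rust", "crypto", "p2p"},
    "anon-3": {"cooking", "travel"},
}
print(recommend({"rust", "wasm"}, pool))
```

The privacy question from earlier in the thread is exactly about what goes into `pool`: the quality of the recommendations depends on how much of each participant’s history gets shared.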


i’m also guessing common techniques for this assume no privacy at all and that’s part of what makes them so good at recommending things :sweat_smile:

It’s private to the extent that those private companies are. Well, at least it’s not totally public! I mean, what’s the probability of waking up today and finding out that facebook’s data centers were breached and all the private stuff is out in the open? Nil!

But yeah, at the algorithmic level, said algorithms have access to everything private, with no anonymity concerns.

it’s not nil

i know of at least one case where facebook had insufficient access controls on their CDN and private photos were leaked via a cache

here’s the top hit from google for another breach https://www.nytimes.com/2018/09/28/technology/facebook-hack-data-breach.html


if you know an algorithm that can do recommendations from anonymous or at least pseudonymous data i think we can make that work :slight_smile:

otherwise it seems by the nature of the algorithm you need to trust something or someone somewhere (which we can also make work, but it’s not quite as exciting)


Yeah, I remember reading a paper about doing collaborative filtering in a distributed system some months ago… Don’t know which exactly.

However, if I know you visited nytimes today, my chance at finding your anonymous counterpart gets higher! And I also know you visited this forum, and BOOM! I got you! Now everything you do online, I’ll know… You get the idea…
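This linkage attack is easy to demonstrate: with just two known visits, set containment over the “anonymous” histories narrows things down to a single pseudonym. All the data below is invented for illustration:

```python
# "Anonymous" browsing histories published under pseudonyms.
anonymous_histories = {
    "anon-A": {"nytimes.com", "forum.holochain.org", "github.com"},
    "anon-B": {"bbc.co.uk", "reddit.com"},
    "anon-C": {"nytimes.com", "weather.com"},
}

# What the attacker happens to know about their target's day.
known_visits = {"nytimes.com", "forum.holochain.org"}

# Any pseudonym whose history contains every known visit is a candidate.
candidates = [who for who, hist in anonymous_histories.items()
              if known_visits <= hist]
print(candidates)  # only one pseudonym matches both known visits
```

With realistic history sizes, two or three known visits are often enough to make the match unique, which is why pseudonymity alone is such weak protection here.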

Yup, right. That’s the conclusion I’ve reached thus far…

holochain can give you roughly as much decentralisation or centralisation as you are comfortable with for a given algorithm

in this case it seems the algorithm itself is invasive, not sure if we can ‘repair’ it at the network level :frowning:

but we can at least enforce standards on trusted processors, e.g. like medical records and payments etc.

e.g. so many companies out there just process credit card payments without proper audits and compliance, or they get an audit once instead of annually and just let it lapse, etc.

we could implement a network that you can only connect to if a registered auditor crypto signs your devices once a year as per an established international standard - actually this could be done at the holo level too

so actually we can do trusted servers better too :wink:


The design pattern I envisioned for doing such stuff is (please bear with me) a bit comical at best!


A (h)app has multiple agent-types in the DNA rules.

  • Plain-old users. They include both news publishers and viewers, and any other role you might have in your (h)app.

  • Collab_Gods: These are the agents to whom you submit your continuous private history in return for recommendations. It’s essential that you trust them.

Now, to create a collab_god agent, you need a private key that only the trustworthy organization that developed the (h)app has. If the agent can “sign” a random number that the prove_youre_a_god function asks it to sign, then great! You’re a collab_god now! The corresponding public key is simply specified in the DNA!

The organization spawns (much like microservices) all those collab_gods (yes, multiple gods, for load balancing) with some shell script that they pass the top-secret private key to. And voila! The gods are now running on holo-ports all over the world. As part of their functionality, they do all the hard work of collaborative filtering, encrypting any public data they produce in the process with the secret that only they know. The users just pick a random collab_god to send their browsing data to and receive recommendations from, and they do so every time they come online.

And if, let’s say, the original organization’s key gets compromised, then worry not! It’s not the end of the world! Someone else whom the community trusts enough can create a new breed of collab_gods with a new private key for that purpose; in fact, multiple god breeds should be able to coexist (as per the DNA). Just publish (as an entry) the public key of the new breed, and that’s how others can refer to it (via its corresponding public key on the DHT).
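For illustration, the prove_youre_a_god admission flow could be sketched as below. A real DNA would verify an actual public-key signature (e.g. Ed25519) against the public key baked into the DNA; here an HMAC over a random challenge stands in for the signature, purely to show the challenge/response shape. All names are from this sketch, not any real API:

```python
import hashlib
import hmac
import secrets

# Stand-in for the organization's top-secret key (in the real scheme,
# a signing key whose public half is specified in the DNA).
ORG_SECRET = secrets.token_bytes(32)

def sign_challenge(secret: bytes, challenge: bytes) -> bytes:
    """Toy 'signature': HMAC of the challenge under the org key."""
    return hmac.new(secret, challenge, hashlib.sha256).digest()

def prove_youre_a_god(applicant_sign, challenge: bytes) -> bool:
    """DNA-side check: did the applicant sign our random challenge?"""
    expected = sign_challenge(ORG_SECRET, challenge)
    return hmac.compare_digest(expected, applicant_sign(challenge))

challenge = secrets.token_bytes(16)
legit = lambda c: sign_challenge(ORG_SECRET, c)       # holds the org key
impostor = lambda c: sign_challenge(b"wrong" * 8, c)  # does not
print(prove_youre_a_god(legit, challenge))     # True
print(prove_youre_a_god(impostor, challenge))  # False
```

Note that with a real signature scheme the DNA would only need the public key, whereas this HMAC stand-in would wrongly require the verifier to hold the secret too — that is exactly the gap a proper Ed25519 check closes.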


Feels like I’m making fun of myself bringing ancient Greek mythological gods into the discussion, but yeah… Maybe it’s a good idea? In fact, it can even provide the company a revenue stream if each call to recommend_me(pub_key_of_god_breed_to_consult) charges a small fee for the service (after all, it’s the company that pays for the holo-hosting, not just for the god-agents but also for the users)! What’s your opinion on this? Should agents be restricted to human beings (as is evident from the great many people trying to build biometrics and other rubbish into Holochain apps), or should we have such Gods of Thunder in the app?

And it can be expanded much beyond that. You could have a God_Of_Time to do time-based routine jobs on the network; the possibilities are endless! Representing such features as full-fledged agents could let us mitigate the limitations of purely-human agent-centric systems!

Or is this a silly idea? Does it break any implicit invariant that I’m unaware of that Holochain assumes? @thedavidmeister ?