Capabilities

This thread continues a discussion started by @pospi and David Hand about how to use the capabilities API and what the best patterns are for access control, governance and delegated authority in Holochain apps.

The original thread: https://chat.holochain.org/appsup/channels/dev-capabilities

Sharing the key comments to bring visibility to the topic.

From @pospi: I think @david-hand was planning to lead the fact-finding we need for this, but to start, perhaps the most worthwhile thing is a summary of our current thinking to pull apart & discuss:

A common use-case in REA networks is to have application-specific behaviour in other modules coded to respond to different types of AgentRelationships. For example, a Person who is a member of an Organisation may have special privileges that non-members do not.

This module deals with People, Organisations and recording the AgentRelationships between different entities, nothing further. Having a separate collaboration space to record this information allows organisational structures to be shared with other modules without the logic for managing such structures needing to be replicated in each module. It may be accessed by other modules to perform access checks, though it is more likely that such access checks would reference the local Holochain capability register rather than data in other modules.

The expected parameters needed to allow for delegated authority are detailed in this GitHub issue, but the details are pending decisions to be made by the HDK core team.

We probably want this module to orchestrate permissions and bridging for agents who join & leave managed groups, and filter those permissions down through the connected / deployed networks by way of capability grants. In implementing this we may want a split: one zome for the governance processes & agent relationships, another for applying REA-specific capabilities to the rest of the network. The latter would be reusable with other governance frameworks; the former would not. Capabilities (and access control logic) are always local to each module; to modules other than agent relationships, the agents who interact with them are all just anonymous people who happen to have permission to do certain things.

^referenced GH issue: https://github.com/holo-rea/holo-rea/issues/9

Related, more direct questions:

  • Can you have an agent key which no single person has access to? NO.
    • (failing that:) can you generate a capability token for a non-agent identity?
      • (failing that:) can you generate arbitrary capability tokens on the fly & validate them by hand within a zome callback?
  • Is it possible to create a capability token and lock it up within a DNA somehow, so that it can only be used in operations that are triggered by the DNA (rather than triggered directly by an agent)?

From David-Hand: What I would like to know is as follows:

  • Is there a way for one app to generate a capability token for another app?
  • Are capability tokens specific to an agent, or to an agent/app pair?
    • if app A issues a capability token, then in app B on the same agent, does that token still exist?
    • Can app B query that token somehow to validate that it exists?
  • If app A requests a token from app C, can app A then pass that token to app B? Can app B still use it to access app C?

Thanks for initiating this thread @dhtnetwork.
@pospi we’ve been thinking about this too for our interchange. Correct me if you think it’s not related.
We need to keep a record of which apps the users have ‘staked their reputation’ into. This is important because it gives the user read access (and in some cases write access) to information in other DNAs. The easy option is to record this table within our DNA, but we think it might be possible to keep this record in the manner you have described above.

@pospi I forget exactly where that conversation left off, and how old it was. I presume you have more questions about capability tokens, cross-DNA privileges, and how a set of modules like HoloREA could use those primitives — if so, ask away!

Hint: for now it seems like a capability grant/claim is just a pattern that’s embodied in a special data structure with some convenience functions to make it easier to operate on it. I think the reason for this is that it was originally a low-level feature that got exposed in the HDK because it could be useful for app devs. My hunch is that you will want to take our pattern of cap tokens and reimplement it in a way that serves your needs — partly because cap grants/claims are private entries that don’t get published to the DHT.

I’m learning that, in proper capability-based security, the agent granting access to a resource must have full control over it. For Holochain this means that capability claims are exercised by asking the grantor to perform an action for you. In multi-agent environments where the DNA can ‘trigger events’ this doesn’t hold in the same way, although I still think it’s true that you could design something… perhaps you don’t have any one agent strictly controlling access to the resource, but you do have the DNA and the DHT enforcing how resource access is exercised. Would be a productive/fun topic to get into, and it would help inform my documentation writing too!
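
To make the ‘ask the grantor’ part concrete, here’s a rough sketch of the shapes involved. None of this is the HDK API; every type and function name below is made up for illustration.

// Hypothetical sketch of the grant/claim pattern: the grantor stores grants
// privately, the claimant stores a claim privately, and access is exercised by
// asking the grantor to perform the action. Not HDK types.

type Address = String;   // stand-in for an agent address
type CapToken = String;  // stand-in for a capability token

// Committed privately to the grantor's source chain when they issue a grant.
struct CapGrant {
    token: CapToken,
    assignee: Option<Address>,  // None = anyone presenting the token
    functions: Vec<String>,     // which zome functions the grant covers
}

// Committed privately to the claimant's source chain so they can present it later.
struct CapClaim {
    grantor: Address,
    token: CapToken,
}

// The grantor receives a node-to-node request, checks it against their own
// grants, and performs the action themselves; the claimant never touches the
// grantor's chain or entries directly.
fn handle_request(
    grants: &[CapGrant],
    claim: &CapClaim,
    caller: &Address,
    function: &str,
) -> Result<(), String> {
    let authorized = grants.iter().any(|g| {
        g.token == claim.token
            && g.functions.iter().any(|f| f.as_str() == function)
            && g.assignee.as_ref().map_or(true, |a| a == caller)
    });
    if authorized {
        // ... run `function` locally on behalf of the caller and return the result ...
        Ok(())
    } else {
        Err("no matching capability grant".into())
    }
}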

This sounds accurate :slight_smile: The more I dive into CBAC the more I see it as the elegant general-purpose solution to access control in distributed systems. I think @thedavidmeister shares that opinion; indeed he’s the main one who sold me on this.

So what we’re thinking in terms of capabilities mainly relates to how you implement RBAC (roles) on top of CBAC (capabilities); at a technical level we’re pretty sure it’s about grouping sets of capabilities into roles to lessen configuration overhead.
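
At that technical level, a role might be nothing more than a named bundle of capability definitions, so that assigning a role expands into issuing one grant per capability. A minimal sketch with made-up names (this isn’t anything in the HDK):

// Hypothetical shapes for layering RBAC over CBAC: a role is just a named
// group of capabilities, so granting a role means issuing the corresponding
// capability grants in one go.

#[derive(Clone)]
struct CapabilityDef {
    zome: String,
    function: String,
}

struct Role {
    name: String,                      // e.g. "organisation-member"
    capabilities: Vec<CapabilityDef>,  // what holding this role lets you call
}

// Expand a role assignment into the individual (assignee, capability) grants
// that would actually be committed.
fn grants_for_role(role: &Role, assignee: &str) -> Vec<(String, CapabilityDef)> {
    role.capabilities
        .iter()
        .cloned()
        .map(|cap| (assignee.to_string(), cap))
        .collect()
}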

We think there’s probably need for a use-case specific “role configuration” mechanism that groups might want to author on their own; coupled with a general-purpose capability mapping zome and/or library. Short version-

We probably want this [RBAC] module to orchestrate permissions and bridging for agents who join & leave managed groups, and filter those permissions down through the connected / deployed networks by way of capability grants.

What level of abstraction do we think this lives at? Is facilitating group management functionality part of Holochain’s core app infrastructure, or something to be done on a per-project basis?

I’m never sure what belongs in core-land vs library-land — sometimes I think CRUD and even source chains should’ve stayed out of core and been part of the HDK instead :sweat_smile:

When I learned about our implementation of capability-based security, and how basically all that we’ve opened up (so far) is sugar around committing capability grants/claims, I thought, “hm, we could do so much more with this concept!” But it grew out of the need to enforce access restrictions on zome calls, and it shows that heritage in the fact that the grant entries are always private. What if we enforced capabilities at the validation level as well as the agent level?

I’ve been chewing on a general-purpose application of the pattern to publicly enforced privileges in “How to do validation on a custom function of a module? - #2 by pauldaoust”. Imagine the implementation described there, only with a lot fewer built-in assumptions and more generic params. Something like…

pub struct PublicCapabilityBody<T> {
    // Optional -- if None, anyone holding the token may make the claim, which
    // might be the wrong thing to do when all grants are public on the DHT.
    subject: Option<Address>,
    // Application-specific.
    terms: T
}

// This is kind of like a certificate.
pub struct PublicCapability<T> {
    body: PublicCapabilityBody<T>,
    authorization: holochain_core_types::signature::Provenance
}

Then your terms T could be anything you like:

  • A string containing a role
  • A tuple consisting of an entry hash and the privilege being granted
  • A ‘wrapper’ terms type which adds an expiry date to any granted authority (important for revocation; maybe expiry ought to be part of PublicCapability itself). Expiry matters because validation functions shouldn’t be checking the current state of the DHT, so all the information needed for determining the validity of the entry should be supplied at validation time: embedded in the entry, committed as a previous entry and supplied as part of a validation package, or specified as a validation dependency (not supported yet). A sketch of such a wrapper follows this list.
  • etc…
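
To make that composition idea concrete, here are a few hypothetical terms shapes that could be dropped into PublicCapabilityBody<T>; none of this is a real API, just illustrations:

// Illustrative `terms` types for PublicCapabilityBody<T> (hypothetical).

type Address = String;  // stand-in for an entry/agent address

// 1. A bare role string.
type RoleTerms = String;

// 2. An entry hash plus the privilege being granted over it.
enum Privilege {
    Update,
    Delete,
    LinkTo,
}

struct EntryPrivilegeTerms {
    entry: Address,
    privilege: Privilege,
}

// 3. A wrapper that adds an expiry to any inner terms, so the validity window
//    travels with the entry and validation can stay deterministic.
struct Expiring<T> {
    expires_at: i64,  // unix timestamp; a proper DateTime type would do as well
    inner: T,
}

// e.g. PublicCapabilityBody<Expiring<EntryPrivilegeTerms>>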

Can you tell I like composition? It may be better, for the purpose of developer ergonomics, to bake in a bunch of assumptions about what developers will want in their terms:

pub struct PublicCapabilityBody<T> {
    // Who may exercise the capability; None = any bearer of the token.
    subject: Option<Address>,
    // What the capability applies to; None = not tied to one resource.
    resource: Option<Address>,
    // When the grant stops being valid; None = never expires.
    expiry: Option<DateTime>,
    // Application-specific conditions.
    extra_terms: T
}

The thing I’m having a hard time wrapping my mind around is that true capability-based security depends on one component having complete control over a resource and mediating access to it through the token and nothing more. In a distributed system where the controller of a resource may be offline for a long time, what does ‘control of a resource’ mean? The DHT itself can enforce whether an action is valid or not, but it can’t control a resource that everybody knows about. Maybe capability-based security is the wrong pattern for this use case, and we need to reframe this as some sort of portable, certificate-based proof of privileges. Sort of like the W3C’s Verifiable Claims, which has some very, very agent-centric language.

@pauldaoust because it isn’t a resource that everybody knows about if it is private behind a cap token?

@thedavidmeister yeah, that’s the problem I’m struggling with — is it possible to take the concept of capability tokens (control of a resource + access granted via a token) and somehow project it into a public sphere? My current thoughts are (a) no, but (b) collectively-enforced, collectively-granted use of collectively-held resources (in other words, verifiable claims + validation rules) can have some similar data structures and solve the same need to govern resource access. Maybe we need new patterns with new names. I feel like Elinor Ostrom’s work could inform us here.

@pauldaoust i’m not 100% sure what problem you’re trying to solve, is it data availability for things sitting behind a capability?

@thedavidmeister more about a general pattern for controlling access to public things (which for Holochain means write access to the DHT). I’m thinking out loud, so my thoughts probably seem scattered as my understanding evolves.

My criteria, assumptions, and understanding of the problem space so far:

  • Capability-based security is a very clean way of thinking about access control privileges.
  • But capability-based security is only applicable when access to a resource is completely controlled by the owner of that resource (e.g., an agent has exclusive control over their own instance, zome functions, source chain, etc)
  • The DHT is a commonly held resource, which means that capability-based security isn’t appropriate. Therefore, we need something else.
  • Because the DHT is held in common, read privileges can’t be controlled, only write privileges can. That means that you can’t control access to a resource simply by hoping that its hash won’t be discovered.
  • A good solution would have the same cleanness of capability-based security and avoid the messiness of other solutions like ACLs, RBAC, etc. Essentially: possession of a valid token constitutes power to make a claim against the privileges that token represents.
  • A good solution would also ‘feel’ like HC’s capability-based security so that developers don’t have to do too much cognitive shifting.
    • Corollary: a good solution would also feel distinct enough to make it clear to developers that this is a different construct.
  • It would have to be built using validation callbacks, which means it would have to be deterministic (all information needed to verify the token would have to be presented as a validation dependency; no contacting an ACL structure on the DHT to see if a privilege exists and is still valid). See the sketch after this list.
    • Fortunately, a capability token is sort of a ‘bearer instrument’ that confers the privileges it represents.
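
Roughly, the deterministic check would be a pure function of the token plus whatever context travels with the entry being validated, with no DHT lookups. A minimal sketch, with every name below made up:

// Hypothetical deterministic token check: everything it needs (the signed
// token, the grantor's public key, and the timestamp of the entry being
// validated) arrives as arguments; nothing is fetched from the DHT.

type PubKey = String;
type Signature = String;

struct SignedToken {
    payload: Vec<u8>,      // serialized PublicCapabilityBody
    expires_at: i64,       // unix timestamp baked into the token
    signature: Signature,  // grantor's signature over payload + expiry
}

fn validate_token(
    token: &SignedToken,
    grantor: &PubKey,
    entry_timestamp: i64,  // comes from the validation package, not from "now"
    sig_ok: impl Fn(&PubKey, &[u8], i64, &Signature) -> bool,  // host signature check
) -> Result<(), String> {
    if entry_timestamp > token.expires_at {
        return Err("token expired relative to the entry's timestamp".into());
    }
    if !sig_ok(grantor, &token.payload, token.expires_at, &token.signature) {
        return Err("signature does not match the grantor".into());
    }
    Ok(())
}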

To my mind, the only thing that can satisfy the criteria given my assumptions is some sort of portable, verifiable claim/credential. This would look like a certificate signed by the agent who has authority to grant a particular privilege. More thoughts:

  • Token revocation: Because all information has to be supplied when the validation dependencies are being collected, validators couldn’t check a revocation list. Therefore, revocable privileges should probably be implemented with some sort of validity period appropriate for the privilege being granted. This’d mean a token holder would have to keep renewing their token, à la SSL certificates.
  • We wouldn’t want to make too many assumptions about how grantors, subjects, resources, and conditions should be defined. Some thoughts about specific scenarios (a few of which are sketched in code after this list):
    • A certificate that’s only valid when it’s signed by a quorum of n authorities.
    • A certificate that never expires.
    • A privilege that a subject can or can’t delegate to another subject.
    • An edit/delete privilege that’s granted for a particular entry or link.
    • A create privilege that’s granted for a particular entry or link type.
    • etc…
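
Sketching a few of those scenarios as data, with every name made up for illustration: a claim could carry its own validity window, a quorum requirement, and a flag saying whether the subject may re-delegate it.

// Hypothetical certificate-style claim covering the scenarios above.

type Address = String;
type Signature = String;

struct PrivilegeClaim<T> {
    subject: Address,         // who may exercise the privilege
    terms: T,                 // e.g. (entry hash, privilege) or an entry type
    expires_at: Option<i64>,  // None = never expires
    delegatable: bool,        // may the subject re-issue this to someone else?
    required_signers: usize,  // quorum: how many signatures must check out
    signatures: Vec<(Address, Signature)>,  // grantor provenance(s)
}

// A quorum check usable inside a deterministic validation callback: count how
// many attached signatures verify against the known authorities.
fn quorum_met<T>(
    claim: &PrivilegeClaim<T>,
    authorities: &[Address],
    sig_ok: impl Fn(&Address, &Signature) -> bool,
) -> bool {
    let good = claim
        .signatures
        .iter()
        .filter(|(signer, sig)| authorities.contains(signer) && sig_ok(signer, sig))
        .count();
    good >= claim.required_signers
}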

@pauldaoust i did some work with @philipbeadle around a JWT style analogy that could go somewhere near revocation by bridging into a deepkey DHT

this still only controls writes through an expiry model, either against a trusted source of timestamps or “this token allows up to N writes before you need to get a new one”
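
roughly, the token itself carries whichever constraint applies, something like this (made-up shapes, not deepkey or HDK types):

// rough sketch of a write-authorising token under the "expiry or N writes"
// model above; all of these shapes are hypothetical

type Signature = String;

enum WriteBudget {
    Until(i64),      // valid until this timestamp (needs a trusted time source)
    Remaining(u32),  // valid for up to N writes before you need a new token
}

struct WriteToken {
    identity: String,  // the authenticated identity, rather than a raw key
    budget: WriteBudget,
    issuer_signature: Signature,
}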

this is about writes tied to an authenticated identity rather than a key, against some other system; it might cover some of what you’re reaching for but not all

the thing is that if you start putting arbitrary logic on reads then monitoring network health would become (i think) infinitely difficult as per the design intent of rrDHT

i don’t know how you’d enforce such a security model either… you can revoke access to new things but not things that already exist in the public space

my gut is that you’re better off controlling who can join the network and open/maintain connections using membranes (could be token based) and slicing your DHTs along the lines of who should be participating with that public data within that network, than deliberately creating “sub-DHTs” within a single network

is there a specific use case you have in mind that breaks this model?

Not necessarily, I’m in that dangerous territory of considering possibilities without specific use cases.

To clarify, I’m not interested in solving read access — as you say, membranes are the right place to control that sort of thing. When I referred to the inability to control read access, I just meant that it wouldn’t be sufficient to control write access to something simply by hoping that nobody would discover its hash, because everyone can eventually read everything.

It sounds like you’re saying that it’d be much simpler to control write access by controlling who can join the DHT, than implementing some sort of privileges system within the DHT. Fair enough, and I think that’s probably true for most applications.

Actually, I am thinking about a use case where you would want to have different privileges in a DHT… there are probably many others, but this one is salient because it relates to HoloREA.

Suppose I participate in a co-op. Within our little co-op DHT, all of us have permission to do whatever we like. But in interfacing with the outside world, only some of us have the authority to represent the co-op. And those people may leave the org and new ones may replace them.

So now let’s consider a global marketplace DHT. Those who are purchasing from / selling to me have to know that I have the right to represent the co-op. So in that global marketplace, the co-op has a record that makes its entity-ness known to the world, signed by its founders. This record indicates that anyone presuming to represent the co-op must carry a claim with them that has the signature of at least one of the founders, either directly or as part of a chain of claims.
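
Here’s roughly how I picture that chain of claims, and how a validator could walk it back to a founder’s signature. Every name below is made up for illustration; it’s not a proposal for a real API.

// Hypothetical chain-of-claims check: a representative carries a list of
// claims, each signed by the previous link, and the chain must bottom out at
// one of the co-op's founders.

type Address = String;
type Signature = String;

struct DelegationClaim {
    issuer: Address,       // who granted the authority
    subject: Address,      // who received it
    signature: Signature,  // issuer's signature over (issuer, subject)
}

fn may_represent(
    claimant: &Address,
    founders: &[Address],
    chain: &[DelegationClaim],
    sig_ok: impl Fn(&DelegationClaim) -> bool,
) -> bool {
    // The last claim in the chain must name the claimant as its subject...
    let Some(last) = chain.last() else { return false };
    if &last.subject != claimant {
        return false;
    }
    // ...each link's subject must be the issuer of the next link...
    if chain.windows(2).any(|w| w[0].subject != w[1].issuer) {
        return false;
    }
    // ...every link must carry a valid signature...
    if !chain.iter().all(|c| sig_ok(c)) {
        return false;
    }
    // ...and the first link must be issued directly by a founder.
    founders.contains(&chain[0].issuer)
}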

I read a bit about JWT, and it sounds like they’re very similar to W3C Verifiable Claims/Credentials — in fact, you can format a VC as a JWT. So it sounds like we’re on the same page there (and that’s probably because I got the idea from you, many months ago). It strikes me that it’s not that different from an SSL cert too — it’s just a cryptographically signed way of communicating that some authority recognises some other entity and claims that they should have the right to do such-and-such a thing.

I did notice that JWT payloads are encrypted, though, which wouldn’t work for this use case because the details of the claim need to be readable in order to be validated.

@pauldaoust JWE payloads are encrypted, while JWT payloads are just base64-encoded plaintext

even more, the JWT spec lets you specify the alg used to sign, so we were looking at whether we could specify the deepkey-compatible DNA being used as the identity provider

this is all authentication though, on top of which authorization can be built

this all sounds exactly like what @philipbeadle is working on right now

this has nothing to do with capabilities though :slight_smile:

that’s about protecting access to semi-private data

that needs to sit with the owner of the data because once something becomes shared, the network needs to monitor its health in an agent centric way

i have no idea how you’d build efficient heuristics to monitor the health of data when you don’t know who is allowed to see it until after you interact with them directly 1:1 and run a function that may or may not be time sensitive…

so yeah:

identity tokens for internal and external authentications :+1:

capability logic for semi-private owned data :+1:

slicing DHTs across membrane/access boundaries :+1:

trying to create sub-DHTs on the fly :woman_shrugging:

oh yeah, you’re right — just base64. That’s what I get for skimming!

Sounds like I’m gonna have to talk to @philipbeadle some more!

When you say “semi-private” do you mean private source chain entries that I share through N2N messaging after you present a valid claim? I thought capabilities were meant as a primitive to facilitate this sort of thing, around which app devs build all the communication logic.

oh! Curious to hear why that’s a :woman_shrugging: — is it starting to feel like an anti-pattern?

@pauldaoust yes, that’s what i mean by semi-private

it’s a :woman_shrugging: because i don’t know how to do it

ah. got it. Re: on-the-fly sub-DHTs, my thinking was that the UI could just call admin/dna/install_from_file and pass a param (maybe a UUID, maybe a property) that would fork the DNA into a separate space. Then it would call all the admin functions to create/start an instance from that new DNA and bind it to an interface, etc. Then of course the UI would be responsible for handling all the flow between instances; you can’t set up a conductor bridge at runtime, can you? I can’t imagine how it would be possible, because the DNA would have to know about the bridge at compile time, and also you might have many mini-spaces to bridge to (thinking of private chat channels for instance).

Membrane control could involve passing a "progenitor" key/value into install_from_file that contains the public key of the root authority that signs all the claims that people bring with them. This’ll only work when we’re able to put arbitrary data (such as the aforementioned claims) into the agent ID entry though, because validate_agent is what creates the membrane.
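
To sketch what I mean (all of this is speculative: the "progenitor" property, the claim embedded in the agent ID entry, and every helper name below are assumptions, not current conductor behaviour):

// Hypothetical membrane check in the style of validate_agent: the DNA is
// installed with a "progenitor" public key in its properties, and every
// joining agent's ID entry must carry a claim signed by that key.

type PubKey = String;
type Signature = String;

struct JoiningClaim {
    member: PubKey,        // the key of the agent trying to join
    signature: Signature,  // progenitor's signature over `member`
}

struct AgentIdEntry {
    agent_key: PubKey,
    claim: Option<JoiningClaim>,  // the "arbitrary data" we'd need to be able to embed
}

fn validate_agent(
    entry: &AgentIdEntry,
    progenitor: &PubKey,  // read from the DNA's properties at install time
    sig_ok: impl Fn(&PubKey, &PubKey, &Signature) -> bool,  // (signer, message, sig)
) -> Result<(), String> {
    let claim = entry
        .claim
        .as_ref()
        .ok_or_else(|| "no joining claim supplied".to_string())?;
    if claim.member != entry.agent_key {
        return Err("claim was issued for a different key".into());
    }
    if !sig_ok(progenitor, &claim.member, &claim.signature) {
        return Err("claim is not signed by the progenitor".into());
    }
    Ok(())
}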

Anyhow, that’s the plan I’ve mapped out for the tutorial that accompanies my Core Concepts article on ‘Spawning new DNA instances from a template’ (not yet written). Let me know if you think this is wacky or bad :wink:

@pauldaoust that’s something different again, that’s not a "sub-DHT"; building new apps from templates still gives you a complete, standalone DHT with its own network health heuristics and validation, etc.

the bit you can’t do is within a single DHT say “some of you get this set of data and some of you get this other set of data”, outside the standard neighbourhood logic

Oh, I get it. I thought you meant "sub-DHT" as in the many-forked-mini-DHTs pattern we’ve already established. The SSB folks are working on something that would control propagation (or at least visibility) of data using double ratchets and tangles, but that’s not something I think we should think about supporting.

Anyhow, I’m tempted to delete my previous post because I think it takes away from the conversation at hand. Mind if I do that and delete your follow-up as well?

@pauldaoust i think it’s nice to continue the train of thought :slight_smile:

i kind of like how messy the forums are, feels very organic

Creating new DHTs and joining new DHTs is on my roadmap for Hylo. When you create a new Community it will create new DHTs for Community, Post, Comments. My current issue is managing the instanceID so the UI knows which DHT to be calling.
