Bridging to the outside world, "OAuth" style?

I came to think about a design pattern that could be useful for bridging to services outside of Holochain.

Is this doable? Is this already being done? Or is something in the hc architecture stopping this kind of approach?

Example case

An hApp that stores files. A Dropbox clone.

Folders, permission information, comments, whatnot are stored in holochain.

File BLOBs are stored in IPFS.

We don’t want the user to have to run an IPFS node, so we need an IPFS bridge server similar to the ones already running.

If Holochain and our DNA could act similar to an oauth server we could have something like this:

Request access token

The client is already authenticated but needs a token to prove it to the external service. Can the client request something similar to a unique oauth access token from the DNA in a secure way?

Validate access token

The external service would need to check in with the DNA to validate the token, how can this be achieved?

For this there has to be an HTTP API endpoint somewhere that serves as a bridge from the outside world back into Holochain. Possibly Holo will/should provide this kind of functionality for apps hosted there?

@pauldaoust, @pospi, @guillemcordoba and others, what do you think, is this doable? The pattern would open up a LOT of new possibilities.


Hi @kristofer, very interesting… So different things to unpack here I think:

  • File storage: I actually think that file storage will be totally possible, and even better with Holochain than with something like IPFS, since it has smart redundancy negotiation of blobs: you don’t have to maintain pinned IPFS blobs in a gateway server, the network maintains them automatically. Here is a PoC of the frontend for this. It’s basically unusable until the Wasmer upgrade lands (currently it needs 7 minutes to upload a 3 MB file), but it works, deduplicates blobs, and so on.

But I realize that your post doesn’t really concern this exact use case, but rather using Holochain as an identity provider, in a way. I think different use cases require different solutions; it’s hard for me to see one perfect solution for everything. Some possible patterns:

  • Using the Holochain in-house private key to provide access to certain encrypted data. If the blobs stored on IPFS are encrypted with the Holochain private key, you have everything you need (this brings its own set of problems, but you already need to encrypt data on IPFS anyway, since otherwise everything is public, so we can simplify things by using Holochain’s private key to do it).
  • Generating a custom Holo private key on the fly to integrate with other technologies: this is how we plan to maybe integrate DAOStack’s wiki into Holo in the _Prtcl.
  • Using DIDs and Holochain as a DID resolver.

The last thing is that Holochain still does not provide any kind of token auth to the UI it’s talking to (I think this will be implemented in the near future though). When it does, we’ll see the specifics of that and in which cases we can use it.

You are outlining an interesting pattern here, though. I think it would only work with Holo, since otherwise the server that has to validate the token would basically need to communicate with the local node on your PC. But going through Holo we have a major obstacle to overcome, I think: Holo does not store your private key, it’s generated on the fly in the client. So, to validate a token going through Holo, we would need to have it already generated and signed by the client. Maybe that’s a way to go… We’ll see.


Great response, thank you!

Yes, I think there are good use cases for using Holochain as sort of an identity provider. That would enable hApps to bridge to external systems without having to create external accounts, generate API keys, etc.

Ideally I just would like to be presented with a simple authorization message before bridging happens:

HoloCRM would like to send 10,000 email notifications using external
email provider X. You will be charged 1 Holofuel.

[OK] [Cancel]

Yes! I will be exploring that as well.

hdk::sign and hdk::verify_signature could work well in simpler cases.

I was not aware of DIDs, will definitely be looking into that going forward!

I imagine the verification is often a one-time thing, so no performance issue. The verification stays valid until revoked, like an API key. But yes, the node doing the initial signing would have to be reachable when the verification occurs.

Hmm! Actually I don’t think this is possible in the way you have described it. My understanding is that the Holochain security model explicitly forbids DNA code from accessing the outside world, except via bridging to other Holochain networks.

So all this would have to be UI layer. The second diagram you’ve posted may be possible with HOLO (a pingback from the bridge server to the DNA), or a pingback to a page in the user’s local Holochain conductor which then forwards the signed hash on to the DNA.

But either way it doesn’t really matter. DNA code is just as untrusted as UI code really, given that all of it runs on an individual’s local machine. For this use-case I would expect waiting for IPFS and then saving the hash to Holochain from the browser would be totally fine.

Mmm, interesting. I’ve never come across any limitations on Holochain reaching out to different platforms, except in validation rules.

I generally agree with you, but if you’re trying to authenticate a user identified by their public key, a signature of the appropriate data will be sufficient, no? It could actually come from anywhere. And in this case it is stored privately in the DNA on that user’s machine.

Hmm maybe we need @artbrock or some core devs to chime in here. IIRC Art told me that this was a core part of the Holochain security model and an important aspect of preventing non-determinism in DNA code.

if you’re trying to authenticate a user identified with their public key, a signature of the appropriate data will be sufficient no?

Sounds right to me!

Ah, that determinism part definitely sounds like he was talking about validation rules, and in that case I totally agree. Outside of them, there should not be any problem…

I really don’t think so :grimacing: even in standard zome code it would lead to nondeterminism. What if an endpoint depends on an NTP server? Two different nodes would get two different values for the same input payload and write different data to the DHT. Isn’t that an issue?

I’m pretty sure WASM runtimes are sandboxed in at least some ways…