Knowing what I know about Holochain, here are my thoughts…
Since hApp DNA is technically Rust compiled to WebAssembly, WebAssembly is the fundamental language we need to consider.
WebAssembly, however, does not have direct access to any system resources, like Input/Output.
I think the only way it would be possible is if Holochain itself implemented an HDK function that runs on the host machine itself (compiled from Rust to native code) and is called by the compiled WASM.
Perhaps if enough hApp developers requested this, the team would consider adding such a method to the core implementation of Holochain, as well as to the HDK.
A long-running client could mediate this sort of thing. Here’s how it might look:
The zome function emits a signal to the listening client, saying “please tell the Slack API that xxx event has just happened”.
The client does the lookup, then calls the Slack API on behalf of the zome.
Keen eyes will notice that this is async; the zome function that emits the signal doesn’t have the ability to block while it waits for the client to give a response; it has to handle the response in a separate function. This is fine for webhooks which usually trigger an action and don’t need to analyze the HTTP response, but it makes execution flow kinda weird when you do care about the response’s value.
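To make that async shape concrete, here's a stand-alone sketch. It is not real HDK code (a real zome would use the HDK's signal-emitting facilities, and the client would listen over a websocket); the signal bus is mocked with a `Vec`, and names like `webhook_requested` and `handle_webhook_response` are entirely hypothetical:

```rust
/// Illustrative only — the "bus" stands in for the conductor's signal channel.
#[derive(Debug, Clone, PartialEq)]
enum Signal {
    WebhookRequest { correlation_id: u32, payload: String },
}

/// "Zome side": emit a signal and return immediately — no blocking while the
/// client does its HTTP call.
fn webhook_requested(bus: &mut Vec<Signal>, id: u32, payload: &str) {
    bus.push(Signal::WebhookRequest {
        correlation_id: id,
        payload: payload.to_string(),
    });
}

/// "Client side": pick up the signal, call the external API (stubbed here),
/// then call back into the zome with the response.
fn client_poll(bus: &mut Vec<Signal>, responses: &mut Vec<(u32, String)>) {
    while let Some(Signal::WebhookRequest { correlation_id, payload }) = bus.pop() {
        let http_response = format!("slack accepted: {}", payload); // stubbed Slack call
        handle_webhook_response(responses, correlation_id, http_response);
    }
}

/// A *separate* zome function receives the response — this is the weird part
/// of the execution flow: request and response live in different functions,
/// matched up by a correlation id.
fn handle_webhook_response(responses: &mut Vec<(u32, String)>, id: u32, body: String) {
    responses.push((id, body));
}
```

The point is the split: the emitting function returns right away, and if you care about the response's value, it arrives later in a different function.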
I’ve also been thinking about dependency management and coupling… to me it makes sense for the zome to make no assumptions about what the client is capable of, because that would tightly couple the DNA to the client. I feel like clients should be tightly coupled to DNAs (they should depend on the DNAs’ zome functions), but not vice versa. That’s what’s nice about signals. A less tightly coupled signal would look like “hey, if any client cares, event xx has just happened”, and Slack-aware clients would say, “oh, I should tell the Slack API about this.”
From talking to Zippy, it would seem that the DNA’s sole responsibility should be storage, retrieval, and validation of received data, nothing else. He sees it as the client’s responsibility to actually interact with the outside world — gather data from web services, trigger actions in web services, etc.
Ah, yes. One problem though. User-run clients are not usually “long-running”. When you close the lid of your laptop, that’s it. That’s when a server needs to step in and do stuff on your behalf.
But there may be an OK design pattern for solving this.
First, a regular client: the user_client and the user_agent.
Then, another type of client: a notifier_client and a notifier_agent.
The notifier_client is similar to what you are describing.
Many apps will most likely need the following layers:
Top - user client / ui
Middle - business logic / long running client / server - stuff that happens even when you are offline.
Bottom - data logic and storage
Example flow: Someone comments on your blog post:
1. The user_agent registers the notifier_agent as a recipient of signals/node-to-node messages.
2. The notifier_client is a long-running client, running on a server somewhere.
3. A signal is sent from the user_agent to the notifier_agent when someone comments on your blog post.
4. The notifier_client picks up the signal and posts to Slack.
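A minimal sketch of that flow, with the network, the agent IDs, and the Slack call all mocked (in a real hApp, registration would go through node-to-node messaging/remote signals and the notifier client would listen on its conductor — everything here is illustrative):

```rust
use std::collections::HashMap;

/// Stand-in for the DHT/network layer; agents are just string IDs here.
struct Network {
    // user_agent -> its registered notifier_agent
    notifier_for: HashMap<&'static str, &'static str>,
    // messages delivered to each agent's long-running client
    inbox: HashMap<&'static str, Vec<String>>,
}

impl Network {
    fn new() -> Self {
        Network { notifier_for: HashMap::new(), inbox: HashMap::new() }
    }

    // Step 1: user_agent registers notifier_agent as a signal recipient.
    fn register_notifier(&mut self, user: &'static str, notifier: &'static str) {
        self.notifier_for.insert(user, notifier);
    }

    // Step 3: a signal is sent from user_agent to its notifier_agent.
    fn signal(&mut self, user: &'static str, event: &str) {
        if let Some(&notifier) = self.notifier_for.get(user) {
            self.inbox.entry(notifier).or_default().push(event.to_string());
        }
    }

    // Step 4: the long-running notifier_client drains its inbox and
    // "posts to Slack" (stubbed as a formatted string).
    fn notifier_poll(&mut self, notifier: &str) -> Vec<String> {
        self.inbox
            .remove(notifier)
            .unwrap_or_default()
            .into_iter()
            .map(|e| format!("POST slack: {}", e))
            .collect()
    }
}
```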
But since there is no (easy) way to run a per-user server, someone else will have to run the server for you, with you granting the server agent capabilities to act on your behalf.
This represents one BIG drawback: it invites centralisation back into the game. We wanted to kick that out the door, didn’t we? It comes with the usual baggage of centralisation: bottlenecks, as well as the risk of single points of failure.
Another drawback: the middle layer is outside the controlled sandbox that is the compiled DNA. That means no trust between the data layer and the business layer, no shared rules of play, and none of the other things that make Holochain awesome.
Or am I thinking about this in the wrong way? It is difficult to see how a majority of use cases could do without that middle layer, relying only on fat clients and a p2p data layer with validation rules.
My sense is that the following two small additions would allow a majority of use cases to find ways around setting up centralised servers somewhere:
Cron-like functionality allowing zome functions to be called without first being initiated by a client
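To make the proposal concrete, here is a hypothetical sketch of what conductor-driven scheduling could look like. Nothing like this is confirmed to exist in Holochain; all names are made up, and time is simulated as integer ticks rather than wall-clock time:

```rust
/// A "zome function" is modelled as a plain fn pointer for this sketch.
type ZomeFn = fn(tick: u64) -> String;

/// Hypothetical schedule the conductor would own: each entry pairs an
/// interval (in ticks) with a zome function to call.
struct Schedule {
    entries: Vec<(u64, ZomeFn)>,
}

impl Schedule {
    fn new() -> Self {
        Schedule { entries: Vec::new() }
    }

    /// Register a zome function to run every `n_ticks` ticks.
    fn every(&mut self, n_ticks: u64, f: ZomeFn) {
        self.entries.push((n_ticks, f));
    }

    /// The conductor's clock loop: on each tick, call every zome function
    /// whose interval divides the tick — no client involved at all.
    fn tick(&self, tick: u64) -> Vec<String> {
        let mut out = Vec::new();
        for &(every, f) in &self.entries {
            if tick % every == 0 {
                out.push(f(tick));
            }
        }
        out
    }
}

/// Example scheduled task (e.g., poll a weather service and write to the DHT).
fn fetch_weather(tick: u64) -> String {
    format!("weather fetched at tick {}", tick)
}
```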
@kristofer I really like this user/notifier pattern; it could work really well. I suspect that it’s impossible to get away from some sort of centralisation if you want to have long-running processes that are always available to receive or trigger events, simply for the reason you describe: closing your laptop lid, turning off your phone’s WiFi for the night, etc.
If the app understood the concept of a notifier agent, it could restrict privileges on that agent to the things the agent really needs to do its job. That way, I wouldn’t be worried about allowing a centralised server to hold keys for me, because I know it wouldn’t be able to do anything evil on my behalf — well, not do anything evil on the DHT at least.
Cron jobs that trigger state transitions (e.g., call a weather service, write the weather data to the DHT) are a bit more tricky, because then I’m entrusting the server with write privileges on my behalf. The app designer would have to think hard about privileges, as well as the consequences for invalid data. I’d think that if a corrupt server were using my cron agent’s keys to publish invalid data on my behalf, I’d want it to be banned from the network, but I wouldn’t want it to affect my own reputation.
Re: “no trust between data layer and business layer”, I think that should always be assumed actually. The DNA and its validation rules have the final say on what can and can’t be done in an app; the client/middleware (whether it’s on the user’s device or a central server) should always be held at arm’s length as potentially suspect. The job of the DNA + Holochain is to receive information from the outside world (software outside the sandbox + data from other DHT nodes) and decide for itself whether it’s correct or not. True, the DNA and Holochain instance on that central server are outside of the user’s control, but the validating DHT is supposed to catch that.
The implementation questions are exciting and non-scary to me; I think this is totally doable. My big question is the business models that can support this. I’d love to see HoloPorts offering this as a value-added service; that would decentralise it at least somewhat. Maybe the HoloPortOS could allow middleware/client software to run inside a Docker/LXC sandbox; I dunno.
Agreed. The application's use case will have to determine what level of security is required. Middleware, “external agents”, or not: an agent will always have to think carefully before handing out capabilities.
Valid point, agreed.
That would make a really strong offering indeed! The Holo pricing model would become more complex, though. Without having a clue as to how the current pricing model looks, I guess this would add more parameters to the mix: CPU usage, etc.
A similar pattern could be used to achieve the cron functionality.
The user_agent gives a cron_agent capabilities to call certain zome functions on behalf of the user_agent at certain intervals. The cron_agent is managed by an external server running middleware that is always online. Would it be wise to grant a cron agent the ability to act on your behalf with personal data, etc.? Probably not. But for housekeeping tasks, notifications, etc., it could make sense.
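A rough sketch of that idea, with the capability check reduced to a simple allow-list (real Holochain capability grants and claims are richer than this; the function names and the `CapGrant` shape here are purely illustrative):

```rust
use std::collections::HashSet;

/// Illustrative capability grant: the user_agent scopes what the cron_agent
/// may call on its behalf.
struct CapGrant {
    grantee: String,
    allowed_fns: HashSet<&'static str>,
}

impl CapGrant {
    fn allows(&self, caller: &str, zome_fn: &str) -> bool {
        self.grantee == caller && self.allowed_fns.contains(zome_fn)
    }
}

/// The "conductor" checks the grant before executing a remote call.
fn call_zome(grant: &CapGrant, caller: &str, zome_fn: &str) -> Result<String, String> {
    if grant.allows(caller, zome_fn) {
        Ok(format!("{} executed for {}", zome_fn, caller))
    } else {
        Err(format!("{} not authorised to call {}", caller, zome_fn))
    }
}
```

The design point is the scoping: even a corrupt cron server could only invoke the handful of housekeeping functions it was granted, nothing touching personal data.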
Folks, first of all, thank you for this awesome discussion — you saved me. I’m here reading this in 2021, after searching and thinking quite a bit about these external calls. Second, I’m curious about the other point @kristofer made about supporting external calls besides a cron job: wouldn’t it be possible for zome functions to make synchronous HTTP requests?
Also, do you know of any updates on these questions? Is anything supported now (Holo, cron, sync requests, …)?