I’d like to pick up a discussion that has been circulating for a while: GraphQL and Holochain. I came across the Juniper GraphQL library for Rust and decided to try integrating it into a simple DNA. Conclusion: it seems to work great! No obvious performance loss or excessive memory consumption, although that would have to be investigated in more detail. Juniper adds only about 150 KB to the final size of the compiled DNA. And of course, client development becomes really straightforward when Holochain can be accessed as a GraphQL endpoint; no translation is needed in the client.
My initial plan was to use Relay in the client, because it offers cool stuff such as support for React Suspense and Concurrent Mode. In the end I went with Apollo Client instead, but I still tried to implement and support the GraphQL Server Specification proposed by Relay. Following that spec and other GraphQL best practices is, I think, one of the major benefits of integrating GraphQL on the conductor level: it forces you, or at least nudges you, to consider the client’s use of the DNA interface at all times.
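For anyone unfamiliar with that spec, its core is global object identification: every object exposes a globally unique `id`, and the root query offers a `node(id:)` field to refetch any object by it (plus connection types for pagination). Below is a minimal Juniper sketch of that shape; the type, the id scheme, and the hard-coded resolver are illustrative rather than the code from this experiment, and the actual `Node` interface is omitted for brevity.

```rust
use juniper::{graphql_object, FieldResult, ID};

// Illustrative type: any Relay-compatible object carries a globally
// unique `id` (imagine something like base64("Person:<entry address>")).
struct Person {
    id: ID,
    name: String,
}

#[graphql_object]
impl Person {
    fn id(&self) -> ID {
        self.id.clone()
    }
    fn name(&self) -> String {
        self.name.clone()
    }
}

struct Query;

#[graphql_object]
impl Query {
    /// Relay's `node` root field: refetch any object by its global id.
    fn node(id: ID) -> FieldResult<Option<Person>> {
        // A real resolver would decode the id and load the entry from the DHT.
        Ok(Some(Person { id, name: "Alice".into() }))
    }
}
```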
Hi @kristofer, first of all, this is really great. It’s awesome to start seeing new experiments popping up, and of this quality! Props for that.
Just so you know, I have dropped GraphQL from my tech stack in the open-source reusable modules, and it felt awesome. I still think it’s great for large projects with multiple zomes and DNAs, but not worth it at all for small reusable pieces. I will be updating the documentation and writing some guides for those shortly.
On the other hand, one of the reasons I didn’t really support the approach of doing GraphQL on the server side is that eventually you will want to query across multiple DNAs (e.g. personas & profiles, mutual-credit, etc.). If you do the GraphQL handling on the server side, that’s going to add some friction: is there an endpoint for every DNA? If not, how do we support queries in which we don’t really know which DNAs we are going to be querying until we do the query itself (I’m also thinking of cloned DNAs)? It also becomes a bit more difficult to develop reusable zomes and then compose them into a unified GraphQL endpoint: do the zomes provide the zome code and the resolvers separately, and then the DNA aggregates all of them? What about zomes that don’t provide the resolvers themselves?
I know these are not complete blockers, so have you thought of some solution for them?
In any case, I keep catching myself thinking that we have found the best practices for architecting these things, and then later on seeing that there is a better way. So I guess we are still in early exploration of the territory, and in that sense it is fantastic that there is a diversity of approaches.
Most likely, yes, there would have to be. Unless… GraphQL was supported by the conductor directly, which could then merge the schemas. That in turn opens up a new avenue of potential name collisions and other conflicts.
Juniper takes a code-first approach and creates the schema for you based on derive macros and attributes you place on your types. Like so:
```rust
#[derive(GraphQLObject)]
#[graphql(description = "Information about a person")]
struct Person {
    #[graphql(description = "The person's full name, including both first and last names")]
    name: String,
    #[graphql(description = "The person's age in years, rounded down")]
    age: i32,
}
```
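To give an idea of how this ends up being exposed, here is a minimal sketch of a single zome function that executes an incoming query string against a schema built from the `Person` type above. The function name, the schema wiring, and the omitted Holochain entry-point boilerplate are assumptions (not the actual code from this experiment), it targets roughly the Juniper 0.15 API, and it assumes `serde_json` as a dependency.

```rust
use juniper::{
    graphql_object, EmptyMutation, EmptySubscription, FieldResult, RootNode, Variables,
};

struct Query;

#[graphql_object]
impl Query {
    /// Resolver; a real zome would read this from the DHT or source chain.
    fn person(name: String) -> FieldResult<Person> {
        Ok(Person { name, age: 30 })
    }
}

type Schema = RootNode<'static, Query, EmptyMutation<()>, EmptySubscription<()>>;

/// Hypothetical zome-exposed function: takes the raw GraphQL query string
/// and returns the result serialized as JSON.
fn graphql(query: String) -> Result<String, String> {
    let schema = Schema::new(Query, EmptyMutation::new(), EmptySubscription::new());
    let (value, _errors) =
        juniper::execute_sync(&query, None, &schema, &Variables::new(), &())
            .map_err(|e| e.to_string())?;
    serde_json::to_string(&value).map_err(|e| e.to_string())
}
```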
I see a couple of options:
- If a “third party zome” implements the Juniper traits, its schema is auto-merged with that of the rest of the DNA.
- If the zome doesn’t implement the GraphQL traits:
  - Resolvers can be written by the app developer doing the integration.
  - A GraphQL implementation could be provided as a separate crate for apps that need it.
But issues with potential naming conflicts remain. One way around them is to namespace each zome’s fields under the root query, as in the sketch below.
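To make the second option a bit more concrete, here is a hedged sketch of each zome crate exporting its own Juniper query object, which the integrating app then mounts under namespaced root fields to avoid field-name collisions. The zome crate names, fields, and stub resolvers are hypothetical, not an existing implementation.

```rust
use juniper::graphql_object;

// Provided by a hypothetical `profiles` zome crate.
pub struct ProfilesQuery;

#[graphql_object]
impl ProfilesQuery {
    fn nickname(agent: String) -> String {
        // Would read the profile entry for `agent` from the DHT.
        format!("nickname-of-{}", agent)
    }
}

// Provided by a hypothetical `mutual_credit` zome crate.
pub struct MutualCreditQuery;

#[graphql_object]
impl MutualCreditQuery {
    fn balance(agent: String) -> i32 {
        // Would aggregate `agent`'s transactions from the source chain.
        let _ = agent;
        0
    }
}

// Written by the app developer doing the integration: one root object
// that namespaces each zome's fields, so names cannot collide.
pub struct Query;

#[graphql_object]
impl Query {
    fn profiles() -> ProfilesQuery {
        ProfilesQuery
    }
    fn mutual_credit() -> MutualCreditQuery {
        MutualCreditQuery
    }
}
```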
Agreed! What will the needs look like going forward? A “standardised” backend format that simplifies building custom frontends (GraphQL → React)? Or pre-built frontend component legos that can easily be assembled into apps (Web Components)? I guess the answer differs depending on who you ask, and different solutions will be the best choice for different apps. Only one way to find out: more exploring and prototyping!
Perhaps we should book an app architecture session, where we look in more detail at how a real app could be assembled partly out of reusable zomes / DNAs / UI components combined with custom app-only logic?
@kristofer I take it you would have seen the GraphQL and Holochain post? It looks like the gory details were back in the Mattermost days, but these are the key elements I’d point to (which I don’t believe have changed):
There is no real performance difference for Holochain users between an in-browser GraphQL query adapter and a GraphQL query adapter that runs within the conductor.
There is a negligible performance impact for Holo users when using an in-browser GraphQL query adapter, caused by the extra packets going over the wider internet through the hc-web-client connection.
For these reasons I’ve preferred the route of in-browser GraphQL. There is no real difference in terms of authority or responsibility, because all code is really executed on the local machine (regardless of whether by the browser or by Holochain). And as @guillemcordoba points out, you can do cross-DNA GraphQL more easily in JS. Of course you could do it in Rust too; it’s just more verbose and syntactically awkward to aggregate a bunch of call responses in a zome callback.
Thank you for the example! The idea of a unified GraphQL endpoint for communicating with Holochain is really intriguing. I want to attempt a similar thing but use Relay, for the reasons you mentioned. Is there a reason you changed your mind and went with Apollo instead?
I started out using Relay and think I made sure the backend fulfills the requirements Relay has. But what was the reason? I don’t remember. There were some technical issues that weren’t worth spending time on fixing, that’s how I remember it. Switching to Apollo was the quicker option.