"Standardised" common zome traits

Standardised common traits would allow third-party apps to better access and visualise an agent's data (DNA instances/zomes/entries) and to interact with that data in a meaningful way. They would also facilitate better interoperability between apps and speed up development.

Topic for discussion: What possible standard traits are there and how could we get a process going where we develop a few of these?

Working doc: https://hackmd.io/QSVLhFcVRsmkeuhEAOIcfw?both


List which other standard traits (and possibly trait versions) a zome uses. Or perhaps Rust already provides that functionality? Either way, the information has to be accessible not only to the app developer but also to third-party apps calling zome functions.

(Similar to IERC165, supportsInterface(interfaceId) https://docs.openzeppelin.com/contracts/2.x/api/introspection)
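A minimal sketch of what such introspection could look like in Rust. The trait identifiers and the `supported_traits`/`supports_trait` names are purely illustrative assumptions, not part of any existing HDK API:

```rust
use std::collections::HashSet;

// Illustrative trait identifiers (with versions) this zome claims to implement.
fn supported_traits() -> HashSet<&'static str> {
    ["ownership/1.0", "entry_preview/1.0", "annotation/0.2"]
        .iter()
        .copied()
        .collect()
}

// Exposed (e.g. as a public zome function) so third-party apps can discover
// which standard traits this zome implements, much like ERC-165's
// supportsInterface(interfaceId).
fn supports_trait(trait_id: &str) -> bool {
    supported_traits().contains(trait_id)
}
```

A caller bridging into the zome could then probe for `"ownership/1.0"` before attempting to call ownership-related functions.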


The notion of “ownership” over a digital resource is a basic pattern many Holochain applications will need. With ownership comes the user expectation of being able to transfer that ownership, to renounce it, etc.

A common way to manage ownership would facilitate better interoperability between apps and speed up development.
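As a discussion starter, here is a rough Rust sketch of the core operations such an ownership trait might expose. All names here (`Owned`, `transfer_ownership`, `renounce_ownership`) are hypothetical, and a real zome would enforce these rules in validation callbacks rather than plain methods:

```rust
// Illustrative record pairing a resource with its current owner.
#[derive(Debug, Clone, PartialEq)]
struct Owned {
    resource_hash: String,
    owner: Option<String>, // agent address; None once ownership is renounced
}

impl Owned {
    // Only the current owner may hand the resource to someone else.
    fn transfer_ownership(&mut self, from: &str, to: &str) -> Result<(), String> {
        match self.owner.as_deref() {
            Some(current) if current == from => {
                self.owner = Some(to.to_string());
                Ok(())
            }
            _ => Err("only the current owner may transfer ownership".into()),
        }
    }

    // Only the current owner may give up ownership entirely.
    fn renounce_ownership(&mut self, from: &str) -> Result<(), String> {
        match self.owner.as_deref() {
            Some(current) if current == from => {
                self.owner = None;
                Ok(())
            }
            _ => Err("only the current owner may renounce ownership".into()),
        }
    }
}
```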

Doc for collaboration here: https://hackmd.io/kWfnGNR4Ta-K1h0jpK71Hw?both


Both zome-level and entry-level fields. Standard info found in zome.json (zome_description, …) as well as zome/app-specific fields.


Renders a human-readable preview of an entry or list of entries, possibly in the form of a web component. Example: embed a nice-looking representation of a chat entry in a Humm blog post @nickmitchell?

Editor / Plugin?

Returns a plugin editor that can be embedded in other apps. Example: every Humm blog post embeds a “Comments Editor” from the “HoloComments” app, allowing users to comment on posts.


@ViktorZaunders and @guillemcordoba, possibly this is something for you to sink your teeth into at the Portugal hackathon?? I know… finishing MailBoox… but one can always hope :slight_smile:


Great start, thanks. Would you prefer I suggest things in this thread or the HackMD? Some come to mind right away:

  • BlobStorage: Supports the storage and retrieval of chunked blobs and their manifests.
  • ClaimsPermissioned: Actions are allowed or denied based on the possession of valid claims/credentials (cf W3C Verifiable Credentials).
  • MembersOnly: A special application of ClaimsPermissioned that requires a valid credential to join the network.
  • GroupAgency: Agents can form recognisable groups with persistent identifiers; group membership (= authority to act as group representative) can be verified by third parties. Related to ClaimsPermissioned. (ping @pospi, cf How will Holochain handle group agents? )
  • Timestamped: Entries carry metadata that give varying levels of proof that they happened at a certain moment in time. (cf Mixin zome(s) for entry timestamping )
  • Timestamping: Able to provide a timestamp for external entries, at varying levels of proof. Naturally, this can support Timestamped.
  • DIDResolver: Able to take responsibility for resolving W3C Decentralised Identifiers for the data this DNA’s DHT contains.
  • Annotation: Able to annotate other data, either in the same zome/DNA, in some other DNA, or outside of Holochain entirely. Should probably be polymorphic, e.g., Annotation<BlogPost>.
  • Identity: Able to make assertions about facets of an agent’s identity, whether in a legally recognised (e.g., driver licensing office) or an informal capacity (web of trust).
  • Presence: Shows whether an agent is online, offline, available, busy, typing, idle, etc.
  • Scheduling: Shows whether an agent is available or occupied at a given date in the future.
  • Events: Calendars, etc. Can be composed with Scheduling to do some cool things.
  • Messaging: Chat, IM, DM, comments, etc.
  • Posting: Similar to Messaging but meant for syndication. May just be an application of Messaging.
  • Taxonomy: provides a list of categories for other things.
  • Notification: Allows agents to subscribe to things they want to be notified on; emits notification signals to clients.
  • MultiNodeAgent: this DNA is capable of combining multiple agent public keys into one human representation. Similar to GroupAgency and may be able to take advantage of the same codebase.
  • Notary: Able to witness, and guarantee the uniqueness of, events.

The big ones we need that haven’t already been mentioned are:

  • Indexing patterns for browsing and retrieving other entries. I’ve seen anchors and DAGs (both have their use-cases), there are probably other flavours needed too.
  • Signing: something that allows arbitrary participants to sign other entries by hash, thus indicating that they have seen and agreed with the information contained therein.
  • LinkRegistry: zome that tracks links incoming / outgoing between the host DNA and other DNAs using XDI Link Contract schema. We would also want this as a standalone DNA that tracks links between two other DNAs.
  • ExistenceCheck: simple API that checks if entries with given hashes exist in the DNA, for use by other DNAs that need to perform referential integrity checks.
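The ExistenceCheck idea is simple enough to sketch. Assuming the zome can look up the hashes it holds, an illustrative Rust shape (not an actual HDK API) might be:

```rust
use std::collections::HashSet;

// Illustrative ExistenceCheck: given the set of entry hashes this DNA holds,
// report for each requested hash whether it exists, so that a bridged DNA
// can perform referential integrity checks before committing links.
fn check_existence(known: &HashSet<String>, requested: &[String]) -> Vec<(String, bool)> {
    requested
        .iter()
        .map(|h| (h.clone(), known.contains(h)))
        .collect()
}
```

A batch interface like this keeps the number of cross-DNA calls low when a caller needs to verify many references at once.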

@pospi’s mention of XDI link contracts reminds me of something I meant to say yesterday: whenever possible, we should lean on existing standards for everything we create.


Great additions! I compiled all ideas to one list and tried to identify some groups.

Holochain commons

  • Zome traits
  • Design patterns
  • Standalone DNA / hAPP
  • Potential core functionality

Next steps? One could easily see this evolve into a community run thing where developers from different projects collaborate on common functionality to strengthen the ecosystem and increase interoperability.


@harlan, what library functions / zomes did you have in mind?

Awesome @kristofer ; thanks for compiling our ideas into that document.

@guillemcordoba are you aware of this thread yet? cuz I think your presence would be super valuable.

@kristofer if you have bandwidth to set these kinds of things up, a group of us have started a project on CoMakery where our goal is to list and fund possible contributions to our digital commons: https://www.comakery.com/missions/45

One possibility would be to list these works under a CoMakery project and to solicit crypto funds for their implementation. But a good framework for managing and governing contributions would need to be developed first :wink:


It also occurs to me that a group with a larger vision could be pulled into this — they are probably already thinking about conventions for data exchange that could be used as starting points. I’m thinking specifically of the Open App Ecosystem group on Loomio, who are seeking to define common language for interoperable apps. I think people in the ActivityPub world are big into this too.

I don’t think strict standards compliance is a useful goal, because that stifles innovation (I’m thinking about how ValueFlows are taking a radically different approach to an existing thing, in ways that could change the game known as economics). But it does pay to recognise and collaborate with the work already being done!


:slight_smile: Yeah, that could be a good way to develop common tools and functionality actually. Start co-developing the tool needed to continue co-developing. Some basic DAO-functionality like managing funds etc. Could the foundations use the REA vocabulary / HoloREA? In the process of developing that framework, needs for common zomes, functional libraries etc will arise. Develop those in separate branches / sub projects and let them become part of the Holochain Commons.

In the process, ideas will also pop up that could be branched off as commercial enterprises. I’m thinking of stuff discussed in another thread, such as standard middleware functions not part of Holochain core:

  • Cron like functionality
  • Caching
  • Indexing
  • Search
  • etc

… stuff that would give someone willing to pay a monthly Holofuel fee access to more “enterprise like” functionality. Then it would be a good idea to set something up similar to Consensys or Infura from the ETH galaxy.

Maybe this is a talking point for the developer call?


Btw @pauldaoust & @pospi, I think both the Open App Ecosystem and CoMakery groups sound really interesting!

But… I also get a sense that what you are describing, like “common language for interoperable apps”, is quite high-level stuff. Which is totally needed and vital for the long-term development of the ecosystem. But I am more of a plumbing type of person. I would like to build the actual tools and libraries needed today: the stuff that could facilitate the “Cambrian explosion” of apps we are hoping will appear on Holochain in the short to medium term.


Hi @kristofer, @pauldaoust and @pospi! Great to be having these kind of conversations.

Regarding standards and traits between happs… I’m not sure :slight_smile:

With the Holochain architecture being as it is, my first instinct is to bring all possible bridging-like functionality to the frontend, where dynamic piping is much easier. And actually, since all Holochain entities are hashed, in most cases they can be referenced as if they existed in a global “namespace”, which makes building generic zomes (as in: a “reviews” zome that can review any kind of entry, or a “comments” zome that can reply to any kind of entry) extremely easy. In the cases where you don’t need “statically typed” references (for those, you can maybe rely on explicit, normal Holochain links) you can have cross-zome or cross-happ functionality without any bridging on the Holochain side. In the _Prtcl we are building infrastructure for these cases.

The cases I see where bridging is needed are somewhat limited. Maybe when you need to share information between peers in two different DHTs (e.g. subscriptions), or to avoid ending up with an inconsistent entry state, which could happen if you only rely on the frontend. If your validation rules rely on entries in other happs, you’ll need either:

  1. To have a hard dependency on the other app: since the rules need to be deterministic, all peers need to have that happ installed.
  2. To rely on the agent that brings the information from one space to the other; in this case you may as well bridge through the frontend.

Maybe I’m wrong on this, would love to be proven so with examples :smiley: as I don’t have that much experience building complex multi-dna systems like REA.

Regarding the commonly needed zomes/mixins, I actually like this idea a lot, it’s great having these starting points on top of which to build (I would add roles and generic tagging, and also WebRTC discovery!). Actually I think this initial pack will be built by the community itself without much effort, most of them are not so hard to build.

However I’d personally focus more on helping/educating the community to build and design applications this way by default, by finding ways to come closer together as a community to see possible sharing points, or increasing the education resources, design trainings, etc.

This way, we would increase the factor by which we build on top of each other and have much more visibility into what we all need. We’d also follow the principle “build concrete first, generalise after, build adapters to make interoperable what’s left”, which I personally prefer to standards.

Also, the few existing standards I’ve wanted to play with directly assume the HTTP protocol and URL links at which the entities live, and HTTP assumes client–server interaction. It just feels weird to build on Holochain, and only really works when you add Holo to the equation (e.g. OpenBadges, which I started building on Holochain but ended up with this). I don’t know to what extent making a bridge app that follows the standard but does things the Holochain way is worth it; I guess it depends on each individual case.

I’m very grateful that you made these online groups/communities visible; I did not know about them but will review them for sure.

From now on, I’ll try to be more active in this forum (if I think the community should come closer together, I should apply that to myself first :slight_smile: ). So see you around!



@guillemcordoba thanks for joining this conversation! I think you’re on the right track re: when bridging should be used — whenever greater integrity is needed. Basically, if a user trusts their conductor, they should be able to trust that their bridged apps are doing what they should, and not trying to do sneaky things on their behalf like accept or create invalid data. The user can’t have these same guarantees about the UI.

I personally think ‘ad-hoc’ bridging (in the UI) is a bit more flexible. If you want to extend or modify what comes out of a DNA before passing it to another DNA, but someone else wrote all the DNAs you’re building your UI on, you can do it without modifying the original DNAs. I also think that setting up runtime bridges in the UI is a bit easier, although it’s certainly possible for the UI to ask the conductor to set them up too.

I def agree with the idea of building community first, then allowing the community to build these things. Sounds much more sustainable, especially because (1) my available time to get involved with expanding the ecosystem of libraries has been scaled back (I’m juggling forum support + HoloPort tier 2 support), and (2) I don’t want to rely on any one community member to be the champion of this effort — that’s not fair to them and results in burnout. I remember when I was helping out in the Mattermost chat before I got on the payroll — it was demanding!

I hear what you’re saying re: data interchange formats that have URIs in their specification; however, I still feel like it could work. If we at least use them as the basis for our own designs (or, better yet, come up with a URI spec for Holochain entries both public and private), it’s not all that different from DHT links. The DHT is, after all, a public space meant to store consistently reference-able data. Of course, not all consumers of the data would be able to resolve Holochain URIs, but we can work on that :wink: It also could help with the interop story — I’d love to see Holochain participate alongside SSB, Dat, IPFS, and others in an interconnected DWeb, and open data formats with generally understood DWeb URI schemes could help immensely.


@pauldaoust yeah, about those URI schemes… in the _Prtcl we are working on and discussing this exact thing… We basically want to allow anyone to reference any Holochain entry (or any decentralised technology, for that matter) with links like holochain://DNA_ADDRESS/ENTRY_ADDRESS; we think it’s super powerful! If you’d like to know more about our approach to this, I can explain.

BUT, this is not publicly accessible from the internet (I’m thinking about private happs), and it requires some non-native JS layer to access that code from the browser (it’s not like a GET request to that address would work). We can provide standard piping so that, through Holo, entries are available through an HTTP GET endpoint, but this doesn’t cover things like Holoscape…

So, I don’t know how far we can go in saying that we are compatible with URI-based schemas and standards, since the assumptions those standards make may not be compatible with Holochain’s way of doing things. Again, this may depend on each case!

Anyway, great to have these discussions!
Cheers :slight_smile:


@guillemcordoba you’re right that URL resolution doesn’t work out of the box, which makes things tricky for sure. I’m glad you’re devoting some serious thought to this, esp re: interop between different DWeb tech.

One struggle I have right now is that data in a DHT is content-addressed, but the most likely consumer of a holochain: URI is the UI, which doesn’t have direct access to the DHT. Therefore the UI doesn’t know how to resolve this URI. I think three options are:

  • Any DNA that wants to honour holochain: URIs provides a generic get_entry() zome function. But this isn’t a universally usable solution, because it requires DNA devs to opt into this pattern.
  • Maybe a Holochain URI should look like holochain:<dna>/<zome>/<function>?<params>, which gives a specific means of resolving data through the mediation of a zome function. The nice things about this convention are (a) it works for both UIs and bridged DNAs, (b) it feels sorta RESTful, which is familiar to web devs.
  • Any client that consumes data containing a holochain: URI has enough domain knowledge about that data that when it sees { "article_url": "holochain:<dna>/<entry>" } it knows to turn that into a get_article() zome function call. This puts a bit of a burden on the UI author; not sure how much.
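To make the second option concrete, here is a hedged Rust sketch of parsing such a URI. The `holochain:<dna>/<zome>/<function>?<params>` shape is just the proposal above, not an agreed standard, and the struct/function names are invented for illustration:

```rust
// Parsed form of a hypothetical holochain:<dna>/<zome>/<function>?<params> URI.
#[derive(Debug, PartialEq)]
struct HolochainUri {
    dna: String,
    zome: String,
    function: String,
    params: Option<String>,
}

// Returns None for anything that doesn't match the proposed shape.
fn parse_holochain_uri(uri: &str) -> Option<HolochainUri> {
    let rest = uri.strip_prefix("holochain:")?;
    // Split off the query string, if any.
    let (path, params) = match rest.split_once('?') {
        Some((p, q)) => (p, Some(q.to_string())),
        None => (rest, None),
    };
    // Expect exactly dna/zome/function in the path portion.
    let mut parts = path.splitn(3, '/');
    Some(HolochainUri {
        dna: parts.next()?.to_string(),
        zome: parts.next()?.to_string(),
        function: parts.next()?.to_string(),
        params,
    })
}
```

A UI or bridged DNA could resolve such a URI by dispatching a zome call to `<dna>/<zome>/<function>` with the query string as arguments, which is what gives the convention its RESTful feel.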

What are your thoughts?

Re: holochain: URLs not being publicly accessible from the internet, I presume you mean because the person clicking on the URL might not have Holochain or might not be part of that specific DNA’s network? I think that maps cleanly to HTTP’s access control mechanisms like the Authorization header, or lower-level access control mechanisms like firewall rules — as well as the fact that sometimes servers are down. Possession of a URL doesn’t guarantee that you personally can resolve it, and that’s just fine — although I guess UI authors have a responsibility to handle these issues graciously.

Actually, I think the best course of action is not to figure out which pattern is best, but to make it really easy to include and implement any one of them. Some of these types of links map really well to other decentralised technologies and the centralised world; others don’t. I think we don’t need a new strong standard if we can solve this à la Ceptr, by making it really easy to learn patterns you don’t yet know about.

For instance, in our infrastructure, any module can define its sources, which are JS classes complying with an interface, and they can do whatever they want to go and fetch the entities, e.g. going to any decentralised/centralised platform, or even the filesystem (this would solve your third option). This is only our first implementation of a type of link resolver; we will also provide an easy way to dynamically load “new types of linking” that have their own method of resolving entities.

But to add to the patterns of linking, another very interesting one (actually the one we are using in our first iteration) is disconnecting the source/authority from the hash, and linking only with the hash. This way we can begin to think about changing the location of the data (to a different happ in Holochain) without having to update all the links. To resolve the link you have two options:

  • Store the source in a separate table/zome (which is what we are doing); this can have validation/governance and spammer issues further along. This pattern is most suitable for dealing with IPFS-like platforms, in which content-addressable objects are the only possible thing. In Holochain, it’s important to know which of the happs the hash refers to, since entries can be updated under different validation rules.
  • Know beforehand which source this object always points to.
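The first option (a separate hash-to-source registry) can be sketched in a few lines of Rust. Everything here is illustrative; a real implementation would be a zome with validation rules rather than an in-memory map:

```rust
use std::collections::HashMap;

// Sketch of the "link only by hash, store the source separately" pattern:
// links carry just a content hash, and a separate registry records where
// (which happ or platform) each hash can currently be resolved.
struct SourceRegistry {
    sources: HashMap<String, String>, // content hash -> source identifier
}

impl SourceRegistry {
    fn new() -> Self {
        Self { sources: HashMap::new() }
    }

    fn register(&mut self, hash: &str, source: &str) {
        self.sources.insert(hash.to_string(), source.to_string());
    }

    // Moving the data to a different happ means updating one registry entry;
    // every link that stores only the hash keeps working unchanged.
    fn resolve(&self, hash: &str) -> Option<&str> {
        self.sources.get(hash).map(|s| s.as_str())
    }
}
```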

(Fun coincidence: just yesterday I was writing two Apollo Client directives to cover these two cases.)

About the last point, you’re right: access control maps quite nicely, and my earlier point on this was not valid.

I hope this made sense, I feel like I’ve been mostly rambling :slight_smile:


Hey, I like rambling; it’s how important considerations get discovered :wink:

I like this idea; theoretically you could swap out a Holochain backend with an IPFS backend. Ultimately all you care about is the hash (and hence the content), not where it came from. I think that in this context I understand what you mean when you talk about embedding custom resolvers into the app based on the app’s needs and understanding of the data.

It lines up nicely with something I’ve been reading lately. I don’t agree with all the opinions on this website, but I like the idea of separating content from delivery medium.

One thing to keep in mind with Holochain is that app entries’ hashes are actually based on the tuple of the entry type name and the content. Hence, an entry of type message with the content {"text":"hello","time":"2019-12-18T12:34:56"} is hashed over the type name and the content together, rather than over the content alone as you might expect.
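A toy illustration of this, using Rust's standard-library hasher rather than Holochain's real hash function, just to show that including the type name in the hashed tuple changes the resulting address:

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Stand-in hash function; Holochain actually uses a multihash, not SipHash.
fn demo_hash(input: &impl Hash) -> u64 {
    let mut h = DefaultHasher::new();
    input.hash(&mut h);
    h.finish()
}

// Hashing the (entry type name, content) tuple, as described above, yields a
// different address than hashing the content alone.
fn entry_address(entry_type: &str, content: &str) -> u64 {
    demo_hash(&(entry_type, content))
}
```

The practical consequence is that two entries with identical JSON content but different entry types live at different DHT addresses, which any content-addressing or URI scheme for Holochain would need to account for.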

@guillemcordoba this has been a fascinating read & thought experiment, thanks for sharing!

First off, I completely agree with you on generic zomes that don’t need to interface with any particular kind of entry. You’re foregoing referential integrity of those links in doing things this way, but until we have more advanced cross-zome messaging traits I can’t see us achieving that. Eventually, though, there will be a need to solve for “you can’t create a comment that points to something that doesn’t exist”, and I expect zomes will need to use URI resolution to determine that the things they’re referencing really do exist. For use cases like the economic ones we’re solving for, these constraints really matter: someone adding an orphaned record could really throw some calculations out in a way that has serious consequences.

Here are some other places where we have needed to use bridging so far:

  1. Cross-DNA index replication (see fulfillment as an example). In these cases we have “links with context” that need to exist on both sides of the network boundary, since participants may be isolated to only one of the collaboration spaces. This is essentially data replication, and you need bridging to achieve it with any degree of robustness. Note that lib_origin manages search indexes for the Commitment side of the relationship, and triggers API calls in lib_destination that manage search indexes for EconomicEvent. The identical Fulfillment entry is stored in both zomes. This allows participants who only have access to the events to follow references to the fulfilled commitment, and vice versa.
  2. When the attributes of a record (such as satisfaction) reference records in other DNAs, and the behaviour to take depends on which type of record they are.

There are also some use-cases coming up that we will need it for:

  • We will need to lock down a zome so that it must be bridged to a particular foreign zome, and code it so that it can only use identifiers from the records kept in that zome. The use case is networks that have a particular restricted set of available resource types; they need to verify with the configuration zome that the provided parameters check out. So it’s actually a stricter kind of “does this hash exist” check.
  • Group agents. The only way I can see for this to work and to be architected sanely is to define a set of well-known zome traits that group agents should respond to, and then to bridge group agent configuration DNAs in to all the networks that group wishes to participate in. For these cases, the group-compatible zomes would have to be coded to perform access checks etc against the group agent DNA.

I imagine there will probably be a lot more things being implemented with this type of pattern as community members continue to experiment with pluggable modular behaviours… basically anything which requires robust business logic needs to be done within the DNA layer.

It just feels weird to build on holochain, and only really works when you add Holo to the equation

I’d push back on that… for our needs, HOLO is the least interesting thing in the equation. We want locally installed productivity tools that run inside the HC conductor, and we also want to default to agent-centric shared data sovereignty. HOLO has some minor utility for being able to put a project progress dashboard on our website, but little else.

I think your perspective here depends on how you envisage people using the technology down the line. The stuff you’re talking about is web tech, so naturally it’s built to talk to other web tech. But web tech is also a calamity of errors, soooo… is this kind of arbitrarily-linked, poorly-permissioned, non-sovereign, corporate-owned environment where you see people doing their daily work 5-10 years from now?

I don’t really have an answer but my hope is “no”. I’d hope to be integrating with data more directly, pulling more of it down onto my machine and running it through local analysis tools that make such tasks easy. I see it more as “distributed tech needs to replace the web” than “the web needs to become re-decentralised”. HOLO to me is just a way to make Holochain data accessible to people still working in the old paradigm: a compatibility measure.

I’m planning on running a code deep-dive on Holo-REA fairly soon. The _Prtcl project looks really interesting and I’d love for a session on that to be run next. I can see echoes of some of the work @wollum did early on in trying to create generic GraphQL interfaces into zomes in order to define entire apps declaratively, and I’m particularly interested in exploring whether your frontend libs could generate a lot of the boilerplate Holo-REA has been doing manually. I can see the project being very useful in creating generic UIs that span multiple apps / platforms.

The biggest question I have with the framework is: how many safety guarantees are you giving up? What I have with GraphQL now reduces the possibility of error quite dramatically… I’m not sure it would be possible to protect against invalid types / field names / data structures and to bind the frontend & backend together as tightly as I am able to…

You might be interested in connecting with one of the groups who has been posting in the OAE lately. Kendraio are building a schema-based UI / API interface that allows them to plug many services together arbitrarily. Their system seems to be evolving from JSON-schema and other DSLs into fractally composable grammars, so it’s quite interesting. May benefit a lot from _Prtcl.

FWIW @pauldaoust there is a proposed URI scheme for Holochain at the bottom of this document which has some legs. But what you’re saying about an RPC-based URI scheme might be more sensible given the way things are addressed. Or maybe we need to standardise traits for content-addressable addressing?

not publicly accessible from the internet

I think we’d need to translate the protocol scheme at the app UI layer, huh. So basically if you’re running a HOLO app you could show http(s) URIs that wrap the actual underlying Holochain entry data. Those pages could even expose RDF data. Would that work?

Conscious this is getting a bit long, and possibly going off topic. Feel free to split to new threads if you think it’s appropriate (;


Hi @pospi! So many things to unpack, yes…

  1. Very curious about the approach you are taking to validate cross-DNA events… It seems to me that REA is one of the few cases in which that’s a strong requirement. If I had to guess: maybe having the same zome on both DNAs (to manage the same type of data consistently), bringing one entry from one DNA to the other, and then using that entry to validate other entries or modify behaviour as needed?

I’m also curious about the number of (hard) dependencies that are popping up in your infrastructure, cross-DNA dependencies as well as cross-zome ones.

  2. About needing Holo:

I wasn’t saying that we need Holo in every case, as opposed to plain Holochain… I was agreeing with your last point:

This is basically how we could support standard protocols and specs that assume HTTP. In these cases we need Holo as a bridge. I also want to bring the web to the next level, all good here :slight_smile:

  3. About the _Prtcl: actually, in Lisbon I wanted to show Tom how we are doing things, but we didn’t have time in the end… Basically, we are building frontend modules and configurable infrastructure, and you with Holo-REA are doing the same in the backend… so I think it could be a great fit!

In what terms? Developer security, as in “type safety” and so on? I’d like to know how you are protecting this with GraphQL on your side; it shouldn’t be difficult to write a module to support that.

Will be reviewing kendra.io; at first glance there may be too many differing assumptions to make adoption of _Prtcl practical, but it’s always good to challenge your infrastructure with edge cases to see if it holds :slight_smile:. How is that going for you? I think you are solving a great challenge.



For @guillemcordoba @pauldaoust and any others watching, I’ve pulled part of this discussion out into a separate thread at Mixin zome and utility libraries for RESTful APIs and URI resolvers.
