Ever since I started working on radicle-link (now abandoned), I have wanted to arrive at something usable under today’s network and economic realities.
In this world, peer-to-peer protocols have their niches, but generally fall short of traditional client-server architectures by virtually every measure of quality of service (QoS). Recent federated protocols have fared much better, although it remains to be seen whether they can avoid standards proliferation. Neither approach has conclusively answered how less popular data is kept available, which is quite a non-starter: it’s not very convincing to incur a premium, either as a fee or as resource expenditure, when storage and bandwidth are commodities.
So no, I do not believe that there is any meaningful way to replace “The Cloud” with “The Network” - and yet, I want my FOSS project to be able to roam wherever my QoS needs are best met at any point in time. Didn’t we intend to embrace local-first software in the first place? Then how did we end up trying to design bespoke network protocols to move our data around?
In hindsight, I think we were too preoccupied with how GitHub does things to see that this model only works when every “fork” is in the same failure domain. We dismissed the idea of “patches” too early – ignoring that distributed version control has been predicated on them for decades – because they were either brittle agglomerations of gibberish in a text file, or sophisticated theory which git, obviously, could not be retrofitted with.
It turns out that, if we tilt our heads slightly sideways, we can make the git DAG suffice and reduce our “protocol” to a bunch of datatypes and, essentially, the transfer of static files. I’ll take it as a compliment if you find this boring.
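To give a flavour of the “static files” part: a proposed change can travel as a git bundle, which is just a file you can host on anything that serves binaries. Here is a minimal sketch, shelling out to git from Rust – the branch and file names are made up, and this is not the exact workflow from the spec:

```rust
// Minimal sketch: exchange a change set as a git bundle (a single static
// file), with no custom wire protocol involved. Assumes `git` is on $PATH;
// the branch names ("topic", "main") and bundle file name are hypothetical.
use std::process::Command;

fn git(args: &[&str]) {
    let status = Command::new("git")
        .args(args)
        .status()
        .expect("failed to spawn git");
    assert!(status.success(), "git {:?} failed", args);
}

fn main() {
    // Sender: pack everything on `topic` that `main` doesn't have yet.
    git(&["bundle", "create", "my-patch.bundle", "main..topic"]);

    // Receiver (in their own clone, after the file has arrived by whatever
    // means): verify the bundle, then fetch from it as if it were a remote.
    git(&["bundle", "verify", "my-patch.bundle"]);
    git(&["fetch", "my-patch.bundle", "topic:refs/heads/incoming/topic"]);
}
```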
Disclaimer: I developed it on my own time, so don’t expect a “product” you can use yet.
Thanks @alexgood for reminding me that git always has one more trick up its sleeve that you either didn’t know about, or were holding the wrong way.
Well, you’ve been busy – I just finished reading through the spec; pretty interesting! Now I wonder if the bundles could be exchanged over something like nostr… Then you’d have all the pieces you need.
Curious to see where you take this. I think splitting the networking out is a good approach generally, even if one plans on having networking. It would open up other avenues for data propagation…
It’s not that it doesn’t have networking, it’s that any ol’ DHT will do if you prefer it the location-independent way. Or whatever the thing du jour is, as long as it can handle binary files.
The only thing I haven’t fleshed out is whether it is preferable to use a “native” encoding for the drop structure. Probably not, unless there is some compelling integration case (like, you know, treating a Matrix room as a drop, or something like that).
EDIT: I hadn’t heard of nostr, but from the docs I can’t see what it does that couldn’t be done using its mirrors/alternates. It also doesn’t seem to have any particular support for blobs.
I also found that things would be a lot nicer if “content addressing” in open networks did not depend on the block size. If there is ever going to be a custom routing protocol for it, I think it would be there to solve this.
I need to re-read the section on mirroring; but yeah, what I meant is that it doesn’t provide its own networking, it piggy-backs on whatever existing network you choose – which is why it is more of a self-certifying document format than a network.
Typically, “file sharing networks” (for lack of a better general term) split arbitrarily sized files into chunks (or blocks) and compute an identifier from those. There may also be other parameters, but the point is that we can’t compute the id from just the checksum of the original file.
That’s a problem if you consider long-term storage and addressability, because a. the name is effectively chosen at creation time (and can be altered by the user simply by re-chunking), and b. there is no way to compute the address on the hypothetical blackhole network (which became popular after IPFS became the MySpace of the decentralised web) without already knowing the target data.
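To make the block-size dependence concrete, here is a toy sketch in Rust (assuming the sha2 crate; the chunked scheme below is a deliberately simplified stand-in for a Merkle-DAG-style id, not any real network’s algorithm):

```rust
// Toy illustration of why chunk-based identifiers depend on the chunk size,
// while a whole-file hash does not. Requires the `sha2` crate.
use sha2::{Digest, Sha256};

/// Id as the hash of the file as one byte stream: a pure function of content.
fn whole_file_id(data: &[u8]) -> Vec<u8> {
    Sha256::digest(data).to_vec()
}

/// Id as a hash over the per-chunk hashes (Merkle-DAG flavoured):
/// a function of the content *and* the chunk size.
fn chunked_id(data: &[u8], chunk_size: usize) -> Vec<u8> {
    let mut top = Sha256::new();
    for chunk in data.chunks(chunk_size) {
        top.update(Sha256::digest(chunk));
    }
    top.finalize().to_vec()
}

fn main() {
    let data = vec![42u8; 1 << 20]; // 1 MiB of arbitrary bytes

    // Same content, same id, no matter who computes it or when.
    assert_eq!(whole_file_id(&data), whole_file_id(&data));

    // Same content, different chunking parameter: different ids.
    assert_ne!(chunked_id(&data, 256 * 1024), chunked_id(&data, 512 * 1024));
}
```

A whole-file hash is a pure function of the content; the chunked id is a function of the content and the chunking parameters, so two networks (or two eras of the same network) can disagree on the “address” of identical bytes.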