
We're heading into the weekend by announcing our intent to add #IPFS support to Mastodon.

This means that media attachments can also be collectively shared across the fediverse, just like messages already are.

Initial funding page up on @opencollective if you want to support, and we'll dig in more on this next week! opencollective.com/fission/con

Want to chat more about this?

Rails engineer that feels like working on it?

Mastodon instance admin who wants to learn more?

Please use the tag to follow along and share ideas.

And yes, we’ll likely start with running patches to the Mastodon and Hometown codebases for the largest reach.

We’re interested broadly in #ActivityPub + content addressing with #IPFS. The folks at @spritelyinst have some ideas around this.

Got other ideas about an #ActivityPub codebase that could do interesting things with #IPFS?

Want to add IPFS support to Pleroma, MissKey, or perhaps even more interestingly Pixelfed or PeerTube?

Let us know your ideas!

@fission we are open to integrating whatever makes sense at cryptodon.lol (but will continue to primarily host our own media also for availability / speed / etc). one hard problem this network will have in the long term -- how to handle low-need-to-access/stale archival content as postgres clusters grow massive with adoption?

@phil @fission IPFS can be fast if you don’t use best-effort shared services ;)

Just like Twitter started with Rails, Mastodon isn’t the only codebase or architecture.

I think having opt-in, per-user archiving of older posts to static files on IPFS might be an interesting direction, even on the existing Mastodon codebase.

@fiee @fission
What are the guarantees for files put on IPFS? Will those be kept indefinitely, unless the creator removes them? Who is paying the bill for infra and how?

@fission @FrohlichMarcel @fiee heya, founder of Fission here.

Server admins will need to choose a provider or run their own IPFS node, much like they do in running a server or choosing S3 for files.

The advantage is, the whole network can help persist files.

Ideally we might even have per-user options in the future for your own stuff.

@FrohlichMarcel not sure what you mean by guarantees. I answered already that server admins would need to decide how to run it.

With the IPFS protocol, if just one person keeps a piece of content online, it is available.
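The self-verifying property behind this claim can be sketched in a few lines of Python. This is an illustration only: real IPFS addresses are multihash-encoded CIDs, not raw SHA-256 hex digests, but the idea is the same — the address is derived from the bytes, so it doesn't matter who serves them.

```python
import hashlib

def content_address(data: bytes) -> str:
    # Stand-in for an IPFS CID: the address is a hash of the content itself.
    return hashlib.sha256(data).hexdigest()

def verify(address: str, data: bytes) -> bool:
    # Bytes from ANY peer, gateway, or cache can be checked against the
    # address, which is why one remaining host is enough to keep content alive.
    return content_address(data) == address

photo = b"...media attachment bytes..."
addr = content_address(photo)
assert verify(addr, photo)             # bytes match the address
assert not verify(addr, b"tampered")   # altered bytes fail verification
```

Because verification happens on the receiving side, trust shifts from the server's identity (as with HTTPS) to the content itself.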

@boris Okay, i.e. as long as the server admin rents space or runs a node, everything is like an ordinary FS. All other features are on top.
Thank you.

@FrohlichMarcel one more thing: it doesn’t have to be just the server admin (as is the case with the HTTP protocol)

So, users could persist their own content.

@fission I certainly would appreciate adding IPFS to Pleroma :blobcat:

though, as for PeerTube, aren’t the videos already shared on the BitTorrent network? What additional benefit would adding IPFS bring?

@xarvos great! We should get a separate bucket going for Pleroma and other codebases

IPFS and content addressing give you self-verifying content. We think that is a nice base layer that pairs with WebTorrent and other transport layers (yes, and!)

@fission @spritelyinst
Seeing as many of the same issues are probably relevant for #ActivityPub + content addressing with #EthSwarm, perhaps we should team up?

@fission @opencollective
Meanwhile I have been discussing with EthSwarm and Fair Data Society about adding EthSwarm support to Mastodon, to reduce the storage burden on instances and allow easier and faster sharing of media files.

#swarm #ethswarm #ethereum #ipfs #mastodon #p2p

@cobordism great! I was in Berlin in September at the meetup. I hope that EthSwarm can also support compatible content addressing plus add persistence.

@fission

This means that media attachments can also be collectively shared across the fediverse, just like messages already are.

this is rather misleading: media attachments and messages are already shared across the fedi in a very similar way; someone’s remote instance creates them, and your instance either receives or pulls them, stores them in local storage, S3, or a database, and then serves them to users. The main difference currently is that messages use AP, whereas media transfer just uses good old HTTP

You mention hometown, have you spoken to @darius about this already?

@dumpsterqueer @fission we have been in touch and I have indicated that I'm open to pull requests (basically, I'm happy to help provide a storage alternative that isn't owned by Amazon)

@darius @dumpsterqueer @fission

I'm interested in federated storage too, specifically to try and find something more practical to use as a backend for video.

Re object storage, the S3 protocol can be used as a de facto standard for some open source implementations. That's something I'm likely to implement on my network soon.

Re IPFS, I spent a lot of time experimenting with this for video streaming and vod distribution but my ultimate conclusion is that there's little more than hype.

@darius @fission @dumpsterqueer @scanlime would this mean that the traffic / storage for images attached to posts would be shared / not duplicated?

@jcn @darius @fission @dumpsterqueer

I don't think lack of duplication is really a goal anyone's working on? The ideal would be a controlled amount of duplication for redundancy and load balancing.

@scanlime @darius @dumpsterqueer @fission is that because “storage/bandwidth are cheap” or for allowing instances to maintain full autonomy? It seems like some deduplication would benefit the entire ecosystem, but I admit to not having thought through all the implications.

@darius @dumpsterqueer @fission @jcn @scanlime I think the goal is sharing the load of content in a way that scales from single-user instances up to large ones, with a mix of strategies all pointing to the same content address.

If even one person cares about a piece of content, they can keep it online. So data portability is one aspect.

@jcn @scanlime @dumpsterqueer @fission for hosting stuff like video with anything approaching what people expect from modern perf, duplication is basically the only way we do it. I used to work at Akamai, a huge CDN, and much of the business model was "we duplicate your video on servers around the world physically close to people so they can stream better"

@dumpsterqueer @scanlime @fission @darius for sure, and I assume if Akamai could have put the entire cache on everyone’s machine locally they would have. I was just wondering if there’s a balance between every instance having all the content, and it all being centralized.

@jcn @dumpsterqueer @fission yeah! as @scanlime said earlier, it's about a controlled amount of duplication, a happy medium

@jcn @darius @scanlime @fission @dumpsterqueer one of my goals is in fact to explore exactly that:

What if we, collectively, used more edge techniques for users, devices, etc. to cache content, relying in part on the self-verifying nature of content-addressed data?

@fission @scanlime @dumpsterqueer @darius I’m personally not interested in cementing the S3 API as a pseudo protocol

Definitely use it as a bridge for software that already has interfaces for a quick hack.

I like leaning into what content addressing can give us as a flat open global namespace.

@boris @fission @dumpsterqueer @darius

So content addressing is appealing for its simplicity, but it's basically worst-case for all of your caches and network latency.

@dumpsterqueer @fission @scanlime @darius unless it’s already local ;)

I try not to kid people that an arbitrary instance HTTP gateway with best effort services is going to work / be fast.

Like anything else — people / groups are going to need to run community infra.

@boris @dumpsterqueer @fission @darius

The HTTP gateway is not even close to being the only problem for IPFS, in fact if you start moving away from gateways to local nodes you start to see the other problems more clearly because you aren't relying on the relatively central and long-lived gateway node for all lookups.

I did a whole prototype of a browser-based IPFS video streamer and it worked but IPFS was really not helping, and i decided it was the wrong direction.

https://github.com/scanlime/rectangle-device

@darius @scanlime @fission @dumpsterqueer streaming is hard!

Streaming over an HTTP gateway to IPFS just adds extra steps.

More native protocol approaches need work. What interests me is local caching, e.g. with IPFS support turned on in Brave Browser, you only download large assets once.
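The download-once property of local caching follows from content addressing: the cache key is the address, and any source can fill it. A minimal sketch, where `fetch` stands in for a hypothetical transport call (gateway, peer, or local node):

```python
import hashlib

class ContentAddressedCache:
    """Toy client-side cache keyed by content address. Any peer or
    gateway can supply the bytes; the result is verified locally."""

    def __init__(self):
        self._store: dict[str, bytes] = {}

    def get(self, address: str, fetch) -> bytes:
        # Only hit the network the first time; later lookups are local.
        if address in self._store:
            return self._store[address]
        data = fetch(address)  # `fetch` is a hypothetical transport function
        if hashlib.sha256(data).hexdigest() != address:
            raise ValueError("fetched bytes do not match their address")
        self._store[address] = data
        return data

calls = 0
def fake_fetch(addr):
    global calls
    calls += 1
    return b"large video segment"

addr = hashlib.sha256(b"large video segment").hexdigest()
cache = ContentAddressedCache()
cache.get(addr, fake_fetch)
cache.get(addr, fake_fetch)
assert calls == 1  # second request was served from the local cache
```

Because the address verifies the bytes, the cache never needs to trust where they came from — which is what makes "anyone can help host" safe.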

@boris @darius @fission @dumpsterqueer

"extra steps" isn't the problem, fundamental efficiency problems with IPFS are. and local caching doesn't help for video unless your browser wants to keep a record of every video it's downloaded, which most people don't for both privacy and space reasons.

@fission @darius @dumpsterqueer @scanlime yep that’s where pinning comes in, where users can choose which pieces of content they care about and want to help preserve (I think of it more like favourites)

Both for themselves (offline support!) and others.

@boris @fission @darius @dumpsterqueer

Think through how this works though. Let's say a user pins a video. Do they do this before watching or after? When does the client determine which ipfs object hashes (a long list) make up the whole video? Does everyone's browser maintain a huge database of completely incoherent hashes?
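The "long list of hashes" question above comes from how IPFS chunks large files. In reality IPFS builds a UnixFS Merkle DAG with its own chunker; this simplified sketch (fixed-size chunks, a flat hash list, plain SHA-256 instead of CIDs) shows why a client only needs to pin one root address rather than track every chunk hash:

```python
import hashlib

CHUNK_SIZE = 256 * 1024  # assumption: IPFS's default chunker is ~256 KiB

def chunk_hashes(data: bytes) -> list[str]:
    # Each fixed-size chunk gets its own content address.
    return [hashlib.sha256(data[i:i + CHUNK_SIZE]).hexdigest()
            for i in range(0, len(data), CHUNK_SIZE)]

def root_address(hashes: list[str]) -> str:
    # One hash over the ordered chunk list stands in for the Merkle DAG
    # root: pinning the root implies pinning the whole chunk list.
    return hashlib.sha256("".join(hashes).encode()).hexdigest()

video = b"\x00" * (600 * 1024)   # ~600 KiB of placeholder "video" data
hashes = chunk_hashes(video)     # three 256 KiB chunks
root = root_address(hashes)
```

So a pin is a single root address; the client resolves the chunk list from the root on demand instead of maintaining a database of incoherent hashes. Whether that resolution is fast enough in practice is exactly the contention in this thread.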

@fission @dumpsterqueer @scanlime @darius I understand. The protocol needs work. My team @fission is working on improvements to the protocol for various use cases. Not sure if/when the video use case will make sense.

I’m personally committed to having content addressed commons networks that explore spaces that aren’t HTTP server centric.

@boris @fission @dumpsterqueer @darius

What parts of IPFS make it worth using? The DHT is bad, the gossip is bad, the latency is bad.

I've seen folks get sucked into the cult of hashing before and this seems like the same drive. Content addressing makes "anything possible" by making everything equally pessimal.

@dumpsterqueer heya, founder of Fission here. Not meaning to be misleading.

Lots of work to do; the general approach with IPFS and content addressing is that anyone can help host as well as cache locally, from clients to instances to desktop users

@fission fwiw peertube, also on the fediverse, uses bittorrent/webtorrent for sharing the load of videos. Much more performant, usable and alive than IPFS

@fission @rakoo yeah I think BitTorrent/WebTorrent is great as a transport protocol. These things can be mixed and matched.

I think unique, self-verifying content addresses are an important property.

@fission @opencollective Yes, I've been talking about this for so long! #IPFS makes decentralized networks more cooperative and efficient!

@fission @opencollective this is great. One initiative that I think could massively help many people is to set up a collective matchmaking service for inter- and intra-city small transport tasks/moving/man+van services.

PL Network: The Protocol Labs Network micro blogging server.