Interesting to see something like this here. I've been scratching my own itch and writing a self-hosted hybrid of a Pocket alternative and an IPFS archiver as well. Source is here: https://bitbucket.org/lullis/nofollow. A browser extension to make link submission easier would be very handy.
For self-hosted Pocket, there's already Wallabag, you might want to take a look.
It would probably be easier to add IPFS archiving to Wallabag than to roll your own Pocket.
Yes, I am aware of Wallabag and it does some interesting things. It is just that I am allergic to PHP and would like to have a testbed for some other ideas I have:
- inter-instance link sharing, perhaps via ActivityPub.
- an ACL layer on top of IPFS to ensure you share links and content only with people you trust (also to avoid eventual copyright litigation)
- a sanitized way to read links from content farms that get shared on social networks (this was one of the initial motivations for me to start this, and it is why I called it nofollow)
- a perhaps more robust way to have an internet archive.
It is unlikely I will get to do it all, but I am more likely to get some of this done in Python (or Go) than to ever pick up PHP.
Could you give me a little insight into your use case?
I'm personally excited for ipfs, but I've stopped following releases / am feeling burnt out by the slow pace of development. Still, good to see creative uses pop up in my internet bubble!
I was checking my old bookmarks and a big part of them were dead (404). The average lifespan of a web page is about 100 days https://blogs.loc.gov/thesignal/2011/11/the-average-lifespan... , but I would like to keep some articles to read and share even much later. I thought it would be nice to have a simple tool that doesn't require setting up another account: just back up and add a link to your bookmarks. All the technologies needed to fix the problem are already here; I just put them together.
Is it really a backup? IPFS has no 'upload', so if you clear your local storage, everything will be gone. I think it's unlikely that multiple users will generate the same content hash.
Right, it's a local, sharable copy. Actually, Readability cuts off so much HTML that it's quite possible two users will generate the same hash even from a very dynamic website.
Really?
What kind of hash are we talking about here? And generated from what?
I could imagine the opposite, generating 2 hashes from the 'same' article, due to different markup being stripped. Not what you suggest though.
Unfortunately it's unlikely to be a backup, since everything will only live on your local computer. You might get the content from some other user if, by chance, they generated the exact same bytes, but it's rather unlikely.
To back things up, you'll want to pin to an external server like Eternum.io.
If you export your bookmarks, you could auto-parse them and `ipfs pin` the links to a longer-lived archival node running on your main host or offsite/cloud.
You could plug in IPFS Cluster's proxy endpoint (:9095) instead of the IPFS daemon API endpoint and get everything you add replicated to a backup peer.
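The workflow suggested above (parse a bookmarks export, then pin each archived hash to a longer-lived node) could be glued together in a few lines of Python. This is a hypothetical sketch: the regex only matches CIDv0 `Qm...` hashes, and the endpoint defaults assume the stock daemon API port (:5001) and the IPFS Cluster proxy (:9095); adjust both for your own setup.

```python
import re
import urllib.request

# Default endpoints: the local daemon API, or an IPFS Cluster proxy
# (:9095), which replicates every pin across the cluster's peers.
DAEMON_API = "http://127.0.0.1:5001"
CLUSTER_PROXY = "http://127.0.0.1:9095"

def extract_ipfs_hashes(bookmarks_html):
    """Pull CIDv0 hashes out of gateway links in a bookmarks export."""
    return re.findall(r"/ipfs/(Qm[1-9A-HJ-NP-Za-km-z]{44})", bookmarks_html)

def pin_url(cid, api=DAEMON_API):
    """Build the HTTP API request URL that pins one hash."""
    return f"{api}/api/v0/pin/add?arg={cid}"

def pin_all(bookmarks_html, api=DAEMON_API):
    """Pin every archived hash found in the export (network call)."""
    for cid in extract_ipfs_hashes(bookmarks_html):
        # The daemon's HTTP API expects POST requests.
        req = urllib.request.Request(pin_url(cid, api), method="POST")
        urllib.request.urlopen(req)
```

Pointing `pin_all(..., api=CLUSTER_PROXY)` at the cluster proxy instead of the daemon is the only change needed to get the replication behaviour described above.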
I'm not sure whether this is a brilliant or a terrible idea. I have a tendency to collect my to-read stuff in tabs, and every once in a while I clean up my old tabs, read some of the stuff I intended to read, and discover many of them don't exist anymore. So a backup is great, but actually not having to read all the stuff I collect saves me a lot of time.
Sounds like a storage backend for your own Pocket or Evernote Web Clipper.
Coincidentally, I was (slowly) researching doing something similar with Scrapbook and some kind of synchronization a-la Dropbox. I guess this implementation sidesteps the whole issue of organizing files on the fs, though I'm not sure if that's entirely unproblematic.
Considering how much money the people behind IPFS have raised through the Filecoin ICO, I'm surprised the pace of development is still so slow.
Man, you should say "slow" a few more times. The thing hasn't evolved a millimeter in the last 3 years. It's so full of bugs it's impossible to use for any use case beyond the hello-world cat-photo demo. It's also hellishly slow and eats all your computer's resources.
I was an enthusiast until 3 weeks ago when I lost all the references to my objects (twice, in two different manners) because of "a minor bug in mfs" and wasn't able to recover them.
> Could you give me a little insight into your use case?
I'm not the OP, but a use case I might find useful: you have a blog, want to comment on a news article, and want a cached copy of that article. Since web pages often go away after some time, making sure the news article stays up might be useful. This is particularly true if the entity that wrote the article decides it is embarrassing to them and wants to flush it down a memory hole.
In addition, you could continue to reference that cached article even when you go offline - so pinning it to your local node effectively makes it available to any of your locally connected devices without needing to ping a distant server (aka - you can take your entire "toread" stack camping without one-off planning =] )
I wonder what IP lawyers think about republishing works without permission into a network where nothing can be deleted.
I was listening to this Hashing It Out episode with the founders of Storj[1] yesterday. Apparently IPFS isn't even that resilient. The Storj founders said that if you take a look at the IPFS subreddit, half of the links don't work. That's not to discount what you're asking about, but it seems like the incentives for keeping IPFS files up forever just aren't there at the moment.
[1] - http://thebitcoinpodcast.com/hashing-it-out-34/
Storj is incentivized (people get paid to make things available) and IPFS is not. They explain this in the podcast and mention it several times.
IPFS is perfectly resilient as long as the network participants have an interest in keeping the content available.
Yeah, IPFS is still waiting for FileCoin to be implemented. For now, your best bet is to run your own IPFS gateway with your data pinned to it, and perhaps ask fellow IPFS enthusiasts to do the same.
IPFS doesn't bring anything new for this case. Probably the same as they think of any website containing republished works without permission.
Once everyone with a copy deletes it, it disappears. Both for IPFS and the internet in general.
Except the discovery of said works would give IPFS the advantage. If a website is taken down or removes the works, finding it somewhere else becomes the challenge, even though it still exists elsewhere.
With IPFS however, anyone with the file makes it available to anyone looking for it, regardless of who has it, or how many copies there are.
Probably the same way a slave owner thinks when you tell them slave owning is immoral and a violation of other people's self-ownership.
Let's say I write something -- a story, perhaps. And I publish it. I own the IP, and it is my only source of income. How do you consider that the equivalent of slave ownership?
The basic idea behind the parent's argument is that you cannot own knowledge.
There are several arguments for this idea, and massive challenges to overcome if you actually try to act on it. One of these challenges is how authors and similar content creators could possibly make a living in a world where everything they produce can be copied by anyone. There are several thought experiments which try to address this problem, none of which have been successfully implemented large-scale in real life (at least to my knowledge).
But the fact that challenges arise from such an idea does not discredit its validity. It just means that we haven't found a sustainable way to live by that ideal.
There's the Street Performer Protocol: https://www.schneier.com/academic/archives/1998/11/the_stree...
> none of which have been successfully implemented large-scale in real life
To be honest, I've never read of a major example successfully pulling that off.
If you're just mentioning things which have ever attempted to solve that issue, there have been a lot. UBI (universal basic income) comes to mind, as do services such as Patreon, on which people essentially donate to producers in exchange for negligible rewards. Some music bands also give away their music for free and try to make their living from concerts... there have been countless attempts, honestly.
But they're ultimately of negligible impact if you compare their revenue to top-selling products. That's what I meant by successful and large-scale.
Should link to your source code instead: https://github.com/meehow/2read
Right, just wanted to clean it a little first :)
From the description
"Please note that content shared with this addon is just cached on IPFS servers. If you want to store the content permanently, you need to have IPFS node running on your computer."
It's a common misconception that IPFS is something you can "upload to." IPFS makes links permanent in the sense that they will never change. There is no guarantee that the content behind the link exists unless someone has explicitly stored it on a node.
Is there some misconception in that sentence?
Not in that sentence. It's the misconception that the sentence addresses.
> 2read will automatically "pin" your content if you have local node running.
How does 2read detect the local node? My IPFS node runs on different ports. Will it be detected?
Edit:
That would be a no: it does assume port numbers. Maybe add a fallback for when the defaults don't respond.
Yes, it's a bit oversimplified. Thank you for the suggestion. I will test it tomorrow. I hope it can be done without creating a settings page.
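A fallback along those lines could look something like this Python sketch: probe the stock daemon API port first, then any user-configured one. The `/api/v0/version` probe is a real daemon endpoint (newer daemons require POST for it), but the rest is hypothetical glue code, not 2read's actual implementation.

```python
import urllib.error
import urllib.request

def candidate_endpoints(custom_port=None):
    """Endpoints to try, in order: stock port 5001, then a custom one."""
    ports = [5001]
    if custom_port and custom_port not in ports:
        ports.append(custom_port)
    return [f"http://127.0.0.1:{p}/api/v0/version" for p in ports]

def detect_node(custom_port=None, timeout=1):
    """Return the first API endpoint that answers, or None."""
    for url in candidate_endpoints(custom_port):
        try:
            req = urllib.request.Request(url, method="POST")
            urllib.request.urlopen(req, timeout=timeout)
            return url
        except (urllib.error.URLError, OSError):
            continue  # not listening here; try the next candidate
    return None
```

This keeps the common case (default ports) zero-configuration while still letting outliers supply their own port.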
Most people don't change the ports, I'm an outlier, don't sweat it.
If you can get it working with the protocol handler (ipfs://) that'd be awesome, but I went to the effort of changing my IPFS settings, so I can go to the effort of changing a setting for your addon.
Now it has a fallback to ipfs:// (in version 1.3). Thanks for pointing it out. As I understand it, this will only work if you have IPFS Companion, right?
Awesome.
I think there are a couple of other addons that will enable ipfs://, but Companion is the most common.
Unfortunately ipfs:// didn't work, so I rolled back the changes. I also don't have access to window.ipfs, so at the moment I don't know how to handle a custom port.
Cool, I thought about building something like this years ago. I wish the idea could be taken further, that is, all the information I consume (reading, watching, listening, etc) should be available to me for recall. Changes and current availability of the source must not affect my copy.
So kinda like a GitHub for your browsing history?
What in that description makes you think of GitHub specifically?
> all the information I consume (reading, watching, listening, etc) should be available to me for recall. Changes and current availability of the source must not affect my copy.
Reminds me of Github's forks
Cool idea. Can one use hashes like SHA-256 to prove that the original article content has not been modified? Is that already in IPFS?
It's already built-in. URLs in IPFS are Base32 encoded SHA-256 hashes https://docs.ipfs.io/guides/concepts/hashes/
That doesn't tell you anything about the original source, though, just the version that got put on IPFS.
Which surely is the best possible without the original author signing it?
All that can be assured is that the content is what the person mirroring it and linking you to it intended.
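To illustrate why content addressing gives the tamper-evidence discussed above: the address is derived from the bytes themselves, so any edit produces a different address. The sketch below is an analogy only; real IPFS CIDs add multihash/multibase prefixes and chunking on top of the raw SHA-256, so this is not the actual CID algorithm.

```python
import base64
import hashlib

def content_address(data: bytes) -> str:
    # Base32-encode the SHA-256 digest, lowercased with padding
    # stripped -- loosely mirroring how CIDv1 strings look, minus
    # the multihash/multibase/chunking machinery real IPFS adds.
    digest = hashlib.sha256(data).digest()
    return base64.b32encode(digest).decode().lower().rstrip("=")

original = b"the article body"
tampered = b"the article body."  # a single byte appended

assert content_address(original) == content_address(original)  # deterministic
assert content_address(original) != content_address(tampered)  # any edit changes the address
```

So a reader holding the address can verify the bytes they fetched, but the address says nothing about whether those bytes match what was originally published at the source URL.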
You might also take a look at Polar:
https://getpolarized.io/
It does something similar but stores it locally. We have cloud support too, so you can sync between your devices.
We're thinking about IPFS at some point, but it doesn't yet support all the use cases/APIs I need.
I was playing with the idea of doing something similar, i.e. just the reader part, to allow me to read articles in a clean, uncluttered form. But I didn't really know where to start with the content extraction... Turns out it is indeed "somewhat" complicated, judging by the Readability.js code. :D
Firefox already has a built-in 'reader' mode. AFAIK Readability is their implementation (possibly an old version); it's on Mozilla's GitHub.
Firefox uses Readability.js for its reader mode
https://github.com/mozilla/readability
2read uses Readability.js from Mozilla with literally a one-line modification.
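For anyone wondering where to start with content extraction: the core idea behind readability-style extractors can be sketched in toy form as "credit the text inside `<p>` tags to the enclosing container, and keep the highest-scoring container." This is emphatically not Mozilla's algorithm; Readability.js layers many more heuristics (class-name weighting, link density, DOM cleanup) on top. A minimal Python illustration:

```python
from html.parser import HTMLParser

class NaiveReadability(HTMLParser):
    """Toy heuristic: score each <div>/<article> by how many
    characters of <p> text it directly contains."""
    def __init__(self):
        super().__init__()
        self.stack = []    # ids of currently open containers
        self.in_p = 0      # depth of open <p> tags
        self.scores = {}   # container id -> chars of <p> text
        self.texts = {}    # container id -> collected text fragments

    def handle_starttag(self, tag, attrs):
        if tag in ("div", "article"):
            cid = len(self.scores)  # fresh id per container
            self.stack.append(cid)
            self.scores[cid] = 0
            self.texts[cid] = []
        elif tag == "p":
            self.in_p += 1

    def handle_endtag(self, tag):
        if tag in ("div", "article") and self.stack:
            self.stack.pop()
        elif tag == "p" and self.in_p:
            self.in_p -= 1

    def handle_data(self, data):
        # Only count text that sits inside a <p> within a container.
        if self.in_p and self.stack:
            top = self.stack[-1]
            self.scores[top] += len(data.strip())
            self.texts[top].append(data.strip())

def extract_main_text(html):
    parser = NaiveReadability()
    parser.feed(html)
    if not parser.scores:
        return ""
    best = max(parser.scores, key=parser.scores.get)
    return " ".join(t for t in parser.texts[best] if t)
```

A navigation bar full of short links scores far lower than the article body, which is why even this crude version tends to find the right block; the hard part Readability.js solves is all the edge cases.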
"Upload to IPFS" means save to your local IPFS repo and "pin" it. Means you'll never be able to find it again unless you keep the hash forever in a different, external database. Then you should never change computers ever, or you'll lose everything.
Chrome version is waiting for a review :)
Just wondering, does it convert the article into a readable form locally? Not saying this could be useful for paywalled content but...
It converts the article locally, exactly like the Firefox reader does.
Just like "switch to readable mode, print to PDF and save to your local S3-compatible store or an Amazon S3 bucket", only more complicated.