Is there a backup version of the files on the server, like the articles, books and perhaps a mirror of the website itself, available to download via torrent and p2p? I think it would be very useful, not only to avoid the risk of data loss but also as a possible countermeasure in case of cyber attacks. It could also be used to make the files generally accessible on p2p networks. Additionally, I want to suggest links and ads showing content about Tor (The Onion Router) to provide anonymity for users, along with recommendations to download it.
An example of what I'm suggesting can be found here:
http://bookshelf.theanarchistlibrary.org/library/the-anarchist-library-on-torrent
Another example can be seen here, too:
https://www.marxists.org/admin/mirrors/
Not sure how well that would work here as the website is very frequently updated, but it could be an interesting emergency backup.
I think it should be done if it's not too much work. These days, copying/reproduction is the best form of storage and archiving. And I think it's worth considering that in some places access to this site may be tracked or blocked, some folks don't have access, etc., hence having an offline library or a libcom USB stick to be passed around for copying may be a good thing?
I've been nagging about this as well... The thing is, as I understand it, libcom is just one huge blob of a database, which means there's stuff in there you can't pass around. And it's a fair bit of work by trusted people to clean it up.
Pushing it all out into flat files would be great! Marxists.org is beautiful with its clean, simple HTML files.
Hi, yeah, this is something we would like to do. But yes, the problem is that we are one big database, so we can't just turn that into a torrent, otherwise people could download things like private messages, personal information, etc.
We should be able to at least bundle all of our file attachments together into a torrent file, which would still be pretty good. Unfortunately, we just don't really have enough tech people for the amount of work we need to do at the moment. So if anyone could help us out with it, please let us know!
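For whoever picks this up, the bundling step itself should be the easy part; something like this (just a sketch: mktorrent, the path and the tracker URL here are placeholders, not how libcom is actually set up):

    # build one torrent from the public file attachments
    # (path and tracker are examples only)
    mktorrent -a udp://tracker.opentrackr.org:1337/announce \
              -o libcom-attachments.torrent \
              /var/www/libcom/files

The hard part is the cleanup before that, i.e. making sure only public attachments end up in that directory.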
I tried to download the site using wget, but unfortunately it didn't work well. So now I am trying to download a mirror version from web.archive.org with Warrick. My future project is to put the .html and .pdf files directly on eDonkey and other p2p networks and, at the same time, to offer a small version of the website on a server running directly on Tor/I2P/Freenet. I would be grateful if someone could download the files and share them.
http://web.archive.org/web/20150907034517/http://libcom.org
https://code.google.com/p/warrick/wiki/About_Warrick
https://github.com/hartator/wayback-machine-downloader
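In case anyone else wants to try the same route, the wayback-machine-downloader linked above is run roughly like this (a sketch; I can't vouch for how complete the resulting mirror is):

    # Ruby gem that pulls a site back out of the Wayback Machine
    gem install wayback_machine_downloader
    # fetch files on or before the snapshot linked above
    wayback_machine_downloader http://libcom.org --to 20150907034517
    # output lands under ./websites/libcom.org/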
Keeping a "live" archive with
Keeping a "live" archive with rolling updates would be difficult and unnecessary. Every 6-12 months, the new content can be added to the last version of the archive.
The main question is how large all the library files here would be. Besides that, I expect the sheer number of pages to be high as well.
There are a number of ways to do this, but the project would require time, bandwidth and hard drive space. None of which are free in today's world. I'd do it pro bono if someone would be willing to contribute a decent hard drive for the initial process of downloading and storing a primary physical copy before the process of conversion and slimming down.
What happened when you used wget? "wget -r" should do the trick, assuming bandwidth and hard drive space allow for it. There's also HTTrack, although I've never tried it.
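For a site this size I'd be a bit gentler than a bare "wget -r", though; roughly this (untested against libcom itself):

    # full mirror: recurse, keep timestamps, rewrite links for
    # offline reading, and rate-limit so the server isn't hammered
    wget --recursive --level=inf --timestamping \
         --convert-links --adjust-extension --no-parent \
         --wait=1 --random-wait http://libcom.org/

    # the HTTrack equivalent would be roughly:
    httrack http://libcom.org -O ./libcom-mirror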
There's also Google-Fu. Search with "filetype:pdf" and "site:libcom.org".
wget says something like "the file on the server is newer", so it starts to download them again, several times. I tried to solve this using -nc, but that led it to stop downloading after a while.
I suppose that searching with Google keywords doesn't work properly because it doesn't show all the .pdf files on the website.
Is there any work being done on this front?
I am currently seeding the anarchist library via BitTorrent, and it would be awesome if I could also start seeding libcom, even better if it's possible to host a mirror...
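Once a libcom torrent exists, joining the seeding should be a one-liner for anyone with a spare box; for example with aria2 (the filename here is hypothetical):

    # download, then keep seeding regardless of share ratio
    aria2c --seed-ratio=0.0 libcom-attachments.torrent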