
I've tried to help keep and seed some historical archives, but it gets tedious quickly, downloading them one by one (and there's not much activity on them anyway). It would be nice if there were a more organized way to help with archival (like downloading a large chunk of their database and seeding it automatically)!


On one of my high seas sites there is an archive team which uses the API to find "at risk" torrents (<3 seeds). People dedicate X TB and have a script that hits the API, finds some at-risk torrents, then grabs them to seed, and repeats until it fills the space allotted to archiving purposes.

Seems like something the IA could implement with some help.
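
A minimal sketch of that loop in Python, purely as illustration: the endpoint, query parameters, and field names below are all made up (nothing here is the real site's API), and it hands magnets to a local Transmission daemon via transmission-remote:

    import subprocess
    import requests  # pip install requests

    API = "https://tracker.example/api/at-risk"  # hypothetical endpoint
    BUDGET = 2 * 1024**4                         # space dedicated to archiving, e.g. 2 TiB
    used = 0

    while used < BUDGET:
        # ask the tracker for torrents with fewer than 3 seeds
        # ("max_seeds", "size_bytes", and "magnet" are assumed field names)
        batch = requests.get(API, params={"max_seeds": 2}, timeout=30).json()
        added = 0
        for t in batch:
            if used + t["size_bytes"] > BUDGET:
                continue  # would exceed the allotted space
            # hand the magnet link to a local Transmission daemon to seed
            subprocess.run(["transmission-remote", "-a", t["magnet"]], check=True)
            used += t["size_bytes"]
            added += 1
        if added == 0:
            break  # nothing at risk fits in the remaining space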


How do they distinguish torrents at risk because they're just unpopular from those at risk because better versions exist?


They only archive original sources, not transcodes. That said, I'd guess they'd start archiving non-sources if there were space.


This helps somewhat, but there might be many original sources of the same thing, more or less depending on what we're talking about?


Without doxxing myself or the site: it's video, so they prioritise by format. It would go something like:

UHDBD > UHDRemux > UHDWebRip > HDBD > HDRemux > HDWebRip > DVD9 > DVD5

With multiple sources within each tier treated equally.
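
As a sketch, that ordering maps naturally onto a sort key; the format labels and dict fields below are assumptions, not the site's actual schema:

    # tiers from most to least preferred source format
    TIERS = ["UHDBD", "UHDRemux", "UHDWebRip",
             "HDBD", "HDRemux", "HDWebRip", "DVD9", "DVD5"]
    RANK = {fmt: i for i, fmt in enumerate(TIERS)}

    def priority(torrent):
        # lower tier index = better source; unknown formats sort last;
        # within a tier everything is equal, matching the policy above
        return RANK.get(torrent["format"], len(TIERS))

    at_risk = [{"format": "HDBD", "seeds": 1},
               {"format": "UHDRemux", "seeds": 2},
               {"format": "DVD5", "seeds": 0}]
    at_risk.sort(key=priority)  # UHDRemux first, then HDBD, then DVD5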

The general thought process is more about the site disappearing, so the focus on sources makes sense: transcodes can be created trivially for a theoretical new site if needed.

With the IA this is unlikely to be an issue, as they'd just want to make sure each torrent has seeds, so there's no real need for prioritization around sources.




