NZB indexers have been getting shaken up a little lately, and even some of the good ones are changing. So, I figured it might be time to see how easy it would be to run my own auto-indexer using Newznab. I’ve been running it on a small VPS with 512MB of RAM.
First lesson is that using Apache on such a small VPS probably isn’t going to work out very well. I’m not very good at trimming down Apache, but even after the little tuning I did, memory usage was hovering around 375MB. Not much left. Thankfully, Newznab gives you the rewrite rules to set up a vhost for Nginx. Under Nginx, memory on the whole system hasn’t crept past 240MB. Even when I’m running the scripts to backfill and process new releases, I’d be surprised to see system memory go higher than 225MB. The highest load average I saw was 2.30. So it’s definitely doable even on a modest box.
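For reference, the numbers above came from watching overall system memory. Nothing Newznab-specific is needed to compare the two setups; standard Linux tools are enough:

```shell
# Overall memory in MB -- watch the "used" column while each
# web server is running to compare footprints.
free -m

# Top processes by resident memory, to see what Apache or
# Nginx workers are actually holding.
ps aux --sort=-rss | head -n 5
```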
I’ve only just started, and I’ve only backfilled one newsgroup on the server so far, just to get an idea of how everything works. First impression is that you really want an Astraweb account if you want to do backfilling. Newznab says that compressed headers only work with Astraweb. I tried on Giganews and it wasn’t happening. It’s been slow going without compressed headers, and that’s with only one newsgroup to backfill.
First off, I tried backfilling one group at 1,100 days. Without compressed headers and with the default maximum messages setting, it’s really slow. Increasing the maximum number of messages to download at once should help out. The other possibility is to not worry about a huge backfill and just let the index build up naturally. I doubt I’ll take that route, even if it is the best solution. Not having Newznab check for passworded releases on backfills is also a good idea, since that check adds so much extra time.
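For anyone curious what the update loop actually looks like: Newznab ships its update scripts under `misc/update_scripts`, and the cycle is roughly the following (the install path here is a placeholder for wherever you put Newznab; backfill days and max messages are configured per-group and per-site in the admin pages, not on the command line):

```shell
cd /var/www/newznab/misc/update_scripts

# Fetch new headers for the active groups
php update_binaries.php

# Walk backwards through each group, up to its configured backfill days
php backfill.php

# Turn the downloaded headers into releases
php update_releases.php
```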
Download speed and size aside, it still takes a long time to add the releases. Even after you download 1,100 days of headers for most of the newsgroups, it might take days for the script to process all the releases. Hopefully my VPS provider won’t get upset about the constant use.
Disk usage I can see being a problem down the road, but other people I’ve read about doing the same thing have reported disk usage of only a few gigs. As long as maintaining a full auto-index stays below 30 gigs once all the groups are indexed, I don’t see too much of a problem. We’ll have to wait and see.
For anyone who wants to run Newznab on Nginx: the rewrite rules they provide are slightly incomplete. Newznab has most of the site expecting SSL, so you’ll need to add:
```nginx
listen 80 default_server;
listen 443 ssl;
```
And of course generate your own self-signed SSL certificate and key.
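Putting it together, a minimal server block might look something like this. The hostname, paths, and PHP-FPM socket are placeholders for your own setup, and you’d still drop in the rewrite rules Newznab provides, which I haven’t reproduced here:

```nginx
server {
    listen 80 default_server;
    listen 443 ssl;

    server_name indexer.example.com;               # placeholder
    root /var/www/newznab/www;
    index index.php;

    # Self-signed cert and key you generated
    ssl_certificate     /etc/nginx/ssl/newznab.crt;
    ssl_certificate_key /etc/nginx/ssl/newznab.key;

    # ... Newznab's own rewrite rules go here ...

    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_pass unix:/var/run/php5-fpm.sock;  # adjust for your PHP-FPM
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    }
}
```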
I doubt I’ll ever open it to the public, but who knows, it could be a small experiment that I end up holding on to for a while. At least I’ll always have my own place for API calls for Sickbeard and Couchpotato.
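The API those clients talk is just HTTP with a couple of query parameters. Something like the following is the shape of the calls Sickbeard and Couchpotato make; the hostname and key here are obviously placeholders for your own:

```shell
API_HOST="indexer.example.com"   # placeholder hostname
API_KEY="0123456789abcdef"       # per-user key from the Newznab profile page

# Capabilities query -- what clients use to probe the indexer:
echo "http://${API_HOST}/api?t=caps"

# A TV search, the kind of call Sickbeard makes:
echo "http://${API_HOST}/api?t=tvsearch&apikey=${API_KEY}&q=some+show"
```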
I followed up this post later with more thoughts on running Newznab.