Rough day. It started early - I got up before 6am - and got to work on a new idea for solving the RSS problem.
I’d recently started storing feeds in S3, but getting NginX to silently proxy requests to those feeds. This meant that NginX was often busy funnelling RSS requests, downloading the entire feed from S3 and then pushing it out to the browser.
I knew I couldn’t do a real redirect, as that could have disastrous effects down the line: I could end up losing control over people’s feeds if the redirect was interpreted as permanent, or if feed readers just didn’t bother to follow it.
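The old setup amounted to something like this (a sketch only - the bucket name and path layout are assumptions, not the real config):

```nginx
# NginX silently proxies each feed request to S3, so the entire feed
# body flows through the web server on every single hit.
location ~ ^/([a-z0-9-]+)/rss\.xml$ {
    # "podiant-feeds" is a hypothetical bucket name for illustration.
    proxy_pass https://podiant-feeds.s3.amazonaws.com/media/spoke/$1/rss.xml;
    proxy_set_header Host podiant-feeds.s3.amazonaws.com;
}
```

Because it’s a proxy rather than a redirect, the browser never sees the S3 URL, but the web server pays for every byte of every feed.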
I’d always wanted relatively short and readable RSS feed URLs, and it finally struck me that with CloudFront, I could create a distribution pointing at the media/spoke directory in my S3 bucket, and just pick each feed file from there. That would mean feed URLs like http://feeds.podiant.co/platform/rss.xml. Because a piece of code would no longer be handling the feed, I’d need to figure out how to refresh each one when needed, and how to handle redirects. So I set up a cron job for refreshing the feeds, and researched setting 301 redirects on S3 objects. I’ve not yet found out whether this will work through CloudFront, but I’m hopeful. And there’s always the new-feed-url tag that iTunes provides.
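S3 does support per-object redirects via the WebsiteRedirectLocation property (honoured when the bucket is served through its website endpoint, which is what a CloudFront origin can point at). A minimal sketch of what the cron job might upload - the bucket name, key layout and helper are hypothetical, not Podiant’s actual code:

```python
def feed_upload_args(bucket, slug, xml, redirect_to=None):
    """Build the boto3 put_object arguments for one feed file.

    If redirect_to is given, S3 stores the object with a
    WebsiteRedirectLocation, so requests served via the bucket's
    website endpoint get a 301 to the new URL instead of the body.
    """
    args = {
        "Bucket": bucket,
        # Assumed key layout, mirroring the media/spoke directory.
        "Key": "media/spoke/%s/rss.xml" % slug,
        "Body": xml.encode("utf-8"),
        "ContentType": "application/rss+xml; charset=utf-8",
    }
    if redirect_to:
        args["WebsiteRedirectLocation"] = redirect_to
    return args

# The cron job would then do something like:
#   import boto3
#   s3 = boto3.client("s3")
#   s3.put_object(**feed_upload_args("feeds-bucket", "platform", feed_xml))
```

Whether CloudFront passes that 301 through depends on the origin being the website endpoint rather than the plain REST endpoint, which is presumably part of what still needed testing.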
That helped a bit, but we still had problems throughout the day. I recorded Thread with Jon and then tried to get a few other pieces of work done while things were quiet, but couldn’t really focus and ended up going back to the site. I finally looked into something called pgBouncer, which is effectively a connection pooler for PostgreSQL.
I’d been trying Gunicorn’s gevent worker class on and off, but it often caused problems: it was processing lots of requests - which is great - but creating lots of connections to the PostgreSQL database, which meant PostgreSQL would reply with an error saying it had too many connections. Raising PostgreSQL’s connection limit too far would cause memory problems, so I started by separating Redis, Elasticsearch and MongoDB out onto their own servers. This helped a lot, but the big change came when I set up pgBouncer (with the right settings), as it maintains a reusable pool of connections that the Django app can talk to, instead of establishing hundreds of its own. pgBouncer opens connections to PostgreSQL too, of course, but so far it hasn’t run out and seems to be working fine.
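The shape of that setup is roughly the following (a sketch, assuming transaction pooling; the hostnames, database name and pool sizes are illustrative guesses, not the live config):

```ini
; pgbouncer.ini - a small, fixed set of real PostgreSQL connections
; is shared among however many clients the gevent workers open.
[databases]
podiant = host=10.0.0.5 port=5432 dbname=podiant

[pgbouncer]
listen_addr = 0.0.0.0
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 1000   ; gevent workers may open this many
default_pool_size = 20   ; but PostgreSQL only ever sees this many
```

Django then points its database HOST and PORT at pgBouncer (port 6432 here) instead of PostgreSQL directly; with transaction pooling, persistent connections (CONN_MAX_AGE) should stay off so each connection is handed back to the pool promptly.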
Fingers massively crossed, but I feel like we may be out of the woods, for now at least, in terms of dealing with concurrency. Five web servers, three worker servers, and dedicated servers for PostgreSQL, Redis, MongoDB, Elasticsearch and pgBouncer. Having set all that out, I was able to go to bed - after drinking a bit too much wine - feeling relatively OK about the stability side.
However, I’d been made to feel pretty shitty when I’d realised what I’d done had impacted other people. I got a pretty stern message from Todd Cochran - not to me, but about me - and I said I’d try and help deal with the “frenzy” of people moving from Podiant to Blubrry.
All in all it made me feel pretty beat, and like I’d made a big stinky mess on the Internet and had to clean it up. Felt a bit alone, not having anyone to talk to and not wanting to bother anyone, but people on the Podiant Slack were pretty ace.
We now have 30-odd Community applications to sift through, and no sign of them stopping right now.
I went to bed and watched an episode of Crazy Ex-Girlfriend, then tried to watch a bit of Mindhunter, but was falling massively asleep. Pretty dog-tired at the end of a long and very challenging day (both emotionally and mentally).