Sorry about the late reply; I took the day off to generate a trip, review the old code, and do some scaffolding.
So, physical disk errors… that's some serious stuff. If the bill goes up, send me the estimate and I'll foot it.
In other news, I reviewed the logs from early Sunday morning and I see that at one point the number of simultaneous connections reached around 165 - all for the intensive task of transferring images. When I designed the code, I assumed the processing time would scale as n/x for x concurrent connections (and so removed the cap on connections), but I see that it actually behaves more like n-x, so I'm going to cut the cap down to somewhere around 1-5. That should take care of the 520 issue on this side.
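For what it's worth, the cap could be as simple as a semaphore around the transfer work. This is just a sketch - `MAX_CONNECTIONS`, `transfer_image`, and the timing stand-in are all illustrative, not the actual code:

```python
import threading
import time

MAX_CONNECTIONS = 4  # somewhere in the 1-5 range mentioned above (hypothetical)

transfer_slots = threading.BoundedSemaphore(MAX_CONNECTIONS)
active = 0
peak = 0
lock = threading.Lock()

def transfer_image(image_id):
    """Stand-in for one image transfer; blocks until a slot is free."""
    global active, peak
    with transfer_slots:           # at most MAX_CONNECTIONS run concurrently
        with lock:
            active += 1
            peak = max(peak, active)
        time.sleep(0.01)           # placeholder for the actual transfer work
        with lock:
            active -= 1

# Simulate a burst of 20 queued transfers; the semaphore throttles them.
threads = [threading.Thread(target=transfer_image, args=(i,)) for i in range(20)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(peak)  # never exceeds MAX_CONNECTIONS
```

The nice part is that callers don't need to know about the cap at all; excess requests just queue on the semaphore instead of piling up 165 deep.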
I'm planning to move the code to a small-scale production server this weekend, and then to its final home next weekend after some more testing. I'd like to schedule some calibration tests around your schedule (if that's okay with you), so I'll drop you a line about that soon.
I'm glad you mentioned the JSON. The database I'm using - CouchDB - requires everything to be stored as JSON, so I'm already using a JSON representation of the site. I bring that up because I'm planning on writing an inline jQuery module for many of the features in 4ChanX and AppChanX, so we could probably synchronize our efforts on that and any other forthcoming features around a shared interface. I'll send that schema over in the e-mail; while it works for me, it's definitely open for modification. I also remember somebody else asking for a JSON interface for an RSS feed. We should probably get him in on the conversation too.
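To give a flavor of what I mean by a shared interface: a post as a single JSON document that both the CouchDB side and the userscript side can agree on. The field names below are illustrative guesses, not the actual schema (that's coming in the e-mail):

```python
import json

# Hypothetical post document - every field name here is a placeholder,
# not the real schema, which is still open for modification.
post = {
    "board": "g",
    "thread": 123456,
    "no": 123470,
    "time": 1353024000,
    "name": "Anonymous",
    "comment": "Example post body",
    "image": {
        "filename": "example.png",
        "width": 640,
        "height": 480,
    },
}

# CouchDB stores documents as JSON, so a round-trip is all that's needed.
doc = json.dumps(post)
restored = json.loads(doc)
print(restored["no"])  # 123470
```

Whatever names we settle on, the point is that the scraper, the feed, and the browser extensions would all read and write the same document shape.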
Lastly, I haven't seen a solution for long threads that I'd consider elegant on the chans I visit, so what I'm considering is loading posts in chunks in tandem with infinite scroll. I don't know many people who can read 400 posts in 10 seconds, so it might work.
Sage because we're now looking down the rabbit hole.