The current version of the Feed43 engine is written in plain old Perl. As the number of feeds passed 7,000 and we started getting several hits per second, our hosting provider had little choice but to disable the engine script. When we created Feed43, we never imagined it would become so popular despite its "techie-ness" (regular expressions, you know).
We did our best to let our paid users continue using the service without interruption. As a side effect of disabling the free feeds, they are actually having a good time now: their feeds are processed almost instantly. :)
But we still want Feed43 to have a free version (that is, after all, why we named it the way we did). We are now busy porting the engine to mod_perl. It is a tough task, but worth doing: our current estimates show the scripts running about ten times faster. That is good news.
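For those curious what "running under mod_perl" means in practice, here is a minimal, purely illustrative mod_perl 2 response handler. The package name Feed43::Handler and the sample output are made up for this sketch and are not the actual engine code; the point is that the Perl interpreter and the compiled code stay resident inside Apache between requests, which is where most of the speedup comes from.

    package Feed43::Handler;

    use strict;
    use warnings;

    use Apache2::RequestRec ();              # provides $r->content_type
    use Apache2::RequestIO ();               # provides $r->print
    use Apache2::Const -compile => qw(OK);

    sub handler {
        my $r = shift;

        # No per-request interpreter startup and no script recompilation:
        # this handler is loaded once and reused for every request.
        $r->content_type('application/rss+xml');
        $r->print('<?xml version="1.0"?><rss version="2.0"><channel/></rss>');

        return Apache2::Const::OK;
    }

    1;

In the Apache configuration it would be wired up roughly like this (again, an illustration, not our real setup):

    PerlModule Feed43::Handler
    <Location /feed>
        SetHandler perl-script
        PerlResponseHandler Feed43::Handler
    </Location>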
Once the engine is ported, we will move Feed43 to a separate server dedicated to it and open it up for public access again.
If everything goes well, we estimate the migration will take about 3-4 weeks. So take a deep breath and wish us luck. We know you will miss Feed43 during this period.
4 comments:
Maybe you could try forcing feed fetching via, e.g., FeedBurner only? It fetches the source once every 30 minutes and caches it locally.
psz, though FeedBurner fetches a single feed only once every 30 minutes or so, it unfortunately downloads all Feed43 feeds registered with FeedBurner at the same time, in roughly 30-50 concurrent threads, which results in load spikes. So FeedBurner is not a solution we can rely on. By the way, the Google fetcher does the same, but its impact on Feed43's health is even worse: as far as I remember, it polls feeds every 15 minutes, but if a feed returns an error, it starts polling it almost constantly.
I think Feed43 is perfect for "intermediate" techies -- real programmers might duplicate these functions locally. I know I'm very pleased as an RSS "newbie," which brings me to my point:
I've burned quite a few feeds of more-or-less static pages. (The feed community prefers the term "evergreen.") I don't need to re-scrape those pages every six hours -- or ever really.
I'm uploading them to FeedBurner, using their "XML source" tab, then uploading the evergreen content to googlepages.com. Too bad it's the Christmas rush, but I'm gradually reducing my "free" feeds and increasing reliability (hopefully on both ends).
hey, it's pretty awesome for beyond-intermediate techies; I'm a regular regexp-slinger, with an arsenal of scraper scripts of my own -- I even wrote a general purpose scraper app, Sitescooper. Even given that, I've still been using feed43 regularly recently.
it's a nice product. Thanks for writing it!
--j.