Freeze on Web Interface

We have screen-scraper running on a server, and we access it through the web interface. It's one of the more recent alpha releases, and we're seeing fairly consistent freezing. Basically, we have a few scrapes running at once (usually at least three), and then we try to import another one. The import simply hangs and never completes. If we exit the page and try to reload the interface, the top of the GUI loads, but none of the actual scrapes load; it just sits there spinning. All running scrapes have stopped as well. I can't find any error messages in the logs. The only way to recover is to kill the java process and restart the server daemon.

Any thoughts? If this isn't a quick fix, I would like to downgrade to the last stable version... is that possible without reinstalling?

Thanks,
Chris

One more possible hint... we spawn new scraping sessions within our scrapes, using a JAR file. So, for example, we're collecting store locations, and then for each one we run some Java code that calls up scraping sessions to take that data and geocode it. Each call creates a new entry in the "Running" tab, which fills up quickly with these spawned sessions. This may not have anything to do with it, but I thought I would mention it.
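Roughly, the code in the JAR does something like this (a simplified sketch; the "Geocode Location" session name and ADDRESS variable are placeholders, and it assumes the server is at the default localhost:8778):

    import com.screenscraper.common.RemoteScrapingSession;

    public class Geocoder {
        // Spawns one geocoding session on the screen-scraper server
        // for a single store location.
        public static void geocode(String address) throws Exception {
            // Connects to the server (localhost:8778 by default).
            RemoteScrapingSession session = new RemoteScrapingSession("Geocode Location");
            try {
                session.setVariable("ADDRESS", address);
                session.scrape(); // blocks until the spawned session completes
            } finally {
                session.disconnect(); // release the connection even on failure
            }
        }
    }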

After more testing, I'm fairly certain the freeze happens because of the aforementioned spawning of new sessions within scrapes (or rather, within a JAR library called by the scrape). I got the freeze without the import step. I was running three scrapes simultaneously, each of which spawned these other, shorter scrapes within it. One thing to note about this process: a scrape will not spawn more than one RemoteScrapingSession at a time; it waits until one is finished before starting the next. So, in the above scenario, the most sessions running simultaneously would be six. And I do remember to call .disconnect() on each RemoteScrapingSession object in my code. I can send over my code if you would like to look at it, but I'd rather not post it to the forums.
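To be concrete about the one-at-a-time behavior, the spawning loop amounts to something like this (again simplified, with placeholder names):

    import com.screenscraper.common.RemoteScrapingSession;
    import java.util.List;

    public class StoreGeocoder {
        // Because scrape() blocks until the remote session completes, each
        // iteration finishes before the next begins, so a single scrape never
        // has more than one spawned session running at a time.
        public static void geocodeAll(List<String> storeAddresses) throws Exception {
            for (String address : storeAddresses) {
                RemoteScrapingSession session = new RemoteScrapingSession("Geocode Location");
                try {
                    session.setVariable("ADDRESS", address);
                    session.scrape();
                } finally {
                    session.disconnect();
                }
            }
        }
    }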

One more thing: I looked at all the log files, and while none of them have errors logged in them, there are about 50 log files all marked as last modified at the exact time of the crash. Most of these log files represent completed scraping sessions; they all say "Scraping Session XYZ finished". It's as though they aren't being cleaned up fast enough. The error doesn't happen when we spawn the same number of scrapes at a slower pace... is there something I can do in the code to clean up the objects more efficiently?

Hi Chris,

We made a change that might address this in one of the latest alpha versions. I'm not sure exactly which alpha you're using, but if it's not the latest (4.5.24a), could you try that and let us know how it goes? If you are using that version, could you send me your code so I can test on our side?

Thanks,

Todd

Thanks, I'll try that and let you know.

It worked great on the same scenario that froze it before, so things are looking good. We'll try it at full scale and let you know.

Thanks,
Chris