Error Trapping & bad links

Hi All,
I have been using try...catch in my interpreted Java scripts. I'm wondering how to trap the errors that happen when a link times out.

The other thing I'm having trouble with is finding the command(s?) (I'm new to Java) needed to quit a script early. For instance, when crawling through a web page with many links, if one request times out, Screen Scraper seems to keep running through the scripts anyway, reporting that no extraction patterns matched. I'd like to trap the error at that point and transfer control back to the parent script so I can grab the next link instead of waiting for everything to time out.

Thanks for any help...
--jeremy

Re: Error Trapping & bad links

Hi Jeremy,

I think you might find these two methods useful:

scrapeableFile.wasErrorOnRequest()

and

scrapeableFile.noExtractorPatternsMatched()

Documentation on both can be found here: http://www.screen-scraper.com/screen-scraper/doc/screens/using_scripts.htm

You might also want to check the length of the response to see whether anything came back at all. For example:

if( scrapeableFile.getContentAsString().length()==0 )
{
...
}

In terms of trapping these things so that you can transfer control back to the parent script, I'm not sure how you have everything structured, but it may just be a matter of checking for those errors at the right points so that scrapeable files that shouldn't be invoked never are.

Hope that helps. Please don't hesitate to reply if you'd like anything clarified.

Best,

Todd Wilson