screen-scraper public support
Run a script after scraping session
I am having the following problem.
I need to run a db sync script after the scraping session has ended.
I think it has to be done with a script, but how do I tell the script to
execute only after the scraping session has ended?
Any ideas on how to do this?
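If I remember right, screen-scraper lets you associate a script with the scraping session itself and pick a trigger along the lines of "After scraping session ends" (worth verifying in the docs for your version). The script can then just shell out to the db sync command. A minimal, runnable sketch of that script body in Python; the sync command itself is site-specific, so an `echo` stands in for it here:

```python
import subprocess

def run_db_sync(command):
    """Run an external db-sync command and return (exit code, stdout)."""
    result = subprocess.run(command, capture_output=True, text=True)
    return result.returncode, result.stdout.strip()

# Stand-in for something like ["/usr/local/bin/db_sync.sh"]
# (hypothetical path; substitute your real sync command).
code, output = run_db_sync(["echo", "db sync done"])
```

A nonzero exit code from the sync command can then be logged or mailed out, so a failed sync after a long scrape doesn't go unnoticed.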
Varying numbers of images
Hi
I'm having problems scraping the images from a results page. The problem is that there is not always the same number of photos. For example one result could show:
another could be:
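A varying number of photos is usually handled by a sub-extractor pattern that is applied repeatedly, matching however many times it can. The same idea in plain regex terms, with two hypothetical result snippets of different sizes:

```python
import re

# Two hypothetical result snippets with different photo counts.
result_a = '<div><img src="a1.jpg"><img src="a2.jpg"><img src="a3.jpg"></div>'
result_b = '<div><img src="b1.jpg"></div>'

def extract_images(html):
    # findall returns however many matches exist (including zero),
    # so a varying number of photos per result is not a problem.
    return re.findall(r'<img src="([^"]+)"', html)

images_a = extract_images(result_a)
images_b = extract_images(result_b)
```

The key design point is to match each image individually rather than writing one pattern that hard-codes a fixed number of image slots.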
sendMail function and what is used as the 'Sender'?
You can specify the body, attachments, and recipients. But what is used as the sender for 'MAIL FROM' on SMTP?
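I don't know offhand what sendMail puts in the SMTP envelope, so check the API docs for your version. As a workaround, a script can build and send the message itself, which makes the sender fully explicit. A sketch that only constructs the message (no mail server needed); the addresses are hypothetical:

```python
from email.message import EmailMessage

# Hypothetical addresses; with smtplib you control the sender explicitly,
# and send_message() uses the From header as the envelope MAIL FROM by default.
msg = EmailMessage()
msg["From"] = "scraper@example.com"
msg["To"] = "admin@example.com"
msg["Subject"] = "Scrape finished"
msg.set_content("The scraping session has ended.")

# To actually send (requires a reachable SMTP server):
# import smtplib
# with smtplib.SMTP("localhost") as s:
#     s.send_message(msg)
```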
Once a scrape is started - how can you cancel/abort it?
Say I see problems in the log; I may as well stop, fix, and restart.
The problem is that my scrape is many levels deep. I saw a strange error last night, but the only way to stop it was to shut the application down, which loses the log, darn!!
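If I recall correctly, the API has a session.stopScraping() call you can invoke from a script (e.g. when an extractor pattern fails), which ends the run cleanly without killing the process; please verify against the API docs. The same idea outside screen-scraper is a sentinel-file check that a long-running loop polls between pages, so an operator can request an abort without losing the log. A runnable sketch:

```python
import os
import tempfile

def stop_requested(flag_path):
    # Poll for a sentinel file between pages and bail out cleanly,
    # preserving the log instead of killing the whole process.
    return os.path.exists(flag_path)

flag = os.path.join(tempfile.gettempdir(), "stop_scrape.flag")
if os.path.exists(flag):
    os.remove(flag)

before = stop_requested(flag)   # no flag yet: keep scraping
open(flag, "w").close()         # operator "touches" the flag to request an abort
after = stop_requested(flag)
os.remove(flag)                 # clean up for the next run
```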
Soap server request examples
Hi
I am after some examples of using the SOAP interface. The docs only talk about using libraries to interface with the SOAP server. I have an application that can make web service SOAP calls, and I wish to use it to call the SOAP server.
Does anyone have examples that show the required header info (SOAPAction) and the XML requests and responses?
Regards
Stephen
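At the wire level, a SOAP 1.1 call is just an HTTP POST with a `Content-Type: text/xml` header, a `SOAPAction` header, and the payload wrapped in a SOAP Envelope. The operation and parameter names below are assumptions for illustration; the real ones come from the server's WSDL. A sketch that builds such a request and checks the envelope is well-formed:

```python
import xml.etree.ElementTree as ET

# Hypothetical operation and parameter names; consult the server's WSDL
# for the real ones and for the expected SOAPAction value.
envelope = """<?xml version="1.0" encoding="UTF-8"?>
<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <startScrapingSession>
      <sessionName>My Session</sessionName>
    </startScrapingSession>
  </soap:Body>
</soap:Envelope>"""

# A raw SOAP 1.1 call is an HTTP POST carrying these headers:
headers = {
    "Content-Type": "text/xml; charset=utf-8",
    "SOAPAction": '""',  # often an empty quoted string; the WSDL says what it should be
}

root = ET.fromstring(envelope)
body = root.find("{http://schemas.xmlsoap.org/soap/envelope/}Body")
```

The response comes back as a matching Envelope/Body, with faults reported in a `soap:Fault` element inside the Body.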
Scraping session was either invalid or has not been set.
I get this error while scraping using the non-gui tarball of screen-scraper.
The same thing works fine with the GUI version. We need the non-GUI version for our server. Any ideas?
-agn
Can this handle multi-level/multi-record situations?
Basically, the page contains a set of items, and within each item there may be a list of locations, or possibly none.
I DO have some very elaborate scrapes done, but I don't see an easy way to gather this type of data from a scrape.
Is it as simple as two patterns, using session variables, with the second tier of data using some data from the first tier? Thanks.
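The two-tier approach described above is exactly the usual shape: an outer pattern matches once per item, and a sub-pattern is applied to each item's matched block, matching zero or more locations. The same logic in plain Python, on a hypothetical page:

```python
import re

# Hypothetical page: each item block may contain zero or more locations.
page = """
<item><name>Widget</name><loc>NY</loc><loc>LA</loc></item>
<item><name>Gadget</name></item>
"""

records = []
# First tier: one match per item block.
for block in re.findall(r"<item>(.*?)</item>", page, re.S):
    name = re.search(r"<name>(.*?)</name>", block).group(1)
    # Second tier: applied only to this item's block; zero hits is fine.
    locations = re.findall(r"<loc>(.*?)</loc>", block)
    records.append({"name": name, "locations": locations})
```

Scoping the second-tier pattern to the first tier's match (rather than the whole page) is what keeps each location list attached to the right item.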
Some problem with multipage search results...help!
I need to get all pages of the search results.
For example: each page has five records, and there are 10 pages.
On pages 2, 3, 4, 5, 6, and so on, the only thing that changes is the pageNumber, which looks like this:
http://******/***.do?pageNumber=2&atn=ATN_GOTO_PAGE&=
I added this URL to the scraping session as a scrapeable file, but it doesn't work: I always get the website's error page.
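Sites like this usually reject a bare pageNumber URL because the search state lives in a server-side session: the page-N request typically needs the same cookies (and often the original search parameters) as the page-1 request. Within one screen-scraper session the cookies from page 1 should carry over automatically, so the page-N scrapeable file needs to run in that same session rather than being fetched cold. The URL-building half of the loop, with a hypothetical base URL standing in for the masked one above:

```python
# Hypothetical base URL standing in for the masked one in the post.
base = "http://example.com/search.do"

# Pages 2 through 10; page 1 was already fetched by the search itself.
urls = [
    f"{base}?pageNumber={n}&atn=ATN_GOTO_PAGE&="
    for n in range(2, 11)
]
```

In screen-scraper terms, a script after the page-1 pattern would set pageNumber in a session variable and re-invoke the same scrapeable file until no "next page" is found.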
Help! Some problem with an incorrect URL.
There is a website that always keeps a session ID in its URLs.
Generally, a session stays active for a certain duration; once that duration has passed, a URL containing the old session ID no longer works.
I think it has something to do with cookies, but I don't really know why.
Please help me solve this problem.
Thank you!
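The usual fix is to never hard-code a URL that contains an old session ID: re-request an entry page first so the server issues a fresh session, extract the current session ID from that response (or let cookies carry it), and substitute it into subsequent URLs. A small sketch of the substitution step, assuming the common Java-style `;jsessionid=...` URL format (your site's format may differ):

```python
import re

# Hypothetical URL style; many Java sites embed ";jsessionid=..." like this.
stale = "http://example.com/list;jsessionid=OLD123?page=1"

def refresh_session_id(url, new_id):
    # Swap whatever session ID the URL carries for a freshly
    # extracted one before making the request.
    return re.sub(r"jsessionid=[^?;]+", f"jsessionid={new_id}", url)

fresh = refresh_session_id(stale, "NEW456")
```

In a scraping session this means an extractor pattern captures the session ID from the entry page into a session variable, and later scrapeable files embed that variable in their URLs.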
Pre-Buying Question
Hello,
Is it possible to do exactly what this service does: www.feed43.com? That is, to extract specific elements from various websites, such as link titles, links, and other data, and then publish them as an RSS feed?
Thanks for the help
Daniel
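Extracting titles and links is standard extractor-pattern work; publishing them as RSS just means writing the extracted records out as RSS 2.0 XML, which a post-scrape script can do. A minimal sketch with hypothetical scraped records:

```python
import xml.etree.ElementTree as ET

# Hypothetical records a scrape might have collected.
items = [
    {"title": "First article", "link": "http://example.com/1"},
    {"title": "Second article", "link": "http://example.com/2"},
]

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Scraped feed"
ET.SubElement(channel, "link").text = "http://example.com/"
ET.SubElement(channel, "description").text = "Items extracted by the scrape"
for it in items:
    item = ET.SubElement(channel, "item")
    ET.SubElement(item, "title").text = it["title"]
    ET.SubElement(item, "link").text = it["link"]

feed_xml = ET.tostring(rss, encoding="unicode")
```

Writing `feed_xml` to a file served by any web server is enough for feed readers to consume it.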