screen-scraper public support
Where is the scraping session saved
Hi,
This is driving me mad. I'm using Windows 7 and have a number of scraping sessions saved, but I cannot find where the current one is stored. When I press save it just saves, without saying where the file went. Is there a way to find out? Any help would be really appreciated.
Sorry, just realised: you export the sessions, and everything is saved in the database. (Not able to delete this post.)
Create DataSet Manually From Script From Session Variables
Hi,
I am looking at this software again to see if it will work for my project. One thing I cannot find an answer to on your site is creating a dataset manually. I can create one automatically, but I want to add a column whose value is saved in a session variable, and I'm getting an error. I have a script that runs after each pattern match:
import com.screenscraper.common.DataSet;

// Keep the DataSet itself in a session variable so every pattern match can reach it.
DataSet billPayPhonesDataSet = (DataSet) session.getVariable("BILL_PAY_PHONES_DATASET");
if (billPayPhonesDataSet == null)
{
    billPayPhonesDataSet = new DataSet();
    session.setVariable("BILL_PAY_PHONES_DATASET", billPayPhonesDataSet);
}
// "dataRecord" is the current record in a script run after each pattern match.
dataRecord.put("PHONE_COST", session.getVariable("PHONE_COST"));
billPayPhonesDataSet.addDataRecord(dataRecord);
Beginner Java Question
Hi there,
I'm trying to obtain the road speeds at various junctions on the M25 from the website below (the bottom-left table on the page). I already have all of the data I need (Junction & Mph) in scraped form:
http://www.frixo.com/m25-anticlockwise.asp
Eg:
Seq 0
Junction 1
Mph 38
Seq 1
Junction 2
Mph 67
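Once the Junction and Mph values are extracted, a small holder class makes them easy to work with in plain Java. A minimal sketch, assuming the scraped values arrive as the text lines shown above (the Reading class and its names are made up for illustration):

```java
import java.util.ArrayList;
import java.util.List;

public class JunctionSpeeds {
    // Simple holder for one scraped record.
    static class Reading {
        final int junction;
        final int mph;
        Reading(int junction, int mph) { this.junction = junction; this.mph = mph; }
    }

    // Turn lines of the form "Junction 2" / "Mph 67" into Reading objects;
    // "Seq n" lines are skipped.
    static List<Reading> parse(String[] lines) {
        List<Reading> readings = new ArrayList<>();
        int junction = -1;
        for (String line : lines) {
            String[] parts = line.trim().split("\\s+");
            if (parts.length != 2) continue;
            if (parts[0].equals("Junction")) {
                junction = Integer.parseInt(parts[1]);
            } else if (parts[0].equals("Mph") && junction != -1) {
                readings.add(new Reading(junction, Integer.parseInt(parts[1])));
                junction = -1;
            }
        }
        return readings;
    }

    public static void main(String[] args) {
        String[] scraped = { "Seq 0", "Junction 1", "Mph 38", "Seq 1", "Junction 2", "Mph 67" };
        for (Reading r : parse(scraped)) {
            System.out.println("Junction " + r.junction + " -> " + r.mph + " mph");
        }
    }
}
```

Inside a screen-scraper script the same pairing logic would apply to the values the extractor pattern puts into each dataRecord.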
Same scrape on multiple URL
Hi there. I've got roughly 100 URLs that I need to scrape, e.g.
http://www.domain.co.uk/shelves/Breakfast.html
http://www.domain.co.uk/shelves/Desserts.html
Each URL has the same format, and I've created a single scraping session that works for all of them. I could create a scraping session for each of the 100 URLs, but that seems like the long way round, since I'd be repeating the same task 100 times!
Ideally I want a way to automate a single scraping session to run over these 100 URLs in order. Does anyone know a way?
Many thanks
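One common approach in screen-scraper is a driving script: put a session-variable token (e.g. ~#SHELF#~) in the scrapeable file's URL, then loop over the shelf names, setting the variable and calling session.scrapeFile for each. The loop itself can be sketched in plain Java; the shelf names and URL template come from the example above, and the scrapeable-file name in the comment is made up:

```java
public class ShelfUrls {
    // Build the per-shelf URLs from one template. Inside screen-scraper you
    // would instead put ~#SHELF#~ in the scrapeable file's URL and loop:
    //   session.setVariable("SHELF", name);
    //   session.scrapeFile("Shelf page");
    static String[] buildUrls(String[] shelves) {
        String[] urls = new String[shelves.length];
        for (int i = 0; i < shelves.length; i++) {
            urls[i] = "http://www.domain.co.uk/shelves/" + shelves[i] + ".html";
        }
        return urls;
    }

    public static void main(String[] args) {
        for (String url : buildUrls(new String[] { "Breakfast", "Desserts" })) {
            System.out.println(url);
        }
    }
}
```

With that in place, one scraping session covers all 100 pages; only the list of shelf names grows.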
CSV query results from site
A site I am scraping only produces results as a CSV document. screen-scraper handles the response, but I can't figure out a good way to parse the CSV. My main problem is that I'm not getting line breaks.
If I could at least use a session variable to grab everything between the body tags in the sample scrape below, I could parse the content later, but the results don't seem to contain a character I can use to split the rows on.
Any suggestions?
____________________________________________________
HTTP/1.1 200 OK
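If the whole CSV body can be captured into one session variable, splitting it into rows afterwards is straightforward. CSV responses may use \r\n, \r, or \n line endings, so a split that tolerates all three avoids the missing-line-break problem. A sketch in plain Java (the sample payload is invented):

```java
import java.util.Arrays;

public class CsvSplit {
    // Split a CSV payload into rows regardless of \r\n, \r, or \n endings.
    // The \r\n alternative is listed first so a CRLF pair counts as one break.
    static String[] rows(String csvBody) {
        return csvBody.split("\\r\\n|\\r|\\n");
    }

    public static void main(String[] args) {
        String body = "name,cost\r\nphone-a,10\rphone-b,20\nphone-c,30";
        System.out.println(Arrays.toString(rows(body)));
    }
}
```

Each row can then be split on commas (or handed to a proper CSV parser if the data contains quoted fields).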
Attempt to invoke method: write() on undefined variable or class name
Hi,
I'm having some problems outputting data from my scraping session. I scrape a particular page for a list of sub-pages, then scrape each sub-page. From the sub-pages I extract single elements, such as the date and time, as well as a variable number of elements, such as the names of commenters.
I would like the data from each sub-page written to its own file, but I am having problems opening, or printing to, the file.
What I have done so far (I think) is to open a file before the sub-page is scraped, run the scrape, then close the file:
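That "undefined variable" error usually means the variable holding the writer isn't in scope in the script doing the write: each screen-scraper script runs separately, so a writer opened in one script isn't visible to another unless it's stored in a session variable. One simple alternative is to open, write, and close within a single script. A hedged sketch in plain Java (the file name and record format are made up):

```java
import java.io.FileWriter;
import java.io.IOException;

public class WriteSubPage {
    // Open, write, and close in one place so the writer is always defined
    // when write() is called. Append mode lets repeated calls accumulate
    // lines in the same per-sub-page file.
    static void writeRecord(String fileName, String line) throws IOException {
        FileWriter out = new FileWriter(fileName, true);
        try {
            out.write(line);
            out.write(System.lineSeparator());
        } finally {
            out.close(); // always release the handle, even on error
        }
    }

    public static void main(String[] args) throws IOException {
        writeRecord("subpage-123.txt", "2011-05-01 10:00\tAlice,Bob");
    }
}
```

If the writer genuinely must stay open across scripts, storing it with session.setVariable and fetching it back with session.getVariable (with a cast) keeps it defined.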
Extractor patterns break when run on linux server
Hi, I have some scraping sessions which work perfectly on Windows, but when I export them to my Linux web server they only partially work: the logs say the extractor patterns don't match.
Where should I look for the likely cause of this problem?
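One possible culprit worth checking is whitespace: an extractor pattern that contains a literal line break can stop matching when the response's line endings differ from what the pattern was written against, whereas tolerant whitespace tokens keep matching either way. A small illustration of the difference with java.util.regex (the HTML fragment is invented):

```java
import java.util.regex.Pattern;

public class CrlfTolerant {
    // A pattern hard-coding "\n" fails against "\r\n"; \s* matches any mix
    // of spaces, tabs, and either line-ending style.
    static boolean matches(String html) {
        Pattern p = Pattern.compile("<td>\\s*(\\d+)\\s*</td>");
        return p.matcher(html).find();
    }

    public static void main(String[] args) {
        System.out.println(matches("<td>\r\n42\r\n</td>")); // CRLF page, prints "true"
        System.out.println(matches("<td>\n42\n</td>"));     // LF page, prints "true"
    }
}
```

Comparing the "last response" saved on each machine in the session logs is also worth doing, since the server may simply be returning different HTML to the Linux host.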
ERROR: Failed to save the file: C:\.... The error message was No route to host: connect.
Using screen-scraper 2.6 professional edition on Windows XP.
When using session.downloadFile to download a file from a URL to the local C: drive, I intermittently see the above error in the screen-scraper logs. Does the error mean there's no route to the URL, or no route to the C: drive? What, if anything, can I do to avoid it?
Wrote file from: https://[URL]/servlet?filename=my1.doc
Wrote file from: https://[URL]/servlet?filename=my2.doc
Wrote file from: https://[URL]/servlet?filename=my3.doc
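"No route to host" is a network-level failure reaching the remote server, not a problem writing to C:. Since the failures are intermittent, a simple mitigation is to retry the download with a pause between attempts. A sketch of that retry loop in plain Java, with a stand-in for the download call (the attempt counts and the simulated failure are made up):

```java
import java.io.IOException;

public class Retry {
    static int calls = 0;

    // Stand-in for session.downloadFile; here it fails twice, then succeeds,
    // to simulate an intermittent network error.
    static void downloadOnce() throws IOException {
        if (++calls < 3) throw new IOException("No route to host: connect");
    }

    // Retry the download a few times, pausing between attempts.
    static boolean downloadWithRetries(int maxAttempts, long sleepMs) throws InterruptedException {
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                downloadOnce();
                return true; // success
            } catch (IOException e) {
                Thread.sleep(sleepMs); // give the network a moment before retrying
            }
        }
        return false; // all attempts failed
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(downloadWithRetries(5, 10)); // prints "true"
    }
}
```

In a screen-scraper script the same loop would wrap the real session.downloadFile call, logging each failed attempt via session.log.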
Safe uninstall on Linux/Ubuntu?
Hi
I installed the demo version of screen-scraper Enterprise on Ubuntu.
It's a great piece of software, but I need to uninstall it from this computer.
Is there a safe way to uninstall it?
Many thanks, Russ
session.downloadFile
Using ScreenScraper 2.6 professional edition.
I'm observing the following behavior and trying to figure out why: using session.downloadFile plus File.renameTo (download the file locally, then move it to a networked storage device) takes 6 seconds in total, but calling session.downloadFile on the exact same file written directly to the exact same networked storage device takes 12 minutes.
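Whatever the cause of the slowdown, the fast path described above (download locally, then move) can be made robust by falling back to java.nio's Files.move when File.renameTo fails; renameTo returns false when source and destination sit on different filesystems, which is common with network shares. A sketch with placeholder paths:

```java
import java.io.File;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.StandardCopyOption;

public class LocalThenMove {
    // Move a locally downloaded file to its final destination. renameTo is
    // cheap on the same filesystem; Files.move falls back to a bulk copy
    // when the destination is on another volume (e.g. a network share).
    static void moveToStorage(File local, File dest) throws IOException {
        if (!local.renameTo(dest)) {
            Files.move(local.toPath(), dest.toPath(), StandardCopyOption.REPLACE_EXISTING);
        }
    }

    public static void main(String[] args) throws IOException {
        File local = File.createTempFile("my1", ".doc"); // stand-in for the downloaded file
        File dest = new File(local.getParent(), "my1-moved.doc");
        moveToStorage(local, dest);
        System.out.println(dest.exists()); // prints "true"
    }
}
```

The single bulk move keeps the slow network leg to one sequential transfer instead of many small buffered writes, which may explain the 6-second vs 12-minute gap.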