CSV Input File - looping through records and processing one record at a time in a scrapeable file

Hi

My scraping session depends on an initial URL, or several web pages, as its input. I've created a CSV as an input file and read it with a script based on the CSV input script from your example scripts. The only challenge I have is that the scrapeable file only kicks off once the entire CSV has been read, so the scrape only ever starts with the last URL.

So with 5 URLs (web pages) in the file, the scrapeable file runs using only the last record and ends after it has successfully extracted that page's information. The script runs before the scraping session starts, and this is of course why the variable ~#URL#~ is equal to the last record from the input CSV by the time the scrapeable file requests the page.

This is probably something very simple, but I can't figure out how to get the scrapeable file to use one record at a time, completing the extraction before moving on to the next, working through each of the URLs contained in the CSV input file.
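
For reference, this is the gist of my script, written out from memory rather than copied, so it may not match my actual file exactly; the file path and variable name are just what I used in my session:

import java.io.BufferedReader;
import java.io.FileReader;

// Read each record from the input CSV into a session variable.
BufferedReader buffer = new BufferedReader(new FileReader("urls.csv"));
String line;

while ((line = buffer.readLine()) != null)
{
    session.log("***Site URL = " + line);
    session.setVariable("URL", line); // each pass overwrites the previous value
}

buffer.close();

And here is the log from a run with five URLs in the input file: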

Starting scraper.
Running scraping session: SITE --urls--
Processing scripts before scraping session begins.
Processing script: "CSV Input File"
***Site URL = page1.aspx
***Site URL = page2.aspx
***Site URL = page3.aspx
***Site URL = page4.aspx
***Site URL = page5.aspx
Scraping file: "Generate URLs"
Generate URLs: Preliminary URL: http://site.com/~#URL#~
Generate URLs: Using strict mode.
Generate URLs: Resolved URL: http://site.com/page5.aspx
Generate URLs: Sending request.

Your assistance would be appreciated.
Regards
oOze

Need to see input file and script

oOze,

Please post the actual CSV file and the exact script that you're using (rather than a from-memory sketch). It's helpful if you surround any code or log files with the forum's code tags, too.
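
In the meantime, here's a guess at what's going on. If your script is like the sketch you posted, each pass through the loop overwrites the URL session variable, and since the script runs before the session begins, "Generate URLs" doesn't fire until the loop has finished, at which point only the last record is left. One common pattern is to invoke the scrapeable file from inside the loop so that each record is scraped before the next one is read. Roughly, and again only a sketch (I'm taking the scrapeable file name from your log):

while ((line = buffer.readLine()) != null)
{
    session.log("***Site URL = " + line);
    session.setVariable("URL", line);

    // Scrape this record now, before reading the next one.
    session.scrapeFile("Generate URLs");
}

If you go that route, you'll probably also want to mark "Generate URLs" as invoked manually from a script so it doesn't run on its own again at the end of the session. But post your actual files and we can confirm.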

Thanks,
Scott