Next Link extractor pattern

Hi friends

I am using the screen-scraper Basic edition.

I am following the steps in the E-Commerce Site tutorial.

While scraping a job portal web site, the error occurred when creating the Next Link extractor pattern.

It shows only one DataSet record when I click the Test Pattern button.

The URL is

Next >>

I extracted some portion of it, i.e.,

Next >>

I made some changes to the above text, i.e.,

Next >>

What changes should I make to get 2 records in the DataSet?

Please provide the required changes.

I'm sorry. I don't understand the question.

I went to your link at

http://federalgovernmentjobs.us/jobsearch.html?q=new&B1=Search&form=HomePage&L=1&JobLocation=NA-US&Dist=35&Zip=Zipcode&Pub=0&email=&P=2

It shows there are 4,331 jobs found, displayed 10 per page (so 434 pages in all), and this is what I would do:

  1. Make a scrapeable file named "Search results" that requests the URL
    http://federalgovernmentjobs.us/jobsearch.html?q=new&B1=Search&form=HomePage&L=1&JobLocation=NA-US&Dist=35&Zip=Zipcode&Pub=0&email=&P=~#PAGE#~

    The PAGE session variable will be set to 1 in a script the first time.
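    For example, a minimal init script (set to run before the scraping session begins) could be as simple as:

    // Start the page iteration at 1
    session.setv("PAGE", 1);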
  2. Scrape the number of jobs found, and on that extractor run a script (set to run once if pattern matches):
    // Strip non-digit characters from the extracted value and return an int
    makeNum(num)
    {
            if (num!=null)
            {
                    num = String.valueOf(num);
                    num = num.replaceAll("[\\D]", ""); // Drop commas and other non-digits
                    if (!num.isEmpty())
                            num = Integer.parseInt(num);
                    else
                            num = 0;
            }
            else
                    num = 0;

            return num;
    }

    // Only kick off the iteration from page 1, so the scrapeFile calls
    // below don't start nested iterations of their own
    if (session.getv("PAGE")==1)
    {
            numJobs = makeNum(dataRecord.get("JOBS"));
            jobsPerPage = 10;
            pages = numJobs/jobsPerPage;
            if (numJobs%jobsPerPage>0) // Round up when the last page is partial
                    pages++;
            session.log("There are " + numJobs + " jobs on " + pages + " pages");
            for (i=2; i<=pages; i++) // Start at page 2; page 1 is already scraped
            {
                    session.setv("PAGE", i);
                    session.log("Scraping page " + i + " of " + pages);
                    session.scrapeFile("Search results");
            }
    }
    else
            session.log("Already in a page iterator");
  3. That should iterate through all of the pages with minimal memory usage.
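
For step 2, the extractor pattern that captures the jobs count could look something like this (the surrounding text is an assumption on my part; match it to the actual HTML of the results page):

    ~@JOBS@~ jobs found

The JOBS token there is what dataRecord.get("JOBS") reads in the script above.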