Multi-threading and performance

Hello guys,
please help me understand some basics.

1) When I start concurrent scraping sessions (server mode, driven from PHP), let's say 5 sessions, should I create 5 scraping sessions with different names and their own scripts, like Shopping site1, Shopping site2, etc., or can I run one session 5 times with different init parameters? screen-scraper is running in server mode, with init parameters passed from PHP.

2) I'm trying to scrape a huge amount of data. The content is organized like this:

Search Page:
- page title
- page title
- page title

Detail Page:
- desc
- website

There are only 10 results per search page, but 50,000 pages in total. What is the best option in that case if I'm going to run 5 sessions:
a) run 5 sessions to scrape the search page and each detail page, so the first session scrapes page 1, the second session page 2, etc. Then stop, increment the page number in PHP, and start again from page 6.

Am I right, or should I use some other algorithm?

3) Using server mode and PHP, can scraping run 24/7? How can I do that if I need to run an initializing PHP script, like shopping.php in the tutorial? Should I call shopping.php from shopping.php again with POST parameters?

Please help with this; I've read all the topics here but didn't find a clear answer.

In your place, I would just

In your place, I would just run the same scrape 5 separate times. I would set the scrape to accept parameters that direct each one to scrape by category (or whatever your analog is) so that no two would ever hit the same pages.
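If there's no natural category to split on, you can partition the page numbers themselves. Here's a minimal sketch (in Python, just to show the arithmetic; the same logic works in whatever drives your sessions) where worker k takes every 5th page starting from its own offset, so no two workers ever overlap. The function name is mine, not part of screen-scraper:

```python
# Partition 50,000 search pages across 5 workers so that no two workers
# ever request the same page. Worker k takes pages k+1, k+1+5, k+1+10, ...
def pages_for_worker(worker_id, num_workers, total_pages):
    """Return the list of page numbers assigned to one worker (0-based id)."""
    return list(range(worker_id + 1, total_pages + 1, num_workers))

# Example: 5 workers over 50,000 pages; each gets 10,000 distinct pages.
assignments = [pages_for_worker(w, 5, 50000) for w in range(5)]
```

You would pass each worker's offset in as an init parameter from PHP, the same way the tutorial passes parameters to shopping.php.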

There are a number of other tricks that can be of use, but I don't know what will help you without more information on your scrape. If you can get to a detail page from a regular URL (without login or POST data), you could make one scrape that just goes and collects each unique detail-page URL, then a separate scrape that queries each one and marks them done as it goes. If that latter process is interrupted, you can easily resume where you left off.
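The "mark them done as they go" idea only needs a table with a done flag. A sketch of that pattern, using SQLite for concreteness (any database your PHP layer can reach works the same way; the schema and function names here are illustrative, not anything built into screen-scraper):

```python
import sqlite3

# A tiny resumable work queue: detail-page URLs are stored once, and each
# is marked done after it has been scraped. If the process is interrupted,
# restarting simply picks up the URLs still marked pending.
def init_queue(conn, urls):
    conn.execute(
        "CREATE TABLE IF NOT EXISTS queue (url TEXT PRIMARY KEY, done INTEGER DEFAULT 0)"
    )
    conn.executemany(
        "INSERT OR IGNORE INTO queue (url) VALUES (?)", [(u,) for u in urls]
    )
    conn.commit()

def pending_urls(conn):
    """URLs that still need to be scraped."""
    return [row[0] for row in conn.execute("SELECT url FROM queue WHERE done = 0")]

def mark_done(conn, url):
    conn.execute("UPDATE queue SET done = 1 WHERE url = ?", (url,))
    conn.commit()
```

The first scrape only inserts URLs; the second scrape loops over `pending_urls` and calls `mark_done` after each successful detail page.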

Forbidden problem

Thanks, I got it and now run 5 sessions in parallel, each scraping a different category. But I've run into another problem.

After some time, the website I'm trying to scrape began returning a 403 Forbidden page. I suppose my IP was banned.
I'm not using logins or anything; I'm just scraping data.

What would you recommend?

Thanks

That does happen sometimes,

That does happen sometimes; HTTP 403 means access is forbidden. You can look here for some basics:

http://blog.screen-scraper.com/2007/03/01/how-to-surf-and-screen-scrape-anonymously/
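Beyond proxies, the cheapest fix is usually to slow down: randomize the delay between requests, and back off exponentially whenever you do get a 403 instead of immediately retrying. A sketch of that pacing logic (the delay values are illustrative guesses, not tuned for your site):

```python
import random
import time

# Polite request pacing: wait a randomized delay between requests, and back
# off exponentially after consecutive 403s instead of hammering the server
# until a temporary block becomes a permanent ban.
def backoff_delay(attempt, base=5.0, cap=300.0):
    """Seconds to wait after the Nth consecutive 403 (attempt starts at 1)."""
    return min(cap, base * (2 ** (attempt - 1)))

def polite_pause(min_s=2.0, max_s=6.0):
    """Random delay between ordinary requests, so traffic looks less mechanical."""
    time.sleep(random.uniform(min_s, max_s))
```

With 5 parallel sessions, remember the site sees the combined request rate, so either slow each session down accordingly or run fewer sessions.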