There are several ways to crawl more than one page with Kimono; please note that Kimono imposes limits on crawls. The type of crawling to use depends on the format of the list of pages you want to scrape.
If you have a webpage from which you can scrape several links and you want to get the detail behind each link (e.g., Amazon search results, where you have a list and want to click through to the detail behind each item), then you should source URLs to crawl from another Kimono API.
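As a rough illustration of the idea, the detail URLs can be pulled out of the first API's results and used as the crawl list for a second API. The JSON shape below is a hypothetical example of what a list-scraping API might return; Kimono's actual response format may differ.

```python
# Hypothetical results from a "list" API that scraped links off a page.
list_api_results = {
    "results": {
        "collection1": [
            {"product": {"text": "Widget A", "href": "http://example.com/items/1"}},
            {"product": {"text": "Widget B", "href": "http://example.com/items/2"}},
        ]
    }
}

# Collect the href of each scraped link; these become the URLs
# for the second, "detail" API to crawl.
detail_urls = [
    row["product"]["href"]
    for row in list_api_results["results"]["collection1"]
]
print(detail_urls)
```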
If you want to scrape information from several predictable URLs (e.g., scraping Airbnb information for London, New York, and Berlin), then you should use the URL generator to create those links.
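The URL generator's job can be sketched as simple string substitution: hold the URL pattern fixed and swap in each value. The pattern below is a hypothetical example, not Airbnb's real path structure.

```python
# Build a predictable set of URLs by substituting one path segment,
# mimicking what the URL generator does in the UI.
cities = ["london", "new-york", "berlin"]
urls = ["https://www.airbnb.com/s/{}".format(city) for city in cities]
for u in urls:
    print(u)
```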
If you have a list of specific or unique URLs to scrape data from, then you can manually input a list of URLs to crawl.
If the webpage you want to extract data from provides a link to the next, similar page, you can use pagination to follow through to the next page (if you are trying to extract from a page with infinite scroll, you can read more about that here). Note that pagination will not work if no URL is provided.
You can also combine strategies to paginate and crawl: for example, generate a list of URLs to crawl and then paginate through each set of generated results.
If you want to substitute new parameters into the source URL that your API is scraping, you should use URL parameter pass-through.
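Conceptually, pass-through means rewriting one query parameter in the source URL while leaving the rest intact. The sketch below shows that rewrite in plain Python; the parameter names are hypothetical, and the actual pass-through syntax is configured in Kimono itself.

```python
from urllib.parse import urlencode, urlparse, parse_qs, urlunparse

def with_param(url, **new_params):
    """Return url with the given query parameters replaced or added."""
    parts = urlparse(url)
    # parse_qs returns lists of values; keep the first of each.
    params = {k: v[0] for k, v in parse_qs(parts.query).items()}
    params.update(new_params)
    return urlunparse(parts._replace(query=urlencode(params)))

# Hypothetical source URL: swap the search term while keeping the page number.
source = "http://example.com/search?q=shoes&page=1"
print(with_param(source, q="hats"))
```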
Note that for all of these methods, the pages to be crawled must have the same structure. Kimono will fail if it tries to crawl several pages with different structures.