So, how do you actually do web scraping?
Suppose you are browsing a website with a large number of pages, you like the content, and you want a copy of all that data for yourself.
Saving every page manually would be tedious, if not practically impossible.
You notice that the webpages are all similar: they come out of the same basic client-server-database interaction.
Assume two things: the links to the webpages follow a sequence, and
every webpage marks the relevant data with the same HTML id/class.
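Under those two assumptions, the first step, generating the candidate URLs, is mechanical. A minimal Python sketch, where the base URL and the /page/N pattern are placeholder assumptions for illustration:

```python
def generate_links(base_url, count):
    """Build sequential page URLs (assumes a .../page/N numbering scheme)."""
    return [f"{base_url}/page/{i}" for i in range(1, count + 1)]

# Hypothetical site, just to show the shape of the output:
print(generate_links("https://example.com/articles", 3))
```

A real site might paginate differently (query strings like `?page=2`, date-based paths, etc.), so inspect a few URLs first and adjust the pattern.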
How can you, as the client, write a PROGRAM to save the necessary data from each webpage?
The program should: generate the links, send a request for each one to the server, receive the response (HTML) from the server, extract the relevant data, and save it in the client's storage.
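Those steps can be sketched end to end with only the Python standard library. Everything site-specific here is an assumption for illustration: the /page/N URL pattern, the "article-title" class name, and the output file are made up, and in practice libraries such as requests and BeautifulSoup make the fetching and extraction much more robust.

```python
from html.parser import HTMLParser
from urllib.request import urlopen


def generate_links(base_url, count):
    # Step 1: generate sequential links (assumes a .../page/N pattern).
    return [f"{base_url}/page/{i}" for i in range(1, count + 1)]


class ClassTextExtractor(HTMLParser):
    """Collects the text of every element carrying a given class attribute."""

    def __init__(self, target_class):
        super().__init__()
        self.target_class = target_class
        self.results = []
        self._capturing_tag = None  # tag name we are currently inside, if any

    def handle_starttag(self, tag, attrs):
        classes = (dict(attrs).get("class") or "").split()
        if self._capturing_tag is None and self.target_class in classes:
            self._capturing_tag = tag
            self.results.append("")

    def handle_endtag(self, tag):
        if tag == self._capturing_tag:
            self._capturing_tag = None

    def handle_data(self, data):
        if self._capturing_tag is not None:
            self.results[-1] += data


def extract_by_class(html, class_name):
    # Extraction step: pull the text of every element with class_name.
    parser = ClassTextExtractor(class_name)
    parser.feed(html)
    return parser.results


def scrape(base_url, pages, class_name, out_path):
    # Request each page, receive the HTML response, extract the
    # relevant data, and save it to the client's disk.
    with open(out_path, "a", encoding="utf-8") as out:
        for url in generate_links(base_url, pages):
            html = urlopen(url).read().decode("utf-8")  # request + response
            for item in extract_by_class(html, class_name):
                out.write(item.strip() + "\n")


# Example call (needs network access; the URL and class name are made up):
# scrape("https://example.com/articles", 5, "article-title", "scraped.txt")
```

Before running something like this against a real site, check its robots.txt and terms of service, and rate-limit your requests so you don't hammer the server.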