To do simple web scraping using jQuery, we can use jQuery's $.get method to make GET requests to the web pages we want to scrape.
Then, in the success callback, we can parse the HTML string returned by the GET request and select the elements we want with jQuery.
For instance, we can write:
$.get('https://jsfiddle.net/', (html) => {
  [...$(html).find("div")].forEach((el) => {
    const text = $(el).text();
    console.log(text);
  });
});
We call $.get with the URL we want to get data from and a callback that runs when the request succeeds.
In the callback, we parse the HTML string into a jQuery object with $(html).
Then we spread the div elements returned by the find method into an array.
Finally, we call forEach on the array, get each element from the el parameter, and read its text content with $(el).text().
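In the browser, $(html).find("div") does this parsing with a real HTML parser. As a rough, dependency-free illustration of the same extraction step (runnable in Node), here is a sketch using a regular expression. The divTexts helper is hypothetical and only adequate for flat, well-formed snippets; use a proper parser for general HTML.

```javascript
// Hypothetical helper: pull the text inside each <div>…</div>
// of a simple HTML snippet. A regex only handles flat, well-formed
// markup; jQuery's $(html).find("div") uses a real parser instead.
function divTexts(html) {
  const matches = html.match(/<div[^>]*>([\s\S]*?)<\/div>/g) || [];
  // Strip any remaining tags and surrounding whitespace.
  return matches.map((m) => m.replace(/<[^>]+>/g, "").trim());
}

const sample = '<section><div>first</div><div class="x">second</div></section>';
console.log(divTexts(sample)); // → [ 'first', 'second' ]
```

This mirrors what the jQuery snippet above does: find every div, then extract its text content.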
It is likely that we will run into CORS issues if we try to use $.get to scrape data from a web page, unless the page is served from the same origin as our code or explicitly allows cross-origin requests.
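One common workaround is to route the request through a CORS proxy that fetches the page server-side and adds the required headers. As a minimal sketch, assuming a hypothetical proxy of your own at example-cors-proxy.test, we can wrap the target URL before passing it to $.get:

```javascript
// Sketch of the CORS-proxy workaround. The proxy URL below is a
// placeholder (hypothetical); substitute a proxy you run or trust.
const PROXY = "https://example-cors-proxy.test/?url=";

// Wrap a target URL so the request goes through the proxy,
// encoding it so it survives as a query-string value.
function proxied(url) {
  return PROXY + encodeURIComponent(url);
}

console.log(proxied("https://jsfiddle.net/"));
// → https://example-cors-proxy.test/?url=https%3A%2F%2Fjsfiddle.net%2F
```

We would then call $.get(proxied('https://jsfiddle.net/'), …) instead of requesting the page directly.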